1 Introduction

1.1 Background

Artificial intelligence (AI) has shown its usefulness across many domains and applications, making it a potentially valuable resource for government, society, and the economy. Despite AI’s rising status as a technology and a science, it has not yet been widely recognized as a pivotal innovation by political and academic elites, which makes it challenging to help the public sector define the scope of AI. The study of AI has proliferated since the 1950s [25], and within the field there is a wide range of variation in approach, focus, and ultimate goals. Industrial processes and equipment supported by AI systems greatly enhance humans’ ability to obtain digital assistance and make decisions in critical and highly complex scenarios [34]. It is therefore in the public sector’s best interest to fully exploit AI’s potential advantages.

AI should not be seen as a wholly new technology, because it is already in use and will certainly remain in use in the future. According to the first government departments to adopt AI, the economy benefits from the technology’s increased productivity [40]. Many public sector organizations could benefit from public officials implementing AI; examples include the navigation of drones, the prioritization of medical treatment, bail hearings, citizen inquiries, the design of public facilities, the detection of fraud, the selection of immigrants, and the distribution of benefits. It is therefore essential to better understand the risks, opportunities, obstacles, and incentives for the public sector’s use of AI [43].

1.2 Existing research

There has been limited research on the pros and cons of using AI in the public sector specifically, with most studies focusing on the general use of the technology across all sectors. Initial investigations of public AI applications indicate a wide range of interdisciplinary difficulties, not just those related to the technology itself. AI is the study of how computers may learn and behave intelligently, for example by solving problems and picking up new skills, and both citizens and officials are increasingly interested in it. The public sector might use AI to address several issues, including but not limited to language barriers, delays in service delivery, long waiting periods, massive unmanageable caseloads, and high turnover rates (Mohtar et al. [53]). Increased productivity, decreased workloads, and simplified procedures are just some of the ways AI can help the public sector, government, and society. AI research aims to enhance a variety of capabilities, from manipulating and moving objects to understanding natural language, learning, planning, representing knowledge, and reasoning; all of these are considered long-term goals in the field of AI.

To provide better services to the public, some government agencies are investing heavily in AI research and development. Even basic research shows that AI may significantly enhance government programmes, policies, and operations [21]. The benefits, challenges, and opportunities of applying AI to improve the delivery of government services can be understood by reviewing relevant literature on the topic using organizational theory as a conceptual framework. This theory is applied on the premise that the different levels and institutions that deliver government services often do so as individual organizations with their own cultures and policies.

1.3 Research aim and questions

The research aims to identify the benefits of harnessing AI for public sector innovation while also exploring the challenges and opportunities involved in transforming government services. Research questions:

  • What are the biggest obstacles to introducing AI in the government sector?

  • In what ways does the use of AI help improve public service delivery?

  • When using AI in the public sector, how can organizational theory help devise strategies to overcome obstacles and make the most of opportunities?

1.4 Novelty of the paper

This paper represents a novel contribution to the discourse on artificial intelligence (AI) in government services by carefully examining and synthesizing existing research while distinctly advancing the field. In contrast to previous studies, this work focuses on the specific challenges and opportunities associated with AI adoption in the public sector, addressing language barriers, service delays, and cross-disciplinary difficulties. What sets this paper apart is its explicit comparison and contrast with prior research, underlining its unique contributions. By integrating organizational theory as a conceptual framework, it goes beyond a mere exploration of AI benefits and challenges, offering strategic insights tailored to the complexities of government service delivery. The novelty of this paper lies in its in-depth analysis, strategic approach, and a clear delineation of advancements, ensuring it stands out in the evolving landscape of AI research in government services.

2 Literature review

2.1 Use of AI in the public sector

Informed by prior research on public sector and e-government innovation, the proposed conceptual framework, which applies organizational theory to explore the opportunities and challenges involved in the use of AI in government services, underscores the enabling factors that are prerequisites for artificial intelligence (AI) to have a meaningful impact. Emphasizing the importance of getting the foundational components right, the approach integrates lessons from previously conducted technical impact evaluations [72]. Scholars have criticized existing frameworks for analyzing government ICT use as inadequate, owing to the absence of counterfactual metrics, data, or research establishing causal links between ICT investment and outcomes [59].

The expansive area of AI development offers numerous possibilities, emphasizing the need for discernment in pursuing specific avenues [33]. This review stresses that the transformative power of AI lies not solely in the technology itself but in how it is applied and alters existing paradigms, as Horowitz et al. [34] argued. Contextual elements, including demographic and cultural factors, contribute to varied responses among local populations.

The influence of AI systems, once they are made accessible to the general public, introduces potentially unforeseen changes in behavior, even among government personnel. Quantifying the public impact of AI technology proves challenging, going beyond the complexities acknowledged in earlier publications [16]. Therefore, assessing the consequences of AI calls for a clear understanding of the specific AI system in question. To enhance immediate comprehension, policymakers and the public should engage in a comparative analysis of conditions before and after AI integration, mirroring strategies employed in algorithmic studies [47]. This approach complements traditional policy examination methods and proves particularly effective.
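As a minimal illustration of such a before-and-after comparison, the sketch below contrasts hypothetical case-processing times recorded before and after an AI system is introduced; the metric, figures, and paired-test design are illustrative assumptions rather than a prescribed evaluation protocol.

```python
# Minimal sketch of a before/after comparison of a service-delivery metric.
# The data, metric (case-processing days), and paired design are illustrative
# assumptions, not part of any official evaluation protocol.
import numpy as np
from scipy import stats

# Hypothetical processing times (days) for matched case types.
before_ai = np.array([14.2, 11.8, 16.5, 13.1, 15.0, 12.7, 14.9, 13.8])
after_ai = np.array([10.1, 9.4, 12.3, 9.8, 11.2, 10.5, 11.0, 9.9])

# Paired t-test, since the same case types are measured in both periods.
t_stat, p_value = stats.ttest_rel(before_ai, after_ai)

print(f"Mean before AI: {before_ai.mean():.1f} days")
print(f"Mean after AI:  {after_ai.mean():.1f} days")
print(f"Paired t-test: t={t_stat:.2f}, p={p_value:.4f}")
```

In practice the choice of metric and comparison period would be part of the policy evaluation design itself, not an afterthought.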

Our conceptual framework, intentionally theoretical rather than practical, seeks to capture the attention of a critical mass of academics [63]. Researchers will scrutinize assumptions about AI’s impact on the public sector and evaluate their validity. Recent advancements in hardware and software contribute significantly to the progression of AI, unveiling its potential societal consequences. As data becomes more accessible, data-intensive AI systems exhibit increased efficacy. Despite the transformative potential of AI in the public sector, agencies need assistance to attain high performance levels and fully leverage such technology.

Past research on the various forms of technological innovation reveals a key organizational imperative: the effective organization and deployment of complementary resources is essential for success [21]. The general population has not witnessed substantial productivity gains despite the proliferation of AI assets, which underscores the urgency of developing AI competence and prioritizing its integration. Strategic knowledge of where AI should be implemented becomes pivotal for recommending infusion techniques and application procedures (Mohtar et al. [53]). Successful implementation by the public sector, and its future prospects, hinge on preemptive measures taken at the start of technology adoption. Having looked into the general use of AI in the public sector, it is now important to examine the major areas in which government services are improved through the application of AI.

2.2 Benefits of AI implementation in government services

2.2.1 Unlocking potentials and mitigating risks

The conceptual framework previously delineated reveals the need to establish enabling conditions for impactful AI implementation. In this exploration, we look into the specific benefits that manifest when these conditions are met, drawing insights from extensive research on the public sector and eGovernment innovation. Considering the outcomes of prior technical impact evaluations, the emphasis on an analytical approach seeks to expose the complexities surrounding AI's integration into government services.

Current research frameworks probing the government's influence on Information and Communication Technology (ICT) have been deemed insufficient due to a lack of counterfactual metrics, data, and studies demonstrating causal relationships, as highlighted by Petit [59]. While AI holds theoretical promise for revolutionary feats, its impact hinges on its application and capacity to disrupt existing systems. This contextual dependence results in varied impacts across diverse applications and fields, as elucidated by Horowitz et al. [34]. Furthermore, individual reactions to AI vary widely, influenced by personal history, upbringing, and location. Even government personnel may undergo behavioral changes in response to the availability of AI systems, potentially impacting the data and procedures upon which these systems rely.

Understanding and predicting public reactions to AI pose significant challenges, as acknowledged in prior research by Bundy [16]. Consequently, a nuanced understanding of specific AI systems becomes imperative for impact evaluations. Systematic comparisons of the policy or public environment before and after AI implementation prove crucial in enhancing short-term comprehension of AI effects, drawing parallels with methodologies employed in algorithm research [47].

While our emphasis leans towards theory over application to engage a broad academic audience [63], it is crucial to critically examine common beliefs about how AI would impact government operations. Recent hardware and software innovations have propelled the development of AI, particularly in machine learning processes and data-dependent AI systems. Despite revolutionizing government agencies' operations, challenges persist in achieving high performance and fully exploiting AI technology.

Addressing hurdles identified in various forms of technological innovation [21], organizations, including government agencies, must adeptly organize and deploy complementary resources. Despite AI resources not unequivocally demonstrating superior performance, the imperative to build and foster an AI capacity remains. Effective AI implementation necessitates a strategic understanding of deployment locations to suggest methods for incorporating AI infusions and streamlining application processes (Mohtar et al. [53]). Failure to undertake requisite measures may impede the government’s adoption of new technologies.

The pervasive use of AI technology in legislation and public services poses new challenges for governments. Balancing the benefits and risks of AI technologies becomes the responsibility of general managers, administrators, and civil authorities [45]. The broader community refers to the decision-making process on AI deployment as “AI governance,” focusing on administrative efficiency and improved public service delivery. Simultaneously, regulatory decision-making prioritizes economic and social maximization. However, the associated risks, evaluated within the framework of broader concepts like bias, fairness, privacy, and preserving democratic values, present formidable challenges [27].

Effectively managing the above risks in the public sector is complicated due to a constantly shifting policy landscape, cautious citizens, and limited expertise among government personnel and institutions. This is always the case whether the approach involves instituting quality assurance methods for algorithmic case management decisions, establishing fair disclosure regulations, or handling data in the private sector [26]. A comprehensive analysis of AI research in government underscores that these challenges “permeate all application layers” [31]. Despite this, existing frameworks and techniques to mitigate these risks often veer towards high abstraction or technical complexity levels, posing challenges in operationalizing them within the public sector.

2.2.2 Enhancing administrative efficiency through AI governance

Integrating AI technologies brings about transformative advantages in the landscape of government services. One notable benefit lies in enhancing administrative efficiency, a cornerstone for effective public service delivery. As AI governance takes center stage, decisions regarding the deployment of AI are geared towards optimizing administrative processes, thereby streamlining operations.

Lewis, Bellomo, and Galyardt [45] argue that governments are now tasked with weighing the advantages and risks associated with the widespread use of AI technology in legislation and public services. This responsibility falls on the shoulders of general managers, administrators, and civil authorities. The decision-making process, commonly termed “AI governance,” is a multifaceted endeavor to balance administrative efficiency and improved public service delivery. Concurrently, regulatory decision-making aims at maximizing economic and social benefits.

However, evaluating risks tied to AI implementation goes beyond traditional metrics and delves into broader societal concepts. Factors such as bias, fairness, privacy, and preserving fundamental democratic values play a pivotal role in shaping regulatory frameworks [27]. These considerations underscore the intricate nature of AI governance, where decisions extend beyond mere technical considerations and encompass complex ethical and societal dimensions.

Navigating the challenges associated with AI governance in the public sector requires a nuanced approach. The ever-shifting policy landscape, a populace wary of rapid technological advancements, and a dearth of expertise among government personnel compound the complexities. The challenges are pervasive regardless of the specific measures taken, whether instituting quality assurance methods for algorithmic decision-making or formulating fair disclosure regulations [26].

A comprehensive analysis of AI research in government reveals that these challenges “permeate all layers of application” [31]. Despite this acknowledgment, existing frameworks and risk mitigation techniques tend to gravitate towards high abstraction and technical complexity levels. The practical operationalization of these frameworks within the public sector becomes a formidable task, necessitating a more pragmatic and adaptive approach to AI governance.

2.3 Challenges/obstacles associated with AI in the public sector (research question 1)

The findings in this part contribute to answering the first research question. The benefits discussed above are accompanied by challenges facing the delivery of government services with the help of AI. In exploring the benefits of AI implementation in government services, there is a need to acknowledge and address the challenges of this transformative technological shift. While AI holds the potential to act as a catalyst for development, especially in nations with low per capita incomes, certain challenges loom large [15]. Understanding and effectively managing these challenges becomes paramount for ensuring the successful integration of AI into public sector operations. One significant challenge lies in the need for capacity-building, a crucial aspect emphasized by Bughin et al. [15]. While AI can revolutionize various facets of public service delivery, there is a shortage of field experts who can evaluate outcomes. This scarcity complicates the implementation process, making it imperative for public organizations to foster a more profound understanding of AI in terms of both its technical aspects and its broader implications.

The demand for AI talent has driven up starting salaries, creating hurdles for sectors with limited recruiting budgets, such as the public sector, in attracting top talent [16]. Consequently, public organizations must cultivate a more critical understanding of AI to overcome these recruitment challenges and successfully integrate and operate AI initiatives. Moreover, integrating AI into government services requires the collaboration of technical experts and non-technical government employees, including procurement officials, lawmakers, and department heads [37]. These individuals must enhance their data and AI comprehension to navigate the complex landscape of AI applications. Technical competence and a solid understanding of the professional and practical ramifications of massive data sets become prerequisites.

Additionally, the focus on safeguarding privacy rights, as highlighted by Yerlikaya and Erzurumlu [70], further emphasizes the need for a comprehensive understanding of the legal frameworks governing AI functions. However, the challenge extends beyond comprehension; it reaches into government procurement processes. This domain, as acknowledged by van Helden and Reichard [65], is notorious for being tedious and drawn out. Long approval processes and the fulfillment of extensive contract obligations pose obstacles to the seamless integration of AI solutions. As Kemp [38] suggests, solution-oriented recommendations are vital to overcome the challenges associated with adapting to new technologies in the government procurement landscape.

Providers, particularly those in the SMB market, face additional hurdles in adjusting to these changes. Prolonged procurement wait times make it challenging for small enterprises to make legally enforceable promises regarding expected recruiting needs, which proves problematic because they often need to engage employees promptly as projects become available. Moreover, Baek et al. [9, 10] highlight substantial barriers to implementing AI across government agencies. While technology itself is not the primary impediment, it is the most easily remedied part. To unlock the incredible capacity of AI, adjustments must be made to organizational culture and practice [8]. This shift necessitates a comprehensive transformation in how government agencies approach and embrace AI, ensuring that the organizational fabric aligns well with the potential AI offers. Generally, as we navigate the challenges associated with AI integration into the public sector, a holistic approach involving capacity-building, cross-functional collaboration, and an overhaul of existing procurement processes becomes imperative. Only through such comprehensive strategies can governments harness the full potential of AI, transforming public service delivery and governance.

2.4 Current applications and case studies

Businesses and institutions within the public sector are increasingly investing in AI research and development, paving the way for the integration of AI across government operations. However, this expansion is not without its challenges. Immediate action is crucial to educate the public and government employees about the potential benefits of incorporating AI into governmental processes. Government bodies must prioritize accountability and consider the implications of AI efforts for public and national safety. Deloitte’s research [25] identifies key areas that demand the government’s focus to foster the development of trustworthy AI. One critical aspect is accountability and responsibility, necessitating policy changes to ensure that AI systems can be held accountable, particularly in tasks with life-or-death consequences. Transparency is also key: making AI systems open and comprehensible allows individuals to understand how they function. Moreover, a commitment to honesty and unbiased decision-making is vital, aiming to reduce the implicit and explicit biases in AI systems that may contribute to discrimination based on factors such as race and gender.

In addition to these ethical considerations, compliance with data and privacy regulations is paramount. Adhering to regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) is essential for ensuring that AI applications respect individuals' privacy rights. Another crucial aspect is the security and safety of AI systems [25]. GDPR and CCPA set guidelines for collecting, processing, and storing personal data; GDPR is a European Union regulation, while CCPA is specific to the state of California in the United States. Both regulations are designed to protect individuals' privacy rights and give them greater control over their personal information, including when AI is used [58]. DevSecOps practices are needed to protect AI systems from potential cyber threats. Furthermore, the capacity to accommodate a growing user base without compromising precision or consistency is a key factor for successful AI implementation. Despite the challenges associated with providing adequate resources for AI research and development, Cooke et al. [25] note that advancements in the public accessibility of AI are anticipated to alleviate these challenges in the coming years.

2.5 Case studies

The extent to which various governments are applying AI to improve their service delivery can be seen in a number of case studies. Countries around the world are adopting AI solutions for their public services, and governments are investing heavily in AI to develop smarter and better public services for their residents. Some of the applications of AI in the public sector that improve services include the following:

2.5.1 AI implementation in New South Wales (NSW) revenue department

Revenue NSW is utilizing artificial intelligence to identify and support disadvantaged customers who cannot pay their penalties. The programme, in operation since 2018, diverts vulnerable clients from enforcement action and offers alternative settlement options. This means fewer needy individuals will be compelled to pay fines they cannot afford, and it also improves the overall efficiency of the garnishee procedure. With some 46,000 customers classified as 'vulnerable,' the community benefits of an efficient AI solution are significant (NSW, [56]).

Previously, Revenue NSW could only learn that a customer was vulnerable after debt collection action had been taken. This programme aids in predicting vulnerability and providing alternative resolution choices. Various indicators are assessed to determine susceptibility, such as how frequently a customer contacts Revenue NSW, the number of major penalties they hold, their age and expected socioeconomic position, and the weekly amount they would pay if their debt were settled via an installment plan.
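To illustrate how indicators of this kind could feed a predictive model, the sketch below trains a simple logistic-regression classifier on synthetic customer records; the feature names, data, and scoring logic are hypothetical assumptions and do not describe Revenue NSW's actual system.

```python
# Illustrative sketch only: a simple vulnerability classifier trained on
# synthetic data. Feature names mirror the indicators described in the text;
# they are assumptions, not Revenue NSW's actual model or data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
data = pd.DataFrame({
    "contacts_per_year": rng.poisson(3, n),        # how often the customer contacts the agency
    "major_penalties": rng.poisson(1, n),          # number of major penalties held
    "age": rng.integers(18, 85, n),
    "socioeconomic_index": rng.normal(0, 1, n),    # proxy for expected socioeconomic position
    "weekly_instalment": rng.uniform(10, 200, n),  # weekly amount under an instalment plan
})
# Synthetic label: more contacts and penalties loosely imply vulnerability here.
risk = 0.4 * data["contacts_per_year"] + 0.6 * data["major_penalties"] - 0.01 * data["weekly_instalment"]
data["vulnerable"] = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(columns="vulnerable"), data["vulnerable"], test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Scores feed a human review queue rather than replacing a caseworker's decision.
review_queue = X_test.assign(score=model.predict_proba(X_test)[:, 1]).sort_values("score", ascending=False)
print(review_queue.head())
```

Consistent with the programme's design, the output here is a ranked review queue for staff rather than an automated enforcement decision.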

This forecast is intended to supplement, not replace, human decision-making. Revenue NSW employees can assess the program’s forecasts and connect identified clients to more appropriate resolution channels that provide focused assistance. This may result in sanctions being lifted, enforcement being halted, repayment plans being established, or a Work and Development Order being implemented to allow customers to reduce their penalty by participating in unpaid work, training, or therapy.

2.5.2 National health services (NHS) UK

Artificial intelligence has been used to improve healthcare significantly in various ways, including earlier disease detection and prevention and the support of clinical decision-making. With the help of AI, a patient’s health may be tracked in real time: transmitting information about a patient's weight, height, blood sugar, stress levels, heart rate, and similar measures to AI healthcare systems can warn professionals of possible threats. Governments may use AI to deliver high-quality healthcare to their citizens. For instance, AI affected how the virus was identified and handled during the COVID-19 pandemic [47].

The UK’s National Health Service (NHS) has begun a data collection initiative for people with chronic obstructive pulmonary disease (COPD). Through various partnerships, the NHS also created the National COVID-19 Chest Imaging Database (NCCID), an open-source collection of chest X-rays from COVID-19 patients throughout the UK, with the aim of developing deep-learning strategies for hospitalized COVID-19 patients. The NHS has additionally developed an AI tool that can diagnose heart disease in only 20 s, even while the patient is still inside an MRI scanner; a physician would need at least 13 min to examine a patient's MRI scans personally.
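The sketch below gives a minimal sense of the kind of deep-learning pipeline such imaging initiatives rely on, assuming a generic two-class chest X-ray task and a pretrained backbone; it is an illustration under those assumptions, not the NCCID or NHS pipeline.

```python
# Minimal sketch of a transfer-learning setup for chest X-ray classification.
# The two-class task and dummy batch are illustrative assumptions; real work
# would train on labelled, curated clinical images under appropriate governance.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Standard preprocessing for ImageNet-pretrained backbones.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Reuse a pretrained backbone and replace its head with a two-class classifier.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # [no finding, COVID-19 suspected]

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch (real training would iterate
# over labelled X-ray images loaded through `preprocess` above).
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.3f}")
```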

2.5.3 Annotation of buildings by the French Government

The French consulting firm Capgemini collaborated with Google, the search engine giant, to develop AI technology capable of analyzing aerial pictures and locating previously unidentified buildings. The project uncovered some twenty thousand previously undeclared swimming pools throughout France, and because of this discovery the French government received an additional €10 million in tax revenue. The government says it will use the scheme to uncover gazebos and patios that have not been registered. US federal departments and insurance companies utilize an analogous AI method to monitor physical assets for signs of tampering [47]. The artificial intelligence programme Nearmap, developed by an Australian company, can identify and segment geographic objects in aerial photos; more than 380,000 square kilometers of imagery from the United States and Australia were used to train it. In conclusion, thanks to AI, governments can better monitor infrastructure for tax evasion and illicit property transfers [47].
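A hedged sketch of the underlying technique, semantic segmentation of aerial tiles, is shown below; the pretrained model and the parcel-comparison step are illustrative assumptions, not the Capgemini, Google, or Nearmap systems.

```python
# Illustrative sketch of aerial-image segmentation inference. The pretrained
# DeepLabV3 model here knows generic classes only; detecting pools or
# undeclared buildings would require fine-tuning on labelled aerial imagery.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights
from PIL import Image

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

# Hypothetical aerial tile; in practice tiles would come from an imagery provider.
tile = Image.new("RGB", (512, 512))
batch = preprocess(tile).unsqueeze(0)

with torch.no_grad():
    output = model(batch)["out"]   # shape: [1, num_classes, H, W]
mask = output.argmax(dim=1)        # per-pixel class prediction

# Per-parcel pixel counts could then be compared against the property register
# to flag undeclared structures for human review.
print(mask.unique())
```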

2.5.4 Smarter policy-making by Belgium

Government agencies and politicians can leverage AI as a powerful tool for intelligent and public-interest policy-making. The utilization of AI technologies enables policymakers to conduct in-depth analyses of publicly accessible data, shedding light on novel issues relevant to both their constituents and geographic locations. According to Fabregue et al. (2021), the application of analytical information in policy-making yields two significant benefits: it expedites the process of polling and identifying the root causes of problems, leading to improved policy outcomes, and it enhances awareness of societal shifts, allowing for more timely policy adjustments. An illustrative example is the Belgian authorities' use of an AI crowdsourcing tool during the 2019 climate change rallies. Developed by the Belgian tech company Citizen Lab, this tool facilitated a better understanding of protesters' demands through public input [29]. Consequently, Belgium prioritized 15 climate action initiatives based on the insights gained from the AI-driven analysis of public sentiment and concerns.
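As a rough illustration of how such crowdsourced input might be analyzed, the sketch below clusters a handful of invented citizen submissions into themes using TF-IDF and non-negative matrix factorization; the example texts and number of topics are assumptions, and this is not the tool used in Belgium.

```python
# Minimal sketch of clustering citizen submissions into themes, in the spirit
# of the crowdsourcing analysis described above. Texts and topic count are
# invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

submissions = [
    "Invest in public transport and cycling infrastructure",
    "Phase out coal plants and subsidise renewable energy",
    "More bike lanes and cheaper train tickets",
    "Tax incentives for rooftop solar panels",
    "Insulate social housing to cut heating emissions",
    "Free bus passes to reduce car traffic",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(submissions)

# Factorise the term matrix into a small number of latent themes.
nmf = NMF(n_components=2, random_state=0)
doc_topics = nmf.fit_transform(tfidf)

terms = vectorizer.get_feature_names_out()
for k, component in enumerate(nmf.components_):
    top_terms = [terms[i] for i in component.argsort()[-4:][::-1]]
    print(f"theme {k}: {', '.join(top_terms)}")
```

The theme summaries, not the raw model output, are what would inform prioritization decisions such as Belgium's 15 climate initiatives.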

2.6 Organizational change and AI adoption

The organizational theory perspective sheds light on the multifaceted impact of AI on government service delivery, recognizing that distinct government departments function as autonomous entities. Within the organizational framework, artificial intelligence (AI) is identified as a potential catalyst for growth, presenting an opportunity for underdeveloped countries to tackle persistent challenges through organizational transformation [16]. However, this transformative technology introduces challenges at the organizational level, including its profound impact on the workforce, necessitating a reevaluation of future training requirements and emphasizing the imperative for capacity-building.

One organizational challenge lies in the scarcity of individuals with expertise in AI and performance evaluation [57]. Given the steep learning curve in data management associated with AI implementation, acquiring the skills essential for leveraging AI technology is not straightforward. Higher salaries offered to top AI talent further intensify the competitive landscape, making it challenging for sectors with limited recruiting budgets, such as the public sector, to attract the most qualified candidates [16]. Organizational theory underscores that the effectiveness of AI strategies is contingent upon integrating AI expertise within public organizations, a capability often lacking.

Moreover, the organizational theory perspective highlights the need for technical proficiency among non-technical public workers, including procurement officials, politicians, and department heads [36]. This calls for understanding the ethical and practical considerations involved in handling vast amounts of data while safeguarding users' privacy. The organizational challenges are further compounded by a lack of awareness among some individuals regarding the latest regulations governing AI development efforts, including those related to privacy and data protection. In navigating these organizational complexities, organizational theory offers valuable insights to enhance AI's successful integration and utilization within government service delivery.

For example, current procurement practices do not consider the commercial sector’s perception of algorithms as intellectual property. Governments that buy prefabricated models may want to adjust them and gain insight as they use the technology. Given the widespread use of these models in the software procurement process, it is reasonable to believe that AI service providers would agree to such criteria [41]. Since this affects both the frequency with which technologies need to be updated and the government’s ability to do so with new data, it has far-reaching consequences for the durability of AI. Moreover, the government procurement process is considered tedious and convoluted. Difficulties include long approval processes after the presentation of a proposal and the fulfillment of all contract obligations. Solution-oriented recommendations are offered rather than mere issue or opportunity identification [38]. Smaller service providers, in particular, have a tough time adjusting to these changes. For instance, when there are significant job wait times, it is difficult for small firms to commit to potential hiring needs, since they need to start hiring as soon as a position opens up. The government faces a difficult task in getting AI into the hands of the general population [9, 10]. While addressing the technical aspects of the problem may seem like the most pressing priority, this is only a fraction of the work that needs to be done. Before AI’s capacity can be realized, adjustments must be made to organizational culture and practices.

2.7 Key success factors for technology adoption (research question 2)

These findings help in addressing the question of how AI helps to improve public service delivery. When institutions implement AI at the organizational level, it is necessary to consider the factors that can enable its adoption and workability. There is much debate, both online and in the mainstream media, over whether or not artificial intelligence and machine learning can help with the world’s most serious issues. The study by Jensen [35] demonstrates that interest groups interpret and make assumptions about public-sector AI issues differently. Managers in the IT sector, for example, do not perceive the same technical impediments to AI adoption as their public sector and healthcare sector colleagues [35]. Unlike IT businesses, hospital managers face budget constraints [51]. This range of perspectives should help public officials avoid making policy decisions based only on the priorities of a select few AI interest groups. This pertains to the guidance offered to governments about the development of contracts with private IT firms to prevent vendor lock-in. It is imperative that governments resist the temptation of vision lock-in and instead develop comprehensive public sector AI frameworks and policy guidance. Mapping stakeholders' differing expectations might be the first step towards reaching such an optimization among conflicting concerns. Politics, not advances in AI technology, drives this kind of optimization. Researchers in artificial intelligence generally agree that it remains speculative and unpredictable whether AI capable of tasks that require exceptionally high levels of creativity, planning, and caring conduct will be developed. Public policy decisions must consider ethical compromises, creativity, and individual and group identity difficulties [31].

To connect successfully with humans, AI systems need contextualized, imaginative, and caring actions. In the public sector, algorithmic governance is most often considered in the context of mission-critical activities [16], and two distinct governance issues with AI are now being conflated in public discourse. Research on the economic effects of government-sponsored artificial intelligence in the United States indicates that AI has significant potential for enhancing healthcare, speeding up scientific inquiry, and improving people’s quality of life [32]. As part of a global response to healthcare and public demand, the public sector in the United States will deploy artificial intelligence (AI) technology. The United States has a robust innovation environment and is thus poised to maintain its position as the world’s leader in the AI business [19]. The government, academic institutions, and enterprises must work together to fully realize AI's benefits to the country [20]. The research also offers some food for thought on the fundamentally good character of the average American, since realizing these benefits requires the federal government's participation; governments elsewhere, such as in the UAE, are likewise actively working on laws and initiatives to promote AI innovation. The success factors outlined in the research provide key insights into the prerequisites for the effective adoption of AI in the delivery of government services, which aligns with organizational theory principles. The following are the major success factors necessary for the successful adoption of AI in the delivery of government services.

2.7.1 Activating a local network trust

A newcomer makes little progress in bringing advanced technologies to a group of people. This is especially crucial when traveling to outlying communities with deep social bonds and few outsiders. We have tried to locate groups with preexisting networks in the region since we have a firm grasp of this social dynamic. There is a wide variety of examples of this, including but not limited to cooperatives, savings and loan associations, schools, churches, and mom-and-pop stores in the local community. We help local business owners set up “tech kiosks” to sell items like solar lights, water filters, and clean cookstoves by providing them with training and the necessary equipment [2].

2.7.2 Lowering financial barriers

In order to develop a reliable local infrastructure for delivering to and servicing clients in the last mile, lowering associated costs is essential. Families that rely on farming and fishing as their primary or secondary source of income tend to be very price-conscious. After conducting hundreds of needs and effect analyses, we know that price matters, even if a product has significant long-term economic advantages [12].

2.7.3 Riding the technological wave

In addition to learning that it takes time for technology to be adopted, we also learned the importance of contacting last-mile communities via a trusted network with reduced financial barriers. Technology adoption does not occur in a single, fleeting wave; instead, it requires a prolonged, low-key effort to persuade people of the merits of the technologies in question and the reliability of those who advocate for them [73].

2.7.4 Focusing on tangible benefits

Users may benefit from several of the market's simplest but most transformative technologies. For instance, solar lanterns may be used as an alternative to kerosene lamps, which are both hazardous and costly. This makes them unique and rare [73].

2.7.5 Staying engaged and showing commitment

Lastly, maintaining interaction is crucial for getting technology to users in the last stretch. Organizations should ensure the intended communities, whether via local partners or on their own, have access to and can maintain the necessary technology [22].

2.8 Ethical considerations and AI in public sector

Some moral principles and values should guide the use of AI in the provision of government services at the various stages of its application. Ethical considerations in AI within the public sector refer to the adherence to moral principles and values associated with the development, deployment, and impact of artificial intelligence technologies in government and administrative functions [22]. As AI increasingly becomes integrated into public services, there is a growing recognition of the need to address ethical concerns to ensure responsible and fair use of these technologies. The following are some of the ethical considerations when using AI in the public sector:

2.8.1 Data privacy

Data leakage should always be considered when dealing with such massive amounts of information. There is a potential for sensitive information to be uncovered as more and more data is used to train algorithms. On the other hand, AI has certain advantages when it comes to protecting personal information: data breaches may be mitigated with the help of AI by encrypting sensitive information, decreasing the likelihood of human error, and monitoring for any signs of cyberattacks [73].
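A minimal sketch of the monitoring idea, flagging anomalous access sessions with an unsupervised detector, appears below; the features and data are synthetic assumptions rather than a production intrusion-detection system.

```python
# Illustrative sketch of AI-assisted monitoring for unusual access patterns,
# one of the privacy-protecting uses mentioned above. Features and data are
# synthetic assumptions, not a real audit log.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-session features: records accessed, off-hours logins, failed logins.
normal = rng.normal(loc=[20, 0.1, 0.2], scale=[5, 0.3, 0.5], size=(500, 3))
suspicious = rng.normal(loc=[400, 3.0, 6.0], scale=[50, 1.0, 2.0], size=(5, 3))
sessions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
flags = detector.predict(sessions)   # -1 marks anomalous sessions

print(f"flagged sessions: {np.where(flags == -1)[0]}")
```

Flagged sessions would go to a human security reviewer; the detector itself decides nothing about individuals.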

2.8.2 Algorithmic bias and fairness

Bias and unfairness are frequent issues for AI [22]. Machine learning is used in various fields and contexts, and it can potentially marginalize some segments of society further. Facial recognition errors are prevalent, leading to incorrect healthcare diagnoses or the inability to distinguish people of color. AI systems can be designed to assist in making fair and equitable judgments, provided that bias and fairness are explicitly addressed in AI algorithms [22, 52].
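One simple, concrete form such a bias check can take is comparing decision rates across groups; the sketch below does this on synthetic decisions, and the groups, rates, and gap measure are assumptions for illustration only.

```python
# Minimal sketch of auditing an AI system's decisions for group-level bias.
# Decisions and group labels are synthetic; the selection-rate gap shown here
# is one simple fairness check among many discussed in the literature.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1_000, p=[0.7, 0.3])
# Hypothetical binary decisions from a model (1 = service approved).
decision = np.where(group == "A",
                    rng.binomial(1, 0.62, 1_000),
                    rng.binomial(1, 0.48, 1_000))

rates = {g: decision[group == g].mean() for g in ("A", "B")}
gap = abs(rates["A"] - rates["B"])
print(f"approval rates: {rates}, demographic-parity gap: {gap:.2f}")
# A large gap would trigger review of features, training data, and thresholds.
```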

Public administration scholars increasingly discuss how the government's widespread adoption of digital technologies has reshaped the social landscape from the organizational theory perspective. The organizational theory lens in this context emphasizes the impact of these technologies on the autonomy and expertise of decision-makers, particularly (front-line) bureaucrats [45]. Concepts like “digital discretion” [17], “automated discretion” [74], and “artificial discretion” (Young, Bullock, & Lecy, [71]) offer alternative perspectives on the discretion exercised by bureaucratic actors within the administration. Organizational theory underscores that integrating digital technologies into public service delivery may not only influence but also replace human judgment [17], thereby significantly impacting the role of public managers (Kim, Andersen, and Lee, [39]).

This transformative technology can potentially reshape decision-making processes within the public sector, altering the significance of human knowledge and judgment. From an organizational theory standpoint, if administrative decision-making authority is increasingly delegated to AI technology, it raises concerns about the potential ramifications for administrations' legitimacy and adherence to societal values [17]. The role of organizational theory becomes pivotal in navigating these shifts and ensuring that the adoption of digital technologies aligns with the core values and legitimacy of public administrations.

It is becoming more vital to understand how decision-makers in the public sector incorporate algorithmic outputs into decision-making, the repercussions of doing so, and whether or not the processing of traditional (human-sourced) advice is considerably different. Due to a lack of theoretical work on the likely cognitive biases in this new area, we draw on the social psychology literature on automation and public administration research on information processing to operationalize the possible effects of AI advice on decision-making. The contrast between the forecasts these two bodies of literature make is striking.

For more than a decade, policymakers have debated the pros and cons of artificial intelligence (AI). As time passes and laws and technology improve, the relevance of public sector effects will only increase. The topic of machine learning is also fascinating, since it concerns computers that can adapt and improve their algorithms on their own as new data is gathered. Officials at all levels of government are investing heavily in AI and ML research and implementation to solidify the UAE's position as a global leader in a field that will have far-reaching consequences soon. Efficiency gains, automation of menial work, improved decision-making, and new possibilities for other exciting developments are all within reach thanks to the advent of AI. Despite its many positive applications, AI is not without its dangers; one of the most pressing concerns today is the prospect of widespread job loss due to automation. The long-term effects of this new technology on governments remain to be seen. Knowledge evaluation, task completion, and pattern recognition are just a few areas where AI is used at all enterprise levels.

2.8.3 Accountability and governance

A fundamental principle of governing AI is accountability. The widespread practice of turning over analytical activities (such as prediction and decision-making) to AI systems likely contributes to this trend. If we are increasingly going to rely on, or delegate decisions to, AI, we need to make sure these systems are fair in their impact on people's lives, that they are in line with values that should not be compromised, and that they can act accordingly [9, 10]. Several issues arise when roles are not clearly defined. Such occurrences are rare in societies with sophisticated legal systems that precisely define misbehavior and punish offenders accordingly; they occur more often under less restrictive legal environments, and it is hard to identify the underlying costs and benefits of different accountability regimes because of the lack of a universally accepted definition of responsibility. The lack of political and legal consensus on basic matters, such as responsibility for the many AI services, is cause for concern [9, 10].

3 Theoretical framework

The organizational theory used in evaluating the opportunities and challenges available in the adoption of AI within the public sector can be looked at from different perspectives, including pressures and expectations of a public organization, government regulation, technological innovation, stakeholder collaboration, and a resource-based view.

3.1 Pressures and expectations influencing AI adoption

In the realm of organizational theory within AI adoption, the evolution of e-government practices, which originally emphasized efficiency and cost-effectiveness, has paved the way for technological advancements driven by Artificial Intelligence (AI) within public administration. The infusion of smart, technology-centric governance endeavors to streamline service delivery and uphold quality by engaging individuals in policy-making and decision-making processes through digital platforms [68]. As AI-driven progress permeates public administration, its impact is palpable not only on the general public but also on government personnel and the organizational structure itself. The perpetual challenges of power dynamics, trust, and legitimacy come to the forefront as artificial intelligence becomes increasingly intertwined with people's lives and societal functions. Within the framework of organizational theory, comprehending the intricacies of factors influencing the adoption and dissemination of AI in public administration is vital for the creation of public value (Ashok, Narula, and Martinez-Noya, [6]).

Difficulty in defining AI stems from problems with terminology. Dwivedi et al. [28] propose an “institutional hybrid” method for characterizing AI’s definition, the scope of application, and related academic disciplines. Artificial intelligence (AI) is defined as “a cluster of digital technologies that enable machines to learn and solve cognitive problems autonomously without human intervention” by Madan and Ashok [46]. For this study, “public administration” refers to the sector of government tasked with enforcing and, perhaps, bettering public policy. Process automation, virtual agents and voice analytics, predictive decision-making analytics, sentiment analysis, and document reviews are some of the best-known uses of AI in this field (Wirtz, Weyerer, and Geyer, [69]). In particular, the study looks into the AI subfields of machine learning (ML) and NLP (natural language processing). Most public administration AI systems incorporate cross-case analysis and data storage and retrieval, as shown by Madan and Ashok [46] and the European Commission and Joint Research Centre (JRC) (2021).

Agarwal [1] notes that several experts believe introducing AI into a company would have far-reaching consequences for its culture, operations, and personnel. Several writers, including Ashok, Madan, Joha, and Sivarajah [7], and Kuziemski and Misuraca [44], have pointed out that the application of AI in government administration raises ethical difficulties. There is little question that AI has many beneficial applications, but it also poses severe risks to society that must be considered (Medaglia, Gil-Garcia, and Pardo, [50]).

3.2 Role of government regulations and societal expectations (research question 3)

In the organizational context of AI adoption, the European Union (EU) (European Commission, 2019), Canadian (Canada, 2020), and British (Gov. UK, 2019b) governments, alongside tech corporations and other entities, have established ethical guidelines governing the application of artificial intelligence. These comprehensive ethical frameworks impose substantial constraints on government AI utilization, shaping the organizational landscape. However, the resolution of conflicts in public value arising from AI implementation has encountered limited progress at the meso and micro levels of governance. According to Morley et al. [55], AI experts face the imperative of translating widely held beliefs about the field into practical applications, underscoring the organizational challenge of aligning AI principles with operational realities.

In discussions surrounding artificial intelligence, the government often assumes the watchdog role, monitoring ethical adherence. Despite the escalating significance of AI in public administration ([44]; Medaglia et al. [50]), scant attention has been directed toward examining the public administration's role as a technology user. Wirtz and Müller [68] conducted a literature study revealing gaps in our comprehension of the challenges the public sector faces in adopting AI. Through a comparative study, Madan and Ashok [46] highlight the limited understanding of how governments deploy and implement AI, raising questions about its optimal utilization for maximum benefit (Wang, Teo, and Janssen, [67]). Researchers are encouraged to delve into the causes and barriers underlying this phenomenon [3] within the realm of organizational theory in AI adoption.

3.3 Resource-based view

3.3.1 Internal resources and capabilities for AI adoption

The resource-based view (RBV), a widely utilized lens in organizational theory due to its insight into organizational performance through the lens of internal resource heterogeneity (Barney, [11]), offers valuable perspectives. Clausen, Demircioglu, and Alsos [23] highlight the considerable influence public organizations exert over people and resources. Unlike tangible resources, capabilities encompass intangible qualities such as organizational culture, processes, and employees' skills. Organizations strategically leverage both human and material resources [60] to fulfill their missions, emphasizing the importance of dynamic capabilities in adapting to changing environments.

In the context of public administration, where elections and policy changes introduce a dynamic external environment, the RBV underscores the need for effective internal knowledge processes. Public managers grapple with the challenge of cultivating these processes to navigate external fluctuations and reconcile competing demands [7]. The focus for public managers is on developing the “ability to integrate, build, and reconfigure internal and external competencies to address rapidly changing environments.” This emphasis aligns with the RBV’s capacity to endow public sector organizations with dynamic capabilities crucial for policy implementation and service provision [60]. The RBV thus becomes instrumental in facilitating effective regular rigidity avoidance and essential capability refreshment within the realm of public administration.

Three aspects of a company’s strategy must be considered: public values, legitimacy and support, and internal competency (Moore, [54]). In order to carry out such ground-breaking innovations with a variety of public value configurations, organizations need “internal capabilities,” which may be thought of as their dynamic capacities and internal knowledge processes associated with AI implementation. Political leaders and central governments with digital transformation goals provide AI with the legitimacy and support it needs to succeed. The legitimacy and broad acceptance of AI-driven services are bolstered by citizens' participation in their development and use [61]. The precise features and design of the AI employed are also important factors in value generation for society. Thus, it is evident that a trifecta of technological, organizational, and environmental factors shapes the evolution of AI.

3.3.2 Challenges in acquiring and developing resources (research question 1)

The findings in this part help in answering the first research question on challenges in adopting AI in government service delivery.

3.3.2.1 Inadequate or missing data

Artificial intelligence systems can only achieve their goals after training on data from the relevant issue area. Organizations may struggle to “feed” their AI systems with data of sufficient quality or quantity [42]. Possible causes include a lack of access to the information or a delay in its collection. This gap may introduce inconsistency or bias into the results the AI system produces. Representative, good-quality data may help avoid this issue, as can starting with simpler algorithms that are easier to comprehend and less prone to bias before moving to more complex AI [42].
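The "check the data, start simple" advice can be made concrete with a short sketch: a missingness report followed by a small interpretable baseline model. The dataset, columns, and eligibility rule below are hypothetical assumptions, not any agency's data.

```python
# Illustrative sketch of basic data-quality checks followed by an
# interpretable baseline model. Dataset, columns, and rules are hypothetical.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "household_income": rng.normal(45_000, 12_000, 300),
    "dependents": rng.integers(0, 5, 300),
    "prior_claims": rng.poisson(1, 300),
})
df.loc[rng.choice(300, 30, replace=False), "household_income"] = np.nan  # simulate gaps
df["eligible"] = ((df["dependents"] >= 2) | (df["prior_claims"] == 0)).astype(int)

# 1. Data quality: report missingness before any modelling.
print(df.isna().mean().round(2))

# 2. Start with a small, interpretable model rather than a black box.
features = df.drop(columns="eligible").fillna(df.median(numeric_only=True))
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(features, df["eligible"])
print(export_text(tree, feature_names=list(features.columns)))
```

The printed tree can be read and challenged by non-specialists, which is precisely the point of starting simple.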

3.3.2.2 Outdated infrastructure

In order to provide valuable results, AI systems will need to be able to process vast amounts of data quickly and efficiently. That can only be done on machines with sufficient memory and processing speed. Despite the importance of AI development, many businesses are still using outdated systems that can’t keep up with the demand. Therefore, companies that want to use machine learning to transform their L&D procedures should be ready to invest in state-of-the-art infrastructure, tools, and apps [46].

3.3.2.3 Adaptability to existing buildings and infrastructure

Including artificial intelligence (AI) in a training programme takes more than adding a few plugins to an LMS. Evaluating the system’s storage, processors, and underlying infrastructure is crucial to ensure they are sufficient for the task [42]. Meanwhile, the team needs training on how to make the most of the new resources available to them, how to troubleshoot common problems, and how to spot signs that an AI algorithm is falling short. For a smooth transition to machine learning without hitting any of these snags, it is best to work with a service provider with substantial expertise and understanding of artificial intelligence (Glikson and Woolley, [30]).

3.3.2.4 Lack of qualified workers in the AI industry

Given how innovative the idea of AI is in learning and education, it is reasonable to claim that finding persons with the appropriate knowledge and talents is a huge challenge. Due to a lack of in-house experience, many businesses hesitate to experiment with artificial intelligence [4]. While hiring a third-party consultancy to assist with the transition to machine learning is possible, forward-thinking companies are beginning to appreciate the value of building their own in-house expertise. They suggest educating and guiding employees on AI development and deployment, hiring specialists, and even licensing experts from other IT companies to produce learning prototypes in-house [46].

3.3.2.5 Having excessive confidence in an AI system/unreliability

For all the good it has done, technology can sometimes also cause harm. AI is only as good as the data it is given, so erroneous input will yield erroneous output. It is challenging to reduce the complexities of learning to a set of facts that can be put into an AI system. This underscores the significance of AI explainability in facilitating a painless evolution toward machine learning. Analyzing algorithms and training staff on how AI makes decisions may increase openness and reduce the likelihood of inappropriate use (Glikson and Woolley, [30]).
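One practical explainability check alluded to here is permutation importance, which reveals which inputs a trained model actually relies on; the sketch below applies it to a synthetic dataset and model, not to any deployed government system.

```python
# Minimal sketch of permutation importance as an explainability check.
# Model and data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
# Features with near-zero importance warrant scrutiny: the model may be relying
# on proxies or noise, which is exactly the kind of opacity the text warns about.
```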

3.3.2.6 Budgetary preferences

Incorporating AI into a training plan will not be cheap, as should be evident from the information presented above. AI experts with the proper knowledge are needed to implement a continuous AI training plan and perhaps modernize IT infrastructure so that it can handle machine learning tools [42]. While some of these costs will inevitably be incurred, others can be mitigated by exploring no- or low-cost software and education alternatives. Careful assessment is needed to determine which artificial intelligence training resources would make the best use of time and money [46].

3.4 Technological innovation systems (TIS) (research question 3)

The findings and discussions in this part are useful in addressing the third research question on how organizational theory can help devise strategies to overcome obstacles.

3.4.1 Interaction between actors, networks, and institutions

In the context of organizational theory applied to AI adoption, innovation is perceived as a dynamic process shaped by institutional elements, drawing from a structural viewpoint. This perspective aligns with the historical definition of a “technological (innovation) system” as “a network of agents interacting in a specific economic/industrial area under a particular institutional infrastructure or a set of infrastructures and involved in the generation, diffusion, and utilization of technology” [49].

Within the framework of Organizational Theory, the classification of Technological Innovation Systems (TISs) is often based on the products they encompass and the specialized fields they engage with [14]. This organizational lens transforms client-vendor relationships into dynamic problem-solving networks [18]. Consequently, TIS actors are positioned at various points along the organizational value chain, including the supply chain, with institutions like banks and government agencies playing integral roles. Notably, within the organizational boundaries of a TIS, only certain organizational players, networks, and institutions are considered, while others are contextualized as part of the broader organizational system. This application of organizational theory sheds light on the strategic organizational aspects that influence the dynamics of technological innovation systems in the realm of AI adoption.

Most recent work focuses on innovation functions, even though structural components constitute the foundation of any TIS study. The system's functions determine the creation and dissemination of novel technologies, goods, and services [5]. In other words, they are elements of the broader innovation process that affect the system's overall innovative performance [5]. The dynamic interplay between the actions and interactions of actors, the influence of institutions, and the effects of self-reinforcing processes gives rise to these functions over time [14].

3.4.2 Stakeholders' collaboration and their role in AI implementation

External actors, networks, and institutions influence the system at several levels of analysis, including levels not directly related to the core functions of the TIS. These environmental influences are especially crucial while the TIS structure is still fragile and incomplete during its formative phase [48].

Figure 1 offers a high-level summary of the paper's stance on TIS, highlighting two aspects that are discussed in further depth below. First, it shows the external factors that influence a focal TIS and how that TIS interacts with other TISs and the business sector. Second, it indicates whether parts of the focal TIS (shown in blue) are shaped by upstream (green) or downstream (red) effects. This research focuses on how context structures affect the focal TIS; although interaction in both directions is possible, a young TIS is unlikely to influence its surroundings significantly.

Fig. 1 A Conceptual Framework of AI Use in Government Services Using Organisational Theory [66]

4 Types of contexts

Previous studies have shown that multiple context structures affect the formation and performance of a focal TIS. Technological and industry contexts are the primary ones discussed in this work [64].

The technological (innovation) environment may be seen as an interconnected network of TISs. Bergek et al. [13] state that technologies laterally or vertically related to the focal TIS are either complementary to it or compete with it. Where the line is drawn between the focal TIS and the context TISs depends on how the system is delineated. A technology can be characterized along several dimensions, such as the range of its development and production methods and the breadth of its potential applications and markets (Sandén and Hillman [62]). With a broader definition, the number of context TISs would drop and the focal TIS would become more complex (Fig. 2).

Fig. 2 TIS Relation Structure [64]

Similarly, if the geographic boundaries of the TIS were expanded, more innovation activity would be attributed to the focal TIS; if they were contracted, a wider variety of context structures would be created by TISs situated outside it [13]. Since the focal TIS is defined at the national level, the international TIS is treated as a context structure in this work.

The sectoral (production and consumption) context comprises the production, distribution, and consumption systems related to the TIS. It also includes the industries that will implement and use the innovations created by the focal TIS, since in some instances these sectors are the most important [13]. In studies of societal transitions, this setting is often described as an existing socio-technical configuration or regime. A single TIS may span multiple sectors, and a TIS's affiliation with a specific sector may shift over time [13].

Although these two contexts are the most salient, the geographical setting is also critical. A TIS serves a particular country or geographical area, so 'local' factors such as a country's institutional setup or a region's specialization may affect a focal TIS's functioning [24]. Examining innovation patterns and regional performance gaps is indeed essential, but this study focuses on the effects of the technological and industry contexts.

The findings are summarized in Table 1.

Table 1 Findings

5 Conclusion

Artificial intelligence (AI) has been the subject of heated discussion among policymakers for over a decade, and its importance to the public sector will grow as laws and technology advance. Machine learning, a closely related field, studies how computers can improve their performance through experience. Authorities at all levels of government are investigating and adopting AI and machine learning to strengthen their standing. Increased productivity, the automation of mundane processes, and simpler decision-making are just a few of the developments that AI could make possible. Despite these benefits, the potential for widespread job displacement through automation is one of AI's most serious drawbacks, and the long-term effects of the technology on governments remain unknown. AI is employed at all levels of organizations for tasks such as knowledge assessment, the automation of manual work, and pattern recognition.

The review was based on a conceptual framework that applies organizational theory to evaluate the opportunities and challenges involved in using AI for the effective delivery of government services. The study answers the three key research questions as follows:

5.1 Biggest obstacles to introducing AI in the government sector

The literature identifies various obstacles, including the need for enabling conditions, challenges in assessing AI consequences, and the potential impact on societal behavior. The conceptual framework stresses the importance of proper conditions before AI has a substantial effect, emphasizing the contextual nature of AI impacts. It also highlights the challenges associated with understanding and quantifying the public impact of AI technology.

5.2 Ways AI helps improve public service delivery

The literature emphasizes the benefits of AI in enhancing efficiency, achieving cost savings, and improving public service outcomes. However, it also acknowledges challenges related to governance, ethics, and accountability. The discussion on AI governance and decision-making processes underscores the need to balance administrative efficiency with economic and social considerations.

5.3 Organizational theory’s role in overcoming AI obstacles

Organizational theory can play a crucial role in overcoming obstacles and maximizing opportunities in AI adoption. The literature suggests that effective organizational culture and practice adjustments are essential for realizing the full potential of AI. The challenges associated with government procurement processes and the need for adjustments in organizational culture highlight the importance of organizational theory in addressing impediments to AI adoption.

5.4 Contributions to the scholarly understanding of AI adoption in the public sector

We suggest that integrating the model with the SDG policy framework may dramatically raise awareness and policy salience in a public sector setting and provide a valuable, approachable starting point for those who are not technical or subject-matter experts. We believe the conceptual framework, the suggested approaches, and, most importantly, the guiding questions can help public sector workers understand, anticipate, and manage the societal impact of AI. The work does so by expanding on and extending prior efforts to operationalize ethical AI, particularly the integrative framework proposed by Wirtz et al., and by describing the processes and levels at which public sector professionals must engage to avert harms produced by AI. To ensure that values such as justice and safety, which we term social sustainability, are upheld when the public sector makes decisions on AI governance, the current model establishes boundaries that must be respected. One of the first requirements is ensuring that government can keep AI under control indefinitely.

5.5 Future research directions and limitations

The purpose of this research was to raise awareness of the role of AI in creating long-term value for government organizations when offering services to the public. According to the present study's literature synthesis, based on a thorough evaluation of AI studies, this area has much untapped potential, and additional research is required to supplement and broaden current knowledge. Future studies may use empirical analysis to examine this phenomenon, and organizations should consider the broader societal implications of AI. Given the study's limitations, caution is advised when applying its findings in other settings. Future researchers may survey government employees to learn more about governments' plans to invest in artificial intelligence applications and the most significant obstacles standing in the way of this change.

The key implication for future study is the need to consider international dynamics, sectoral contexts, and competition when analyzing TISs in their infancy. Future research might also examine whether adverse contextual effects vary with the institutionalization of the context structures in question, and how the manner, site, and relevance of contextual influences change over a TIS's lifecycle. Thorough evaluations of context impacts may be complex because of the sheer volume of relevant context structures; we therefore suggest that researchers first conduct a preliminary study to identify the most critical structures before undertaking more in-depth analysis. Some research suggests that as TISs advance, context structures become less important, simplifying context-sensitive analysis.

A few qualifications need to be made. There is a large body of literature analyzing the effects of AI and society on government policy-making, and our assessment, while broad, is not exhaustive. Our effort to distil sustainable AI to its essentials has closed off specific paths that may be useful in other contexts, and there is a vast body of literature we have barely touched. For instance, notions of AI sustainability that relate more directly to environmental or economic issues may be helpful in other policy or research endeavors. Despite these conceptual limitations, we are confident that our research makes a substantial conceptual and theoretical contribution by setting a framework for rigorously understanding the notion of sustainable AI in the public sector; these limitations do not undermine that contribution. This research focused on the public sector, specifically on the problems and opportunities presented by using AI there. Its overarching goal was to learn how best to use artificial intelligence to spur innovation in the public sector and to examine the obstacles and possibilities for improving government service delivery.