Abstract
This paper presents a study of the use of artificial intelligence (AI) in the Norwegian public sector. The study focused particularly on projects involving personal data, which add a risk of discriminating against individuals and social groups. The study included a survey of 200 public sector organizations and 19 interviews with representatives of AI projects involving personal data. The findings suggest that AI development in the public sector is still immature, and few projects involving personal data have reached the stage of production. Political pressure to use AI in the sector is significant. Limited knowledge of and focus on AI development among management has made individuals and units with the resources and interest to experiment with AI an important driving force. The study found that the journey from idea to production of AI in the public sector presents many challenges, which often lead to projects being temporarily halted or terminated. While AI can contribute to the streamlining and improvement of public services, it also involves risks and challenges, including the risk of producing incorrect or discriminatory results affecting individuals and groups when personal data is involved. The risk of discrimination was, however, not a significant concern in the public sector AI projects. Instead, other concepts such as ethics, fairness, and transparency took precedence in most of the projects surveyed here.
1 Introduction
Artificial intelligence (AI) has made significant progress in recent years. It has been identified as one of the most important technologies of the 21st century [1], expected to have a major impact on solving small and large societal challenges with an effect for private and public sectors and for individuals [2]. AI is considered an important tool for improving services and making the Norwegian public sector more efficient [3]. Adopting AI, however, also entails challenges and risks. One of the most worrying risks for citizens is the risk of incorrect, unfair, or discriminatory results when AI is used in public services to draw conclusions based on personal data.
This paper reports on a study of AI projects in the Norwegian public sector. A survey helped map AI projects in the sector. The survey included questions about the challenges and possible risks AI projects faced, and about efforts to prevent potential risks, in particular regarding unfair or discriminatory treatment of individuals. Through follow-up interviews with AI projects involving information about individuals and the use of personal data, we further explored in depth the experiences involved in developing AI systems in the public sector, with a special focus on how the risk of discrimination was perceived and dealt with.
In this paper we present the results of this study, illustrating several challenges, obstacles and stumbling blocks that AI projects in the public sector had experienced. Considering these challenges in relation to requirements for AI projects, we use the metaphor of a hop-on-hop-off journey as a way of understanding the public sector organizations’ experiences.
We start with a literature review of relevant aspects of the public sector engagement with AI before describing the methodological framework and the empirical data. Then we present the findings with an emphasis on the elements producing barriers, or “stops”, on the hop-on-hop-off journey.
2 Literature Review
The public sector in Norway is working purposefully to contribute to the development and improvement of public services in an efficient and sustainable manner. Digitalization and new technologies, including AI, are important tools to achieve this. AI can contribute significantly to the goal of creating public digital services tailored to users’ needs [3]. Used at its best, AI can contribute to the streamlining and improvement of public services, making services easier to use and reducing unfair differentiation in services offered to different social groups. There are good examples of the use of AI in Norway, for instance in the healthcare sector. The benefit of using AI is emphasized and further development encouraged in political strategy documents such as the National Strategy for Artificial Intelligence [4]. The national strategy points out that Norway has the socio-technical infrastructure in place for succeeding with AI. This includes elements such as a high level of public trust in authorities and the public sector in Norway, a high degree of digital competence in the population, a well-developed technological infrastructure, and a public sector that has come a long way in developing a digital administration. Registry data developed over a long period of collecting data, including personal data, can provide important access to data for developing AI for citizen services [5]. The national AI strategy also emphasizes that public organizations “have the capacity and competence to experiment with new technologies,” which can be crucial for the public sector organizations’ ability to adopt new technology, such as AI.
However, the risks of erroneous, unfair, or biased results from AI systems that have concerned researchers in recent decades [6,7,8,9,10,11,12] also concern the public sector in Norway. National and international initiatives are developing guidelines for minimizing such risks [4, 13,14,15,16]. One step on the way is the regulation of the use of personal data through the European Union’s General Data Protection Regulation (GDPR), which has been in force in Norway since 2018. The introduction of the GDPR received massive attention and succeeded in putting the issues of protecting, and limiting the storage and use of, personal data on the agenda, according to the Norwegian Data Protection Authority’s annual report for 2018 [17]. The overarching principle of the GDPR is to store as little personal data as possible, for as short a time as possible [18]. This principle, however, is not in line with the requirements for developing high-quality AI systems, which rather require large amounts of data. Such paradoxes are the subject of assessment and prioritization, and the recent EU AI Act aims to balance these considerations. A draft of the EU AI Act was approved in June 2023, pointing towards the first European law on AI.Footnote 1 The AI Act aims to regulate the use of AI across sectors in terms of the risks it poses. The use of AI in ways that are considered a threat to people, such as biometric surveillance, emotion recognition, and predictive policing, is banned as an “unacceptable risk”. High-risk applications of AI that involve using personal data, for instance for recruitment or for providing services, will be regulated and subject to specific legal requirements. AI systems that do not belong in the first two categories are largely left outside this regulation [16, 19].
As reflected in the EU AI Act, a critical risk of AI producing discriminatory results for social groups and individuals is related to the use of personal data in an AI system. While this is a risk in private as well as public sectors, AI involves a particular set of challenges for the public sector, which relies on accessing a large amount of personal data in order to deliver its services to the citizens. The public sector has a particular obligation to provide good and fair services [3]. Compared to many other western countries, Norway has a large public sector managing public resources and providing vital services that depend on sufficient and updated information about citizens [20]. Therefore, the Norwegian government and public sector have a unique access to a large amount of data about citizens that has been collected over time. This data includes a wide set of information about citizens, from sensitive personal data, health data, to data about employment, education, and finances [5, 20]. While the Norwegian population has a high level of trust in the public sector [21], responsible use of technology, including AI, is crucial for maintaining the trust-based relation between citizens and authorities and, ultimately, for democracy [22]. Experts have expressed concern that the public sector’s access to large amounts of data, along with increased digitalization, “can create pressure to use data in new and intrusive ways,” and see a risk that the development may point towards greater control, for example, through an increasing use of AI to “uncover fraud and deception” [12, 23]. While negative effects of AI are often unintended [24], clear goals and strategies are also necessary to avoid a development towards “public surveillance capitalism” [12, 25].
The United Nations has warned about the combination of AI with large amounts of data about citizens collected through government agencies, referring to this as “a digital welfare dystopia” [23]. Broomfield and Lintvedt claim that Norway, too, risks stumbling into this digital welfare dystopia [12], despite Norwegian public sector organizations being subject to strict laws and regulations for handling personal data. This is particularly critical, as public organizations are obligated to ensure equality for a diverse population in public services. Equally critical for public sector organizations is the fact that their customers, i.e. the citizens, cannot choose another service provider. Thus, while market forces might punish private companies that produce unfair results for their clients, other mechanisms, for instance an auditing system, are needed to secure fairness across public sector services.
In this study we aimed to learn more about challenges as well as potential risks of using AI in the public sector, and in particular how potential risks of discrimination were dealt with to ensure the fairness and high level of trust expected by the public sector.
3 Methods
The main aim of this study was to map the use of AI in the public sector, and to learn more about the challenges and risks associated with AI projects in the sector. AI does not operate in isolation; it must therefore be understood within the context in which it operates. In order to capture and understand the relationship between society and technology, and between opportunities, challenges, and risks, in particular related to discrimination, we worked in a cross-disciplinary team including sociological, technical, and feminist perspectives.
We employed a mixed-methods strategy that involved initial meetings with AI experts from the public and private sectors as well as from academia and NGOs. Based on the literature review and these meetings, we developed a survey to map the situation, before inviting a selection of public sector organizations to participate in in-depth interviews.
3.1 Online Survey
The online nation-wide survey was sent to nearly 500 governmental and municipal organizations from various sectors, including health care, education, labour and welfare administration, tax authorities, and more. The main goal of the survey was to map the status of plans and projects related to AI in the public sector. The survey included questions about the aim of using AI in the organization, the competence involved, challenges encountered in relation to access and use of data, competence in technical, organizational, and juridical issues, and how risks of discrimination were perceived and dealt with. The response rate was 40%, with 200 organizations responding to the survey. Among the 200 received responses, 59 organizations had active projects or plans for using AI, and of these, 39 AI projects involved the use of personal data.
3.2 Interviews
The next phase involved an in-depth study including interviews with public sector organizations that had AI projects and plans, identified through the survey. Here we were mainly interested in the projects involving personal data, as that brings up some of the special issues for the public sector as a service provider for citizens and raises critical questions regarding risks of discrimination when engaging AI in the public sector. The interview questions focused on the concrete AI projects and plans and the organizational setting; technical, organizational, and juridical competences and challenges; and risks of discrimination and the mitigation of such risks.
We invited the organizations that had AI projects or plans involving personal data to participate in interviews. A total of 19 interviews were organized with organizations representing a wide spectrum of governmental, municipal, and inter-municipal organizations of different sizes and localized in different regions of the country.
In the invitation for the interviews, we encouraged the organizations to include a team of individuals with diverse roles in their AI project. The informants included 18 men and 9 women. Some were managers, while the majority had a professional background in information technology and AI.
3.3 Analysis
The survey was used to map the status of AI projects and to identify which issues appeared most challenging for the public service organizations. The qualitative interviews allowed us to explore in more depth the challenges and risks experienced in the AI projects.
In the qualitative analysis we used an image of an AI project lifecycle based on relevant research literature emphasizing different stages of an AI project, various aspects of maturity and requirements of such a lifecycle from start to finish [26,27,28] to map and understand the development of the projects. Due to how the AI projects had started, developed, and temporarily or permanently ended, many of them before reaching the expected end point, we pictured this as a journey with a set of stops. The metaphor of a journey guides the presentation of the main findings below. We share some of the findings from the survey, while the main part of the analysis is based on the in-depth interviews focusing on the projects that involved personal data.
4 Findings
The National AI Strategy [4] encourages the public sector to develop AI as a tool supporting decision making and as a part of public digital services, including the interaction between government bodies and citizens. The survey mapping AI activity in the public sector showed that many organizations are currently exploring how AI can be used to improve public services and make them more efficient. The interviews further showed that the motivation varied from a curiosity about what technology can accomplish, to an argument of spending taxpayers’ money wisely. The latter could be interpreted as a duty to explore how to become more efficient with the use of new technology such as AI, one informant pointed out.
The National AI strategy employs a wide definition of AI, ranging from machine learning (ML) to automated processes. We started the survey with a similarly wide definition to let the respondents decide what they included in a definition of AI; thus, the AI projects involved in this study comprised both ML and simple automated procedures.
The survey showed that many organizations had plans for using AI. However, less than 20% of the responding organizations included personal data in their AI project. These AI projects had a variety of goals, including improving the quality of data, detecting suspicious patterns and errors in the data, and predicting needs in the organization or users’ behaviour. Some of these projects had been initiated to explore the possibilities of AI for the services, or for testing AI models.
Most of the AI projects involving personal data were, however, still at an early stage, exploring and developing the possibility of using AI, and only a handful of these projects confirmed that they had reached the final stage of being in production. Many of the projects had encountered challenges, which will be further elaborated below.
4.1 AI Development as a Hop-on-Hop-off Journey
The survey as well as the in-depth interviews showed that the journey from idea to production of AI in the public sector presents many challenges that often lead to AI projects being temporarily halted or terminated. As a result, only a few AI projects operating on personal data had reached the production stage; their stories, however, involve important lessons. Figure 1 illustrates the identified challenges of AI projects as a series of elements that need to be in place to safely navigate from start to end, beginning with the design of the project and ending with the journey’s end point, where the AI system is in use, or “in production”.
Figure 1 illustrates the development of an AI project with the different elements representing stops on an imaginary hop-on-hop-off journey. The elements of Fig. 1 do not represent a perfect or necessary chronological development.
4.2 Project Design
Project design is an important stage of an AI project where crucial decisions and choices are made. However, not all the AI projects had started with an overall design or management strategy anchored at the organizational level. In several cases, the origin of the project had been tech people finding space and resources for exploring AI in the context of the organization.
4.3 Leader Engagement and Competence
Most projects had a clearly defined goal and objective within the organization. However, the previous examples of exploring AI as the main driving force illustrated a weak leadership involvement in some organizations. AI is still a new technology for most public sector organizations, and it is dependent on those who want to participate, one of the informants told us. Similar to other studies among leaders in Norway [29], most respondents to the survey agreed that there was a need for more knowledge about AI among leaders.
4.4 Data
Data is a crucial element of AI, and Gröger claims that there will be no AI without data [30]. Data, however, also introduced a number of challenges for the public sector organizations, including navigating the European General Data Protection Regulation (GDPR) and Norwegian juridical frameworks as well as known and unknown bias in data [31, 32]. While many of these challenges are not specific to the public sector, some of them are, in particular due to the position and responsibility that the public sector organizations have as service providers for citizens. This, for instance, includes the public sector’s access to a huge amount of data collected from the service recipients. While this suggests that the public sector has access to valuable data that would be useful for developing good and precise AI systems, the juridical framework for the public sector challenges this use of the data. The Norwegian government established a regulatory sandbox for AI systems in 2020, under the Norwegian Data Protection Authority [33]. One of the early public sector organizations to be assessed through the sandbox was the Norwegian Labour and Welfare Administration (NAV), which has access to a large amount of personal data from the service recipients. The concluding report suggests that NAV has the right to use such data in an AI system; however, using the same data for developing and training the AI system was questioned. The concluding report points to the need for developing laws that also take into account a responsible development of AI projects in the public sector [34].
4.5 Technical Competence
Access to technical expertise varied with the size of the organization. Norway has many small and medium sized municipalities that do not have the same access to technical expertise as the bigger municipalities have. The long-stretched country also adds challenges for rural organizations in accessing the networks of competence gathered in and around the bigger cities. Access to technical competence influenced whether the AI system in question had been developed in-house or bought from external companies, sometimes specially designed, sometimes off-the-shelf AI algorithms and systems. This could introduce doubt about what kind of data the models were based on, and thus about whether the models would fit the organization’s needs. This could also make it more challenging to assess the risk of discrimination from a particular AI system.
4.6 Domain Competence
It might seem like an obvious statement that digital systems must meet the requirements of the organization implementing them, implying that a certain level of domain competence is necessary. Some of the projects that had bought or received external tech support had, however, experienced that a lack of domain competence and insights into the organization’s practical and legal requirements had introduced weaknesses and challenges to the AI system. Here we also saw how a lack of multi-disciplinary competence could result in an unclear situation, for instance by making it difficult to judge whether the AI system complied with the objectives of the organization and relevant regulations.
4.7 Juridical Competence
Juridical competence is vital for establishing how personal data can be used in an AI project. While this is true for any AI project, an extra set of regulations applies to public sector organizations that depend on collecting personal data from their clients. In general, permission for such data collection is strictly limited to the kind of data needed for providing services. Thus, as the case of NAV above illustrates, some of the relevant laws and regulations are challenged by new digital technologies, such as AI, that require public organizations to consider a new set of juridical questions. Some of the informants had experienced that failing to involve juridical competence at an early stage of their AI project had led to an abrupt stop. It was typical for such cases that the AI project had started without an overall design for mapping the socio-technical opportunities and requirements. Others struggled with strict interpretations of GDPR and risk assessments, and for some, this was perceived as a barrier to exploring AI.
4.8 Competence About Discrimination
Finally, competence about discrimination is not positioned as the last stop before the end point of this AI development journey because it is the least important, but rather because few of the AI projects had engaged with questions of discrimination. The public sector is required to provide similar services for all citizens. Thus, considering the risk of discriminating between groups or individuals when introducing new technical systems, including AI, is crucial. Research has shown that when using personal data in AI systems, it is necessary to deal with known as well as unknown biases in the data [31, 32]. Historic as well as recent data might portray different social groups’ education, employment, social positions, economy, and more, in ways that would appear discriminatory if they were reproduced. These issues have been illustrated in examples of using AI in recruitment processes in which, for instance, the current gender imbalance in working life has been turned into preferences for candidates [35].
Thus, even when everything is “correct”, bias in data can still produce wrong, unfair, and discriminatory results [6]. In our survey, however, only 3% of the respondents believed that AI would increase the risk of discrimination. In the interviews we found that few of the organizations had engaged with this topic: “We haven’t thought about that, thank you for reminding us”, one informant said. We also found a tendency for other considerations and concepts, such as AI ethics, fairness, transparency, or privacy, to take precedence, while discrimination in line with the definition of the Norwegian Anti-Discrimination Act, as unwanted discrimination, was a topic for only a handful of AI projects. In some cases, we found that discrimination was translated into the technical concept of “differentiation”. This made discrimination appear natural and wanted, quite different from the Anti-Discrimination Act, which aims to prevent unwanted discrimination. While some of the alternative concepts that were introduced when we asked about discrimination reflect important discussions of AI [36, 37], they also move the question further away from the issue of preventing unwanted discrimination and consequently weaken the perceived need for dealing with these issues. The interviews demonstrated that a lack of knowledge about discrimination resulted in little attention being devoted to the risk of discrimination with AI.
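The mechanism by which “correct” data still yields discriminatory results can be made concrete with a small sketch. The example below is purely hypothetical and not drawn from any of the surveyed projects: a toy screening score learned from historically gender-imbalanced hiring records reproduces that imbalance for equally qualified candidates. All names and numbers are illustrative assumptions.

```python
# Hypothetical illustration: every historical record is factually "correct",
# yet a model fitted to the data inherits the historical bias.

# Historical records: (gender, qualified, hired). In this invented history,
# qualified men were hired far more often than equally qualified women.
history = (
    [("M", True, True)] * 80 + [("M", True, False)] * 20 +
    [("F", True, True)] * 30 + [("F", True, False)] * 70
)

def hire_rate(group: str) -> float:
    """Observed hire rate among qualified candidates of one group."""
    outcomes = [hired for g, qualified, hired in history
                if g == group and qualified]
    return sum(outcomes) / len(outcomes)

# A naive "model" that scores new candidates by their group's historical
# base rate -- a simplified stand-in for what a classifier trained on this
# data tends to learn as a prior.
def score(gender: str) -> float:
    return hire_rate(gender)

print(score("M"))  # qualified man
print(score("F"))  # equally qualified woman: same merit, lower score
```

The point of the sketch is that no individual record needs to be wrong: the bias lives in the aggregate pattern, which is exactly what statistical learning extracts and reproduces.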
4.9 In Production
The many hurdles and barriers experienced in the development phase of the AI projects in the public sector had resulted in only a handful of projects involving personal data reaching the end point of putting AI into production. The interviews illustrated that this stage, too, could introduce challenges for the AI system. In this phase, however, the challenges more directly affected users, for instance a risk of differentiating between users based on digital competence, and a threat to citizens’ trust in the public sector if something was unclear or errors occurred.
5 Discussion
The analysis above focused on the public sector organizations with ongoing AI projects that involved personal data. The findings illustrate that AI development is still at an early stage in large parts of the Norwegian public sector. The development seems in some cases to be driven less by management and more by interest in and willingness to experiment with the technology. In addition, there is a notable political pressure to use AI in the public sector, which was mentioned several times in the interviews. Thus, we can identify three levels of driving forces for the current AI development in the Norwegian public sector:
- The political level: strategies for digitalization in the public sector and for AI in Norway created expectations about the use of AI [3, 4].

- The management level: the management level in the public sector, with some exceptions, was considered to have limited knowledge of AI and a limited focus on establishing and directing the development of AI in the sector.

- Individuals and units: individuals and units within each organization that expressed a willingness to contribute to AI development were important for initiating AI projects.
Thus, the political level contributed to pushing AI into the sector, the management level was less visible in many of the AI projects, while individuals and units with knowledge about and the resources to experiment with AI were important driving forces in many organizations. This indicates that the public sector is still in an early stage of exploring and learning about AI, also reflected in other studies [5, 38].
Another critical issue for engaging AI in the public sector was the confusion arising from the wide definition of AI in the National AI strategy, involving everything from simple automation to machine learning (ML) techniques. Only automated procedures can be used for decision making in the public sector, while AI techniques involving ML can only be used for supporting decisions where humans have the final say. Putting these different technologies in the same pot creates confusion, both inside and outside the public sector, about whether it is technology or humans making the decisions.
This also highlights the importance of interdisciplinary competence for successful AI projects, as these projects do not operate in a digital vacuum but must interact with a range of different social, cultural, political, and legal rules and regulations [13, 37].
Many of the challenges we found revolve around paradoxes that arise when AI is introduced in the public sector. Firstly, the public sector has different requirements for accuracy in services compared to the private sector, as the Public Administration Act requires all individual decisions to be justified [4]. Therefore, AI in the public sector must be transparent and explainable to avoid producing errors. Such issues are further complicated by the fact that not all decisions in the public sector warrant full transparency, especially concerning the government’s control functions. This creates room for interpretation and discussion, illustrating that regulatory guidelines and legislation do not guarantee a shared understanding among all parties involved. Secondly, the regulations and legal framework are not well adapted to the digital reality, leaving many questions unanswered or open to multiple and conflicting interpretations. Thirdly, the different regulations that intersect in this field are partly in conflict with each other. The overarching principle in GDPR legislation is to store as little data as possible, for as short a period as possible, while AI requires a significant amount of data, sometimes more than the original registration of data, if discrimination is to be avoided. Such paradoxes become subject to assessment and prioritization: which carries more weight, development and efficiency on one side, or the risk of errors on the other?
While there were many challenges and barriers for the public sector AI projects, a reflection on the risk of discrimination was not particularly prominent in the landscape of challenges described by the public sector organizations. Other concepts, such as AI ethics, fairness, and transparency, appear to take precedence over discrimination. The concept of discrimination functions as what theorists Laclau and Mouffe refer to as a “myth”: a term we can discuss together without having agreed on a specific definition, so that we can put different content into it [39]. “Bias” was often discussed, usually as unconscious bias, while the term “discrimination” was less frequently used in this field. If questions of discrimination are translated into alternative concepts that make it appear less of a problem, it is less likely to be addressed as a challenge. The lack of focus on unwanted discrimination reflects a gap to be filled by future policy and practice in the public sector.
There is an increasing number of guidelines, frameworks, and models aimed at countering AI from producing biased, unfair, or harmful outcomes [1], and more are currently being developed. Many of these resources target technologists who are responsible for the technological development of an AI system [6]. However, AI researchers warn that AI developers alone should not be held accountable for the broad set of considerations that need to be made in AI development, including those necessary for avoiding discrimination [32].
6 Conclusion
Artificial intelligence is still a young technology in the Norwegian public sector. While a handful of larger organizations are well advanced, many smaller public sector organizations do not have adequate resources or access to the necessary competences for developing AI systems. Our study illustrates how the process of developing AI can be described with the metaphor of a hop-on-hop-off journey, where the stops represent necessary elements that sometimes turn into barriers. Not all the AI projects had entered via the first stop of project design, and many of the projects had disembarked, temporarily or permanently, at various stops. Only a handful of projects involving personal data had reached the final stage of putting their AI system into production. The metaphor of the journey illustrates the many elements and questions that need to be dealt with during this process. Technical, juridical, and domain competence, together with competence about discrimination, are all vital for an AI project to make it successfully to the final stage of production with as little risk as possible involved.
While some of the public organizations had succeeded and had a good structure for developing AI with a multi-disciplinary team covering the different required competences, most of the smaller units recognized that they were lacking in one or more competences. Juridical and domain competence are necessary for developing a legal and precise AI system, and the risk of discrimination should not be left behind as the final stop on this journey. Thus, our study confirms the importance of the recommendations from the Council of Europe’s study of the impact of artificial intelligence on gender equality, as the authors conclude that “the regulatory subject is not AI taken in isolation but rather the broader socio-technical apparatus constituted by the interaction of social elements with algorithmic technologies” [13]. AI makes a good example of how technology and society are interwoven, and thus of why leaving technology to tech people alone is not a good strategy for developing technology that can support societal needs.
Notes
- 1.
The EU AI Act was approved after the data collection in our study had been concluded.
References
Di Noia, T., Tintarev, N., Fatourou, P., Schedl, M.: Recommender systems under European AI regulations. Commun. ACM 65(4), 69–73 (2022)
Sousa, W.G., Melo, E.R.P., Bermejo, P.H.D.S., Farias, R.A.S., Gomes, A.O.: How and where is artificial intelligence in the public sector going? A literature review and research agenda. Gov. Inform. Q. 36(4), 101392 (2019). https://doi.org/10.1016/j.giq.2019.07.004
KMD. Én digital offentlig sektor: Digitaliseringsstrategi for offentlig sektor 2019–2025. Kommunal- og moderniseringsdepartementet (2019). https://www.regjeringen.no/no/dokumenter/en-digital-offentlig-sektor/id2653874/
KMD. Nasjonal strategi for kunstig intelligens. Kommunal- og moderniseringsdepartementet (2020). https://www.regjeringen.no/no/dokumenter/nasjonal-strategi-for-kunstig-intelligens/id2685594/
Broomfield, H., Reutter, L.M.: Towards a data-driven public administration: an empirical analysis of nascent phase implementation. Scand. J. Public Adm. 25(2), 73–97 (2021)
Belenguer, L.: AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and Ethics 2, 771–787 (2022). https://doi.org/10.1007/s43681-022-00138-8
Barbieri, D., Caisl, J., Lanfredi, G., Linkeviciute, J., Mollard, B., Ochmann, J., et al.: Artificial intelligence, platform work and gender equality. European Institute for Gender Equality (EIGE) (2022)
White, J.M., Lidskog, R.: Ignorance and the regulation of artificial intelligence. J. Risk Res. 25, 488–500 (2021)
Lepri, B., Oliver, N., Pentland, A.: Ethical machines: the human-centric use of artificial intelligence. IScience 24(3), 102249 (2021). https://doi.org/10.1016/j.isci.2021.102249
Mannes, A.: Governance, risk, and artificial intelligence. AI Mag. 41(1), 61–69 (2020)
Zuiderveen Borgesius, F.: Discrimination, artificial intelligence, and algorithmic decision-making (2018)
Broomfield, H., Lintvedt, M.N.: Is Norway stumbling into an algorithmic welfare dystopia? Tidsskrift for velferdsforskning 25(3), 1–15 (2022). https://doi.org/10.18261/tfv.25.3.2
Bartoletti, I., Xenidis, R.: Preliminary draft Council of Europe study on the impact of artificial intelligence, its potential for promoting equality, including gender equality, and the risks to non-discrimination. The Gender Equality Commission (GEC) and the Steering Committee on Anti-Discrimination, Diversity and Inclusion (CDADI), The Council of Europe (2022). https://rm.coe.int/gec-2022-9-study-on-ai-211022/1680a8ad89
Xenidis, R., Senden, L.: EU non-discrimination law in the era of artificial intelligence: mapping the challenges of algorithmic discrimination. In: Bernitz, U., Groussot, X., Paju, J., de Vries, S.A., (eds.) General Principles of EU law and the EU Digital Order. Kluwer Law International, pp. 151–82 (2020)
UNESCO: Artificial intelligence and gender equality: key findings of UNESCO’s Global Dialogue. Division for Gender Equality, UNESCO (2020)
European Commission: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. Office for Official Publications of the European Communities Luxembourg (2021)
Kommunal- og moderniseringsdepartementet: Datatilsynets og Personvernnemndas årsrapporter for 2018 (Meld. St. 28 (2018–2019))
Lovdata: Act relating to the processing of personal data (The Personal Data Act) (2018)
European Parliament: MEPs ready to negotiate first-ever rules for safe and transparent AI (2023). https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai. Accessed 11 July 2023
Parmiggiani, E., Mikalef, P.: The case of Norway and digital transformation over the years. In: Mikalef, P., Parmiggiani, E., (eds.) Digital Transformation in Norwegian Enterprises. Springer International Publishing, Cham, pp. 11-8 (2022). https://doi.org/10.1007/978-3-031-05276-7_2
OECD: Drivers of trust in public institutions in Norway, building trust in public institutions. OECD Publishing, Paris (2022). https://doi.org/10.1787/81b01318-en. Accessed 11 July 2023
Andreasson, U., Stende, T.: Nordiske kommuners arbeid med kunstig intelligens. Nordic Council of Ministers (2019)
Alston, P.: Report of the special rapporteur on extreme poverty and human rights. UN General Assembly A/74/493 (2019). https://documents-dds-ny.un.org/doc/UNDOC/GEN/N19/312/13/PDF/N1931213.pdf?OpenElement
Redden, J.: Democratic governance in an age of datafication: lessons from mapping government discourses and practices. Big Data Soc. 5(2) (2018). https://doi.org/10.1177/2053951718809145
Jørgensen, R.F.: Data and rights in the digital welfare state: the case of Denmark. Inf. Commun. Soc. 26(1), 123–138 (2023). https://doi.org/10.1080/1369118X.2021.1934069
Suresh, H., Guttag, J.: A framework for understanding sources of harm throughout the machine learning life cycle. In: Equity and Access in Algorithms, Mechanisms, and Optimization, pp. 1–9 (2021)
OECD: Scoping the OECD AI principles (2019). https://doi.org/10.1787/d62f618a-en
SILO: The Nordic state of AI (2022). https://www.silo.ai/ebooks-reports/nordic-state-of-ai-2022
Norwegian Cognitive Center, Bergen Næringsråd: Digital Modenhet på Vestlandet. Delrapport 1: Kunstig intelligens (2022)
Gröger, C.: There is no AI without data. Commun. ACM 64(11), 98–108 (2021). https://doi.org/10.1145/3448247
Friedman, B., Nissenbaum, H.: Bias in computer systems. ACM transactions on information systems (TOIS) 14(3), 330–347 (1996)
Srinivasan, R., Chander, A.: Biases in AI systems. Commun. ACM 64(8), 44–49 (2021)
The Norwegian Data Protection Authority: Sandbox forever (2022). https://www.datatilsynet.no/en/news/aktuelle-nyheter-2022/sandbox-forever/
Datatilsynet: Sluttrapport fra sandkasseprosjektet med NAV. Temaer: rettslig grunnlag, rettferdighet og forklarbarhet (2022). https://www.datatilsynet.no/regelverk-og-verktoy/sandkasse-for-kunstig-intelligens/ferdige-prosjekter-og-rapporter/nav-sluttrapport/
Köchling, A., Wehner, M.C.: Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Bus. Res. 13(3), 795–848 (2020). https://doi.org/10.1007/s40685-020-00134-w
Gjerdsbakk, T.C.G.: Åpen og rettferdig kunstig intelligens. Lov & Data 150(3) (2022)
Gerards, J., Xenidis, R.: Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-discrimination law. European Commission (2021)
Andréasson, U., Stende, T.: Nordic municipalities’ work with artificial intelligence (2019)
Laclau, E., Mouffe, C.: Hegemony and Socialist Strategy: Towards a Radical Democratic Politics. Verso, London (1985)
Acknowledgements
This paper reports from the project “Use of artificial intelligence in the public sector and risk of discrimination”, which was carried out on behalf of the Norwegian Directorate for Children, Youth and Family Affairs (Bufdir) by Rambøll Management Consulting and Vestlandsforsking (Western Norway Research Institute) in the period from November 2021 to December 2022.
We want to thank all contributors including Norwegian and international AI experts participating in the reference group and in meetings to discuss the project. We also want to thank the contributors from the public sector responding to the survey and participating in interviews.
The paper is developed from the project report, available in Norwegian: https://www.vestforsk.no/en/project/artificial-intelligence-public-sector-and-risk-discrimination.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Cite this paper
Corneliussen, H.G., Seddighi, G., Iqbal, A., Andersen, R. (2024). Artificial Intelligence in the Public Sector in Norway. In: Akerkar, R. (ed.) AI, Data, and Digitalization. SAIDD 2023. Communications in Computer and Information Science, vol 1810. Springer, Cham. https://doi.org/10.1007/978-3-031-53770-7_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-53769-1
Online ISBN: 978-3-031-53770-7