1 Introduction

Auditing is a governance mechanism that technology providers and policymakers can use to identify and mitigate risks associated with artificial intelligence (AI) systems [1,2,3,4,5].Footnote 1 Auditing is characterised by a systematic and independent process of obtaining and evaluating evidence regarding an entity's actions or properties and communicating the results of that evaluation to relevant stakeholders [6]. Three ideas underpin the promise of auditing as an AI governance mechanism: that procedural regularity and transparency contribute to good governance [7, 8]; that proactivity in the design of AI systems helps identify risks and prevent harm before it occurs [9, 10]; and, that the operational independence between the auditor and the auditee contributes to the objectivity and professionalism of the evaluation [11, 12].

Previous work on AI auditing has focused on ensuring that specific applications meet predefined, often sector-specific, requirements. For example, researchers have developed procedures for how to audit AI systems used in recruitment [13], online search [14], image classification [15], and medical diagnostics [16, 17]. However, AI systems are becoming increasingly general in their capabilities. In a recent article, Bommasani et al. [18] coined the term foundation models to describe models that can be adapted to a wide range of downstream tasks. While foundation models are not necessarily new from a technical perspective,Footnote 2 they differ from other AI systems insofar as they have proven to be effective across many different tasks and display emergent capabilities when scaled [19]. The rise of foundation models also reflects a shift in how AI systems are designed and deployed, since these models tend to be trained and released by one actor and subsequently adapted for a wide range of different applications by a plurality of other actors.

From an AI auditing perspective, foundation models pose significant challenges. For example, it is difficult to assess the risks that AI systems pose independent of the context in which they are deployed. Moreover, how to allocate responsibility between technology providers and downstream developers when harms occur remains unresolved. Taken together, the capabilities and training processes of foundation models have outpaced the development of tools and procedures to ensure that these are ethical, legal, and technically robust.Footnote 3 This implies that, while application-level audits have an important role in AI governance, they must be complemented with new forms of supervision and control.

This article addresses that gap by focusing on a subset of foundation models, namely large language models (LLMs). LLMs take a source input, called the prompt, and generate the most likely continuation in the form of words, code, or other data [20]. Historically, different model architectures have been used in natural language processing (NLP), including probabilistic methods [21]. However, most recent LLMs—including those we focus on in this article—are based on deep neural networks trained on a large corpus of texts. Examples of such LLMs include GPT-3 [22], GPT-4 [23], PaLM [24], LaMDA [25], Gopher [26] and OPT [27]. Once an LLM has been pre-trained, it can be adapted (with or without fine-tuningFootnote 4) to support various applications, from spell-checking [28] to creative writing [29].
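To make this prompt-to-continuation mechanism concrete, the following sketch samples a continuation from a small, publicly available model via the open-source HuggingFace transformers library; the model name and decoding settings are illustrative assumptions rather than recommendations made in this article.

# A minimal sketch of prompt-based text generation with an open-source language model.
# The model name and decoding settings below are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small, publicly available model

prompt = "The auditor reviewed the model documentation and found"
outputs = generator(
    prompt,
    max_new_tokens=40,       # length of the generated continuation
    do_sample=True,          # sample from the model's next-token distribution
    temperature=0.8,         # controls the randomness of the continuation
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])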

Developing LLM auditing procedures is an important and timely task for two reasons. First, LLMs pose many ethical and social challenges, including the perpetuation of harmful stereotypes, the leakage of personal data protected by privacy regulations, the spread of misinformation, plagiarism, and the misuse of copyrighted material [30,31,32,33]. In recent months, the potential impact of these harms has grown dramatically with the unprecedented public visibility and growing user bases of LLMs. For example, ChatGPT attracted over 100 million users just two months after its launch [34]. The urgency of addressing those challenges makes developing a capacity to audit LLMs’ characteristics along different normative dimensions (such as privacy, bias, safety, etc.) a critical task in and of itself [35]. Second, LLMs can be considered proxies for other foundation models.Footnote 5 Consider CLIP [36], a vision-language model trained to predict which text caption accompanied an image, as an example. CLIP, too, displays emergent capabilities, can be adapted for multiple downstream applications, and faces similar governance challenges as LLMs. The same holds for text-to-image models such as DALL·E 2 [37]. Developing feasible and effective procedures for how to audit LLMs is therefore likely to offer transferable lessons on how to audit other foundation models and even more powerful generative systems in the future.Footnote 6

The main contribution offered in this article is a novel blueprint for how to audit LLMs. Specifically, we propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate LLMs), model audits (of LLMs after pre-training but prior to their release), and application audits (of applications based on LLMs) complement and inform each other. Figure 1 (see Sect. 4.1) provides an overview of this three-layered approach. As we demonstrate throughout this article, many tools and methods already exist to conduct audits at each individual level. However, the key message we seek to stress is that, to provide meaningful assurance for LLMs, audits conducted on the governance, model, and application levels must be combined into a structured and coordinated procedure. Figure 2 (see Sect. 4.5) illustrates how outputs from audits on one level become inputs for which audits on other levels must account. To the best of our knowledge, our blueprint for how to audit LLMs is the first of its kind, and we hope it will inform both technology providers’ and policymakers’ efforts to ensure that LLMs are legal, ethical, and technically robust.

In the process of introducing and discussing our three-layered approach, the article also offers two secondary contributions. First, it makes seven claims about how LLM auditing procedures should be designed to be feasible and effective in practice. Second, it identifies the conceptual, technical, and practical limitations associated with auditing LLMs. Together, these secondary contributions lay a groundwork that other researchers and practitioners can build upon when designing new, more refined, LLM auditing procedures in the future.

Our efforts tie into an extensive research agenda and ongoing policy formation process. AI labs like Cohere, OpenAI, and AI21 have expressed interest in understanding what it means to develop LLMs responsibly [38], and DeepMind, Microsoft, and Anthropic have highlighted the need for new governance mechanisms to address the social and ethical challenges that LLMs pose [30, 39, 40]. Individual parts of our proposal (e.g., those related to model evaluation [24] and red teaming [41, 42])Footnote 7 have thus already started to be implemented across the industry, although not always in a structured manner or with full transparency. Policymakers, too, are interested in ensuring that societies benefit from LLMs while managing the associated risks. Recent examples of proposed AI regulations include the EU AI Act [43] and the US Algorithmic Accountability Act of 2022 [44]. The blueprint for auditing LLMs outlined in this article neither seeks to replace existing best practices for training and testing LLMs nor to foreclose forthcoming AI regulations. Instead, it complements them by demonstrating how governance, model, and application audits—when conducted in a structured and coordinated manner—can help ensure that LLMs are designed and deployed in ethical, legal, and technically robust ways.

A further remark is needed to narrow down this article’s scope. Our three-layered approach concerns the procedure of LLM audits and answers questions about what should be audited, when, and according to which criteria. Of course, when designing a holistic auditing ecosystem, several additional considerations exist, e.g., who should conduct the audit and how to ensure post-audit action [12]. While such considerations are important, they fall outside the scope of this article. How to design an institutional ecosystem to audit LLMs is a non-trivial question that we have neither the space nor the capacity to address here. That said, the policy process required to establish an LLM auditing ecosystem will likely be gradual and involve negotiations between numerous actors, including AI labs, policymakers, and civil rights groups. For this reason, our early blueprint for how to audit LLMs is intentionally limited in scope to not forego but rather to initiate this policy formation process by eliciting stakeholder reactions.

The remainder of this article proceeds as follows: Sect. 2 highlights the ethical and social risks posed by LLMs and establishes the need to audit them. In doing so, it situates our work in relation to recent technological and societal developments. Section 3 reviews previous literature on AI auditing to identify transferable best practices, discusses the properties of LLMs that undermine existing AI auditing procedures, and derives seven claims for how LLM auditing procedures should be designed to be feasible and effective. Section 4 outlines our blueprint for how to audit LLMs, introducing a three-layered approach that combines governance, model, and application audits. The section explains in detail why these three types of audits are needed, what they entail, and the outputs they should produce. Section 5 discusses the limitations of our three-layered approach and demonstrates that any attempt to audit LLMs will face several conceptual, technical, and practical constraints. Finally, Sect. 6 concludes by discussing the implications of our findings for technology providers, policymakers, and independent auditors.

2 The need to audit LLMs

This section summarises previous research on LLMs and their ethical and social challenges. It aims to situate our work in relation to recent technological and societal developments, stress the need for auditing procedures that capture the risks LLMs pose, and address potential objections to our approach.

2.1 The opportunities and risks of LLMs

Although LLMs represent a major advance in AI research, the idea of building text-processing machines is not new. Since the 1950s, NLP researchers and practitioners have been developing software that can analyse, manipulate, and generate natural language [45]. Until the 1980s, most NLP systems used logic-based rules and focused on automating the structural analysis of language needed to enable machine translation and speech recognition [46]. More recently, the advent of deep learning, advances in neural architectures such as transformers, growth in computational power and the availability of internet-scraped training data have revolutionised the field [47] by permitting the creation of LLMs that can approximate human performance on some benchmarks [48, 49]. Further advances in instruction-tuning and reinforcement learning from human feedback have improved model capabilities to predict user intent and respond to natural language requests [50,51,52].

LLMs’ core training task is to produce the most likely continuation of a text sequence [53]. Consequently, LLMs can be used to recognise, summarise, translate, and generate texts, with near human-like performance on some tasks [54]. Exactly when a language model becomes ‘large’ is a matter of debate—referring to either more trainable parameters [55], a larger training corpus [56] or a combination of these. For our purposes, it is sufficient to note that LLMs are highly adaptable to various downstream applications, requiring fewer in-domain labelled examples than traditional deep learning systems [57]. This means that LLMs can more easily be adapted for specific tasks, such as diagnosing medical conditions [58], generating code [59, 60] and translating languages [61]. Previous research has demonstrated that LLMs can perform well on a task with few-shot or zero-shot reasoning [22, 62].Footnote 8 Moreover, a scaling law has been identified whereby the training error of an LLM falls off as a power of training set size, model size or both [63]. Simply scaling the model can thus result in emergent gains on a wide array of tasks [64], though those gains are non-uniform, especially for complex mathematical or logical reasoning domains [26]. Finally, while some pre-trained models are protected by paywalls or siloed within companies, many LLMs are accessible via open-source libraries such as HuggingFace, democratising the gains from deep language modelling and allowing non-experts to use it in their applications [65].
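The scaling law referenced above is typically expressed as a power law. A schematic formulation is given below; the symbols and functional form are shown only for illustration, and the fitted constants vary across studies:

L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}

where L denotes the test loss, N the number of model parameters, D the training set size, and N_c, D_c, \alpha_N and \alpha_D are empirically fitted constants.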

Alongside such opportunities, however, the use of LLMs is coupled with ethical challenges [31, 32]. As recent controversies surrounding ChatGPT [66] have shown, LLMs are prone to give biased or incorrect answers to user queries [67]. More generally, a recent article by Weidinger et al. [30] suggests that the risks associated with LLMs include the following:

(1) Discrimination. LLMs can introduce representational and allocational harms by perpetuating social stereotypes and biases;

(2) Information hazards. LLMs may compromise privacy by leaking private information and inferring sensitive information;

(3) Misinformation hazards. LLMs producing misleading information can lead to less well-informed users and erode trust in shared information;

(4) Malicious use. LLMs can be co-opted by users with bad intent, e.g., to generate personalised scams or large-scale fraud;

(5) Human–computer interaction harms. Users may overestimate the capabilities of LLMs that appear human-like and use them in unsafe ways; and

(6) Automation and environmental harms. Training and operating LLMs require lots of computing power, incurring high environmental costs.

Each of these risk areas constitutes a vast and complex field of research. Providing a comprehensive overview of each field’s nuances is beyond this paper’s scope. Instead, we take Weidinger et al.’s summary of the ethical and social risks associated with LLMs as a starting point for pragmatic problem-solving.

2.2 The governance gap

From a governance perspective, LLMs pose both methodological and normative challenges. As previously mentioned, foundation models—like LLMs—are typically developed and adopted in two stages. Firstly, a model is pre-trained using self-supervised learning on a large, unstructured text corpus scraped from the internet. Pre-training captures the general language representations required for many tasks without explicitly labelled data. Secondly, the weights or behaviours of this pre-trained model can be adapted on a far smaller dataset of labelled, task-specific examples.Footnote 9 That makes it methodologically difficult to assess LLMs independent of the context in which they will be deployed [18].
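As a hedged illustration of this two-stage pattern, the sketch below adapts a small pre-trained model to a labelled classification task using the open-source HuggingFace libraries; the model and dataset names are stand-ins chosen for brevity, not components of any particular LLM pipeline.

# Minimal sketch of the second stage: adapting a pre-trained model with a small
# labelled dataset. Model and dataset names are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")  # stand-in for a task-specific labelled corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=0).select(range(1000)),  # few labelled examples
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()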

Furthermore, although performance is predictable at a general level, performance on specific tasks, or at scale, can be unpredictable [40]. Crucially, even well-functioning LLMs force AI labs and policymakers to face hard questions, such as who should have access to these technologies and for which purposes [68]. Of course, the challenges posed by LLMs are not necessarily distinct from those associated with classical NLP or other ML-based systems. However, LLMs' widespread use and generality make those challenges deserving of urgent attention. For all these reasons, analysing LLMs from ethical perspectives requires innovation in risk assessment tools, benchmarks, and frameworks [69].

Several governance mechanisms designed to ensure that LLMs are legal, ethical, and safe have been proposed or piloted [70]. Some are technically oriented, including the pre-processing of training data, the fine-tuning of LLMs on data with desired properties, and procedures to test the model at scale pre-deployment [42, 69]. Others seek to address the ethical and social risks associated with LLMs through sociotechnical mitigation strategies, e.g., creating more diverse developer teams [71], human-in-the-loop protocols [72] and qualitative evaluation tools based on ethnographic methods [73]. Yet others seek to ensure transparency in AI development processes, e.g., through a structured use of model cards [74, 75], datasheets [76], system cards [77], and the watermarking of system outputs [78].Footnote 10
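To illustrate the kind of transparency artefact these proposals envisage, the sketch below expresses a minimal model card as a structured record; the field names follow the spirit of model cards [74, 75] but are illustrative rather than a mandated schema.

# A hedged sketch of a minimal model card, expressed as a Python dictionary.
# Field names and values are illustrative; real cards are richer and their schema is not fixed here.
model_card = {
    "model_details": {"name": "example-llm", "version": "0.1", "developer": "Example Lab"},
    "intended_use": ["text summarisation", "drafting assistance"],
    "out_of_scope_use": ["medical or legal advice", "automated decision-making about individuals"],
    "training_data": "Web-crawled corpus; see the accompanying datasheet for provenance.",
    "evaluation": {"benchmarks": ["SuperGLUE"], "known_limitations": ["non-English performance degrades"]},
    "ethical_considerations": ["may reproduce social stereotypes present in the training data"],
}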

To summarise, while LLMs have shown impressive performance across a wide range of tasks, they also pose significant ethical and social risks. Therefore, the question of how LLMs should be governed has attracted much attention, with proposals ranging from structured access protocols designed to prevent malicious use [68] to hard regulation prohibiting the deployment of LLMs for specific purposes [79]. However, the effectiveness and feasibility of these governance mechanisms have yet to be substantiated by empirical research. Moreover, given the multiplicity and complexity of the ethical and social risks associated with LLMs, we anticipate that policy responses will need to be multifaceted and incorporate several complementary governance mechanisms. As of now, technology providers and policymakers have only started experimenting with different governance mechanisms, and how LLMs should be governed remains an open question [80].

2.3 Calls for audits

Against the backdrop of the technological and regulatory landscape surveyed in this section, auditing should be understood as one of several governance mechanisms different stakeholders can employ to ensure and demonstrate that LLMs are legal, ethical, and technically robust. It is important to stress that auditing LLMs is not a hypothetical idea but a tangible policy option that has been proposed by researchers, technology providers, and policymakers alike. For instance, when coining the term foundation models, Bommasani et al. [18] suggested that ‘such models should be subject to rigorous testing and auditing procedures’. Moreover, in an open letter concerning the risks associated with LLMs and other foundation models, OpenAI’s CEO Sam Altman stated that ‘it’s important that efforts like ours submit to independent audits before releasing new systems’ [81]. Finally, the European Commission is considering classifying LLMs as ‘high-risk AI systems’ [82].Footnote 11 This would imply that technology providers designing LLMs have to undergo ‘conformity assessments with the involvement of an independent third-party’, i.e., audits by another name [83].

Despite widespread calls for LLM auditing, central questions concerning how LLMs can and should be audited have yet to be systematically explored. This article addresses that gap by outlining a procedure for auditing LLMs. The main argument we advance can be summarised as follows. What auditing means varies between different academic disciplines and industry contexts [84]. However, three strands of auditing research and practice are particularly relevant with respect to ensuring good governance of LLMs. The first stems from IT audits, whereby auditors assess the adequacy of technology providers’ software development processes and quality management procedures [85]. The second strand stems from model testing and verification within the computer sciences, whereby auditors assess the properties of different computational models [86]. The third strand stems from product certification procedures, whereby auditors test consumer goods for legal compliance and technical safety before they go to market [87]. As we argue throughout this paper, it is necessary to combine auditing tools and procedural best practices from each of these three strands to identify and manage the social and ethical risks LLMs pose. Therefore, our blueprint for auditing LLMs combines governance audits of technology providers, model audits of LLMs, and application audits of downstream products and services built on top of LLMs. The details of this ‘three-layered approach’ are outlined in Sect. 4.

2.4 Addressing initial objections

Before proceeding any further, it is useful to consider some reasonable objections to the prospect of auditing LLMs—as well as potential responses to these objections. First, one may argue that there is no need to audit LLMs per se and that auditing procedures should be established at the application level instead. Although audits on the application level are important, the objection presents a false dichotomy: quality and accountability mechanisms can and should be established at different stages of supply chains. Moreover, while some risks can only be addressed at the application level, others are best managed upstream. It is true that many factors, including some beyond the technology provider’s control, determine whether a specific technological artefact causes harm [88]. However, technology providers are still responsible for taking proportional precautions regarding reasonably foreseeable risks during the product life cycle stages that they do control. For this reason, we propose that application audits should be complemented with governance audits of the organisations that develop LLMs. The same logic underpins the EU’s AI liability directive [89]. Our proposal is thereby compatible with the emerging European AI regulations.

Second, one may object that it is not possible to identify and mitigate all LLM-related risks at the technology level. As we explain in Sect. 5, this is partly because different normative values may conflict and require trade-offs [90,91,92]. Using individuals’ data, for example, may permit improved personalisation of language models, but compromise privacy [93]. Moreover, concepts like ‘fairness’ or ‘transparency’ hide deep normative disagreements [94]. Different definitions of fairness (like demographic parity and counterfactual fairness) are mutually exclusive [95,96,97], and prioritising between competing definitions remains a political question. However, while audits cannot ensure that LLMs are ‘ethical’ in any universal sense, they nevertheless contribute to good governance in several ways. For example, audits can help technology providers identify risks and potentially prevent harm, shape the continuous (re-)design of LLMs, and inform public discourse concerning tech policy. Bringing all this together, our blueprint for how to audit LLMs focuses on making implicit choices and tensions visible, giving voice to different stakeholders, and generating resolutions that—even when imperfect—are, at least, more explicit and publicly defensible [98].

Third, one may contend that designing LLM auditing procedures is difficult. We agree and would add that this difficulty has both practical and conceptual components. Different stages in the software development life cycle (including curating training data and the pre-training/fine-tuning of model weights) overlap in messy and iterative ways [99]. For example, open-source LLMs are continuously re-trained and re-uploaded on collaborative platforms (like HuggingFace) post-release. That creates practical problems concerning when and where audits should be mandated. Yet the conceptual challenges run even more deeply. For instance, what constitutes disinformation and hate speech are contested questions [100]. Despite widespread agreement that LLMs should be ‘truthful’ and ‘fair’, such notions are hard to operationalise. Because there exists no universal condition of validity that applies equally to all kinds of utterances [101], it is hard to establish a normative baseline against which LLMs can be audited.

However, these difficulties are not reasons for abstaining from developing LLM auditing procedures. Instead, they are healthy reminders that it cannot be assumed that one single auditing procedure will capture all LLM-related ethical risks or be equally effective in all contexts [102]. The insufficiency and limited nature of auditing as a governance mechanism is not an argument against its complementary usefulness. With those caveats highlighted, we now review previous work on AI auditing. The aim of the next section is thus to explore the merits and limitations of existing AI auditing procedures when applied to LLMs and, ultimately, identify transferable best practices.

3 The merits and limits of existing AI auditing procedures

In this section, we provide an overview of previous work.Footnote 12 In doing so, we introduce auditing as an AI governance mechanism, highlight the properties of LLMs that undermine the feasibility and effectiveness of existing AI auditing procedures, and derive and defend seven claims about how LLM auditing procedures should be designed. Taken together, this section provides the theoretical justification for the LLM auditing blueprint outlined in Sect. 4.

3.1 AI auditing

In the broadest sense, auditing refers to an independent examination of any entity, conducted with a view to expressing an opinion thereon [103]. Auditing can be conceived as a governance mechanism because it can be used to monitor conduct and performance [104] and has a long history of promoting procedural regularity and transparency in areas like financial accounting and worker safety [105]. The idea behind AI auditing is thus simple: just like financial transactions can be audited for correctness, completeness, and legality, so can the design and use of AI systems be audited for technical robustness, legal compliance, or adherence to pre-defined ethics principles.

AI auditing is a relatively recent field of study, sparked in 2014 by Sandvig et al.’s article Auditing Algorithms [1]. However, auditing intersects with almost every aspect of AI governance, from the documentation of design procedures to model testing and verification [106]. AI auditing is thus both a multifaceted practice and a multidisciplinary field of research, harbouring contributions from computer science [107, 108], law [109, 110], media and communication studies [1, 111], and organisation studies [112, 113].

Different researchers have defined AI auditing in different ways. For example, it is possible to distinguish between narrow and broad conceptions of AI auditing. The former is impact-oriented and focuses on probing and assessing the outputs of AI systems for different input data [114]. The latter is process-oriented and focuses on assessing the adequacy of technology providers’ software development processes and quality management systems [115]. This article takes the broad perspective, defining AI auditing as a systematic and independent process of obtaining and evaluating evidence regarding an entity's actions or properties and communicating the results of that evaluation to relevant stakeholders. Note that the entity in question, i.e., the audit’s subject, can be either an AI system, an organisation, a process, or any combination thereof [116].

Different actors can employ AI auditing for different purposes [117]. In some cases, policymakers mandate audits to ensure that AI systems used within their jurisdiction meet specific legal standards. For example, New York City’s AI Audit Law (NYC Local Law 144) requires independent auditing of companies utilising AI systems to inform employment-related decisions [118]. In other cases, technology providers commission AI audits to mitigate technology-related risks, calling on professional services firms like PwC, Deloitte, KPMG, and EY [119,120,121,122]. In yet other cases, other stakeholders conduct AI audits to inform citizens about the conduct of specific companies.Footnote 13

The key takeaway from this brief overview is that while AI auditing is a widespread practice, both the design and purpose of different AI auditing procedures vary. Moreover, procedures to audit LLMs and other foundation models have yet to be developed. Therefore, it is useful to consider the merits and limitations of existing AI auditing procedures when applied to LLMs.

3.2 Seven claims about auditing LLMs

As demonstrated above, a wide range of AI auditing procedures have already been developed.Footnote 14 However, not all auditing procedures are equally effective in handling the risks posed by LLMs. Nor are they equally likely to be implemented, due to factors including technical limitations, institutional access, and administrative costs [3]. In what follows, we discuss some key distinctions that inform the design of auditing procedures and defend seven claims about making such designs feasible and effective for LLMs.

To start with, it is useful to distinguish between compliance audits and risk audits. The former compares an entity’s actions or properties to predefined standards or regulations. The latter asks open-ended questions about how a system works to identify and control risks. When conducting risk audits of LLMs, auditors can draw on well-established procedures, including standards for AI risk management [123, 124] and guidance on how to assess and evaluate AI systems [112, 125,126,127,128,129]. In contrast, compliance audits require a normative baseline against which AI systems can be evaluated. However, LLM research is a quickly developing field in which standards and regulations have yet to emerge. Moreover, the fact that LLMs are adaptable to many downstream applications [40] undermines the feasibility of auditing procedures designed to ensure compliance with sector-specific norms and regulations. This leads us to our first claim:

Claim 1

AI auditing procedures focusing on compliance alone are unlikely to provide adequate assurance for LLMs.

Our blueprint for how to audit LLMs outlined in Sect. 4 accounts for Claim 1 by incorporating elements of both risk audits (at governance and model levels) and compliance audits (at the application level).

Further, it is useful to distinguish between external and internal audits. The former is conducted by independent third-parties and the latter by an internal function reporting directly to its board [130]. External audits help address concerns regarding accuracy in self-reporting [1], so they typically underpin formal certification procedures [131]. However, they are constrained by limited access to internal processes [9]. For internal audits, the inverse is true: while constituting an essential step towards informed model design decisions [132], they run an increased risk of collusion between the auditor and the auditee [133]. Moreover, without third-party accountability, decision-makers may ignore audit recommendations that threaten their business interests [134]. The risks stemming from misaligned incentives are especially stark for technologies with rapidly increasing capabilities and for companies facing strong competitive pressures [135]. Both conditions apply to LLMs, undermining the ability of internal auditing procedures to provide meaningful assurance in this space. This observation, combined with the need to manage the social and ethical risks posed by LLMs surveyed in Sect. 2, leads us to assert that:

Claim 2

External audits are required to ensure that LLMs are ethical, legal, and technically robust, as well as to hold technology providers accountable in case of irregularities or incidents.

As we explain in Sect. 4, each step in our blueprint for how to audit LLMs should be conducted by independent third-party auditors. However, external audits come with their own challenges, including how to access information that is protected by privacy or IP rights [12, 136]. This is especially challenging in the case of LLMs since some are only accessible via an application programming interface (API) and others are not published at all. Determining the auditor’s level of access is thus an integral part of designing LLM auditing procedures.

Koshiyama et al. [10] proposed a typology that distinguishes between different access levels. At lower levels, auditors have no direct access to the model but base their evaluations on publicly available information about the development process. At middle levels, auditors have access to the computational model itself, meaning they can manipulate its parameters and review its task objectives. At higher levels, auditors have access equivalent to that of the system developer, i.e., full access to organisational processes, actual input and training data, and information about how and why the system was initially created. In Sect. 4, we use this typology to indicate the level of access auditors need to conduct audits at the governance, model, and application levels.
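As an illustration of how this typology could be recorded when scoping an audit engagement, the sketch below encodes the three access levels as a simple enumeration; the level names are our shorthand for Koshiyama et al.'s categories, not terminology taken from [10].

# Illustrative encoding of auditor access levels; the names are shorthand, not a standard.
from enum import Enum

class AccessLevel(Enum):
    PUBLIC_INFORMATION = 1    # lower levels: publicly available information only, no model access
    MODEL_ACCESS = 2          # middle levels: access to parameters and task objectives
    DEVELOPER_EQUIVALENT = 3  # higher levels: full access to data, processes, and design rationale

# Example: recording the access level granted for a planned (hypothetical) engagement.
planned_engagement = {"auditee": "example-llm-provider", "access": AccessLevel.MODEL_ACCESS}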

The question about access leads us to a further distinction made in the AI auditing literature, i.e., between adversarial and collaborative audits. Adversarial audits are conducted by independent actors to assess the properties or impact an AI system has—without privileged access to its source code or technical design specifications [1, 114]. Collaborative audits see technology providers and external auditors working together to assess and improve the process that shapes future AI systems’ design and safeguards [115, 116]. While the former primarily aims to expose harms, the latter seeks to provide assurance. Previous research has shown that audits are most effective when technology providers and independent auditors collaborate towards the common goal of identifying and managing risks [11]. This implies that:

Claim 3

To be feasible and effective in practice, procedures to audit LLMs require active collaboration between technology providers and independent auditors.

Accounting for Claim 3, this article focuses on collaborative audits. All steps in our three-layered approach outlined in Sect. 4 require that technology providers grant external auditors the access they need and proactively feed their own know-how into the process. After all, evaluating LLMs requires resources and technical expertise that technology providers are best positioned to provide.

Moving on, it is also useful to distinguish between governance audits and technology audits. The former focus on the organisation designing or deploying AI systems and include assessments of software development and quality management processes, incentive structures, and the allocation of roles and responsibilities [85]. The latter focus on assessing a technical system’s properties, e.g., reviewing the model architecture, checking its consistency with predefined specifications, or repeatedly querying an algorithm to understand its workings and potential impact [114]. Some LLM-related risks can be identified and mitigated at the application level. However, other issues are best addressed upstream, e.g., those concerning the sourcing of training data. This implies that, to be feasible and effective:

Claim 4

Auditing procedures designed to assess and mitigate the risks posed by LLMs must include elements of both governance and technology audits.

Our blueprint for how to audit LLMs satisfies this claim in the following way. The governance audits we propose aim to assess the processes whereby LLMs are designed and disseminated, the model audits focus on assessing the technical properties of pre-trained LLMs, and the application audits focus on assessing the technical properties of applications built on top of LLMs.

However, both governance audits and technology audits have limitations. During governance audits, for example, it is not possible to anticipate upfront all the risks that emerge as AI systems interact with complex environments over time [102, 137]. Further, not all ethical tensions stem from technology design alone, as some are intrinsic to specific tasks or applications [138]. While these limitations of governance audits are well-known, LLMs introduce new challenges for technology audits, which have historically focused on assessing systems designed to fill specific functions in well-defined contexts, e.g., improving image analysis in radiology [139] or detecting corporate fraud [140]. Because LLMs enable many downstream applications, traditional auditing procedures are not equipped to capture the full range of social and ethical risks they pose. While existing best practices in governance auditing appear applicable to organisations designing or deploying LLMs, that is not true for technology audits. In short:

Claim 5

The methodological design of technology audits will require significant modifications to identify and assess LLM-related risks.

As mentioned above, our blueprint for how to audit LLMs incorporates elements of technology audits on both the model and the application levels. To understand why that is necessary to identify and mitigate the ethical risks posed by LLMs, we must first distinguish between different types of technology audits.

Previous work on technology audits distinguishes between functionality, model, and impact audits [141]. Functionality audits focus on the rationale underpinning AI systems by asking questions about intentionality, e.g., what is this system’s purpose [142]? Model audits review the system’s decision-making logic. For symbolic AI systems,Footnote 15 that entails reviewing the source code. For sub-symbolic AI systems, including LLMs, it entails asking how the model was designed, what data it was trained on, and how it performs on different benchmarks. Finally, impact audits investigate the types, severity, and prevalence of effects from an AI system’s outputs on individuals, groups, and the environment [143]. These approaches are not mutually exclusive but rather highly complementary [116]. Still, technology providers that design and disseminate LLMs have limited information about the future deployment of their systems by downstream developers and end-users. This leads us to our sixth claim:

Claim 6

Model audits will play a key role in identifying and communicating LLMs’ limitations, thereby informing system redesign, and mitigating downstream harm.

This claim constitutes a key justification for the three-layered approach to LLM auditing proposed in this article. As highlighted in Sect. 4, governance audits and application audits are both well-established practices in systems engineering and software development. Hence, it is precisely by adding structured and independent audits on the model level that our blueprint for auditing LLMs complements and enhances existing governance structures.

Finally, within technology audits, it is important to distinguish between ex-ante and ex-post audits, which take place before and after a system is deployed, respectively. The former can identify and prevent some harms before they occur while informing downstream users about the model’s appropriate, intended applications. Considerable literature already exists within computer science on techniques such as red teaming [41, 42], model fooling [144], functional testing [145] and template-based stress-testing [146], which all play important roles during technology audits of LLMs. However, ex-ante audits cannot fully capture all the risks associated with systems that continue to ‘learn’ by updating their internal decision-making logic [147].Footnote 16 This limitation applies to all learning systems but is particularly relevant for LLMs that display emergent capabilities [148].Footnote 17 Ex-post audits can be divided into snapshot audits (which occur once or on regular occasions) and continuous audits (which monitor performance over time). Most existing AI auditing procedures are snapshots.Footnote 18 Like ex-ante audits, however, snapshots are unable to provide meaningful assurance regarding LLMs as they display emergent capabilities and, in some cases, can learn as they are fed new data. This leads to our final claim:

Claim 7

LLM auditing procedures must include elements of continuous ex-post monitoring to meet their regulatory objectives.

In our blueprint, continuous ex-post monitoring is one of the activities conducted at the application level. However, as detailed in Sect. 4.5, audits on the different levels are strongly interconnected. For example, continuous monitoring of LLM-based applications presupposes that technology providers have established ex-post monitoring plans—which can only be verified by audits at the governance level. Conversely, technology providers rely on feedback from audits at the application level to continue improving their software development and quality management procedures.
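As a deliberately simplified illustration of what continuous ex-post monitoring might involve at the application level, the sketch below logs each model interaction and flags responses whose monitored score exceeds a threshold; the scoring heuristic, threshold, and log format are assumptions made for illustration only.

# Hedged sketch of continuous ex-post monitoring of an LLM-based application.
# The scoring heuristic and threshold are illustrative placeholders, not audit criteria.
import datetime
import json

def risk_score(text: str) -> float:
    """Trivial keyword heuristic standing in for a real content classifier."""
    flagged_terms = {"password", "social security", "hate"}
    return sum(term in text.lower() for term in flagged_terms) / len(flagged_terms)

def monitor(prompt: str, response: str, log_path: str = "audit_log.jsonl",
            threshold: float = 0.3) -> bool:
    """Log every interaction and return True if the response should be escalated for review."""
    score = risk_score(response)
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "prompt": prompt,
        "response": response,
        "risk_score": score,
        "flagged": score > threshold,
    }
    with open(log_path, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return record["flagged"]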

To summarise, much can be learned from existing AI auditing procedures. However, LLMs display several properties that undermine the feasibility of such procedures. Specifically, LLMs are adaptable to a wide range of downstream applications, display emergent capabilities, and can, in some cases, continue to learn over time. As this section has shown, that means that neither functionality audits (which hinge on the evaluation of the purpose of a specific application) nor impact audits (which hinge on the ability to observe a specific system’s actual impact) alone can provide meaningful assurance against the social and ethical risks LLMs pose. It also means that ex-ante audits must be complemented by continuous post-market monitoring of outputs from LLM-based applications.

In this section, we have built on these and other insights to derive and defend seven claims about how auditing procedures should be designed to account for the governance challenges LLMs pose. These seven claims provided our starting point when designing the three-layered approach for auditing LLMs that will be outlined in Sect. 4. However, we maintain that these claims are more general and could serve as guardrails for other attempts to design auditing procedures for all foundation models.

4 Auditing LLMs: a three-layered approach

This section offers a blueprint for auditing LLMs that satisfies the seven claims in Sect. 3 about how to structure such procedures. While there are many ways to do that, our proposal focuses on a limited set of activities that are (i) jointly sufficient to identify LLM-related risks, (ii) practically feasible to implement, and (iii) have a justifiable cost–benefit ratio. The result is the three-layered approach outlined below.

4.1 A blueprint for LLM auditing

Audits should focus on three levels. First, technology providers developing LLMs should undergo governance audits that assess their organisational procedures, accountability structures and quality management systems. Second, LLMs should undergo model audits, assessing their capabilities and limitations after initial training but before adaptation and deployment in specific applications. Third, downstream applications using LLMs should undergo continuous application audits that assess the ethical alignment and legal compliance of their intended functions and their impact over time. Figure 1 illustrates the logic of our approach.

Fig. 1: Blueprint for how to audit LLMs: A three-layered approach

Some clarifications are needed to flesh out our blueprint. To begin with, governance, model and application audits only provide effective assurance when coordinated. This is because the affordances and limitations of audits conducted at the three levels differ in ways that make them critically complementary. For example, as Sect. 3 showed, LLM audits must include elements of both process- and performance-oriented auditing (Claim 4). In our three-layered approach, the governance audits are process-oriented, whereas the model and application audits are performance-oriented. Moreover, feasible and effective LLM auditing procedures must include aspects of continuous, ex-post assessments (Claim 7). In our blueprint, these elements are incorporated at the application level. But these are just two examples. As we discuss what governance, model and application audits entail in this section, we also highlight how they, when combined, satisfy all seven claims listed in Sect. 3.

While the three types of audits included in our blueprint are individually necessary, their boundaries overlap and can be drawn in multiple ways. For example, the collection and pre-processing of training data ties into software development practices. Hence, reviewing organisational procedures for obtaining and curating training data is legitimate during holistic governance audits. However, the characteristics LLMs display during model audits may also reflect biases in their training data [149, 150].Footnote 19 Reviewing such data is, therefore, often necessary during the model audits too [151, 152]. Nevertheless, the conceptual distinction between governance, model and application audits remains useful when identifying varied risks that LLMs pose.

It is theoretically possible to add further layers to our blueprint. For example, downstream developers could also be made subject to process-oriented governance audits. But such audits would be difficult to implement, given that many decentralised actors build applications on top of LLMs. The combination of governance, model, and application audits, we argue, strikes a balance between covering a sufficiently large part of the development and deployment lifecycle to identify LLM-related risks, on the one hand, and being practically feasible to implement, on the other. Regardless of how many layers are included, however, the success of our blueprint relies on responsible actors at each level who actively want to or are incentivised to ensure good governance.

Finally, to provide meaningful assurance, audits on all three levels should be external (Claim 2) yet collaborative (Claim 3). In practice, this implies that independent third parties not only seek to verify claims made by technology providers but also work together with them to identify and mitigate risks and shape the design of future LLMs. As mentioned in the introduction, the question of who should conduct the audits falls outside the scope of this article. That said, reasonable concerns about how independent collaborative audits really are can be raised regardless of who is conducting the audit. In Sect. 5, we discuss this and other limitations.

With those clarifications in mind, we will now present the details of our three-layered approach. The following three subsections discuss governance, model, and application audits respectively, focusing on why each is needed, what each entails, and what outputs each should produce.

4.2 Governance audits

Technology providers working on LLMs should undergo governance audits that assess their organisational procedures, incentive structures, and management systems. Overwhelming evidence shows that such features influence the design and deployment of technologies [4]. Moreover, research has demonstrated that risk-mitigation strategies work best when adopted transparently, consistently, and with executive-level support [153, 154]. Technology providers are responsible for identifying the risks associated with their LLMs and are uniquely well-positioned to manage some of those risks. Therefore, it is crucial that their organisational procedures and governance structures are adequate.

Governance audits have a long history in areas like IT governance [85, 155, 156] and systems and safety engineering [157,158,159]. Tasks include assessing internal governance structures, product development processes and quality management systems [115] to promote transparency and procedural regularity, ensure that appropriate risk management systems are in place [160], and spark deliberation regarding ethical and social implications throughout the software development lifecycle. Governance audits can also improve accountability, e.g., publicising their results prevents companies from covering up undesirable outcomes and incentivises better behaviour [136]. Thus defined, governance audits incorporate elements of both compliance audits, regarding completeness and transparency of documentation, and risk audits, regarding the adequacy of the risk management system (Claim 1).

Specifically, we argue that governance audits of LLM providers should focus on three tasks:Footnote 20

(1) Reviewing the adequacy of organisational governance structures to ensure that model development processes follow best practices and that quality management systems can capture LLM-specific risks. While technology providers have in-house quality management experts, confirmation bias may prevent them from recognising critical flaws; involving external auditors addresses that issue [161]. Nevertheless, governance audits are most effective when auditors and technology providers collaborate to identify risks [162]. Therefore, it is important to distinguish accountability from blame at this stage of an audit.

(2) Creating an audit trail of the LLM development process to provide chronological documentary evidence of the development of an LLM’s capabilities, including information about its intended purpose, design specifications and choices, as well as how it was trained and tested through the generation of model cards [74] and system cards [77].Footnote 21 This includes the structured use of datasheets [76] to document how the datasets used to train and validate LLMs were sourced, labelled, and curated (see the illustrative sketch after this list). The creation of such audit trails serves several related purposes. Stipulating design specifications upfront facilitates checking system adherence to jurisdictional requirements downstream [157]. Moreover, information concerning intended use cases should inform licensing agreements with downstream developers [163], thereby restricting the potential for harm through malicious use. Finally, requiring providers to document and justify their design choices sparks ethical deliberation by making trade-offs explicit.

(3) Mapping roles and responsibilities within organisations that design LLMs to facilitate the allocation of accountability for system failures. LLMs’ adaptability downstream does not exculpate technology providers from all responsibility. Some risks are ‘reasonably foreseeable’. In the adjacent field of machine learning (ML) image recognition, a study found that commercial gender classification systems were less accurate for darker-skinned females than lighter-skinned males [15]. After the release of these findings, all technology providers speedily improved the accuracy of their models, suggesting that the problem was not intrinsic, but resulted from inadequate risk management. Mapping the roles and responsibilities of different stakeholders improves accountability and increases the likelihood of impact assessments being structured rather than ad-hoc, thus helping identify and mitigate harms proactively.
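The sketch below illustrates what a single entry in such an audit trail could look like, combining datasheet-style data provenance with a documented design decision; the field names and values are invented for illustration and do not follow a mandated schema.

# Hedged sketch of one entry in an LLM development audit trail. All fields and
# values are illustrative examples, not a prescribed documentation standard.
audit_trail_entry = {
    "entry_id": "2023-02-14-003",
    "stage": "data curation",
    "dataset": {
        "name": "web_corpus_v2",
        "sources": ["filtered web crawl snapshot", "public-domain books"],
        "collection_period": "2021-01 to 2022-06",
        "filtering": ["language identification (English)", "deduplication", "profanity filter"],
        "known_gaps": ["under-representation of non-English and low-resource dialects"],
    },
    "decision": "Exclude documents matching personal-data patterns before pre-training.",
    "rationale": "Reduce the risk of memorising and leaking personal information.",
    "responsible_role": "data governance lead",
    "sign_off": "quality management board",
}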

To conduct these three tasks, auditors primarily require what Koshiyama et al. [10] refer to as white-box auditing. This is the highest level of access and suggests that the auditor knows how and why an LLM was developed. In practice, it implies privileged access to facilities, documentation, and personnel, which is standard practice in governance audits in other fields. For example, IT auditors have full access to material and reports related to operational processes and performance metrics [85]. It also implies access to the input data, learning procedures, and task objectives used to train LLMs. White-box auditing requires that nondisclosure and data-sharing agreements are in place, which adds to the logistical burden of governance audits. However, granting such a high level of access is especially important from an AI safety perspective because, in addition to auditing LLMs before market deployment, governance audits should also evaluate organisational safeguards concerning high-risk projects that providers may prefer not to discuss publicly.

The results of governance audits should be provided in formats tailored to different audiences. The primary audience is the management and directors of the LLM provider. Auditors should provide a full report that directly and transparently lists and discusses the vulnerabilities of existing governance structures. Such reports may recommend actions, but taking actions remains the provider’s responsibility. Usually, such audit reports are not made public. However, some evidence obtained during governance audits can be curated for two secondary audiences: law enforcers and developers of downstream applications. In some jurisdictions, hard legislation may demand that technology providers follow specific requirements. For instance, the proposed EU AI Act requires providers to register high-risk AI systems in a centralised database [43] or implement a risk management system [164]. In such cases, reports from independent governance audits can help providers demonstrate adherence to legislation. Reports from governance audits also help developers of downstream applications to understand an LLM’s intended purpose, risks, and limitations.

Before concluding this discussion, it is useful to reflect on how governance audits contribute to mitigating some of the social and ethical risks LLMs pose. As mentioned in Sect. 2, Weidinger et al. [30] listed six broad risk areas: discrimination, information hazards, misinformation hazards, malicious use, human–computer interaction harms, and automation and environmental harms. Governance audits address some of these directly. By assessing the adequacy of the governance structures surrounding LLMs, including licencing agreements [163] and structured access protocols [68], governance audits help reduce the risk of malicious use. Further, some information hazards stem from the possibility of extracting sensitive information from LLMs via adversarial attacks [165]. By reviewing the process whereby training datasets were sourced, labelled, and curated, as well as the strategies and techniques used during the model training process—such as differential privacy [166] or secure federated learning [167]—governance audits can minimise the risk of LLMs leaking sensitive information. However, for most of the risk areas listed by Weidinger et al. [30], governance audits have only an indirect impact insofar as they contribute to transparency about the limitations and intended purposes of LLMs. Hence, risk areas like discrimination, misinformation hazards, and human–computer interaction harms are better addressed by model and application audits.

4.3 Model audits

Before deployment, LLMs should be subject to model audits that assess their capabilities and limitations (Claim 6). Model audits share some features with governance audits. For instance, both happen before an LLM is adapted for specific applications. However, model audits do not focus on organisational procedures but on LLMs’ capabilities and characteristics. Specifically, they should identify an LLM’s limitations to (i) inform the continuous redesign of the system, and (ii) communicate its capabilities and limitations to external stakeholders. These two tasks use similar methodologies, but they target different audiences.

The first task—limitation identification—aims primarily to support organisations that develop LLMs with benchmarks or other data points that inform internal model redesigning and retraining efforts [168]. Model audits’ results should also inform API license agreements, helping prevent applications in unintended use cases [163] and restricting the distribution of dangerous capabilities [68]. The second task—communicating capabilities and limitations—aims to inform the design of specific applications built on top of LLMs by downstream developers. Such communication can take different forms, e.g., interactive model cards [169], specific language model risk cards [75], and information about the initial training dataset [170, 171], to help downstream developers adapt the model appropriately.

In Sect. 3, we argued that the way technology audits are being conducted requires modifications to address the governance challenges associated with LLMs (Claim 5). In what follows, we demonstrate that evaluating an LLM’s characteristics independent of an intended use case is challenging but not impossible.Footnote 22 To do so, auditors can use two distinct approaches. The first involves identifying and assessing intrinsic characteristics. For example, the training dataset can be assessed for completeness and consistency without reference to specific use cases [112]. However, it is often expensive and technically challenging to interrogate large datasets [172]. The second involves employing an indirect approach that tests the model across multiple potential downstream use cases, links the results to different characteristics, and assesses the aggregated results using different weighting techniques. That second approach may prove more fruitful when assessing an LLM’s performance.
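To illustrate the second, indirect approach, the sketch below aggregates per-task evaluation results into a single weighted score for one model characteristic; the task names, scores, and weights are invented for illustration and carry no empirical meaning.

# Hedged sketch of the indirect approach: aggregating per-task results into a
# weighted score for a single model characteristic. All values are illustrative.
def weighted_characteristic_score(task_scores: dict, task_weights: dict) -> float:
    """Weighted average of per-task scores; weights reflect assumed downstream relevance."""
    total_weight = sum(task_weights[task] for task in task_scores)
    return sum(task_scores[task] * task_weights[task] for task in task_scores) / total_weight

robustness_scores = {"adversarial_nli": 0.61, "out_of_domain_summarisation": 0.74, "perturbed_qa": 0.68}
robustness_weights = {"adversarial_nli": 0.5, "out_of_domain_summarisation": 0.3, "perturbed_qa": 0.2}
print(round(weighted_characteristic_score(robustness_scores, robustness_weights), 3))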

Nevertheless, selecting the characteristics to focus on during model audits remains challenging. Given such audits’ purpose, we recommend examining characteristics that are (i) socially and ethically relevant, i.e., can be directly linked to the social and ethical risks posed by LLMs; (ii) predictably transferable, i.e., impact the nature of downstream applications; and (iii) meaningfully operationalisable, i.e., can be assessed with the available tools and methods.

Keeping those criteria in mind, we posit that model audits should focus on (at least) the performance, robustness, information security and truthfulness of LLMs. As other characteristics may meet the three criteria listed above, those four characteristics are just examples highlighting the role of model audits in our three-layered approach. The list of relevant model characteristics can be amended as required when developing specific auditing procedures. With those caveats out of the way, we now proceed to discuss how four example characteristics can be assessed during model audits:

(1) Performance, i.e., how well the LLM functions on various tasks. Standardised benchmarks can help assess an LLM’s performance by comparing it to a human baseline. For example, GLUE [173] aggregates LLM performance across multiple tasks into a single reportable metric. Such benchmarks have been criticised for overestimating performance over a narrow set of capabilities and for quickly becoming saturated, i.e., rapidly converging on the performance of non-expert humans, leaving limited space for valuable comparisons. Therefore, it is crucial to evaluate LLMs’ performance against many tasks or benchmarks, and sophisticated tools and methods have been proposed for that purpose, including SuperGLUE [49], which is more challenging and ‘harder to game’ with narrow LLM capabilities, and BIG-bench [64], which can assess LLMs’ performance on tasks that appear beyond their current capabilities. These benchmarks are particularly relevant for model audits because they were primarily developed to evaluate pre-trained models, without task-specific fine-tuning.

(2) Robustness, i.e., how well the model reacts to unexpected prompts or edge cases. In ML, robustness indicates how well an algorithm performs when faced with new, potentially unexpected (i.e., out-of-domain) input data. LLMs lacking robustness introduce at least two distinct risks [174]. First, the risk of critical system failures if, for example, an LLM performs poorly for individuals unlike those represented in the training data [175]. Second, the risk of adversarial attacks [176, 177]. Therefore, researchers and developers have created tools and methods to assess LLMs’ robustness, including adversarial methods like red teaming [58], evaluation toolkits like the Robustness Gym [178], benchmark datasets like ANLI [179], and open-source platforms for model-and-human-in-the-loop testing like Dynabench [180]. Particularly relevant for our purposes is AdvGLUE [181], which evaluates LLMs’ vulnerabilities to adversarial attacks in different domains using a multi-task benchmark. By quantifying robustness, AdvGLUE facilitates comparisons between LLMs and their various affordances and limitations. However, robustness can be operationalised in different ways, e.g., group robustness, which measures a model’s performance across different sub-populations [182] (see the first sketch following this list). Therefore, model audits should employ multiple tools and methods to assess robustness.

(3) Information security, i.e., how difficult it is to extract training data from the LLM. Several LLM-related risks can be understood as ‘information hazards’ [30], including the risk of compromising privacy by leaking personal data. As demonstrated by [165], adversarial agents can perform training data extraction attacks to recover personal information like names and social security numbers. However, not all LLMs are equally vulnerable to such attacks. The memorisation of training data can be minimised through differentially private training techniques [183], but their application generally reduces accuracy [184] and increases training time [151]. Promisingly, it is possible to assess the extent to which an LLM has unintentionally memorised rare or unique training data sequences using metrics such as exposure [185] (see the second sketch following this list). Testing strategies like exposure can be employed at the model level, although that requires auditors to have access to the LLM and its training corpus. Still, assessing LLMs’ information security during model audits does not address all information hazards, because some risks—such as an application correctly inferring sensitive information about its users—can only be audited at the application level.

(4) Truthfulness, i.e., to what extent the LLM can distinguish between the real world and possible worlds. Some LLM-related risks stem from their capacity to provide false or misleading information, which leaves users less well informed and potentially erodes public trust in shared information [30]. Statistical methods struggle to distinguish between factually correct information and plausible but factually incorrect information. That problem is exacerbated by the fact that many LLM training practices, like imitating human text on the web or optimising for clicks, are unlikely to create truthful AI [186].Footnote 23 However, during model audits, our concern is not developing truthful AI but evaluating truthfulness. Such audits should focus on evaluating overall truthfulness, not the truthfulness of an individual statement. Yet that does not preclude focusing on multiple aspects, e.g., how frequent falsehoods are on average and how bad worst-case falsehoods are. One benchmark that measures truthfulness is TruthfulQA [187], which generates a percentage score using 817 questions spanning 38 application domains, including healthcare and politics. However, even a strong performance on TruthfulQA does not imply that an LLM will be truthful in a specialised domain. Nevertheless, such benchmarks offer helpful tools for model audits.
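The first sketch below illustrates one way the group robustness notion mentioned under (2) could be operationalised during a model audit: computing per-group and worst-group accuracy over an evaluation set annotated with sub-population labels. The group labels and the toy evaluation log are illustrative assumptions.

```python
# Minimal sketch of a group-robustness check: per-group and worst-group accuracy.
# Group labels and the toy evaluation log are illustrative assumptions; a real audit
# would use a labelled test set covering the sub-populations of concern.
from collections import defaultdict

def group_accuracies(examples):
    """examples: iterable of (group_label, is_correct) pairs.
    Returns (overall accuracy, worst-group accuracy, per-group accuracies)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, is_correct in examples:
        total[group] += 1
        correct[group] += int(is_correct)
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, min(per_group.values()), per_group

toy_log = [("dialect_A", True), ("dialect_A", True),
           ("dialect_B", True), ("dialect_B", False)]
print(group_accuracies(toy_log))
# -> (0.75, 0.5, {'dialect_A': 1.0, 'dialect_B': 0.5})
```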
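The second sketch gestures at how a memorisation test in the spirit of the exposure metric [185], mentioned under (3), might be structured. It assumes that auditors can query the model for a perplexity-style score over candidate sequences; the canary format and the toy scoring function are placeholders and not part of the original metric's implementation.

```python
# Minimal sketch of an exposure-style memorisation test (in the spirit of [185]):
# a secret 'canary' is inserted into the training data, and after training we check
# how highly the model ranks it among random candidates of the same format.
# The scoring function below is a meaningless toy stand-in for the audited model.
import math

def exposure(canary, candidates, log_perplexity):
    """Exposure = log2(|candidate space|) - log2(rank of the canary), where
    candidates are ranked by the model's log-perplexity (lower = more likely)."""
    canary_score = log_perplexity(canary)
    scores = [log_perplexity(c) for c in candidates]
    rank = 1 + sum(1 for s in scores if s < canary_score)
    return math.log2(len(candidates)) - math.log2(rank)

# Hypothetical usage with a toy scoring function (a real audit would query the LLM).
toy_log_perplexity = lambda seq: float(abs(hash(seq)) % 1000)
candidates = [f"my pin is {i:04d}" for i in range(10000)]
print(exposure("my pin is 4831", candidates, toy_log_perplexity))
# High exposure values would indicate that the canary was memorised during training.
```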

These four characteristics pertain to pre-trained LLMs. However, model audits should also review training datasets. It is well known that gaps or biases in training data produce models that perform poorly on other datasets [188]. Training LLMs with biased or incomplete data can cause representational and allocational harms [189]. Therefore, a recent European Parliament report [152] discussed mandating third-party audits of AI training datasets. Technology providers should prepare for such suggestions potentially becoming legal requirements.

Despite these technical and legal considerations, training datasets are often collected with little curation, supervision, or foresight [190]. While curating ‘unbiased’ datasets may be impossible, disclosing how a dataset was assembled can suggest its potential biases [191]. Model auditors can use existing tools and methods that interrogate biases in LLMs’ pre-trained word embeddings, such as the metrics DisCo [192], SEAT [193] or CAT [194]. So-called data statements [195] can provide developers and users with the context required to understand specific models’ potential biases. Data representativeness criteria [196] can help determine how representativeFootnote 24 a training dataset is, and manual dataset audits can be supplemented with automatic analysis [197]. The Text Characterisation Toolkit [198] permits automatic analysis of how dataset properties impact model behaviour. While the availability of such tools is encouraging, it is important to remain realistic about what dataset audits can achieve. Model audits do not aim to ensure that LLMs are ethical in any global sense. Instead, they contribute to better precision in claims about an LLM’s capabilities and inform the design of downstream applications.
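To give a flavour of how the embedding-based bias metrics mentioned above operate, the sketch below implements a simple WEAT/SEAT-style association statistic between two target word sets and two attribute word sets. The toy random embeddings and word lists are illustrative assumptions; an actual dataset or model audit would use the audited model's own embeddings and validated stimulus sets, and would add appropriate significance testing.

```python
# Minimal sketch of a WEAT/SEAT-style embedding-association test. The toy embeddings
# and word lists are illustrative assumptions standing in for the audited model's
# pre-trained embeddings and validated stimulus sets.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(vec, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to attribute set B."""
    return np.mean([cosine(vec, a) for a in attrs_a]) - np.mean([cosine(vec, b) for b in attrs_b])

def test_statistic(targets_x, targets_y, attrs_a, attrs_b):
    """Differential association of two target word sets with two attribute word sets."""
    return (sum(association(x, attrs_a, attrs_b) for x in targets_x)
            - sum(association(y, attrs_a, attrs_b) for y in targets_y))

# Toy random embeddings stand in for the model's own word embeddings.
rng = np.random.default_rng(0)
embed = {w: rng.normal(size=50) for w in ["engineer", "scientist", "nurse", "teacher",
                                          "he", "man", "she", "woman"]}

stat = test_statistic(
    targets_x=[embed["engineer"], embed["scientist"]],
    targets_y=[embed["nurse"], embed["teacher"]],
    attrs_a=[embed["he"], embed["man"]],
    attrs_b=[embed["she"], embed["woman"]],
)
print(f"association statistic: {stat:.3f}")  # values near zero suggest weak association
```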

Model audits require auditors to have privileged access to LLMs and their training datasets. In the typology provided by Koshiyama et al. [10], this corresponds to medium-level access, whereby auditors have access to an LLM equivalent to that of its developer, meaning they can manipulate model parameters and review learning procedures and task objectives. Such access is required to assess LLMs’ capabilities accurately during model audits. However, in contrast to white-box audits, the access model auditors enjoy is limited to the technical system and does not extend to technology providers’ organisational processes.

Some of the characteristics tested for during model audits correspond directly to the social and ethical risks LLMs pose. For example, model audits entail evaluating LLMs according to characteristics like information security and truthfulness, which correspond to information hazards and misinformation hazards, respectively, in Weidinger et al.’s taxonomy [30]. Yet it should be noted that our proposed model audits only focus on a few characteristics of LLMs. That is because the criterion of meaningful operationalisability sets a high bar: not all risks associated with LLMs can be addressed at the model level. Consider discrimination as an example. Model audits can expose the root causes of some discriminatory practices, such as biases in training datasets that reflect historic injustices. However, what constitutes unjust discrimination is context-dependent and varies between jurisdictions. That makes it difficult to say anything meaningful about risks like unjust discrimination at the model level [199]. While important, that observation does not argue against model audits but for complementary approaches like application audits, as discussed next.

4.4 Application audits

Products and services built using LLMs should undergo application audits that assess the legality of their intended functions and how they will impact users and societies. Unlike governance and model audits, application audits focus on actors employing LLMs in downstream applications. Such audits are well-suited to ensure compliance with national and regional legislation, sector-specific standards, and organisational ethics principles.

Application audits have two components: functionality audits, which evaluate applications using LLMs based on their intended and operational goals, and impact audits, which evaluate applications based on their impacts on different users, groups, and the natural environment. As discussed in Sect. 3.2, both functionality and impact audits are well-established practices [200]. Next, we consider how they can be combined into procedures for auditing applications based on LLMs.

During functionality audits, auditors should check whether the intended purpose of a specific application is (1) legal and ethical in and of itself and (2) aligned with the intended use of the LLM in question. The first check is for legal and ethical compliance, i.e., the adherence to the laws, regulations, guidelines, and specifications relevant to a specific application [201], as well as to voluntary ethics principles [202] or codes of conduct [203]. The purpose of these compliance checks is straightforward: if an application is unlawful or unethical, the performance of its LLM component is irrelevant, and the application should not be permitted on the market.

The second check within functionality audits aims to address the risks stemming from developers overstating or misrepresenting a specific application’s capabilities [204]. To do so, functionality audits build on—and account for outputs from—audits on other levels. During governance audits, technology providers are obliged to define the intended and disallowed use cases of their LLMs. During model audits, the limitations of LLMs are documented to inform their adaptation downstream. Using such information, functionality audits should ensure that downstream applications are aligned with a given LLM’s intended use cases in ways that take account of the model’s limitations. Functionality audits thus combine the elements of compliance and risk auditing needed to provide assurance for LLMs (Claim 1).

During impact audits, auditors disregard an application’s intended purpose and technological design to focus only on how its outputs impact different user groups and the environment. The idea behind impact audits is simple: every system can be understood in terms of its inputs and outputs [142]. However, despite that simplicity, implementing impact audits is notoriously hard. AI systems and their environments co-evolve in non-linear ways [137]. Therefore, the link between an LLM-based application’s intended purpose and its actual impact may be neither intuitive nor consistent over time. Moreover, it is difficult to track impacts stemming from indirect causal chains [205, 206]. Consequently, establishing which direct and indirect impacts are considered legally and socially relevant remains a context-dependent question which must be resolved on a case-by-case basis. The application must be redesigned or terminated if the impact is considered unacceptable.

Importantly, impact audits should include both pre-deployment (ex-ante) assessments and post-deployment (ex-post) monitoring (Claim 7).Footnote 25 The former leverages either empirical evidence or plausible scenarios, depending on how well-defined the application is and the predictability of the environments in which it will operate. For example, applications can be tested in sandbox environments [207] that mimic real-world environments and allow developers and policymakers to understand the potential impact before an application goes to market. When used for ML-based systems, sandboxes have proven to be safe harbours in which to detect and mitigate biases [208]. However, real-world environments often differ from training and testing environments in unforeseen ways [209]. Hence, pre-deployment assessments of LLM-based applications must also use analytical strategies to anticipate the application’s impact, e.g., ethical impact assessments [110, 210, 211] and ethical foresight analysis [153].

Pre-deployment impact assessments and post-deployment monitoring are both individually necessary. As policymakers are well aware, capturing the full range of potential harms from LLM-based applications requires auditing procedures to include elements of continuous oversight (again, see Claim 7). For example, the EU AI Act requires technology providers to document and analyse high-risk AI systems’ performance throughout their life cycles [43]. Methodologically, post-deployment monitoring can be done in different ways, e.g., by periodically reviewing the output from an application and comparing it to relevant standards. Such procedures can also be automated, e.g., by using oversight programs [212] that continuously monitor and evaluate system outputs and alert or intervene if they transgress predefined tolerance spans. Such monitoring can be done by both private companies and government agencies [213]. Overall, application audits seek to ensure that ex-ante testing and impact assessments have been conducted following existing best practices; that post-market plans have been established to enable continuous monitoring of system outputs; and that procedures are in place to mitigate or report different types of failure modes.
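The following sketch illustrates, at a very high level, the kind of automated oversight program described above: outputs from a deployed application are scored and flagged whenever they transgress a predefined tolerance span. The scoring function, threshold, and keyword heuristic are illustrative assumptions; in practice, the score could come from a toxicity classifier, a policy model, or a domain-specific rule set.

```python
# Minimal sketch of automated post-deployment monitoring: score each output and flag
# those exceeding a predefined tolerance span. The scoring function and threshold are
# illustrative assumptions; real deployments would also log context and route alerts.
import logging
from typing import Callable, Iterable, List

logging.basicConfig(level=logging.INFO)

def monitor_outputs(outputs: Iterable[str],
                    score_fn: Callable[[str], float],
                    tolerance: float = 0.8) -> List[str]:
    """Return outputs whose risk score exceeds the tolerance, logging an alert for each."""
    flagged = []
    for text in outputs:
        score = score_fn(text)
        if score > tolerance:
            logging.warning("Output exceeded tolerance (score=%.2f): %r", score, text[:80])
            flagged.append(text)
    return flagged

# Hypothetical usage with a toy keyword heuristic standing in for a proper classifier.
toy_score = lambda text: 1.0 if "forbidden" in text.lower() else 0.0
monitor_outputs(["hello world", "FORBIDDEN content"], toy_score)
```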

By focusing on individual use cases, application audits are well-suited to alerting stakeholders to risks that require much contextual information to understand and address. This includes risks related to discrimination and human–computer interaction harms in Weidinger et al.’s taxonomy [30]. Application audits help identify and manage such risks in several ways. For example, quantitative assessments linking prompts with outputs can give a sense of what kinds of language an LLM is propagating and how appropriate that communication style and content is in different settings [214, 215]. Moreover, qualitative assessments (e.g., those based on interviews and ethnographic methods) can provide insights into users’ lived experiences of interacting with an LLM [73].

However, despite those methodological affordances, it remains difficult to define some forms of harm in any global sense [216]. For example, several studies have documented situations in which LLMs propagate toxic language [150, 217], but the interpretation of toxicity and the materialisation of its harms vary across cultural, social, or political groups [218,219,220]. Sometimes, ‘detoxifying’ an LLM may be incompatible with other goals and potentially suppress texts written about or by marginalised groups [221]. Moreover, certain expressions might be acceptable in one setting but not in another. In such circumstances, the most promising way forward is to audit not LLMs themselves but downstream applications—thereby ensuring that each application’s outputs adhere to contextually appropriate conversational conventions [101].

Another example concerns harmfulness, i.e., the extent to which an LLM-based application inflicts representational, allocational or experiential harms.Footnote 26 An LLM that lacks robustness or performs poorly for some social groups may permit unjust discrimination [30] or violate capability fairness [222] when informing real-world allocational decisions like hiring. Multiple benchmarks exist to assess model stereotyping of social groups, including CrowS-Pairs [223], StereoSet [194] or Winogender [224]. To assess risks of experiential harms, quantitative assessments of LLM outputs give a sense of the language a model is propagating. For example, the authors of [150] developed the RealToxicityPrompts benchmark to assess the toxicity of generated completions.Footnote 27 However, the tools mentioned above are only examples. The main point here is that representational, allocational and experiential harms associated with LLMs are best assessed at the application level through functionality and impact audits as described in this section.
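As an illustration of the kind of quantitative prompt-completion assessment discussed above, the sketch below computes two summary statistics in the spirit of RealToxicityPrompts-style evaluations [150]: the expected maximum toxicity over several sampled completions per prompt, and the fraction of prompts that yield at least one completion above a toxicity threshold. The generation and toxicity-scoring functions are assumed interfaces, here replaced by toy stand-ins.

```python
# Minimal sketch of a prompt-completion toxicity profile. The 'generate' and 'toxicity'
# callables are assumed interfaces to the audited application and a toxicity classifier;
# the toy stand-ins below only make the sketch executable and carry no real meaning.
import random
import statistics

def toxicity_profile(prompts, generate, toxicity, n_samples=25, threshold=0.5):
    """Expected maximum toxicity and share of prompts with at least one toxic completion."""
    max_scores = []
    for prompt in prompts:
        scores = [toxicity(generate(prompt)) for _ in range(n_samples)]
        max_scores.append(max(scores))
    return {
        "expected_max_toxicity": statistics.mean(max_scores),
        "toxic_prompt_fraction": sum(s >= threshold for s in max_scores) / len(max_scores),
    }

toy_generate = lambda prompt: prompt + " ..."   # stand-in for the application under audit
toy_toxicity = lambda text: random.random()     # stand-in for a toxicity classifier
print(toxicity_profile(["So I said", "You are such a"], toy_generate, toy_toxicity))
```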

Application audits can be conducted with lower levels of access. For example, to make quantitative assessments that determine the relationship between inputs and outputs, it is sufficient that auditors have what Koshiyama et al. [10] refer to as black-box model access or, in some cases, input data access. Similarly, to audit LLM-based applications for legal compliance and ethical alignment, auditors do not require direct access to the underlying model but can rely on publicly available information—including the claims technology providers and downstream developers make about their systems and the user instructions attached to them.

We contend that governance audits and model audits should be obligatory for all technology providers designing and disseminating LLMs. However, we recommend that application audits be employed more selectively. Further, although application audits may form the basis for certification [225], auditing does not equal certification. Certification requires predefined standards against which a product or service can be audited and institutional arrangements to ensure the certification process’s integrity [131]. Even when not related to certification, application audits’ results should be publicly available (at least in summary form). Registries publishing such results incentivise companies to correct their behaviour, inform enforcement actions, and help cure informational asymmetries in technology regulation [12].

4.5 Connecting the dots

In order to make a real difference to the ways in which LLMs are designed and used, governance, model, and application audits must be connected into a structured process. In practice, this means that outputs from audits on one level become inputs for audits on other levels. Model audits, for instance, produce reports summarising LLMs’ properties and limitations, which should inform application audits that verify whether a model’s known limitations have been considered when designing downstream applications. Similarly, ex-post application audits produce output logs documenting the impact that different applications have in applied settings. Such logs should inform LLMs’ continuous redesign and revisions of their accompanying model cards. Finally, governance audits must check the extent to which technology providers’ software development processes and quality management systems include mechanisms to incorporate feedback from application audits. Figure 2 illustrates how governance, model, and application audits are interconnected in our blueprint.

Fig. 2 Outputs from audits on one level become inputs for audits on other levels
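To make the information flow in Fig. 2 slightly more tangible, the sketch below models the audit outputs as simple data structures and shows two of the cross-level checks described above: an application audit verifying that documented model limitations have been mitigated, and a governance audit verifying that impacts observed post-deployment feed back into the redesign process. The field names and checks are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of how outputs from one audit level might become inputs for another.
# The report fields and checks are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class ModelAuditReport:                 # produced by a model audit
    model_id: str
    known_limitations: List[str] = field(default_factory=list)

@dataclass
class ApplicationAuditLog:              # produced by post-deployment application monitoring
    application_id: str
    observed_impacts: List[str] = field(default_factory=list)

def limitations_addressed(report: ModelAuditReport, mitigations: Dict[str, str]) -> bool:
    """Application-audit check: every documented model limitation has a mitigation."""
    return all(lim in mitigations for lim in report.known_limitations)

def feedback_loop_exists(logs: List[ApplicationAuditLog], redesign_backlog: Set[str]) -> bool:
    """Governance-audit check: every observed impact is tracked in the redesign backlog."""
    return all(impact in redesign_backlog for log in logs for impact in log.observed_impacts)

# Hypothetical usage
report = ModelAuditReport("llm-v1", ["degrades on non-English prompts"])
print(limitations_addressed(report, {"degrades on non-English prompts": "restrict to English UI"}))
```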

Each step in our three-layered approach should involve independent third-party auditors (Claim 2). However, two caveats are required here. First, it need not be the same organisation conducting audits on all three levels, as each requires different competencies. Governance audits require an understanding of corporate governance [226] and soft skills like stakeholder communication. Model audits are highly technical and require knowledge about evaluating ML models, operationalising different normative dimensions, and visualising model characteristics. Application auditors typically need domain-specific expertise. These competencies may not all be found within a single organisation.

Second, as institutional arrangements vary between jurisdictions and sectors, the best option may be to leverage the capabilities of institutions operating within a specific geography or industry to perform various elements of governance, model, and application audits. For example, medical devices are already subject to various testing and certification procedures before being launched. Hence, application audits for new medical devices incorporating LLMs could be integrated with such procedures. In part, this is already happening. The US Food and Drug Administration (FDA) has proposed a regulatory framework for modifications to ML-based software as a medical device [227]. The point is that different independent auditors can perform the three different types of audits outlined here and that different institutional arrangements may be preferable in different jurisdictions or sectors.

5 Limitations and avenues for further research

This section highlights three limitations of our work that apply to any attempt to audit LLMs: one conceptual, one institutional and one practical. First, model audits pose conceptual problems related to construct validity. Second, an institutional ecosystem to support independent third-party audits has yet to emerge. Third, not all LLM-related social and ethical risks can be practically addressed on the technology level. We consider these limitations in turn, discuss potential solutions, and provide directions for future research.

5.1 Lack of methods and metrics to operationalise normative concepts

One bottleneck to developing effective auditing procedures is the difficulty of operationalising normative concepts like robustness and truthfulness [228]. A recent case study found that the lack of standardised evaluation metrics is a crucial challenge for organisations implementing AI auditing procedures [229, 230]. The problem is rooted in construct validity, i.e., the extent to which a given metric accurately measures what it is supposed to measure [231]. In our blueprint, construct validity problems primarily arise from attempts to operationalise characteristics like performance, robustness, information security and truthfulness during model audits.

Consider truthfulness as an example. LLMs do not require a model of the real world. Instead, they compress vast numbers of conditional probabilities by picking up on language regularities [232, 233]. Therefore, they have no reason to favour any reality but can select from various possible worlds, provided each is internally coherent [234].Footnote 28 However, different epistemological positions disagree about the extent to which this way of sensemaking is unique to LLMs or, indeed, a problem at all. Simplifying to the extreme, realists believe in objectivity and the singularity of truth, at least insofar as the natural world is concerned [235]. In contrast, relativists believe that truth and falsity are products of context-dependent conventions and assessment frameworks [236]. Numerous compromise positions can be found on the spectrum between those poles. However, tackling pressing social issues cannot await the resolution of long-standing philosophical disagreements. Indeed, courts settle disagreements daily based on pragmatist operationalisations of concepts like truth and falsehood in keeping with the pragmatic maxim that theories should be judged by their success when applied practically to real-world situations [237].

Following that reasoning, we argue that refining pragmatist operationalisations of concepts like truthfulness and robustness does more to promote fairness, accountability, and transparency in the use of LLMs than either dogmatic or sceptical alternatives [238]. However, developing metrics that capture the essence of thick normative concepts is difficult and entails many well-known pitfalls. Reductionist representations of normative concepts generally bear little resemblance to real-life considerations, which tend to be highly contextual [239]. Moreover, different operationalisations of the same normative concept (like ‘fairness’) cannot always be satisfied simultaneously [240]. Finally, the quantification of normative concepts can itself have subversive or undesired consequences [241, 242]. As Goodhart’s Law reminds us, a measure ceases to be a good metric once it becomes a target.

The operationalisation of characteristics like performance, robustness, information security and truthfulness discussed in Sect. 4 is subject to the above limitations. Resolving all construct validity problems may be impossible, but some ways of operationalising normative concepts are better than others for evaluating an LLM’s characteristics. Consequently, an important avenue for further research is developing new methods to operationalise normative concepts in ways that are verifiable and maintain high construct validity.

5.2 Lack of an institutional ecosystem

A further limitation is that our blueprint does not decisively identify who should conduct the audits it recommends. This is a limitation, since any auditing procedure will only be as good as the institution delivering it [243]. However, we have left the question open for two reasons. First, different institutional ecosystems intended to support audits and conformity assessments of AI systems are currently emerging in different jurisdictions and sectors [244]. Second, our blueprint is flexible enough to be adopted by any external auditor. Hence, the feasibility and effectiveness of our approach do not hinge on the question of institutional design.

That said, the question of who audits whom is important, and much can be learned from auditing in other domains. Five institutional arrangements for structuring independent audits are particularly relevant to our purposes. Audits of LLMs can be conducted by:

(1) Private service providers, chosen by and paid for by the technology provider (equivalent to the role accounting firms play during financial audits or business ethics audits [245]).

(2) A government agency, centrally administered and paid for by government, industry, or a combination of both (equivalent to the FDA’s role in approving food and drug substances [246]).Footnote 29

(3) An industry body, operationally independent yet funded through fees from its member companies (equivalent to the British Safety Council’s role in audits of workers’ health and safety [247]).

(4) Non-profit organisations, operationally independent and funded through public grants and voluntary donations (equivalent to the Rainforest Alliance’s role in auditing forestry practices [248]).

(5) An international organisation, administered and funded by its member countries (equivalent to the International Atomic Energy Agency’s role in auditing nuclear medicine practices [249]).

Each of these arrangements has its own set of affordances and constraints. Private service providers, for example, are under constant pressure to innovate, which can be beneficial given the fast-moving nature of LLM research. However, private providers’ reliance on good relationships with technology providers to remain in business increases the risk of collusion [250]. Therefore, some researchers have called for more government involvement, including an ‘FDA for algorithms’ [251]. Establishing a government agency to review and approve high-risk AI systems could ensure the uniformity and independence of pre-market audits but might stifle innovation and cause longer lead times. Moreover, while the FDA enjoys a solid international reputation [252], not all jurisdictions would consider the judgement of an agency with a national or regional mandate legitimate.

The lack of an institutional ecosystem to implement and enforce the LLM auditing blueprint outlined in this article is a limitation. Without clear institutional arrangements, claims that an AI system has been audited are difficult to verify and may exacerbate harms [133]. Further research could usefully investigate the feasibility and effectiveness of different institutional arrangements for conducting and enforcing the three types of audits proposed.

5.3 Not all risks from LLMs can be addressed on the technology level

Our blueprint for auditing LLMs has been designed to contribute to good governance. However, it cannot eliminate the risks associated with LLMs for three reasons. First, most risks cannot be reduced to zero [125]. Hence, the question is not whether residual risks exist but how severe and socially acceptable they are [253]. Second, some risks stem from deliberate misuse, creating an offensive-defensive asymmetry wherein responsible actors constantly need to guard against all possible vulnerabilities while malicious agents can cause harm by exploiting a single vulnerability [254]. Third, as we will expand on below, not all risks associated with LLMs can be addressed on the technology level.

Weidinger et al. [30] list over 20 risks associated with LLMs divided into six broad risk areas. In Sect. 4, we highlighted how our three-layered approach helps identify and mitigate some of these risks. To recap, governance audits can help protect against risks associated with malicious use; model audits can help identify and manage information and misinformation hazards; and application audits can help protect against discrimination as well as experiential harms. Of course, these are just examples. Audits at each level contribute, directly or indirectly, to addressing many different risks. However, not all the risks listed by Weidinger et al. are captured by our blueprint. Consider automation harm as an example. Increasing the capabilities of LLMs to complete tasks that would otherwise require human intelligence threatens to undermine creative economies [255]. While some highly potent LLMs may remove the basis for some professions that employ many people today—such as translators or copywriters—that is not a failure on the part of the technology. The alternative of building less capable LLMs is counterproductive since abstaining from technology usage generates significant social and economic opportunity costs [256].

The problem is not necessarily change per se but its speed and how the fruits of automation are distributed [257, 258]. Hence, problems related to changing economic environments may be better addressed through social and political reform rather than audits of specific technologies. It is important to remain realistic about auditing’s capabilities and not fall into the trap of overpromising when introducing new governance mechanisms [259]. However, the fact that no auditing procedures can address all risks associated with LLMs does not diminish their merits. Instead, it points towards another important avenue for further research: how can and should social and political reform complement technically oriented mechanisms in holistic efforts to govern LLMs?

6 Conclusion

Some of the features that make LLMs attractive also create significant governance challenges. For instance, the potential to adapt LLMs to a wide range of downstream applications undermines system verification procedures that presuppose well-defined demand specifications and predictable operating environments. Consequently, our analysis in Sect. 3 concluded that existing AI auditing procedures are not well-equipped to assess whether the checks and balances put in place by technology providers and downstream developers are sufficient to ensure good governance of LLMs.

In this article, we have attempted to bridge that gap by outlining a blueprint for how to audit LLMs. In Sect. 4, we introduced a three-layered approach, whereby governance, model and application audits inform and complement each other. During governance audits, technology providers’ accountability structures and quality management systems are evaluated for robustness, completeness, and adequacy. During model audits, LLMs’ capabilities and limitations are assessed along several dimensions, including performance, robustness, information security, and truthfulness. Finally, during application audits, products and services built on top of LLMs are first assessed for legal compliance and subsequently evaluated based on their impact on users, groups, and the natural environment.

Technology providers and policymakers have already started experimenting with some of the auditing activities we propose. Consequently, auditors can leverage a wide range of existing tools and methods, such as impact assessments, benchmarking, model evaluation, and red teaming, to conduct governance, model, and application audits. That said, the feasibility and effectiveness of our three-layered approach hinge on two factors. First, only when conducted in a combined and coordinated fashion can governance, model and application audits enable different stakeholders to manage LLM-related risks. Hence, audits on the three levels must be connected in a structured process. Governance audits should ensure that providers have mechanisms to take the output logs generated during application audits into account when redesigning LLMs. Similarly, application audits should ensure that downstream developers take the limitations identified during model audits into account when building on top of a specific LLM. Second, audits at each level must be conducted by an independent third-party to ensure that LLMs are ethical, legal, and technically robust. The case for independent audits rests not only on concerns about the misaligned incentives that technology providers may face but also on concerns about the rapidly increasing capabilities of LLMs [260].

However, even when implemented under ideal circumstances, audits will not solve all tensions or protect against all risks of harm associated with LLMs. It is therefore important to remain realistic about what auditing can achieve, and the main limitations of our approach discussed in Sect. 5 are worth reiterating. To begin with, the feasibility of model audits hinges on the construct validity of the metrics used to assess characteristics like robustness and truthfulness. This is a limitation because such normative concepts are notoriously difficult to operationalise. Further, our blueprint for how to audit LLMs does not specify who should conduct the audits it posits. No auditing procedure is stronger than the institutions backing it. Hence, the fact that an ecosystem of actors capable of implementing our blueprint has yet to emerge constrains its effectiveness. Finally, not all risks associated with LLMs arise from processes that can be addressed through auditing. Some tensions are inherently political and require continuous management through public deliberation and structural reform.

Academics and industry researchers can contribute to overcoming these limitations by focusing on two avenues for further research. The first is to develop new methods and metrics to operationalise normative concepts in ways that are verifiable and maintain a high degree of construct validity. The second is to disentangle further the sources of different types of risks associated with LLMs. Such research would advance our understanding of how political reform can complement technically oriented mechanisms in holistic efforts to govern LLMs.

Policymakers can facilitate the emergence of an institutional ecosystem capable of carrying out and enforcing governance, model, and application audits of LLMs. For example, policymakers can encourage and strengthen private sector auditing initiatives by creating standardised evaluation metrics [261], harmonising AI regulation [262], facilitating knowledge sharing [263] or rewarding achievements through monetary incentives [256]. Policymakers should also update existing and proposed AI regulations in line with our three-layered approach to address LLM-related risks. For example, while the EU AI Act’s conformity assessments and post-market monitoring plans mirror application audits, the proposed regulation does not contain mechanisms akin to governance and model audits [83]. Without amendments, such regulations are unlikely to generate adequate safeguards against the risks associated with LLMs.

Our findings most directly concern technology providers as they are primarily responsible for ensuring that LLMs are legal, ethical, and technically robust. Such providers have both moral and material reasons to subject themselves to independent audits, including the need to manage financial and legal risks [264] and build an attractive brand [265]. So, what ought technology providers to do? To start with, they should subject themselves to governance audits and their LLMs to model audits. That would create a demand for independent auditing and accreditation bodies and help spark methodological innovation in governance and model audits. In the mid-term, technology providers should also demand that products and services built on top of their LLMs undergo application audits. That could be done through structured access procedures, whereby permission to use an LLM is conditional on such terms. In the long term, like-minded technology providers should establish, and fund, an independent industry body that conducts or commissions governance, model, and application audits.

Taking a long-term perspective, our three-layered approach holds lessons for how to audit more capable and general future AI systems. This article has focused on LLMs because, through their widespread applications, they already have broad societal impacts today. However, elements of the governance challenges—including generativity, emergence, lack of grounding, and lack of access—have some general applicability to other ML-based systems [266, 267]. Hence, we anticipate that our blueprint can inform the design of procedures for auditing other generative, ML-based technologies.

That said, the long-term feasibility and effectiveness of our blueprint for how to audit LLMs may also be undermined by future developments. For example, governance audits make sense when only a limited number of actors have the ability and resources to train and disseminate LLMs. The democratisation of AI capabilities—either through the reduction of entry barriers or a turn to business models based on open-source software—would challenge this status quo [268]. Similarly, if language models become more fragmented or personalised [93], there will be many user-specific branches or instantiations of a single LLM, which would make model audits more complex to standardise. As a result, while we maintain that our three-layered approach is useful, we acknowledge that it will need to be continuously revised in response to the changing technological and regulatory landscape.

It is worth concluding with some words of caution. Our blueprint is not intended to replace existing governance mechanisms but to complement and interlink them by strengthening procedural transparency and regularity. Rather than being adopted wholesale by technology providers and policymakers, we hope that our three-layered approach can be adopted, adjusted, and expanded to meet the governance needs of different stakeholders and contexts.