Use case cards: a use case reporting framework inspired by the European AI Act

Despite recent efforts by the Artificial Intelligence (AI) community to move towards standardised procedures for documenting models, methods, systems or datasets, there is currently no methodology focused on use cases that is aligned with the risk-based approach of the European AI Act (AI Act). In this paper, we propose a new framework for the documentation of use cases, which we call "use case cards", based on the use case modelling included in the Unified Modelling Language (UML) standard. Unlike other documentation methodologies, we focus on the intended purpose and operational use of an AI system. The framework consists of two main parts. Firstly, a UML-based template, tailored to allow implicitly assessing the risk level of the AI system and defining relevant requirements. Secondly, a supporting UML diagram designed to provide information about the system-user interactions and relationships. The proposed framework is the result of a co-design process involving a relevant team of EU policy experts and scientists. We have validated our proposal with 11 experts with different backgrounds and a reasonable knowledge of the AI Act as a prerequisite. We provide the 5 "use case cards" used in the co-design and validation process. "Use case cards" allow framing and contextualising use cases in an effective way, and we hope this methodology can be a useful tool for policy makers and providers for documenting use cases, assessing the risk level, adapting the different requirements and building a catalogue of existing usages of AI.


Introduction
Nowadays, Artificial Intelligence (AI) is experiencing a groundbreaking moment from many perspectives, including technological, societal and legal ones. On the one hand, increasingly powerful and technologically mature AI systems are being used by the wide public on a daily basis, including recommender systems, decision-support systems, content generation systems, person identification and object recognition systems, and conversational systems. On the other hand, policy makers are starting to put in place legal grounds aiming at regulating the trustworthy use of AI.
With this exponential trend in the daily use of AI, there is a need to put in place robust mechanisms to foster a better understanding of AI systems by all impacted stakeholders, experts and non-experts alike, in order to help ensure their trustworthy, safe and fair use. Indeed, several studies have acknowledged that the issue of how to communicate the functioning and potential limits of increasingly complex AI systems remains an open challenge (Laato, Tiainen, Najmul Islam, & Mäntymäki, 2022).
In particular, transparency in the form of well-structured documentation practices is considered a key step towards trustworthy AI (European Commission, 2019). Some methodologies for AI documentation have emerged and been rapidly adopted in recent years. Nevertheless, their target audience is typically AI technical practitioners (e.g. AI developers, designers, data scientists), leaving aside other important personas such as policy makers or citizens (Hupont, Micheli, Delipetrev, Gómez, & Soler Garrido, 2022). Moreover, the focus is mainly put on technical characteristics (e.g. performance, representativity) of the data used for training (Gebru et al., 2021) and/or on general-purpose AI models (Mitchell et al., 2019). When it comes to documenting more specific use cases of AI systems, i.e. real-world deployments of an AI system in a concrete operational environment and for a particular purpose, documentation is generally limited to a brief textual description without a standardised format (Louradour & Madzou, 2021).
Nowadays, voluntary AI documentation practices are in the process of becoming legal requirements in some countries. The recent European Commission proposal for the Regulation of Artificial Intelligence, the AI Act (European Commission, 2021), aims at regulating software systems that are developed with AI techniques such as machine learning. Interestingly, the legal text does not mandate concrete technical solutions to be adopted; instead, it focuses on the intended purpose of an AI system, which determines its risk profile and, consequently, a set of legal requirements that must be met. Thus, the AI Act's approach further reinforces the need to properly cover the documentation of AI use cases, which are directly related to the intended purpose of an AI system.
The technique of use case modelling has been used for decades in classic software development (Cockburn, 2001). The so-modelled use cases provide insights into how different actors interact with a software system, the user interface design and the main system components. It allows developers to identify the system's boundaries and required functionalities, ensuring that all stakeholders are satisfied and have a shared understanding of the system's expected behaviour (Fantechi, Gnesi, Lami, & Maccari, 2003). The use case modelling technique therefore serves as a common means of communication between stakeholders, including developers, designers, testers, business analysts, clients and end users, allowing for effective collaboration and reducing misunderstandings with respect to functional requirements.
Building upon some preliminary work focusing on the affective computing domain (Hupont & Gomez, 2022), this study revisits classic software use case modelling methodologies, more specifically the widely-used Unified Modelling Language (UML) specification (Object Management Group, 2017), to propose a standardised template-based approach for AI use case documentation: the use case card. To ensure that use case cards cover all the information needs required for the assessment of use cases through the lens of the European AI Act, the methodology has been developed following a co-design process involving European Commission AI policy experts, AI scientific officers and an external UML and User Experience (UX) expert. Several examples of use case cards are then validated in a user study to check for adequacy, completeness and usability. The use case card template and all implemented examples are publicly available at the GitLab repository https://gitlab.com/humaint-ecpublic/use-case-cards.
The remainder of the paper is organised as follows. Section 2 reviews the central role of use cases within the AI Act, identifies the needs in terms of information elements for their documentation, and reflects on how current AI documentation methodologies fail to cover these needs. Section 3 presents the use case card documentation methodology and details how to fill it in. Section 4 elaborates on the co-design process and the validation of use case cards with key stakeholders. Finally, Section 5 concludes the work.

The central role of use cases in the AI policy context
An AI model is a mathematical algorithm designed to perform a computational task. It is generally trained on large datasets using machine learning techniques, from which it learns patterns and relationships to make predictions or generate outputs when presented with new input. Popular examples of AI models include object detectors, language/image generation models and content search algorithms. AI models are typically created in a controlled environment, such as a research lab. At this stage an AI model is in most cases generic, meaning that the very same model can be used for many different purposes. For instance, an object detector can be embedded in a car's software system to recognise vehicles, road signs and pedestrians (Gupta, Anpalagan, Guan, & Khwaja, 2021), or be used for automatic people counting during a demonstration for surveillance purposes (Sánchez, Hupont, Tabik, & Herrera, 2020).
Bringing an AI model to a real-world application is not immediate, as it implies the effort of integrating it into a functional system, including the necessary infrastructure, user interfaces, data pipelines and other components required for the application to operate effectively in a production environment (Hupont, Tolan, Gunes, & Gómez, 2022). Further, it is important to consider in the process the use cases, or variety of scenarios, where the resulting system can be deployed. Use cases illustrate how users can utilise the AI system to accomplish their goals and therefore provide a key user-centric perspective on its functionality.
The European AI Act supports precisely this human-centric approach, putting the concept of intended purpose at the centre of the regulation (European Commission, 2021; Panigutti et al., 2023). This paper discusses the AI Act as proposed by the Commission in April 2021 (European Commission, 2021). We also mention some modifications made by the Council when adopting its common position ("general approach") in December 2022 (Council of the European Union, 2022). The proposal is currently being debated by the EU co-legislators, the European Parliament and the Council, and therefore the content of the final legislation may differ from what is described herein. The AI Act defines the intended purpose of an AI system as: "[...] the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation". According to the proposed regulation, the system's intended purpose determines its risk profile, which can be, from highest to lowest: (1) unacceptable risk, covering harmful uses of AI or uses that contradict European values; (2) high risk, covering uses identified through a list of high-risk application areas that may create an adverse impact on safety and fundamental rights; (3) transparency risk, covering uses that pose risks of manipulation and are subject to a set of transparency rules (e.g. systems that interact with humans such as conversational agents, are used to detect emotions, or generate or manipulate content such as deep fakes); and (4) minimal risk, covering all other AI systems. Figure 1 illustrates this risk level approach. The AI Act establishes a clear set of harmonised rules that link use cases to risk levels, which in turn imply different legal requirements. AI systems classified as high-risk according to these rules are those subject to conformity
obligations. The rules to categorise an AI system's risk level depend on a series of key information elements that are essential to document its intended purpose. We have compiled them in the list presented in Table 1. As can be seen, the system shall be put into context by providing information on: the operational, geographical, behavioural and functional contexts of use that are foreseen; who will be the users and impacted stakeholders; and which are the system's inputs and outputs. In addition, it is as important to clearly specify the intended use of the system as its foreseeable potential misuses. Finally, there are three elements that are particularly important when it comes to identifying an AI system's risk level. The first one is type of product; the AI Act's Annex II lists a number of Union product regulations (e.g. machinery, toys, medical devices regulations) and, if the system (either as a component of a product or a product itself) is subject to any of them, it is considered high-risk. The second element is safety component; if an AI system is a safety component of a product or system, then it is high-risk. The third one is application area; the AI Act's Annex III provides a concrete list of application areas under which an AI system is deemed high-risk (e.g. remote biometric identification systems, AI systems used to prioritise the dispatch of emergency services, or those used as polygraphs by law enforcement).
Having all these information elements adequately covered in a single use case documentation methodology would be a valuable tool for both policy makers and AI systems' providers to better navigate the AI Act, properly assess AI systems' risk levels and tailor the different requirements. However, current AI documentation approaches fail to provide full coverage, as we will see in the next section.
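The risk-tiering rules summarised above can be sketched as a small decision procedure. The following Python fragment is a simplified, hypothetical illustration only: the function name and the stub annex lists are our own assumptions and are far shorter than the actual legal enumerations in the AI Act.

```python
# Hypothetical sketch of the AI Act's risk-tiering logic described above.
# The annex lists below are illustrative stubs, not the legal enumerations.

ANNEX_II_PRODUCT_TYPES = {"machinery", "toys", "medical device"}
ANNEX_III_HIGH_RISK_AREAS = {
    "remote biometric identification",
    "dispatch of emergency services",
    "polygraphs for law enforcement",
}
TRANSPARENCY_RISK_AREAS = {
    "conversational agent",
    "emotion recognition",
    "deep fake generation",
}

def risk_level(product_type, is_safety_component, application_areas,
               prohibited=False):
    """Return the risk tier implied by a documented intended purpose."""
    if prohibited:                                   # uses contradicting EU values
        return "unacceptable"
    if product_type in ANNEX_II_PRODUCT_TYPES:       # Annex II product regulations
        return "high"
    if is_safety_component:                          # safety component flag
        return "high"
    if any(a in ANNEX_III_HIGH_RISK_AREAS for a in application_areas):
        return "high"                                # Annex III application areas
    if any(a in TRANSPARENCY_RISK_AREAS for a in application_areas):
        return "transparency"
    return "minimal"
```

For instance, under this sketch an application whose areas include remote biometric identification is classified as high-risk, while the same application without that area would fall under minimal risk.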

Existing approaches for AI documentation
In recent years, key academic, government and industry players have proposed methodologies aiming to define documentation approaches that increase transparency and trust in AI. Table 2 summarises the most popular ones and analyses the extent to which they cover the use case-related information needs identified in the previous section. Note that the table exclusively considers documentation methodologies focusing on AI models, systems or services. For instance, it does not include works tackling only dataset documentation, such as Datasheets for Datasets (Gebru et al., 2021), The Dataset Nutrition Label (Chmielinski et al., 2022) or Data Cards (Pushkarna, Zaldivar, & Kjartansson, 2022).

Table 1: Key information elements related to use cases under the AI Act.
Firstly, the table shows the importance the AI community attaches to documentation, as big tech companies (Google, IBM, Microsoft, Meta) and high-stakes institutions such as the Organisation for Economic Co-operation and Development (OECD) are behind the most adopted methodologies. For instance, Google's Model Cards (Mitchell et al., 2019) can now be automatically generated from the widely used TensorFlow framework, which is strongly fostering their adoption by AI practitioners.
Nevertheless, as anticipated in the Introduction, the majority of methodologies have a strong technical focus. They have generally been conceived as tools for AI developers and providers to demonstrate AI models' performance and accuracy. The most recently proposed methodologies, including the Framework for the Classification of AI Systems by the OECD (OECD, 2022), AI Usage Cards (Wahle, Ruas, Mohammad, Meuschke, & Gipp, 2023) and System Cards (Meta, 2023), are broadening towards other audiences such as policy makers and end users. Even though some methodologies do explicitly ask about the intended use of the AI system (e.g. "What is the intended use of the service output?" in Arnold et al. (2019), the "Intended Use" section in Mitchell et al. (2019) and "Task(s) of the system" in OECD (2022)), they do so only in very broad terms, and the provided examples lack sufficient detail to address complex legal concerns. Moreover, none of these methodologies are based on a formal standard or specification. In summary, to date there is no unified and comprehensive AI documentation approach focusing exclusively on use cases and covering information elements such as type of product, safety component and application area. Our proposed use case cards aim at bridging this gap.
The use case card documentation approach

Revisiting UML for AI use case documentation
Among use case modelling methodologies, the one proposed in the Unified Modelling Language (UML) specification is the most popular in software engineering (Koç, Erdogan, Barjakly, & Peker, 2021). It has the advantage of being an official standard with more than 25 years of history, backed by a strong community (Object Management Group, 2017). Further, it is easy to use, offering a highly intuitive and visual way of modelling use cases by means of diagrams and a set of simple graphic elements (Figure 2).
UML use cases capture what a system is supposed to do without entering into technical details (e.g. concrete implementation details, algorithm architectures). They rather focus on the context of use, the main actors using the system, and actor-actor and actor-system interactions. A use case is triggered by an actor (a person or group of persons), who is called the primary actor. The use case describes the sets of interactions that can occur among the various actors while the primary actor is in pursuit of a goal. A use case is completed successfully when the goal associated with it is reached. Use case descriptions also include possible extensions to this sequence, e.g. alternative sequences that may also satisfy the goal, as well as sequences that may lead to failure in completing the goal.
Once the use case has been modelled in diagrammatic form (Figure 2, right), the next step is to describe it in a brief and structured written manner. The UML standard does not mandate this step, but it is commonly done in the form of a table. The most widely-used layout is the one proposed in Cockburn (2001). The information elements related to use cases under the AI Act (cf. Table 1) were found to closely match those of software use case documentation under UML, e.g.: context of use and scope ←→ intended purpose; primary actor ←→ user; stakeholders and interests ←→ stakeholders; open issues ←→ foreseeable misuses; and main course ←→ inputs/outputs. For this reason, we decided to ground our proposed use case cards in UML. The process of transforming classic UML use case diagrams into use case cards was carried out in a co-design workshop with stakeholders that is detailed further in Section 4.1. In the next sections, we focus on presenting the final use case card design and explaining how to fill it in.

Use case cards
The designed use case card template is shown in Figure 3. It is composed of two main parts: a canvas for visual modelling (right) and an associated table for written descriptions (left). Both are very close to the UML standard, with only a few extra information elements inspired by European AI policies, as follows. The canvas contains the following visual elements:
• AI system boundary: It delimits the functionalities of the AI system. It is represented by a rectangle that encloses all the use cases.
• Actors: They represent users or external systems that interact with the AI system. They are depicted as stick figures placed outside the AI system's boundary. Actors can be individuals, groups, other software systems or even hardware devices. Each actor has a unique name to identify its role.
• Use cases: They represent specific functionalities or behaviours of the AI system. They describe the interactions between actors and the AI system to achieve a specific goal. Use cases are represented as ovals within the system boundary. Differently from traditional UML, we distinguish between AI use cases (with blue background) and non-AI use cases (with white background). Each use case has a name that reflects the action or functionality it represents.
• Relationships: They show the associations and dependencies between actors and use cases. Associations are depicted by solid lines connecting an actor to a use case, indicating that the actor interacts with or participates in that particular use case. Associations can also exist between use cases to represent dependencies between different functionalities. "Include" and "extend" relationships are depicted with dashed arrows. "Include" shows that one use case includes the functionality of another use case. "Extend" indicates that a use case can extend another one with additional behaviour. Generalisation is depicted by a solid arrow pointing from the specialised actor to the generalised actor (i.e. the specialised actor inherits the characteristics and interactions of the generalised actor).
It is particularly important to understand the distinction between an AI system and a use case. The system perspective considers the AI system as a whole and helps in understanding its components (both AI and non-AI) and their relationships. Use cases, on the other hand, represent the specific interactions that actors have with the system and the functionalities the system provides them. By distinguishing systems from use cases, UML provides a modular and flexible modelling approach, allowing one to focus on different aspects of the system at different levels of abstraction and granularity. Also note that, for a system to be considered an AI system in a use case card, it has to contain at least one AI use case.
The table layout has some changes with respect to the one proposed in Cockburn (2001). First, the intended purpose of the system encompasses three fields. Two of them already appeared in the original table, namely context of use and scope. Both are to be filled with a short text description; we recommend a maximum of 100 words. The remaining field is Sustainable Development Goals (SDGs), and its values should be picked from the official United Nations' list presented in Appendix A's Figure A1. Note that the purpose of this field is to state the SDGs to which the use case contributes (i.e. on which it has a positive impact).
In addition, three new fields have been added, as they are essential to determine the use case's risk level (and thus that of the AI system containing it) according to the AI Act. Their description can be found in Table 1, and below we comment on their possible values:
• Type of product: It must be one value from the list in Appendix A's Table A1. The top rows in the list correspond to types of product that might be subject to other EU regulations and, as such, be high-risk according to the AI Act's Annex II.
• Is it a safety component?: This "yes/no" field determines whether the use case fulfils a safety function for a product or system whose failure might harm persons or material. It is therefore a flag field that indicates a high-risk level.
• Application area(s): One or more areas of application of the use case, as listed in Appendix A's Table A2. Some of these areas are high-risk under the AI Act and therefore need to be clearly identified.
The remaining fields correspond one-to-one to those in the original table. The only change appears in the description of the open issues field, where we have emphasised the need to include foreseeable misuses of the system.
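To summarise the written part of the template, its fields can be thought of as a structured record. The following Python dataclass is a minimal sketch of our own: the field names mirror the template described above, but the class itself and its 100-word check are illustrative assumptions, not part of the official use case card specification.

```python
# Illustrative model of the written (table) part of a use case card.
from dataclasses import dataclass, field

@dataclass
class UseCaseCardTable:
    """Sketch of the use case card table; field names follow the template."""
    # Intended purpose (three fields)
    context_of_use: str                               # short free text
    scope: str                                        # short free text
    sdgs: list = field(default_factory=list)          # SDGs positively impacted
    # Risk-relevant fields added with respect to the classic UML table
    type_of_product: str = "software"                 # one value from Table A1
    is_safety_component: bool = False                 # high-risk flag
    application_areas: list = field(default_factory=list)  # values from Table A2
    # Classic Cockburn-style fields
    primary_actor: str = ""
    stakeholders_and_interests: list = field(default_factory=list)
    main_course: list = field(default_factory=list)   # ordered steps
    extensions: list = field(default_factory=list)    # alternative/failure paths
    open_issues: list = field(default_factory=list)   # incl. foreseeable misuses

    def word_count_ok(self) -> bool:
        """Check the recommended 100-word limit on the free-text fields."""
        return all(len(t.split()) <= 100
                   for t in (self.context_of_use, self.scope))
```

A provider filling in a card would instantiate this record once per documented use case; keeping the risk-relevant fields explicit makes the subsequent risk-level assessment mechanical.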

Filling in use case cards
This section illustrates the process of filling in a use case card through the example of a scene narrator application installed on a smartphone. This AI-based application aims at helping people with visual impairments to obtain information about their environment, namely about surrounding objects, text (e.g. panels, signs, menus) and people (both familiar and unknown persons). The user wears goggles connected to the smartphone, allowing them to take a picture of the scene by pressing a button on the right ear temple. Then, the application narrates the scene description with a synthetic voice and in natural language, such as: "You are in an office; there are four persons in front of you, the one on your left is John; there is a table with four chairs and the exit door is at the end of the room on the left hand side." This application is inspired by real products on the market, including Microsoft's Seeing AI app (Microsoft, 2023), Cloudsight's TapTapSee (Cloudsight, 2023) and Google's Lookout (Google, 2023). It is a complex application in computational terms, as it combines AI algorithms of different natures: object and person detection, optical character recognition (OCR), face recognition, and text and synthetic voice generation. There are also data use and data privacy issues to be carefully addressed, e.g. regarding the management of captured facial images or the possibility of using extracted scene information for purposes other than assisting the visually impaired, such as targeted marketing. We propose the use case card presented in Figure 4.
First, we focus on the visual modelling side. The key questions to pose are: what is the AI system, which use cases within it do we want to document, and who are the main actors involved? The AI system can be easily identified as the scene narrator application. This system may have multiple use cases, ranging from classic software functionalities (e.g. installing the app, user registration, user login, managing settings) to the more complex AI-based functionalities related to the scene narration part (i.e. object/person detection, OCR, face recognition, etc.). For the sake of clarity, we decide to include within the system's boundary only the use cases directly linked to the scene narration functionality. Then, we reflect on a simplified interaction pipeline for the person with visual impairment to get a scene description, which is: opening the app on the smartphone → taking a picture of the scene → the system computes the scene description → the person listens to the audio narration.
Within this pipeline, we realise that the whole AI core is contained under the computation of the scene description phase. We therefore decide to introduce a describe scene use case as the principal one, which includes all AI-based functionalities (those with a blue background colour). By modelling describe scene as the main use case with "include" dependencies on the other AI functionalities, we simplify the documentation process to a single UML table. We additionally decide to show some non-AI use cases in the diagram to provide a complete and self-contained overview of the pipeline, namely take scene photo and register familiar person. The register familiar person use case is particularly interesting, as it shows that certain persons (e.g. family, friends, caregivers) might be registered in the platform by the user, and thus be subject to identification through face recognition. The last elements to define in the diagram are the actors involved. The main actor is clearly the person with visual impairments, as s/he is the one triggering the scene narration process. The modelling process has nevertheless allowed us to identify other relevant actors, namely the (unknown) surrounding persons that might appear in the scene and the familiar persons that might eventually be present. Note that surrounding persons are a generalisation of familiar persons, and that the identify people use case "extends" the detect people one.
After the visual modelling exercise, we proceed to complete the table associated with the main use case describe scene. The context of use field provides an overview of pre-conditions and conditions of normal usage (e.g. the app is already installed on the smartphone, the primary actor wears goggles, s/he has already registered some familiar faces in the system), while scope delimits the concrete functionality of the use case. This use case has a strong positive social impact, allowing for better inclusion and social life for the visually impaired, and therefore contributes to two SDGs: good health and well-being and reduced inequalities. The use case is part of a software product and may not be considered a safety component, as it is meant to assist but not to fulfil a safety function. Interestingly, it has two application areas. The first one is social assistance, and the second one is remote biometric identification systems, as it includes face recognition to identify familiar people. This is particularly important, as the former is not considered a high-risk application area under the AI Act, while the latter is. Therefore, if the system's provider prefers to bring the application to the market as a low-risk one, the face recognition functionality should be removed. The following fields are relatively straightforward to document, as they merely describe the main actors and course of actions within the use case. In our example, the main course field contains as steps the calls to the different AI algorithms. Extensions tackle problems that may arise, e.g.
if the taken picture has poor quality, which is simply addressed with the failure protection mechanism of asking the person to retake the shot. Last, but extremely important, the open issues field allows the provider to clearly state that the application is conceived for ethical use. It stresses that the system is not intended for use by people who are not visually impaired, clarifies that data privacy is adequately treated (the provider does not keep a copy of taken scene images), and states that under no circumstances will the provider do any marketing with the extracted information.
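To condense the walk-through, the completed table for the describe scene use case can be captured as a plain data record. The Python dictionary below is our own abridged rendering of the card in Figure 4; the exact wording of each field is shortened, and the final flag simply restates the Annex III reasoning above.

```python
# Condensed, illustrative rendering of the "describe scene" use case card.
describe_scene_card = {
    "use_case": "Describe scene",
    "context_of_use": ("App installed on the smartphone; primary actor wears "
                       "connected goggles; familiar faces may be registered."),
    "scope": "Narrate surrounding objects, text and people from a scene photo.",
    "sdgs": ["Good health and well-being", "Reduced inequalities"],
    "type_of_product": "software",
    "is_safety_component": False,
    "application_areas": ["social assistance",
                          "remote biometric identification"],
    "primary_actor": "Person with visual impairments",
    "main_course": ["Detect objects", "Recognise text (OCR)", "Detect people",
                    "Identify people (face recognition)",
                    "Generate description", "Synthesise voice narration"],
    "extensions": ["Poor picture quality -> ask the user to retake the shot"],
    "open_issues": ["Not intended for users who are not visually impaired",
                    "No copies of captured scene images are kept",
                    "No marketing use of extracted information"],
}

# Remote biometric identification is a high-risk area under Annex III, so the
# card flags the use case as high-risk unless face recognition is removed.
is_high_risk = ("remote biometric identification"
                in describe_scene_card["application_areas"])
```

Recording the card in a machine-readable form like this would also make it easy to feed such cards into a searchable repository of documented AI use cases.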
Fig. 5: Two-phase protocol followed for the design and validation of use case cards with key stakeholders.
Through this example, we have shown that use case cards are a powerful, standardised methodology to document AI use cases. Beyond documentation itself, the process of filling in a use case card fosters reflections of the utmost importance about an AI system, such as its risk level, foreseeable misuses and the failure protection mechanisms to put in place. Appendix B provides four other use case cards involving different types of AI systems with varying levels of complexity, providing the reader with a variety of illustrative examples.

Co-designing and validating use case cards with key stakeholders
The use case card methodology was developed following a two-phase protocol with key stakeholders, as depicted in Figure 5. First, we carried out a co-design workshop involving two European Commission (EC) policy experts, three EC scientific officers and an external expert on User eXperience (UX) and UML.
The resulting version of use case cards was then evaluated in a second phase through a questionnaire to 11 scientists contributing to different EU digital policy initiatives, and with varying expertise levels on UML and the AI Act.
In the following, we provide details on the implementation of both phases and present the main results.

Co-design process
Co-design, co-creation or participatory design refers to an approach where stakeholders come together, as equals, to conceptually develop solutions that respond to certain matters of concern (Zamenopoulos & Alexiou, 2018). As such, the co-design method aims to develop a solution "with" the target individuals/groups rather than "for" them. There has been an increasing trend in recent years towards greater inclusion of stakeholders in designing and carrying out research through the adoption of co-design methods (Nesbitt, Beleigoli, Du, Tirimacco, & Clark, 2022). Given the multidisciplinary nature of our work, involving both policy and technical matters, we decided to take advantage of this methodology in this first design phase. The co-design phase involved six participants. Two of them are EC policy experts with a legal background, a high level of involvement in the AI Act and proficient expertise on it. Three are EC scientific officers with a proficient background in AI and medium-to-high knowledge of UML. It is important to note that, although these three experts have a primarily technical profile, they are involved on a daily basis in digital policy issues, including scientific advice related to the AI Act. Finally, we invited an external expert with high expertise in AI and a proficient background in UX and UML.
We organised a two-day physical workshop to conduct the co-design of use case cards. Scientific officers alternated in asking questions and taking copious notes throughout the workshop, with all participants' permission.
The three scientific officers and the external UML/UX expert prepared a three-hour tutorial on UML to kick off the first day. The tutorial started with a presentation of the UML standard (Object Management Group, 2017), with particular emphasis on the use case modelling part. Then, three exemplar AI use cases modelled in classic UML format (cf. Figure 2) were presented for illustrative purposes: an affective music recommender, a driver monitoring system and a smart-shooting camera system.
After the tutorial, the six participants engaged in a guided discussion covering the following key points:
• Potential of UML as a standard methodology for AI use case documentation.
• Relevance, clarity and adequacy of the UML diagram and related table with regard to the AI Act (e.g. missing fields, ease of understanding/implementation).
• Relevance of the method for the assessment of an AI system's risk level according to the AI Act.
Results can be summarised as follows. First, participants unanimously agreed on the high overlap between UML's information elements and those required to document use cases under the AI Act (cf. Table 1). The standard was therefore considered fit for purpose. Participants, however, identified missing fields that are essential in the context of the AI Act and should be added to the UML table, namely: (i) the type of product to which the AI system belongs; (ii) its application area(s); and (iii) whether the use case is a safety component of a product.
Participants raised important additional points. They mentioned different uses of the methodology, including the creation of a public repository of AI use cases, useful in the context of the registration process mentioned in Article 51 and Annex VIII, part II, of the AI Act. This repository would be a valuable and usable tool to help companies, and more particularly SMEs with more limited legal resources, identify the risk level of their AI systems: "use case cards would give companies a hook to go through the AI Act". Authorities would also benefit from such a repository, allowing them to "have a better overview of the landscape of existing AI systems" and "engage with companies to articulate border cases". Although not an explicit information requirement under the AI Act, given the Act's human-centered nature it was deemed interesting to include the link of each use case to the Sustainable Development Goals (SDGs), which "would help keep track of AI-for-good applications".
During the second workshop day, participants proceeded to the design of use case cards according to the findings identified the previous day. They first added the four missing fields (i.e. "type of product", "application area(s)", "is it a safety component?" and "SDGs") to the UML table, and agreed on its final layout (e.g. colours, order/position of the different fields). They then jointly developed the list of product types (cf. Appendix A's Table A1) and application areas (cf. Appendix A's Table A2), carefully considering AI Act's Annex II and III, respectively. Finally, participants concluded with a practical exercise in which they converted the three UML use cases from the tutorial to the new use case card format. They additionally implemented two new use case cards: a scene narrator one (presented in the previous section) and a student proctoring one. The new use case cards can be found in Appendix B. This final exercise allowed us to confirm the ease of use and implementation of the methodology, whose adaptation is "straightforward with respect to traditional UML", as confirmed by the UML/UX expert.
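The card structure agreed during the workshop lends itself to a machine-readable representation. The sketch below models the table fields as a Python dataclass; the field names and the example values are illustrative assumptions on our part, not the official template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCaseCard:
    """Sketch of a machine-readable use case card.

    Combines classic UML use case description fields (cf. Cockburn, 2001)
    with the four AI Act-specific fields added during the co-design
    workshop. All names are illustrative assumptions.
    """
    # Classic UML use case description fields
    name: str
    intended_purpose: str
    actors: List[str] = field(default_factory=list)
    # AI Act-specific fields added during the workshop
    type_of_product: str = ""
    application_areas: List[str] = field(default_factory=list)
    is_safety_component: bool = False
    sdgs: List[str] = field(default_factory=list)

# Hypothetical instance for the driver attention monitoring example
card = UseCaseCard(
    name="Monitor attention",
    intended_purpose="Detect driver drowsiness/distraction and raise alerts",
    actors=["Driver"],
    type_of_product="Motor vehicle",       # illustrative value
    application_areas=["Road safety"],     # illustrative value
    is_safety_component=True,              # positions the system as high-risk
    sdgs=["SDG 3: Good health and well-being"],
)
```

A flat record like this is what would make the automated registry analyses discussed later in the paper straightforward.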

Questionnaire-based validation study
Once the first solid version of the use case cards was available, we conducted a questionnaire-based study to validate two main aspects. On the one hand, the components referring to the clarity and complexity of the proposed approach, such as its learning curve, its level of detail and granularity, and the importance of the visual components with respect to the table, as well as open questions regarding possible missing or unnecessary fields. On the other hand, the elements related to the level of contextualisation with respect to the AI Act, risk-level assessment, requirements, etc. A summary of the questions is provided in Table 3. As can be seen, 9 questions were designed to be answered on a 5-point Likert scale, 2 questions allowed for a yes/no answer plus an elaboration if the answer was "yes", and 2 questions were designed as completely open questions.
The online survey included an introduction with the description of the project, the main goals and the procedure. Then, a brief introduction to the main components of use cases modelled with UML was provided, followed by a short description of the proposed structure for the use case cards. After some demographic questions, the participants were provided with three exemplar use case cards. The first one corresponds to the scene narrator system previously presented in Section 3.3 (Figure 4). The remaining two correspond to the driver attention monitoring system and the student proctoring tool presented in Appendix B (Figures B4 and B5, respectively). We involved 11 participants (5 female, 5 male, 1 prefer not to say), 7 of whom had a technical background (computer scientists/engineers), the rest having varied profiles including 1 legal expert, 1 social scientist and 1 mathematician. All of them had experience in trustworthy AI, science for policy and the AI Act, as well as varying degrees of knowledge of UML. More specifically, their knowledge of the AI Act was self-assessed between "low" and "very high", with mean M1 = 3.27 (question 1, Figure 6, left), whereas their knowledge of UML was self-assessed between "none" and "high", with mean M2 = 2.36 (question 2, Figure 6, right). Since use case cards are intended to be used in the context of the AI Act, it is appropriate to validate them with participants with some knowledge of the AI Act. However, in principle, knowledge of UML is not strictly necessary, so the validation should incorporate participants with little or no knowledge of UML.
Figure 7 shows the histograms of answers for the questions related to the intrinsic features of the method. The difficulty of understanding the three exemplar use case cards was assessed as "somewhat easy" (M3 = 4.09), the level of detail as "adequate" (M4 = 3.00), the importance of the UML diagram (the canvas) between "moderately important" and "important" (M5 = 3.45), and the learning curve at the midpoint between "moderately appropriate" and "quite appropriate" (M6 = 3.55). Regarding the question on missing fields (OQ1), 6 participants answered "no" and 5 "yes". The suggestions provided by those who answered "yes" can be seen in Figure 8. Most of them can be easily integrated into the "Open issues" field of the table. Other suggestions, such as "more explicit contextualization with the AI Act" or "other relevant EU policies", could be considered in future versions. As for the question on possible dispensable fields (OQ2), 73% of the participants answered "no" and 27% "yes". As depicted in Figure 9, there were three concerns: one referring to the type of product, another focusing on the Sustainable Development Goals (SDGs), and one comment on the UML diagram. First, it is important to note that the type of product has to be considered together with the specific application area; otherwise, we cannot obtain a detailed classification. Second, we believe that asking about the SDGs can have a positive effect on AI system providers, as a way for them to consider whether or not their systems contribute to sustainable development. Finally, the importance of the UML diagram was positively assessed by most of the participants in question 5.
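The reported mean values Mx are plain averages over the participants' 5-point Likert answers. A minimal sketch of this computation, using hypothetical responses rather than the actual study data:

```python
# Map 5-point Likert labels to scores 1-5 and average them.
# The label set and the responses below are hypothetical examples,
# not the actual study data.
LIKERT = {"none": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def likert_mean(responses):
    """Mean score of a list of 5-point Likert-scale labels."""
    scores = [LIKERT[r] for r in responses]
    return round(sum(scores) / len(scores), 2)

# Hypothetical self-assessed knowledge of three participants
print(likert_mean(["low", "high", "medium"]))  # → 3.0
```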
Concerning the alignment of use case cards with the AI Act, the feedback from the participants is also very positive. For example, regarding the level of contextualisation with the AI Act (question 7, Figure 10, left), the mean answer is between "somewhat" and "to a great extent", with M7 = 4.18. Regarding its utility for assessing the risk level (question 8, Figure 10, right), the answers range between "very little" and "to a great extent", with a mean value very close to "somewhat" (M8 = 3.82). The general feedback from question 9 (Figure 11) is mostly positive towards an agreement on its appropriateness for the different AI Act-specific aspects.

Fig. 8: Answers to open question 1: "Is there any important field that you miss in the table?". Answers: "the level of risk associated with failure"; "risks associated to each use case"; "foreseeable misuse and risks"; "potential biases"; "information on who controls the AI system"; "more explicit contextualization with the AI Act"; "other relevant EU policies".
"the type of product is not very illustrative in some cases" "the relationship to the SDGs" "the UML diagram is less understandable and informative than the table" Fig. 9: Answers to open question 2: "Is there any field that you would remove?".From the participants' answers to open question 3, we highlight the following suggestions for other potential uses: • "Documentation and training".
• "As a standard to show the use of AI systems to citizens".
• "Create a database of sample use cases".
• "Elaborating on possible mitigation measures after risk assessment".
• "To help non-experts to understand how a product works".Some of these answers echo our goal of proposing a methodology for documenting use cases for AI systems that is easy to understand by a non-expert audience.Other answers also point in the direction of a possible standard that could help with documentation needs, risk mitigation or conformity assessment.
However, some issues were also raised by participants in the last open question. In almost all cases, the feedback refers in one way or another to limited expertise in using UML for documenting use cases. For example, some participants did not clearly understand the difference between the "AI system" and the "use cases", including some confusion about the types of dependencies between use cases. This issue is highly correlated with the lack of prior knowledge of UML. Difficulties in learning and using UML are well-known issues in the research and industry communities (Siau & Loo, 2006). However, the benefits of UML have been empirically validated in multiple studies (Chaudron, Heijstek, & Nugroho, 2012). While we recognise the potential initial difficulties of a wider audience in interpreting the UML canvas, we do not expect a major impact for AI providers, as UML is a de facto industry standard for modelling software systems. Moreover, as most of the participants emphasised, the table is the main element of the proposed approach, and its clarity has been validated regardless of prior knowledge of UML.

Conclusions
In this work we present use case cards, a standardised methodology for the documentation of AI use cases. It is grounded in four strong pillars: (1) the UML use case modelling standard; (2) the recently proposed European AI Act; (3) the result of a co-design with high-profile stakeholders, including European policy and scientific experts with expert-level knowledge of AI, UML and the AI Act; and (4) a validation with 11 experts combining technical knowledge of AI, social sciences, human rights and/or a legal background, and having strong experience in EU digital policies.
Unlike other widely used methodologies for AI documentation, such as Model Cards (Mitchell et al., 2019), Method Cards (Adkins et al., 2022a) or System Cards (Wahle et al., 2023), use case cards focuses on describing the intended purpose and operational use of an AI system rather than on the technical aspects of the (in most cases generic) underlying AI model. This makes it possible to frame and contextualise the use case in a highly visual, complete and efficient manner. It has also proven to be a useful tool for both policy makers and providers in assessing the risk level of an AI system, which is key to determining the legal obligations to which it must be subject.
It is nevertheless important to emphasise that use case cards is not meant to be a final and exhaustive documentation methodology for compliance with any future legal requirement. First, because the AI Act is still under negotiation and therefore subject to possible modifications on its road towards adoption. Second, because the objective of this work is the documentation of use cases, which is just a small piece of the technical documentation required to demonstrate full conformity with the legal text.
Use case cards has the potential to serve as a standardised methodology for documenting use cases in the context of the European AI Act, as stated by the participants in the co-design and validation exercises. In the future, we plan to develop a web-based prototype of a public registry integrating a machine-editable version of use case cards and allowing for the automated analysis of related statistics, such as the number of use cases per application area and per product type, and the most covered SDGs.
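Assuming the machine-editable cards are stored as simple structured records, such registry statistics reduce to counting field values. A minimal sketch (the card records and field names below are hypothetical, not a fixed schema):

```python
from collections import Counter

# Hypothetical machine-editable use case card records; field names
# and values are illustrative only.
cards = [
    {"name": "Scene narrator", "application_area": "Accessibility",
     "product_type": "Mobile app", "sdgs": ["SDG 10"]},
    {"name": "Driver attention monitoring", "application_area": "Road safety",
     "product_type": "Motor vehicle", "sdgs": ["SDG 3"]},
    {"name": "Student proctoring", "application_area": "Education",
     "product_type": "Software", "sdgs": []},
    {"name": "Affective music recommender", "application_area": "Media",
     "product_type": "Software", "sdgs": ["SDG 3"]},
]

# Registry-level statistics: use cases per application area,
# per product type, and most covered SDGs.
per_area = Counter(c["application_area"] for c in cards)
per_product = Counter(c["product_type"] for c in cards)
sdg_coverage = Counter(s for c in cards for s in c["sdgs"])

print(per_product.most_common(1))   # → [('Software', 2)]
print(sdg_coverage.most_common(1))  # → [('SDG 3', 2)]
```

The same aggregation would back a web dashboard over the registry; only the storage format (e.g. JSON or a database) changes.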
author. Additionally, the use case card template and related examples are available at the public GitLab repository https://gitlab.com/humaint-ecpublic/use-case-cards.

Biometrics: Remote biometric identification systems.
Critical infrastructure: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.
Education and vocational training: AI systems used to determine access, admission or to assign natural persons to educational and vocational training institutions or programmes.
AI systems intended to be used to evaluate learning outcomes.
Employment, workers management and access to self-employment: AI systems used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.
AI systems used to make decisions on promotion and termination of work-related relationships, to allocate tasks or to monitor and evaluate performance based on a person's behaviour, personal traits or characteristics.
Access to essential private services, public services and benefits: AI systems used by public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, and to grant, reduce, revoke or reclaim such benefits and services.
AI systems used to evaluate the creditworthiness of natural persons or establish their credit score.
AI systems used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
AI systems for risk assessment and pricing in the case of life and health insurance.
Law enforcement: AI systems used by law enforcement to assess the risk of a natural person offending or reoffending, or the risk for a natural person of becoming a potential victim of criminal offences.
AI systems used by law enforcement as polygraphs or to detect the emotional state of a natural person.
AI systems used by law enforcement to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences.
AI systems used by law enforcement to predict the (re)occurrence of a criminal offence based on profiling of natural persons or to assess personality traits and characteristics or past criminal behaviour.
AI systems used by law enforcement to profile natural persons in the course of detection, investigation or prosecution of criminal offences.
Migration, asylum and border control management: AI systems used by public authorities as polygraphs or to detect the emotional state of a natural person.
AI systems used by public authorities to assess a risk (security risk, risk of irregular immigration, health risk) posed by a person who enters or has entered into the territory of a Member State.
AI systems to assist public authorities to examine applications for asylum, visa and residence permits and associated complaints.
Administration of justice and democratic processes: AI systems used by a judicial authority to interpret facts or the law and to apply the law to a concrete set of facts.

Affective music recommender. Figure B3 shows the use case card of a music recommender system that proposes songs to the user based on personality, mood and playlist history. This use case is inspired by Amini, Willemsen, and Graus (2019). Several studies have demonstrated that music playlists can be used to infer a user's emotions, personality traits and vulnerabilities (Deshmukh & Kale, 2018); conversely, certain music pieces can induce behaviours and manipulate listeners' emotions (Gómez-Cañón et al., 2021). The use case card allows framing the ethical use of the system by stating that its sole purpose is to provide the most appropriate music recommendations, and in no case to manipulate the listener's emotions or behaviour.

Driver attention monitoring. This AI system records the driver's face through a car's in-cabin camera and monitors facial behaviour to detect potential drowsiness and distraction. The monitor attention use case is in charge of detecting such situations and sending alerts in the form of beep tones and light symbols on the car dash (Figure B4). Driver attention monitoring systems are nowadays commonly available as market products (Post, 2022; Subaru, 2022). The corresponding use case card states that the system is part of a safety component of the vehicle, which positions it as a high-risk system. Further, it highlights that the system is conceived to alert the driver, and in no case to let the vehicle take full autonomous control of the car.

Student proctoring. This AI system detects potential cheating by students during exams. It is inspired by the literature (Baldassarri, Hupont, Abadía, & Cerezo, 2015; Roa'a, Aljazaery, ALRikabi, & Alaidi, 2022) and market products (Meazure Learning, 2023; Respondus, 2023). The use case card presented in Figure B5 documents its main use case, detect cheating. It is a complex one, as it includes AI computational tasks of different natures: video analysis for the detection of third persons in the room and of relevant objects (e.g. books, phones); detection of impersonation through voice and face identification; and detection of suspicious behaviours (e.g. talking, facial/gaze movements). Alerts are triggered to instructors for review and action. This system's application area is high-risk and, as such, open issues such as ensuring non-discriminatory access and appropriate data governance must be carefully documented.

Fig. 1: Risk level approach proposed in the AI Act.

Fig. 2: Traditional components of a use case modelled with UML. Left: table for use case description, as proposed by Cockburn (2001). Right: visual elements, as established in the UML standard (Object Management Group, 2017).

Fig. 3: Proposed use case card template. Left: use case table. Right: canvas for the visual modelling of the use case in the context of the AI system it belongs to or is a component of.

Fig. 4: Filling in a use case card: example of a scene narrator application.

Fig. 6: Histograms of the answers to questions 1 and 2, and mean values.

Fig. 7: Histograms of the answers to questions 3 to 6, and mean values.

Fig. 10: Histograms of the answers to questions 7 and 8, and mean values.

Fig. 11: Visualization of answers to question 9 ("In the context of the AI Act, use case card is appropriate for...").

Fig. B3: Use case card for an affective music recommender system.

Fig. B4: Use case card for a driver attention monitoring system.

Fig. B5: Use case card for a student proctoring system.
and shown in Figure 2-left.

Table 2: Comparison of state-of-the-art AI documentation approaches with our proposed use case cards. The symbol denotes a good coverage of the information element, T is used for elements only covered from a technical perspective, and × means no coverage. The methods have been assessed based on publicly available examples.

Table 3: Summary of the questionnaire. Qx denotes 5-point Likert-scale questions and OQx stands for open questions.
Q8: Does the use case card provide information to assess the risk level according to the AI Act?
Q9: In the context of the AI Act, use case card is appropriate for: (1) risk-level assessment, (2) requirements, (3) catalogue of usages, (4) other.

Table A2: List of application areas for use case cards. Subareas marked with are high-risk under AI Act's Annex III (as of the AI Act's "General Approach", December 2022).