
1 Introduction

Scope and Motivation

Numerous examples show that cybersecurity and data protection measures need to be designed in such a way that end users can interact safely with digital systems (e.g., [81, 100]). This user orientation is addressed by the field of usable security and privacy (USP). USP aims to support the design of security and data protection measures in a way that: (1) users, designers, and developers are supported in the best way possible in their security- or privacy-related projects and (2) the measures contribute to a continuously positive user experience. Because of our research background and projects, in this chapter, we will focus primarily on usable privacy. However, usable security and usable privacy usually go hand in hand, which makes it sensible to view them as a common research discipline. Thus, our recommendations should be equally applicable to security-related topics.

Problem and Idea

In practice, the fields of requirements engineering (RE) and user experience (UX) design are tasked with translating data protection regulations into a system’s implementation through requirements and design concepts. Although both disciplines have decades-long expertise with security and data protection requirements, the changes in international data protection regulations and the digital transformation impose new challenges on these disciplines. A particularly challenging question is how to equip end users of systems—both data processors and data subjects—with the appropriate decision support, transparency, empowerment, and other resources.

Proper design of USP requires new questions to be answered to make the right data protection design choices, such as: What understanding do stakeholders—particularly end users—have of privacy and data protection? What kind of privacy-specific needs do they have? And how can we categorize stakeholders into groups? The answers to these questions strongly affect the way a system is designed and its security and privacy properties are made usable, thereby achieving usable security and privacy (USP; see Sect. 2.2). Functionally, they influence what the system should do; quality-wise, they affect how well it is adapted to the stakeholders’ characteristics and context. Unfortunately, existing security frameworks (e.g., BSIMM, SAMM, Common Criteria), models (e.g., MS SDL), and best practices (e.g., the OWASP guides) barely consider usability and user experience.

Our idea is to firmly cement security and privacy into the human-centered design (HCD) process. The HCD process includes the human perspective in a software system’s design process and ensures that the system is developed in such a way that interacting with it helps human actors make the “right” security and data protection decisions intuitively in the corresponding use cases, thereby minimizing potential errors caused by misconfiguration. Techniques for RE and UX that guide practitioners to obtain the correct privacy-related answers are still emerging. Including these techniques in the HCD process helps to ensure that the stakeholders are properly considered during the design of these security and privacy aspects.

Contribution and Structure

In this chapter, we present three complementary USP-oriented methods developed on the basis of good practices, which can be used in HCD, RE, and UX design processes. Together, these methods provide practitioners tasked with designing the USP for a system with a practical toolkit, helping them ensure that USP aspects are sufficiently considered in the HCD process:

  1. Eliciting and modeling the mental models of end users with respect to security and privacy in order to understand the stakeholders’ assumptions and expectations (Sect. 3)

  2. Eliciting and analyzing the privacy needs of data subjects and the data usage needs of data processors in order to understand the stakeholders’ privacy-related requirements (Sect. 4)

  3. Collecting and structuring user characteristics along dimensions into user group profiles and privacy personas in order to understand typical stakeholder perspectives on the protection of their data (Sect. 5)

Figure 1 presents the structure of this chapter. In Sect. 2, we first describe the two concepts underlying this chapter: HCD and USP. We then present the three aforementioned methods in Sects. 3–5. In Sect. 6, we conclude by outlining implications of the USP elicitation techniques. This structure enables readers interested in applying a particular technique to consult only its corresponding section, while we recommend that casual readers follow this chapter’s sequential order.

Fig. 1 Overview of this chapter’s structure

2 Background

2.1 Human-Centered Design

HCD reflects the consideration of end users in the design of systems. The goal and main argument for using an HCD process is to increase the fit of the product to the requirements of end users by involving the stakeholders themselves in the design process. Stakeholder involvement is also intended to minimize the risk of erroneous design decisions. So far, however, the various requirements for security and user experience have mostly been elicited as an incidental by-product—if at all—in HCD. With regard to usable security and privacy (USP) requirements, no best practices exist for many application domains yet, let alone verified research experience. Feth, Maier and Polst [30] proposed a model for USP, using smart homes as an application example. They mapped different parts of their model to the activities in the HCD process. However, mental models and personas were not considered in the model, and USP needs were not addressed in as much detail as in this chapter.

ISO 9241-210 [49] is the international standard for HCD for interactive systems. It is complementary to existing design methodologies and provides a human-centered perspective that can be integrated into different design and development processes. ISO 9241-210 provides the following principles for human-centered approaches that should be followed, regardless of the design process or the allocation of responsibilities and roles:

  • The design is based upon an explicit understanding of end users, tasks, and environments.

  • End users are involved throughout the design and development.

  • The design is driven by and refined through user-centered evaluation.

  • The process is iterative.

  • The design addresses the whole user experience.

  • The design team performing HCD includes multidisciplinary skills and perspectives; this does not require a team to be large, but it should be sufficiently diverse to collaboratively make trade-off decisions regarding design and implementation at appropriate times.

The HCD process consists of four activities for designing an interactive system, which in this chapter we relate to USP:

Activity 1: Understanding and Specifying the Context of Use

According to ISO 9241-210, the context-of-use description shall include the following:

  • The end users and other stakeholder groups: There can be a range of different user groups as well as other stakeholder groups whose needs are important, also regarding security and privacy.

  • The characteristics of the end users or groups of users: End users have different needs and characteristics regarding privacy and security. Eliciting the end users’ mental models promotes a better understanding of their subjective conception of technical processes (e.g., data processing) and tasks. The identified user groups can be described in the form of user group profiles or personas. Their relation to personal data helps determine whether they have privacy needs as data subjects and/or data usage needs as data processors.

  • The goals and tasks of the end users: The types and frequency of tasks that end users typically perform can be part of the persona descriptions, while USP needs are intertwined with particular goals regarding the use or protection of personal data.

  • The environment(s) of the system: Relevant questions concerning the technical, physical, or socio-cultural environment can be, e.g., Do the end users need to interact with the system when they are preoccupied with other activities? Are there presumably many bystanders? Can someone watch over a user’s shoulder and read the screen? Is the data transferred via public or private Wi-Fi?

Activity 2: Specifying the User Requirements

Identifying user needs and specifying the functional and other requirements for the system being designed are crucial activities. A specific subset of needs are USP needs, which can, among other things, be analyzed in order to derive further privacy requirements. The intended context of use includes the (personal) data used in and transferred by the system. A quality model can ensure that the goals of different stakeholders are taken into account and that all relevant quality aspects are considered: data quality, product quality, quality in use, process quality, and structural quality [82].

Activity 3: Producing Design Solutions

Established best practices facilitate the design of solution prototypes and final designs. In the USP context, three different levels of best practices can be distinguished [92]: (1) principles as general fundamentals that should be considered during the development process; (2) guidelines as descriptions for adopting these principles, and (3) patterns as reusable and proven solutions to commonly occurring problems appearing in system development.

Activity 4: Evaluating the Design

User-centered evaluation is an essential element of HCD. Because user tests are usually time-consuming and costly, it is advisable to first conduct an expert-based heuristic evaluation for USP [29]. Another part of the evaluation is the assessment of compliance with legal standards, for which the USP needs provide a helpful basis. In the European Union, the GDPR has the greatest regulatory impact, especially as it defines the rights of the data subject, which include the right of access, the right to rectification, and the right to erasure.

2.2 Usable Security and Privacy

As mentioned earlier, USP refers to inter- and transdisciplinary methods for designing security- and privacy-enhancing measures in such a way that: (1) users and security engineers (e.g., designers and developers) are supported in the best way possible in their security- or privacy-related goals and projects and (2) the measures contribute to a continuously positive user experience (e.g., promoting intuitive decision-making on choices regarding data privacy) [39, 81].

USP gained attention and relevance in the mid-1990s when computers entered every household and the Internet became widespread. In 1996, Zurko and Simon [105] proposed three categories for a user-friendly security agenda: (1) usability testing for security systems; (2) security models and mechanisms for user-friendly systems, and (3) consideration of the end users’ needs as the primary goal(s) for secure system development. This was a radically new perspective, as end users were often still regarded as a security threat at the time. Other works such as those by Whitten and Tygar [99], Adams and Sasse [3], and Blythe, Koppel and Smith [8] built upon this work.

In the 2000s, USP gained momentum in research. Several standard works dedicated to this topic were published—such as [7, 23, 46, 85]—and many studies were conducted [20, 35–37, 87]. Thirty-four of those earliest works were collected in an anthology [21] focusing on the realignment of usability and security, authentication mechanisms, secure systems, privacy and anonymity systems, and the commercialization of usability. Garfinkel and Lipford [39] provide a good summary of the field up to 2014, while the work of Fischer-Hübner et al. [31] (in German) is also still up-to-date in many areas.

In recent years, the field of USP has expanded to cover topics such as ubiquitous computing, smart homes, and online privacy. Current research also reflects broader social challenges and themes. Two examples are inclusiveness and diversity, which are both increasingly getting attention in the USP community [59, 70]. Moreover, the surge in employees working from home during and after the COVID-19 pandemic has increased the need for USP [26]. For a more detailed summary of USP research, please refer to the chapter “Empirical Research Methods in Usable Privacy and Security”.

3 Mental Models in Security and Privacy

Mental models are personal internal representations of the external reality that help people understand their surroundings and guide their actions [52]. On a more abstract level, the external reality can represent any kind of target system or problem space that people have to deal with: It can be simple and concrete, like finding our way to the kitchen, or complex and abstract, like dealing with climate change. Mental models essentially convey an individual’s perception, imagination, knowledge, and comprehension of a particular target system. When people deal with security and privacy issues in cyberspace, their actions are inevitably the result of their concepts regarding technology, tools, or threats contained in their mental models. In case of misconceptions, people may bypass security measures or avoid using privacy settings because they do not understand how they work or what benefit they bring. It is therefore important to consider end users’ mental models in the design of a system, as this helps designers and developers of security and privacy mechanisms to align those mechanisms with the end users’ understanding and expectations. This can help increase end users’ acceptance and enables them to make informed decisions regarding security and privacy. In Sect. 3.1, we will first define key properties of mental models that are specific to their application in human–computer interaction (HCI). In Sect. 3.2, we will detail for which purposes mental models are suitable in usable security and privacy (USP) and provide examples. In Sect. 3.3, we will conclude this section by outlining how mental models can be elicited in practice.

3.1 Mental Models in Human–Computer Interaction

In the field of HCI, mental models are commonly used to capture the various elements of an individual’s awareness and perception of theoretical concepts or the specific information of systems they use [74, 93]. Human beings employ (predominantly simplistic) mental models to grasp complicated processes and systems in their daily lives, rather than spending a lot of time studying them in depth [19]. Nevertheless, irrespective of their accuracy, mental models guide people’s decision-making process in both familiar and unfamiliar situations [19, 51]. An end user’s mental model is created through interactions with the target system (e.g., the Internet) or with its system image (e.g., an Internet browser) [71]. Individuals construct mental models of unknown systems by attempting to explain their observations and experiences using analogies from concepts they are familiar with [15]. Thus, the model is affected by an end user’s experience and understanding. However, a mental model does not have to be technically correct; it only needs to be practical.

The elicitation of mental models can provide insights into the perceptions and sensations of individuals, which in turn helps to better understand the reasons for and the factors influencing their behavior [19]. If one then tries to elicit an end user’s mental model, a conceptualization of this model emerges (i.e., a model of a model). The insights gained from this model can be used to align the target system with the end user’s mental model by either supporting them in their understanding or adapting the design of the target system or system image. For example, conceptualized models can be used to design a system in such a way that the cognitive effort required to use it is minimized.

Mental models are generally considered to be vague and highly contextual representations [71]. Based on observations, the use of mental models is subject to the following restrictions [71], which have also been confirmed in related studies [10, 32, 58, 62, 77, 84, 88]: (1) Mental models are incomplete, unstable, and simplified. (2) Mental models have no sharp boundaries. (3) Mental models are “unscientific” and tend to be incorrect. (4) The ability of end users to use mental models is limited. Consequently, there cannot be one unambiguous mental model for a target system; rather—due to subjectivity—several models must always be considered. If the complexity of a target system exceeds the cognitive abilities of a human being, they depend on using a more or less suitable mental model to plan the actions they assume to be “correct” for achieving a goal.

3.2 Mental Models in Usable Security and Privacy

In the field of USP research, mental models are often studied regarding particular tools and technologies (e.g., password managers [12], Wi-Fi [55]), abstract systems (e.g., the Internet [53], the smart home [103, 104]), or other abstract concepts (e.g., privacy perceptions [60, 72, 88]). Table 1 presents a non-exhaustive overview of the body of literature on mental model studies in USP and maps these to the stakeholders they address. Numerous studies have sought to understand how end users—and laypeople in particular—envision networks and communication channels, what entities they assume are involved in them, and what threats they believe these entities pose to security and privacy. For example, lay users tend to underestimate the complexity and multi-layered nature of Internet communication, meaning that the actual (personal) data flow remains obscure to them [38, 53, 67]. At the same time, end users also underestimate the capabilities of secure protocols because their complexity exceeds the end users’ knowledge and understanding [1, 56]. From previous studies, it is evident that the nature of security and privacy does not permit a mental model that is universally true. Instead, individuals use highly simplified models [2] and rely on various incomplete and poorly formed sub-models [77]. Because the complexity of information systems is often high, simplified mental models can cause end users to behave in unexpected ways, such as unintentionally disclosing private information [2]. Surveying mental models in USP can help mitigate such effects because doing so helps researchers and developers understand why end users may or may not use certain tools or security and privacy mechanisms. Comprehensive summaries of the contents and applications of mental models in security and privacy can be found in [17, 93]. We can distinguish between three main purposes of using mental models in USP:

Table 1 Overview of mental model studies in USP

Purpose 1: Developing Systems in Which Cognitive Effort Is Optimized for Usability [9, 61, 84, 93]

Mental models are frequently used to address a common difficulty in USP: ensuring that the end users of a system accurately perceive the presented information [5, 9, 11] and facilitating security- and privacy-preserving behavior [12, 84, 100]. For example, a study on security warnings revealed that novice end users and experienced end users seek out different cues when confronted with a warning and also perform different actions [9]. Likely due to their more limited knowledge and experience, novice end users tend to ignore potential security risks because they, for example, do not understand what “SSL certificate” means or because they believe that “saving” and “opening” a file is equivalent. As a result, novice end users may lower their device’s overall security level or run unknown software and just wait to see what happens. So, instead of presenting end users with warnings that require them to engage in manual and complex security checks, better wording and automated security checks are ways to increase both usability and security. Many other mental model studies on information systems and security or privacy mechanisms in use contexts highlight similar issues [12, 56, 69, 104]. However, few researchers have used mental models in the development process to inform the design of metaphors and to ensure that security or privacy information is conveyed as intended [5].

Purpose 2: Effective Communication Between Researchers, Experts, Developers, and Laypeople [56, 78, 94, 96]

Studies in USP have repeatedly found that lay users and developers or researchers do not speak the same language; for example, lay users talk about “encryption” but actually refer to concepts related to “authentication” [1, 80]. Also, mental models of actual encryption are limited to concepts of symmetric encryption because asymmetric encryption is beyond laypersons’ understanding [101]. In such cases, the laypersons’ mental models can serve as a tool or template onto which the knowledge of experts can be mapped, thereby making expert knowledge accessible to non-experts. At the same time, the aspects that laypersons consider to be pivotal to their decision-making can be identified and mapped to experts’ mental models [11]. Effective communication may also refer to certain UI designs or support tools. These can be designed to correct misconceptions in laypersons’ mental models, thereby facilitating security- and privacy-enhancing decisions [32]. This is especially relevant because people have been found to behave rationally in their decision-making, acting congruently with the framework of their mental models [11]. USP research has also suggested approaches that stimulate “false” mental models of end users, which can still lead to a secure use of tools [95].

Purpose 3: Capturing and Exploring Concerns, Expectations, and Understanding [34, 40, 53, 65, 79, 103]

End users’ mental models can be used, e.g., to get an overview of the variety of mental models that an information system should support [66, 75]. In particular, mental models can serve as an additional basis for dividing the potential target user group into smaller and more homogeneous subgroups of end users that share certain key characteristics related to privacy or security (see also Sect. 5.1 on user group profiles). In doing so, the designers in an HCD process can decide which subgroups to prioritize or give more attention to based on their mental models, for example, user groups with mental models that lead to potentially undesirable, non-privacy-compliant, or insecure behavior [75]. Figure 2 shows three mental models that were identified in a study on employees’ understanding of their right to privacy in employment [88]. Each mental model is characterized by different objectives, desires for self-determination and transparency, and acceptance of restrictions. Naming the different mental models makes them more “tangible” and allows them to be used by researchers or developers when implementing privacy controls to take specific sets of properties into consideration. For example, based on the three models, when employers promote a new information system as privacy-friendly, although all employees expect greater control over their personal information, only “Privacy Doctrinairists” would also expect greater transparency. In contrast, a system that only provides transparency may not even be perceived as privacy-friendly by employees with the other two mental models.

Fig. 2 Example mental models of the right to privacy in employment (adapted from [88]): Privacy Doctrinairists, Control-Seeking Pragmatists, and Data Flow-Concerned Protectionists, each characterized by their objective, self-determination, transparency, and restrictions

3.3 Mental Model Elicitation

Eliciting mental models involves extracting an individual’s internal representation of a target system. Common elicitation techniques can be divided into two categories according to their methodology [73]: (1) direct methods, which rely upon a stakeholder’s ability to articulate and partially structure their knowledge and train of thoughts, and (2) indirect methods, which employ analyses of written or verbal recordings when stakeholders might be unaware of how they perceive the target system. In the latter case, an evaluator interprets the results of previously processed tasks and structures them, e.g., by frequency.

Open-ended semi-structured interviews are a frequently used instrument for eliciting mental models because the stakeholders can express themselves freely while allowing the interviewer to explore relevant aspects in greater depth by asking targeted follow-up questions [93]. Different support methodologies can be used during these interviews, such as card-sorting tasks or verbal and graphical methods. To ensure the mental models are elicited as completely as possible, purely verbal elicitation can be supplemented, e.g., by presenting the interviewees with illustrations depicting typical elements of the contextual topic and asking them to sort these according to relevance, draw them for themselves, or verbally explain and define certain terms [44, 68]. The interviewees can also be asked to solve practical tasks. During all activities, participants should be encouraged to describe their thought process aloud, which allows inferences to be made about their mental model [68].

Some researchers also use a combination of focus groups and individual interviews [84]. Focus groups are a special type of workshop that allows a larger number of subjects to be interviewed simultaneously [57]. Moreover, they help to uncover previously unidentified aspects through discussions between the participants when their opinions diverge. However, researchers should be aware that participants may adapt or even withhold their personal opinions due to group dynamics [57]. Other elicitation techniques are based on hypothetical scenarios that put stakeholders in a situation where they must make decisions according to their mental model [9, 25, 84, 95]. For example, participants may role-play a hypothetical end user who has to manage their privacy and security in everyday tasks [25]. All of these methodologies have their respective advantages and limitations [6]. In order to overcome these limitations, it is a common practice to employ two or more elicitation techniques [55, 78, 84].

When eliciting mental models by means of surveys, covering all topics of interest poses a challenge. A sound understanding of the target system is usually required. For this purpose, it can be helpful to first model the target system completely and then derive the survey from it. This is also referred to as an expert model approach [10, 68]. First, a model of the target system is created, which ideally contains a complete overview of influencing factors and their relationships. The modeling process may include literature reviews and expert involvement. The model may be revised and fine-tuned over several iterations [88]. Subsequently, questions are derived from the created model, which can be used in the context of an interview study, among other things. In this way, each aspect of the previously created model can be examined specifically. Initial conceptualizations of mental models can be based upon these results. The results can then be verified or validated using a survey for which questions are formulated that test the key points of the previously found conceptualizations. To ensure that the outcomes have statistical power, measures must be taken to conduct such a survey with a sufficiently large number of participants. If the survey confirms the initial conceptualizations, the mental model can be further tested with experts, for example, through a practical evaluation in which the mental model is tested against several application scenarios.

In most cases, the interviews are analyzed by means of inductive coding. Here, an initial code list can be created either by coding a few transcripts [53, 58] or by using the available literature and expert knowledge of the research group [77]. To make the mental models more tangible, some researchers create word clouds of the codes to identify their relevance by frequency [84]. Other researchers use graphical approaches [16]. For example, they split the interviewees’ responses into short phrases and identify connections between two objects within a statement. These are then represented by nodes in a diagram. The relationships between the nodes are visualized by paths, which are also taken from the analyzed statement (action, relationship). If the elicitation is based on an expert model, this can also serve as a code book for deductive coding. If a statement cannot be assigned to a code, the expert model is expanded to include it. During the evaluation, frequently occurring nodes or paths can be highlighted to visualize the frequency of keywords in the expert model. This visualization can furthermore be used to evaluate the accuracy of the statements [68].
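As a minimal sketch of this graphical analysis approach, assuming the interview statements have already been split into (source, relation, target) triples during coding, one could count node and path frequencies and emit a Graphviz diagram in which thicker edges mark more frequent statements. All triples and names below are invented for illustration:

```python
from collections import Counter

# Invented example triples, as if coded from interview transcripts by
# splitting responses into short phrases (source node, relation, target node).
statements = [
    ("browser", "sends data to", "website"),
    ("website", "stores", "personal data"),
    ("browser", "sends data to", "website"),
    ("router", "forwards", "personal data"),
]

# Count how often each node and each path occurs so that frequently
# mentioned elements can be highlighted in the resulting diagram.
node_counts = Counter()
edge_counts = Counter()
for source, relation, target in statements:
    node_counts.update([source, target])
    edge_counts[(source, relation, target)] += 1

# The most frequently mentioned concepts indicate candidate key elements.
print("Top concepts:", node_counts.most_common(3))

# Emit the graph in DOT format; edge thickness encodes statement frequency.
print("digraph mental_model {")
for (source, relation, target), count in edge_counts.items():
    print(f'  "{source}" -> "{target}" [label="{relation}", penwidth={count}]')
print("}")
```

If the elicitation is based on an expert model, the same counts can be overlaid on the expert model’s nodes and paths to visualize keyword frequency, as described above.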

4 Usable Security and Privacy Needs

In Sect. 3, we learned how end users think about usable security and privacy (USP) in the context of (software) systems. With this understanding, we can elicit the end-user needs that flow from these perceptions. Because we found that there is no widely used process for integrating USP into the design process of a secure system, we developed an approach based on the human-centered design (HCD) process that has proven very helpful. In this section, we present the specific aspects of our approach to inspire design processes in other organizations. Although the focus here is on USP, keep in mind that aspects other than USP also need to be considered that pertain solely to user experience (UX; e.g., usage needs) and security (e.g., risk analysis).

The USP needs fit well into the activities of the HCD process (see Sect. 2.1). Regarding context of use, stakeholders are assessed in terms of their role regarding personal data, which helps determine whether they have privacy needs and/or data usage needs. When specifying the user requirements, these needs are elicited and documented, and other types of requirements can be derived from them through analysis. The needs can be used as principles that guide the activities toward producing design solutions. Finally, they play a crucial role in uncovering and negotiating requirements conflicts when evaluating the design.

We will begin with an overview of the five types of requirements that are central to this approach (Sect. 4.1). We will then detail the activities for eliciting and analyzing them (Sect. 4.2) and then those for documenting and validating (Sect. 4.3) USP needs. Finally, we will apply them in a real-world example (Sect. 4.4).

4.1 USP Needs as a Requirements Type

In essence, a user requirement describes “a need perceived by a stakeholder” [43]. Typically, these needs are aimed at what a (software) system should do (functional requirement) and how well that system should do this (quality requirement) [42]. The requirements regarding data protection are somewhat different because the end users’ concern is not so much with the system, but with what is (potentially) being done with personal data. These needs are therefore often more abstract than functional and quality requirements and cannot be translated directly into organizational measures or software properties. In this chapter, we consider a need to be a goal expressed by a data subject or data processor regarding the processing of personal data.

Various models and methods have been developed to transform abstract legal requirements into suitable measures and to evaluate these measures. The Standard Data Protection Model (SDM) [4] by the technology working group of the German Data Protection Conference standardizes the implementation of the GDPR requirements in concrete technical and organizational measures. One approach to defining privacy requirements is to consider “data privacy” a software product quality characteristic—much like “security”—and taking the seven protection goals from the SDM as subcharacteristics [82]. This helps to define and organize security and privacy requirements. To promote USP, the Usable Privacy Cube model [50] considers both objective and perceived usability criteria when evaluating data protection.

Still, documenting privacy aspects in the form of a traditional user requirement might cause them to either be too unspecific (e.g., “The system shall maintain the users’ privacy.”) or too much in the solution space (e.g., “When the user logs in, the system shall perform the following actions: …”), while other notations such as soft goal models do not differentiate between privacy aspects. Something appeared to be missing in between: a different type of need that must be elicited in order to understand what drives the stakeholders. This led us to propose USP needs, which we introduce as a novel concept in this chapter: a set of five needs organized into two logical groups as shown in Fig. 3. Each of these needs represents a desire expressed by a stakeholder regarding how personal data are handled [83]. It is paramount to divide the stakeholders into the two main user group profiles regarding data privacy—data subjects and data processors—because they have divergent needs that, as we will see, might contradict each other.

Fig. 3 The five usable security and privacy needs, grouped by type of end user

What sets USP needs apart from typical user requirements is that they are general purpose, i.e., not oriented toward a particular software implementation. They more strongly relate to the legal aspects of personal data. This way, when the needs of both user groups are fulfilled optimally, the effectiveness and the legal compliance of the developed system are inherently maximized, which helps to argue against needs that data protection regulations do not tolerate.

Let us start with data subjects. They are concerned with knowing how their data are protected and what their data are being used for, and they want to exert control over (what happens with) their data. This translates into three privacy needs:

  • Transparency need: The data subjects’ need or desire for understandable information and openness about the processing of their personal data.

  • Self-determination need: The data subjects’ need or desire for autonomous control over the processing of their personal data.

  • Security need: The data subjects’ desire for their personal data to be protected, particularly with regard to the privacy violations that should be prevented. These will often be phrased negatively, i.e., a need for something not to occur.

Data processors, on the other hand, want to use personal data for particular purposes and understand what is allowed. This translates into two data usage needs:

  • Data processing need: The data processors’ need to process certain personal data for a specific purpose, including the ability to access such data.

  • Processing information need: The data processors’ need for information about regulations regarding the processing of personal data in order to be legally compliant.
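To make this classification concrete, the following minimal sketch models the five need types and their grouping; the class and field names are our own illustration rather than a prescribed notation, and the example record abbreviates the security need from Table 2 with invented wording:

```python
from dataclasses import dataclass
from enum import Enum

class NeedType(Enum):
    # Privacy needs, expressed by data subjects
    TRANSPARENCY = "transparency need"
    SELF_DETERMINATION = "self-determination need"
    SECURITY = "security need"
    # Data usage needs, expressed by data processors
    DATA_PROCESSING = "data processing need"
    PROCESSING_INFORMATION = "processing information need"

PRIVACY_NEEDS = {NeedType.TRANSPARENCY, NeedType.SELF_DETERMINATION, NeedType.SECURITY}

@dataclass
class USPNeed:
    """A desire expressed by a stakeholder regarding how personal data are handled."""
    name: str
    need_type: NeedType
    stakeholder: str   # the data subject or data processor expressing the need
    desire: str        # the <need> part of the user-story template (Sect. 4.2)
    rationale: str     # the <so that> part of the user-story template

    @property
    def is_privacy_need(self) -> bool:
        # Privacy needs belong to data subjects; the rest are data usage needs.
        return self.need_type in PRIVACY_NEEDS

# Abbreviated example with invented wording, loosely based on Table 2.
email_need = USPNeed(
    name="Business email communication",
    need_type=NeedType.SECURITY,
    stakeholder="employee (data subject)",
    desire="my work emails not to be read by others without my knowledge",
    rationale="my professional communication stays confidential",
)
assert email_need.is_privacy_need
```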

4.2 USP Needs Elicitation and Analysis

In this section, we will describe our recommended approach for embedding the needs into typical requirements engineering (RE) activities, with suggestions on how needs are elicited in workshops and interviews. The elicitation of needs should begin very early in the RE phase; this activity can be initiated as soon as the most important stakeholders have been identified during the initial stakeholder analysis. This is possible because USP needs describe a personal perception or desire that is largely independent of the system being developed. The needs analysis is performed in various stages depending on the status of other requirements artifacts and illustrates the useful contribution of USP needs within the RE activities:

  • Open needs analysis: The basic needs are determined for the key stakeholders through workshops or interviews. This results in an initial set of basic needs that provide input for deriving user requirements, akin to soft-goal analysis.

  • Scenario-based needs analysis: Once the project’s topic, scope, and goals have crystallized and high-level scenarios have been formulated, the second type of needs analysis can be performed. This analysis associates basic needs with scenarios, while further needs are uncovered as the domain is understood better. In workshops and interviews, scenarios can be used to trigger the stakeholders to express previously uncovered needs. As this activity aims to enrich the understanding of a scenario with the needs that apply to it, scenarios do not have to be associated with all the needs nor vice versa.

  • Detailed needs analysis: When the to-be (process) situation has been described and use cases have been formulated, it becomes possible to triangulate the use cases with the user requirements and needs associated with them. This ensures that the stakeholders’ privacy-related needs have been considered and that the system will deliberately promote, ignore, or actively prevent these needs from getting fulfilled. At this stage, workshops and interviews are usually no longer performed; this is only recommended if the analysis reveals gaps in the elicitation.
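As an illustration of this triangulation step, the following sketch checks hypothetical traceability links between use cases and needs and reports gaps on both sides; all identifiers are invented:

```python
# Hypothetical traceability links between use cases and USP needs. In a real
# project, these would come from the requirements management tooling.
use_case_to_needs = {
    "UC-01 View privacy dashboard": ["Transparency: see stored email metadata"],
    "UC-02 Export work reports": [],
}
all_needs = {
    "Transparency: see stored email metadata",
    "Self-determination: revoke consent for location data",
}

# Use cases without any associated need may indicate gaps in the elicitation.
for use_case, needs in use_case_to_needs.items():
    if not needs:
        print(f"Gap: {use_case} has no associated USP need")

# Needs not covered by any use case require an explicit decision: will the
# system promote, ignore, or actively prevent their fulfillment?
covered = {need for needs in use_case_to_needs.values() for need in needs}
for need in sorted(all_needs - covered):
    print(f"Unaddressed need: {need}")
```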

The first two analyses, in particular, require active participation of the stakeholders. We recommend eliciting the needs through workshops, but if the project context demands it, semi-structured interviews can also be used. Below, we provide a general-purpose template for both elicitation techniques, which can be tailored to specific project contexts (e.g., a company for which a privacy solution is being designed, the analysis of a particular website, or the definition of a process in which personal data are processed). We also suggest holding at least two different workshops: one with stakeholders who are primarily data subjects to elicit their privacy needs and one with stakeholders who are predominantly data processors to elicit their data usage needs.

The workshop for data subjects should be held with at least six participants so that at least two groups can be formed. Ideally, the workshop includes a larger, diverse sample of stakeholders, but with more than fifteen participants, it becomes harder to manage. Moderation cards should be prepared in three colors, such as yellow, green, and blue. To write down their needs, the participants should use the following template, which is derived from user stories [14]: “As a <data subject>, I would like <need>, so that <rationale>”—see Sect. 4.3 for several examples. The workshop can be structured as follows:

  • Form groups of three.

  • Each person in the group selects one data class that is likely to contain personal data about them.

  • For each data class, the group repeats the following steps:

    • Discuss what a company, individuals, or third parties do with this data class, and for what purpose. Optionally: describe typical security problems with protecting this data class.

    • As a data subject, which needs do you have (accordingly) regarding the protection of this data? Write each need on a yellow card using the sentence template. (This question elicits security needs.)

    • As a data subject, what would you like to know regarding the collection, processing, or use of this data? Write each need on a green card using the sentence template. (This question elicits transparency needs.)

    • As a data subject, what need do you have regarding self-determination? Write each need on a blue card using the sentence template. (This question elicits self-determination needs.)

  • Each person in the group picks one data protection, transparency, and self-determination need that they find most important. If another person in the group already picked their most important need, they should choose their second most important one. It is also possible for the workshop organizers to use a prioritization technique (e.g., “buy-a-feature” [41]).

  • Discuss as a group what use cases/overall features of a system should be available to fulfill the selected needs.

The workshop for data processors can be performed with fewer participants than the workshop with data subjects because this is a smaller stakeholder group whose goals are more homogeneous. Six or nine participants will therefore suffice. Prepare moderation cards in two colors, such as purple and orange. Their needs are also documented as a user story, but using this template instead: “As a <data processor>, I would like <need>, so that <rationale>.” The workshop can be structured as follows:

  • Form groups of three.

  • Each person in the group selects one class of personal data that they are likely to process.

  • For each data class, the group repeats the following steps:

    • Discuss what you, your company, other individuals, or third parties do with this data class, and for what purpose. Optionally: describe typical problems regarding the processing of this data class.

    • As a data processor, which needs do you have (accordingly) regarding the processing of this data? Write each need on a purple card using the sentence template. (This question elicits data processing needs.)

    • As a data processor, what would you like to know regarding the collection, processing, or use of these data? Write each need on an orange card using the sentence template. (This question elicits processing information needs.)

  • Each person in the group picks one data processing need and one processing information need that they find most important. If another person in the group already picked their most important need, they should choose their second most important one.

  • Discuss as a group what use cases/overall features of a system should be available to fulfill the selected needs.

In case the above two workshops cannot be organized, or whenever there are key stakeholders who cannot participate in the workshops, we recommend including the following questions in an elicitation interview with the stakeholders:

  1. Which of your personal data do you consider worth protecting in the context of the system under development?

  2. What (potential) problems do you see in protecting your privacy?

  3. In what way should a tool that allows you to set and monitor your privacy settings improve the protection of your privacy? (This question implicitly probes for self-determination needs.)

  4. For a specific data category: Which actor or role has or should have access to these data, and what do they use it for?

  5. Which need do you have regarding the protection (or, for data processors: processing) of these data? (This question elicits security or data processing needs.)

  6. What would you want to know regarding the collection, processing, or use of these data? (This question elicits transparency or processing information needs.)

  7. Which of each discussed need is most important to you, and why?

4.3 USP Needs Documentation and Validation

When needs have been elicited during the open or scenario-based needs analysis, they should be documented accordingly. In addition to guidelines on how to document them, in this section, we present a procedure for validating the needs by examining their legal basis.

Typically, documenting the needs begins with typing up the needs from the workshop’s moderation cards. We recommend including every elicited need from the workshops or interviews, but differentiating between the needs that the participants identified as important as opposed to those they did not, for example, by prioritizing them according to the MoSCoW method into must-, should-, could-, and will-not-have needs [13]. At this stage, it is important to assure the quality of the contents by checking for wrongly attributed needs (written on the wrong color card) and verifying that the expressed need and the rationale make sense and are self-explanatory. The need should also be given a name, which can often be derived quite simply from the keywords in the need section of the template. Table 2 shows examples of a documented security need and a data processing need.

Table 2 Examples of a security need and a data processing need, adapted from [89]

The key difference in validating needs compared to typical requirements is that instead of verifying with the stakeholders that the documented needs are correct, they are analyzed for their legal merit based on applicable regulations, legislation, and case law. The resulting legal interpretations form an important basis for assessing to what degree a particular need can be met in the intended context: Should it be allowed, limited, or forbidden? We recommend storing the legal interpretations as separate entities that are subsequently linked to one or more needs. For example, in the German-language documentation of the TrUSD project [89], the needs shown in Table 2 are linked to the two legal interpretations “Processing for purposes of the employment relationship” and “Processing of business emails/determination of private email use,” while the data processing need is additionally linked to the legal interpretation “Communication control.” These descriptions explain that the processing is permissible pursuant to the German Federal Data Protection Act (BDSG) under specific conditions (e.g., that business and private emails are clearly separated) and for specific purposes detailed in the BDSG and the GDPR.

There is a constant tension over whether limiting one’s personal privacy is justified for a specific processing purpose. The use of personal data to optimize work processes often sparks concerns regarding monitoring and performance evaluation, which shows that especially data processing needs and security needs may clash. A key activity within RE is to analyze requirements during the requirements validation phase in order to identify conflicting requirements that need to be reconciled. This is important because failing to identify such conflicts might lead to an implemented system that does not satisfy the needs of at least one stakeholder. By specifying these needs, requirements reconciliation can be performed, and the decisions made to address conflicting needs are made explicit. In many cases, this will involve communicating the legal interpretation to the stakeholders involved (e.g., that privacy legislation does not permit a need to be fulfilled, or that higher value is assigned to a different need). The best solution approach depends on the context. For example, if potentially many data subjects have a concern, the explanation of why the system helps fulfill a particular data processing need could take the form of an information campaign explaining the necessity and benefits of processing the data and what security measures are being taken. Similarly, if a project reveals there is a great demand for transparency, this can be met through the development of solutions such as privacy dashboards in organizations [90] or privacy cockpits in digital ecosystems [22].
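As a minimal illustration of such conflict identification, the following sketch flags a privacy need and a data usage need that refer to the same data class as reconciliation candidates and points to the linked legal interpretations. The records loosely follow the examples in Table 2 and Sect. 4.3 but are abbreviated and partly invented:

```python
# Abbreviated, partly invented need records; "group" distinguishes privacy
# needs (data subjects) from data usage needs (data processors).
needs = [
    {"name": "Business email communication", "group": "privacy",
     "data_class": "work emails", "priority": "must",
     "legal_interpretations": []},
    {"name": "View email contents of employees", "group": "data usage",
     "data_class": "work emails", "priority": "should",
     "legal_interpretations": ["Processing for purposes of the employment relationship",
                               "Communication control"]},
]

# A privacy need and a data usage need over the same data class are
# candidates for reconciliation during requirements validation.
privacy_needs = [n for n in needs if n["group"] == "privacy"]
usage_needs = [n for n in needs if n["group"] == "data usage"]
for p in privacy_needs:
    for u in usage_needs:
        if p["data_class"] == u["data_class"]:
            print(f'Potential conflict over "{p["data_class"]}": '
                  f'{p["name"]} ({p["priority"]}-have) vs. {u["name"]}; '
                  f'consult legal interpretations: {u["legal_interpretations"]}')
```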

4.4 Example Case Study

In 2021, the regional public broadcaster L1 of the Dutch province of Limburg received international media coverage due to a serious privacy-related incident. A quickly escalating dispute caused their newly appointed director to be suspended after nine months; a court ruling particularly blamed a disorganized works council. Problems had arisen even before the director took up his post, with staff disputing the Supervisory Board’s appointment procedure. Dissatisfaction over his communication and leadership style caused employees to respond in a way the director described as a guerrilla war waged against him. But things really culminated when he presented a draft of the new privacy regulations that would infringe on the workers’ privacy through the use of hidden cameras in the office, and—if there were compelling reasons—access to browsing histories and email accounts, including those of journalists, the company physician, and members of the works council. The director had the sole power to determine what he considered compelling, and he would also be responsible for handling any complaints. The outrage among staff, Dutch journalists and lawyers, and in society as a whole resulted in the draft being retracted just three days later.

This case study shows that the director had several data processing needs, such as “View email contents of employees” (shown in Table 2) and “View browsing history of employees” for the purpose of assessing individual employees. Typically, professional correspondence may be reviewed if it is clearly distinct from private communications, but this assessment should then be performed by a superior, not by the director. This specific situation, however, uncovers a domain-specific type of processing information need: “Privacy rights of journalists.” The fundamental principle that safeguards freedom of the press limits the ability to put journalists under surveillance to ensure that they can exercise their duty of protecting their sources. The director should have been aware that this kind of data processing contradicts the special rights of journalists that safeguard the security needs of “Business email communication” (shown in Table 2) and “Protect the identity of news sources from others” (including their employer), to which they are legally and ethically entitled. This demonstrates that these kinds of needs are not general purpose; while work emails may normally be monitored under certain conditions, these particular security needs are prioritized as “must-have” for journalists.

The director also had the data processing need “Video surveillance of employee activities.” Under strict conditions, data protection legislation allows video surveillance for specific purposes (e.g., preventing illegal activities or industry espionage; improving work floor safety). However, monitoring employee activities in non-public spaces using CCTV cameras is only allowed if it is the mildest and most suitable measure, and should in that case be openly announced instead of through the use of hidden cameras. For the same reasons as above, this conflicts with and is overruled by journalists’ security needs.

5 User Group Profiles and Privacy Personas

Section 4 described what needs the end users of a software system have with regard to usable security and privacy (USP). But who are these end users, and how can we typify them? The ISO 9241-210 standard [49] names two artifacts for describing user characteristics: user group profiles and personas. Although they are introduced as part of the context of use, they can also be used in other activities of the human-centered design (HCD) process, for example to specify the usage requirements of specific groups. These artifacts can accompany the development team throughout the development process up to the evaluation, which can be carried out as a walkthrough from the perspective of a specific persona. We will discuss the concept of user group profiles in Sect. 5.1 and that of (privacy) personas in Sect. 5.2.

User characteristics strongly influence the context in which a system is used. It is therefore useful to gather and analyze relevant information about them in order to understand the current context and to specify the context for the future system. User group profiles summarize typical characteristics of end users, while personas are concrete examples of typical end users [86]. As examples of the characteristics of different user types, ISO 9241-210 cites end users with different levels of experience or physical capability.

5.1 User Group Profiles

Concerning data protection, the most essential user group profiles are the two main types of end users distinguished in the GDPR [27]: data subjects, whose personal data are processed, and data processors, who process personal data. Section 4.1 demonstrates one practical use of these profiles.

The consumer study “DsiN-Sicherheitsindex 2022” [63] distinguishes five different groups of end users of Internet services by their knowledge and behavior and provides suggestions on how to address security deficits for each (percentages according to DsiN):

  1. Fatalistic users (17.7%) see dangers lurking everywhere but question the effectiveness of security measures. They often do not realize that their own behavior is an important component in the security concept.

  2. Outsiders (5.3%) often feel overwhelmed by new digital offers but consider themselves to be primarily responsible for protecting their personal data.

  3. Thoughtless users (37.1%) have a very high level of security knowledge but apply it too rarely. They are the least concerned about being at risk and have little interest in risk reduction measures.

  4. Driving users (22.2%) are open to new things and try out more new digital services and offers than other end users. Due to their open-mindedness and curiosity, they are particularly suitable as multipliers to raise awareness.

  5. Considerate users (17.8%) have the highest security knowledge and are also forerunners in the implementation processes. They are the most cautious and privacy-aware users when it comes to new digital offerings.

A similar approach is taken by Dupree et al. [97]. They divide end users of privacy and security tools into five categories according to their attitudes, beliefs, and behaviors: marginally aware, fundamentalist, struggling amateur, technician, and lazy expert. Some of these categories are compatible with the DsiN classification; for example, the lazy expert resembles the thoughtless users. Based on the rather abstract user group profiles, Dupree et al. also created personas (see Sect. 5.2) that cover the user space of privacy and security tools (e.g., “Henry—The Lazy Expert”). With respect to end users’ attitude and motivation toward giving feedback, Groen et al. [45] identified seven categories: privacy-tolerant and socially ostentatious, privacy-fanatical but generous, passive and stingy, loyal & passionate, incentive seekers, perfectionists & complainers, and impact seekers. Due to cultural differences, corresponding categorizations often only apply to the inhabitants of the country examined. For example, in a recent study, 65% of the participants in Cyprus were open to sharing their facial images with public administration for identity purposes, compared to 9% of the participants in Germany, Poland, and Romania [28]. For user group profiles that describe end users according to their use of particular security measures, it must be noted that corresponding security measures often become outdated after a few years, which may cause these user classifications to also become outdated over time.

User group profiles are a helpful means of painting a much more accurate picture of the key stakeholders that directly interact with the system. For example, the stakeholder group of end users can correspond to the five DsiN groups. By categorizing them accordingly, it is possible to analyze and document the needs and requirements of this stakeholder group in a much more differentiated way.

5.2 Privacy Personas

Personas are fictitious individuals representing typical user groups as archetypes [18]. Creating personas is not the same as defining user groups or creating user group profiles. Personas are descriptions of stereotypical individual end users that are derived from the identified user groups in order to emphasize the most important characteristics and details of the respective user group [47]. Usually, as many personas are created as are needed to cover all relevant user groups [64].

The intention behind creating personas is to get a more vivid description of the end users than with the more abstract user group profiles. The basis for the creation of personas can be quantitative or qualitative data collections, online surveys, interviews, or participatory observations of potential end users. Personas for the USP domain, or privacy personas, should emphasize the different ways in which personal data are handled and the different security needs of end users, among other things. Importantly, no discriminatory aspects should be highlighted nor associations made with real people [48]. Cooper, Reimann and Cronin [18] recommend that after the research with end users is complete, the distinct aspects of user behavior be listed as a set of behavioral variables. While demographic variables such as age or geographic location influence behavior, behavioral variables are much more useful in developing effective personas. The most important variables for distinguishing behavioral patterns according to Cooper et al. are activities, attitudes, aptitudes (e.g., education, training), motivations, and skills (related to the product domain and technology). In enterprise applications, behavioral variables are often closely related to job roles. Therefore, they recommend listing the variables separately for each role, i.e., creating a separate persona for each role.

Personas can be used by a system’s design and development team to imagine themselves in the role of current and future end users and better empathize with them. This enables the team to better understand the end users’ needs and play through different usage scenarios from their point of view. By understanding the way the end user thinks and acts, it is easier to make the right design decisions—in overall product development, but also during the design of security features and data protection mechanisms to ensure they become as user-friendly as possible for specific user groups [64].

Various projects in the area of USP [22, 90] have developed templates and examples that support the creation of personas. Templates make it possible to evaluate research data and summarize the collected findings in a structured and clear way so that they can be referred to in the further course of development. Table 3 shows one of eight personas developed for a privacy dashboard that caters to the goals and needs of employees. Figure 4 shows a persona template for representing different user groups of digital ecosystems used to design and develop privacy cockpits. In both examples, some variables of “conventional” templates were adapted or further specified in order to collect and analyze USP needs in a structured way.

Fig. 4 Persona template for end users of digital ecosystems, with fields for age, occupation, personal values, and personal or professional situation, among others

Table 3 Example description of a privacy persona [89]
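As one possible way to capture such a template for structured analysis, the following sketch models persona fields inferred from Fig. 4; the USP-specific fields and the example persona are our own assumptions rather than part of the cited templates:

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyPersona:
    """Structured capture of a privacy persona, loosely following the fields
    named in Fig. 4; the USP-specific fields are illustrative assumptions."""
    name: str
    age: int
    occupation: str
    personal_values: list[str] = field(default_factory=list)
    situation: str = ""  # personal or professional situation
    privacy_needs: list[str] = field(default_factory=list)      # see Sect. 4.1
    data_usage_needs: list[str] = field(default_factory=list)   # see Sect. 4.1
    mental_model: str = ""  # e.g., a named model as in Sect. 3.2

# Invented example persona; no real person is depicted.
persona = PrivacyPersona(
    name="Hannah", age=42, occupation="Editor",
    personal_values=["autonomy", "fairness"],
    situation="Works from home two days a week",
    privacy_needs=["Know who can read my work emails"],
    mental_model="Control-seeking pragmatist",
)
print(persona.name, "-", persona.mental_model)
```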

In addition to persona templates, workshop concepts for supporting the creation of personas have been proposed. Workshops for creating personas are particularly useful if no comprehensive research material is available. For example, a workshop concept for elaborating personas in companies or organizations was developed that involves around 18 participants (including the moderator) and has a duration of two hours [48]. This workshop’s schedule is as follows:

  • Welcome and round of introductions.

  • Presentation of the method.

  • Formation of small groups.

  • Each group works out an organization-specific persona using the persona template (e.g., on a Metaplan wall).

  • Each group presents its persona in a group discussion.

  • Summarization of the results in a feedback round.

6 Summary and Conclusion

In this chapter, we presented three methods regarding the interface between human-centered design (HCD) and usable security and privacy (USP): (1) mental models in security and privacy, (2) USP needs, and (3) stakeholder descriptions using user group profiles and privacy personas. These methods are complementary in that they elicit or collect different types of information, with their own documentation formats and contributions to the design of a digital system.

The methods can play a constructive role throughout the HCD process. They can all be used to specify and understand the stakeholders’ characteristics regarding USP: Mental models enable this by conceptually exploring their subjective perception and assumptions (implicit expectations); USP needs by inventorying their desires and requirements (explicit expectations), and profiles/personas by organizing them into logical groups. In other process steps, goals and tasks can be identified from the USP needs and included in the persona descriptions. User requirements can be derived by analyzing the USP needs. For the design solutions, mental models can inform patterns on end users’ preconceptions, while USP needs provide possible principles. Finally, all methods help to evaluate the design: in terms of how well the system plays into the mental models, in terms of assuring that the USP needs are being fulfilled or overruled by other USP needs and comply with legal standards, and in terms of considering the usage scenarios of the system from the perspective of each persona.

Together, the three methods augment the HCD process with practical approaches to analyzing and assuring that USP is correctly implemented in a system by ensuring that the stakeholders are known, understood, and validated in the system’s design. The additional work involved in applying these methods is manageable and can be justified by their contribution of employing good requirements engineering (RE) and user experience (UX) design practices, with which they integrate perfectly in our experience. Their use makes a positive contribution to a system’s overall quality, not only in terms of constraints (e.g., improved assurance of compliance with data protection regulations), but also in terms of system quality (e.g., because security aspects have been analyzed in more depth) and quality in use (e.g., greater trust in the system). Specifically, we argue that these techniques will help the system to better achieve two core principles stipulated in Article 25 of the GDPR: (1) Data Protection by Design or Security by Design, which postulates the consideration of technical and organizational measures in the system design and development from the very beginning to ensure the best possible privacy and security as well as smooth human–machine interaction, and (2) Data Protection by Default or Security by Default, which postulates that the privacy and security of a system should not rely on end users making good settings, but rather that the default settings should already be as user-friendly, privacy-promoting, and secure as possible.

By presenting these methods, we hope to support the reader with practical knowledge and skills to help them achieve better USP in their systems. We do not claim that these are the only USP-related techniques that can be used in the HCD process, but in our context, we found these methods to be sufficient supplements to the tried and tested RE and UX techniques for achieving our goals. We do encourage the reader to try these approaches for themselves.