
1 Introduction

We present a novel method for validating ideas. In science, one can say that ideas are validated through the “test of time” [4, 18]: the ideas that an author publishes in a research article are, in time, either adopted or forgotten by the community. Here we develop and show how to use a method that, metaphorically, “speeds up time,” so that the ideas are immediately validated with a representative selection of the community, i.e., with experts from relevant fields. Our method is an adaptation of existing methods that are currently used to validate the data that studies are carried out on. We adapt these methods, as explained in Sect. 2, to validate, instead of data, the research ideas and concepts proposed in [9, 10].

We consider the method detailed here a contribution to the research community in general, but perhaps even more valuable are the results that each of its applications would bring in terms of validating particular ideas. For our case here, we find it very important to validate the ideas behind the research program started in [9]. An interesting outcome of our study is that expert opinions are a very good method for bringing out open problems.

Most of this chapter is devoted to applying the expert opinions method in a study validating the following five (types of) ideas or concepts.

First, we validate a rather general idea, of the type often found in position papers, which usually propose or motivate a direction of research. In [9], a case is made for the need to produce measurable evaluations of the usability with which privacy goals of data protection are reached. Having a scale showing how well a product respects the privacy of its users, and how easy it is for the user to understand the level of privacy protection that a product offers, works toward fulfilling the goal expressed in Recital (100) of the European General Data Protection Regulation (GDPR), i.e., that of “allowing data subjects to quickly assess the level of data protection of relevant products and services.”

Second, we validate the definition of Usable Privacy, which extends and adapts the definition of usability from ISO 9241-11:2018 [6] to privacy.

Usable privacy refers to the extent to which a product or a service protects the privacy of the users in an efficient, effective, and satisfactory way by taking into consideration the particular characteristics of the users, goals, tasks, resources, and the technical, physical, social, cultural, and organizational environments in which the product/service is used.

Third, we evaluate a list of 30 Usable Privacy Goals (UP Goals) extracted from the GDPR text. One such goal is, e.g., found in Article 12:

…any information …and communication …relating to processing [to be provided] to the data subject in a concise, transparent, intelligible and easily accessible form, using clear and plain language,

How concise, transparent, or intelligible the form of presentation is can be determined by measurements of efficiency, effectiveness, and satisfaction in a respective context of use. The emphasized words are those that can be interpreted differently depending on the context they are used in and that can result in objective and perceived measurements when evaluated using usability methods.

Validating this list involves looking at properties such as adequacy, completeness, or coverage, e.g., whether the list covers well the GDPR document from which it was extracted. Such lists often appear, for example, in surveys.

Fourth, we validate how appropriate the set of 24 Usable Privacy Criteria (UP Criteria) is; these are meant in [9, 10] to produce measurable evaluations of the usability of privacy that can be translated into scales to be used in certifications. For example, the above goal is associated with a criterion that contains several specific sub-criteria worded so as to produce measurements, such as the one below, which requires measuring efficiency:

How much time/effort/financial and material resources does the data subject need to invest in order to access the information related to the processing of his/her personal data?

Finally, we validate a model called the Usable Privacy Cube (UP Cube) model, proposed in [9] with the purpose of guiding the process of evaluating usability in privacy certifications. The UP Cube model depicted in Fig. 1 has three axes of variability, with the two at the base containing the existing criteria of the European certification body EuroPriSe, reorganized into:

(i) Rights of the data subjects

(ii) Data protection principles

These two axes also capture the two usual perspectives on privacy, i.e.:

(i) The perspective of the users whose private information is being collected (the ones whom the regulations usually seek to protect)

(ii) The perspective of the industry/controllers developing products or services that collect and process private information (the ones who must comply with regulations such as the GDPR and show compliance by going through certifications such as EuroPriSe)

The third, vertical axis is composed of the Usable Privacy Criteria, intended for measuring the usability level of privacy in a specific context of use. The UP Cube model comes with additional concepts beneficial for certification processes, such as allowing/asking for an ordering and prioritization of the criteria on each axis, or the possibility of identifying intersections between the axes.

Fig. 1
The Usable Privacy Cube model from [9]: the Usable Privacy Criteria on the vertical axis, the rights of the data subjects and the data protection principles on the two base axes, all framed by the context of use.
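The cube structure just described, and depicted in Fig. 1, lends itself to a simple data-structure reading. Below is a minimal, hypothetical sketch in Python (all axis values are invented; the model itself is defined in [9]) of the three axes together with the ordering of criteria and the intersections that the model allows for:

```python
from dataclasses import dataclass, field

@dataclass
class UPCube:
    """Toy stand-in for the UP Cube model's three axes of variability."""
    rights: list          # base axis: rights of the data subjects
    principles: list      # base axis: data protection principles
    up_criteria: list     # vertical axis: UP Criteria, ordered by priority
    # Intersections between axes, e.g., which UP Criteria touch a given
    # (right, principle) pair; filled in during an evaluation.
    intersections: dict = field(default_factory=dict)

cube = UPCube(
    rights=["right of access", "right to erasure"],     # illustrative values
    principles=["transparency", "data minimisation"],   # illustrative values
    up_criteria=["UPC-1: time to access information"],  # illustrative values
)
cube.intersections[("right of access", "transparency")] = [
    "UPC-1: time to access information"
]
print(cube.intersections)
```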

2 Method

To validate the types of concepts described above, we employ critical qualitative research [3], taking an interrogative stance toward the experts’ meanings and experiences expressed in the opinions we collect through interviews. What is special about our method is that the participants are brought in not to discuss the data, but to discuss the ideas and concepts under study. Their meanings then represent the data that one analyzes to obtain a validation result.

With this approach we seek to validate the ideas under study within the scientific and practice community [1], as represented by the experts brought into the discussion. With the intention of reaching both an ethical and a substantive validation, one recruits experts who have had experience with specific topics related to the ideas under study and then works to create an environment of cooperation between the researcher and the researched, in a social-constructivist manner [1]. This can be done through interviews [12], where a slice of the research and practice community can present their perspectives. These are then analyzed in order to identify conflicting or agreeing interpretations, as well as possibilities for future development of the knowledge brought by the ideas under study. The technical goal is to bring forth a “disciplinary matrix” [13] of assumptions, theories, and practices shared on the topics around the studied ideas.

For our specific study validating the above five ideas from [9, 10], we are particularly interested in how usability is understood by the broader communities that the participants’ expertise represents. To achieve this, we involve three different theoretical perspectives in a “theory triangulation” manner [14]. We have thus grouped our participants by one of three specific kinds of expertise that we consider important for validating the above concepts, namely into:

(A) The “usability group,” used to study/validate the Usable Privacy Definition and the Usable Privacy Criteria

(B) The “certifications group,” used for the Usable Privacy Cube model

(C) The “law group,” used for the Usable Privacy Goals

2.1 Collecting Interview Data

Conducting interviews to collect our data (i.e., experts’ opinions) is the best-suited method for our case, where we need to explore understandings, perceptions, and mental constructions (in our specific study, these refer to topics related to usability in data protection). Moreover, interviews generate rich and detailed responses when the chosen participants have a personal stake in the study topics; most of our participants work with privacy certifications and standards, and thus most probably need to address on a regular basis aspects covered in the interviews. Since [9] uses methods and terminology from the field of ergonomics of human–system interaction to evaluate the usability of data protection, we also invited experts from this field, especially those who have been working at the intersection of usability and privacy, e.g., in the field known as usable privacy and security.

The interviews were semi-structured: we had a list of questions to guide the conversation, while the participants were encouraged to talk freely on the main topics of the interview. The topics and questions were adapted to the participants’ different kinds of expertise, concerning different aspects of the ideas under study.

All three interview types had two main parts:

(I) The first part is used to learn about the participants’ current understanding of (and their relation to) the ideas under study, without biasing them by presenting the views expressed by these ideas.

(II) The second, and larger, part of the interview starts with presenting the concepts under study, in our case through a short video. The participants are then asked to express their opinions directly in relation to what was presented.

Before the interview, we informed the participants only about the general research topic, as we did not want to influence them with our opinions. We also wanted spontaneous, rather than preconceived, responses, so as to reflect the ingrained knowledge of the respective fields and areas of practice that the participants represent.

We started each interview with a topic common to all three groups, where the participants presented their understanding of and experience with usability in data protection (see [11, Appendix B]). The intentions for this common first part were: (i) to reveal the current understanding of usability in the respective domains, (ii) to see whether there are differences or overlaps between these understandings, and (iii) to collect unswayed opinions from which we could analyze how the participants’ perspectives are (or could be) related to the definition of “usable privacy” that we were validating. Afterwards, we had specific topics for each of the groups:

(A) With the “certifications group,” we discussed topics related to evaluating and measuring usability, as well as the Usable Privacy Cube model, because these participants have knowledge about processes and methods currently used in the evaluation and certification of privacy. For the exact topics and questions, see [11, Appendix E.1 and E.2].

(B) With the “usability group,” we addressed topics related to the definition of usable privacy and the Usable Privacy Criteria, because this group is acquainted with the ISO 9241-11:2018 standard on usability that was used as a basis for the definition of usable privacy in [9]. Moreover, this group knows well the methods and processes for evaluating the usability of digital products in general, as well as the process of formulating goals into evaluation criteria. For the exact topics and questions, see [11, Appendix C.1 and C.2].

(C) The participants in the “law group,” being well acquainted with the GDPR text, were asked to check the completeness of the Usable Privacy Goals list from [10] and whether the goals were correctly chosen to represent usability aspects. For the exact topics and questions, see [11, Appendix D.1].

2.2 Participants

The participants were sampled using convenience and snowball methods. Most of the participants have a composite background, a mixture of computer science, law, and human factors. Common to all is that they work on aspects related to privacy and European data protection applied to IT services/products, and thus all have knowledge of information technology.

The “certifications group” consists of six people working with standards, certifications, and data protection organizations. This is confirmed by their answers to our demographic questions: four out of six have this as their main field of expertise, with the remaining two working for data protection authorities (DPAs). Moreover, all these participants have law/data protection as part of their expertise (one as primary and five as secondary). Their years of experience range from 6 to 32, and the genders are equally represented. Their work experience spans leadership and research for DPAs; consulting, audit, and technical assessment for certification bodies and other governmental organizations; and board membership and other functions in standardization committees. We consider these backgrounds to represent our target group well.

The “usability group” contains seven people working with usability (sometimes also called HCI/IxD/UX), confirmed by their answers: six out of seven have this as their main field of expertise. Their secondary expertise was somewhat more diverse, including law/data protection, privacy and security, cybersecurity, contract design, design thinking, and information systems development from an organizational perspective. Their years of experience range from 3 to 28, among four females and three males. Three of the participants have industry experience: freelance consulting on privacy as a competitive advantage, CEO and head designer of a legal design consultancy, and membership in a task group on usable security and privacy. Even though all participants hold academic positions ranging from PhD student to professor, we consider these backgrounds to represent our target group well.

The “law group” consists of four people, three having law/data protection as their main field of expertise. As the second field of expertise, one chose law/data protection, another chose certifications/ISO standards/regulations, and the other two chose usability/HCI/IxD/UX. The fourth participant chose usability/HCI/IxD/UX as the primary field of expertise and law/data protection as the secondary one. Their years of experience range from 5 to 14, among three females and one male. The balance here is skewed toward academic roles (three out of four), ranging from PhD student to professor, with one participant working for a privacy consultancy firm. For this group, it was more difficult to find people who had knowledge of usability besides privacy and data protection (for a discussion of interdisciplinary HCI and law research, please refer to the chapter “What HCI Can Do for (Data Protection) Law—Beyond Design”).

2.3 Thematic Analysis

We use thematic analysis (TA) for analyzing the opinions from the interviews (representing our data), following [3]. We identify the themes in a “top-down” fashion, where we use data to explore the concepts of interest, related to the ideas being validated. Since the analysis is guided by existing theoretical concepts, as well as by our standpoints, disciplinary knowledge, and epistemology, we adopt a theoretical variant of TA. However, we also employ experiential and constructionist variants of TA. For example, a critical and constructionist analysis is used to identify the concepts and ideas that underpin the assumptions and meanings in our data (e.g., we look at how the field of expertise of the participants influences the way they define and understand usability of privacy). We also use TA to develop a detailed descriptive account of usable privacy and related concepts such as processes and criteria for evaluating usable privacy. At the same time, in an experiential TA fashion, we are interested in the participants’ standpoints toward, and how they experience and make sense of, the presented privacy aspects as related to evaluating and measuring usability.

We adopted a researcher-derived approach while performing our coding. When analyzing the opinions, we focused on identifying answers that could be used to (dis)prove the validity of the five ideas we are studying. The themes were created based on how meaningful the participants’ specific comments are, how many participants mentioned a given aspect, and how strongly an opinion was articulated and argued for.
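As a minimal sketch of one of these signals, namely how many participants mention a given code (all participant IDs and code names below are invented for illustration and are not the study’s actual codebook):

```python
from collections import defaultdict

# Hypothetical coded interview excerpts: (participant, code) pairs
# produced by researcher-derived coding.
coded_segments = [
    ("CertP1", "need-usability-evaluation"),
    ("CertP2", "need-usability-evaluation"),
    ("CertP1", "measuring-where-to-start"),
    ("CertP2", "measuring-where-to-start"),
    ("CertP3", "measuring-where-to-start"),
]

# Count how many distinct participants mention each code -- one signal
# (besides meaningfulness and strength of argumentation) for forming themes.
participants_per_code = defaultdict(set)
for participant, code in coded_segments:
    participants_per_code[code].add(participant)

for code, who in sorted(participants_per_code.items()):
    print(f"{code}: mentioned by {len(who)} participant(s)")
```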

3 The Need to Evaluate and Measure Usability of Privacy

This section shows how to validate the first of the five study ideas from the introduction. Each of the following sections is dedicated to one of the remaining concepts. The results of this study are summed up in Sect. 8.

3.1 Evaluating Usability of Privacy

The first idea under validation—“evaluating and measuring on scales the usability of privacy”—was formulated in interview questions [11, Appendix E.1] addressed specifically to the certifications group, as they are best acquainted with the existing certifications, their needs, and practices. One of the interview questions aimed to elicit whether they find it important to evaluate usability aspects when certifying for compliance with data protection. The answers all fall under a theme that we called “we need evaluations of usability of privacy”:

We need evaluations of usability. All the GDPR certification programs or schemas need to also look at usability. (CertP1)

Moreover, all participants identified several areas where the evaluation of usability is of special importance, or where evaluation should “at least” be done, giving examples of such instances.

One outstanding example (mentioned by three out of the four participants who specified cases where usability is important) is that “usability is important for exercising data subjects’ rights.” Usable transparency and usable intervenability were presented by one of the participants as preconditions for the users to exercise their rights. At this point, we can conclude that a sub-theme representative of the “certifications group” is that:

evaluating usability is important for data subjects to exercise their rights and for data controllers to comply with the transparency principle.

3.2 Measuring Usability of Privacy

The other side of our first idea under study—“measuring on scales the usability of privacy”—is important for making evaluations of usability of privacy more objective and easier to follow, both for the companies wanting to be certified and for the certification organizations and lay persons. During the interviews, the respondents were asked whether they consider it useful to concretely measure and evaluate how well companies wishing to be GDPR compliant deal with the usability of privacy. We also explained to each participant that by measurements we mean some form of scale or score of the type used to indicate energy consumption for home appliances. The theme representative for the answers to this question is: “Measuring is definitely useful but where do we start?”.

Yeah, I think measurement is a good thing. It is something everybody or those who are in the community agree on. (CertP1)

That the community is favorable toward scale-based measurements, such as traffic lights, is also evidenced by research work such as [2, 17] and by the work done on privacy icons [5, 8].
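To make the analogy with energy labels concrete, a scale-based measurement could map a measured score to a traffic-light label. The sketch below is only illustrative; the thresholds are invented and not proposed in [9, 10]:

```python
def traffic_light(score: float) -> str:
    """Map a usability-of-privacy score in [0, 1] to a traffic-light label.

    The thresholds are hypothetical; a real certification scheme would
    derive them from validated measurements of efficiency, effectiveness,
    and satisfaction in a given context of use.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.75:
        return "green"
    if score >= 0.40:
        return "amber"
    return "red"

print(traffic_light(0.82))  # -> green
```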

Even though the respondents were in favor of measuring privacy, they all brought up several challenges. These are indicative of where the community currently stands in terms of measuring (the usability of) privacy and of the possible solutions that the community sees.

One of the discussed challenges for measuring (usability aspects of) privacy is the fact that in privacy we do not deal with “stabilized knowledge” (as one of the respondents put it). One example from a respondent is that of actors such as Stiftung Warentest, who compare products/services based on aspects such as usefulness, functionality, or environmental impact, using a scoring system based on percentages. However, the usability of privacy is not as easy to measure as, for example, the “consistency for the shampoo” (CertP1). One conclusion from several of the participants is that we are still in a rather initial phase regarding the measuring of the usability of privacy, where one still asks basic questions such as:

how do you measure it and what do you measure (CertP2)

Other aspects brought up by the participants that, like the one above, are relevant to some of the five ideas under study are the context of use and the target group:

what do you measure in what kind of context and who is the target group (CertP3)

This relates to the Usable Privacy Cube model [9] and the Usable Privacy Criteria [10], which account for the specific context of use, as well as for the users with their goals and specific environments.

4 The Usable Privacy Definition Adapts ISO 9241-11:2018 Well

Since the definition of usable privacy from [9] adapts the ISO standard 9241-11:2018 to privacy, we validate this definition here primarily with experts from the HCI/IxD/UX community, as these are presumably most acquainted with this ISO standard. During the interviews, we presented the definition and explained how it is relevant for the GDPR, after which the respondents had to answer whether this definition captures their own (or their community’s) current understanding of usable privacy. The multiple-choice answers (i.e., “completely,” “partially,” “not at all”) were followed by an explanation of their choice [11, Appendix C.1]. In addition to asking the usability experts directly to validate our definition, the participants in all three groups were asked to explain their understanding of usability in the context of data protection and to anchor it in the reality of their practice [11, Appendix B]. In order to gather unswayed perspectives, these questions were asked at the beginning of the interview, before presenting our definition; we refer to these as the “unswayed perspectives on usable privacy.”

All participants agreed that the definition adapted from the ISO 9241-11:2018 to privacy captures the current understanding of usable privacy in their field (the majority chose “completely,” while the remaining chose “partially”), e.g.:

I would say that it is a complete coverage of the different concepts that one could expect within the usable privacy domain because I think indeed there is quite a resemblance to the definition that comes from the ISO standard. (UsabilityP2)

Moreover, besides agreeing with the definition itself, one of the participants also appreciated our exemplification of how the definition applies to GDPR.

This was a more marvelous thing to see how well you related to the GDPR and to the ISO standard. (UsabilityP7)

We thus formulate the following theme, into which all answers fit: “We trust the usability definition from the ISO standard 9241-11:2018”.

For the respondents who checked the “partially” choice, we can group the answers under the theme “Instances of the usable privacy definition”, as these are more specific cases or occurrences of aspects that the definition represents at a higher level. For example, the following comment can be mapped to the part of the definition “…taking into consideration the particular characteristics of the users …”.

People would be able to do so [understand the privacy policies if they are written in a non legal way], but in practice they don’t [read] because it just doesn’t work with their lives and it doesn’t match the current goal of just signing up for the service and using it. (UsabilityP5)

The literature on usable privacy and security covers this topic well; e.g., [7, 16] speak of a privacy gap between what users say they would do when asked or tested in the laboratory and what they actually do in a real situation.

5 A Comprehensive List of Usable Privacy Goals

In the same sense as in the previous section, what are called Usable Privacy Goals in [9, 10] can be considered “instances” of the usable privacy definition. Here we validate the UP Goals with the “law group,” since this group is well acquainted with the GDPR text from which the UP Goals were extracted.

The participants were given a list with all 28 UP Goals (see [11, Appendix D.1]) and were asked to choose the ones that they thought relate to usability. We then discussed their choices and opinions about this list, whether they thought it was exhaustive, and whether they could provide additional goals.

After counting the number of goals checked by the participants, the mean is 21.75 choices out of 28, giving a coverage of 77.68%. Thus, the participants generally agree with our UP Goals: LawP1 in particular checked all the goals, whereas LawP3 and LawP4 directly expressed their satisfaction with how well the list covers usability aspects.

…your list was very complete. I cannot think of something that is not on this list. …I think this list here is very broad and very comprehensive regarding usability. I cannot think of anything else. (LawP3)

Therefore, we can derive the following theme (see also [11] for more details): “I am happy with the list of Usable Privacy Goals”.
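As a side note, the coverage figure reported above follows directly from the counts:

```latex
\[
\text{coverage} = \frac{\text{mean number of checked goals}}{\text{number of goals}}
               = \frac{21.75}{28} \approx 0.7768 = 77.68\%.
\]
```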

6 Ways to Meet the Usable Privacy Criteria

Having established the list of usability goals that the GDPR stipulates, the practice in the interaction design field is to operationalize these by turning them into usability criteria formulated as questions [15]. Criteria can be seen as specific objectives to be met by those aiming to reach the set of goals that the criteria relate to. In our case, the Usable Privacy Criteria enable one to assess the privacy-related features that a product or system provides in terms of how much these improve the control that the data subjects have over their data. Examples of commonly used usability criteria (i.e., not specific to privacy) are:

(i) The time used to complete a task (efficiency), such as reading a privacy statement

(ii) The number of errors made when carrying out a given task (effectiveness), such as when choosing desired privacy settings

Usability criteria can provide quantitative indicators of the extent to which, for example, the data subjects understand the implications for their privacy from using a certain technology.
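As an illustration of how such criteria yield quantitative indicators, efficiency and effectiveness can be computed from simple task logs; the task and all numbers below are invented:

```python
from statistics import mean

# Hypothetical logs for the task "choose desired privacy settings".
completion_times_s = [185, 240, 210, 305]  # efficiency: time on task (seconds)
errors_per_attempt = [0, 2, 1, 1]          # effectiveness: errors per attempt

print(f"mean time on task: {mean(completion_times_s):.0f} s")
print(f"mean errors per attempt: {mean(errors_per_attempt):.2f}")
# Lower values on both indicators suggest better usability of the
# privacy-related feature, for this context of use and user group.
```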

The UP Criteria are validated in this study with the “usability group,” as they are most acquainted with the process of formulating criteria to meet goals such as efficiency, effectiveness, and satisfaction.

The participants were given examples of the UP Criteria and were asked to comment on them (see [11, Appendix C.2]). Most participants assessed the UP Criteria as good, using quick and simple statements such as:

I definitely see the reasoning behind it and it makes sense for me. (UsabilityP2)

However, the participants were keen to quickly turn the discussion toward another related topic that seems to preoccupy the community at the moment: establishing standards and recommendations, and creating guidelines or design patterns, to help with meeting such criteria.

That such more concrete guidance is needed was confirmed by the participants in the “certifications group” as well. Their assent is especially valuable, as they are the ones actually performing the evaluation in practice. The focus of these participants was thus more on the particularities of the evaluation, addressing questions such as who would perform the evaluation, what kind of expertise the evaluators would need to have, or which specific HCI methods they should use.

Since the UP Criteria functioned more as a trigger for discussing other more particular aspects of the privacy evaluation processes, a theme that would characterize best the type of feedback that we received from the participants is “Ways to meet the Usable Privacy Criteria”.

7 Usable Privacy Cube Model as an Abstraction of Known and Implied Principles of Privacy Evaluations

Finally, we validate whether the Usable Privacy Cube (or UP Cube) model reflects the existing privacy and data protection evaluation processes, and to what extent (i.e., completely, partially, or not at all). Specifically, we discuss with the participants the following features of the UP Cube model:

(i) The representation of the perspectives of both data subjects and controllers/processors

(ii) The grouping, prioritization, and organization of the criteria

(iii) The interactions between the different criteria

(iv) The context of use (or context of processing, a term often used in the GDPR)

To our question “Does the UP Cube model represent, at a high level, the existing data protection and privacy evaluation processes?,” two out of the five participants chose “Completely,” while three chose “Partially.”

The answer of CertP1 is exemplary:

What I know best is EuroPriSe and the previous data protection seals from ULD [Landeszentrum für Datenschutz Schleswig-Holstein], so it’s quite very much related, but I think not completely. So I would say partially, although on the abstract level will be the same as the Standard Data Protection model that also uses the different axes for something like that. So the general principle I think is quite well known …

The Standard Data Protection Model has the notion of allocating, in a tabular manner, the legal requirements of the German Federal Data Protection Act (BDSG) to the protection goals (data minimization, availability, integrity, confidentiality, unlinkability, transparency, intervenability). A cube (like the UP Cube model) can be understood as a three-dimensional tabulation mechanism, i.e., it represents three tables, each formed by combining two of the axes. Therefore, the model mentioned by the respondent can replace EuroPriSe in the base square of the UP Cube, and it already fits, to some extent, within the two axes of organization into rights and principles.
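A minimal sketch of this “three tables” reading (all cell values invented): given cube cells indexed by the triple (right, principle, criterion), each pairwise projection of the axes yields one table:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical cells of the cube: (right, principle, criterion) triples.
cells = [
    ("access", "transparency", "UPC-1"),
    ("erasure", "transparency", "UPC-2"),
    ("access", "minimisation", "UPC-3"),
]

# Project the 3-D cube onto its three 2-D tables, one per pair of axes.
axes = ("right", "principle", "criterion")
tables = {pair: defaultdict(list) for pair in combinations(range(3), 2)}
for cell in cells:
    for (i, j) in tables:
        tables[(i, j)][(cell[i], cell[j])].append(cell[3 - i - j])

for (i, j), table in tables.items():
    print(f"table over ({axes[i]}, {axes[j]}): {dict(table)}")
```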

As in the case of the Usable Privacy Definition, here too we had questions preceding the presentation of the model, asking whether the certifications or standards that the participants are acquainted with have a high-level model to guide the evaluation process. The conclusion from these answers is that there is no published or well-established model guiding the evaluation process, but there are some main guiding pillars: following the GDPR text or, in the case of standards for evaluating management systems, making risk management or the Privacy Impact Assessment (PIA) the focal point. The theme that we extract from all the answers is that “UP Cube is an abstract representation of known, but implied or covert practices.”

8 Summarizing the Results of the Validation Study

Figure 2 collects the main themes that we identified regarding the validation of the five ideas/concepts under study (full details of this study are in [11]). We present these themes hierarchically starting at the top with the theme regarding the first and most general study concept.

Fig. 2
Overview of hierarchical and lateral themes and their relations. The root theme, “the need to evaluate and measure usability of privacy,” branches into “evaluating usability is important” and “measuring is definitely useful, but where do we start?”, each with further sub-themes.

At the second level, we place two lateral themes about the importance of “evaluating” and “measuring” usability of privacy. For evaluations, one needs clear definitions, and for our purposes, the Usable Privacy Definition is well accepted by the study participants as an adaptation of the usability definition from the ISO standard 9241-11:2018. One important outcome of this study is that the participants were more preoccupied with finding instances of the Usable Privacy Definition, for example, related to “transparency and data protection rights” (the theme appearing at the bottom level of Fig. 2). This confirms the importance of the work on identifying Usable Privacy Goals done in [10]. Another outstanding finding (schematized on the right side of the tree in Fig. 2) is that the experts often wondered “where to start” with the measuring. In the end, it is generally agreed that starting points can be the Usable Privacy Criteria, integrated into the UP Cube model, which is considered a good abstraction of known, but implied, principles of existing privacy evaluations.

We interviewed experts from three relevant fields of practice and research: data protection law, privacy/data protection certifications and standardization, and usability (spanning fields such as Human–Computer Interaction, Usable Privacy and Security, or User Experience). The experts were asked to share their knowledge, understanding, and opinions on the studied concepts. The study plan used one group of experts to address one specific topic; the expertise of the group was thus thought to match the topic. Therefore, the analysis of each of the five concepts is done within the frame of one group. Nevertheless, the topic of usability in privacy, being more general, was addressed by all participants and was therefore analyzed across the groups. Moreover, we sometimes found answers in one group to be relevant for a different study concept than the one in focus. We thus often use such additional opinions to strengthen the findings from a group.

A second design aspect of the interviews was to have two main parts: (i) in the first part, the interviewees presented their opinions without being influenced by the ideas under validation, whereas (ii) in the second part, we presented the study concepts to them and asked them to comment directly on what was presented. The answers from the first part were used to corroborate the responses from the second part, and we found the participants to be consistent in their opinions, the only change being that they adapted their answers to what was relevant for the study concepts.

A characteristic of all the participants was to look for more concrete, particular, and practical aspects to address in the future and to suggest possible solutions. For example, in conjunction with validating the UP Criteria, they pointed out what is yet to be done to meet these criteria, even proposing possible solutions. Some of these overlap with the further work proposed in [9]. Among the answers from the experts, one can find a substantial list of open problems that the community can address.

9 Conclusion

We have shown how to validate five types of ideas (or, as we also call them, concepts) that have been proposed and described in depth in [9, 10]. This provides researchers with a method to use when they need to validate research results similar to the ones we have studied here, namely:

(i) Models (in our case, we validated the components of the UP Cube model).

(ii) Definitions (we validated the Usable Privacy definition, an extension/adaptation of a standard definition of usability; see also the chapter “Data Collection Is Not Mostly Harmless: An Introduction to Privacy Theories and Basics” for privacy definitions).

(iii) Prescriptive lists (our list of Usable Privacy Goals was extracted from a well-known legal text, the GDPR).

(iv) Sets of criteria (the Usable Privacy Criteria were built using HCI methods out of the above list of goals).

(v) General research ideas (e.g., those described in position papers) that are not always as focused or as clearly stated as the above four types; in our case, this idea stated “the need for evaluating and measuring the usability of privacy”.

To do this in a systematic and controlled manner, we devised a method for validating such research ideas. The method that we have presented, and then applied to the above five types of ideas, uses (i.e., combines) several methods for interacting with human respondents. In particular, we take a method that is normally used to analyze data and adapt it to analyze opinions from a special type of respondent, namely experts in a domain relevant to the idea under validation. Thus, for us the experts’ opinions form the data, which we then analyze with respect to the ideas under study. But care must be taken along the way: in selecting the experts to be interviewed, in designing and running the interviews (e.g., guiding the discussions so as not to lose focus on the studied ideas), and in the end when coding and extracting the information from the content produced by the interviews. A much-welcomed side effect of this method is that experts quite often focus on the open problems, i.e., on expanding and going beyond the ideas of the study. This can be a gold mine of research avenues; the application of this method of validating ideas with expert opinions can therefore sometimes prove even more valuable for identifying the next problem that researchers can focus on.