HCI Requirements for Transparency and Accountability Tools for Cloud Service Chains

  • Simone Fischer-Hübner
  • John Sören Pettersson
  • Julio Angulo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8937)

Abstract

This paper elaborates HCI (Human-Computer Interaction) requirements for making cloud data protection tools comprehensible and trustworthy. The requirements and corresponding user interface design principles are derived from our research and review work conducted to address in particular the following HCI challenges: How can users be guided to better comprehend the flow and traces of data on the Internet and in the cloud? How can individual end users be supported in making better-informed decisions on how their data can be used by cloud providers or others? How can the legal privacy principles of transparency and accountability be enforced by the user interfaces of cloud inspection tools? How can the user interfaces help users to reassess their trust/distrust in services? The research methods that we have used comprise stakeholder workshops, focus groups, controlled experiments and usability tests, as well as literature and law reviews. The derived requirements and principles are grouped into the following functional categories: (1) ex ante transparency, (2) exercising data subject rights, (3) obtaining consent, (4) privacy preference management, (5) privacy policy management, (6) ex post transparency, (7) audit configuration, (8) access control management, and (9) privacy risk assessment. This broad categorization makes our results accessible and applicable to any developer within the field of usable privacy and transparency-enhancing technologies for cloud service chains.

Keywords

Usable privacy · HCI requirements · Cloud service · Transparency · Accountability

1 Introduction

Responsibilities of cloud services are complicated and difficult to comprehend, owing to the drive to deliver services to companies and other organisations through heavily automated subscription processes. Privacy principles and questions of accountability are thus hidden in standardized policy documents, which furthermore may change every year or so as new possibilities become available for one or several of the cloud services that together make up the cloud service for a particular organisation. Organisations relying on cloud services for their own activities towards customers, members, or employees thus risk not being able to fulfill their obligations concerning data processing, and thereby also risk their reputation. A whole range of functions is needed to make data processing transparent to data subjects as well as to privacy officers and privacy auditors. However advanced these functions must be, they also have to be usable by private persons, by officers at the companies using the cloud services, and by the cloud service providers, as well as by privacy auditors and data protection board officers. Thus, the user interfaces for such functionality must meet each user at his/her level of competence and responsibility.

This paper reports on requirements collected and structured for the development of user interface guidelines for tools that will enhance transparency and accountability in cloud service chains. The work has been conducted within the EU FP7 project A4Cloud – Cloud Accountability project (www.a4cloud.eu) and is reported in more detail in the A4Cloud project Deliverable D:C-7.1 on “General HCI principles and guidelines for accountability and transparency in the cloud” [1].

The relation of this work to the A4Cloud project is described in the following subsection, after which the general research questions motivating our work and the framework of functional categories employed in the present paper are stated.

1.1 Relations to the A4Cloud Project

The A4Cloud project deals with accountability for the cloud and other future Internet services. It conducts research with the objective of increasing trust in cloud computing by developing methods and tools for different stakeholders, through which cloud providers across entire cloud service value chains can be made accountable for the privacy and confidentiality of information held in the cloud. The A4Cloud stakeholders, for whom methods and tools will be developed, comprise cloud customers in the form of individual end users and business end users (i.e., service providers outsourcing data processing to the cloud), data subjects whose data have been outsourced to the cloud (and who may or may not be individual end users), as well as regulators, such as data protection commissioners, and cloud auditors. The methods and tools that are developed combine risk analysis, policy enforcement, monitoring and compliance auditing with tailored IT mechanisms for security, assurance and redress. In particular, the A4Cloud project is creating solutions to support cloud users in deciding and tracking how their data are used by cloud service providers [2].

A4Cloud solutions thus also include tools for enhancing the transparency of data processing for the different stakeholders, so-called transparency-enhancing tools or transparency tools. The concept of transparency, as we consider it in A4Cloud, comprises both ‘ex ante transparency’, which enables the anticipation of consequences before data are actually disclosed (e.g., with the help of privacy policy statements), and ‘ex post transparency’, which informs about consequences after data have been revealed (what data are processed by whom, and whether the data processing is in conformance with negotiated or stated policies) (compare [3]).

1.2 Research Questions

This paper results from the work aimed at providing a set of general HCI principles and guidelines, which have a basis in human-centered design and which should support User Interface (UI) design for transparency functions in cloud services.

For deriving the requirements for such HCI principles and guidelines, our group conducted research and review work to address in particular the following HCI challenges, which are of relevance for tools for different cloud stakeholders:
  • How can users be guided to better comprehend the flow and traces of data on the Internet and in the cloud?

  • How can individual end users (i.e. data subjects) be supported in making better-informed decisions on how their data can be used by cloud providers or others?

  • How can the legal privacy principles of transparency and accountability be enforced by the user interfaces of A4Cloud or other transparency and accountability tools?

  • How can the user interfaces help users (in particular individual end users) to reassess their trust/distrust in services?

This paper summarizes the work conducted to address these challenges and the results achieved, in the form of HCI requirements for the user interfaces of tools for accountability and transparency in the cloud service ecology.

1.3 Framework of Functional Categories for Presenting the Derived Requirements

In order to make use of the requirements brought forth by our data collection and analysis, they have to be placed in a framework of functional categories, where each category could conceivably be handled by different but interacting tools within a future, highly integrated framework for accountable and transparent cloud services. The set of broadly defined functions used in the concluding presentation in Sect. 5 has been worked out in the planning and on-going work of A4Cloud and finds close parallels in other projects. In short, we categorize the transparency and accountability functionality into the following nine groups:
  • Ex ante transparency (policy display incl. policy mismatches, mediation of trustworthiness or risks to individual end users).

  • Exercising data subject rights (permitting data subjects to access or to delete, block, correct their data).

  • Obtaining informed consent (from data subjects for the processing of their personal data).

  • Privacy preference management (helping individual end users to manage their privacy preferences).

  • Privacy policy management (for business end users).

  • Ex post transparency (incl. display of policy violations and help with risk mitigation).

  • Audit configuration (help with settings in regard to the collection of evidence).

  • Access control management.

  • Privacy risk assessment (for business end users).

1.4 Outline

The remainder of this paper is structured as follows:

Section 2 presents related previous work on HCI principles and guidelines for Privacy-Enhancing Technologies (PETs) and privacy-enhancing identity management including transparency-enhancing tools and functions. It is discussed how far these guidelines can also be applied within the cloud context, and what the limitations of these guidelines are.

Section 3 motivates the choice of HCI challenges addressed in our approach, mostly as an answer to the limitations found in Sect. 2. It also discusses the research questions that those challenges imply in more detail.

Then Sect. 4 presents and motivates the different research methods we applied when addressing these HCI challenges and deriving HCI requirements. The actual research work has been reported in various publications and is not repeated here (see esp. [4, 5, 6]; for a comprehensive summary, see A4Cloud Deliverable D:C-7.1 [1]).

Section 5, “Mapping Requirements to Functional Categories”, presents the “grand total” of elicited HCI requirements by sorting them into the nine functional categories mentioned in Sect. 1.3. Moreover, for each requirement, the observation(s) behind the requirement is (are) mentioned, as well as suggestions for possible HCI design guidelines.

Finally, Sect. 6, “Concluding Remarks”, provides conclusions of this work and an outlook on future HCI work.

2 Related Work

In this section, we present an overview of related HCI principles, recommendations and guidelines for usable privacy and security, which are based on earlier research and can be of relevance for cloud technologies. We point out to what extent existing guidelines need further enhancement for the context of accountability and transparency in the cloud.

HCI guidelines for both security and privacy technologies have to address specific HCI challenges, as noted first by [7] for security, and later by many others for privacy:
  • Security and privacy protection are typically secondary goals for ordinary users;

  • They contain difficult concepts that may be unintuitive to lay users;

  • True reversal of actions is not possible.

Jakob Nielsen published one of the most frequently cited collections of general HCI principles, his so-called 10 Usability Heuristics for User Interface Design [8], which are called “heuristics” because they are rules of thumb rather than specific usability guidelines. These HCI heuristics, originally derived from an analysis of 249 usability problems, comprise: “Visibility of system status”, “Match between system and the real world”, “User control and freedom”, “Consistency and standards”, “Error prevention”, “Recognition rather than recall”, “Flexibility and efficiency of use”, “Aesthetic and minimalist design”, “Help users recognize, diagnose, and recover from errors”, and “Help and documentation”. Johnston et al. expanded and modified Nielsen’s list of principles to derive criteria for successful HCI applied in the area of IT security (“HCI-S”; [9]).

Further relevant HCI guidelines for aligning security and usability in secure applications were proposed, for instance, by Yee [10] and by Garfinkel [11]. Even though these guidelines relate to secure applications, some of them can be interpreted and adapted for privacy-enhancing transparency and accountability. For instance, Yee’s guideline of “Explicit authorization”, stating that “a user’s authority should only be granted to another actor through an explicit user action understood to imply granting”, can be translated into the guideline that informed consent to personal data disclosure should require an explicit user action understood to imply disclosure. Similarly, his principles of “Visibility” and “Revocability” of authority could be applied to personal data disclosures. Dhamija and Dusseault [12] discussed flaws of identity management that pose HCI and security challenges, and provided some HCI-related recommendations on how to address them, partly based on Yee’s guidelines.

Important domain-specific HCI requirements can be derived from privacy legislation. In the EU FP5 project PISA (Privacy Incorporated Software Agents), Patrick et al. [13, 14] studied in detail how legal privacy principles derived from the EU Data Protection Directive 95/46/EC [15] can be translated into HCI requirements, and which design solutions could meet those requirements. Their research focused on the legal privacy principles of (a) transparency, (b) purpose specification and limitation and (c) data subject rights, as well as (d) informed consent as a basis for legitimate data processing. As the project concluded, these legal principles “have HCI implications because they describe mental processes and behaviours that the data subject must experience in order for a service to adhere to the principles. For example, the principles require that users understand the transparency options, are aware of when they can be used, and are able to control how their personal data are handled. These legal requirements are related to mental processes and human behaviour, and HCI techniques are available to satisfy these requirements” [13]. The HCI requirements that were derived therefore comprised requirements on comprehension (to understand, or to know), consciousness (to be aware of or to be informed), control (to manipulate, or be empowered) and consent (to agree) in relation to the selected legal principles.

As a possible HCI solution for achieving informed consent and (ex ante) transparency, the PISA project proposed the concept of ‘Just-In-Time-Click-Through Agreements’ (JITCTAs), which, instead of presenting complex and lengthy service terms, confirm the user’s understanding or consent on an as-needed basis. JITCTAs therefore provide small agreements that are easier for the user to read and process, and that facilitate a better understanding of the decision being made in context.

The Art. 29 Data Protection Working Party has, in its opinion on “More Harmonised Information Provisions”, recommended providing information in a “multi-layered format under which each layer should offer individuals the information needed to understand their position and make decisions” [16]. It suggests three layers of information provided to individuals: the short privacy notice (basically corresponding to a JITCTA), the condensed notice, and the full privacy notice. The short notice (layer 1) must offer individuals the core information required under Article 10 of the EU Data Protection Directive 95/46/EC, which includes at least the identity of the controller and the purpose of processing; in addition, a clear indication must be given as to how the individual can access additional information. The condensed notice (layer 2) adds all other relevant information required under Art. 10, such as the recipients or categories of recipients, whether replies to questions are obligatory or voluntary, and information about the data subject’s rights. The full notice (layer 3) includes, in addition to layers 1 and 2, “national legal requirements and specificities.”
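For illustration, the three notice layers can be thought of as nested data. The following sketch is a hypothetical Python rendering that only mirrors the layer contents described above; the field names and values are our own illustrative assumptions, not a standardized schema or an A4Cloud artifact.

```python
# Hypothetical sketch of a multi-layered privacy notice, mirroring the
# Art. 29 Working Party's three layers described above. Field names
# are illustrative assumptions, not a standardized schema.
short_notice = {                      # layer 1: core Art. 10 information
    "controller_identity": "Example Corp., Stockholm",
    "processing_purposes": ["order fulfilment", "billing"],
    "more_info": "link to condensed notice",
}

condensed_notice = {                  # layer 2: all other Art. 10 details
    **short_notice,
    "recipients": ["payment provider", "shipping partner"],
    "replies_obligatory": False,      # whether answering questions is mandatory
    "data_subject_rights": ["access", "rectification", "objection"],
}

full_notice = {                       # layer 3: adds national specificities
    **condensed_notice,
    "national_legal_requirements": "member-state specific provisions",
}
```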

In the EU FP6 PRIME project on “Privacy and Identity Management for Europe”, the legal privacy principles and HCI requirements from the PISA project were built upon, along with HCI requirements for socio-cultural privacy principles, to derive proposed UI design solutions for privacy-enhancing identity management systems [17].

The PRIME project also followed the Working Party’s recommendation to use multi-layered privacy notices and the concept of a JITCTA in its design proposals for “Send Data?” dialogue boxes for obtaining the user’s informed consent. However, a problem with click-through agreements, including JITCTAs, is that users tend to automate behaviours, so that the individual parts of an action are executed without conscious reflection [18]. The PRIME HCI work package therefore also developed the alternative concept of Drag-And-Drop Agreements (DaDAs), by which users express consent by moving graphical representations of their data to a graphical representation of the receiver. This forces users to make better-informed decisions while also allowing the system to detect erroneous conceptions of the user if data are dropped on the wrong recipient (e.g., a credit card symbol dropped on the web shop symbol instead of on the payment service symbol) [19].
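For illustration, the core check behind such a drag-and-drop agreement can be sketched as follows. This is a minimal sketch under our own assumptions: the mapping of data items to legitimate recipients is a made-up example, not part of the PRIME design.

```python
# Hypothetical sketch of the consent check behind a Drag-And-Drop
# Agreement (DaDA): consent is only registered when the user drops a
# data symbol on a recipient that legitimately receives that data type.
# The mapping below is an illustrative assumption.
EXPECTED_RECIPIENT = {
    "credit_card_number": "payment_service",  # not the web shop itself
    "shipping_address": "web_shop",
}

def handle_drop(data_item: str, recipient: str) -> str:
    """Return 'consented' on a valid drop, or a hint on a misconception."""
    expected = EXPECTED_RECIPIENT.get(data_item)
    if expected is None:
        return f"no known recipient for {data_item!r}"
    if recipient == expected:
        return "consented"  # explicit user action implies informed consent
    # Wrong target: the UI can correct the user's mental model in place.
    return f"{data_item!r} goes to {expected!r}, not {recipient!r}"

print(handle_drop("credit_card_number", "web_shop"))         # misconception
print(handle_drop("credit_card_number", "payment_service"))  # consent
```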

Based on experiences gained from developing UIs for privacy-enhancing identity management systems over several years, the EU FP7 project PrimeLife provided an experience report, “Towards Usable Privacy Enhancing Technologies: Lessons Learned from the PrimeLife Project” (Graf et al. 2011), which discusses HCI fallacies and provides HCI heuristics, best practice solutions and guidance for the development of usable PETs, all of which are of relevance for A4Cloud. The report starts by identifying major HCI fallacies that were experienced, including the problem many users have in differentiating whether data are stored on the user side (under the user’s control) and in comprehending to which network entities personal data flow during online transactions. Furthermore, the mediation of trustworthiness, intercultural differences and a well comprehensible UI terminology are challenges to be taken into consideration. Many of the HCI issues experienced are mental model issues, which are difficult to solve for novel PET concepts that are unfamiliar to users; this is especially true for those PETs for which no obvious real-world analogies exist. Based on those experiences and lessons learned, the report provides HCI heuristics for PETs, which adapt, extend and exemplify Nielsen’s classical list of usability heuristics for the PET domain. Finally, the report also provides evaluation guidelines for PET user interfaces and discusses what needs to be considered in the preparation and performance of usability tests.

In particular, PET-USES (Privacy-Enhancing Technology Users’ Self-Estimation Scale) is introduced, which was developed in PrimeLife as a post-test questionnaire that enables users to evaluate PET user interfaces both in terms of the primary task and of specific PET-related secondary tasks [21].

Complementing the HCI heuristics, the PrimeLife project also developed HCI patterns for PETs, which provide best practice solutions (“design patterns”, after Alexander et al. [22]) for PET user interface design [23]. Also relevant is the ongoing privacy design patterns project described by Doty and Gupta (http://privacypatterns.org/).

Finally, a couple of recent reports on trust marks and privacy seals should be mentioned, as these also deal with HCI questions for transparency on the web. The recommendations in ECC-Net’s trust mark report ([24], pp. 54ff) do not give specific details on graphics and interaction design, but list a number of criteria that should be easy for consumers to check, and the report notes that trust marks are presently neither well known nor easily identified. Besides being easily identifiable, the standards of a trust mark must also be easily understood. The report further stresses a number of criteria that “give a trust mark the ‘added value’ beyond the legal requirements that companies are forced to meet anyway.” These criteria are often directly related to HCI questions: easy access to member check-ups, access to multilingual information and service, access to each trader’s internal complaint handling system and also to alternative dispute resolution, especially online dispute resolution. These recommendations parallel much of the work done in PRIME and PrimeLife on the Data Track. Referring to the ECC-Net report and some other sources, ENISA [25] identifies five challenges in communicating privacy seals to ordinary web users and presents directions for finding solutions to these challenges.

While the existing HCI principles and guidelines presented in this section are still valid and applicable to cloud PETs, further work is needed to elaborate and derive HCI principles and guidelines that specifically address the HCI challenges of transparency and accountability technologies in the cloud context. Most HCI fallacies identified by the PrimeLife project in regard to users’ comprehension of their personal data flows and traces, trust in PETs, and comprehension of novel PET concepts will also be important to address when designing user interfaces for privacy-enhancing transparency and accountability tools for the cloud. Legal privacy principles may be interpreted differently for the cloud and are currently being re-discussed under the proposed reform of European data protection legislation. Therefore, we have specifically researched related HCI challenges concerning the comprehension of personal data flows, PET concepts such as policy notices, trust, and the interpretation of legal privacy principles in the cloud context, in order to derive further specific HCI principles and guidelines for the cloud.

3 HCI Challenges and Related Research Questions

This section presents the HCI challenges and related research questions that we regard as fundamental and which therefore have motivated the research methodology accounted for in Sect. 4.

As discussed above, previous HCI research has revealed that many users have problems differentiating whether data are stored on the user side (under the user’s control) or on a remote server side, and comprehending to which network entities personal data flow during online transactions (e.g., [23]). Evoking the correct mental model in regard to where data are transferred to and where they are processed will be a particularly pertinent challenge for cloud computing, because one or several chains of cloud service providers may be involved:

How can users be guided to better comprehend the flow and traces of data on the Internet and in the cloud?
  • What are the mental models of different stakeholders and types of users in regard to the distribution of personal data in a complex cloud ecosystem?

  • What HCI concepts are suitable for evoking the correct mental models of data flows and traces?

These questions will be significant both for ex ante transparency technologies, e.g. in the form of privacy policy tools, and for ex post transparency-enhancing technologies, which allow users to track their data in the cloud.

However, for supporting individual users in making decisions on how their data are used by cloud providers, it has to be taken into consideration that previous research has shown that lay users often do not behave rationally with regard to decisions on personal data disclosure [26, 27], meaning that we cannot assume that they will do so when deciding on disclosing or outsourcing their data to the cloud. In order to design usable tools that offer transparency and accountability for users’ data in the cloud, we have to understand their attitudes, behaviours and mental models in relation to cloud services. Such an understanding can help reveal what these users value, what they think is important, which useful features can be included in user-friendly tools for transparency and accountability, and how these features can be designed so that they are valued and well understood by individual users.

When it comes to business end users, their security officers face the challenge of generating and managing access control rule sets for controlling the use of data in the cloud.

These aspects have motivated us to research also the following:

How can end users be supported to make more informed decisions on how their data can be used by cloud providers or others?
  • How much cognitive effort or time are people willing to spend in order to understand what happens to different types of personal information in the cloud?

  • How can the user interfaces of ex ante transparency tools be designed to support and motivate users to take more rational and informed decisions?

  • How can service providers obtain usable access control rule sets for data outsourced to the cloud that reflect the organisation’s access control policy and are easy to understand and manage?

The EU Data Protection Directive has defined legal principles for providing transparency and control to users. In the context of cloud computing, the existing legal requirements may partly need re-interpretation. Moreover, new legal principles for providing better transparency and control for individual cloud users and for increasing the accountability of cloud providers are currently being discussed as part of the proposed EU data protection regulation [28]. Therefore, a third HCI challenge that we addressed, which is also related to the two HCI challenges mentioned above, is:

How can the legal privacy principles of transparency and accountability be enforced by the user interfaces of A4Cloud tools?
  • What legal privacy principles for transparency and accountability for the cloud need to be taken into consideration by the HCI design of A4Cloud tools?

  • How can legal privacy principles for transparency and accountability for the cloud be mapped to HCI principles and solutions?

Finally, as concluded in the recent ECC-Net [24] and ENISA [25] reports, trust plays a key role in the acceptance and uptake of PET solutions. The ISO 25010:2011 definition of trust reads: “Degree to which a user or other stakeholder has confidence that a product or system will behave as intended” [29]. From this definition it may appear as if people’s notion of the trustworthiness of an electronically delivered service were simply an issue of how well they trust an automaton. However, there is ample evidence that trust stems from a range of sources – previous encounters with the service provider (possibly in non-electronic form), the general reputation of the service provider’s brand, as well as statements made by friends – rather than from any direct understanding of the privacy and security reliability of the service in question. In addition, users may lack trust in novel PETs whose functionality may not fit their mental models of how the technology works. For this reason, one more challenge to be tackled is:

How can the user interfaces help users (in particular individual end users) to reassess their trust/distrust in services?
  • What are suitable HCI means for mediating trust in trustworthy services?

  • How can user interfaces connect to known reliable sources for trust?

Thus, these were the questions motivating our research. In the next section we discuss the methodology that was developed to begin addressing these challenges. Several methods were used, each generating a set of specific requirements and HCI design suggestions. Section 5 provides a comprehensive summary of the requirements (and, in brief, of the proposed HCI principles and design suggestions) by presenting them in relation to the functional categories identified in Sect. 1.3.

4 Research Methods

This section discusses and motivates our research methodologies and ethical considerations.

4.1 Human-Centred Design

We strived to follow a human-centred design approach for eliciting and testing HCI requirements and guiding the development of user interface design principles. Human-centred design is defined by ISO 9241-210 as “an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques” [30]. In the research reported here we have elicited and refined user requirements and related HCI principles through methods including stakeholder focus groups, controlled usability testing and other methods described in the subsection below.

For the choice of methods, we have taken into consideration that important concepts for the comprehension of transparency and related risks, such as what information is stored and where it is processed, are usually difficult for lay users to understand, while other end user groups, such as regulators or security administrators, usually have a clearer understanding. Therefore, different user groups require different interfaces and interaction paradigms. This also means that the different user groups have to be involved through different approaches to human-centred design. For this reason, we used controlled experiments and mock-up-based evaluations in addition to focus groups to explore the needs of lay users, while the needs of professional stakeholder groups were mainly investigated by means of stakeholder workshops and focus groups. The controlled experiments and mock-up-based evaluations aimed to analyse lay users’ mental models of cloud-related technical concepts, since our earlier work has shown that many HCI issues are mental model issues which are difficult to solve for novel PET concepts [20, 31].

The following subsection briefly describes the methodologies applied and the reason they were regarded as suitable approaches for eliciting HCI requirements. The sequence of methods applied during the first 12 months of the project is also depicted in Fig. 1.
Fig. 1.

Methods used during A4Cloud’s first project year for eliciting HCI requirements and principles

4.1.1 Methods Employed

This subsection discusses and motivates the different research methods that we have applied when addressing these HCI challenges and deriving HCI principles while following a human-centred design approach.

Stakeholder workshop. Stakeholder workshops provide the opportunity for active face-to-face interactions between different influential actors who can express their opinions and needs for a system being developed. This method is strongly encouraged during the initial design process, as a way of ensuring that the needs of those who might be impacted by the system are taken into account and of working towards a common vision of the system [32]. An important step in this method is identifying the stakeholders that can have a say in the development of the system. Typically, one stakeholder representative is selected from each user group and invited to participate in a workshop.

Once the stakeholders have been identified, different approaches can be followed during the meeting to incite discussion, promote the exchange of ideas, and identify the needs of the different user groups represented by the invited stakeholders. Such approaches include general discussions, moderated interviews, focus groups, as well as Open Space [33] and World Café [34] methodologies, among others. Depending on the approach taken and the number of participants, the discussions might start from one main question (as is often the case with Open Space) or from a series of questions. Participants might also be divided into groups identifying challenges related to different themes, or they can all exchange ideas while a moderator leads the discussion. The results of the discussions can then be compiled, interpreted, and expressed as a set of system requirements. Follow-up interviews or feedback rounds with participants can also be set up in case the researchers need to complement or correct the information acquired during the workshop session.

We carried out a stakeholder workshop concentrating on the HCI aspects of cloud services. Participants came from the Swedish Data Inspection Board, the Swedish branch of the European Consumer Centre Network, IT service staff and planners from the region of Karlstad (the university, the public health care provider, the municipality), and from two IT companies. The purpose of running the workshop was to discover HCI requirements; these initial requirements also served as the basis and motivation for our subsequent experiments and tests. The workshop was divided into two main sessions, one in the morning and one in the afternoon. A moderator encouraged participants, without biasing the discussions, to elaborate on common questions, concerns and decisions regarding cloud computing services, such as client opinions, the considerations that are important when acquiring cloud services, the decision processes of business and individual users around adopting and using cloud computing services, and the issues encountered during the use of these services. Observers were assigned to take notes and occasionally ask questions to clarify points or to keep discussions alive. During the afternoon session participants were divided into two parallel groups, where one group’s discussions concentrated on business end users and the other group’s on individual end users. The workshop notes were later compiled into a list of HCI requirements and principles.

Focus groups. Focus groups are appropriate for bringing together a cross-section of users so that they can collaboratively unveil their opinions and needs regarding particular challenges foreseen in the design of a system. Moderators of a focus group can stimulate participants to discuss these opinions with the other group members by using different approaches, such as asking direct questions to participants, encouraging brainstorming, instructing them to work with various probes, etc.

To understand the different ways in which individuals with different levels of familiarity with technology perceive cloud services, comprehend the flow of their personal data on the Internet and in the cloud, and understand the vulnerabilities of Internet services, we conducted three focus group sessions (including a pilot session with undergraduate students) with participants who were considered expert and non-expert users.

The group of expert users consisted of 16 Ph.D. students in computer science from different Swedish universities (and of different nationalities) who were taking a graduate course on the topic of privacy-enhancing technologies. The non-expert users consisted of a group of 15 individuals of different age ranges and cultural and educational backgrounds, who were participants in a project for personal development towards employment opportunities.

Semi-structured interviews. Semi-structured interviews are interviews in which not all questions are designed or planned in advance, allowing the interview to follow and explore new directions as they come up during the interview process [35].

Semi-structured interviews were considered a good method for capturing the challenges system administrators face in managing access control lists, and how those challenges are commonly handled in their field of work. The results were used as the basis for Experiment 4, described below.

Controlled experiments. In experimental studies, so-called dependent variables of interest are identified. The factors in the study, or independent variables, can then be controlled to check their level of influence on the variables of interest. By performing experiments with control groups, different hypotheses about people’s behaviours, actions, attitudes, opinions and performance can be tested. The ecological validity of an experiment measures the extent to which the setup of the experiment matches real-world situations.

We designed and carried out four controlled experiments to study the mental models, motivations, and needs of lay users when subscribing to cloud storage services. To improve the ecological validity of the experiments, participants were led to believe that the cloud service was a real service, but were informed afterwards about the experimental nature of the study.

Experiment 1: Understanding willingness to distribute personal data to cloud services. Participants: 120. Hypothesis: End users are more willing to release personal data to a cloud service in exchange for observable valuables (such as free cloud storage). This was confirmed, especially for data that end users themselves perceive as non-sensitive.

Experiment 2: Framing and terminology. Participants: 190. Hypothesis: End users’ willingness to release personal data depends on how the cloud service expresses benefits at the moment of releasing data. This hypothesis was indeed confirmed.

Experiment 3: Desired cloud service features. Participants: 179. Hypothesis: End users have preferences over certain features for managing the data they release to a cloud service. Results in short: users did not show a strong willingness to spend cognitive effort on setting privacy controls, but transparency features were appealing.

Details and results of the first three experiments are found in Angulo et al. [5].

Experiment 4: A between-subjects experimental design was used to gather evidence for the accuracy of the metrics proposed by Beckerle and Martucci [6]. This design was chosen because a control group was needed to compare the results of participants who were assisted by a tool providing measurements of the security and usability of their access control rule sets with the results of participants who did not have such support. Participants: 12. Eight were non-experts regarding access control configuration and management; the other four were IT support professionals (experts) who manage access control mechanisms on a regular basis. Four experienced system administrators were then asked to rank the results. This evaluation showed that the participants in the group that had the support of our formal tool, rule sets, and metrics performed significantly better than those in the group without this support (t(3.629) = 7.621, p = 0.007; details of this fourth experiment are published in Beckerle and Martucci [6]).
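For illustration, such a between-subjects comparison can be computed with Welch’s t-test, which does not assume equal variances and yields the non-integer degrees of freedom seen in the result above. The sketch below uses SciPy with placeholder scores; the data are illustrative assumptions, not the study’s measurements.

```python
# Hypothetical illustration of a between-subjects comparison with
# Welch's t-test, as used when comparing a supported group against
# an unsupported control group.
from scipy import stats

# Placeholder quality scores (NOT the study's data): one value per
# participant, e.g. expert rankings of the produced rule sets.
supported = [8.5, 9.0, 7.5, 8.0, 9.5, 8.5]    # group with tool support
unsupported = [5.0, 4.5, 6.0, 5.5, 4.0, 5.0]  # control group

# equal_var=False selects Welch's t-test, whose degrees of freedom are
# estimated from the data and are typically non-integer -- which is
# why reports show values such as t(3.629).
t_stat, p_value = stats.ttest_ind(supported, unsupported, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```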

Usability evaluation. Usability testing is a technique that measures the actual performance of users when trying to achieve a set of tasks with a given user interface. During a usability test session, participants are given a set of tasks, and a test moderator guides each participant through the tasks while observing and annotating the participant’s interactions with the interface. The moderator also encourages participants to express aloud their opinions, actions and reactions to the prototype, an approach commonly referred to as the “think aloud” protocol (Jaspers et al. [43]).

Usability testing of low-fidelity prototypes was considered a suitable method for our purposes because it has the advantage of letting lay users communicate their needs, opinions and expectations about new technologies. Expressing these in the open format of the “think aloud” protocol rather than through questionnaires is important, as such users might not be very familiar with the terminology and technologies related to cloud computing, and might not have a clear understanding of how Internet technologies and data handling work either. (We also used the technique of counterbalancing to minimize the introduction of confounding variables; Rubin and Chisnell [44].)
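For illustration, one common counterbalancing scheme simply rotates the task order across participants so that every task appears in every position equally often. The sketch below is a hypothetical example; the task names are placeholders, not the tasks used in our tests.

```python
# Hypothetical sketch of rotation-based counterbalancing: each
# successive participant gets a cyclically shifted task order, so that
# order effects are spread evenly across tasks.
TASKS = ["task_A", "task_B", "task_C"]  # placeholder task names

def counterbalanced_order(participant_index: int, tasks=TASKS):
    """Return the task order for a participant by cyclic rotation."""
    shift = participant_index % len(tasks)
    return tasks[shift:] + tasks[:shift]

for p in range(6):  # first six participants
    print(p, counterbalanced_order(p))
```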

The objective of the usability test was to examine whether graphical illustrations of data flows can improve lay users’ understanding of their personal data traces. Earlier studies of a transparency-enhancing tool called the Data Track, carried out during the PRIME and PrimeLife projects (Pettersson et al. [45]; Wästlund and Fischer-Hübner [46]), had revealed how difficult it is for lay users to comprehend the flow and traces of their data on the Internet. In addition, recent work has shown a privacy-friendly mechanism by which data can be stored remotely, for instance at a cloud service, but still under the user’s control [47]. Therefore, in A4Cloud we tested alternative HCI concepts consisting of graphical UI illustrations of where data are stored and to which entities data have been distributed. Based on the usability heuristic suggesting a “match between the system and the real world” [8], graphical illustrations of data storage and data flows have the potential to display data traces more naturally, as in real-world networks, as discussed in the PRIME deliverable D06.1.f, Sect. 5.8.1 [17]. Previous research also suggests that network-like visualizations provide a simple way to understand the meaning behind some types of data [48, 49], and other recent studies report that users appreciate graphical representations of their personal data flows in the form of links and nodes [50, 51]. The results of the usability evaluations that we performed in A4Cloud on the graphical user interfaces of the Data Track trace view, depicted in Fig. 2, are reported in Fischer-Hübner et al. [4].
Fig. 2.

The user interface of the Data Track trace view that was subject to usability evaluations.

Eliciting and mapping legal requirements. Legal principles will have to be enforced by the user interfaces of transparency and accountability tools. Such principles were elicited in four ways: from the stakeholder group workshops; from a review of relevant legal documents (including the EU Data Protection Directive 95/46/EC [15], the newly proposed EU data protection regulation [28], and relevant opinions published by the Art. 29 Data Protection Working Party [16, 52]); from interviews with legal experts from the A4Cloud project; and from input from the A4Cloud advisory board. The mapping of these legal principles to HCI principles and proposed design solutions was partly based on, and partly extended, the work of the PISA project [13] and the PrimeLife HCI patterns [23], as well as other relevant HCI guidelines and heuristics. Detailed results are published in Fischer-Hübner et al. [4].

Eliciting requirements from trust issues mentioned in studies and surveys on cloud and Internet use. For eliciting HCI requirements for mediating the trustworthiness of services, including cloud services once they have (in the future) been evaluated by transparency-enhancing tools, a literature review was conducted. Many studies on Internet services and their users, in particular those involving individual end users, have focused on the degree of confidence people have in e-commerce web sites and, more recently, in cloud services. Our literature review concentrated on a number of studies from which it was possible to crystallise HCI requirements and, to some extent, map them onto tentative HCI principles or UI examples. Many of the studies refer to other works on trust, but it was not within our scope to follow up on every such work. Rather, one or a few references for an interesting trust-related phenomenon were deemed sufficient to motivate the discussion of the phenomenon in question and its possible inclusion in the collection of requirements; this is reflected in Sect. 5. We acknowledge that for many of the “observations” derived from publications, more references could have been provided.

4.2 Ethical Considerations

Before the work with external participants in tests, experiments, focus groups, and workshops commenced, a description of the planned work and its relation to the A4Cloud project at large was sent to the local board for ethical evaluations at Karlstad University. The plan described the recruitment of participants for focus groups, workshops, tests, and experiments, in which we only involved “adult (healthy) volunteers” who provided their informed consent. The plan furthermore described routines for handling and anonymising data at the earliest possible time, for providing transparency, and for guaranteeing all participants the rights they should have as data subjects. As no sensitive data were obtained, and the rules of the Swedish data protection act and the EU Data Protection Directive 95/46/EC were clearly followed, the board found no ethical or legal privacy concerns.

5 Mapping HCI Requirements to Functional Categories

In order to propose a concise set of HCI principles and guidelines for cloud service chain transparency tools, we group the HCI requirements and related HCI principles obtained from the research activities described in Sect. 4 into general categories related to the required functionality of possible accountability and transparency tools. This categorization is at a high functional level and is generally applicable to tools developed to make cloud service chains transparent and service providers accountable.

From the analysis provided in A4Cloud and other projects, we recognize functionality for:
  1. Ex ante transparency (policy display incl. policy mismatches, mediation of trustworthiness or risks to individual end users);
  2. Exercising data subject rights;
  3. Obtaining consent;
  4. Privacy preference management (helping individual end users to manage their privacy preferences);
  5. Privacy policy management (for business end users);
  6. Ex post transparency (incl. display of policy violations and help with risk mitigation);
  7. Audit configuration (help with settings in regard to the collection of evidence);
  8. Access control management;
  9. Privacy risk assessment (for business end users).
Below, in Sects. 5.1–5.9, we map the obtained HCI requirements and related HCI principles and design suggestions onto these functional categories. This mapping shows, for each type of functionality, which HCI requirements need to be met and which HCI principles should be followed during UI design. For each requirement we list one or several observations noted in our studies, and for each observation we attempt to formulate an HCI principle as well as one or several suggestions for UI design solutions. (To lessen the complexity of the presentation, we do not refer to our own published results; the interested reader is directed to [1].)

5.1 Ex Ante Transparency

Ex ante transparency tools should meet the HCI requirements motivated and discussed in the following subsections.

5.1.1 Make Explicit Data Disclosures and Implicit Data Collections Transparent

This requirement is based on four observations made in our focus groups and in the usability test.

Focus group observation: Non-expert users believe that acting entities are more related to each other than they might be in reality, and tend to believe that personal information is distributed among many of the entities represented: “All internet companies can share information about me”. HCI Principle: The interface should clearly show which entities could get hold of which kind of personal information. Design suggestion: Create a network visualization that clearly shows the entities (as nodes) getting users’ information and the pieces of information that each entity has (as the links).

Focus group observation: Both non-experts and advanced users are aware that service providers can analyse their data to find out more information about them. However, non-expert users are less aware of the consequences of possible misuse of their data. HCI Principle: Users could be informed about some of the possible inferences that a service provider (or a group of service providers) can make based on their previous and current data disclosures. Design suggestion: Show how different data items can be linked together to form new information or to deduce information about users that they might not like to disclose. A series of small network visualisations can show common examples of combinations of data that reveal more than people tend to imagine.

Focus group observation: Both groups are aware that it is not only the explicit release of personally identifiable information that matters, but also what can be deduced from the data (such as behaviours, attitudes, etc.). Information about such inferred data is even less transparent than explicitly disclosed data. HCI Principle: Show people the data that they have disclosed explicitly, and some of the possible inferences that a service can make based on that data. Design suggestion: Show a form where people enter data; a tool then presents a list of possible inferences about their behaviour and personal data, based on simple searches that can be conducted.

Usability test observation: There is a difference in the understanding of explicit and implicit collection of data. HCI Principle: Users should be made aware of the implicit collection of data by the service provider. Design suggestion: When informing about explicit, implicit and inferred personal data, give explicitly provided information (i.e., information the user sent explicitly, for example during registration to a service) a different look from implicitly collected or inferred information (i.e., information that the service provider collects without the user being fully aware of it, such as location, browser version, or whether the customer is deemed reliable).
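For illustration, the distinction between explicit, implicit and inferred data that such visualisations rely on can be sketched as a small data model; all names below are illustrative assumptions and not part of any A4Cloud tool.

```python
# Hypothetical data model behind a "trace view" style visualization:
# entities are nodes, disclosed data items are the links to them, and
# each item is tagged by how it was obtained so the UI can render
# explicit, implicit, and inferred data differently.
from dataclasses import dataclass, field
from enum import Enum

class Collection(Enum):
    EXPLICIT = "explicit"   # e.g., entered during registration
    IMPLICIT = "implicit"   # e.g., location, browser version
    INFERRED = "inferred"   # e.g., deduced behaviours or attitudes

@dataclass
class Disclosure:
    data_item: str
    how: Collection

@dataclass
class Entity:               # a node in the visualization
    name: str
    disclosures: list = field(default_factory=list)

shop = Entity("web_shop.example")
shop.disclosures += [
    Disclosure("email address", Collection.EXPLICIT),
    Disclosure("browser version", Collection.IMPLICIT),
    Disclosure("purchasing habits", Collection.INFERRED),
]

# The UI would draw one link per disclosure, styled by `how`.
for d in shop.disclosures:
    print(f"{shop.name} <- {d.data_item} [{d.how.value}]")
```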

5.1.2 Make Data Sharing and Data Processing Along the Cloud Chain Transparent, and Provide the Means to Verify It

This requirement is based on four observations from different studies.

Workshop note: There is a lack of transparency along the chain of (cloud) service providers in regard to their locations and the applicable laws. The main service provider that is contacted may be located in Sweden, while back-end (cloud) service providers are located in other countries (cf. also [53]). HCI Principle: Users have to be informed about the country and legal regime of the data controller and data processors and/or the contract’s explicit choice of law along the cloud chain. Design suggestion: Policy icons illustrating the storage location (e.g., inside or outside the EEA) and/or legal rules or practices.

Workshop note: It is difficult for individual and business end users, as well as auditors, to track data in the cloud and to find out who has or has had access to the data and for what purposes. HCI Principle: There should be usable and selective audit and transparency tools which make even the handling of explicitly and implicitly collected data (e.g., via the Facebook Like button) transparent (incl. information about data processing purposes). Design suggestion: Different visualisations of the users’ previous implicit and explicit data disclosures and of the data flows to different service providers could be applied, using, for instance, a timeline view or a trace view. Information about the agreed-upon policies can be provided by clicking on service provider representations in a trace view visualisation.

Focus group observation: Both non-experts and advanced users have an idea that data are being forwarded to third parties by service providers. However, non-expert users seem to have a less clear idea of who these third parties may be. HCI Principle: (i) The interface should put emphasis on explaining the distribution of information to third parties in a clear way. (ii) It should present the purposes for which these third parties are allowed to use the data. (iii) It should also explain that sometimes the third parties are not specified or identified by service providers in their policies. (No specific design suggestion.)

Trust review: Well-placed trust grows out of active enquiry [54, 55, 56]. HCI Principle: Users should be able to pursue experimentation and enquiry, and should be guided beyond enquiring only of friends and relatives. Design suggestion: (i) Provide safe environments for experimentation and enquiry (the environments must not oversimplify the complex cloud service ecology). (ii) Make it possible to enquire of “good sources”.

5.1.3 Provide Indicators for the Trustworthiness of Nodes Along the Cloud Chain

Workshop note: Some services (such as hotels.com, resia.se) operate only as a mediator/broker but take no responsibility if something goes wrong. Service brokers have to inform users about who the responsible data controller/service provider is, i.e. with whom the agreement/service contract is actually made. HCI Principle: User interfaces of service brokers have to clearly inform users about the identity of the responsible data controller/service provider with whom the contract is made. (No specific design suggestion.)

Focus group observation: Expert users have a clearer idea of where attacks can happen and of possible countermeasures. Non-expert users have an idea that information can be at risk, but it is very unclear to them what can be attacked, why the information is vulnerable, and which approaches mitigate the problems. HCI Principle: Lay users need help creating correct mental models of what is vulnerable/risky and what is safe. They should be able to understand when they are performing risky actions and feel comfortable or confident when their risks are minimal. Risks should be communicated to users by showing the consequences of behaviours in a minimalistic way. Design suggestion: Indicate different risk levels with colours and clear explanations. Use language that communicates the right message to the right user group. Provide layered explanations in an understandable way that can be read in more detail if users are interested, thus catering for users’ different levels of experience.

5.1.4 Policies Need to Make Transparent the Possible Consequences of Data Disclosures in Different Recurrent Situations

Experiment result: The perceived sensitivity of data can influence people’s behaviour with regard to exercising control. However, data that might be perceived as non-sensitive (or harmless) can become sensitive with changes in time and/or context. HCI Principle: (i) Users should be informed about possible scenarios in which data items could become sensitive. (ii) Users should also be made aware of the different purposes for which their information might be used, as well as of the possible recipients of their data, since this can affect their behaviour; the perceived sensitivity of data can depend on the context in which it is used. Design suggestion: (i) In the user interface, provide inline examples of data aggregation or misuse of seemingly harmless data. (ii) Provide a visual indication of how data might be transferred across the cloud chain or shared with third-party services. Icons for data processing purposes could indicate context information in regard to the potential use of the data by these services.

Experiment result: Users are willing to disclose personal data that they perceive as non-sensitive in exchange for a reward that seems valuable. HCI Principle: Users should be made aware of the risks and benefits of disclosing their data to a service. Design suggestion: Make users conscious of the value of the data they are releasing by comparing it to something they can relate to, such as the estimated monetary value that the data has for the service providers.

5.1.5 Make Explicit that a Service Is a Cloud-Based Service and What This Implies in Terms of Privacy/Security for the Intended User

Experiment result: Users are unaware or not well informed about the types of online services they subscribe to in regard to the handling of their data and personal privacy. HCI Principle: (i) Cloud providers should inform individual end users about the services’ privacy policies and make the implications of data disclosures transparent to these users. (ii) Ex ante transparency awareness should be promoted, so that users know what type of service they are subscribing to. Design suggestion: Make explicit, through wording and the use of standard icons, the consequences, in terms of benefits and risks, of having personal data in the cloud.

Trust review: Transfer of trust: trust in a company itself is often transferred to trust in the security of the company’s cloud services [40]. HCI Principle: Users should be made clear about the difference between service performance and privacy performance. Design suggestion: Make evaluation results concerning trustworthiness prominent.

5.1.6 Provide Easily Comprehensible Policies Informing Data Subjects at Least About the Identity of the Controller, Other Responsible Parties, for What Purposes the Data Will Be Used Plus Other Details Needed, so that They Can Understand the Implications

Four observations motivate this requirement:

Workshop note: It is unclear to individual users how they can get redress or compensation if something goes wrong, and whom they should contact in this case, especially if sub cloud providers are used (for instance, a user signs up with the cloud service “Box”, and Box uses Amazon as a sub cloud provider). HCI Principle: It has to be clear and understandable for users who the responsible parties are and how these can be contacted in case of disputes. Design suggestion: (i) Clearly display the contact address of responsible parties on the top layer of multi-layered policies. (ii) Redress tools have to support end users in contacting the data controller or responsible party.

Added to this problem is the workshop note listed under Sect. 5.1.2 (lack of transparency along the chain of services as regards applicable laws).

Experiment result: Knowing who is able to view or access users' data stored in the cloud, as well as how the data are used, are appealing features. HCI Principle: It should be easy for users to find and adjust functionality related to the visibility and usage of their data for specific purposes. Design suggestion: Provide privacy-friendly default settings for data access controls and usage that can be easily adapted “on the fly” (as suggested, for instance, in [57]).

Legal considerations: Data subjects have the right to be informed at least about the controller's identity, the purposes of processing, and other details as required under Art. 10 EU Data Protection Directive 95/46/EC, and should also be informed about any further information needed for making data processing in the cloud transparent and compliant with data protection laws. HCI Principle: Data subjects should know at least who the controller of their data is, for what purposes the data are obtained, plus other details (e.g., contacts and geographic locations of data centres along the cloud chain, applicable laws, how requests by law enforcement are handled, etc.), so that they can understand the implications. Design suggestion: Policy information is (i) provided in a way that accounts for the users' mental models; (ii) structured in multiple layers following the recommendation of the Art. 29 Data Protection Working Party [16]; (iii) complemented with suitable policy icons.
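
As one way to realise suggestion (ii), a minimal sketch of a multi-layered policy structure follows (our illustration; the layer contents, field names, and example values are hypothetical and not prescribed by the Working Party's opinion):

```python
from dataclasses import dataclass

@dataclass
class PolicyLayer:
    title: str
    content: dict

# Hypothetical three-layer policy: a short notice on top, a condensed notice,
# and the full legal text underneath. All example values are illustrative.
short_notice = PolicyLayer("Short notice", {
    "controller": "ExampleCloud Ltd.",
    "contact": "privacy@example.com",
    "purposes": ["storage", "backup"],
})
condensed_notice = PolicyLayer("Condensed notice", {
    "recipients": ["ExampleSubProcessor Inc."],
    "data_centre_locations": ["Ireland", "Germany"],
    "applicable_law": "EU Directive 95/46/EC",
    "law_enforcement_requests": "disclosed only if legally compelled",
})
full_policy = PolicyLayer("Full policy", {"text": "... full legal terms ..."})

policy = [short_notice, condensed_notice, full_policy]

# The UI would show layer 0 by default and reveal deeper layers on demand.
for depth, layer in enumerate(policy):
    print(depth, layer.title)
```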

5.1.7 Make Trust-Enhancing Indicators Intuitive, Consistent and Believable, as Well as Appealing to the Appropriate User Group

Workshop note: (i) There are no commonly used seals/labels for security and trustworthiness of cloud services (apart from the more recent CSA and Cloud Industry Forum certifications); if there were, how would users know which labels to trust? (ii) Individuals are often not interested in understanding all details of trust seals, but would rather like to know in general whether their data are “secure”. HCI Principle: Information about trust seals should be displayed in an understandable manner. Further information about the meaning of a seal should be easily accessible. Design suggestion: (i) Information about trust-related aspects of seals can be hierarchically structured in different layers (similar to multi-layered privacy policies – cf. [16]). (ii) Standardized and broadly used seals can be more easily recognized and understood. (iii) In-place information about what a seal means can be provided, e.g. via tooltips or information dialogs and/or links to official information regarding the seals.

Experiment result: People may become skeptical towards unknown services that promise to guard their privacy. HCI Principle: The cloud provider should explain not only the benefits for users, but also the benefits for the cloud provider itself when offering accountable and privacy-friendly features to its customers. (No specific design suggestion.)

Experiment result: Trust regarding unknown cloud services might have a cultural component: users from different cultures exhibit different levels of trust. HCI Principle: Cloud providers should consider their customers in terms of culture, location of service, and legislative regimes, and cater for their collective mental models and attitudes towards data in the cloud. Design suggestion: (i) When users are about to subscribe to a cloud service, appeal to their cultural background by emphasizing features of security, access policies, and the like. (ii) Accountability and transparency features might balance the level of trust across different cultures.

Trust review: The Internet is regarded as intrinsically insecure ([58]; cf. the cloud study by Ion et al. [38]; cf. also Tsai et al. [59] and the ECC-Net report [24]). HCI Principle: “Users needed more accurate and robust models to be able to discover and trust cloud computing services” (Marshall and Tang [40]). Design suggestion: In the user interface, direct users to sources they would normally rely on. The Trustguide also speaks of the necessity of taking measures outside the user interface; this does not directly translate into HCI requirements, but the UI should relate to it [56].

Trust review: “…perceived availability, access, security, and reliability would be key variables of cloud computing acceptance in public sectors since they were found to be influential in predicting the behavioural intention to use cloud technologies” (Shin [60], p. 200). Stakeholder workshop: A “business first” attitude in cloud adoption, where economic considerations far outweigh privacy concerns. HCI Principle: Business end users need to be correctly informed about cloud security, performance, and availability for the individual cloud services they consider. This requirement holds for the private sector [61] and the public sector [60] alike. For the private sector, this requirement also counters the business-first attitude if accountability measurements are included in the information, so that such aspects can easily enter the decision process. Design suggestion: If available, display trustworthiness via evaluation results in regard to security, privacy, performance, and availability. Use a visualisation of an accountability model and match it with a visualisation of the current chain.

5.1.8 Users Should Be Able to Know the Approach and Consequences When Deciding to End the Service

Workshop note: At the time of service registration, end users do not think about how to end the service in the future. While registration for a service is usually made easy, it is often (made) difficult for end users/organizations to unregister, terminate a service contract, delete data, or transfer data to other service providers. It is not always clear to end users whether they “own” their data (or are still in control of them), as they do not check the terms and conditions carefully. HCI Principle: Information about service termination, any continued use of the data by the provider or others even after termination, data deletion, and portability should be easily accessible and comprehensible for end users. Design suggestion: Clearly present information about the options and rights of deletion and data portability in the context where it is relevant (e.g., when a service is terminated).

Trust review: The perceived lack of longevity of identifiers makes users blur partial identities: they prefer long-lasting identifiers (such as personal email addresses rather than more appropriate work-related email addresses; [62]). HCI Principle: Users must trust that they can manage, in a life-long way, the information associated with their different identities (with implications for transparency and restitution controls). Design suggestion: (No obvious way to bridge the trust gap, or indication of where to bridge it.)

5.1.9 Users Should Be Aware of the Extent to Which They Can Act Under Pseudonyms

Trust review: Anonymity options unknown: unawareness of options for identity management has negative effects on trust in privacy-enhancing technology [58]. HCI Principle: Users must be able to understand the extent to which they can act under pseudonyms, and that they can also access transparency information when acting under a pseudonym. Design suggestion: Within the user interface, demonstrate how pseudonymity and anonymity options work and how users can access their data under a pseudonym.

5.1.10 Inform Users About the Termination of Their Contract in a Clear and Straight-Forward Manner

See the workshop note under Sect. 5.1.8.

5.1.11 Make Reasonable Claims About the Privacy and Security Policies and Technical Capabilities of the Service to Promote Trust

Trust review: Unsubstantiated claims do not build trust ([56]; this issue concerns a long-term perspective: one company's misconduct can affect a whole sector). HCI Principle: Users must be able to put the right scope on their distrust. Design suggestion: Make privacy and security statements short and very clear, and make their scope (i.e. to whom they apply) very explicit.

5.2 Exercising Data Subject Rights

Tools for exercising data subject rights should meet the HCI requirements motivated and discussed in the following subsections.

5.2.1 Make Users Aware of Their Data Subject Rights, and Support Them to Exercise Their Rights; in Particular, Make Control Options that Are Relevant in Certain Situations More Obvious at Those Particular Situations

We refer to several observations here. As in Sect. 5.1.6, there is the right to be informed, but also the problem that redress and compensation are unclear to users. In addition, the following three observations motivate Sect. 5.2.1:

Focus group observation: Expert users' concerns go beyond the use of personal data and deal also with people's rights and democratic governments. Non-expert users are less aware of their rights concerning the protection of their data. HCI Principle: Interested users should be able to audit the chain of cloud services: who has accessed data, for what purpose, why they accessed those data on a particular occasion, with whom data were shared, etc. It should be easy for people to exercise their rights regarding data protection and handling practices. Design suggestion: Make users aware of their rights with links to information (in further policy layers), and help them exercise these rights by providing clear options for action and by showing a list of logged data that users can query with various questions related to their personal information. Queries can help to filter results that are of relevance to the users. Display a visualization of the chain of clouds and their potential vulnerabilities.

Legal considerations: Data subjects have the right to access their data pursuant to Art. 12 EU Data Protection Directive. Data subjects may have further rights in regard to the processing of their data according to that Directive and more specific laws; e.g., in Sweden a data subject has the right to information on who has accessed the data subject's data according to the Swedish Patient Data Act. HCI Principle: Data subjects should be conscious of their ex post transparency rights, understand them, and be able to exercise them. Design suggestion: (i) Transparency functions are displayed prominently and are obvious to operate. (ii) Transparency functions are based on a suitable metaphor and/or account for the user's mental models. (iii) Transparency functions are made available at the right time/in the right context; e.g., tracking logs should display online functions to exercise the right of access. A “right of access” button could be provided that, when clicked, shows a list of what information can be accessed, including tracking logs.
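
To illustrate suggestion (iii), the following sketch shows the kind of queryable disclosure log that a “right of access” function could sit on top of (our illustration; the record fields and example entries are hypothetical):

```python
from datetime import date

# Hypothetical disclosure log; in a real transparency tool such as the
# Data Track, these records would come from the service side or a local store.
log = [
    {"date": date(2014, 3, 1), "service": "ExampleCloud", "item": "email",
     "purpose": "account management"},
    {"date": date(2014, 5, 7), "service": "ExampleCloud", "item": "photos",
     "purpose": "storage"},
    {"date": date(2014, 5, 9), "service": "ThirdPartyAds", "item": "email",
     "purpose": "marketing"},
]

def query(records, **criteria):
    """Filter logged disclosures so users see only what is relevant to them."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# "Who received my email address, and why?"
for record in query(log, item="email"):
    print(record["service"], "-", record["purpose"])
```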

Legal considerations: Data subjects have the right to correct, delete, or block their data pursuant to Art. 12 (b) EU Data Protection Directive. Further rights, such as the right to erasure or the right to data portability, are currently proposed. HCI Principle: Data subjects should be conscious of their control rights, understand them, and be able to exercise them. Design suggestion: (i) Functions for exercising data subject rights are displayed prominently and are obvious to operate. (ii) These functions are based on a suitable metaphor and/or account for the user's mental models. (iii) They are made available at the right time/context, e.g. at the time when users are accessing their data locally or online.

5.2.2 Provide Clear Statements of What Rights Apply to Individual Users Considering Different Factors, Such as the Users’ Culture or Location and Applicable Legal Regime

Again, as in Sect. 5.1.6, we refer to the data subjects' rights. In addition, we refer to the following:

Workshop note: Web services that target their business at Swedish customers (by having a Swedish website, a Swedish telephone support number, using SEK as a currency, etc.) fall under Swedish consumer and data protection laws, even if the business is located outside of Sweden and independently of what the contracts say. HCI Principle: Users should be informed about the applicable consumer rights. Redress tools should (at least in these cases) allow users to contact the data controller in their native language. (No specific design suggestion.)

Trust review: “Users from different countries may have different privacy expectations and understanding of privacy guarantees offered by the cloud storage system” [38]. HCI Principle: Internationalisation “involves going beyond just translating the service interface and privacy policy” [38]. Design suggestion: When seeking customers outside the EEA, seek expertise to cover different populations' expectations.

Trust review: Restitution measures have positive trust effects [56]. HCI Principle: Clearly mark the possibility and ways of redress. Design suggestion: User interfaces for transparency tools, such as the Data Track, could mark restitution measures.

5.3 Obtaining Consent

Tools for obtaining consent should meet the HCI requirements motivated and discussed in the following subsections.

5.3.1 Make Users Aware of Pros and Cons of Their Possible Choices in an Unbiased Manner

Experiment result: Users' willingness to release personal data is influenced by how the alternatives are described (users tend to prefer short-term benefits). HCI Principle: Make users aware of all pros and cons of their choice in an unbiased fashion. Design suggestion: Use tooltips and/or help texts to clarify the consequences of actions.

Trust review: Unsubstantiated claims do build trust [63]: the problem here is that well-articulated privacy assurances make many individual end users trust a service's competence and intentions. HCI Principle: As users do not scrutinise privacy statements and the like, they must be made aware of trustworthy assessments of trustworthiness. Design suggestion: Make evaluation results concerning trustworthiness as prominent as cloud providers' privacy and security claims.

5.3.2 Obtain Users’ Informed Consent by Helping and Motivating Them to Understand Policies and Service Agreements, so that They Understand the Implications

Workshop note: Often individual end users do not make a really informed choice. It is easy to deceive people because they often neither read nor understand the agreements. HCI Principle: Display privacy policies in a simple and understandable manner. Design suggestion: (i) Privacy policy statements could be explained in short video clips (produced by consumer organizations) at the time when the user has to make choices. (ii) Display a graph view of personal data flows, showing how the service provider that users are contacting is connected to other services, and the possible distribution of users' data for different purposes. (iii) Drag-and-drop data handling agreements (DaDAs, cf. Sect. 2) can also help users to consciously understand what they are agreeing to.

Workshop note: Individual users find it difficult to read and understand the long and complicated contracts/terms and conditions that are posted online. Often data loss, i.e. unavailability of their data, is the greatest of consumers' concerns, but limitations of availability (in terms of the amounts of time that data are accessible) mentioned in terms and conditions are not transparent to them. HCI Principle: Users have to be aware of, and understand, important service limitations. Design suggestion: Use UI elements, e.g. suitable icons, to make users aware of such limitations.

Legal considerations: Personal data processing in the cloud can be legitimised by the data subject's unambiguously given consent pursuant to Art. 7 (a) EU Data Protection Directive. HCI Principle: Users give informed consent and understand the implications. Design suggestion: Consent is obtained by click-through agreements associated with short privacy notices (the top-layer notices of multi-layered policies), or via DaDAs as discussed in Sect. 2.
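
A minimal sketch of how such a click-through consent could be recorded so that it remains verifiable what the user actually saw (our illustration; all field names are assumptions, not an A4Cloud or legal specification):

```python
from datetime import datetime, timezone

# Hypothetical consent record bound to the policy version and notice layer
# that were presented; field names are our assumptions.
def record_consent(user_id: str, policy_version: str, layer_shown: str,
                   purposes: list) -> dict:
    """Store which policy text the user actually saw when consenting."""
    return {
        "user": user_id,
        "policy_version": policy_version,  # exact version presented
        "layer_shown": layer_shown,        # e.g. the short top-layer notice
        "purposes": purposes,              # what the consent covers
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

consent = record_consent("alice", "2014-09-v3", "short notice",
                         ["storage", "backup"])
print(consent)
```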

5.4 Privacy Preference Management

Privacy preference management tools should meet the HCI requirements motivated and discussed in the following subsections.

5.4.1 UIs for Preference Settings Need to Make Consequences in Different Recurrent Situations and Risks and Benefits of Disclosure Transparent

Two observations apply: lay users need help creating correct mental models of what is vulnerable or risky, as noted in Sect. 5.1.3 above, and users must be able to understand the extent to which they can act under pseudonyms and that such identification schemes can provide access to transparency information, as noted in Sect. 5.1.9.

5.4.2 Make Users Aware of Pros and Cons of Choices in a Comprehensible and Unbiased Manner

Same as Sect. 5.3.1 under “Obtaining consent”.

5.4.3 Offer Appropriate Default Settings and Choices that Are Privacy-Friendly and Reflect the Users' Preferred Options

Workshop note: Users need to classify their data or groups of data (e.g., by marking sensitive personal data or confidential data). Data classification is needed in particular for risk analysis and by policy tools. HCI Principle: Users should be guided when defining and editing labels to classify their data in an easy and meaningful way. Moreover, users should be able to browse through these data by the defined categories. Design suggestion: Provide a filter that allows users to select which categories (labels) are displayed. A tree view can be provided where users can check/uncheck the data to be shown. Alternatively, use tabs to separate the different categories.
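
A minimal sketch of such label-based classification and filtering (our illustration; the labels and file names are hypothetical):

```python
# Hypothetical data classification; labels and items are illustrative.
items = [
    {"name": "passport_scan.pdf", "labels": {"sensitive", "identity"}},
    {"name": "holiday.jpg",       "labels": {"private"}},
    {"name": "invoice_2014.pdf",  "labels": {"confidential", "financial"}},
]

def browse(data_items, selected_labels):
    """Show only items carrying at least one of the checked labels,
    mirroring a tree view where users check/uncheck categories."""
    return [i["name"] for i in data_items if i["labels"] & selected_labels]

print(browse(items, {"sensitive"}))               # -> ['passport_scan.pdf']
print(browse(items, {"confidential", "private"}))
```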

Workshop note: Security and privacy risks are not very clear and comprehensible to many individual end users. Even security incidents have no long-lasting impact on users' risk awareness. On the other hand, users are not interested in policy details but would just like to know whether their data are “safe”. HCI Principle: Users should be able to understand risk evaluation results, especially if these describe serious risks of non-compliance. They must be informed about privacy breaches/non-compliance in regard to data that they have already disclosed, in such a way that they are aware of and understand those risks. Design suggestion: Overall risk evaluation results can be displayed in a prominent way, using a multi-layered structure as suggested by the Art. 29 Data Protection Working Party [16]. The presentation should be based on suitable metaphors.

5.4.4 Let Users Adjust Settings at the Moment When It Is Relevant (“on-the-Fly” Management of Privacy Settings)

Experiment result: Users are unmotivated to spend cognitive effort or time on setting up privacy controls. HCI Principle: Users should be motivated to spend the necessary cognitive effort or time on adjusting their privacy preferences at a moment that is relevant to them and meaningful to their actions. Consequences are easier to grasp than technical features and terms; inform users not only about how settings can be adjusted, but also about the consequences of adjusting them. Design suggestion: (i) Provide appropriate privacy-friendly defaults for a set of situations in order to ease the users' burden of setting privacy preferences. (ii) Let users adjust their preferences “on the fly” as needed; providing brief but meaningful explanations of the privacy consequences might motivate users to care about adjusting them. (iii) In order to enhance users' comprehension and motivation, a cloud provider should present its privacy-enhancing features in a way that relates to users' everyday reality, and strive to put technical explanations in secondary information layers.
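
To make suggestions (i) and (ii) concrete, the sketch below combines privacy-friendly defaults with situation-specific overrides set “on the fly” (our illustration; the preference keys and contexts are hypothetical):

```python
# Hypothetical preference store: privacy-friendly defaults plus
# situation-specific overrides that users set "on the fly".
DEFAULTS = {"share_location": False, "third_party_analytics": False,
            "marketing_emails": False}

class Preferences:
    def __init__(self):
        self.overrides = {}  # context -> {preference key -> bool}

    def set(self, context, key, value):
        """Record an adjustment made in the moment it became relevant."""
        self.overrides.setdefault(context, {})[key] = value

    def get(self, context, key):
        """Fall back to the privacy-friendly default when nothing was set."""
        return self.overrides.get(context, {}).get(key, DEFAULTS[key])

prefs = Preferences()
prefs.set("photo_upload", "share_location", True)      # deliberate exception
print(prefs.get("photo_upload", "share_location"))     # True
print(prefs.get("document_backup", "share_location"))  # False (default)
```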

5.4.5 Explain Consequences not in Technical Terms, but in Practical Terms (“speak the user’s language”)

Workshop note: It is often unclear to individual users what cloud providers really do with the data (e.g., whether they are linking and merging different registers) and whether they are following negotiated or agreed-upon policies and contracts. HCI Principle: Users should understand data processing purposes and consequences. And, as in Sect. 5.4.3, users must be informed about serious risks of non-compliance and what these may imply when they set preferences, and about privacy breaches/non-compliance in regard to data that they have disclosed. Design suggestion: Present consequences by “speaking the user's language”.

5.5 Privacy Policy Management

Privacy Policy management tools should meet the HCI requirements motivated and discussed in the following subsections.

5.5.1 Make It Possible for Business End Users to Negotiate What Is Negotiable, and Make Negotiation Clear and Simple

Workshop note: In contrast to traditional outsourcing, standard contracts are usually used for cloud computing; these are often hardly negotiable for business end users in terms of security and privacy, and indeed most other matters. HCI Principle: Make it possible for users to negotiate what is negotiable, and make the negotiation process clear and simple. Design suggestion: Provide opt-in alternatives, e.g. in regard to the country/legal regime of the data storage location.

5.5.2 Provide Opt-in Alternatives, E.G. in Regard to the Country/Legal Regime of the Data Storage Location

We have the same motivating observation as for the requirement immediately above, but here we raise the suggested HCI design to the level of a requirement; a minimal sketch of such opt-in alternatives follows.
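
The sketch below illustrates opt-in negotiation over a standard contract (our illustration; the negotiable terms and offered alternatives are hypothetical):

```python
# Hypothetical negotiation of a standard contract: only some terms are
# negotiable, and those are presented as explicit opt-in alternatives.
NEGOTIABLE = {
    "storage_region": ["EEA", "USA", "global"],  # legal regime of storage
    "backup_retention_days": [30, 90, 365],
}
FIXED = {"encryption_at_rest": True}  # non-negotiable in the standard contract

def choose(term, value):
    """Accept a choice only for terms the provider actually negotiates."""
    if term not in NEGOTIABLE:
        raise ValueError(f"'{term}' is not negotiable in this contract")
    if value not in NEGOTIABLE[term]:
        raise ValueError(f"{value!r} is not an offered alternative for {term}")
    return {term: value}

contract = {**FIXED, **choose("storage_region", "EEA")}
print(contract)
```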

5.6 Ex Post Transparency

Ex post transparency tools should meet the HCI requirements motivated and discussed in the following subsections.

5.6.1 Make Users Conscious of Their Ex Post Transparency Rights, so that They Understand and Can Exercise Their Right of Access

The motivating fact behind this requirement is, of course, that data subjects have the right to access their data pursuant to Art. 12 EU Data Protection Directive etc., as mentioned in Sect. 5.2.1 above.

5.6.2 Make Users Aware of What Information Services Providers Have Implicitly Derived from Disclosed Data

As in Sect. 5.1.1, we noted in focus groups that non-expert users are less aware of the possibility that further data about them may be derived or inferred from their explicitly disclosed data, and of the consequences of possible misuse of their data.

5.6.3 Make Users Aware of the Data Processing and Sharing Practices of the Service Provider

One should observe that it is often unclear to individual users what cloud providers really do with the data, as mentioned in Sect. 5.4.5; cf. also Sect. 5.1.1 on non-expert users' belief that acting entities are more related to each other than they might be in reality, and Sect. 5.1.2 on non-expert users' lack of a clear idea of who third parties may be.

5.6.4 Help Users Make Data Traces Transparent, E.G. by Providing Interactive Visualisations

The general problem is the one quoted in Sect. 5.1.2: it is difficult for individual end users, business end users, and auditors alike to track data in the cloud and to find out who has, or has had, access to the data and for what purposes. In addition to the obvious requirement deriving from this, we add the suggestion about interactive visualisations from the following experimental result:

Usability test observation: Visualizing data releases through a trace view was found useful, intuitive, and informative; it seems to be preferred over a timeline view. HCI Principle: Users should have an intuitive and interactive way of visualising previous disclosures of personal data. Design suggestion: Data releases could be visualised as a bipartite network, with, as one possibility, the user as a node in the centre and links branching on one side to the different services (and chains of services) with whom the user has had a relationship, and on the other side to the data items that have been released.
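
A minimal sketch of the underlying data structure for such a trace view (our illustration; the services and data items are hypothetical, and the actual rendering is left to a graph-drawing layer):

```python
# Sketch of the trace view as a bipartite network: the user in the centre,
# services branching to one side, released data items to the other.
# Data are hypothetical; a real tool would read them from a disclosure log.
disclosures = {
    "ExampleCloud": ["email", "photos"],
    "ExampleShop":  ["email", "credit card"],
}

edges = []
for service, released_items in disclosures.items():
    edges.append(("user", service))        # user -> service branch
    for item in released_items:
        edges.append((service, item))      # service -> released data item

# A rendering layer would draw these edges; shared items such as "email"
# visually connect the services that received them.
for edge in edges:
    print(" -> ".join(edge))
```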

5.7 Audit Configuration

Audit configuration tools should meet the HCI requirements motivated and discussed in the following subsections.

5.7.1 Provide a Standard Way to Perform Audits Across the Chain of Services. In Particular, Provide Audit Functions that Visualise Differences of SLAs Along the Cloud Chain

Workshop note: Service Level Agreements (SLAs) of different cloud services along the chain may not match (in addition to the problem that SLAs are not even defined in the same way). HCI Principle: Tools for auditors and business users should visualize the differences between SLAs. Design suggestion: Display a visual chain of SLAs and indicate with colours or icons where SLAs mismatch. Let users click on a particular mismatching connection to see the details and support their decisions.
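
A minimal sketch of the mismatch detection behind such a visualisation (our illustration; the SLA attributes and values are hypothetical):

```python
# Hypothetical SLA comparison along a cloud chain; a UI would colour
# mismatching links red and let auditors click them for details.
chain = [
    ("PrimaryCloud",   {"availability": 99.9, "breach_notification_hours": 24}),
    ("SubProviderA",   {"availability": 99.9, "breach_notification_hours": 72}),
    ("StorageBackend", {"availability": 99.5, "breach_notification_hours": 72}),
]

def mismatches(sla_chain):
    """Compare each SLA with the next provider's SLA down the chain."""
    found = []
    for (name_a, sla_a), (name_b, sla_b) in zip(sla_chain, sla_chain[1:]):
        for attr in sla_a.keys() & sla_b.keys():
            if sla_a[attr] != sla_b[attr]:
                found.append((name_a, name_b, attr, sla_a[attr], sla_b[attr]))
    return found

for a, b, attr, va, vb in mismatches(chain):
    print(f"Mismatch {a} -> {b}: {attr} ({va} vs {vb})")
```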

5.7.2 Provide Audit Functions that Make also Implicitly Collected Data Transparent

As in Sects. 5.1.2 and 5.6.4, there is a notable difficulty for auditors (among others) in following up data collection and processing.

5.8 Access Control Management

Access control management tools should meet the HCI requirements motivated and discussed in the following subsections.

5.8.1 Allow Users to Classify Their Data Items and Easily Provide Access Control Rules for These Data

See Sect. 5.4.3, the first workshop note.

5.8.2 Allow System Administrators to Verify the Accuracy of Access Control Rules in a Straightforward and Simple Manner

Experiment result: It is very difficult for system administrators to verify the accuracy of access control rule sets against the access control policy. Rule sets therefore need to be understandable and manageable in order to assist system administrators in their task. HCI Principle: (i) Concise rule sets are better than large ones. (ii) Redundant or contradicting rules are to be avoided. (iii) Rule sets need to be designed so as to facilitate the administrators' tasks. Design suggestion: Provide tools and metrics that support administrators in evaluating and comparing the security and usability properties of different rule sets.
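
A minimal sketch of what such a metric-based check could look like, for the simplest possible rule language (our illustration; real access control languages are far richer, and [6] gives the formal definitions):

```python
# Sketch of checking a rule set for redundancy and contradiction.
# Rules are (subject, resource, decision) triples; this only illustrates
# the intent of the metrics discussed above.
rules = [
    ("alice", "report.pdf", "allow"),
    ("alice", "report.pdf", "allow"),  # redundant duplicate
    ("bob",   "report.pdf", "allow"),
    ("bob",   "report.pdf", "deny"),   # contradicts the rule above
]

def check(rule_set):
    seen, redundant, contradicting = {}, [], []
    for subject, resource, decision in rule_set:
        key = (subject, resource)
        if key in seen:
            if seen[key] == decision:
                redundant.append((subject, resource, decision))
            else:
                contradicting.append((subject, resource))
        else:
            seen[key] = decision
    return redundant, contradicting

redundant, contradicting = check(rules)
print("Redundant:", redundant)
print("Contradicting:", contradicting)
```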

5.9 Privacy Risk Assessment

Privacy Risk assessment tools should meet the HCI requirements motivated and discussed in the following subsections.

5.9.1 Provide Different Types of User (Business End Users Versus Individual End Users) with Appropriate Indicators Obtained from Risk Assessment Activities. Make Risk Awareness Long Lasting

Cf. Sect. 5.4.3, second note, and Sect. 5.1.7 for the problem that many lay users regard the Internet as intrinsically insecure.

5.9.2 Provide Clear Visualizations of Vulnerability of Private Data Depending on Different Situations

As noted in the focus group observation quoted in Sect. 5.1.3, it is very unclear to non-expert users what can be attacked, why the information is vulnerable, and how the problems can be mitigated.

6 Concluding Remarks

In comparison to traditional forms of outsourcing and Internet services, transparency and control in the cloud require that more complex information about the data handling along the cloud chain is provided to data subjects and other stakeholders. The items presented in Sects. 5.1–5.9 are similar in their objectives to those for other kinds of ‘usable privacy’ ecologies, even though transparency-enhancing tools for the cloud have to inform users about some additional aspects, and it is important that developers apply them against the background of the complex picture of the cloud service chain.

To conclude, in this work we have elaborated HCI requirements for making transparency-enhancing tools comprehensible and trustworthy, in order to make them useful for analysing cloud service chains. The paper reports on how we have applied human-centred design methods to derive HCI requirements and related HCI principles. The analyses conducted provided requirements on how users can be guided to better understand their data traces, how they can be supported in making better-informed decisions in regard to the use of their data by cloud and other service providers, and how legal privacy principles and social trust requirements can be enforced by, e.g., transparency-enhancing tools. The individual studies have been published in separate papers [4, 5, 6]. Here, a set of high-level guidelines was presented that summarises the derived HCI principles.

This work has also revealed more specific open HCI research challenges to be addressed within the A4Cloud project and beyond. In particular, we identified the following research questions: How can ex ante transparency tools better inform users about the consequences of data disclosures? How can one derive good privacy default settings that are both privacy-friendly and match the user's preferences? How can ex ante transparency tools best illustrate and make obvious who will be in control of the data and/or who will be processing data under which conditions, and what means of legal or technical control exist in which situations? How can mismatches of policies or SLAs along the cloud chain best be presented to individual and business end users?

Footnotes

1. A ‘data subject’ is a natural person about whom personal data are processed.

2. Under Article 29 of the Data Protection Directive, a Working Party on the Protection of Individuals with regard to the Processing of Personal Data is established, made up of the Data Protection Commissioners from the Member States together with a representative of the European Commission. The Working Party is independent and acts in an advisory capacity. It seeks to harmonize the application of data protection rules throughout the EU, and publishes opinions and recommendations on various data protection topics.

3. These areas were motivated by a range of studies, in particular Brandimarte et al. [36], Gross and Acquisti [27], Hoadley et al. [37], Ion et al. [38], Langer [39], Marshall and Tang [40], Tversky and Kahneman [41], and Xu [42].


Acknowledgements

This work has in part been financed by the European Commission, grant FP7-ICT-2011-8-317550-A4CLOUD.

We thank the project co-workers who contributed to the research from which these requirements were derived, especially Erik Wästlund, Leonardo Martucci, and Tobias Pulls. We also thank W Kuan Hon from Queen Mary University London for very helpful comments.

References

1. Angulo, J., Fischer-Hübner, S., Pettersson, J.S.: General HCI principles and guidelines for accountability and transparency in the cloud. A4Cloud Deliverable D:C-7.1, September 2013 (2013)
2. Pearson, S., Tountopoulos, V., Catteddu, D., Sudholt, M., Molva, R., Reich, C., Fischer-Hübner, S., Millard, C., Lotz, V., Jaatun, M.G.: Accountability for cloud and other future Internet services. In: Proceedings of the IEEE 4th International Conference on Cloud Computing Technology and Science (CloudCom 2012). IEEE (2012)
3. Hildebrandt, M.: Behavioural biometric profiling and transparency enhancing tools. FIDIS Deliverable D7.12. FIDIS EU project (2009)
4. Fischer-Hübner, S., Angulo, J., Pulls, T.: How can cloud users be supported in deciding on, tracking and controlling how their data are used? In: Hansen, M., Hoepman, J.-H., Leenes, R., Whitehouse, D. (eds.) Privacy and Identity 2013. IFIP AICT, vol. 421, pp. 77–92. Springer, Heidelberg (2014)
5. Angulo, J., Wästlund, E., Högberg, J.: What would it take for you to tell your secrets to a cloud? Studying decision factors when disclosing information to cloud services. In: Bernsmed, K., Fischer-Hübner, S. (eds.) NordSec 2014. LNCS, vol. 8788, pp. 129–145. Springer, Heidelberg (2014)
6. Beckerle, M., Martucci, L.A.: Formal definitions for usable access control rule sets from goals to metrics. In: Proceedings of the Ninth Symposium on Usable Privacy and Security (SOUPS 2013), Newcastle, UK, 24–26 July 2013. ACM (2013)
7. Whitten, A., Tygar, J.D.: Why Johnny can't encrypt: a usability evaluation of PGP 5.0. In: Proceedings of the 8th USENIX Security Symposium (1999)
8. Nielsen, J.: Usability inspection methods. In: Conference Companion on Human Factors in Computing Systems. ACM (1995)
9. Johnston, J., Eloff, J.H., Labuschagne, L.: Security and human computer interfaces. Comput. Secur. 22(8), 675–684 (2003)
10. Yee, K.: Aligning security and usability. IEEE Secur. Priv. 2(5), 48–55 (2004)
11. Garfinkel, S.: Design principles and patterns for computer systems that are simultaneously secure and usable. Massachusetts Institute of Technology (2005)
12. Dhamija, R., Dusseault, L.: The seven flaws of identity management: usability and security challenges. IEEE Secur. Priv. 6(2), 24–29 (2008)
13. Patrick, A.S., Kenny, S.: From privacy legislation to interface design: implementing information privacy in human-computer interactions. In: Dingledine, R. (ed.) PET 2003. LNCS, vol. 2760, pp. 107–124. Springer, Heidelberg (2003)
14. Patrick, A.S., Kenny, S., Holmes, C., van Breukelen, M.: Human computer interaction. In: van Blarkom, G.W., Borking, J.J., Olk, J.G.E. (eds.) Handbook of Privacy and Privacy-Enhancing Technologies: The Case of Intelligent Software Agents, pp. 249–290. College Bescherming Persoonsgegevens, Den Haag (2003)
15. European Commission: Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Official Journal L 281, 23.11.1995 (1995)
16. Art. 29 Data Protection Working Party: Opinion 10/2004 on More Harmonised Information Provisions, 25 November 2004. European Commission (2004)
17. Pettersson, J.S.: HCI Guidelines. PRIME Deliverable D06.1.f, Final Version. PRIME project (2008)
18. International Organization for Standardization: Ergonomic requirements for office work with visual display terminals (VDTs) – Part 11: Guidance on usability (ISO 9241-11:1998) (1998)
19. Pettersson, J.S., Fischer-Hübner, S., Danielsson, N., Nilsson, J., Bergmann, M., Clauss, S., Kriegelstein, T., Krasemann, H.: Making PRIME usable. In: Proceedings of the 2005 Symposium on Usable Privacy and Security (SOUPS 2005), Pittsburgh, PA, USA. ACM (2005)
20. Graf, C., Hochleitner, C., Wolkerstorfer, P., Angulo, J., Fischer-Hübner, S., Wästlund, E., Hansen, M., Holtz, L.: Towards Usable Privacy Enhancing Technologies: Lessons Learned from the PrimeLife Project. PrimeLife Deliverable D4.1.6. PrimeLife (2011)
21. Wästlund, E., Wolkerstorfer, P., Köffel, C.: PET-USES: privacy-enhancing technology – users' self-estimation scale. In: Bezzi, M., Duquenoy, P., Fischer-Hübner, S., Hansen, M., Zhang, G. (eds.) Privacy and Identity Management for Life. IFIP AICT, vol. 320, pp. 266–274. Springer, Heidelberg (2010)
22. Alexander, C., Ishikawa, S., Silverstein, M.: A Pattern Language. Center for Environmental Structure. Oxford University Press, New York (1977)
23. PrimeLife WP4.1: HCI Pattern Collection – Version 2. In: Fischer-Hübner, S., Köffel, C., Pettersson, J.S., Wästlund, E., Zwingelberg, H. (eds.) PrimeLife Deliverable D4.1.3. PrimeLife (2010). http://www.primelife.eu/results/documents
24. ECC-Net: Trust marks report 2013: “Can I trust the trust mark?”. The European Consumer Centres Network (2013). www.konsumenteuropa.se/PageFiles/159275/Trust%20Mark%20Report%202013.pdf
25. ENISA: On the security, privacy, and usability of online seals: an overview. Version December 2013. European Union Agency for Network and Information Security (2013). www.enisa.europa.eu
26. Spiekermann, S., Grossklags, J., Berendt, B.: E-privacy in 2nd generation e-commerce: privacy preferences versus actual behavior. In: Proceedings of the 3rd ACM Conference on Electronic Commerce, Tampa, FL, USA. ACM (2001)
27. Gross, R., Acquisti, A.: Information revelation and privacy in online social networks. In: Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society (WPES 2005). ACM (2005)
28. European Commission: Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). COM(2012) 11 final, Brussels, 25.1.2012 (2012)
29. International Organization for Standardization: ISO/IEC 25010:2011. Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – System and software quality models (2011)
30. International Organization for Standardization: ISO 9241-210:2010. Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems (formerly ISO 13407) (2010)
31. Wästlund, E., Angulo, J., Fischer-Hübner, S.: Evoking comprehensive mental models of anonymous credentials. In: Camenisch, J., Kesdogan, D. (eds.) iNetSec 2011. LNCS, vol. 7039, pp. 1–14. Springer, Heidelberg (2012)
32. Maguire, M., Bevan, N.: User requirements analysis. In: Hammond, J., Gross, T., Wesson, J. (eds.) Usability. IFIP – The International Federation for Information Processing, vol. 99, pp. 133–148. Springer, New York (2002)
33. Owen, H.: Open Space Technology: A User's Guide. Berrett-Koehler Publishers, San Francisco (2008)
34. Brown, J., Isaacs, D.: The World Café: Shaping Our Futures Through Conversations that Matter. Berrett-Koehler Publishers, San Francisco (2005)
35. Bernard, H.R.: Research Methods in Cultural Anthropology. Sage, Newbury Park (1988)
36. Brandimarte, L., Acquisti, A., Loewenstein, G.: Misplaced confidences: privacy and the control paradox. Soc. Psychol. Personal. Sci. 4(3), 340–347 (2012)
37. Hoadley, C.M., Xu, H., Lee, J.J., Rosson, M.B.: Privacy as information access and illusory control: the case of the Facebook News Feed privacy outcry. Electron. Commer. Res. Appl. 9(1), 50–60 (2010)
38. Ion, I., Sachdeva, N., Kumaraguru, P., Capkun, S.: Home is safer than the cloud!: privacy concerns for consumer cloud storage. In: Proceedings of the Seventh Symposium on Usable Privacy and Security (SOUPS 2011), Pittsburgh, PA, USA, Article 13. ACM (2011)
39. Langer, E.J.: The illusion of control. J. Pers. Soc. Psychol. 32(2), 311–328 (1975)
40. Marshall, C., Tang, J.C.: That syncing feeling: early user experiences with the cloud. In: Proceedings of the Designing Interactive Systems Conference (DIS 2012). ACM (2012)
41. Tversky, A., Kahneman, D.: The framing of decisions and the psychology of choice. In: Wright, G. (ed.) Behavioral Decision Making, pp. 25–41. Springer, New York (1985)
42. Xu, H.: The effects of self-construal and perceived control on privacy concerns. In: Proceedings of the 28th Annual International Conference on Information Systems (ICIS 2007) (2007)
43. Jaspers, M.W.M., Steen, T., van den Bos, C., Geenen, M.: The think aloud method: a guide to user interface design. Int. J. Med. Inform. 73(11–12), 781–795 (2004)
44. Rubin, J., Chisnell, D.: Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Wiley, Indianapolis (2008)
45. Pettersson, J.S., Fischer-Hübner, S., Bergmann, M.: Outlining “Data Track”: privacy-friendly data maintenance for end-users. In: Wojtkowski, W., Wojtkowski, W.G., Zupancic, J., Magyar, G., Knapp, G. (eds.) Advances in Information Systems Development, pp. 215–226. Springer, New York (2007)
46. Wästlund, E., Fischer-Hübner, S.: End User Transparency Tools: UI Prototypes. PrimeLife Deliverable D4.2.2. PrimeLife project (2010)
47. Pulls, T.: Privacy-friendly cloud storage for the data track. In: Jøsang, A., Carlsson, B. (eds.) NordSec 2012. LNCS, vol. 7617, pp. 231–246. Springer, Heidelberg (2012)
48. Freeman, L.C.: Visualizing social networks. J. Soc. Struct. 1(1), 4 (2000)
49. Becker, R.A., Eick, S.G., Wilks, A.R.: Visualizing network data. IEEE Trans. Vis. Comput. Graph. 1(1), 16–28 (1995)
50. Kani-Zabihi, E., Helmhout, M.: Increasing service users' privacy awareness by introducing on-line interactive privacy features. In: Laud, P. (ed.) NordSec 2011. LNCS, vol. 7161, pp. 131–148. Springer, Heidelberg (2012)
51. Kolter, J., Netter, M., Pernul, G.: Visualizing past personal data disclosures. In: ARES 2010 International Conference on Availability, Reliability, and Security, p. 131. IEEE (2010)
52. Art. 29 Data Protection Working Party: Opinion 5/2012 on Cloud Computing. European Commission, 1 July 2012 (2012)
53. Art. 29 Data Protection Working Party: Opinion 1/2010 on the concepts of “controller” and “processor”. European Commission, 16 February 2010 (2010)
54. O'Neill, O.: A Question of Trust. Cambridge University Press, Cambridge (2002)
55. Wamala, C.: Does IT count? Complexities between access to and use of information technologies among Uganda's farmers. Luleå tekniska universitet, Luleå (2010)
56. Lacohée, H., Crane, S., Phippen, A.: Trustguide: Final Report. Trustguide, October 2006 (2006)
57. Angulo, J., Fischer-Hübner, S., Wästlund, E., Pulls, T.: Towards usable privacy policy display and management. Inf. Manag. Comput. Secur. 20(1), 4–17 (2012)
58. Andersson, C., Camenisch, J., Crane, S., Fischer-Hübner, S., Leenes, R., Pearson, S., Pettersson, J.S., Sommer, D.: Trust in PRIME. In: Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology. IEEE (2005)
59. Tsai, J.Y., Kelley, P., Drielsma, P., Cranor, L.F., Hong, J., Sadeh, N.: Who's viewed you?: the impact of feedback in a mobile location-sharing application. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2009). ACM (2009)
60. Shin, D.: User centric cloud service model in public sectors: policy implications of cloud services. Gov. Inf. Q. 30, 194–203 (2013)
61. Pearson, S.: Privacy, security and trust in cloud computing. In: Pearson, S., Yee, G. (eds.) Privacy and Security for Cloud Computing, pp. 3–42. Springer, London (2013)
62. Voida, A., Olson, J.S., Olson, G.M.: Turbulence in the clouds: challenges of cloud-based information work. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2013). ACM (2013)
63. Joinson, A.N., Reips, U.-D., Buchanan, T., Paine Schofield, C.B.: Privacy, trust, and self-disclosure online. Hum.-Comput. Interact. 25(1), 1–24 (2010)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

Simone Fischer-Hübner, John Sören Pettersson, Julio Angulo

Karlstad University, Karlstad, Sweden
