Privacy targets and how they address activities that can create harm
To create a comprehensive overview of privacy targets, we incorporated all relevant regulatory frameworks, at least from a European point of view. The resulting list of privacy targets, presented in Table A1, is based on:
all elements of the current and proposed EU data protection regulation (EC, 1995; EC, 2012);
all data protection principles included in the OECD privacy guidelines (OECD, 1980) and Fair Information Practice Principles (FTC, 1998);
all elements of the ISO/IEC Privacy Architecture Framework (ISO, 2011);
data protection targets proposed by Rost & Bock (2011) that emphasise individuals’ information self-determination rights.
These mostly European legal privacy ‘targets’ are then evaluated with respect to their impact on the ‘harming activities’ identified in the American legal system. We ask whether privacy harm is still likely to occur if the privacy targets are effectively addressed. The harmful activities in the table originate from Solove’s taxonomy of privacy (Solove, 2006), which offers the most comprehensive and structured view on this matter. The concepts on which the activities in the table are based are restricted to the context of information privacy. For example, decisional interference (P5.4) is restricted to the consideration of decisions that are derived from collected data. In contrast, governmental interference, which normally incorporates bodily and territorial privacy, is excluded from this table.
Three independent privacy experts judged the relationship between privacy targets and harmful activities. Where such a relationship is given, the intersection is marked. PIA assessors can use the table’s judgements to determine whether to consider a privacy target in their context.
Because P4.3 is functionally only an extension of P4.1 and P4.2, it is the only target that is not explicitly assigned to any of the activities.
When conducting a PIA, it might be helpful to choose the activities that are relevant for the system or business case, then identify the privacy targets to consider.
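The selection step described above can be sketched as a simple lookup over the Table A1 mapping. The activity names and target IDs below (apart from P5.4, which the text mentions) are illustrative placeholders, not the actual table contents:

```python
# Hypothetical sketch: map each harmful activity (Solove's taxonomy) to the
# privacy targets marked at its intersection in Table A1. All mappings here
# except P5.4 are placeholders for illustration only.
TARGETS_BY_ACTIVITY = {
    "surveillance": ["P1.1", "P2.3"],         # placeholder target IDs
    "secondary_use": ["P3.1", "P3.2"],        # placeholder target IDs
    "decisional_interference": ["P5.4"],      # restricted to data-derived decisions
}

def targets_for_case(relevant_activities):
    """Collect the union of privacy targets for the activities deemed
    relevant to a given system or business case."""
    targets = set()
    for activity in relevant_activities:
        targets.update(TARGETS_BY_ACTIVITY.get(activity, []))
    return sorted(targets)

print(targets_for_case(["secondary_use", "decisional_interference"]))
# → ['P3.1', 'P3.2', 'P5.4']
```

An assessor would first shortlist the activities that apply to the business case and then derive the set of privacy targets to examine, rather than working through all targets unconditionally.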
Analysis of the relative utility of the proposed PIA methodology
After evaluating the methodology’s absolute utility using action research, we evaluate the methodology’s relative utility by comparing it with other risk assessment approaches.
Although PIAs are only beginning to be used in practice, there are some proposals on how to conduct them. An extensive review of the proposed methodologies can be found in Clarke (2011) as well as in the first deliverable of the PIAF project, which was conducted for the European Commission (Wright et al, 2011). None of the existing PIA approaches has become a recognised standard to date. However, one of the most heralded PIAs in Europe is the UK PIA Handbook (ICO, 2009); we therefore compare this UK PIA to our approach. Secondly, we compare our proposed methodology with a recognised standard. For this purpose, we use the ISO 31000 risk management standard (ISO, 2009), which describes how to handle risks in organisations in general. We compare our PIA methodology with this privacy-independent standard because privacy is just one of several organisational risks: if an organisation is to address privacy regularly as an ethical risk, privacy must become part of its overall risk management processes. PIAs, and our approach in particular, should therefore fit into such processes.
Before we dive into the detailed comparison of the different approaches, we must discuss the differences in perspective between the three risk assessments. Both the UK PIA and ISO 31000, like other current methodologies, take a project as their subject of analysis. Our methodology, in contrast, focuses on systems. We consciously adopt a system-centric perspective for three reasons. First, our PIA methodology aims to lead a project team towards privacy-by-design for a new system; we therefore concentrate more on the concrete risks of a system’s design and less on the organisational framework of a new project, taking an engineering perspective that is supported by business processes where needed. Because the other two approaches embrace organisational risk reflections, they are detached from system design. Second, the project-centric perspective makes assessors operate within a project’s scope; in doing so, they can overlook the larger context of the systems. For example, personal data flows may extend beyond the systems considered in the immediate project scope and may therefore be defined as out of scope even though they are highly relevant from a privacy perspective. Finally, IT manufacturers develop systems that are initially independent of deployment. Since they should use PIAs in their system development lifecycle, it also makes sense to focus on the system.
To compare our PIA methodology to the UK PIA Handbook and the ISO 31000 standard, we use seven PIA quality criteria that were recently published by Wright & De Hert (2012):
Early start: A PIA process should start as early as possible so that it can influence the design of a project.
Project description: A project subjected to a PIA should be adequately described, including: (1) a general description of the project, (2) information flows and (3) requirements of legal data protection instruments and other types of privacy.
Stakeholder consultation: An organisation should identify stakeholders, inform them about the project, seek their views and duly take them into consideration.
Risk management: The assessor should identify, assess and mitigate all possible risks resulting from a project using (1) risk assessment and (2) risk mitigation approaches.
Legal compliance check: The assessor should ensure that the project complies with any legislative or other regulatory requirements.
Recommendation and report: The assessor should (1) provide recommendations and an action plan, (2) justify decisions about and implementation of recommendations and (3) provide a PIA report.
Audit and review: PIA reports should be audited or reviewed externally.
All three methodological approaches (UK PIA, ISO 31000 and our PIA) agree that an assessment should start early. The UK PIA links the assessment to a project lifecycle and recommends that the assessment begin in the initiation phase of a project; it also requires a cyclical approach, meaning that the different phases of the PIA can be re-executed at any time. ISO 31000 sees risk management as an integral part of organisational processes and asks for a ‘systematic, timely and iterative approach that is responsive to change’ (ISO, 2009). Like the two other approaches, our PIA methodology is linked to a process; our methodology is linked to a system’s development process, which ensures a systematic and iterative course of action. Timeliness is ensured because our PIA starts at the beginning of system development or when a system is upgraded.
Regarding a description of the overall project and system, both the UK PIA and ISO 31000 require a general description of the project. The UK PIA Handbook requires a project outline and project plan. ISO 31000 requires a description of the internal organisational context, such as organisational objectives and attitudes towards risk, as well as the context for the risk management process. Our proposed methodology is much more specific because it focuses on the aspects of a system that raise concrete privacy risks. It demands a more systematic system characterisation in Step 1, requiring the assessor to take four distinct views on the system and its IT infrastructure context (system, functional, data, physical). Because it is system-centred, our approach provides less information about the general project and organisational context. We believe that companies can achieve privacy-by-design in a more cost-effective way by focusing on the immediate risks inherent in a system. However, because privacy-by-design includes governance measures and strategic decisions on the handling of personal data assets, we acknowledge that our PIA approach should be complemented by reflections on organisational privacy risk attitudes and risk responsibilities. Such reflections can take place before our PIA process begins and can inform later judgements and reports.
Regarding the description of data flows, our system-centric perspective gives us an advantage over the two other approaches. We explicitly introduce a ‘data view’ that requires data categories and data flow diagrams of internal and external data flows, including actors and data types. In contrast, UK PIA’s phase 1 contains a background paper that can describe flows of personal information. ISO 31000’s internal context simply contains information flows without further detail.
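The ‘data view’ described above can be illustrated with a minimal record structure for data flows. The field names and example flows below are our own choices for illustration; the methodology prescribes data categories and flow diagrams, not a concrete notation:

```python
from dataclasses import dataclass

# Illustrative sketch of the 'data view': each data flow records its
# actors, the categories of personal data it carries, and whether it
# crosses the system boundary. Names are assumptions, not prescribed.
@dataclass
class DataFlow:
    source: str               # sending actor or system component
    destination: str          # receiving actor or system component
    data_categories: tuple    # e.g. ("contact data", "address data")
    external: bool            # crosses the system boundary?

flows = [
    DataFlow("web frontend", "order database", ("contact data",), False),
    DataFlow("order database", "shipping partner", ("contact data", "address data"), True),
]

# External flows deserve particular scrutiny, since they may fall outside
# a project-centric scope while remaining privacy-relevant.
external_flows = [f for f in flows if f.external]
print(len(external_flows))  # → 1
```

Recording flows in this structured form makes it straightforward to enumerate every actor and data category during the later threat identification.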
The recommended description of privacy requirements cannot be found in ISO 31000 because it is a general risk management standard and not privacy-specific. In contrast, the UK PIA Handbook extensively explains the concept of privacy and describes four aspects of privacy that could be considered in a PIA: privacy of personal information, privacy of the person, privacy of personal behaviour and privacy of personal communications. This categorisation of privacy into four spheres is very intuitive and helps readers understand the ‘chameleon-like’ privacy concept (Solove, 2006). We take a different approach, though. In our methodology’s ‘definition of privacy targets’, we list privacy targets that engineers should use as their privacy design goals. Our targets are both more concrete and more extensive than the UK PIA’s four categories. In contrast to the UK PIA, we also ensure that European legal requirements are covered. Furthermore, in our approach assessors must describe and analyse each privacy target against the background of its respective context. For these reasons, we consider our methodology to be more firmly anchored in the concept of privacy and more practical to apply.
The third quality criterion, stakeholder consultation, is part of all three assessment approaches. The UK PIA Handbook analyses stakeholders and establishes a consultative group during its preparation phase. The Handbook also involves stakeholders in its consultation and analysis phase. ISO 31000 contains an activity called ‘communication and consultation’ that involves communication with internal and external stakeholders and a consultative team approach. The precise input demanded from stakeholders is not specified in these two approaches. Although we again include a less-detailed description of organisational structures, we do include stakeholders in our approach and give them a concrete role. In Step 3, for example, where the protection demand for different privacy targets is evaluated, we explicitly recommend involving stakeholders.
For risk management, the UK PIA Handbook does not provide any specific guideline. Its consultation and analysis phase contains only three generic cues: risk analysis, identification of problems and search for solutions. It does not specify how these activities should be concretely pursued. ISO 31000’s risk assessment includes three activities that offer detailed recommendations: risk identification, risk analysis and risk evaluation. All three activities are reflected in our methodology. First, ISO 31000 proposes that an organisation apply risk identification tools and techniques that are suited to its objectives and capabilities. In terms of techniques, we chose damage scenarios and considerations of an operator and data subject perspective (Step 3), as well as a systematic identification of threats for each target that uses a numbering scheme (Step 4). Second, ISO 31000 recommends that organisations consider the likelihood that threats will occur and analyse risk qualitatively, semi-quantitatively or quantitatively. Step 4 of our methodology requires that organisations determine the likelihood that each threat will occur. For both the likelihood of a threat and the data protection demand, we chose a qualitative approach. We accept qualitative judgements in our approach because human privacy risks are often harder to describe or quantify than the loss of an asset.
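The qualitative judgements of Steps 3 and 4 can be sketched as follows. The three-level scales and the combination rule (take the higher of the two levels, a common conservative convention in qualitative risk matrices) are our assumptions; the methodology itself only requires that both judgements be qualitative:

```python
# Minimal sketch of the qualitative risk assessment described above.
# The scales and combination rule are assumptions for illustration.
LEVELS = ["low", "medium", "high"]

def qualitative_risk(likelihood, protection_demand):
    """Combine the qualitative likelihood of a threat (Step 4) with the
    protection demand of the threatened target (Step 3) by taking the
    higher of the two levels."""
    return LEVELS[max(LEVELS.index(likelihood), LEVELS.index(protection_demand))]

# Threats numbered per target following the systematic numbering scheme,
# e.g. T1.1 = first threat to the first privacy target (IDs illustrative)
threats = {
    "T1.1": ("high", "medium"),
    "T3.2": ("low", "high"),
}
for tid, (likelihood, demand) in threats.items():
    print(tid, qualitative_risk(likelihood, demand))
# T1.1 high
# T3.2 high
```

The conservative rule reflects the rationale in the text: where human privacy harm is hard to quantify, a threat against a high-demand target is treated as high risk even when its likelihood is judged low.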
Risk mitigation is described in ISO 31000’s activity ‘risk treatment’, which involves selecting risk treatment options, balancing costs against benefits and considering stakeholders. Our methodology treats this aspect of mitigation in Steps 5 (identification and recommendation of controls suited to protect against threats) and 6 (assessment and documentation of residual risks). In contrast to ISO 31000, we treat risk mitigation more extensively. We do so not only by specifically addressing privacy concerns but also by explaining how the mitigation is applied; in our methodology, organisations systematically identify controls for all likely threats, use a numbering scheme, define three levels of rigour, match rigour and protection demand, and provide an implementation plan. We address what ISO 31000 calls for but do so in a more detailed manner.
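The matching of control rigour to protection demand in Step 5 can be sketched as below. The one-to-one mapping between demand levels and rigour levels, and all control and level names, are illustrative assumptions; the methodology specifies three levels of rigour but not these labels:

```python
# Sketch of Step 5: pair each likely threat with a control at a rigour
# level matching the protection demand identified in Step 3. The mapping
# and all names are assumptions for illustration.
RIGOUR_FOR_DEMAND = {"low": "basic", "medium": "advanced", "high": "strong"}

def recommend_control(threat_id, control_name, protection_demand):
    """Record a control recommendation, with rigour justified by the
    protection demand of the threatened privacy target."""
    return {
        "threat": threat_id,
        "control": control_name,
        "rigour": RIGOUR_FOR_DEMAND[protection_demand],
    }

plan = [
    recommend_control("T1.1", "access control on personal data store", "high"),
    recommend_control("T3.2", "purpose binding of collected data", "medium"),
]
print(plan[0]["rigour"])  # → strong
```

Collecting these records for all likely threats yields the systematic, numbered control list that feeds the implementation plan and the residual-risk documentation of Step 6.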
Considering the fifth quality criterion, it is not entirely clear whether Wright & De Hert (2012) and De Hert et al (2012) recommend an actual legal compliance check, which is conducted later and in addition to a PIA (ICO, 2009), or whether they recommend ensuring legal compliance as part of the assessment. All three approaches aim to achieve compliance with legal and regulatory requirements. The UK PIA Handbook and our methodology emphasise that privacy implications are a fast-moving target in the changing technical world and that legal privacy targets may often not address all ensuing challenges.
All three assessment approaches require organisations to provide recommendations and justify their implementation. In the UK PIA’s consultation and analysis phase, organisations create an issues register that lists the avoidance measures that were considered, explains why they were rejected or adopted, and identifies any that are not addressed. For ISO 31000’s activity ‘risk treatment’, organisations select risk treatment options that balance costs against benefits, recognise stakeholder views, and prepare and implement a risk treatment plan. Similar to these approaches, Step 5 of our methodology requires that organisations recommend controls; however, organisations must also choose a justified level of rigour for each control to meet the level of protection demand identified in Step 3. Furthermore, we offer an extensive list of exemplary technical and non-technical controls for each potential privacy risk. Again, our methodology provides greater specificity, practical applicability and step-by-step guidance. A cost-benefit analysis, a control implementation plan and the documentation of residual risks are then required in our Step 6. All three approaches require documentation of the assessment process. The UK PIA and our methodology call it a PIA report and recommend publishing it. Our PIA goes a step further and offers a set of concrete content elements that a PIA report can include for different target audiences (Step 7).
Finally, only our proposed methodology meets the last quality criterion, which recommends that a PIA report should be externally audited and reviewed. To facilitate external audit and review of the PIA report, we recommend creating a machine-readable PIA report.
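A machine-readable PIA report could take a shape like the one below. The JSON schema and all field names are our own illustration; the methodology recommends machine readability to ease external audit but does not fix a concrete format:

```python
import json

# Hypothetical shape of a machine-readable PIA report (Step 7).
# All keys and values are illustrative assumptions.
report = {
    "system": "customer order management",
    "privacy_targets": ["P3.1", "P5.4"],          # targets considered (IDs illustrative)
    "residual_risks": [
        {
            "threat": "T3.2",
            "risk": "medium",
            "justification": "control implemented at advanced rigour only",
        },
    ],
    "audience": "external auditor",
}

# Serialising to JSON lets auditors and review tools parse the report
# automatically instead of extracting findings from free-form prose.
print(json.dumps(report, indent=2))
```

A structured report of this kind allows an external reviewer to check programmatically, for instance, that every considered privacy target has either a control or a documented residual risk.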
Table B1 shows that our PIA methodology is the most advanced of all three approaches with respect to the given quality criteria for a best practice PIA process.