A Privacy Engineering Framework for the Internet of Things
This paper describes a privacy engineering framework for the Internet of Things (IoT). It shows how existing work and research on IoT privacy and on privacy engineering can be integrated into a set of foundational concepts that will help practitioners apply privacy engineering in the IoT. These concepts include privacy engineering objectives, privacy protection properties, privacy engineering principles, the elicitation of privacy requirements and the design of associated features. The resulting framework draws a key distinction between privacy engineering for IoT systems, targeting data controllers, data processors and associated integrators, and privacy engineering for IoT subsystems, targeting suppliers.
Keywords: Privacy-by-design · Internet of Things · IoT system · IoT subsystem · Integrator · Supplier
7.1.1 The Internet of Things
The Internet of Things (IoT) refers to smart devices, sensors, and actuators that are embedded in the physical world, connected to each other and to further computing resources, allowing applications and intelligent services to understand, track, and control almost anything in the physical world through standard communication networks.
“Things” in the IoT can be machines controlling production in a factory, electrocardiography sensors in clothing, temperature sensors or light bulbs in homes and buildings, moisture sensors in the garden, and persons and animals providing (via IoT devices) personal data to location-based services or to comfort control systems that adapt the environment to their preferences or context. The data can be linked together using semantic methods, enhancing information interoperability in heterogeneous systems and thus enabling automated service composition. The data can be analyzed with statistical methods, business intelligence, predictive analytics, and machine learning. As we interact with our world and explore the collected data, the benefits will increase. The resulting “reality mining” applications offer increasingly extensive information about our lives, both individually and collectively, and transform our understanding of ourselves, our organizations, and our society. MIT’s Technology Review has identified reality mining as one of the “10 Emerging Technologies That Will Change the World”, see (Sweeny 2013).
7.1.2 Privacy, a Transversal Problem
The IoT vision entails the tacit assumption that data can first be collected and that later analysis will show for which concrete purposes it can be used. Large amounts of seemingly non-personal data (temperature, motion, ambient noise, etc.) may be linked together and may later be used to identify individuals and their activities. Technologies such as reality mining will be able to reveal “hidden” information and relationships that were never suspected when the data was first collected. This contradicts a core privacy principle: “Specification of purpose is an essential first step in applying data protection laws and designing data protection safeguards for any processing operation and a pre-requisite for applying other data quality requirements” (see Article 29 Data Protection Working Party 2013).
IoT is also a major trend driving growth in Big Data. Already today, data is an asset with an enormous economic value. Over the last years, the economic value of data, and of the industry built around it, has grown exponentially, and its impact on other sectors (healthcare, transportation, e-commerce, etc.) has increased equally. A common understanding is that data is an asset belonging to the parties generating it, who are free to decide what they do with that asset to achieve their business goals. This again contradicts fundamental privacy principles: it is the “data subject”, i.e. the person about whom the data is gathered, and not simply the “data generator”, that should determine how the data can be used and by whom.
Some authors believe that IoT and Big Data are fundamentally in contradiction with privacy (for a snapshot of the controversy, see Lee and Anderson 2014). Indeed, reconciling them is clearly difficult, and many problems are hard to solve: even innocent-looking data, such as noise levels or room temperatures in the different rooms of a building, can reveal the activities of a person and thus become privacy relevant. At the same time, data gathered in public spaces relating to many different data subjects raises the challenge of how to inform the data subjects about the purpose and obtain their consent. Transparency of data provenance and integrity is difficult to guarantee in a scenario where subjects are continuously being monitored and tracked by a large number of devices. Furthermore, the use of (resource-)constrained devices in IoT makes it hard to implement, configure and use complex security and privacy mechanisms. Since each IoT instantiation will collect and use different data, it may appear impossible to create privacy building blocks for IoT in general. Finally, even if all these rather technical problems can be solved, it must be considered that the business model of some companies is based on extracting value from personal data, and that such companies may tolerate data protection risks or even financial penalties in order to continue their endeavours.
7.1.3 IoT Ecosystems
Data controllers and data processors must comply with existing privacy-related legal obligations, while integrators and suppliers, at least explicitly, do not. This raises the concern that neither integrators nor suppliers properly take into account the privacy-related legal obligations of the data controllers and data processors they work with. One can argue that data controllers and data processors will integrate privacy-related legal obligations into supply and procurement contracts. We therefore argue that both integrators and suppliers must include privacy engineering in their practice.
There is another point that we would like to emphasize: privacy engineering for suppliers of subsystems has to be approached differently as the engineering of such subsystems generally takes place independently from the design of the system in which they will finally be integrated. Suppliers therefore cannot be aware of the data collection intentions of the stakeholders that buy their systems. Here are a few examples: a biometric subsystem that is purchased by a bank office; a video camera subsystem that is used by a security operator; a smart phone operating system and middleware that is used by an application developer.
Therefore, while data controllers, data processors and integrators practising privacy engineering should know the initial purpose of data collection, developers of subsystems may have no precise idea of the purpose for which their subsystems will be integrated. One could argue that the issue will fix itself, since suppliers that do not provide suitable privacy management features will end up having less business. We believe this does not work well, for the following reasons. Firstly, the relationship between supply chain stakeholders can be unbalanced. The data controller can be much less influential than the supplier (consider the developer of a smart phone application in a small company and its relation to the smart phone operating system managed by a major company). Powerful suppliers could create a take-it-or-leave-it position, yielding a situation where proper privacy engineering is applied only in minor parts of the overall system; further, the data controller could take advantage of this situation to escape ethical obligations while still meeting legal obligations. Secondly, it is not obvious that suppliers today are aware that they should be concerned. Current policy makers and regulations do not refer to them. Directive 95/46/EC (Directive 95/46/EC 2016), for instance, defines the term third party but only uses it to refer to stakeholders to whom data can be disclosed. The General Data Protection Regulation (GDPR) (General Data Protection Regulation 2016), published in May 2016 to replace the directive, defines third parties as stakeholders that are authorised to process personal data. In addition, it is interesting to note that none of the referenced work presented in Sects. 7.3 and 7.4 below covers suppliers. Recently, however, the European Commission issued a mandate for standardization focusing on privacy management of security products and related services (Mandate M530 2016).
While not explicitly mentioning suppliers, the mandate mentions “manufacturers and service providers” for security technologies. We believe that many such manufacturers or service providers will not play the role of data controllers, data processors or integrators. They will just be suppliers.
Two recommendations are identified for privacy in the context of the Internet of Things. The first is to make a clear difference between IoT system privacy engineering and IoT subsystem privacy engineering. The second is to build a wealth of IoT subsystem privacy engineering practice.
Table: IoT support features for privacy (privacy control support)
- User-defined privacy policies (transparency, intervenability, availability)
- Lightweight pseudonym system and malleable and group signatures (integrity, authentication, data minimization)
- Reconfiguration facility of security and privacy mechanisms
- Credential bootstrapping mechanism
However, the genericity needed for IoT subsystem privacy engineering could deter suppliers from providing privacy control support if no clear business incentives are provided. It is therefore important to create a wealth of successful privacy engineering practices. Those practices should involve concertation with the demand side, i.e., IoT system designers.
7.1.4 The Need for Privacy Engineering Guidelines
Privacy-by-design has been a buzzword from the very moment it was coined by Ann Cavoukian (Privacy-by-Design 2016). With the advent of the General Data Protection Regulation (GDPR), which explicitly refers to data protection by design and data protection by default (General Data Protection Regulation 2016), and with a projection that the Internet of Things will consist of 50 billion connected devices by 2020 (Evans 2011), the term will have to become a reality. In other words, privacy engineering will have to become a reality in the ICT ecosystem where such devices are produced.
This paper defines a privacy engineering framework for the IoT that will help move in that direction. This framework is the result of a cooperation between the two European FP7 projects RERUM and PRIPARE.
RERUM’s objective is to develop a framework enabling dependable, reliable and innovative Smart City applications. The framework follows the concept of “security and privacy by design”, addressing the most critical factors for the success of Smart City applications. RERUM has therefore developed an architectural framework, complemented by hardware products, that allows the level of privacy, security and trust to be adjusted from the earliest possible stage of data acquisition up to the point of data processing by service providers, covering most of the data lifecycle in IoT.
PRIPARE’s objective is to define a privacy-by-design methodology, to support its uptake by the ICT research community in order to prepare industry practice, and to produce a set of educational material. PRIPARE has therefore developed a methodology which integrates the various contributions made in the area of privacy engineering. The results of PRIPARE have led to the creation of a new work item on privacy engineering in ISO/IEC JTC 1/SC 27/WG 5.
The rest of the paper is structured into three main sections. Section 7.2 explains the impact of security and privacy at the architecture level of an Internet of Things system. Section 7.3 provides a rationale for the elaboration of a privacy engineering framework, as well as an analysis of recent contributions to privacy engineering. Section 7.4 finally describes the proposed privacy engineering framework for IoT. The framework includes four categories of information: concepts, stakeholders, process and organisation.
7.2 Privacy in the Internet of Things
Data is an asset of immense business value; companies have an interest in keeping it secure, and customers and users place their trust in them doing so. Security and privacy breaches endanger both the economic value of the companies' assets and the trust that customers have placed in them. In the previous section we pointed out how data can be collected and later processed in such a way that it can be linked to a subject. In the following sections, we explain how privacy protection can be embedded into a system's architecture and how it can help prevent privacy problems after data collection. This section in particular presents what an IoT architecture could look like, based on proposed reference models, and how privacy engineering can be applied to the architecture using privacy controls.
7.2.1 Internet of Things Architecture
The IoT-A Internet of Things Architectural Reference Model (ARM), see (Bassi et al. 2013), is the most influential proposal. This reference model was developed as part of the European Internet of Things Research Cluster; it analyses use cases and business needs, eliciting common requirements and proposing a reference that unifies building blocks fulfilling those requirements. The proposal has served as the ground for numerous IoT projects, including RERUM.
The AIOTI (Alliance for Internet of Things Innovation) High Level Architecture, see (AIOTI 2016), is produced by a working group, likewise under the IERC, that takes into account the work of several projects such as IoT-A, RERUM, BUTLER and others. The proposal focuses on Large Scale Pilot deployments and points out lessons learned from the respective partner projects.
The ISO/IEC Working Draft 30141, see (International Organization for Standardization (ISO) n.d.), specifies a layered structure identifying IoT standardization areas and key components of IoT systems. The draft includes definitions of conceptual models for generic IoT domains, an IoT Reference Architecture (IoT RA), IoT-specific requirements and terminology. The draft either adopts existing definitions, modifies them, or develops new ones where required. This reference architecture focuses on systems developers, but conceptually stays close to e.g. the IoT-A model. In this respect it is important to note that ISO JTC 1 SWG 5 pays special attention to the following requirements: that “...IoT systems cannot be used for malicious purposes by unauthorized entities” and that “...personal and business information is kept confidential”. In order to minimize future effort, RERUM provided comments to the ISO/IEC 30141 working group during the standardization process.
7.2.2 RERUM Architecture and Privacy Controls for the IoT
The RERUM project has focused on the identification and development of a set of privacy control features for the IoT, taking into account work from past projects and ISO 29151; see (Pöhls et al. 2014) for the methodology. These controls are described in this section.
There are several generic IoT architectures, for instance the IoT-A ARM as described in the previous subsection. The first questions to ask are: Which components of the given architecture have been defined with privacy in mind? What further privacy-related extensions are necessary in the architecture?
OASIS-PMRM (2013) recommends basing architectural components and modifications on privacy requirements and their technical solutions. That implies that privacy affects the design of the architecture. This can be achieved by codifying the requirements into user-defined privacy policies. These should be linked into the architecture in such a way that they are available when needed.
In IoT systems, smart devices monitor or control physical entities (which could be users or particular aspects of users). In IoT-A, physical entities are represented in the virtual space as software artefacts called “virtual entities”. A natural way of binding privacy policies to an object is to link them to the entity that represents the object in the virtual space. In this way the policy can be consulted each time the virtual object is accessed. In order to follow a strict opt-in approach and guarantee data minimization, a number of privacy-related extensions are still necessary. Altogether these extensions must enable users to describe their policies, to have fine-grained control over data collection, and to understand which data is being collected for what purpose. The components that RERUM envisioned are as follows.
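The binding of a privacy policy to a virtual entity can be sketched as follows. The class and method names are invented for illustration; they are not part of the IoT-A or RERUM specifications.

```python
# Sketch (invented names): a user-defined privacy policy is bound to the
# virtual entity that represents a physical object, so the policy is
# consulted on every access to the virtual object.

class PrivacyPolicy:
    def __init__(self, allowed_purposes):
        self.allowed_purposes = set(allowed_purposes)

    def permits(self, purpose):
        return purpose in self.allowed_purposes

class VirtualEntity:
    """Virtual-space representation of a physical entity (e.g. a room sensor)."""
    def __init__(self, entity_id, policy):
        self.entity_id = entity_id
        self.policy = policy          # the policy travels with the entity
        self._readings = []

    def add_reading(self, value):
        self._readings.append(value)

    def read(self, purpose):
        # The attached policy is checked on each access (strict opt-in).
        if not self.policy.permits(purpose):
            raise PermissionError(f"purpose '{purpose}' not consented for {self.entity_id}")
        return list(self._readings)

room = VirtualEntity("room-42", PrivacyPolicy({"comfort-control"}))
room.add_reading(21.5)
print(room.read("comfort-control"))   # → [21.5]
```

An access for any purpose outside the consented set (for example "marketing") raises an error instead of returning data.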
The Consent Manager also supports the goals of transparency and intervenability; it is the component of the architecture that allows the data subject to review which applications are requesting his or her personal information and the purpose of the request, and to give or deny consent, or withdraw consent previously granted.
The Deactivator/Activator of Data Collection allows the data subject to intervene and de-activate the collection of data from any devices when he/she wishes to protect his/her privacy and to re-activate the collection later on, when he/she decides to do so.
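A minimal sketch of how a Consent Manager and the Deactivator/Activator could expose these operations follows. The component names come from the RERUM description above, but the interface is invented for this example:

```python
# Illustrative sketch only; the method names and internal representation
# are assumptions, not the RERUM interfaces.

class ConsentManager:
    def __init__(self):
        self._consents = {}           # (application, purpose) -> bool
        self._collection_active = {}  # device -> bool

    def request_consent(self, app, purpose):
        # Consent is opt-in: a new request starts out denied.
        self._consents.setdefault((app, purpose), False)

    def review(self):
        """Let the data subject see which applications ask for what purpose."""
        return sorted(self._consents)

    def grant(self, app, purpose):
        self._consents[(app, purpose)] = True

    def withdraw(self, app, purpose):
        self._consents[(app, purpose)] = False

    def is_allowed(self, app, purpose):
        return self._consents.get((app, purpose), False)

    # Deactivator/Activator of data collection
    def deactivate(self, device):
        self._collection_active[device] = False

    def activate(self, device):
        self._collection_active[device] = True

    def collecting(self, device):
        return self._collection_active.get(device, True)

cm = ConsentManager()
cm.request_consent("traffic-app", "route analytics")
cm.grant("traffic-app", "route analytics")
cm.withdraw("traffic-app", "route analytics")   # consent can be revoked at any time
cm.deactivate("bedroom-sensor")                 # collection paused until re-activated
```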
The Privacy Dashboard empowers users to manage their policies for private information. It derives from the goals of availability (the user shall be able to access all data about him/her at any time), intervenability and transparency. Generally, it cannot be expected that users (data subjects) of an IoT application have the technical knowledge to express their privacy policies in a conventional policy language. The role of the Privacy Dashboard is to support the user with a graphical user interface which visualizes a device's behaviour and allows setting a specific behaviour according to the user's preferences. The user preferences are then translated automatically into detailed machine-readable policies. To sum up, a privacy dashboard answers the common user question “What does the system know (track) about me?” and provides a graphical interface through which the user can understand the situation and take appropriate action, if necessary. Additionally, the Privacy Dashboard allows tracking how many Physical Entities are connected to an IoT system and which kinds of data they are exposing. Without visibility of the actual data collected, data subjects may not fully understand the abstract description of what types of data are collected; at the same time, data subjects may be overwhelmed by access to raw data without knowing what that data means and what its implications are.
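The translation step of the dashboard, from coarse user preferences to a detailed machine-readable policy, might look as follows. The preset names and the policy vocabulary are assumptions made for this sketch:

```python
import json

# Hypothetical translation step of a privacy dashboard: coarse user
# preferences (UI presets) are compiled into a machine-readable policy
# that devices and services can evaluate. The vocabulary is invented.
UI_PRESETS = {
    "share nothing": {"location": "deny", "temperature": "deny"},
    "home comfort":  {"location": "deny", "temperature": "allow"},
    "full service":  {"location": "allow", "temperature": "allow"},
}

def compile_policy(preset, retention_days=30):
    """Translate a coarse dashboard preference into a machine-readable policy."""
    rules = [{"data_type": dt, "effect": eff, "retention_days": retention_days}
             for dt, eff in sorted(UI_PRESETS[preset].items())]
    return json.dumps({"version": 1, "rules": rules})

policy = json.loads(compile_policy("home comfort"))
# policy["rules"] now lists one machine-checkable rule per data type
```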
A lightweight and efficient pseudonym system hides the identities of users from applications that do not necessarily require them, but also from any attacker or intruder able to exploit vulnerabilities of the system, or weaknesses in humans, to access the central databases. It prevents any of those parties from tracking down individuals through their identities and thus serves the goal of unlinkability.
Important concepts for secure pseudonym exchange can be categorized into spatial, time-related and user-oriented concepts (see Titkov 2006). Spatial concepts are best based on mix-zones, where pseudonyms are exchanged when system participants meet physically. Time-related mechanisms change pseudonyms after a certain time, where a secure pseudonym exchange is only possible while the changing participant is not participating in the system. One possible solution is a so-called silent period: a system participant stops his/her participation for a short time until his/her pseudonym has been changed successfully. User-oriented concepts allow the user to decide when he/she wants to change his/her current identity. The decision can be completely subjective, allowing users to define their own policies and thresholds for the pseudonym change.
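A time-related pseudonym change with a silent period can be sketched as a small state machine. This is an illustration of the concept only, not the scheme from the cited work:

```python
import secrets

# Sketch of a time-related pseudonym change with a "silent period":
# the participant stops transmitting, switches to a fresh pseudonym,
# then resumes, so old and new identifiers cannot be linked by timing.

class Participant:
    def __init__(self):
        self.pseudonym = secrets.token_hex(8)
        self.silent = False

    def enter_silent_period(self):
        self.silent = True                      # stop participating

    def leave_silent_period(self):
        self.pseudonym = secrets.token_hex(8)   # fresh, unlinkable identifier
        self.silent = False

    def send(self, payload):
        if self.silent:
            return None                         # no messages during the silent period
        return (self.pseudonym, payload)

p = Participant()
old = p.pseudonym
p.enter_silent_period()
assert p.send("speed=42") is None               # nothing leaks while silent
p.leave_silent_period()
assert p.pseudonym != old                       # new identity after the change
```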
Geo-Location PETs enable the system to send the minimal amount of information to, say, traffic monitoring or other location-based applications. In general, two methods exist: pseudonym exchange technologies for vehicular ad-hoc networks (see Scheuer et al. 2008), and data-based technologies for floating car observation, such as the one suggested in RERUM (see Tragos et al. 2014c). Through the combination of data obfuscation, pseudonym systems and methods for regularly replacing pseudonyms, the association of users with locations can, depending on circumstances, be entirely obfuscated. This is very important, as tracking location information discloses a large amount of information about the habits, activities and preferences of people.
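As a toy illustration of location data obfuscation, coordinates can be snapped to a coarse grid before leaving the device, so only the grid cell, not the exact position, is disclosed. The cell size is an arbitrary assumption:

```python
# Simple spatial obfuscation sketch: snap coordinates to a coarse grid so
# only the cell is disclosed to the location-based service. The cell size
# (0.01 degrees, roughly a kilometre at mid latitudes) is an assumption.

def obfuscate(lat, lon, cell_deg=0.01):
    snap = lambda x: round(round(x / cell_deg) * cell_deg, 6)
    return (snap(lat), snap(lon))

print(obfuscate(48.137154, 11.576124))  # → (48.14, 11.58)
```

Coarser cells give stronger protection at the cost of service quality, which is exactly the data-minimization trade-off the text describes.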
Malleable and Group Signatures (see Demirel et al. 2015; Manulis et al. 2012) allow balancing the requirements for integrity and origin authentication of the gathered data with the need to remove information to protect privacy. Malleable Signatures allow the data origin to authorize selected subsequent changes to signed data, e.g. to black out a privacy-violating data field. Group Signatures hide the specific creator of data, enhancing data privacy: instead of the exact origin, only a group of potential creators can be identified as the source. Both mechanisms decrease the data quality only gradually, but enhance intervenability, confidentiality and integrity.
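The idea behind redactable (malleable) signatures can be sketched with per-field hash commitments: the origin signs the commitments rather than the raw values, so a field can later be blacked out while the signature over the remaining structure still verifies. Real schemes, as in the cited work, use public-key cryptography; the keyed HMAC below is only a stand-in:

```python
import hashlib, hmac

# Toy sketch of the idea behind malleable (redactable) signatures.
# Each field value is committed to by a hash; the "signature" covers the
# commitments, not the raw values, so a value can be removed afterwards
# without invalidating the signature. HMAC with a shared key stands in
# for a real public-key signature scheme.

KEY = b"demo-signing-key"   # assumption: a pre-shared demo key

def commit(field, value):
    return hashlib.sha256(f"{field}={value}".encode()).hexdigest()

def sign(record):
    commitments = {f: commit(f, v) for f, v in record.items()}
    digest = hashlib.sha256("".join(sorted(commitments.values())).encode()).digest()
    return commitments, hmac.new(KEY, digest, hashlib.sha256).hexdigest()

def verify(commitments, signature):
    digest = hashlib.sha256("".join(sorted(commitments.values())).encode()).digest()
    return hmac.compare_digest(hmac.new(KEY, digest, hashlib.sha256).hexdigest(), signature)

record = {"patient": "Alice", "pulse": "72"}
commitments, sig = sign(record)
redacted = dict(record)
redacted["patient"] = "REDACTED"   # black out the privacy-violating field
# The verifier checks the commitments, not the raw values, so the
# signature still verifies after the value has been removed.
assert verify(commitments, sig)
```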
Reconfiguration facility of security and privacy mechanisms allows updating or exchanging the mechanisms to enforce security and privacy. In order to be able to adjust to a changing landscape (to circumvent new vulnerabilities found in security and privacy mechanisms as well as in applications) devices need to be re-programmable. This can be achieved by making the device firmware updatable by secure remote over-the air programming (OAP), or related methods.
A secure credential bootstrapping mechanism is necessary to place the necessary cryptographic keys into the devices, in order to enable them to communicate securely with the rest of the system. Of course, lightweight and efficient privacy preserving authentication and authorization protocols are necessary. They must support constrained nodes, in terms of computing power, energy consumption and bandwidth.
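One common bootstrapping approach, used here purely as an assumed example and not necessarily the RERUM mechanism, derives a per-device key from a factory master secret and the device identifier, enabling lightweight challenge-response authentication on constrained nodes:

```python
import hmac, hashlib

# Sketch of one possible bootstrapping approach (an assumption, not the
# mechanism from the text): each constrained device is provisioned with a
# key derived from a factory master secret and its device ID, so it can
# authenticate to the backend with a cheap HMAC challenge-response.

MASTER = b"factory-master-secret"   # provisioned at manufacture time (demo value)

def device_key(device_id: str) -> bytes:
    return hmac.new(MASTER, device_id.encode(), hashlib.sha256).digest()

def authenticate(device_id: str, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key(device_id), challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The device answers a backend challenge using only its derived key:
k = device_key("sensor-007")
challenge = b"nonce-123"
response = hmac.new(k, challenge, hashlib.sha256).digest()
assert authenticate("sensor-007", challenge, response)
```

Symmetric HMAC keeps the per-message cost low, which matters for nodes constrained in computing power, energy and bandwidth as described above.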
The experiences of RERUM show that the generic and specific PET-extensions to IoT-A are valuable in four quite different application scenarios (Smart Transportation, Environmental Monitoring, Domestic and Building Energy Efficiency and Indoor Comfort Quality Management). However, the IoT Privacy Architecture must be flexible to support different scenarios as requirements will change over time.
The table in Sect. 7.1.3 summarises the resulting IoT privacy control support features.
7.2.3 Comparison with Other Solutions
Comparison of privacy-related IoT projects:
RERUM has integrated and developed privacy-enhancing technologies to address all relevant privacy controls in IoT systems.
IoT-A has covered conceptual aspects of confidentiality protection (by introducing the idea of pseudonym into the IoT architecture) but has not proposed details on engineering privacy-by-design, data minimization technologies and other privacy controls.
PEARS feasibility has introduced privacy enhanced identification tags to the area of RFID, allowing authentication, confidentiality and integrity protection. Neither privacy-by-design, nor data minimization nor IoT use cases have been specifically addressed.
Prime and Prime Life have addressed many controls enabling privacy-by-design and proposed several privacy-enhancing technologies, such as the prominent Idemix anonymous credential system, see (Camenisch 2002). Prime and Prime Life have not focused their efforts on the constrained and lossy environment of IoT systems, but rather on the user of IT systems. The Prime and RERUM projects can therefore be complementary: RERUM enhances privacy on IoT devices and in the architectural part of the system, while Prime and Prime Life support privacy controls for the user.
7.2.4 Recommendations for Privacy Engineering in IoT
The privacy controls described in the previous section do not replace the privacy engineering effort. In each concrete IoT implementation or deployment, the initial steps are to define the functional and operational requirements (the purpose), to determine the personal data that must be collected for that purpose, and to start deciding how to minimize collection and analysis and which PETs could be used.
It is important to use an IoT framework that easily allows for, or already incorporates, a privacy-by-design approach and provides several PETs, in particular to create, hold, and use privacy policies. The chosen PET components and the generic or default privacy policies should be instantiated for the specific application and domain. The privacy policies need to be installed as close as possible to the data collection points, i.e. the devices, or at the first place where a data subject is identified.
It is recommended to use hardware components that are able to apply the necessary PETs as early as possible, i.e. at the data collection points or their gateways. Simply put, there is a need for “intelligence” in the data-collecting subsystems such that privacy policies can be retrieved and understood, policies can be attached to data sets, and data minimization and aggregation can be applied near the data collection points.
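The idea of attaching (“sticking”) policies to data sets and minimising near the collection point can be sketched as follows; all names and the policy format are invented for illustration:

```python
# Sketch of "sticky" policies: at the gateway, each data set is wrapped
# together with the policy under which it was collected, so downstream
# components can only use it through a policy check. All names invented.

class StickyData:
    def __init__(self, values, policy):
        self._values = values
        self.policy = policy              # the policy travels with the data

    def use(self, purpose):
        if purpose not in self.policy["purposes"]:
            raise PermissionError(purpose)
        return self._values

def collect_at_gateway(raw_readings, policy):
    # Data minimisation near the collection point: aggregate before
    # anything leaves the gateway, then attach the policy.
    avg = sum(raw_readings) / len(raw_readings)
    return StickyData({"avg": avg}, policy)

packet = collect_at_gateway([20, 21, 22], {"purposes": {"billing"}})
print(packet.use("billing"))   # → {'avg': 21.0}
```

Raw per-reading values never leave the gateway; only the aggregate does, and only for the purposes named in the attached policy.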
The hardware components, especially for the smart devices or the “things”, should be able to run security and privacy protocols and the software should be securely managed remotely. This allows software to be adapted to changes over time, e.g. to fix flaws, or to add new more advanced mechanisms. Also it allows to re-purpose hardware for other applications in the future, reducing costs for physical re-deployments.
It should be noted that the application of security and privacy mechanisms does not necessarily imply increased hardware costs. Efficient PETs should be identified and implemented during privacy engineering using state-of-the-art technologies, in particular for constrained environments, such as elliptic curve cryptography, see (Lejla et al. 2006), or delegation of heavy computation such as the generation and verification of policies, see (Cuellar 2015). Advances in security and privacy research yield mechanisms with continuous efficiency improvements, which can be deployed as updates on suitable hardware (again, flexibility is key).
Furthermore, the deployed hardware itself should be updatable. This can be achieved by making it interoperable, e.g. through offering common APIs and adhering to agreed standards. This allows hardware flaws to be fixed, reducing the risk of hardware vendor lock-in and the security risks specific to “monoculture” deployments.
7.3 Understanding Privacy Engineering
7.3.1 Privacy Engineering in Organisations
The centre part shows two important viewpoints: the management viewpoint, which focuses on elements (processes, practices, concepts) that are important to managers in their activities; and the engineering viewpoint, which focuses on elements (engineering requirements, design, implementation, verification, maintenance) that are important to engineers in their activities.
The right part shows process concerns important to both managers and engineers: risk assessment, system development, and system compliance. Risk assessment focuses on quantifying privacy risks in systems dealing with personal data and mitigating them by reducing their likelihood or their consequences. System development focuses on specifying and implementing technical solutions for privacy control in systems dealing with personal data; it can involve decisions to integrate subsystems supplied by third parties. System compliance focuses on ensuring that an organisation is doing what is expected and that developed systems have the expected behaviour. System compliance involves challenging processes such as privacy protection assurance, evaluation and verification, and allows external stakeholders (e.g. consumers, policy makers, procurers) to assess whether they can trust the organisation and/or the systems.
The difference between the management viewpoint and the engineering viewpoint is the following: management focuses on what system is developed and on checking that it is developed properly, while engineering focuses on how a system is developed and on testing it properly. We observe that managers and engineers do not work in isolation, because they build the system together. They must interact in such a way that sufficiently precise information is exchanged. Interactions can also be iterative, for instance when using the Deming cycle. Examples of interactions are discussions on legal and ethical aspects, which are treated at management level. Managers are concerned that the resulting undertakings comply with regulations and also meet ethical principles. They must interact with engineers so that the latter understand the requirements they must meet.
Privacy impact assessment (PIA) is a risk assessment process concern. For instance, ISO 29134 (2016 draft) is a privacy impact assessment standard under development. The CNIL PIA (2015) and the data protection impact assessment template specified by the smart grid task force (EC Data Protection Impact Assessment Template 2014) are other examples.
Privacy analysis is a system development process concern. The OASIS privacy management reference model and methodology or OASIS-PMRM (2013) is a standard focusing on privacy requirement analysis enabling the identification of a set of privacy management functions. Using a code of practice is another system development process concern. ISO 29151 (2016 draft) is a code of practice for personally identifiable information standard under development that will provide recommendations on the practice and use of privacy controls.
Threat analysis is a risk assessment process concern. For instance, LINDDUN (2015) is a methodology that can be used by engineers to identify threats and design mitigation solutions.
Architecture is a system development process concern. For instance PEARS (Kung 2014) explains how to specify an architecture which improves privacy using architecture concepts such as quality attributes and architecture tactics. It is based on Carnegie-Mellon work on software architecture (Software Architecture in Practice 2012). It provides a list of architecture strategies (minimisation, enforcement, transparency, modifiability).
Design strategy is another system development concern. Hoepman (Hoepman 2014) describes how a system can be designed following a number of strategies and how to identify and describe reusable solutions called privacy patterns.
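As an example of such a design strategy, Hoepman's MINIMISE strategy can be sketched as a reusable function that selects only the attributes a purpose needs and generalises identifying values before the record leaves the collection context. The field names and purpose vocabulary are assumptions for this sketch:

```python
# Sketch of the MINIMISE design strategy (Hoepman 2014) as a reusable
# pattern: keep only the attributes a purpose needs, and coarsen what
# remains. Field names and the purpose mapping are invented.

NEEDED = {"heating-control": {"room", "temperature"}}

def minimise(record, purpose):
    """Select only the attributes required for the stated purpose."""
    return {k: v for k, v in record.items() if k in NEEDED[purpose]}

def generalise_age(record):
    """Coarsen an identifying attribute instead of storing it exactly."""
    out = dict(record)
    if "age" in out:
        out["age"] = (out["age"] // 10) * 10   # 37 -> 30: less identifying
    return out

raw = {"room": "2.17", "temperature": 21.4, "occupant": "Bob", "age": 37}
print(minimise(raw, "heating-control"))  # → {'room': '2.17', 'temperature': 21.4}
```

Each such strategy can be packaged as a privacy pattern and reused across systems, which is exactly the reusability Hoepman argues for.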
The integration of privacy engineering into development process methodologies is a system development concern. For instance, an organization could use the agile development methodology (Beck et al. 2015), a design methodology which focuses on flexible and evolutionary development, allowing engineers to develop prototypes that can iteratively evolve into improved versions.
Software documentation is another system development process concern. OASIS Privacy by Design Documentation for Software Engineers (PbD-SE) (OASIS 2016) is an example of standard under development.
Examples in Figs. 7.5 and 7.6 are not meant to be a definitive categorization of viewpoints (management vs. engineering). Standards such as ISO 29134 (2016 draft), OASIS-PMRM (2013) and ISO 29151 (2016 draft) are also useful from an engineering viewpoint. In particular, OASIS-PMRM explains to engineers how to apply an iterative process to identify privacy management functions and associated requirements.
7.3.2 The Need for a Privacy Engineering Framework
The previous section showed the many aspects of privacy engineering and the complexity of integrating them in organisations. As we have shown earlier, there is an ample range of privacy controls that could be applied in IoT systems but a structured engineering approach is required for appropriate selection and integration. To this end, an integrated vision is necessary. We suggest that this can be achieved by defining a privacy engineering framework.
It may be useful to define the meaning of the term framework. The Cambridge online dictionary defines a framework as a “system of rules, ideas, or beliefs that is used to plan or decide something”, e.g. a legal framework for resolving disputes. The Oxford online dictionary defines a framework as “a basic structure underlying a system, concept, or text”, e.g. the theoretical framework of political sociology. Finally, The Free Dictionary defines a framework as “a set of assumptions, concepts, values, and practices that constitutes a way of viewing reality”. In the rest of this section we use this latter definition.
Interestingly, there is already a standard, ISO 29100 (2011), which defines a privacy framework. As stated in the standard, “it specifies a common privacy terminology; it defines the actors and their roles in processing personally identifiable information (PII); it describes privacy safeguarding considerations; and it provides references to known privacy principles for information technology”. ISO 29100 therefore provides a set of assumptions, concepts, values and practices for privacy in organisations dealing with personal data. A detailed look at ISO 29100 shows that it contains two parts: a set of concepts (actors and roles, interactions, recognizing personally identifiable information, privacy safeguarding requirements, privacy policies, and privacy controls) and a set of privacy principles (consent and choice; purpose legitimacy and specification; collection limitation; data minimization; use, retention and disclosure limitation; accuracy and quality; openness, transparency and notice; individual participation and access; accountability; information security; privacy compliance).
From a privacy engineering viewpoint we believe that a number of concepts and principles should be added. Paraphrasing ISO 29100, a privacy engineering framework is needed which specifies a common privacy engineering terminology; which defines the actors and their roles in the engineering of systems integrating privacy management; which describes considerations on engineering privacy safeguards; and which provides references to known privacy engineering principles.
There are a number of reasons to define a privacy engineering framework. First, we need a convergence of terms. A number of concepts and principles for privacy engineering have been debated in recent years, for instance privacy-by-design principles such as privacy-by-policy and privacy-by-architecture (Spiekermann and Cranor 2009), minimization (Gürses et al. 2011), enforcement and transparency (Kung et al. 2011), privacy engineering objectives (predictability, manageability, disassociability; NISTIR 8062 2015), and privacy protection properties (unlinkability, transparency, intervenability; Hansen et al. 2015).
Second, we need guidance on how to form an integrated view of today’s complex maze of initiatives and how to extract the concerns that are important from an engineering viewpoint. Most guideline documents today are management-oriented rather than engineering-oriented, which makes them difficult for engineers to use.
The call for a privacy engineering framework was made in a technical paper published by MITRE in July 2014. It highlights the need to address privacy from both an organisational and an engineering viewpoint. The organisational viewpoint integrates elements such as privacy program management, a compliance-focused risk assessment approach, strategy and planning, and policies. The engineering viewpoint integrates elements such as privacy testing, privacy-sensitive design decisions, privacy requirements and control selection, and system-focused risk assessment. The paper argues that the latter elements are not well taken into account.
7.3.3 Analysis of Privacy Engineering
This section provides an analysis of what is meant by privacy engineering, relying on research contributions in the area and showing that privacy engineering goes beyond security engineering. It first covers suggested definitions of privacy engineering, privacy engineering objectives and privacy protection properties; it then explains the advances made in understanding the resulting engineering lifecycle, covering the specific capabilities needed for privacy engineering, i.e. the operationalisation of privacy principles, the application of design strategies, and the integration of risk management.
The Privacy Engineer’s Manifesto (2014) states that it uses the term privacy engineering in recognition that the techniques used to design and build other types of purposefully architected systems can and should be deployed to build or repair systems that manage data related to human beings. It then explains that the book discusses how to develop good functionalized privacy policies, and shows recognized methodologies and modeling approaches adapted to solve privacy problems. In May 2015, NIST published a report focusing on privacy risk management (NISTIR 8062 2015) which states that, for the purposes of the publication, privacy engineering is a collection of methods to support the mitigation of risks to individuals arising from the processing of their personal information within information systems.
In the work carried out by PRIPARE to define a privacy engineering methodology (2016; Notario et al. 2015), it was realized that two viewpoints must be combined: a goal-oriented design viewpoint and a risk-oriented viewpoint. We therefore define privacy engineering as a collection of methods to design and build privacy capabilities while integrating the treatment of risks to individuals arising from the processing of their personal information.
Privacy Engineering Objectives
In the development of systems, the concern is agreeing on and achieving system or product qualities. ISO 25010 (2011), which focuses on systems and software quality requirements, states the following: “this can be achieved by defining the necessary and desired quality characteristics associated with the stakeholders' goals and objectives for the system”. The standard also distinguishes between quality in use of a system, which characterizes the impact that the product (system or software product) has on stakeholders, and product quality. Examples of qualities in use are effectiveness, efficiency and freedom from risk. The standard further provides examples of product quality categories, such as functional suitability, performance efficiency, compatibility, security or portability, and, for each category, examples of qualities. For instance confidentiality, integrity and availability are well-known qualities in the security category.
The NIST report (NISTIR 8062 2015) defines three privacy engineering objectives, predictability, manageability, and disassociability. Predictability is the enabling of reliable assumptions by individuals, owners, and operators about personal information and its processing by an information system. Manageability is providing the capability for granular administration of personal information including alteration, deletion, and selective disclosure. Disassociability is enabling the processing of personal information or events without association to individuals or devices beyond the operational requirements of the system.
Compared to ISO 25010, we conclude that privacy engineering integrates new concerns and qualities. While predictability and manageability are well-known categories, there is a need to express the specific capabilities that are related to personal information processing. Disassociability can be considered a privacy-specific engineering objective.
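As an illustrative sketch (not part of NISTIR 8062 itself), the disassociability objective can be supported by keyed pseudonymisation: a keyed hash derives a stable pseudonym so that processing can proceed without exposing the original identifier. The key and identifiers below are purely hypothetical.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from an identifier using a keyed hash.

    Without the secret key, the pseudonym cannot be linked back to the
    identifier, supporting the disassociability objective."""
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical key; in practice it would be kept in secure storage.
key = b"example-secret-key"
p1 = pseudonymise("alice@example.com", key)
p2 = pseudonymise("alice@example.com", key)
assert p1 == p2                                  # same subject, same pseudonym
assert p1 != pseudonymise("bob@example.com", key)  # different subjects differ
```

The same mechanism also serves the "anonymisation/pseudonyms" privacy pattern discussed later in this chapter.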
Properties for Privacy Protection
(Figure: three privacy engineering axes. Each axis spans a range of protection settings: from full access to data to no access to data, or access by authorised entities only; from full access to services to no access to services; from all types of changes to no changes to data; from full process flexibility to no access to the process; from settings defined by the processor to settings defined by the individual; from full linkability of data to no linkable data; and from full disclosure of the process to no disclosure of the process.)
Privacy Engineering Lifecycle
(Table: privacy engineering life cycle processes and associated activities.)
- Environment & Infrastructure: organisational privacy architecture; promote privacy awareness.
- Analysis: preliminary and functional description, high-level privacy analysis, privacy requirements operationalisation, legal compliance.
- Design: privacy control design, architecture impact evaluation, privacy control detailed design.
- Implementation: privacy control implementation.
- Verification: privacy control verification (static analysis, dynamic analysis), accountability.
- Release: create incident response plan, create system decommissioning plan, final privacy review, publish privacy impact assessment report.
- Maintenance: execute incident response plan, privacy verifications, update privacy impact assessment report.
- Decommission: execute decommissioning plan.
The analysis phase includes a high-level functional analysis (e.g. identifying the types of data that have to be collected) and an operational requirements elicitation process, the objective of which is to transform high-level privacy principle requirements into operational engineering requirements. For instance the accountability principle can lead to a requirement on protection enforcement capabilities. The phase also includes a legal compliance process, which will involve privacy impact assessment activities.
The design phase covers the specification and design of privacy controls or PETs. For instance an access log capability could be a privacy control covering the accountability requirement. The design phase must also include an evaluation of the impact on the architecture. For instance the decision to keep data in a device instead of transferring it to the cloud has a strong architectural impact.
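To make the access-log example concrete, here is a minimal sketch (our own, not a prescribed design) of such a privacy control: every access to personal data is recorded with actor and purpose, so accesses can later be accounted for. Names such as `carer-7` are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessLog:
    """Minimal access-log privacy control: every read of personal data
    is recorded so that it can later be accounted for."""
    entries: list = field(default_factory=list)

    def record(self, actor: str, data_item: str, purpose: str) -> None:
        # Record who accessed which data item, when, and for what purpose.
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "data_item": data_item,
            "purpose": purpose,
        })

    def accesses_by(self, actor: str) -> list:
        """Return all recorded accesses performed by a given actor."""
        return [e for e in self.entries if e["actor"] == actor]

log = AccessLog()
log.record("carer-7", "heart_rate", "health monitoring")
log.record("family-2", "location", "safety check")
assert len(log.accesses_by("carer-7")) == 1
```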
The implementation phase deals with transforming a design into a built system. For the sake of simplicity, we assume that this phase also includes all integration activities. For instance, the resulting implementation could contain several privacy controls that need to be integrated.
The verification phase ensures that the system meets its operational privacy requirements. Verifying that privacy controls are correctly implemented can be done through different approaches, including static and dynamic verification. For instance Kost et al. (2011) elaborate on how privacy can be verified through ontologies. An accountability capability check has to be included in order to ensure that all the needed measures for protection (e.g. enforcing confidential access to some data) and for proof of protection (e.g. a provable log of access to data) have been implemented correctly.
The release phase focuses on publishing the first complete privacy impact assessment report after the final privacy review, and creating two important plans: the incident response plan which focuses on all the measures that are anticipated in case of the discovery of a privacy breach, and the system decommissioning plan which focuses on all the measures related to obligations to remove data.
The maintenance phase focuses on reacting to privacy breach incidents, i.e. the execution of the incident response plan, on preventive maintenance, and on updating the privacy impact assessment report.
The decommission phase focuses on executing the system decommissioning plan and on correctly dismantling the systems according to current legislation and policies.
Operationalisation of Privacy Principles
Privacy principles as defined by Ann Cavoukian or in ISO 29100 (2011) are the starting points for privacy engineering practice. The operationalization of privacy principles is a process that leads to the definition of the services necessary to support privacy management.
OASIS-PMRM (2013) defines a comprehensive methodology for operationalization. The methodology is iterative and based on a use case specification approach. It leads to the identification of a set of operational services (agreement, usage, validation, certification, enforcement, security, interaction, access). The PMRM methodology includes one important concept: touch points, defined as intersections of data flows with privacy domains or systems within privacy domains. From an engineering and management viewpoint they represent important interfaces that may lead to contractual and/or legal obligations (e.g. a touch point could be an interface with a business partner). Here is an example taken from the PMRM specification, related to electric vehicle charging: when a customer plugs into the charging station, the electric vehicle on-board system embeds communication functionality to send its identification and charge requirements to the customer communication portal. This functionality corresponds to a touch point.
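The touch point concept can be sketched mechanically: if each system is assigned to a privacy domain, then any data flow whose endpoints lie in different domains crosses a domain boundary and is a touch point. The domain and system names below are our own simplification of the PMRM electric-vehicle example, not identifiers from the specification.

```python
# Each system is assigned to a privacy domain; each data flow is a
# (source_system, target_system) pair. A flow whose endpoints lie in
# different domains crosses a domain boundary and is a touch point.
domains = {
    "ev_onboard_system": "vehicle_owner_domain",
    "customer_portal": "utility_domain",
    "billing_backend": "utility_domain",
}

flows = [
    ("ev_onboard_system", "customer_portal"),  # id + charge requirements
    ("customer_portal", "billing_backend"),    # internal to the utility
]

touch_points = [
    (src, dst) for (src, dst) in flows
    if domains[src] != domains[dst]
]
# Only the flow crossing from the vehicle owner's domain to the
# utility's domain is a touch point.
assert touch_points == [("ev_onboard_system", "customer_portal")]
```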
Application of Design Strategies
(Table: design strategies, following Hoepman (2014), and examples of privacy patterns.)
- Minimise: amount of processed personal data restricted to the minimal amount possible. Patterns: select before you collect; anonymisation/pseudonyms.
- Hide: personal data, and their interrelationships, hidden from plain view. Patterns: storage and transit encryption of data; hide traffic patterns; attribute-based credentials; anonymisation/pseudonyms.
- Separate: personal data processed in a distributed fashion, in separate compartments whenever possible. Patterns: isolation and virtualisation.
- Aggregate: personal data processed at the highest level of aggregation and with the least possible detail in which it is (still) useful. Patterns: aggregation over time (used in smart metering); dynamic location granularity (used in location-based services).
- Inform: data subjects adequately informed whenever personal data is processed. Patterns: platform for privacy preferences; data breach notification.
- Control: data subjects provided agency over the processing of their personal data. Patterns: user-centric identity management; end-to-end encryption support control.
- Enforce: a privacy policy compatible with legal requirements in place and enforced. Pattern: sticky policies and privacy rights management.
- Demonstrate: ability to demonstrate compliance with the privacy policy and legal requirements. Patterns: privacy management systems; use of logging and auditing.
Integration of Risk Management
As pointed out previously, risk assessment is a major concern of privacy management, and one major aspect of privacy engineering is the proper integration of risk management activities. Risk management is a well-known domain: ISO 31000 (2009) is the overarching standard, providing generic guidelines for the design, implementation and maintenance of risk management processes. Security risk is also quite well covered: for instance ISO/IEC 27005 (2011) is a standard on security risk management; TVRA (ETSI 2011) is a security risk analysis methodology published by ETSI; and STRIDE (Meier 2003) is an approach which focuses on the threats associated with desirable security properties: spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege (hence STRIDE) respectively threaten the authentication, integrity, non-repudiation, confidentiality, availability and authorization properties.
LINDDUN categories of privacy threats, each of which compromises a corresponding privacy property:
- Linkability, threatening unlinkability: hiding the link between two or more actions, identities, and pieces of information.
- Identifiability, threatening anonymity and pseudonymity: hiding the link between an identity and an action or a piece of information.
- Non-repudiation, threatening plausible deniability: the ability to deny having performed an action that other parties can neither confirm nor contradict.
- Detectability, threatening undetectability and unobservability: hiding the user’s activities.
- Disclosure of information, threatening confidentiality: hiding the data content or controlled release of data content.
- Unawareness, threatening content awareness: the user’s consciousness regarding his own data.
- Non-compliance, threatening policy and consent compliance: processing personal data in line with the stated policies and the user’s consent.
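LINDDUN elicits threats by mapping each element of a data flow diagram to the threat categories that may apply to it. The sketch below illustrates that mechanism; the mapping table is an illustrative subset of our own choosing, not the full LINDDUN mapping.

```python
# Simplified LINDDUN-style elicitation: map each data-flow-diagram
# element type to threat categories to examine. This mapping is an
# illustrative subset, not the full LINDDUN table.
APPLICABLE_THREATS = {
    "data_flow": ["linkability", "identifiability", "detectability",
                  "disclosure_of_information"],
    "data_store": ["linkability", "identifiability",
                   "disclosure_of_information", "non_compliance"],
    "entity": ["linkability", "identifiability", "unawareness"],
}

def elicit(elements):
    """Return (element, threat) pairs to examine during risk analysis."""
    return [(name, threat)
            for name, kind in elements
            for threat in APPLICABLE_THREATS[kind]]

pairs = elicit([("sensor->phone", "data_flow"), ("user", "entity")])
assert ("user", "unawareness") in pairs
assert len(pairs) == 7  # 4 threats for the flow + 3 for the entity
```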
The requirement analysis phase takes place in conjunction with risk management analysis. Risk management focuses on identifying the assets to protect in the system under development and the threats that might compromise the accomplishment of the privacy principles for these assets. A treatment is then proposed to address the risk associated with each threat. This treatment may range from doing nothing (accepting the risk) to including requirements that avoid or reduce the risk.
Requirement analysis is goal oriented: each principle is considered a high-level goal that the system must fulfil. Each goal is then refined into a set of lower-level guidelines required to meet the goal, and a success criterion is proposed to address each guideline. The set of treatments and success criteria are jointly referred to as operational requirements for privacy controls.
The design phase has the objective of designing the privacy controls. They are realised by measures designed to meet the success criteria and by countermeasures designed to address the treatments.
There is a correspondence between the threat–treatment and guideline–criterion concepts: a threat is the counterpart of a guideline, and a treatment is the counterpart of a success criterion (both are operational requirements); likewise, a countermeasure is the counterpart of a measure (both are privacy controls).
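This correspondence can be captured in a small data model (our own sketch, with illustrative field values): both analysis paths produce operational requirements of the same shape, each answered by a privacy control.

```python
from dataclasses import dataclass

# Risk-oriented and goal-oriented artefacts share one shape: an
# operational requirement (treatment or success criterion) that is
# answered by a privacy control (countermeasure or measure).
@dataclass
class OperationalRequirement:
    origin: str   # "threat" (risk-oriented) or "guideline" (goal-oriented)
    kind: str     # "treatment" or "criterion"
    text: str

@dataclass
class PrivacyControl:
    kind: str     # "countermeasure" or "measure"
    satisfies: OperationalRequirement
    description: str

req = OperationalRequirement("threat", "treatment",
                             "limit retention of location data")
ctrl = PrivacyControl("countermeasure", req,
                      "automatic deletion after 30 days")
assert ctrl.satisfies.kind == "treatment"
```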
Operationalisation example (data minimization and proportionality)
Example on data minimization and proportionality:
- Privacy principle: data minimization and proportionality.
- Threat: accidental data leak, with a risk relevance rating (negligible, limited, significant, maximum).
- Goal: avoid or minimise the use of personal data along its whole lifecycle, with a goal relevance rating (less relevant, relevant, relevant and essential).
- Operational requirements (treatments or criteria): keep data from different services or different parties separated, and avoid combining them; when some personal data is no longer needed for the specified purpose, delete or anonymise all the back-up data corresponding to that personal data.
- Privacy controls: architecture change to keep personal data in the smart phone; anonymisation and attribute-based credentials.
- Verification: conformance testing of the architecture (personal data kept in the smart phone); conformance testing of the anonymisation.
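The deletion requirement above (delete or anonymise data once it is no longer needed for the specified purpose) can be sketched as a retention purge; the record structure and 30-day window are our own illustrative assumptions.

```python
from datetime import date, timedelta

def purge_expired(records, today, retention_days=30):
    """Keep only records collected within the retention window;
    back-up copies would be treated the same way."""
    cutoff = today - timedelta(days=retention_days)
    return [r for r in records if r["collected"] >= cutoff]

records = [
    {"id": 1, "collected": date(2016, 1, 1)},   # older than 30 days
    {"id": 2, "collected": date(2016, 2, 20)},  # within the window
]
kept = purge_expired(records, today=date(2016, 3, 1))
assert [r["id"] for r in kept] == [2]
```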
Operationalisation example (transparency)
Example on transparency:
- Privacy principle: transparency (ex post transparency).
- Threat: a data leak occurred and the organisation does not know which operation caused the leak, with a risk relevance rating (negligible, limited, significant, maximum).
- Goal: provide a public privacy notice to the data subject; goal relevance rating (less relevant, relevant, relevant and essential): relevant and essential.
- Operational requirements (treatments or criteria): describe how the organisation processes personal data; describe the internal uses of personal data.
- Privacy control: secure log of access and operations.
- Verification: battery of penetration tests.
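A secure log of access and operations is commonly made tamper-evident by hash chaining: each record commits to the previous one, so altering history is detectable. The following is a minimal sketch of the idea (not a hardened design; entry strings are hypothetical).

```python
import hashlib

def append(log, entry):
    """Append an entry to a hash-chained log: each record's digest
    covers the previous digest, so tampering breaks the chain."""
    prev = log[-1]["digest"] if log else "0" * 64
    digest = hashlib.sha256((prev + entry).encode("utf-8")).hexdigest()
    log.append({"entry": entry, "digest": digest})

def verify(log):
    """Recompute the chain and check every stored digest."""
    prev = "0" * 64
    for rec in log:
        expected = hashlib.sha256(
            (prev + rec["entry"]).encode("utf-8")).hexdigest()
        if expected != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

log = []
append(log, "carer-7 read heart_rate")
append(log, "admin exported report")
assert verify(log)
log[0]["entry"] = "nothing happened"  # simulate tampering
assert not verify(log)
```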
7.4 Privacy Engineering Framework for the IoT
The previous sections showed that an understanding has been reached on privacy in the Internet of Things and on privacy engineering. This section describes a privacy engineering framework for the IoT.
We first propose a structure containing four sections: a concept section, a stakeholder section, a process section and an organisation section. The concept section is domain independent, i.e. we believe it can be used for any IT domain. The other sections are specific to the IoT. They further distinguish between privacy engineering of IoT systems and privacy engineering of IoT subsystems. The rationale for this distinction is to address the profound difference between integrators (who know the use case they have to cover) and suppliers (who only know the domain they have to cover).
7.4.1 Structure of a Privacy Engineering Framework
(Table: privacy engineering framework, listing the categories of artefacts and the items in the framework.)
- Concepts: privacy engineering and privacy-by-design; privacy engineering objectives; privacy protection properties; privacy engineering principles.
- Stakeholders: data controller, data processor, integrator, supplier.
- Processes: privacy control requirements; privacy control design.
- Organisations: environment and infrastructure; lifecycle approach.
The framework includes the following categories of artefacts: the concepts, the stakeholders, the processes to build a system, and the organisations building the system.
The following sections focus on each category of artefacts. In order to allow for easy use of the framework, all the important definitions in the framework are listed in tables, while the rest of the text focuses on rationale and explanations.
7.4.2 Concepts in the Framework
This section is common to all domains, i.e. it is not specific to the IoT. It covers the definition of the following concepts: privacy-by-design, privacy engineering, privacy engineering objectives and privacy engineering principles.
Privacy Engineering and Privacy-by-Design
- Privacy-by-design: institutionalisation of the concepts of privacy and security in organisations, and integration of these concepts in the engineering of systems.
- Privacy engineering: collection of methods to design and build privacy capabilities while integrating the treatment of risks to individuals arising from the processing of their personal information.
The privacy-by-design definition is taken from a blog entry contributed by PRIPARE. The blog entry also provides other definitions: (1) an approach to protecting privacy by embedding it into the design specifications of technologies, business practices, and physical infrastructures (this definition is inspired by Ann Cavoukian); (2) an approach to system engineering which takes into account privacy and measures to protect ICT assets during the whole engineering process; (3) embedding privacy and security in the technology and system development from the early stages of conceptualisation and design, and institutionalizing privacy and security considerations in organisations; (4) applying a set of principles from the design phase of ICT systems onwards in order to mitigate security and privacy concerns, guiding designers’ and implementers’ decisions throughout the development of the systems.
Privacy Engineering Objectives
Privacy engineering objectives:
- Predictability: enabling reliable assumptions by individuals, owners, and operators about personal information and its processing by a system.
- Manageability: providing the capability for granular administration of personal information, including alteration, deletion, and selective disclosure.
- Disassociability: enabling the processing of personal information or events without association to individuals or devices beyond the operational requirements of the system.
This section is taken from (NISTIR 8062 2015).
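The manageability objective, in particular, is concrete enough to sketch: a store of personal attributes supporting granular alteration, deletion and selective disclosure. The class and attribute names below are our own illustration, not part of NISTIR 8062.

```python
class PersonalDataStore:
    """Toy store illustrating manageability: granular alteration,
    deletion, and selective disclosure of personal attributes."""

    def __init__(self, attributes):
        self._attrs = dict(attributes)

    def alter(self, name, value):
        """Granular alteration of a single attribute."""
        self._attrs[name] = value

    def delete(self, name):
        """Granular deletion of a single attribute."""
        self._attrs.pop(name, None)

    def disclose(self, allowed):
        """Selective disclosure: return only the allowed attributes."""
        return {k: v for k, v in self._attrs.items() if k in allowed}

store = PersonalDataStore({"name": "Alice", "age": 79, "city": "Lyon"})
store.alter("city", "Nice")
store.delete("age")
assert store.disclose({"city"}) == {"city": "Nice"}
```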
Privacy Protection Properties
Privacy protection properties:
- Unlinkability: ensures that privacy-relevant data cannot be linked across privacy domains or used for a different purpose than originally intended.
- Transparency: ensures that all privacy-relevant data processing, including the legal, technical and organizational setting, can be understood and reconstructed.
- Intervenability: ensures that data subjects, operators and supervisory authorities can intervene in all privacy-relevant data processing.
This section is taken from (Hansen et al. 2015).
Privacy Engineering Principles
Privacy engineering principles, each with its description and rationale:
- Integration of privacy engineering objectives. Privacy engineering activities extend other engineering objectives, focusing on predictability and manageability of managed data and on disassociability. Rationale: specific engineering objectives for privacy management are needed.
- Integration of risk management. Privacy engineering activities must be carried out jointly with the risk management activities needed to ensure proper handling of privacy; ISO 29134 and associated practices can be used as a reference. Rationale: while an engineering viewpoint must be taken, engineers must include a risk management perspective.
- Integration of compliance. Privacy engineering activities must be carried out jointly with compliance checking (e.g. technical obligations, legal obligations). Rationale: while an engineering viewpoint must be taken, engineers must include a compliance perspective; this can involve impact assessment documents, assurance and conformance activities.
- Integration of privacy protection properties. Privacy engineering activities must integrate the specific protection properties of unlinkability, transparency and intervenability. Rationale: specific privacy protection properties extend other requirement properties.
- Integration of goal orientation in requirement engineering. The identification of requirements in privacy engineering must include a goal-orientation approach, where engineers describe requirements in terms of the goals that must be met by systems. Rationale: goal orientation is needed for engineering; it complements requirements elicited through risk analysis.
- Data-oriented design strategies. Privacy engineering includes data-oriented design strategies, which can help address the unlinkability objective and often lead to architectural decisions (privacy enhancing architectures). Rationale: data-oriented design strategies help meet unlinkability properties.
- Process-oriented design strategies. Privacy engineering includes process-oriented design strategies, which can help address the transparency and intervenability objectives. Rationale: process-oriented design helps meet transparency and intervenability properties.
- Privacy engineering extends to the entire lifecycle. Privacy management extends over the entire lifecycle; consequently privacy engineering must extend over the entire lifecycle as well.
- Privacy engineering knowledge capitalisation. Privacy engineering relies on knowledge capitalisation: privacy controls can be stored and reused (e.g. through privacy patterns), and processes can be stored in organisation libraries. Rationale: privacy-by-design must be institutionalised within organisations.
7.4.3 Stakeholders in the Framework
- Data controller: stakeholder operating a service that involves personal data collection. Example: an operator of a social care network to assist elderly people.
- Data processor: stakeholder processing personal data on behalf of a data controller. Example: a cloud platform operator providing data storage and processing capability.
- Integrator: stakeholder integrating supplier systems in order to build a service. Example: the developer of a turnkey social care system.
- Supplier: stakeholder developing a subsystem that is subsequently integrated. Examples: the designer of a sensor that can be integrated in the turnkey social care system; the designer of a smart phone operating system that is subsequently used to run a social care network capability.
Privacy engineering for subsystems is not the same as privacy engineering for systems, because suppliers of subsystems are generally not aware of the privacy requirements of the system in which the subsystem will be integrated.
7.4.4 Processes in the Framework
This section focuses on process considerations in IoT privacy engineering, covering two items: the process for eliciting privacy control requirements and the process for designing privacy controls. The framework distinguishes between the processes used to build an IoT system and those used to build an IoT subsystem.
IoT systems are systems that are under the responsibility of a data controller, a data processor or an integrator carrying out a turnkey development for a data controller or a data processor. From a privacy point of view, the purpose for which personal data is collected is known in an IoT system. Here is an example: let the IoT system consist of a set of body sensors monitoring specific health data, a smart phone managing the data collected by the sensors, and an information system facility at the cloud level managing social networking capabilities between the user of the system, carers, family members and friends.
IoT subsystems are systems that will be used by an integrator carrying out the development of an IoT system. From a privacy point of view, the purpose for which personal data can be collected in the IoT system in which the IoT subsystem will be integrated is not known beforehand. This has a strong impact on how privacy engineering is carried out: while it is possible to determine privacy control requirements at the IoT system level, only generic requirements can be determined at the IoT subsystem level.
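The system/subsystem split can be illustrated with a small interface sketch (all names are hypothetical): a supplier exposes a generic privacy control *support* feature, and the integrator, who knows the system's purpose, binds it to a concrete policy at integration time.

```python
import abc

class PrivacyControlSupport(abc.ABC):
    """Generic support feature a subsystem supplier can offer without
    knowing the final purpose of the personal data."""
    @abc.abstractmethod
    def configure(self, policy: dict) -> None: ...

class SensorRetentionSupport(PrivacyControlSupport):
    """Hypothetical sensor feature: configurable on-device retention."""
    def __init__(self):
        self.max_age_days = None

    def configure(self, policy):
        # The concrete limit is chosen by the integrator, who knows
        # the purpose of the IoT system the sensor goes into.
        self.max_age_days = policy["retention_days"]

# Integrator side: bind the generic feature to a system-level policy.
support = SensorRetentionSupport()
support.configure({"retention_days": 7})
assert support.max_age_days == 7
```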
Privacy Control Requirements for IoT
Artefacts used in operationalisation in the IoT:
- Risk-oriented analysis. Input: assets to protect and the threats on those assets. Output: privacy control requirements, expressed as features to address the threats.
- Goal-oriented analysis. Input: assets to protect and the concerns on those assets. Output: privacy control requirements, expressed as features to address the concerns.
The specification of features requires a number of analysis steps, as shown below (based on OASIS-PMRM 2013).
Analysis steps for IoT systems
- Definition of the scope of the use case involving the system. Output: inventory of applications and services associated with the system.
- Detailed use case analysis. Output: inventory of stakeholders, data flows and touch points (i.e. data exchanges with other stakeholders).
- Identification of privacy control requirements by carrying out a risk-oriented and a goal-oriented analysis, relying on risk analysis practices. Outputs: inventory of threats and inventory of privacy controls to address the threats; inventory of concerns and inventory of privacy controls to address the concerns.
Analysis steps for IoT subsystems
- Identification of use cases involving the subsystem. Output: inventory of use cases involving the subsystem, each associated with data flow diagrams including touch points.
- Identification of the range of privacy control support requirements. Output: inventory of privacy control support requirements, with each use case associated with privacy control needs scenarios.
Design of Privacy Controls for IoT
Design steps for IoT systems
- Starting from the privacy control requirements, global design of the privacy controls. Output: inventory of design strategies and privacy controls.
- If needed, identification of architecture decisions (PEARs), relying on architecture design practices. Output: inventory of architecture decisions associated with privacy controls.
- If needed, evaluation of the architecture, relying on architecture evaluation practices.
- Detailed design of the privacy controls, with identification of privacy patterns where possible. Output: inventory of privacy patterns.
- Identification of the resulting privacy control support in subsystems. Output: inventory of supplier products with privacy control support features.
- If possible, identification of subsystem composition rules. Output: inventory of IoT subsystem composition schemes.
- Evaluation of privacy control effectiveness (e.g. privacy quantification), relying on privacy control usage return on experience.
- Evaluation of compliance.
Design steps for IoT subsystems
- Design of privacy control features. Output: inventory of features.
- If needed, identification of architecture decisions (PEARs). Output: inventory of architecture decisions.
- Identification of the resulting privacy control features in subsystems. Output: inventory of privacy control support features.
7.4.5 Organisations in the Framework
This section focuses on organisation considerations in IoT privacy engineering, covering two items: the overall environment and infrastructure process and the lifecycle approach.
Environment & Infrastructure for IoT
Lifecycle Approach for IoT
Organisations integrating privacy in their engineering activities must take into account all phases of the lifecycle. In order to allow for easier integration of privacy engineering activities into existing methodologies (waterfall, agile, prototyping), it is advised to structure a privacy engineering methodology into phases and processes that can then be easily integrated in an organisation development methodology.
Phases and activities for IoT systems
The table maps each privacy engineering phase to activities adapted to IoT systems and to the corresponding system life cycle processes (ISO 15288):
- Environment & Infrastructure: organisational privacy architecture (infrastructure management process); promote privacy awareness (project privacy portfolio management process).
- Analysis: use case functional description (stakeholder privacy requirements definition process); use case privacy analysis and use case privacy requirements (privacy requirements analysis process).
- Design: privacy control design and privacy control detailed design (privacy architectural design process).
- Implementation: privacy control implementation (privacy implementation process).
- Verification: privacy control verification, accountability, static analysis, dynamic analysis (privacy verification process).
- Release: create incident response plan, create system decommissioning plan, final privacy review, publish PIA report.
- Maintenance: execute incident response plan, privacy verifications.
- Decommission: execute decommissioning plan.
Phases and activities for IoT subsystems:
- Environment & infrastructure: organisational privacy architecture; promote privacy awareness.
- Analysis: domain functional description; domain privacy analysis; privacy support requirements.
- Design: privacy support design; privacy support detailed design.
- Verification: privacy support verification.
- Release: create privacy support documentation.
As we have explained, privacy issues are of particular relevance in the Internet of Things. Besides the large amount of personal data amassed by such systems, traditional privacy measures based on well-established principles such as transparency, purpose specification, legitimate use or consent break down in the face of IoT features such as pervasive sensing, indiscriminate data collection, invisible interfaces, and deferred de-anonymization. However relevant it may be, privacy is oftentimes dismissed or overlooked when developing IoT systems. One of the possible reasons is that privacy initiatives in the field are still disparate, unorganized and unconnected. This lack of a systematic approach underlines the need for a framework that provides common grounds to methodically capture and address privacy issues in the IoT.
This paper represents a first step towards the description of such a privacy engineering framework for the Internet of Things. The framework draws on complementary perspectives. A conceptual framework covers privacy engineering, privacy objectives, principles, properties, life cycle, and strategies. These concepts lay the foundations for a series of privacy engineering processes, the development activities involved in privacy engineering: requirements elicitation (including the operationalization of abstract privacy principles into concrete requirements), analysis (including the analysis of privacy threats), design (including strategies for architectural design), implementation (including a catalogue of controls useful for the IoT) and validation, together with processes that take place during the operation of a system (release, maintenance and decommissioning). These processes synthesize different perspectives: managerial and engineering, organizational and systemic, risk-based and goal-oriented, environment and infrastructure, etc.
Special emphasis is given to the fact that the IoT value chain includes roles such as the subsystem supplier, which are not usually considered by privacy regulations (focused only on data controllers and data processors), yet may have a decisive impact on the final properties of all the systems that include their products. The difference stems from the fact that the engineering of an IoT system requires the design of privacy controls, while the engineering of an IoT subsystem requires the design of privacy control supports, i.e. features that can be used to build privacy controls at integration time. This distinction also shows that, unfortunately, not much has been done in the area of privacy engineering for subsystems. We believe that future research is needed in this area.
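The controls-versus-supports distinction can be illustrated with a small sketch. In this hypothetical example (all class and method names are ours, not from the paper or any real product), a supplier ships a temperature-sensor subsystem that exposes privacy control supports, here configurable granularity reduction and on-device aggregation, and the integrator combines them at integration time into a concrete data-minimisation privacy control:

```python
# Hypothetical sketch: a subsystem exposes privacy control *supports*;
# the integrator composes them into an actual privacy *control*.

class TemperatureSensor:
    """IoT subsystem: the supplier ships supports (hooks), not policy decisions."""

    def __init__(self):
        self._rounding = None   # support: reduce data granularity
        self._window = []       # support: on-device aggregation buffer
        self._window_size = 1

    # --- privacy control supports (supplier-provided hooks) ---
    def set_granularity(self, decimals):
        """Coarsen readings to the given number of decimals."""
        self._rounding = decimals

    def set_aggregation_window(self, n):
        """Report only averages over n samples, never individual samples."""
        self._window_size = n

    def read(self, raw_value):
        """Return an aggregated reading, or None while the window fills up."""
        value = round(raw_value, self._rounding) if self._rounding is not None else raw_value
        self._window.append(value)
        if len(self._window) < self._window_size:
            return None  # withhold data until the window is full
        avg = sum(self._window) / len(self._window)
        self._window.clear()
        return avg

# --- integration time: the integrator builds a data-minimisation control ---
sensor = TemperatureSensor()
sensor.set_granularity(0)         # coarse readings only
sensor.set_aggregation_window(3)  # emit 3-sample averages, never raw samples

readings = [20.4, 21.6, 20.9]
out = [sensor.read(r) for r in readings]
# only the third call yields a value: the 3-sample average
```

The supplier cannot know the integrator's legal context or purposes, so it ships only the hooks; the policy (how coarse, how aggregated) is decided when the subsystem is integrated into a system with a known data controller.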
The research leading to these results has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration, through the projects PRIPARE (PReparing Industry to Privacy-by-design by supporting its Application in Research) and RERUM (REliable, Resilient and secUre IoT for sMart city applications) under grant agreements n° 610613 and 609094 respectively. We would also like to acknowledge the contributions of all the partners of both projects.
The three-layer structure is inspired by architecture discussions held within AIOTI working group 4 (http://www.aioti.eu/)
For an overview, the reader is referred to (Tragos et al. 2014a).
See the report published at http://www.iso.org/iso/internet_of_things_report-jtc1.pdf
A comprehensive list of the IERC projects can be found at http://www.internet-of-things-research.eu/partners.htm
Also called Plan-Do-Check-Act cycle.
Note that the integration of privacy engineering into Agile methodologies is a challenge because of the lack of a clear design phase.
Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein (www.datenschutzzentrum.de). Data protection authority in the federal state of Schleswig-Holstein, Germany
i.e. proactive not reactive; preventative not remedial, privacy as the default setting, privacy embedded into design, full functionality, end-to-end security, visibility and transparency, respect for user privacy.
i.e. consent and choice, purpose legitimacy and specification, collection limitation, data minimization, use retention and disclosure limitation, accuracy and quality, openness, transparency and notice, individual participation and access, accountability, information security, privacy compliance.
- David Sweeny, MIT Technology Review’s New Issue Reveals Annual 10 Breakthrough Technologies. Digital press release, 2013. Available via: http://www.technologyreview.com/pressroom/pressrelease/20130423-10-breakthrough-technologies/, last visited on 21.06.2016.
- Article 29 Data Protection Working Party: Opinion 03/2013 on purpose limitation, adopted on 2 April 2013. http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf, last visited on 21.06.2016.
- Lee Rainie and Janna Anderson, The Future of Privacy. Pew Research Center, December 18, 2014. Available via: http://www.pewinternet.org/2014/12/18/future-of-privacy/, last visited on 21.06.2016.
- Directive 95/46/EC. http://ec.europa.eu/justice/policies/privacy/docs/95-46-ce/dir1995-46_part1_en.pdf, last visited on 21.06.2016.
- General Data Protection Regulation: http://ec.europa.eu/justice/data-protection/reform/files/regulation_oj_en.pdf, last visited on 21.06.2016.
- Mandate M530 on privacy management of security projects and services. http://ec.europa.eu/growth/tools-databases/mandates/index.cfm?fuseaction=search.detail%26id=548, last visited on 21.06.2016.
- Privacy-by-Design. http://www.ipc.on.ca/english/Privacy/Introduction-to-PbD, last visited on 21.06.2016.
- Dave Evans, The Internet of Things: How the Next Evolution of the Internet Is Changing Everything. April 2011. http://www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf, last visited on 21.06.2016.
- Tragos, E. Z., Angelakis, V., Fragkiadakis, A., Gundlegard, D., Nechifor, C. S., Oikonomou, G., & Gavras, A. (2014a, March). Enabling reliable and secure IoT-based smart city applications. In Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on (pp. 111–116). IEEE.
- Pöhls, H. C., Angelakis, V., Suppan, S., Fischer, K., Oikonomou, G., Tragos, E. Z., & Mouroutis, T. (2014, April). RERUM: Building a reliable IoT upon privacy- and security-enabled smart objects. In Wireless Communications and Networking Conference Workshops (WCNCW), 2014 IEEE (pp. 122–127). IEEE.
- Bassi, A., Bauer, M., Fiedler, M., Kramp, T., Van Kranenburg, R., Lange, S., & Meissner, S. (2013). Enabling things to talk. Designing IoT Solutions With the IoT Architectural Reference Model, 163–211.
- AIOTI - High Level Architecture, 2015. https://docbox.etsi.org/smartM2M/Open/AIOTI/!!20151014Deliverables/AIOTI WG3 IoT High Level Architecture - Release_2_0-lines.pdf, last visited on 25.06.2016.
- International Organization for Standardization (ISO) (n.d.), Internet of Things Reference Architecture (IoT RA). Under development.
- Elias Tragos, et al., Deliverable D2.5 – Final System Architecture. RERUM Deliverable, 2014b. Available via: https://bscw.ict-rerum.eu/pub/bscw.cgi/d31979/RERUM%20deliverable%20D2_5.pdf, last visited on 21.06.2016.
- Organization for the Advancement of Structured Information Standards (OASIS), Privacy Management Reference Model and Methodology (PMRM), Version 1.0. July 2013. http://docs.oasis-open.org/pmrm/PMRM/v1.0/PMRM-v1.0.pdf, last visited on 21.06.2016.
- Leonid Titkov, Stefan Poslad, and Juan Jim Tan, An integrated approach to user-centered privacy for mobile information services. Applied Artificial Intelligence 20(2–4) (2006): 159–178.
- Florian Scheuer, Klaus Plößl and Hannes Federrath, Preventing profile generation in vehicular networks. In 2008 IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob'08). IEEE, 2008.
- Elias Tragos, et al., Deliverable D2.3 - System Architecture. RERUM Deliverable, 2014c. Available via: https://bscw.ict-rerum.eu/pub/bscw.cgi/d18321/RERUM%20deliverable%20D2_3.pdf, last visited on 21.06.2016.
- Siani Pearson and Marco Casassa Mont, Sticky policies: an approach for managing privacy across multiple parties. Computer 44(9) (2011): 60–68.
- Denise Demirel et al., Deliverable D4.4 - Overview of Functional and Malleable Signature Schemes. Prisma Cloud Deliverable, 2015. Available via: https://online.tugraz.at/tug_online/voe_main2.getvolltext?pCurrPk=86456, last visited on 21.06.2016.
- Mark Manulis, et al., Group Signatures: Authentication with Privacy. Federal Office for Information Security study, Cryptographic Protocols Group, Department of Computer Science, Technische Universität Darmstadt, Germany, 2012.
- Jan Camenisch and Els Van Herreweghen, Design and implementation of the idemix anonymous credential system. In Proceedings of the 9th ACM Conference on Computer and Communications Security. ACM, 2002.
- Lejla Batina, et al., Low-cost elliptic curve cryptography for wireless sensor networks. In Security and Privacy in Ad-Hoc and Sensor Networks (pp. 6–17). Springer Berlin Heidelberg, 2006.
- Jorge Cuellar, Santiago Suppan, and Henrich Poehls, Privacy-Enhanced Tokens for Authorization in ACE. Internet Draft, 2015.
- ISO/IEC 29134 (2016 draft) Draft International Standard. Information technology — Security techniques — Privacy impact assessment — Guidelines.
- CNIL Privacy Impact Assessment (2015). Methodology: https://www.cnil.fr/sites/default/files/typo/document/CNIL-PIA-1-Methodology.pdf; Tool: https://www.cnil.fr/sites/default/files/typo/document/CNIL-PIA-2-Tools.pdf; Good practices: https://www.cnil.fr/sites/default/files/typo/document/CNIL-PIA-3-GoodPractices.pdf, last visited on 21.06.2016.
- EC Data Protection Impact Assessment Template for Smart Grid and Smart Metering Systems (2014). https://ec.europa.eu/energy/sites/ener/files/documents/2014_dpia_smart_grids_forces.pdf, last visited on 21.06.2016.
- ISO/IEC 29151 (2016 draft) Draft International Standard. Code of practice for personally identifiable information protection.
- LINDDUN privacy threat analysis methodology, 2015. https://distrinet.cs.kuleuven.be/software/linddun/, last visited on 21.06.2016.
- Antonio Kung, PEARs: Privacy Enhancing Architectures. Annual Privacy Forum, May 21–22, 2014, Athens, Greece. Proceedings APF14 “Privacy Technologies and Policy”. Lecture Notes in Computer Science, Volume 8450, 2014, pp. 18–29.
- Len Bass, Paul Clements, and Rick Kazman, Software Architecture in Practice (3rd edition). Addison-Wesley, 2012.
- Jaap-Henk Hoepman, Privacy design strategies. In ICT Systems Security and Privacy Protection – 29th IFIP TC 11 International Conference, SEC 2014, Marrakech, Morocco.
- Kent Beck et al., Manifesto for Agile Software Development. Agile Alliance. http://agilemanifesto.org/, last visited on 29.09.2015.
- OASIS Privacy by Design Documentation for Software Engineers (PbD-SE) TC. https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=pbd-se, last visited on 21.06.2016.
- ISO/IEC 29100:2011. Information technology – Security techniques – Privacy framework.
- Sarah Spiekermann and Lorrie Cranor, Privacy Engineering. IEEE Transactions on Software Engineering, Vol. 35, Nr. 1, January/February 2009, pp. 67–82.
- Seda Gürses, Carmela Troncoso, and Claudia Diaz, Engineering Privacy-by-Design. Computers, Privacy & Data Protection, 2011.
- Antonio Kung, Johann-Christoph Freytag, and Frank Kargl, Privacy-by-design in ITS applications. 2nd IEEE International Workshop on Data Security and Privacy in Wireless Networks, June 20, 2011, Lucca, Italy.
- NISTIR 8062 (Draft). Privacy Risk Management for Federal Information Systems. May 2015. http://csrc.nist.gov/publications/drafts/nistir-8062/nistir_8062_draft.pdf, last visited on 21.06.2016.
- Marit Hansen, Meiko Jensen, and Martin Rost, Protection Goals for Engineering Privacy. 2015 International Workshop on Privacy Engineering – IWPE'15.
- MITRE Privacy Engineering Framework. July 2014. http://www.mitre.org/publications/technical-papers/privacy-engineering-framework, last visited on 21.06.2016.
- Michelle Finneran Dennedy, Jonathan Fox, and Thomas Finneran, The Privacy Engineer’s Manifesto: Getting from Policy to Code to QA to Value. Apress, ISBN13: 978-1-4302-6355-5, January 2014.
- PRIPARE methodology. Final version. http://pripareproject.eu/wp-content/uploads/2013/11/PRIPARE-Methodology-Handbook-Final-Feb-24-2016.pdf, last visited on 21.06.2016.
- Nicolás Notario et al., PRIPARE: Integrating Privacy Best Practices into a Privacy Engineering Methodology. 2015 International Workshop on Privacy Engineering – IWPE'15.
- ISO/IEC 25010:2011 Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality models.
- ISO/IEC 27034:2011 Information technology — Security techniques — Application security.
- Martin Kost, Johann-Christoph Freytag, Frank Kargl, and Antonio Kung, Privacy Verification Using Ontologies. First International Workshop on Privacy by Design (PBD 2011), August 28, 2011, Vienna, Austria.
- Ann Cavoukian, Privacy-by-Design: The seven foundational principles. https://www.ipc.on.ca/images/resources/7foundationalprinciples.pdf, last visited on 21.06.2016.
- Nick Doty, Privacy Design Patterns and Anti-Patterns. Trustbusters Workshop at the Symposium on Usable Privacy and Security, July 2013.
- ISO 31000:2009. Risk management.
- ISO/IEC 27005:2011 Information technology — Security techniques — Information security risk management.
- ETSI. Telecommunications and Internet converged Services and Protocols for Advanced Networking (TISPAN); Methods and protocols; Part 1: Method and proforma for Threat, Risk, Vulnerability Analysis. ETSI TS 102 165-1 V4.2.3 (2011-03).
- J.D. Meier, Alex Mackman, Michael Dunner, Srinath Vasireddy, Ray Escamilla and Anandha Murukan, Improving Web Application Security: Threats and Countermeasures. Microsoft Corporation, June 2003. Chapter 2, Threats and Countermeasures. https://msdn.microsoft.com/en-us/library/ff648641.aspx, last visited 21.06.2016.
- A. van Lamsweerde, Goal-Oriented Requirements Engineering: A Guided Tour. 5th International Symposium on Requirements Engineering, IEEE Computer Society Press, 2001.
- ISO/IEC/IEEE 15288:2015 Systems and software engineering – System life cycle processes.