If at first the idea is not absurd then there is no hope for it.
This chapter covers the development of policies that will be used as the basis for development of the controls and measures to protect personal information (i.e., privacy standards, guidelines, business rules, and mechanisms). When we discuss policy creation in this context, we are talking about starting with business requirements (a task or series of tasks needed to serve a goal) and functionality goals. Once goals and basic functions are defined, we add requirements driven by applicable law. We then view the policies we must create through a lens of functionality (i.e., each action taken or demanded may be viewed as a requirement specification that must be included in a system). That system may be an enterprise, a subunit, an end-to-end processing cycle, an application, an element of functionality, or a person-managed governance activity, among others. There is no exhaustive list of what constitutes a system.
Every discussion in this chapter must be considered in this operational, requirement-driven context; otherwise, it will be easy to slip into traditional “policy” mode. This is not a discussion chief privacy officers (CPOs, or whoever is leading the privacy function) will have with every privacy engineer; however, every CPO must consider the output of his or her labor in terms of the concrete and measurable requirements and outcomes discussed here.
The following chapters will show Unified Modeling Language (UML) and systems creation techniques for metadata as a methodology for taking the requirements derived from privacy policies and other technical sources and creating solutions that reflect those requirements. Where neither systems nor features nor privacy-enhancing technologies can meet the requirements set forth, governance, training, and leadership “systems” involving the human players in the privacy engineering drama are discussed.
Elements of Privacy Engineering Development
Privacy engineering is the discipline of developing privacy solutions that consist of procedures, standards, guidelines, and mechanisms. Part 2 covers the process of developing privacy solutions, as depicted in Figure 4-1.
The elements of the process of developing a privacy solution, based on a set of privacy policies, are:
Enterprise goals: These must be reflected in and aligned with privacy engineering solutions, including their privacy policies, standards, and guidelines. To make this happen, a privacy development team must first understand the goals and objectives of the enterprise in which the solution will operate. For the purposes of this book, “enterprise” includes organizations large and small that manage or otherwise process data. This definition, of course, includes government entities, which may be governed by specific or additional rules and regulations; the organizing principles will still apply.
User/individual goals: These must be incorporated to develop effective and flexible privacy policies that will be accepted by end users and individuals. The team members must understand the goals and objectives (and privacy sensibilities) of the end users and individuals who will participate in the system or become the data subjects for PI managed by the system.
Privacy requirements: Requirement gathering is critical for effective policy creation and solution development. Chapter 5 describes the application of use cases for requirement collection and introduces a unique use-case metadata model.
Privacy procedures and processes: These are the overall privacy activities (procedures) and their human or automated tasks (processes). Chapters 5 and 6 cover developing and using these as part of the privacy engineering discipline. Mandated standards and recommended guidelines factor into the creation of procedures and processes. It is procedures, processes, standards, and guidelines that translate “policy” into reality.
Privacy mechanisms: These are the automated solutions built with software and hardware to enforce privacy policies. Examples are created for illustration in Chapters 7, 8, and 9 using the development process presented in Chapter 6, including a privacy engineering component and how it can fit within an application system environment.
Quality assurance: This is required to ensure that the privacy engineering solution functions properly, as well as satisfies enterprise goals, user goals, and accepted privacy standards within the context they are to operate. Quality assurance for privacy solutions is discussed in Chapter 10.
Feedback loop: This will ensure that the privacy engineering solution is improved continuously by periodically quality assessing or auditing the solution and by building in the ability to do so as a technical and procedural requirement.
After reading Part 2, whether you are a privacy professional or an engineer without a privacy background, you should have an understanding of how privacy is engineered into systems.
Although drafting privacy policies can be the subject of entire legal or organizational tomes, this chapter will go into enough depth so that the principles that comprise privacy policies are sufficiently understandable as the foundational layer of privacy engineering and use-case requirements. These policies enable the management of the principles as a framework, which in turn can also lead to:
The development and deployment of privacy engineered systems
The exciting missing beast—the framework to build and innovate the privacy engineered data-centric networks, tools, and solutions of the future
What Is a Good Policy?
A policy is considered good based on the manner in which it functions as well as its contextual fit (i.e., how well it balances the needs and objectives of the enterprise with the objectives of the users or customers or employees whose data ultimately flows through that organization). A good policy:
Arises from well-articulated enterprise goals, which are based on a clear statement of belief or purpose
Describes what is wanted or intended by the various parties of interest impacted by the enterprise
Explains why these things are wanted
Provides positive direction for enterprise employees and contractors
Provides transparency to the users of systems or individuals interacting with the enterprise
Is flexible enough so there can be adjustments to changing conditions without changing the basic policy itself
Is evaluated regularly
Can be readily understood by all
Policies must be designed to meet a complex set of competing needs:
Local and international legal, jurisdictional, and regulatory necessities, depending on the scope of the enterprise
Organization or business requirements
Permissions for marketing, customer relationship management, or business intelligence
Usability, access, and availability for end users of information systems
Economic pressure to create value through efficient sharing or relationship building
Enforceability and compliance
Realistic technology capabilities and limitations
Everything with a digital heartbeat is connected through dynamically formed relationships governed by privacy, security, and trust policies. This means there may be multiple interactive or cascading privacy policies based on the role of the various parties of interest:
Employees or contractors
Third parties impacted by the enterprise
Intellectual property owners
These interacting policies, in turn, must address:
Respecting and managing regulatory and industrial standards compliance
Using personal information and confidential data related to it safely and ethically
Reconciling differences and leveraging synergies between overlapping or competing enterprise policies and goals for other areas, such as audit or litigation data preservation, records management, and physical and IT security
Establishing a basis for objective respect and trust between an enterprise and its customers, employees, and other impacted groups
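The idea of cascading, role-driven policies can be made concrete with a small sketch. The role names and policy names below are invented for illustration, not taken from the chapter; the point is only that a party holding several roles is governed by the union of every policy each role triggers.

```python
# Hypothetical sketch: resolving which privacy policies govern a
# dynamically formed relationship, based on each party's role.
# Role names and policy names are invented for this illustration.

ROLE_POLICIES = {
    "employee": {"internal_privacy_policy", "hr_data_policy"},
    "contractor": {"internal_privacy_policy", "third_party_agreement"},
    "customer": {"public_privacy_notice"},
    "ip_owner": {"confidentiality_policy"},
}

def applicable_policies(roles: set) -> set:
    """Union of every policy triggered by any role a party holds."""
    policies = set()
    for role in roles:
        policies |= ROLE_POLICIES.get(role, set())
    return policies

# A contractor who is also a customer is governed by both sets at once.
both = applicable_policies({"contractor", "customer"})
assert "third_party_agreement" in both and "public_privacy_notice" in both
```

The design choice worth noting is that the mapping is data, not code: as roles or policies change, the table changes, while the resolution logic stays fixed.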
It should, of course, be noted in the privacy requirements that:
Not all laws are granular enough to provide one objective interpretation that must be instantiated
Not all rules and regulations can be harmonized to be free of directly conflicting standards and so-called best practices
What is possible is an objective working framework that will become the policy for the enterprise and, ultimately, the basis for process and technology policies, as described in the sidebar.
By Dr. Mark Watts, Head of Information Technology Law, Bristows
Europe is not a country. It isn’t. And while this will be blindingly obvious to most people reading this book, it’s surprising how often I hear it assumed that Europe is essentially a country, with a single, homogenous data privacy law that sets out the rules applicable across the entire region (50 or so countries). If only life were that simple. If only European privacy rules were that simple. Sadly, they’re not. And the point here is not to ridicule anyone’s understanding of European geography or laws, but rather to make the point that, although when working “internationally” in privacy we all make assumptions—we have to, to rationalize the almost overwhelming legal complexity involved—making the wrong assumptions can quickly cause a project to go astray.
Shaky assumptions can lead to another, more subtle but equally problematic risk—the risk of unnecessary overcompliance. Now, this isn’t to suggest that companies should develop policies requiring only the minimum amount of compliance required by local law (essentially as little as the company can get away with) but would a company really want to apply the highest common denominator—the strictest standard anywhere—to all of its operations worldwide? Surely not. For example, would it really be wise to export the highly restrictive Finnish laws on monitoring employee communications to every country where a company does business? Most unlikely, because although this approach would ensure compliance with the communication monitoring laws of almost all other countries where the company has employees, it could seriously hamper its business operations in countries with more permissive regimes. This isn’t a risk of noncompliance; it isn’t a risk of breach. It’s a risk of overcompliance that can fetter existing business processes, potentially inhibit sales, and, just as importantly for the privacy professional and privacy engineer, can damage their internal credibility within the company. All in all, overcompliance can be as much of a problem for the company as undercompliance.
One of the first things to be determined when drawing up privacy policies is which geopolitical regions or jurisdictions impact the enterprise. Privacy policies for a global enterprise, for example, can start the foundational development process by basing a strategy on the OECD Guidelines and GAPP. In some cases, other localized articulations of fair information processing may be the foundational basis for policy creation. Whichever framework is chosen, the policy creators will need to be able to translate how the various principles are managed if the policy is going to be an effective tool for processes and privacy-enhanced systems and features in a privacy engineering context.
For example, a policy statement might require that data collected be relevant to the services provided by the enterprise. The general policy would require a well-defined privacy notice to provide transparency between the collector of data and the data subject, as well as to build an enforceable governance structure where the data asset is known as it enters and moves through its predicted lifecycle. An enterprise must be able to articulate and document how much personal information will be collected for specific purposes according to the proportionality principle.
A policy statement should cover proportionality requirements: the benefit derived from the processing of the data should be proportional to its impact on the privacy of the individual whose data is being processed. To achieve data proportionality at the time of collection, the data subject's needs must be balanced against the enterprise's objectives.
Allowances for revisions and exceptions should be included in privacy policies to address the fact that policy needs will change. There are occasions when a customer’s, employee’s, supplier’s, or other party of interest’s feedback or requirements may lead to the need to modify privacy policies or grant exceptions.
When an enterprise operates internationally, privacy policies should address the transfer of data among the various jurisdictions. The underlying strategies should be people-, process-, and technology-oriented and include governance mechanisms designed and executed to follow the data wherever they travel.
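A governance mechanism that "follows the data" can be sketched as a simple check that every proposed cross-border transfer has an approved legal basis. The jurisdictions and transfer mechanisms below are invented examples, not legal advice or content from the chapter.

```python
# Hypothetical sketch: validating a proposed cross-border transfer
# against the enterprise's approved transfer mechanisms. The
# (origin, destination) pairs and mechanism names are invented.

APPROVED_TRANSFERS = {
    # (origin, destination) -> legal mechanism relied upon
    ("EU", "US"): "model_contract_clauses",
    ("EU", "CA"): "adequacy_decision",
}

def transfer_basis(origin: str, destination: str) -> str:
    """Return the basis for a transfer, or raise if none is approved."""
    if origin == destination:
        return "domestic"
    basis = APPROVED_TRANSFERS.get((origin, destination))
    if basis is None:
        raise PermissionError(f"No approved basis for {origin} -> {destination}")
    return basis

assert transfer_basis("EU", "US") == "model_contract_clauses"
```

In practice the table would be maintained by counsel and the check enforced wherever data leaves a system boundary, so that the policy travels with the data rather than living only in a document.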
Enterprise-Specific Privacy Development
The nature and culture of an enterprise impact its privacy policies and the creation process. For instance, in the United States, the legal approach is often sectoral. An example is health care in the United States, where the policies and privacy rules of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) must be incorporated. This type of enterprise will always be extremely open, with many third parties, operating in a nonstop, high-stakes context (in some cases, life and death). Striking the balance among use, sharing, access, and accuracy will be a supreme consideration. The rights and sensitivities of the data subjects within this context are highly subjective while also the subject of extensive regulation. Although other jurisdictions may not have standalone health data protection statutes, this type of context, and health data specifically, is governed as a protected class (or, as in the European Union, an enhanced, “sensitive” class of data) worldwide.
A health care, financial, or politically sensitive context is actually the proving ground for many other types of businesses. These enterprises require personalization and intimate knowledge of personal information but also value a certain level of autonomous innovation with data and financial models based on data. Innovating for high-risk data is a bit like the lyrics from the song “New York, New York”: “If I can make it there, I’ll make it anywhere.”
Internal vs. External Policies
Data protection standards such as the OECD Guidelines and GAPP, among others, require that privacy policies be published both internally within the enterprise and externally (externally, it is usually a statement or notice of the enterprise's practices that is posted, not the actual policy) to give notice to users of systems, customers, or other data subjects interacting with the enterprise. Failure to comply with the enterprise's public notices can lead to:
Dissatisfied customers: Customers and other users will expect compliance with the privacy protection actions indicated within the notice. It may be considered an implied contract. If there is a breach, users will tend to look for safer sites. If a user discovers identity theft that seems to have come from personal information collected by an enterprise, that user will take it out on the enterprise maintaining the site that failed them.
Regulatory investigations: Where an enterprise has not lived up to its notice commitments, regulators from one or more jurisdictions will likely investigate the problems and may take either criminal or civil actions or both against both the enterprise and, conceivably, against employees within the enterprise.
Bad publicity: Forty-six US states, the District of Columbia, plus other US territories have security breach notification laws that involve personal information. There are comparable laws throughout the world. The media keep a lookout for such notifications and determine when breaches are significant. Any breach scares people, and serious breaches equal bad publicity.
Litigation: Potential liability in privacy-related lawsuits has been increasing steadily in recent years. This expanding legal exposure has been fueled by plaintiffs’ class action lawyers targeting privacy litigation as a growth area. Moreover, federal and state government agencies, as well as data protection agencies throughout Europe and Asia, are becoming increasingly aggressive in their efforts to investigate and respond to privacy and data security concerns and incidents. The Federal Trade Commission (FTC) is imposing stricter standards on businesses, while state attorneys general are pursuing enforcement actions and conducting high-profile investigations in response to data breaches and other perceived privacy violations.
Harm to brand: For most enterprises, the equity invested in their brands is an invaluable but fragile asset. When privacy protection problems occur, the reaction of the enterprise is crucial to the maintenance of a very positive brand.
Weak innovation: Effective innovation comes from making improved products that deliver what people want. To find what customers and potential customers want requires the collection of data. An enterprise that does not protect the privacy of data will weaken the ability to collect the data needed to determine where innovation is required.
Employee distrust: Just as customers can be turned off when privacy notice failures occur, employees can begin to distrust their enterprise when their data is not protected as the privacy notice promised.
An enterprise should consider creating training based on internal privacy rules that are more granular, specific, and more restrictive than externally posted notices. These internal policies should be coordinated with a human resources policy team to ensure that staff and business partners know exactly what to do, how to get help when they need it, and how and when these may be enforced and encouraged.
ENGINEERS AND LAWYERS IN PRIVACY PROTECTION: CAN WE ALL JUST GET ALONG?
By Dr. Annie I. Antón, Professor in and Chair of the School of Interactive Computing at the Georgia Institute of Technology
Peter Swire, Nancy J. and Lawrence P. Huang Professor, Scheller College of Business, Georgia Institute of Technology
1. How lawyers make simple things complicated. A first-year law student takes Torts, the study of accident law. A major question in that course is whether the defendant showed “reasonable care.” If not, the defendant is likely to be found liable. Sometimes a defendant has violated a statute or a custom, such as a standard safety precaution. More often, the answer in a lawsuit is whether the jury thinks the defendant acted as a “reasonable person.” The outcome of the lawsuit is whether the defendant has to pay money or not. We all hope that truth triumphs, but the operational question hinges on who can prove what in court.
The legal style is illustrated by the famous Palsgraf case. A man climbs on a train pulling out of the station. The railroad conductor assists the man into the car. In the process, the man drops a package tucked under his arm. It turns out the package contains fireworks, which explode, knocking over some scales at the far end of the platform. The scales topple onto a woman, causing her injury.
From teaching the case, here is the outline of a good law student answer, which would take several pages. The answer would address at least four issues. For each issue, the student would follow IRAC (Issue, Rule, Analysis, Conclusion) form, discussing the issue, the legal rule, the analysis, and the conclusion: (1) Was the man negligent when he climbed on the moving train? (2) When the railroad conductor helped the man up, was the conductor violating a safety statute, thus making his employer, the railroad, liable? (3) When the man dropped the fireworks, was it foreseeable that harm would result? (4) Was the dropping of the package the proximate cause of knocking over the scales? In sum, we seek to determine whether the railroad is liable. The law student would explain why it is a close case; indeed, the actual judges in the case split their decision 4-3.
Engineers design and build things. As such, they seek practical and precise answers. Instead of an IRAC form, engineers seek to apply scientific analytic principles to determine the properties or state of the “system.” The mechanisms of failure in the Palsgraf case would be analyzed in isolation: (1) The train was moving, therefore, the policy of only allowing boarding while the train is stopped was not properly enforced, thereby introducing significant safety risk into the system. (2) The scales were apparently not properly secured, thus a vibration or simple force would have dislodged the scales, introducing safety risk into the system. Is the railroad liable? An engineer would conclude the compliance violation and unsecured scales means that it would be liable. The engineering professor would congratulate the engineering student for the simple, yet elegant, conclusion based on analysis of isolated components in the system. In engineering, simplicity is the key to elegance.
The lawyer may agree in theory that simplicity is the key to elegance, but law students and lawyers have strong reasons to go into far more detail. The highest score in a law school exam usually spots the greatest number of issues; it analyzes the one or two key issues, but also creates a research plan for the lawyers litigating the case. For example, the railroad has a safety rule that says the conductor shouldn’t help a passenger board when the train is moving, but surely there are exceptions? In the actual case (or the law school exam), the lawyer would likely analyze what those exceptions might be, especially because finding an applicable exception will free the railroad from liability. The good exam answer may also compare the strange chain of events in Palsgraf to other leading cases, in order to assess whether the plaintiff can meet her burden for satisfying the difficult-to-define standard for showing proximate cause.
In short, lawyers are trained to take the relatively simple set of facts in Palsgraf and write a complex, issue-by-issue analysis of all the considerations that may be relevant to deciding the case. The complexity becomes even greater because the lawyer is not seeking to find the “correct” answer based on scientific principles; instead, the lawyer needs to prepare for the jury or judge, and find ways, if possible, to convince even skeptical decision-makers that the client’s position should win.
2. How engineers make simple things complicated. A typical compliance task is that our company has to comply with a new privacy rule. For lawyers, this basically means applying the Fair Information Practice Principles (FIPPs), such as notice, choice, access, security, and accountability. The law is pretty simple.
The engineer response is: How do we specify these rules so that they can be implemented in code? Stage one: specify the basic privacy principles (FIPPs). Stage two: specify commitments expressed in the company privacy notice. Stage three: specify functional and nonfunctional requirements to support business processes, user interactions, data transforms and transfers, security and privacy requirements, as well as corresponding system tests.
As an example, some privacy laws have a data minimization requirement. Giving operational meaning to “data minimization,” however, is a challenging engineering task, requiring system-by-system and field-by-field knowledge of which data are or are not needed for the organization’s purposes. Stuart Shapiro, Principal Information Privacy & Security Engineer at The MITRE Corporation, notes that an implementation of data minimization in a system may have 50 requirements and 100 associated tests. Input to the system is permitted only for predetermined data elements. Queries the system makes against an external database are permitted only for the approved data fields. There must be executable tests: apply them to test data first, and then confirm that data minimization is achieved under various scenarios.
For the lawyer, it is simple to say “data minimization.” For the engineer, those two words are the beginning of a very complex process.
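To make the contrast concrete, here is a minimal sketch of one of those many requirements: a per-purpose allowlist of data elements enforced at the point of collection. The field names, purposes, and the `minimize` helper are invented for this illustration; a real system would have dozens of such rules, each with its own tests.

```python
# Hypothetical illustration: enforcing data minimization with a
# per-purpose allowlist of data elements. Field names and purposes
# are invented for this sketch.

ALLOWED_FIELDS = {
    # Only the data elements approved for each processing purpose.
    "order_fulfillment": {"name", "shipping_address", "email"},
    "analytics": {"postal_code"},  # no direct identifiers permitted
}

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field not approved for the stated purpose."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No approved data elements for purpose: {purpose}")
    return {k: v for k, v in record.items() if k in allowed}

submitted = {
    "name": "Ada",
    "shipping_address": "1 Main St",
    "email": "ada@example.com",
    "birth_date": "1990-01-01",   # collected but never approved
}

stored = minimize(submitted, "order_fulfillment")
assert "birth_date" not in stored  # one of the many executable tests
```

Each entry in the allowlist corresponds to a requirement, and each assertion like the one above corresponds to one of the associated tests Shapiro describes.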
3. Why it may be reasonable to use the term “reasonable” in privacy rules. Swire was involved in the drafting of the HIPAA medical privacy rule in 1999–2000. Antón, the engineer, has long chastised Swire for letting the word “reasonable” appear over 30 times in the regulation. Words such as “promptly” and “reasonable” are far too ambiguous for engineers to implement. For example, consider HIPAA §164.530(i)(3): “the covered entity must promptly document and implement the revised policy or procedure.” Engineers can’t test for “promptly.” They can, however, test for 24 hours, 1 second, or 5 milliseconds. As for reasonable, the rule requires “reasonable and appropriate security measures”; “reasonable and appropriate policies and procedures” for documentation; “reasonable efforts to limit” collection and use “to the minimum necessary”; a “reasonable belief” before releasing records relating to domestic violence; and “reasonable steps to cure the breach” by a business associate.
The engineer’s critique is: How do you code for “promptly” and “reasonable”? The lawyer’s answer is that the HIPAA rule went more than a decade before being updated for the first time, so the rule has to apply to changing circumstances. The rule is supposed to be technology neutral, so drafting detailed technical specs is a bad idea even though that’s exactly what engineers are expected to do to develop HIPAA-compliant systems. There are many use cases and business models in a rule that covers almost 20% of the US economy. Over time, the Department of Health and Human Services can issue FAQs and guidance, as needed. If the rule is more specific, then the results will be wrong. In short, lawyers believe there is no better alternative in the privacy rule to saying “reasonable.”
The engineer remains frustrated by the term “reasonable,” yet accepts that the term is intentionally ambiguous because it is for the courts to decide what is deemed reasonable. If the rule is too ambiguous, however, it will be inconsistently applied and engineers risk legal sanctions on the organization for developing systems not deemed to be HIPAA compliant. In addition, “promptly” is an unintentional ambiguity that was preventable in the crafting of the law. By allowing engineers in the room with the lawyers as they decide the rules that will govern the systems the engineers must develop, we can avoid a lot of headaches down the road.
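The engineer's way out of this impasse is to pin the ambiguous word to a concrete, testable bound. The sketch below assumes, purely for illustration, that the organization interprets "promptly document and implement" as within 24 hours; the window and the helper function are inventions of this example, not anything HIPAA itself specifies.

```python
# Hypothetical illustration: an engineer cannot test "promptly," but
# can test a concrete service-level target chosen by the organization.

from datetime import datetime, timedelta

PROMPT_WINDOW = timedelta(hours=24)  # an assumed, organization-chosen bound

def implemented_promptly(policy_revised_at: datetime,
                         implemented_at: datetime) -> bool:
    """True if the revised policy was implemented within the window."""
    elapsed = implemented_at - policy_revised_at
    return timedelta(0) <= elapsed <= PROMPT_WINDOW

revised = datetime(2014, 3, 1, 9, 0)
assert implemented_promptly(revised, datetime(2014, 3, 1, 20, 0))     # same day
assert not implemented_promptly(revised, datetime(2014, 3, 3, 9, 0))  # too late
```

Whether 24 hours is in fact "prompt" remains a judgment for the lawyers and, ultimately, the courts; the engineering win is simply that once a bound is chosen, compliance becomes measurable.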
4. How to achieve happiness when both lawyers and engineers are in the same room. Organizations today need to have both lawyers and engineers involved in privacy compliance efforts. An increasing number of laws, regulations, and cases, often coming from numerous states and countries, place requirements on companies. Lawyers are needed to interpret these requirements. Engineers are needed to build the systems.
Despite their differences, lawyers and engineers share important similarities. They both are very analytic. They both can drill down and get enormously detailed in order to get the product just right. And, each is glad when the other gets to do those details. Most engineers would hate to write a 50-page brief. Most lawyers can’t even imagine specifying 50 engineering requirements and running 100 associated tests.
The output of engineering and legal work turns out to be different. Engineers build things. They build systems that work. They seek the right answer. Their results are testable. Most of all, it “works” if it runs according to spec. By contrast, lawyers build arguments. They use a lot of words; “brief” is a one-word oxymoron. Lawyers are trained in the adversary system, where other lawyers are trying to defeat them in court or get a different legislative or regulatory outcome. For lawyers, it “works” if our lawyers beat their lawyers.
Given these differences, companies and agencies typically need a team. To comply, you need lawyers and engineers, and it helps to become aware of how to create answers that count for both the lawyers and the engineers. To strike an optimistic note, in privacy compliance the legal and engineering systems come together. Your own work improves if you become bilingual, if you can understand what counts as an answer for the different professions.
We look forward to trying to find an answer about how to achieve happiness when both lawyers and engineers are in the room. Antón presumably is seeking a testable result. Swire presumably will settle for simply persuading those involved. However, we both agree that the best results come from collaboration because of the value, knowledge, and expertise that both stakeholder groups bring to the table.
Policies, Present, and Future
Policies have to be living documents that can be readily changed as a business changes or as the regulatory environment changes; however, they should not be changed lightly or at whim. There is overhead associated with policy changes, especially in the privacy space. For instance, a change in policy may indicate a change in use of data, which then may require an enterprise to provide notice of the change to whomever’s data is affected and get permission for the new uses of the data. Even without a pressing need for change, it is important to review policies on a regular basis, perhaps annually, to determine if change is necessary.
A good policy needs to be forward looking and, at the same time, accurate to the current state. It should be sufficiently detailed as to give direction and set parameters, but not so detailed as to be overly specific or to require excessive change. Each enterprise will need to find the balance between what is communicated as “policy” and what is communicated as an underlying standard or guideline for meeting the requirements of the policy. Key stakeholders should review policies and practices at least annually to see if revisions are warranted.
Engineered privacy mechanisms can ease the change and improvement of the policies, especially with the specific procedures, standards, guidelines, and privacy rules that need to change if there are policy revisions. The privacy component discussed in Chapters 6, 7, 8, and 9 addresses this crucial need.
Privacy policies are powerful tools in the overall privacy engineering process. Privacy professionals, lawyers, and compliance teams can use them to communicate expected behaviors and leverage them to create accountability measures. In the process of policy creation, internal and external requirements and expectations, including those of systems’ users and regulators, must be gathered. These same requirements and expectations, in the traditional lexicon, can also be leveraged as engineering requirements in the privacy engineering model and execution sense. We will explore how such requirements fit into a system’s model in Chapters 5 and 6. In the remaining chapters of Part 2, we will continue to call on these policy requirements in the context of discrete tools and features that rest in the privacy engineering toolkit.
OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data are available at www.oecd.org/document/18/0,3343,en_2649_201185_1815186_1_1_1_1,00.html . A downloadable version of the Generally Accepted Privacy Principles (GAPP), along with additional information about the development and additional privacy resources, can be found at www.aicpa.org/privacy . Information about the European Union’s Directive on Data Protection is available at http://ec.europa.eu/justice_home/fsj/privacy/index_en.htm .
Palsgraf v. Long Island Railroad Co., 248 N.Y. 339 (N.Y. 1928).
© 2014 Michelle Finneran Dennedy
Dennedy, M.F., Fox, J., Finneran, T.R. (2014). Developing Privacy Policies. In: The Privacy Engineer’s Manifesto. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4302-6356-2_4
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4302-6355-5
Online ISBN: 978-1-4302-6356-2