Encyclopedia of Global Bioethics

Living Edition
| Editors: Henk ten Have

Safety, Data

  • M. Murat Civaner
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-05544-2_445-1

Abstract

This entry discusses the importance of data safety and the problems arising from (a) failure to maintain data safety and (b) abuse of data by state and commercial entities. Issues specific to the healthcare context are elaborated as well. Finally, some suggestions are made to improve current practices and regulations on data safety.

Keywords

Data Privacy Confidentiality Healthcare

Introduction

Gathering information and keeping records have always been crucial for humankind in the endeavor to understand people and the surrounding environment. They have always been among the most critical components of the struggle for survival. As our memory is limited and our vision at any given moment is narrow, we need data to comprehend the big picture and to make predictions and future plans accordingly. With the development of digitalization and the Internet, it is now possible to gather data faster, more widely, and on a larger scale than ever. The possibility of coding all kinds of information as “1” and “0” enables us to store huge amounts of data and transfer them swiftly.

Personal information can take various forms and be used for different purposes. It is not limited to name, age, or gender; it includes all information that identifies a certain person, such as ethnic origin, faith, political affiliation, sexual orientation, social security number, fingerprints, e-mail addresses, IP numbers, hobbies, and relatives. It also covers personal health records, predictive analytics, data from health-related sensors, and gene sequencing (Liu et al. 2014). In the context of healthcare, recording personal information is a must for providing services adequately. Organizing a healthcare system that allows healthcare workers to communicate more quickly and accurately is only possible where the appropriate means are available. In addition to traditional instruments, we use videotapes, audiotapes, photographs, DVDs, and hard disks to store personal and medical information. Data are needed not just for diagnostic and therapeutic services but also for implementing and monitoring public health programs for prevention and promotion, improving services and patient safety, running biobanks, involving the community in decision-making and audit mechanisms, and conducting research to improve services. This is where new technologies come into play, providing new opportunities not just in the local context but also on a global scale. Telemedicine, teleconsultation, and teleradiology are increasingly used for consultation on diagnostic tests and treatment planning, as they provide fast and convenient ways to communicate all around the world while decreasing costs. As the Internet and mobile technologies develop, the storage and transfer of sensitive data increase in parallel with new concepts such as electronic health (e-health), mobile health (m-health), and, most recently, ubiquitous health (u-health).
The endeavor to digitize information about the human body, including genetic materials, cells, organs, and even the brain, transforms the human being herself into a kind of data.

However, the various kinds of data provided by new technologies come with their own costs alongside their advantages. Keeping data safe while recording, storing, transmitting, and disposing of them has become a vital issue as electronic media become an indispensable part of daily life. The three main concepts most often invoked in securing information are confidentiality, integrity, and availability. “Confidentiality” is defined as “the assurance that information is not disclosed to individuals or systems that are not authorised to receive it,” whereas “integrity” is “the assurance that information can’t be modified by those who are not authorised to modify it, or that any such modifications will not pass undetected” (UK Government 2015). “Availability” is described as “the assurance that information is available when it’s needed, and that mishap or malice cannot affect the ability of systems to provide information when requested.” These concepts point out the problematic areas of data safety as well as its vulnerabilities.
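The “integrity” requirement above has a concrete technical counterpart: a keyed hash lets the recipient of a record detect unauthorized modification. The following is a minimal illustrative sketch, not taken from any of the cited regulations; the record fields and the shared key are hypothetical, and in practice the key would be managed by a proper key management system.

```python
import hashlib
import hmac
import json

# Hypothetical shared key, assumed to be exchanged securely out of band.
SECRET_KEY = b"shared-secret-for-illustration-only"

def sign(record: dict) -> str:
    """Compute an HMAC tag over a canonical serialization of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(record: dict, tag: str) -> bool:
    """Check the tag; constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(sign(record), tag)

record = {"patient_id": "12345", "diagnosis": "J45"}  # illustrative data
tag = sign(record)
assert verify(record, tag)       # an untouched record passes verification
record["diagnosis"] = "Z00"      # any tampering with the stored record...
assert not verify(record, tag)   # ...is detected on verification
```

Note that such a tag guarantees only integrity, not confidentiality: the record itself is still readable and would additionally need encryption and access control.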

Confidentiality, Privacy, Autonomy

The question of who controls the information matters more than the question of who owns it (British Medical Association 2012). In that sense, breaches of confidentiality and privacy cause the individual to lose control over her personal integrity, and even her own body, by transferring power over the self to other entities. They can stigmatize her in society for a lifetime, cause financial losses, and inflict various disadvantages, including discrimination. Keeping control of private information within the limits of personal will is paramount, since otherwise the person loses her freedom and autonomy. In the Internet era, the dissemination of information is much faster, wider, and irreversible, so its uncontrolled spread affects not just the particular person but also her family and acquaintances. The leakage of celebrities’ private pictures from a cloud-based service is a telling example.

Failure to maintain data safety causes problems not just at the personal level but also at the societal level. One problematic area where confidentiality, privacy, and autonomy are compromised is the citizen-state relationship. The organization created by society for the purpose of the common good, the state, is authorized to collect data in order to provide all kinds of public services, starting at birth. This creates concerns about personal liberties, as the state is the legitimized construct holding the most concentrated power in society, and its possible abuse by administrators and politicians might have gravely harmful consequences for society and the individual. These concerns increase where data about citizens are collected in the name of national security. Especially after the attacks on the World Trade Center in the USA in 2001, a new concept of security was created in which the notion of “public security” was elevated and replaced by “national security.” “Preventive war” became the central concept of the national security argument against the enemies of state and regime. Using new technologies, states now have the ability to monitor and record their citizens’ movements constantly, not just in public places but in the virtual world as well. “Big Brother” and Bentham’s “Panopticon” are becoming real phenomena through surveillance networks, as more data provide more control. The ability to track people’s movements has increased to such an extent that recently developed software is capable of predicting the future behavior of a certain person by mining data from social networking websites (The Guardian 2013). This invisible intrusion into the private lives of individuals creates a conflict between personhood rights and national security.
Along with the oppression created by ideological state apparatuses as defined by Althusser (2006), the individual eventually develops self-control, so that in the end there is no need to control her by exercising power externally. As Allmer put it clearly, “The Panopticon creates a consciousness of permanent visibility as a form of power, where no bars, chains, and heavy locks are necessary for domination any more” (Allmer 2012). Maintaining data safety seems crucial to avoiding this state of “permanent visibility” and therefore to protecting autonomy.

The other problem with failure to keep data safe at the societal level is that it provides commercial entities with novel possibilities for exploitation. Internet pages, e-mails, and cell phone messages enable faster and cheaper marketing bombardment and open new doors to a nearly uncontrolled world of subliminal messages and hidden advertising. Lifestyle and personal preferences are monitored through spending patterns over time. Companies track customers with the help of mobile phone service providers, and advertisements or discount messages about a certain product are sent the moment the customer steps into the store that sells it. Companies trade and exchange databases containing millions of customers’ personal information, collected through phishing messages, product campaigns, or simply from each other. Data themselves are becoming a valuable commodity, not just for marketing purposes but also for decisions on bank loans and insurance premiums. Facebook, for instance, on which people willingly write information about themselves, has become one of the major data sources for companies in job hiring processes.

Data Safety in Healthcare

It is obvious that persons would not want their secrets disclosed without their consent, since unauthorized dissemination of personal information could seriously harm that person in her family, in society, or in her job. This risk is especially acute when people receive healthcare, preventive or curative, as they must give their most intimate secrets to healthcare workers correctly and accurately if the service they need is to be provided. This is exactly what makes them more vulnerable in their relationship with medicine than in other parts of their lives, where they have more freedom to act and control. In the healthcare context, the misery and pain caused by the health problem, the knowledge asymmetry between patients and the representatives of medicine, and differences of education and language all put patients in a disadvantaged position, limiting their liberty to give personal information to others as they see fit. The only thing that can relieve them is the assurance that their information will be kept hidden, and that is why trust is fundamental to medicine.

Traditionally, it was healthcare workers, especially physicians, who were defined as the party responsible for protecting confidentiality and therefore for establishing a relationship with the patient based on trust. However, in the new era of digitalization and healthcare reforms based on cost-effectiveness, it is getting more difficult to honor this professional duty, since the data are now substantially beyond their control. Ministries of health, reimbursement institutions, insurance companies, drug stores, and companies in the medical industry all want access to the data collected by physicians and healthcare institutions in order to decrease costs, improve services, allocate resources, or invest in more profitable areas. Electronic media allow many people even within the hospital, including researchers, experts in quality management units, forensic medicine, administration, and data processing centers, to access personal and medical information.

The collection of data in large databases and their sharing over vast networks increase the difficulty of protecting confidentiality, integrity, and accessibility. According to a recent study of registries of health data breaches in the USA between 2010 and 2013, 6 of 949 reported breaches involved more than one million records (Liu et al. 2014). Most breaches occurred via electronic media, and theft of electronic devices, hacking, and unauthorized access were the most frequent types.

New technological capabilities create new ethical problems, as usual. The question of who should be able to access which kinds of data becomes an important issue. Physicians, for instance, might want to be authorized to access all information about all patients previously admitted to the hospital for other health problems, including sexual orientation or HIV markers, whether or not it is relevant to the current complaint. They might claim that they need all the information to protect themselves and their patients, and that if this information, especially about communicable diseases, were not available to them, they would have the right to refuse to treat those patients. This creates a specific challenge in an atmosphere where professional values and patient rights are not protected as much as they deserve.

Another problem emerges with the trend of commercialization in healthcare. Companies that operate the data management systems of hospitals might claim that the databases are their property and that they therefore have the right to keep a copy and use it as they see fit after the contract is over. Databases might also be disclosed to commercial companies by the health authorities of a country. Under a recent policy called “care.data” in the UK (UK National Health Service 2015), it will be possible to use patient information for purposes other than direct medical care, and “the intention is to make it available – with some of the identifying information removed, but not always – to organisations outside of the NHS including universities, commercial companies, medical researchers and information intermediaries” (MedConfidential 2014). Furthermore, in another instance, from Turkey, the Turkish Court of Accounts found that the Social Security Institution, the reimbursement body of the government, had sold the databases of all patients to five companies (Turkish Court of Accounts 2013).

It is possible to claim that two main consequences emerge from the problems mentioned above. First, informed consent, obtained in order to respect the right to self-determination, is losing its meaning in practice, since use of data for secondary and even tertiary purposes is possible and the answer to the question “who is the owner of the data?” is becoming blurred. Second, as patients trust medicine less, they might feel that their only options are to disclose inaccurate or false information to physicians or not to visit them at all. This would obviously harm both their personal health and society, especially with regard to communicable diseases.

Data as Intellectual and Commercial Property

The ownership of data is an important issue in the production of knowledge as well. Regarding scientific research, it is usually argued that maintaining data safety is vital in order to protect researchers’ intellectual production and the value of their labor. In a knowledge production system where researchers usually compete with each other in an isolated and uncooperative manner through academies and research institutes, data become a kind of property, albeit intellectual. Producing that property and keeping ownership of it bring income, possibilities for future grants, reputation, social position, and “academic capital” as Bourdieu (1984) defined it, while not having it, or losing it, brings the opposite. It is therefore protected carefully until it is registered, licensed, and patented to the researcher through publication. Published data are protected as well, by copyright, patent, or intellectual property laws, under which using them without permission or without proper attribution becomes a major offense in the context of intellectual production, called “plagiarism.” This is why plagiarism and even theft are the main topics that come to mind when data safety in the research context is discussed.

Various forms of plagiarism, copying datasets without permission during peer review, and violations of intellectual property rights are indeed common and important problems. However, there is another dimension of the problem that should be on the agenda of data safety discussions. The concept of intellectual property rights is not used only to protect an individual researcher’s rights; it is also used to protect commercial interests. In current practice, life science research is mainly sponsored by companies in the medical industry, and these companies claim the right to keep the data undisclosed. Knowledge is transformed into a kind of commodity, and the data safety argument is used to hide data as companies put profits before society. Researchers have to sign contracts binding them not to release the data they produce in any form without the permission of the sponsor. According to a study aiming to identify the prevalence and determinants of data-withholding behaviors among 2,167 academic life scientists, one in five participants reported that “publication of their research results had been delayed by more than six months at least once in the last three years to allow for patent application, to protect their scientific lead, to slow the dissemination of undesired results, to allow time to negotiate a patent, or to resolve disputes over the ownership of intellectual property” (Blumenthal et al. 1997). The same study found that participation in academic-industry research relationships and engagement in the commercialization of university research were significantly associated with delays in publication. Contracts forbidding disclosure put pressure on researchers, and the cost of disclosing research results can be severe. Two of the best-known cases are those of Dr. Nancy Olivieri and Dr. David Healy, who spoke publicly about potential dangers to patients revealed by their research.
Both experienced serious negative consequences, including dismissal from their positions and exposure to damaging rumors intended to discredit them (Schafer 2004). Furthermore, in addition to contracts with individual researchers, companies secure their data ownership through international agreements such as the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) (World Trade Organization 1994). Thus the legal right to hide data becomes a usual part of the data safety discourse, which legitimizes society’s loss of control over it.

Measures to Keep Data Safe

Breaches and abuse of data seriously undermine trust in medicine, science, and the state, in addition to harming individuals and society. To protect the rights of the vulnerable, there are many regulations at the national and international levels, including the UNESCO Universal Declaration on Bioethics and Human Rights, the World Medical Association Declaration on Ethical Considerations regarding Health Databases, the Council of Europe Convention on Human Rights and Biomedicine, and several pieces of legislation in the European Union (European Commission 2015a). These regulations basically cover responsibilities to maintain confidentiality, integrity, and accessibility, informed consent, and deidentification of data.

However, it is not easy to claim that regulations provide sufficient protection, and data safety therefore remains a critical problem. Implementing the general principles of the regulations becomes more difficult as the speed and variety of technological advancement grow. Educating healthcare workers, medical students, contracted workers, and voluntary staff about the duty of confidentiality and practical security rules is one area for improvement. The UK Department of Health suggests security measures against theft and inappropriate access by staff, such as (British Medical Association 2012):
  • Lock doors, offices, and filing cabinets.

  • Avoid leaving paper or computer files open where they may be seen by others.

  • Do not leave files unattended.

  • Password-protect computer systems, and do not share passwords with other people.

  • Change passwords at regular intervals to prevent anyone else using them.

  • Always clear the screen of a previous patient’s information before seeing another.

  • Always log out of any computer system or application when finished.

It should be kept in mind that data safety should be handled holistically and that the responsibilities of healthcare workers are only one part of it. Improving institutional and national policies is equally important, if not more so. For instance, new technical possibilities may make it easier to reidentify anonymized data, so deidentification might lose its effectiveness. Pseudonymisation might be useful as a second security layer. Encryption of data in all forms of electronic storage and transmission might provide further protection. As for informed consent, its power to protect against misuse for secondary purposes is limited when consent is blanket, since individuals lose control; it should therefore be conditional rather than open ended. Patients should be informed about secondary uses, potential incorporation into aggregated databases, and who will be authorized to access the data. Opting out of secondary use should be an available option both before and after giving consent. Public participation and accountability in how data are used should be part of institutional and national policies, as suggested by the Nuffield Council on Bioethics (2015): “Any data project should first take steps to find out how people expect their data to be used and engage with those expectations through a process of continued participation and review.” People should be able to demand deletion of records about themselves. The 2014 ruling of the Court of Justice of the European Union on the “right to be forgotten” in relation to online search engines is a great improvement in that sense (European Commission 2015b). Finally, data as intellectual and commercial property should be part of the debates on data safety, and intellectual property rights should be balanced against public rights and interests.
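The pseudonymisation mentioned above can be illustrated with a minimal sketch: a direct identifier is replaced by a keyed token, so records can still be linked longitudinally without exposing identity. This is an illustrative assumption, not a description of any cited system; the record fields are hypothetical, and the salt must be stored separately from the pseudonymised data, since anyone holding it can regenerate the tokens.

```python
import hashlib
import secrets

# Hypothetical linkage key; in practice this would be held by a trusted
# third party or key custodian, separate from the pseudonymised dataset.
SALT = secrets.token_bytes(16)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted-hash token."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

# Illustrative patient record with fabricated values
record = {"name": "Jane Doe", "id_number": "0000-111", "diagnosis": "E11"}

safe_record = {
    "pid": pseudonymise(record["id_number"]),  # linkable token, not identity
    "diagnosis": record["diagnosis"],          # clinical payload is kept
}

# The same identifier always yields the same token while SALT is fixed,
# so longitudinal linkage across records remains possible:
assert safe_record["pid"] == pseudonymise("0000-111")
assert "name" not in safe_record
```

As the surrounding text notes, pseudonymisation is only a second layer: indirect identifiers (rare diagnoses, dates, locations) in the remaining payload may still allow reidentification, which is why encryption and access policies are needed on top of it.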

Conclusion

Data safety is an important problem in today’s highly digitized world, which inevitably multiplies security deficiencies. Breaches have the potential to affect individuals, institutions, and even countries, and it could be claimed that current regulations, guided by the triad of “confidentiality, integrity, accessibility” classically recommended for information security, are not sufficient to allay concerns. There remains significant room for improvement regarding vulnerabilities at the micro, meso, and macro levels.

In addition to the need to improve measures protecting data from unauthorized use, there is a clear need to address the problems created by authorized use in the era of the Panopticon state and commercialization. An individual’s control over the data about her should be increased, and personal information should not be used by state or commercial entities to control or manipulate her. Public participation at every stage of data collection, storage, transmission, and disposal might help prevent abuses. Democratization would be a constructive step toward guaranteeing that the production and use of data primarily serve public interests, and thus toward re-establishing trust. This is also true for medicine, where the trust relationship is crucial. Maintaining data safety and involving patients in decision-making about its use seem essential for a better physician-patient relationship.

It is inevitable that we will live in ever more high-tech environments in the future and that electronic data will become an even more inseparable part of us. Control over data will likely be decisive for personal liberties and even for our existence in society. From the perspective of human rights and professional values, advocating data safety in the name of improving human life and health should be the guiding principle.

References

  1. Allmer, T. (2012). Towards a critical theory of surveillance in informational capitalism. Frankfurt am Main: Peter Lang.
  2. Althusser, L. (2006). On the reproduction of capitalism: Ideology and ideological state apparatuses (in Turkish). Istanbul: Ithaki Publishing.
  3. Blumenthal, D., Campbell, E. G., Anderson, M. S., Causino, N., & Louis, K. S. (1997). Withholding research results in academic life science: Evidence from a national survey of faculty. JAMA, 277, 1224–1228.
  4. Bourdieu, P. (1984). Distinction: A social critique of the judgement of taste. Cambridge: Harvard University Press.
  5. British Medical Association. (2012). Medical ethics today (3rd ed.). West Sussex: Wiley.
  6. European Commission. (2015a, July). Data protection legislation. Retrieved from http://ec.europa.eu/justice/data-protection/law/index_en.htm
  7. European Commission. (2015b, July). Protection of personal data. Retrieved from http://ec.europa.eu/justice/data-protection/
  8. Gallagher, R. (2013, February 10). Software that tracks people on social media created by defence firm. The Guardian. Retrieved from http://www.theguardian.com/world/2013/feb/10/software-tracks-social-media-defence
  9. Liu, V., Musen, M. A., & Chou, T. (2014). Data breaches of protected health information in the United States. JAMA, 313, 1472–1474.
  10. MedConfidential. (2014, November). Important changes to the way your personal data is handled. Retrieved from http://medconfidential.org/wp-content/uploads/2014/01/caredata_poster_green.pdf
  11. Nuffield Council on Bioethics. (2015, February). Public participation should be at the heart of big data projects. Retrieved from http://nuffieldbioethics.org/news/2015/public-participation-heart-big-data-projects/
  12. Schafer, A. (2004). Biomedical conflicts of interest: A defence of the sequestration thesis – Learning from the cases of Nancy Olivieri and David Healy. Journal of Medical Ethics, 30, 8–24.
  13. Turkish Court of Accounts. (2013). Report on Social Security Institution (in Turkish). Retrieved from http://www.sayistay.gov.tr/rapor/kid/2013/Sosyal_Güvenlik_Kurumları/SOSYAL%20GÜVENLİK%20KURUMU.pdf
  14. UK Government. (2015). Information security. Retrieved from https://www.gov.uk/service-manual/making-software/information-security.html
  15. UK National Health Service. (2015, August). Your health and care records. Retrieved from http://www.nhs.uk/NHSEngland/thenhs/records/healthrecords/Pages/care-data.aspx
  16. World Trade Organization. (1994, April). The agreement on trade-related aspects of intellectual property rights. Retrieved from https://www.wto.org/english/tratop_e/trips_e/t_agm0_e.htm

Copyright information

© Springer Science+Business Media Dordrecht 2015

Authors and Affiliations

  1. Department of Medical Ethics, Uludag University School of Medicine, Bursa, Turkey