Using the methodology described in “Collection and selection”, we distilled from the four lists of values detailed in the previous section a set of ten values in total for big data technologies used in multiple contexts: human welfare, autonomy, dignity, privacy, non-maleficence, justice, accountability, trustworthiness, solidarity and environmental welfare (see Table 1).
In this section we elaborate on each of these values and their meaning in emerging big data technologies. First, we depict them with reference to all four lists of values detailed in the previous section. Second, we present our arguments for why each value should be considered and treated as a placeholder for ethical interactions throughout big data contexts. This also entails providing examples of issues where each of these values is put under pressure in big data contexts. Finally, notwithstanding the starting point of this article that all listed values should be upheld throughout the whole life-cycle, we explicate for every value in which phase of the big data life-cycle we foresee that extra attention will be needed to ensure an ethical foundation for big data technologies.
Note that we do not put these values in any hierarchical order, although we recognize that some of the identified values can be derived from combinations of other values and thereby overlap. Putting the values in a hierarchical order would raise many questions that are beyond the scope of this paper. We do try to highlight in the text below to what extent some values are derived from other values and how values from the different lists overlap.
Below we discuss the integrated list of values and illustrate how they may be under pressure in contexts of big data technologies. The issues putting pressure upon each value are also highlighted in Table 2.
Table 2 Overview of the identified ethical issues

Human welfare
Human welfare is perhaps one of the most straightforward values to protect (Hurthouse 2006). This value does not come explicitly to the foreground on all four value lists, yet each list on which human welfare is not stated contains values that are very close to it.
On the list of technomoral values human welfare can best be seen as related to the value of care. Care (loving service to others) is defined by Vallor as the skilful, attentive, responsible and emotionally responsive disposition to personally meet the needs of those with whom we share our techno-social environment. In relation to care, Vallor argues that some of the biggest dangers of emerging technologies for human welfare can appear when “emerging technologies are used as supplements for human caring” (Vallor 2016, p. 119): when a robot delivers medicine, for instance, a caring task may be seen as completed, yet the human touch that a patient needs might be missing. Value-sensitive design lists human welfare explicitly. In VSD, human welfare refers to people’s physical, material, and psychological well-being. Anticipatory emerging technology ethics refers to well-being and the common good in general. Brey refers to well-being as one of the most general values on his list. In anticipatory technology ethics well-being means being supportive of happiness, health, knowledge, wisdom, virtue, friendship, trust, achievement and desire-fulfillment. This also includes being supportive of vital social institutions and structures, democratic institutions and cultural diversity. In biomedical ethics the value of beneficence is described as the principle of acting with the best interest of the other in mind. From the value list of biomedical ethics, beneficence is therefore most closely related to human welfare; it can also be regarded as something that preserves human welfare. Welfare may also overlap with the value of solidarity (see below).
Regarding our criteria for an integrated approach, which accounts for the circular life cycle of big data and for the broad range of stakeholders who have a share in how values become upheld in big data contexts, human welfare finds a pivotal place on our value list. Human welfare should be accounted for by all stakeholders both during the design and the application stage of big data technologies, no matter whether these stages follow one another, run in parallel, or generate each other. The example of the healthcare robot referred to by Vallor (Ibid.) demonstrates how this value can be put under pressure in big data contexts. Healthcare robots are designed to register patients’ data. Yet, in practice ever newer data is fed into robots by multiple stakeholders, e.g. doctors, pharmacists, nurses or others, generating new data through regular updates and prognoses on a patient’s health condition. The latter could also be considered a form of self-generated renewal in design regarding prognoses on a patient’s health by big data. Although the role of robots in supplying services for patients (and healthcare workers) should not be underestimated, this example also underlines that the network of designers, developers, users and other stakeholders, including robots themselves, mutually share responsibility in how human welfare as a value becomes upheld or put under pressure throughout the whole lifecycle of big data technologies.
Autonomy
Autonomy as a value is present on the value lists of VSD, anticipatory technology ethics and biomedical ethics, but not on the list of technomoral values. Autonomy on the VSD list refers to people’s ability to decide, plan, and act in ways that they believe will help them to achieve their goals. In anticipatory emerging technology ethics, autonomy is the ability to think one’s own thoughts, form one’s own opinions and make one’s own choices. According to anticipatory emerging technology ethics, autonomy presupposes responsibility and accountability, and in the digital era it should also include informed consent. In biomedical ethics, autonomy, as mentioned above, is the right of the individual to make his or her own decisions or choices. Vallor’s technomoral values of magnanimity and courage (daring to take risks) are closely related to autonomy. Magnanimity or moral leadership, for instance, in Vallor’s view is a value that everyone needs to cultivate in him/herself. Magnanimous leaders are those who can inspire, guide, mentor and lead the rest of us at least towards the vicinity of the good. Magnanimity is therefore a quality of someone who can by example be a moral leader, especially during techno-social change. Stakeholders such as technology designers, entrepreneurs and data scientists, but also others involved in big data networks, should possess this quality or ethical characteristic. The technomoral value of courage, which Vallor describes as intelligent fear and hope, is also closely related to autonomy, because only a person who is autonomous can choose courageously. Conversely, courage facilitates autonomy: it is a special facilitator of autonomy because it is an admirable quality for making difficult choices rather than easy ones.
As to emerging technologies, Vallor calls for embracing this value by cultivating a “reliable disposition toward intelligent fear and hope with respect to the moral and material dangers and opportunities presented by emerging technologies”. Certain expectations are by design inherent in big data technologies, yet unexpected implications emerge through their use. Cultivating a reliable disposition toward intelligent fear and hope in humans, and equipping stakeholders with features that facilitate such a disposition, are both pivotal requirements to enact a form of autonomy that is conscious of the unexpected implications of big data use.
An autonomous person in our big data era should desirably be a magnanimous and courageous one in his/her activities. Furthermore, this autonomy should be facilitated by the broader network of stakeholders connected to big data technologies. In order to uphold autonomy, it is crucial to acknowledge the connection between the design stage and the application stage. A feature that is instrumental in facilitating such autonomy for a diverse set of big data stakeholders is transparency. If transparency is put under pressure in the design stage, so is autonomy in the application stage. For instance, if a big data-mediated decision process is designed in such a manner that there is no way for citizens to understand how a decision has been reached, they cannot enact their autonomy, for instance, to seek recourse. Furthermore, autonomy can also be in danger when big data-driven profiling practices limit free will and free choice and turn out to be manipulative rather than supportive in raising awareness or cultivating knowledge about, for instance, news, culture, politics and consumption. As this could be an unforeseen consequence of the application of big data technologies, it is necessary to keep a close eye on how autonomy actually takes shape in practice. Consequently, autonomy should be embraced as a value throughout the entire life-cycle of big data technologies, with extra attention to the direct connection between the design and application stage.
Dignity
Dignity (Düwell 2017) as a value cannot be found on all four lists of values, only on the lists of anticipatory emerging technology ethics and biomedical ethics. Human dignity in anticipatory emerging technology ethics includes both self-respect and disinterested respect by humans towards all humans. On the list of technomoral values, empathy is closest in notion to dignity. Empathy (compassionate concern for others) is explained by Vallor as a technomoral value that cultivates openness to being morally moved to caring action, led by the emotions of the other members of our technosocial world. This means cultivating compassion for others’ joy as well as their pain. Vallor calls for embracing this value because the amount of joyful and painful events mediated via digital technologies is unprecedented, and we are therefore called to empathic action more often than ever. Nevertheless, she argues that we need to adapt our receptors of empathy to these technological changes. This is crucial in order not to become narcissists, and we consider it also crucial for upholding someone’s dignity within big data contexts.
The value-sensitive design list does not refer to dignity but to identity. Yet, in our view these two notions are intertwined. In VSD, identity refers to people’s understanding of who they are, embracing both continuity and discontinuity over time; dignity requires a similar understanding so that one can respect this value. Following from this we could say that dignity and identity mutually presuppose each other: when dignity is preserved so is identity, and when identity is preserved so is dignity. Human dignity can therefore be regarded as a prime principle, since all implications of big data-mediated interactions affect humans and, to differing degrees, also their identity and dignity.
Dignity is under pressure in big data contexts. For instance, when too much about a user is revealed to others, principles of data minimization and design requirements such as encryption appear to be insufficient. Adverse consequences of algorithmic profiling, such as discrimination or stigmatization, demonstrate that dignity is fragile in many contexts of big data, but it becomes pressured in particular in the application stage. Being confronted with discriminating profiles and opaque data-driven decisions, a citizen may experience a Kafkaesque situation in which her view and agency are no longer acknowledged. When a person is no longer treated as someone with particular interests, feelings and commitments, but merely as a bundle of data, her dignity may be compromised. The example of a black couple having been auto-tagged as gorillas (Gray 2015) also demonstrates this. Actively raising awareness of how the design and application of big data shape algorithmic decision-making processes and online search results, for instance, could assist in effectuating respect for the value of dignity in big data contexts. By raising awareness, and if stakeholders acknowledge their responsibilities in upholding dignity, many of the adverse effects of big data-mediated interactions can be minimized.
Privacy
Big data-based analytics are often highly privacy-invasive; therefore we acknowledge the importance of privacy as a value and include it on our list of values for big data contexts. Out of the four lists of values detailed in section three, only value-sensitive design lists privacy as a separate value to cherish. The value lists of biomedical ethics and technomoral values do not refer to privacy explicitly. In relation to privacy, anticipatory emerging technology ethics focuses on the fundamental right to privacy and on notions of privacy known from data protection perspectives, such as informational privacy, but also on its physical extensions, such as bodily privacy. A third notion of privacy in anticipatory emerging technology ethics relates to the relational character of privacy. This is also a useful notion for the integrated approach in this paper, as privacy should come about through mutually respected (not only legal) agreements among stakeholders. Anticipatory emerging technology ethics underlines that privacy has a property component; therefore the value of property in a certain form is also considered useful to support privacy.
VSD refers to privacy as a claim, an entitlement, or a right of an individual to determine what information about him or herself can be communicated to others. Given the scope of privacy for big data networks, we also considered informed consent and ownership and property as related values. Informed consent refers to garnering people’s agreement, encompassing criteria of disclosure and comprehension (for “informed”) and voluntariness, competence, and agreement (for “consent”). Ownership and property refers to a right to possess an object (or information), use it, manage it, derive income from it, and bequeath it. All these rights can be regarded as constituents of a broader definition of privacy, as, for instance, personal data can be owned by others than the individual referred to by the data within big data contexts. Therefore, actively enacting the value of informed consent is another prerequisite for all stakeholders in big data-mediated relationships to facilitate privacy as a value.
Since the rise of the internet there is extensive literature on privacy (Custers and Ursic 2016; De Hert and Gutwirth 2006; Gray and Citron 2013; Hildebrandt and de Vries 2013; Koops et al. 2017; Moerel et al. 2016; Nissenbaum 2010; Omer and Polonetsky 2012). Privacy as an ethical value for big data contexts is closest in scope to privacy as a fundamental right (as also listed on Brey’s list). Privacy as a fundamental value first includes respect for others, and specifically respect for someone’s private sphere, conversations and writing, e.g. confidentiality of mail, as well as any action that one intentionally keeps unexposed to the broad public. A cornerstone of privacy as a fundamental value lies in respecting the boundaries someone has drawn for him/herself and the boundaries one would like to see protected from intrusion by others.
These instances of privacy within big data-mediated interactions are under constant pressure. Persons can increasingly easily be identified in datasets even if technical measures of anonymization are in place: the sheer myriad of correlations between personal data in big data schemes allows for easy identifiability. The growing technological possibilities offered by big data thus allow for many ways to disrespect one’s private interactions and relations. Therefore, privacy needs to be at the heart of the design process, by making sure it is a key design requirement and not merely something that is “added” at the end. However, due to the mentioned possibilities of identifying or singling out someone, privacy can also come under siege in the implementation and application stage. Therefore, an integrated approach in which one’s privacy is respected holistically by all stakeholders remains pivotal.
Non-maleficence
Non-maleficence is the value of not causing harm to others. Non-maleficence is only explicitly stated as a core value on the value list of biomedical ethics (Beauchamp and Childress 2012a, b). Non-maleficence, according to Beauchamp and Childress, is rooted in the Hippocratic Oath and means ‘above all, do no harm’ to others. Regarding the technomoral values of Vallor, non-maleficence is intertwined with humility and self-control. On the value-sensitive design list, calmness (or a certain degree of self-control) is most closely related to non-maleficence. On the anticipatory emerging technology ethics list, the core principle of avoiding harms and risks provides the closest overlap with the value of non-maleficence.
Humility focuses on knowing what we do not know. Vallor stresses the importance of this value regarding new technologies, because practicing humility as a technomoral virtue, in her view, acknowledges “the real limits of our technosocial knowledge and ability, reverence and wonder at the universe’s power to surprise and confound us, and the renunciation of the blind faith that new technologies inevitably lead to human mastery and control of our environments”. Practicing humility in a big data era could therefore serve as ethically cautious behaviour both during design and application of big data. Calmness in VSD refers to a peaceful and composed psychological state and can be regarded as a prerequisite for remaining non-maleficent within and across big data contexts. Avoiding harms and risks is a principle and desired attitude which, in Brey’s view, encompasses that new technologies may carry different harms and risks; consequently, precaution toward these technologies is necessary. This implicit precautionary attitude renders this principle of Brey closely related to the value of non-maleficence (‘do no harm’).
Non-maleficence is first and foremost a central value in the design process. Actors such as developers, data scientists, and commissioning parties have to anticipate and take into account the wider societal impact of their applications. It has to be noted that, within the context of big data technologies, non-maleficence is perhaps one of the most difficult values to implement, as the predictive logic in big data-based algorithms can often result in unforeseen and offensive outcomes. These algorithmically mediated results of big data can therefore easily put pressure upon non-maleficence. For instance, false positive matches in law enforcement, such as the recent case of children falsely flagged as security risks on no-fly lists in Canada, demonstrate that big data-based analysis can result in unjustified accusations of innocent persons (Bronskill 2018). In order to uphold non-maleficence, it is important to be humble and acknowledge that big data technologies might have unforeseen and unwanted consequences. Therefore, non-maleficence, both as a design requirement for technologies and as an attitude requirement for a wide range of stakeholders, including policy makers, data scientists and users, should be embraced throughout different contexts of big data.
Justice
Justice as a value can be considered a prime requirement in everyday life. Justice appears on all lists except that of VSD. Justice on the technomoral value list is the broadest among the values; based on Aristotelian and Confucian understandings, Vallor explains justice as the “just treatment of others”. We can also understand this value under the broader notion of human benevolence. Vallor claims that “data-mining, pervasive digital surveillance and algorithmic profiling, robotics” currently fuel the growth in social inequalities, and that technomoral justice is needed to restore or remedy inequalities. In biomedical ethics justice is about upholding rightness. In anticipatory emerging technology ethics justice is understood in its broadest terms. It involves the just distribution of primary goods, capabilities, risks and hazards, non-discrimination and equal treatment relative to age, gender, sexual orientation, social class, race, ethnicity, religion, disability, etc. Furthermore, it also involves geographical dimensions, such as global north–south justice, and age-related aspects, such as intergenerational justice.
Value-sensitive design does not list justice as a value specifically. It does, however, list freedom from bias and universal usability, and we consider these two closely related to justice. First, regarding freedom from bias, Friedman et al. underline the essential need to prevent or remedy “systematic unfairness” as a form of bias. Within the context of big data, systematic unfairness can also be regarded as a form of pressure upon the value of justice. Systematic unfairness occurs, for instance, in the stage of analysis when false negatives are generated during biometric identification processes of migrants (La Fors-Owczynik 2016; La Fors-Owczynik and Van der Ploeg 2015) or false positives of suspects (Tims 2015). However, justice as a value also plays an important role in both the implementation and application stage. Given that big data accelerates the speed and quantity of data transfer and creates immense possibilities for reuse, instances of false identity verification, false accusation or stigmatization as a consequence of big data-led correlations also grow. Second, universal usability as a value on the VSD list refers to making persons successful users of information technology. Although success is a highly contested notion, we here consider a mutually respectful and just design and application of big data technologies, upheld holistically by all stakeholders, as a form of success. All in all, justice is a value that is central to all stages of the big data life cycle.
Accountability
Accountability is a value that requires constant assessment in democratic societies (Braithwaite 2006; Jos and Tompkins 2004). From the four lists of values, accountability is explicitly stated only on the list of value-sensitive design. In VSD, accountability refers to the properties that ensure the actions of a person or institution can be traced to that person or institution. The technomoral value of ‘perspective’ encompasses pivotal qualities for upholding accountability. Cultivating perspective (holding on to the moral whole), Vallor argues, is necessary because by doing so we acknowledge that the emerging technologies we continuously develop have implications that are unparalleled, “global in scale and unforeseeable in depth”. Accountability as a value appears neither on the value list of anticipatory emerging technology ethics nor on that of biomedical ethics.
Yet, given our integrated approach for big data technologies and the fact that the techno-social choices for, and implications of, using big data technologies increasingly magnify, accountability is a pivotal value on our list for big data. Transparency is a facilitator of accountability, in a similar way as it is a key condition for autonomy. If transparency is under pressure in domains of big data, so is accountability. For instance, auto-tagging can result in highly discriminatory and offensive outcomes and put pressure on this value. Instances when, for example, two black persons were tagged as gorillas (Gray 2015) or when the Dachau concentration camp was tagged as a sports centre on a website (Hern 2015) demonstrate pressure upon the value of accountability. It is difficult to identify which parties in these big data-mediated contexts are responsible for such offensive outcomes. Is it the algorithm, the designer, the user of a certain programme, or also the searching users from whom an algorithm learns, or perhaps several of them? Given our holistic approach, we consider accountability a pivotal value to uphold within all big data contexts, yet it presupposes more transparent structures in big data networks and efforts from all stakeholders.
Trustworthiness
Trustworthiness is a value that requires mutual caring, especially in a networked world (Keymolen 2016). From our four lists only VSD refers to this value, more specifically to trust. In VSD, trust is described through the expectations that exist between people who can experience good will, act with good will toward others, feel vulnerable, and experience betrayal. Honesty and self-control are the values on the list of technomoral values that are closest in notion to trustworthiness; they can be regarded as qualities of a person upon which his/her trustworthiness can rely. The anticipatory emerging technology ethics list mentions neither trustworthiness, honesty nor trust. On the biomedical ethics list we consider veracity, telling the truth, as most closely related to trustworthiness.
As to honesty on the technomoral value list: as Vallor points out, the parameters of one general truth are quite difficult if not impossible to outline, because the truth content of a message can change depending on whom we speak to, where we gather our information or how we present it. Still, the intention of speaking with honesty within the context of big data is a desirable quality, because honesty (‘respecting truth’) serves flourishing in interactions with other people. Vallor argues that upholding honesty shall be a primary task of ethics, because human flourishing would be “impossible without the general expectation of honesty”. In this respect honesty is also connected to self-control. Vallor explains self-control as a person’s “ability to align one’s desire with the good”, which includes good will and good intention. Both honesty and self-control underpin trustworthiness and only increase its relevance for big data contexts. Regarding the biomedical ethics list, trustworthiness is closely related to the value of veracity. The principle of veracity, when implemented as a moral requirement for big data collectors, data brokers, data scientists and other stakeholders, would increase trustworthiness within relationships in big data networks. Therefore, given our integrated approach for big data contexts and the fact that trustworthiness is under constant pressure within these contexts, embracing trustworthiness holistically by all stakeholders is crucial. In particular, trustworthiness can come under pressure in the application stage, for instance because data transfer processes are obscure and citizens can easily become victims of precautionary algorithmic decisions, being unable to refute the basis upon which such decisions were drawn; see, for instance, occurrences of false positives in law enforcement (“False match shows no fly-list isn’t perfect,” 2010).
Upholding trustworthiness as a value is therefore of utmost importance throughout all interactions in big data networks, and especially those that fall beyond the limits of the law. Given the relational character of this value (it can only flourish in interaction), prescribing the practice of trustworthiness (and also veracity) for all stakeholders during big data-based processes could, for instance, strengthen the effectiveness of such legal prescriptions as informed consent (Custers et al. 2013, 2014), the right not to be subject to profiling, or the right to explanation. These rights can easily be violated when trustworthiness, veracity and honesty are not effectuated in big data networks.
Solidarity
Solidarity as a value does not appear on any of the value lists referred to above. Yet, we consider solidarity a highly relevant value in domains of big data (Tischner 2005). In VSD, the value of courtesy seems closest in notion to it: courtesy refers to treating people with politeness and consideration.
Out of the technomoral value list, empathy, flexibility, courage and civility relate most to solidarity, as these values can be seen as building blocks for cultivating a mutually shared and upheld attitude of solidarity. Solidarity is a value that has been embraced to different degrees over time. For instance, before the fall of the Iron Curtain in Poland, solidarity became a value that assembled crowds, creating a new positive ideology of compassion and brotherhood against decades of Soviet political oppression. This symbolic but also enacted form of solidarity is not comparable with today’s circumstances, at least in Western Europe, where there is no such oppression. Yet, showing solidarity in daily life, by having the courage to stand up for others and having the flexibility (the latter being closely related to the liberal value of tolerance, according to Vallor) to “enable the co-flourishing of diverse human societies”, is crucial to uphold during discussions on big data. Empathy (compassionate concern for others), as mentioned earlier, is a value that cultivates openness to being morally moved to caring action by the emotions of the other members of our techno-social world. This is an essential quality of being in solidarity with someone else. Flexibility as a technomoral value (skilful adaptation to change) is a highly useful enabler of solidarity, as it facilitates the co-flourishing of diverse human societies and requires mutual cultivation. As to courage, solidarity requires courage from those who want to enact compassionate behaviour towards others, and therefore requires risk-taking and prioritizing others’ interests with goodwill.
Solidarity also resonates with civility (making common cause), which according to Vallor is the “sincere disposition to live well with one’s fellow citizens of a globally networked information society, to collectively and wisely deliberate about matters of local, national and global policy and to work cooperatively towards those goods of technosocial life that we seek and expect to share with others”. Solidarity may also overlap with welfare (see above), because from some perspectives (such as Vallor’s technomoral values) both values are focused on a responsive disposition towards others, whereas from other perspectives (such as VSD) welfare focuses more on individual well-being. We therefore include welfare and solidarity as separate values in our list: welfare focuses more on the individual aspect, and solidarity more on the inter-relational, societal aspect.
Given these qualities underpinning solidarity, as well as our integrated approach and holistic view on stakeholders, it is also highly important to stress that solidarity within big data networks requires mutually respectful cooperation among stakeholders. This can especially be seen as an asset as long as limitations on the use of novel big data technologies remain scarce. The many instances of big data-based calculations in which commercial interests are prioritized over non-profit interests are examples of situations in which solidarity is under pressure. When, for instance, immigrants are screened by big data-based technologies, they may not have the legal, lingual, physical or psychological position or capacity to defend themselves against potential false accusations resulting from digital profiling. These are examples of solidarity being under pressure within big data-based interactions.
Environmental welfare
This value only appears, as environmental sustainability, on the list of value-sensitive design; the value list of biomedical ethics does not include it. On the VSD list, environmental sustainability refers to sustaining ecosystems such that they meet the needs of the present without compromising future generations. The list of anticipatory emerging technology ethics contains the principle of no harm, which also entails the prohibition of causing environmental harm (including animal welfare) and is closest in notion to this value. Among the technomoral values, empathy and courage seem most closely related to environmental welfare. Although empathy has not been developed as a concept concerning non-humans, the environment (including animals) is highly respected, potentially as much as human life, for instance in Japanese philosophy (Callicott and McRae 2017) and in tribal philosophies such as Native American culture (Booth 2008) or the South African Ubuntu culture (Chuwa 2014).
Although big data has rather indirect effects on the environment, given our integrated and holistic approach such effects matter: for instance, the current rush for lithium in Latin America (Frankel and Whoriskey 2016), a critical ingredient of batteries worldwide, shows how environmental welfare as a value is put under pressure by big data technologies. Considering environmental welfare in big data contexts, an environmentally conscious mind-set could help limit the negative implications of big data technologies throughout the whole big data life-cycle. For instance, the use of lithium-based batteries puts pressure upon the value of environmental welfare, whereas the wider use of solar energy-based batteries could limit these specific negative effects of big data technologies on the environment. Limiting such negative effects is also crucial because any pressure upon the value of environmental welfare can also negatively affect human welfare.
Certainly our list of these ten values, summarized in Table 1, is not exhaustive. Yet, these ten value dimensions are comprehensive with respect to emerging big data technologies in multiple contexts. This value list also aims to provide value-specific perspectives for discussion, for instance on the extent to which the use of big data technologies brings along issues that put pressure upon pivotal ethical values in our current societies (Table 2). Human welfare, autonomy, dignity, privacy, non-maleficence, justice, accountability, trustworthiness, solidarity and environmental welfare are all values that are constantly under pressure within the context of big data and, in our view, deserve reflection throughout the circular life cycle of big data technologies, in all contexts, by the broadest range of stakeholders involved.