Mobile phones track us as we shop at stores and can infer where and when we vote. Algorithms based on commercial data allow firms to sell us products they assume we can afford and avoid showing us products they assume we cannot. Drones watch our neighbors and deliver beverages to fishermen in the middle of a frozen lake. Autonomous vehicles will someday communicate with one another to minimize traffic congestion and thereby energy consumption. Technology has consequences, tests norms, changes what we do or are able to do, acts for us, and makes biased decisions (Friedman and Nissenbaum 1996). The use of technology can also have adverse effects on people. Technology can threaten individual autonomy, violate privacy rights (Laczniak and Murphy 2006), and directly harm individuals financially and physically. Technologies can also be morally contentious by “forcing deep reflection on personal values and societal norms” (Cole and Banerjee 2013, p. 555). Technologies have embedded values or politics, as they make some actions easier or more difficult (Winner 1980), or even work differently for different groups of people (Shcherbina et al. 2017). Technologies also have political consequences by structuring roles and responsibilities in society (Latour 1992) and within organizations (Orlikowski and Barley 2001), many times with contradictory consequences (Markus and Robey 1988).

While the ethics of technology is analyzed across disciplines from science and technology studies (STS), engineering, computer science, critical management studies, and law, less attention is paid to the role that firms and managers play in the design, development, and dissemination of technology across communities and within their firms. As emphasized in a recent Journal of Business Ethics article, Johnson (2015) notes the possibility of a responsibility gap: the abdication of responsibility around decisions that are made as technology takes on roles and tasks previously afforded to humans. Although firms play an important role in the development of technology, and make associated value judgments around its use, it remains open how we should understand the contours of what firms owe society as the rate of technological development accelerates. We focus here on digital technologies: devices that rely on rapidly accelerating digital sensing, storage, and transmission capabilities to intervene in human processes. Within the symposium, digital technologies are conceptualized to include applications of machine learning, information and communications technologies (ICT), and autonomous agents such as drones. This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. How should organizations recognize, negotiate, and govern the values, biases, and power of technology? How should the inevitable social costs of technology be shouldered by companies, if at all? And what responsibilities should organizations take for designing, implementing, and investing in technology?

This introduction is organized as follows. First, we identify themes the symposium articles share and discuss how the set of articles illuminates diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally, we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.

Technology and the Scope of Business Ethics

For some it may seem self-evident that the use and application of digital technology is value-laden: how technology is commercialized conveys commitments to values ranging from freedom and individual autonomy to transparency and fairness. Each of the contributions to this special issue discusses elements of this starting point. They also—implicitly and explicitly—encourage readers to explore the extent to which technology firms are the proper locus of scrutiny when we think about how technology can be developed in a more ethically grounded fashion.

Technology as Value-Laden

The articles in this special issue largely draw from a long tradition in computer ethics and critical technology studies that sees technology as ethically laden: technology is built from various assumptions that—either implicitly or explicitly—express certain value commitments (Johnson 2015; Moor 1985; Winner 1980). This literature argues that, through affordances—properties of technologies that make some actions easier than others—technological artifacts make abstract values material. Ethical assumptions in technology might take the form of particular biases or values accidentally or purposefully built into a product’s design assumptions, as well as unforeseen outcomes that occur during use (Shilton et al. 2013). These issues have taken on much greater concern recently as forms of machine learning and various autonomous digital systems drive an increasing share of decisions made in business and government. The articles in the symposium therefore consider ethical issues in technology design including sources of data, methods of computation, and assumptions in automated decision making, in addition to technology use and outcomes.

A strong example of value-laden technology is the machine learning (ML) algorithms that power autonomous systems. ML technology underlies much of the automation driving business decisions in marketing, operations, and financial management. The algorithms that make up ML systems “learn” by processing large corpora of data. The data upon which algorithms learn, and from which they ultimately render decisions, are a source of ethical challenges. For example, biased data can lead to decisions that discriminate against individuals due to morally arbitrary characteristics, such as race or gender (Danks and London 2017; Barocas and Selbst 2016). One response to this problem is for companies to think more deliberately about how the data driving automation are selected and assessed to understand discriminatory effects. However, the view that an algorithm or computer program can ever be ‘clean’ feeds into the (mistaken) idea that technology can be neutral. An alternative approach is to frame AI decisions—like all decisions—as biased and capable of making mistakes (Martin 2019). These biases can stem from the design, the training data, or the application of the system to human contexts.
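To make this mechanism concrete, consider a minimal sketch (ours, not drawn from the symposium articles) of how historical bias in training labels can be reproduced by an automated hiring decision even when the protected attribute is excluded from the model. The variable names (group, skill, proxy) and the data-generating assumptions are purely illustrative, and the example assumes scikit-learn is available.

```python
# Illustrative sketch only: hypothetical data and variable names.
# It shows how historical bias in training labels can be reproduced by a
# model even when the protected attribute itself is not a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)       # hypothetical protected attribute (0/1)
skill = rng.normal(0.0, 1.0, size=n)     # legitimate qualification signal

# Historical hiring labels encode past discrimination: group 1 applicants
# were hired at a lower rate even at the same skill level.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, size=n)) > 0

# The protected attribute is excluded from the features, but a correlated
# proxy (e.g., a coded neighborhood or school) is included, as often happens.
proxy = group + rng.normal(0.0, 0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)
predictions = model.predict(X)

for g in (0, 1):
    rate = predictions[group == g].mean()
    print(f"Predicted hiring rate for group {g}: {rate:.2f}")
```

In this toy setting the proxy feature (standing in for something like a zip code or school name) lets the model recover the historical disparity, which is why scrutiny of how training data are selected and assessed, and not merely which features are explicitly supplied, matters for the discriminatory effects discussed above.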

Corporate Responsibility for the Ethical Challenges of Technology

It is becoming increasingly accepted that the firms who design and implement technology have moral obligations to proactively address problematic assumptions behind, and outcomes of, new digital technologies. There are two general reasons why this responsibility rests with the firms that develop and commercialize digital technologies. First, in a nascent regulatory environment, the social costs and ethical problems associated with new technologies are not addressed through other institutions. We do not yet have agencies of oversight, independent methods of assessment or third parties that can examine how new digital technologies are designed and applied. This may change, but in the interim, the non-ideal case of responsible technological development is internal restraint, not external oversight. An obvious example of this is the numerous efforts put forth by large firms, such as Microsoft and Google, focused on developing principles or standards for the responsible use of artificial intelligence (AI). There are voices of skepticism that such industry efforts will genuinely focus on the public’s interest; however, it is safe to say that the rate of technological development carries an expectation that firms responsible for innovation are also responsible for showing restraint and judgment in how technology is developed and applied (cf. Smith and Shum 2018).

A second reason that new technologies demand greater corporate responsibility is that technologies require attention to ethics during design, and design choices are largely governed by corporations. Design is the projection of how a technology will work in use; it embeds assumptions about which users and uses matter, which do not, and how the technology will be taken up. As STS scholar Akrich notes, “…A large part of the work of innovators is that of ‘inscribing’ this vision of (or prediction about) the world in the technical content of the new object” (Akrich 1992, p. 208). Engineers and operations directors need to be concerned about how certain values—like transparency, fairness, and economic opportunity—are translated into design decisions.

Because values are implicated during technology design, developers make value judgments as part of their corporate roles. Engineers and developers of technology inscribe visions or preferences of how the world works (Akrich 1992; Winner 1980). This inscription manifests in choices about how transparent, easy to understand and fix, or inscrutable a technology is (Martin 2019), as well as who can use it easily or how it might be misused (Friedman and Nissenbaum 1996). Ignoring the value-laden decisions in design does not make them disappear. Philosopher Richard Rudner addresses this in the realm of science: for Rudner, scientists as scientists make value judgments, and ignoring value-laden decisions means those decisions are made badly because they are made without much thought or consideration (Rudner 1953). In other words, if firms ignore the value implications of design, engineers still make moral decisions; they simply do so without an ethical analysis.

Returning to the example of bias-laden ML algorithms illustrates ways that organizations can work to acknowledge and address those biases through their business practices. For example, acknowledging bias aligns with calls for algorithms to be “explainable” or “interpretable”: capable of being deployed in ways that allow users and affected parties to more fully understand how an algorithm rendered its decisions, including potential biases (cf. Kim and Routledge 2018; Kim 2018; Selbst and Barocas 2018). Explainable and interpretable algorithms require design decisions that carry implications for corporate responsibility. If a design team creates an impenetrable AI decision, where users are unable to judge or address potential bias or mistakes, then the firm in which that team works can be seen to have responsibility for those decisions (Martin forthcoming).
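As one illustration of the kind of design decision at stake, the sketch below (again ours, with hypothetical feature names such as income and debt_ratio) shows a team choosing an inherently interpretable linear scoring model and surfacing each feature's additive contribution to a given decision so that users and affected parties can inspect what drove it.

```python
# Illustrative sketch only: a hypothetical credit-style decision in which the
# design team favors an interpretable linear model and surfaces per-feature
# contributions alongside each decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["income", "debt_ratio", "years_employed"]   # hypothetical

# Synthetic training data standing in for historical decisions.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -1.5, 0.8]) + rng.normal(0.0, 0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> dict:
    """Return each feature's additive contribution to the decision score,
    i.e., coefficient times feature value for a linear model."""
    contributions = model.coef_[0] * applicant
    return dict(zip(feature_names, np.round(contributions, 2)))

applicant = X[0]
print("Decision:", bool(model.predict(applicant.reshape(1, -1))[0]))
print("Contributions:", explain(applicant))
```

A more opaque architecture might perform marginally better on some metric, but it would leave the affected parties described above unable to judge or contest a mistaken or biased outcome, precisely the trade-off on which the responsibility argument turns.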

It follows from these two observations—technology firms operate with nascent external oversight and designers are making value-laden decisions as part of their work in firms—that the most direct means of addressing ethical challenges in new technology is through management decisions within technology firms. The articles in this special issue point out many ways this management might take place. For example, in their paper “A Micro-Ethnographic Study of Big Data Innovation in the Financial Services Sector,” authors Richard Owen and Keren Naa Abeka Arthur give a descriptive account focusing on how an organization makes ethics a selling point of a new financial services platform. Ulrich Leicht-Deobald and his colleagues take a normative tack, writing in “The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity” that firms designing technologies to replace human decision making with algorithms should consider their impact on the personal integrity of humans. Tae Wan Kim and Allan Scheller-Wolf present a case for increased corporate responsibility for what they call technological unemployment: the job losses that will accompany an accelerated pace of automation in the workplace. Their discussion, “Technological Unemployment, Meaning in Life, Purpose of Business and the Future of Stakeholders,” asks not only what corporations owe to employees who directly lose their jobs to technology, but also what corporations owe to a future society when they pursue workerless production strategies.

The Interface of Business and Technology Ethics

One of the central insights discussed in the pages of this special issue is that technology-driven firms assume a role in society that demands a consideration of ethical imperatives beyond their financial bottom line. How does a given technology fit within a broader understanding of the purpose of the firm as value creation for the firm and its stakeholders? The contributions to this special issue, directly or indirectly, affirm that neither the efficiencies produced by the use of digital technology nor enhanced financial returns to equity investors alone justify the development, use, or commercialization of a technology. These arguments will not surprise business ethicists, who routinely debate the purpose and responsibilities of for-profit firms. Still, the fact that for-profit firms use new technology and profit from the development of technology raises the question of how the profit motive impacts the ethics of new digital technology.

One way of addressing this question is to take a cue from other, non-digital technologies. For example, the research, development, and commercialization necessary for pharmaceutical products carry ethical considerations for associated entities, whether individual scientists, government agencies, non-governmental organizations, or for-profit companies. Ethical questions include: how are human test subjects treated? How is research data collected and analyzed? How are research efforts funded, and are there any conflicts of interest that could corrupt the scientific validity of that research? Do medical professionals fully understand the costs and benefits of a particular pharmaceutical product? How should new drugs be priced? The special set of ethical issues related to pharmaceutical technology financed through private capital markets includes those raised above plus a consideration of how the profit motive, first, creates competing ethical considerations unrelated to pharmaceutical innovation itself and, second, produces social relationships within firms that may compromise the standing responsibilities that individuals and organizations have to develop pharmaceutical products that support the ideal of patient health.

A parallel story can be told for digital technology. There are some ethical issues that are closely connected to digital technology, such as trust, knowledge, privacy, and individual autonomy. These issues, however, take on a heightened concern when the technologies in question are financed through the profit motive. We have to be attentive to the extent to which a firm’s inclination to show concern for customer privacy, for instance, can be marginalized when its business model relies on using predictive analytics for advertising purposes (Roose 2019). A human resource algorithm that possibly diminishes employee autonomy may be less scrutinized if its use cuts operational expenses in a large, competitive industry. The field of business ethics contributes to the discussion about the responsible use of new technology by illustrating how the interface of the market, the profit motive, and the values of technology can be brought into a more stable alignment. Taken together, the contributions in this special issue provide a blueprint for this task. They place the role of technology firmly within the scope of business ethics in that managers and firms can (and should) create and implement technology in a way that remains attentive to value creation for the firm and its stakeholders, including employees, users, customers, and communities.

At the same time, those studying the social aspects of technology need to remain mindful of the special nature—and benefits—of business. Business is a valuable social mechanism to finance large-scale innovation and economic progress. It is hard to imagine that some of the purported benefits of autonomous vehicles, for example, would be on our doorstep if it were not for the presence of nimble, fast-paced private markets in capital and decentralized transportation services. Business is important in the development of technology even if we are concerned about how well it upholds the values of responsible use and application of technology. The challenge taken up by the discussions herein is to explore how we want to configure the future and the role that business can play in that future. Are firms exercising sufficient concern for privacy in the use of technology? What are the human costs associated with relegating more and more decisions to machines rather than making them ourselves? Is there an opportunity for further regulatory oversight? If so, in what technological domain? Business ethicists interested in technology need to pay attention to the issues raised by this symposium’s authors, and those who study technology need to appreciate the special role that business can play in financing the realization of technology’s potential.

In addition, the articles in this symposium illustrate how the intersection of business ethics and technology ethics illuminates how our conceptions of work—and working—shape the ethics of new technology. The symposium contributions herein push us to think critically about how the employment relationship is altered by the use and application of technology. Again, Ulrich Leicht-Deobald and his co-authors prompt an examination of how the traditional HR function is altered by the assistance of machine-learning platforms. Kim and Scheller-Wolf force an examination of what firms using job-automation technologies owe to both displaced and prospective employees, which expands our conventional notions of employee responsibility beyond those who happen to be employed by a particular firm, in a particular industry. Although not exclusively focused on corporate responsibility within the domain of employment, Aurelie Leclercq-Vandelannoitte’s contribution “Is Employee Technological ‘Ill-Being’ Missing from Corporate Responsibility?” encourages readers to think about the implications of “ubiquitous” uses of information technology for future individual well-being and social meaning. There are clear connections between her examination of how uses of technology can adversely impact freedom, privacy, and respect and how ethicists and policy makers might rethink firms’ social responsibilities to employees. Even more pressing, these discussions provide a critical lens for how we think through more fundamental problems such as the rise of work outside of the confines of the traditional employment relationship in the so-called “gig economy” (Kondo and Singer 2019).

How Business Ethics Informs Technology Ethics

Business ethics can place current technology challenges into perspective by considering the history of businesses and markets behaving outside societal norms, and the corrections made over time. For example, the online content industry’s claim that changes to the digital marketing ecosystem will kill the industry echoes claims made by steel companies fighting environmental regulation in the 1970s (IAB 2017; Lomas 2019). Complaints that privacy regulation would curtail innovation echo the automobile industry’s complaints about safety regulation in the 1970s. Here we highlight three areas where business ethics’ understanding of the historical balance between industry desires and pro-social regulation can offer insights on the ethical analysis of technology.

Human Autonomy and Manipulation

A host of market actors are impacted by the rise of digital technology. Consumers are an obvious case. What we buy and how our identities are created through marketing are, arguably, ground zero for many of the ethical issues discussed by the articles in this symposium. Recent work has begun to examine how technology can undermine the autonomy of consumers or users. For example, many games and online platforms are designed to encourage a dopamine response that makes users want to come back for more (“Technology Designed for Addiction” n.d.). Similar to the high produced by gambling (the machines for which have long been designed for maximum addiction; Schüll 2014), games and social media products encourage users to seek the interaction’s positive feedback to the point where their lives can be disrupted. Through addictive design patterns, technology firms create a vulnerable consumer (Brenkert 1998). Addictive design manipulates consumers and takes advantage of human proclivities to threaten their autonomy.

A second example of manipulation and threatened autonomy is the use of aggregated consumer data to target consumers. Data aggregators can frequently gather enough information about consumers to infer their concerns and desires, and use that information to narrowly and accurately target ads. By pooling diverse information on consumer behavior, such as location data harvested from a phone and Internet browsing behavior tracked by data brokers, marketers can target consumers in ways that undermine individuals’ ability to make a different decision (Susser et al. 2019). If marketers infer you are worried about depression based on what you look up or where you go, they can target you with herbal remedies. If marketers guess you are dieting or recently stopped gambling, they can target you with food or casino ads. Business ethics has a long history of examining the ways that marketing strategies target vulnerable populations in a manner that undermines autonomy. A newer, interesting twist on this problem is that these tactics have been extended beyond marketing products into politics and the public sphere. Increasingly, social media and digital marketing platforms are being used to inform and sway debate in the public sphere. The Cambridge Analytica scandal is a well-known example of the use of marketing tactics, including consumer profiling and targeting based on social media data, to influence voters. Such tactics have serious implications for autonomy, because individuals’ political choices can now be influenced as powerfully as their purchasing decisions.

More generally, the articles in this symposium help us understand how the creation and implementation of new technology fits alongside the other pressures experienced within businesses. The articles give us lenses on the relationship between an organization’s culture—its values, processes, commitments, and governance structures—and the challenge of developing and deploying technology in a responsible fashion. There has been some work on how individual developers might or might not make ethical decisions, but very little work on how pressures from organizations and management matter to those decisions. Recent work by Spiekermann et al., for example, set out to study developers, but discovered that corporate cultures around privacy had large impacts on privacy and security design decisions (Spiekermann et al. 2018). Studying corporate cultures of ethics, and the complex motivations that managers, in-house lawyers and strategy teams, and developers bring to ethical decision making, is an important area in business ethics, and one upon which the perspectives collected here shed light.

Trust

Much of the current discussion around AI, big data, algorithms, and online platforms centers on trust. How can individuals (or governments) trust AI decisions? How do online platforms reinforce or undermine the trust of their users? How is privacy related to trust in firms and trust online? Trust, defined as someone’s willingness to become vulnerable to someone else, is studied at three levels in business ethics: an individual’s general trust disposition, an individual’s trust in a specific firm, and an individual’s institutional trust in a market or community (Pirson et al. 2016). Each level is critical to understanding the ethical implications of technology. Trust disposition has been found to impact whether consumers are concerned about privacy: consumers who are generally trusting may have high privacy expectations but lower concerns about bad acts by firms (Turow et al. 2015).

Users’ trust in firms can be influenced by how technology is designed and deployed. In particular, design may inspire consumers to overly trust particular technologies. This problem arguably creates a fourth level of trust unique to businesses developing new digital technologies. More and more diagnostic health care decisions, for example, rely upon automated data analysis and algorithmic decision making. Trust is a particularly pressing topic for such applications. Similar concerns exist for autonomous systems in domains such as financial services and transportation. Trust in AI is not simply about whether a system or decision making process will “do” what it purportedly states it will do; rather, trust is about having confidence that when the system does something that we do not fully understand, it will nevertheless be done in a manner that supports our interests. David Danks (2016) has argued that such a conception of trust moves beyond mere predictability—which artificial intelligence, by definition, makes difficult—and toward a deeper sense of confidence in the system itself (cf. LaRosa and Danks 2018). Finally, more work is needed to identify how technology—e.g., AI decisions, sharing and aggregating data, online platforms, hyper-targeted ads—impacts consumers’ institutional trust online. Do consumers see questionable market behavior and begin to distrust an overall market? For example, hearing about privacy violations—such as the use of a data aggregator—impacts individuals’ institutional trust online and makes consumers less likely to engage with market actors online (Martin 2019). The study of technology would benefit from the ongoing conversation about trust in business ethics.

Stakeholder Relations

Technology firms face difficult ethical choices in their supply chains and in how products should be developed and sold to customers. For example, technology firms such as Google and Microsoft are openly struggling with whether to create technology for immigration and law enforcement agencies and for U.S. and international militaries. Search engines and social networks must decide what type of relationship to have with foreign governments. Device companies must decide where gadgets will be manufactured, under what working conditions, and where components will be mined and recycled.

Business ethics offers a robust discussion about whether and how to prioritize the interests of various stakeholders. For example, oil companies debate whether and how to include the claims of environmental groups. Auto companies face claims from unions, suppliers, and shareholders and must navigate all three simultaneously. Clothing manufacturers decide who to partner with for outsourcing. So when cybersecurity firms consider whether to take on foreign governments as clients, their analysis need not be completely new. An ethically attuned approach to cybersecurity will inevitably face the difficult choice of how, if at all, technology should be limited in development, scope, and sale. Similarly, firms developing facial recognition technologies have difficult questions to ask about the viability of those products if they take seriously the perspective of stakeholders who may find those products an affront to privacy. More research in the ethics of new digital technology should utilize existing work on the ethics of managing stakeholder interests to shed light on the manner in which technology firms should appropriately balance the interests of suppliers, financiers, employees, and customers.

How Technology Ethics Informs Business

Just as business ethics can inform the study of recent challenges in technology ethics, scholars who have studied technology, particularly scholars of sociotechnical systems, can add to the conversation in business ethics. Scholarship in values in design—how social and political values become design decisions—can inform discussions about ethics within firms that develop new technologies. And research in the ethical implications of technology—the social impacts of deployed technologies—can inform discussions of downstream consequences for consumers.

Values in Design

Values in design (ViD) is an umbrella term for research in technology studies, computer ethics, human–computer interaction, information studies, and media studies that focuses on how human and social values ranging from privacy to accessibility to fairness get built into, or excluded from, emerging technologies. Some values in design scholarship analyzes technologies themselves to understand values that they do, or do not, support well (Brey 2000; Friedman and Nissenbaum 1996; Winner 1980). Other ViD scholars study the people developing technologies to understand their human and organizational motivations and the ways those relate to design decisions (Spiekermann et al. 2018; JafariNaimi et al. 2015; Manders-Huits and Zimmer 2009; Shilton 2018; Shilton and Greene 2019). A third stream of ViD scholarship builds new technologies that purposefully center particular human values or ethics (Friedman et al. 2017).

Particularly relevant to business ethics is the way this literature examines how both individually and organizationally held values become translated into design features. The values in design literature points out that the material outputs of technology design processes belong alongside policy and practice decisions as an ethical impact of organizations. In this respect, the values one sees in an organization’s culture and practices are reflected in its approach to the design of technology, either in how that technology is used or how it is created. Similarly, an organization’s approach to technology is a barometer of its implicit and explicit ethical commitments. Apple and Facebook make use of similar data-driven technologies in providing services to their customers; but how those technologies are put to use—within what particular domain and for what purpose—exposes fundamental differences in the ethical commitments to which each company subscribes. As Apple CEO Tim Cook has argued publicly, unlike Facebook, Apple’s business model does not “traffic in your personal life” and will not “monetize [its] customers” (Wong 2018). How Facebook and Apple managers understand the boundaries of individual privacy and acceptable infringements on privacy is conveyed in the manner in which their similar technologies are designed and commercialized.

Ethical Implications of Technology and Social Informatics

Technology studies has also developed a robust understanding of technological agency—how technology acts in the world—while also acknowledging the agency of technology users. Scholars who study the ethical implications of technology and social informatics focus on the ways that deployed technology reshapes power relationships, creates moral consequences, reinforces or undercuts ethical principles, and enables or diminishes stakeholder rights and dignity (Martin forthcoming; Kling 1996). Importantly, technology studies talks about the intersecting roles of material and non-material actors (Latour 1992; Law and Callon 1988). Technology, when working in concert with humans, impacts who does what. For example, algorithms influence the delegation of roles and responsibilities within a decision. Depending on how an algorithm is deployed in the world, humans working with its results may have access to the training data (or not), understand how the algorithm reached a conclusion (or not), and have an ability to see the decision relative to similar decisions (or not). Choices about the delegation of tasks between algorithms and individuals may have moral import, as humans with more insight into the components of an algorithmic decision may be better equipped to spot systemic unfairness. Technology studies offers a robust vocabulary for describing where ethics intersects with technology, ranging from design to deployment decisions. While business ethics includes an ongoing discussion about human autonomy, as noted above, technology studies adds a conversation about technological agency.

Navigating the Special Issue

The five papers that comprise this thematic symposium range in their concerns from AI and the future of work to big data to surveillance to online cooperative platforms. They explore ethics in the deployment of future technologies, ethics in the relationship between firms and their workers, ethics in the relationship between firms and other firms, and ethical governance of technology use within a firm. All five articles place the responsibility for navigating these difficult ethical issues directly on firms themselves.

Technology and the Future of Employment

Tae Wan Kim and Allan Scheller-Wolf raise a number of important issues related to technologically enabled job automation in their paper “Technological Unemployment, Meaning in Life, Purpose of Business, and the Future of Stakeholders.” They begin by emphasizing what they call an “axiological challenge” posed by job automation. The challenge, simply put, is that trends in job automation (including in manufacturing, the service sector, and knowledge-based professions) will likely produce a “crisis in meaning” for individuals. Work—apart from the economic means that it provides—is a deep source of meaning in our lives, and a future where work opportunities are increasingly unavailable means that individual citizens will be deprived of the activities that have heretofore defined their social interactions and given their lives purpose. If such a future state is likely, as Kim and Scheller-Wolf speculate, what do we expect of corporations that are using the automation strategies that cause “technological unemployment”?

Their answer to this question is complicated, yet instructive. They argue that neither standard shareholder nor stakeholder conceptions of corporate responsibility provide the necessary resources to fully address the crisis in meaning tied to automation. Both approaches fall short because they conceive of corporate responsibility in terms of what is owed to the constituencies that make up the modern firm. But these approaches have little to say about whether there is any entitlement to employment opportunities or whether society is made better off with employment arrangements that provide meaning to individual employees. As such, Kim and Scheller-Wolf posit that there is a second, “teleological challenge” posed by job automation. The moral problem of a future without adequate life-defining employment is something that cannot straightforwardly be answered by existing conceptions of the purpose of the corporation.

Kim and Scheller-Wolf encourage us to think about the future of corporate responsibility with respect to “technological unemployment” by going back to the “Greek agora,” which they take to be in line with some of the premises of stakeholder theory. Displaced workers are neither “employees” nor “community” members in the standard senses of the terms. So, as in ancient Greece, the authors imagine a circumstance where meaningful social interactions are facilitated by corporations that offer “university-like” communities where would-be employees and citizens can participate and collectively deliberate about aspects of the common good, including, but not limited to, how corporations conduct business and how to craft better public policy. This would add a new level of “agency” to their lives and allow them to play an integral role in how business takes place. The restoration of this agency allows individuals to maintain another important sense of meaning in their lives, apart from the work that may have helped define their sense of purpose in prior times. This suggestion is prescriptive and, at times, seems idealistic. But, as with other proposals, such as the recent discussion of taxing job automation, it is part of an important set of conversations that need to be had to creatively imagine the future in light of technological advancement (Porter 2019).

The value in this discussion, which frames a distinctive implication for future research, is that it identifies how standard accounts of corporate responsibility are inadequate to justify responsibilities to future workers displaced by automation. It pushes scholars to understand meaningful work beyond meaning at work to meaning in place of work, and it sketches an alternative to help build a more comprehensive social response to the changing nature of employment that technology will steadily bring.

Technology and Human Well-Being

Aurelie Leclercq-Vandelannoitte’s “Is Employee Technological ‘Ill-Being’ Missing From Corporate Responsibility? The Foucauldian Ethics of Ubiquitous IT Uses in Organizations” explores the employment relationship more conceptually by introducing the concept of “technological ill-being” associated with the adoption of ubiquitous information technology in the workplace. Leclercq-Vandelannoitte defines technological ill-being as the tension or disconnect between an individual’s social attributes and aspirations when using modern information technology (IT) and the system of norms, rules, and values within the organization. Leclercq-Vandelannoitte asks a series of research questions about how technological ill-being is framed in organizations, the extent to which managers are aware of the idea, and who is responsible for employees’ technological ill-being.

Leclercq-Vandelannoitte leverages Foucauldian theory and a case study to answer these questions. Foucault offers a rich narrative about the need to “free thought from what it silently thinks and so enable it to think differently” (Foucault 1983, p. 216). The Foucauldian perspective offers an ethical frame by which to analyze ubiquitous IT, where ethics “is a practice of the self in relation to others, through which the self endeavors to act as a moral subject.” Perhaps most importantly, the study, through the lens of Foucault, highlights the importance of self-reflection and engagement as necessary to using IT ethically. An international automotive company provides a theoretically important case of the deployment of ubiquitous IT contemporaneous with strong engagement with corporate social responsibility. The organization offers a distinctive case in that its geographically dispersed units adopted different organizational patterns and working arrangements, allowing for comparison.

The results illustrate that technological ill-being is not analyzed within broader CSR initiatives but rather treated as “localized, individual, or internal consequences for some employees.” Further, the blind spot toward employees’ ill-being constitutes an abdication of responsibility, which benefits the firm. The paper has important implications for the corporate responsibility of organizations with regard to the effects of ubiquitous IT on employee well-being—an underexamined area. The author brings to the foreground the value-ladenness of technology that is deployed within an organization and centers the conversation on employees in particular. Perhaps most importantly, ethical self-engagement becomes a goal for ethical IT implementation and a critical concept for understanding technological ill-being. Leclercq-Vandelannoitte frames claims of “unawareness” of the value-laden implications of ubiquitous IT as “the purposeful abdication of responsibility,” thereby placing the responsibility for technological ill-being squarely on the firm that deploys the IT. Future work could take the same critical lens toward firms that sell (rather than internally deploy) ubiquitous IT and their responsibility to their consumers.

Technology and Governance

Richard Owen and Keren Naa Abeka Arthur’s “A Micro-Ethnographic Study of Big Data-Based Innovation in the Financial Services Sector: Governance, Ethics and Organisational Practices” uses a case study of a financial services firm to illustrate how organizations might responsibly govern their uses of big data. This topic is timely, as firms in numerous industries struggle to self-regulate their use of sensitive data about their users. The focus on how a firm achieves ethics-oriented innovation is unusual in the literature and provides important evidence of the factors that influence a firm’s ability to innovate ethically.

The authors describe a company that governs its uses of big data on multiple levels, including through responses to legislation, industry standards, and internal controls. The authors illustrate the ways in which the company strives for ethical data policies that support mutual benefit for their stakeholders. Though the company actively uses customer data to develop new products, the company’s innovation processes explicitly incorporate both customer consent mechanisms, and client and customer feedback. The company also utilizes derived, non-identifiable data for developing new insights and products, rather than using customers’ identifiable data for innovation. The authors describe how national regulation, while not directly applicable to the big data innovations studied, guided the company’s data governance by creating a culture of compliance with national data privacy protections. This has important consequences for both regulators and consumers. This finding implies that what the authors refer to as “contextual” legislation—law that governs other marginally related data operations within the firm—can positively influence new innovations, as well. The authors write that contextual data protection legislation was internalized by the company and “progressively embedded” into future innovation.

The authors also found that company employees directly linked ethical values with the success of the company, highlighting consumer trust as critical to both individual job security and organizational success. This finding speaks to the importance of corporate culture in setting the values incorporated into technology design. Owen & Arthur use the company’s practices as a case study to begin to define ethical and responsible financial big data innovation. Their evidence supports frameworks for responsible innovation that emphasize stakeholder engagement, anticipatory ethics, reflexivity on design teams, and deliberative processes embedded in development practice.

Technology and Personal Integrity

Ulrich Leicht-Deobald and his colleagues unpack the responsibilities organizations have to their workers when adopting and implementing new data collection and behavior analysis tools in “The Challenges of Algorithm-based HR Decision-making for Personal Integrity.” The paper unites theory from business ethics with the growing field of critical algorithm and big data studies to examine the timely issue of algorithmic management of workers by human resource departments. The authors focus on tools for human resources decision making that monitor employees and use algorithms and machine learning to make assessments, such as algorithmic hiring and fraud monitoring tools. The authors argue that, in addition to well-documented problems with bias and fairness, such algorithmic tools have the potential to undermine employees’ personal integrity, which they define as consistency between convictions, words, and actions. The authors argue that algorithmic hiring technologies threaten a fundamental human value by shifting employees to a compliance mindset. Their paper demonstrates how algorithmic HR tools undermine employees’ personal integrity by encouraging blind trust in rules and discouraging moral imagination. The authors argue that the consequences of such undermining include increased information asymmetries between management and employees. The authors classify HR decision making as an issue of corporate responsibility and suggest that companies that wish to use predictive HR technologies must take mitigation measures. The authors suggest participatory design of algorithms, in which employees would be stakeholders in the design process, as one possible mitigative tactic. The authors also advocate for critical data literacy for managers and workers, and adherence to private regulatory regimes such as the Association for Computing Machinery’s (ACM) code of ethics and professional conduct and the Toronto Declaration on machine learning.

This paper makes an important contribution to the scoping of corporate responsibility for the algorithmic age. By arguing that companies using hiring algorithms have a moral duty to protect their workers’ personal integrity, it places the ethical dimensions of the design and deployment of algorithms alongside more traditional corporate duties such as responsibility for worker safety and wellness. And like Owen and Arthur, the authors believe that attention to ethics in design—here framed as expanding employees’ capacity for moral imagination—will open up spaces for reflection and ethical discourse within companies.

Technology and Trust

Livia Levine’s “Digital Trust and Cooperation with an Integrative Digital Social Contract” focuses on digital business communities and the role of their members in creating communities of trust. Levine notes that digital business communities, such as online markets or business social networking communities, have all the markers of a moral community as conceived by Donaldson and Dunfee in their Integrative Social Contracts Theory (ISCT) (Donaldson and Dunfee 1999): the individuals in the community form relationships that generate authentic ethical norms. Digital business communities differ, however, in that participants cannot always identify each other and do not always have the legal or social means to punish participant businesses who renege on the community’s norms.

Levine identifies the hypernorm of “the efficient pursuit of aggregate economic welfare,” which transcends communities and provides guidance for the development of micronorms in a community, and then focuses on micronorms of trust and cooperation. Levine shows that trust and cooperation are “an instantiation of the hypernorm of necessary social efficiency and that authentic microsocial norms developed for the ends of trust and cooperation are morally binding for members of the community.” Levine uses a few examples, such as Wikipedia, open-source software, online reviews, and Reddit, to illustrate micronorms at play. In addition, Levine illustrates how the ideas of community and moral free space should be applied in new arenas, including online ones.

The paper has important implications for both the members of a social contract community and the platforms that host the community, each of which plays a role in developing norms focused on trust and cooperation. First, the idea of community has traditionally been applied to people who know each other. However, Levine makes a compelling case for why community can and should be applied to online groups of strangers—strangers in real life, but known to one another online. Future research could explore the responsibilities of platforms that facilitate or hinder the development of authentic norms for communities on their services. For example, if a gaming platform is seen as a community of gamers, then what are the obligations of the gaming platform to enforce hypernorms and support the development of authentic micronorms within communities? Levine’s approach opens up many avenues to apply the ideas behind ISCT in new areas.

While each discussion in this symposium offers a specific, stand-alone contribution to the ongoing debate about the ethics of the digital economy, the five larger themes addressed by the articles—the future of employment, human well-being, personal integrity, governance, and trust—will likely continue to occupy scholars’ attention for the foreseeable future. More importantly, the diversity of theoretical perspectives and methods represented within this issue is illustrative of how the ethical challenges presented by new information technologies are likely best understood through continued cross-disciplinary conversations with engineers, legal theorists, philosophers, organizational behaviorists, and information scientists.