This article has shown how algorithms act as structuring agents in both mundane and key decisions and has argued how and why firms are responsible for the design of algorithms within a decision. First, I offered a systematic account of the value-laden-ness of algorithms. Second, I drew on STS scholars Latour and Akrich to frame algorithms as actors in ethical decision making—delegated tasks and responsibilities akin to other actors in the decision. Third, I grounded the normative obligation of developers for the ethical implications of algorithms: if a firm’s technology, such as an algorithm, acts to influence others, then the company can be held accountable for the acts, biases, and influence of that technology. I conclude with the implications for corporate responsibility, fiduciary duties, transparency, and research on algorithms.
Corporate Responsibility for Algorithms
Based on the arguments here, responsibility for algorithmic decision making is constructed in the design and development of the algorithm. Yet, corporate responsibility for products and services centers on situations where something goes wrong: a breach of contract, product liability, or a tort harm created by a company. And business ethics struggles to identify how and when firms are responsible for the use of a product that is working correctly and as designed (Brenkert 2000; Sollars 2003). A parallel argument about gun manufacturers, where the correct use of the product can cause harm, has focused on marketing and distribution (Byrne 2007). Brenkert goes further to include not only product defects but also general harms caused by products, as with gun manufacturers, “in which a non-defective product does what it is designed to do, but, because of the social circumstances in which it comes to be used, imposes significant harms through the actions of those who are using it in ways in which it was not supposed to be used” (Brenkert 2000, p. 23). Algorithmic harms can differ in that unjust biases can arise from the “correct” use of the product.
One possible avenue for future corporate responsibility research is linking the role of the algorithm in a decision with the responsibility of the firm, as shown in Fig. 4. In other words, firms (1) construct algorithms to take on a large or small role in a decision (y-axis) and (2) sell that algorithm to be used within a specific context (x-axis); both decisions contribute to the type of responsibility we should expect of the firm. For example, a firm that develops an algorithm to take on a larger role in a decision of minimal societal importance—e.g., deciding where to place an ad online—could be seen as setting the standard for appropriate biases as well as for the delegation of roles and responsibilities encoded in the design. The firm acts as an expert, heavily influencing the decision, including which factors are important and appropriate for it. Alternatively, if the role of the algorithm in a decision is minimized, for example by providing tools that allow users to revisit how the algorithm works, the firm would have more of a traditional handoff of a product with the associated (minimal) responsibility around product liability. The difference between A and B in Fig. 4 would be the role of individuals using the algorithm as inscribed in the algorithm’s design; greater agency of the individual over the algorithm in use means less accountability attributed to the algorithm within the decision.
For decisions seen as pivotal in the life of individuals (O’Neil 2016)—whereby the decision provides a gatekeeping function to important social goods such as sentencing, allocation of medical care, access to education, etc.—the expected relationship could be akin to a principal–agent relationship where the algorithm acts as an agent for the design firm. The developer scripts the agent (the algorithm), and the algorithm carries out its prescribed duties (e.g., Johnson and Noorman 2014; Powers and Johnson). Delegating decisions to drones in military situations receives similar scrutiny, where the developer (a contractor for the government or the military itself) remains responsible for the actions of the agent. If the developer wishes the algorithm to take a smaller role in a pivotal decision, the relationship may be closer to a contract with an obligation to remain engaged for the duration of the algorithm’s use in case the role changes, precisely because the decision is pivotal. Key for future work on appropriate corporate responsibility would be acknowledging that the role the firm designs the algorithm to take within a decision implies an associated responsibility for the decision itself.
Ethics of Algorithmic Design
Positioning the algorithm as having an important role within the larger ethical decision highlights three areas of concern for designing and coding algorithms. First, developing accountable algorithms requires identifying the principles and norms of decision making, the features appropriate for use, and the dignity and rights at stake in the situated use of the algorithm. Algorithms should be designed with an understanding of the delegation of roles and responsibilities within the decision system. Second, given the previous section, algorithms should be designed and implemented toward the appropriate level of accountability within the decision, thereby extending the existing work on algorithm accountability (Kroll et al. 2017).
Finally, the ethical implications of algorithms are not necessarily hard-coded in the design, and firms developing algorithms would need to be mindful of indirect biases. For COMPAS, individuals across races do not have an equal chance of receiving a high-risk score. The question is, why? Assuming the developers of COMPAS did not design the algorithm to code “Black Defendant” as a higher risk directly, why are black defendants more likely to be falsely labeled as high risk when they are not? Algorithms can be developed with an explicit goal, such as evading detection of pollution by regulators, as with Volkswagen (LaFrance 2017). For algorithms, two mechanisms can also indirectly drive bias in the process: proxies and machine learning. First, when a feature cannot or should not be used directly (e.g., race), an algorithm can be designed to use closely correlated data as a proxy that stands in for the non-captured feature. While race is not one of the questions for the risk assessment algorithms, the survey includes questions such as “Was one of your parents ever sent to jail or prison?” which can be highly correlated with race given drug laws and prosecutions in the 1970s and 1980s (Angwin et al. 2016a; Gettman et al. 2016; Urbina 2013). For example, researchers were able to identify individuals’ ethnicity, sexual orientation, or political affiliation from the person’s Facebook “likes” (Tufekci 2015). Similarly, loan terms or pricing should not vary based on race, but banks, insurance companies, and retail outlets can target based on neighborhoods or social connections, which can be highly correlated with race (Angwin et al. 2017; Waddell 2016). In this case, basing scores on the father’s arrest record, the neighborhood where the defendant lives, or “the first time you were involved with the police” can prove to be a proxy for race (Andrews and Bonta 2010; Barry-Jester et al. 2015; O’Neil 2016).
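To make the proxy mechanism concrete, the following is a minimal sketch in Python on entirely synthetic data; the feature name and the numbers are hypothetical, and it is not the COMPAS model. It shows how a scoring rule that never sees race can still flag one group as high risk far more often, simply by relying on a feature that is unevenly distributed across groups for historical reasons.

```python
# Illustrative sketch (not the COMPAS model): how a proxy feature can
# reintroduce a protected attribute that was deliberately excluded.
# All data are synthetic and the feature name is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never given to the scoring rule).
group = rng.integers(0, 2, size=n)

# Hypothetical proxy: "parent ever incarcerated", more common in group 1
# because of historically uneven enforcement, not individual risk.
proxy = rng.random(n) < np.where(group == 1, 0.45, 0.15)

# True reoffense risk is identical across groups in this toy world.
true_risk = rng.random(n) < 0.30

# A scoring rule that uses only the proxy still treats the groups differently.
score = np.where(proxy, 0.8, 0.2)
high_risk = score > 0.5

for g in (0, 1):
    flagged = high_risk[group == g].mean()
    actual = true_risk[group == g].mean()
    print(f"group {g}: flagged high-risk {flagged:.0%}, actual risk {actual:.0%}")
# Even with race excluded, the flag rate differs sharply between groups
# (roughly 15% vs. 45%) while the underlying risk is the same for both.
```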
In addition to using proxies, value-laden algorithms can also result from training the algorithm on biased data with machine learning. Some algorithms learn which factors are important to achieving a particular goal through the systematic examination of historical data, as shown in Fig. 1. The COMPAS algorithm is designed to take into consideration a set number of factors and weight each factor according to its relative importance to a risk assessment. A classic example, used by Cynthia Dwork, a computer scientist and Distinguished Researcher at Microsoft Research who is quoted at the beginning of this article, is university admissions. In order to identify the best criteria by which to judge applicants, a university could use a machine learning algorithm with historical admissions, rejection, and graduation records going back decades to identify which factors are related to “success.” Success could be defined as admittance, as graduating within 5 years, as achieving a particular GPA, or as any other outcome. Importantly, historical biases in the training data will be learned by the algorithm, and past discrimination will be coded into the algorithm (Miller 2015). If one group—women, minorities, individuals of a particular religion—was systematically denied admission or even underrepresented in the data, the algorithm will learn from the biased data set.
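As a rough illustration of this mechanism (not Dwork’s example, and using entirely synthetic records), the sketch below “learns” admission probabilities by simple frequency counts from a hypothetical historical record in which equally qualified applicants from one group were admitted at roughly half the rate. The learned probabilities reproduce that gap, just as a supervised learner trained on the same records would.

```python
# Illustrative sketch: a model "learns" past discrimination from historical
# admissions records. Synthetic data; the admissions rule is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

qualified = rng.random(n) < 0.5              # qualification, evenly distributed
group = rng.integers(0, 2, size=n)           # historical applicant group

# Historical committee: qualified group-1 applicants were admitted at half
# the rate of qualified group-0 applicants (the embedded bias).
p_admit = np.where(qualified, np.where(group == 1, 0.35, 0.70), 0.05)
admitted = rng.random(n) < p_admit

# "Training" by empirical frequencies for each (qualified, group) cell,
# a stand-in for any supervised learner fit to the same records.
for q in (True, False):
    for g in (0, 1):
        cell = (qualified == q) & (group == g)
        print(f"qualified={q}, group={g}: learned admit probability "
              f"{admitted[cell].mean():.2f}")
# The learned probabilities reproduce the historical gap: equally qualified
# applicants from group 1 receive a much lower predicted chance of success.
```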
Biased training data are an issue that crosses contexts and decisions. Cameras trained to perform facial recognition often fail to correctly identify faces of certain races: in one case, a facial recognition program could recognize white faces but was less effective at detecting the faces of non-white individuals. The data scientist “eventually traced the error back to the source: In his original data set of about 5000 images, whites predominated” (Dwoskin 2015). The data scientist did not write the algorithm to focus on white individuals; however, the data he used to train the algorithm included predominantly white faces. As noted by Aylin Caliskan, a postdoc at Princeton University, “AI is biased because it reflects effects about culture and the world and language…So whenever you train a model on historical human data, you will end up inviting whatever that data carries, which might be biases or stereotypes as well” (Chen 2017).
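The sketch below is a stylized, hypothetical illustration of this failure mode (it is not the system Dwoskin describes): a toy one-dimensional “template” detector is fit once on a training set that is 95% group A and once on a balanced set, and its detection rates are then compared across groups.

```python
# Illustrative sketch with entirely synthetic 1-D "image features": a template
# detector fit on an unbalanced sample works well for the overrepresented
# group and poorly for the underrepresented one.
import numpy as np

rng = np.random.default_rng(2)

def sample_faces(n, shift):
    """Synthetic 1-D feature for one group's faces (shift = group difference)."""
    return rng.normal(loc=2.0 + shift, scale=1.0, size=n)

def train_detector(train_faces):
    """'Template' detector: average the training faces, detect anything close."""
    template = train_faces.mean()
    radius = 1.5 * train_faces.std()
    return lambda x: np.abs(x - template) < radius

def report(detector, label):
    for name, shift in [("group A", 0.0), ("group B", -1.5)]:
        test = sample_faces(5_000, shift)
        print(f"{label} | {name}: detected {detector(test).mean():.0%}")

# Unbalanced training set: 95% group A faces, 5% group B faces.
unbalanced = np.concatenate([sample_faces(9_500, 0.0), sample_faces(500, -1.5)])
report(train_detector(unbalanced), "95/5 training data")

# Balanced training set: same total size, half from each group.
balanced = np.concatenate([sample_faces(5_000, 0.0), sample_faces(5_000, -1.5)])
report(train_detector(balanced), "50/50 training data")
# With unbalanced data the detector misses far more group B faces; with
# balanced data the detection rates are roughly equal.
```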
Machine learning biases are insidious because the bias is yet another level removed from the outcome and more difficult to identify. In addition, the idea behind machine learning—to use historical data to teach the algorithm what factors to take into consideration to achieve a particular goal—appears to further remove human bias, until we acknowledge that the historical data were created by biased individuals. Machine learning biases carry a veneer of objectivity even though an algorithm created by machine learning can be just as biased and unjust as one written by an individual.
Transparency
Calls for algorithmic transparency continue to grow; yet full transparency may be neither feasible nor desirable (Ghani 2016). Transparency as to how decisions are made can allow individuals to “game” the system. People could make themselves algorithmically recognizable and orient their data to be viewed favorably by the algorithm (Gillespie 2016), and gaming could be available to some groups more than others, thereby creating a new disparity to reconcile (Bambauer 2017). Gaming to avoid fraud detection or SEC regulation is destructive and undercuts the purpose of the system. However, algorithmic opacity is also framed as a form of proprietary protection or corporate secrecy, where intentional obscurity is designed to avoid scrutiny (Burrell 2016; Diakopoulos 2015; Pasquale 2015).
Based on the model of algorithmic decision making in Fig. 4, calls for transparency in algorithmic decision making may need to be targeted for a specific purpose or type of decision. Ananny and Crawford rightly question the quest for transparency as an overarching and unquestioned goal (Ananny and Crawford 2016). For example, the transparency needed to identify unjust biases may differ from the transparency needed for due process. Similarly, the transparency needed for corporate responsibility in the principal–agent relationship in Fig. 4 (a large role of the algorithm in a pivotal decision) would differ from the transparency needed for an algorithm that decides where to place an ad. Further, transparency can take on different forms. Techniques to understand the output based on changing the input (Datta et al. 2016) may work for journalistic inquiries (Diakopoulos and Koliska 2017) but not for due process in the courts, where a form of certification may be necessary.
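As one hypothetical illustration of this input-perturbation style of transparency (a sketch, not a reproduction of any cited method), the following audits a black-box scoring function by flipping one feature at a time on generated profiles and measuring how much the score moves; the function name, features, and weights here are invented.

```python
# A rough sketch of the "change the input, observe the output" style of audit.
# `black_box_score` is a hypothetical stand-in for querying a deployed model
# or service whose internals the auditor cannot inspect.
import random

def black_box_score(profile: dict) -> float:
    # Hypothetical opaque scorer; in a real audit this would be an API call.
    score = 0.5
    if profile["parent_incarcerated"]:
        score += 0.3
    if profile["stable_employment"]:
        score -= 0.1
    return min(max(score, 0.0), 1.0)

def average_influence(feature: str, trials: int = 1_000) -> float:
    """Average absolute change in the score when only `feature` is flipped."""
    rnd = random.Random(0)
    total = 0.0
    for _ in range(trials):
        profile = {
            "parent_incarcerated": rnd.random() < 0.3,
            "stable_employment": rnd.random() < 0.6,
        }
        flipped = dict(profile, **{feature: not profile[feature]})
        total += abs(black_box_score(flipped) - black_box_score(profile))
    return total / trials

for feature in ("parent_incarcerated", "stable_employment"):
    print(f"{feature}: average influence on score = {average_influence(feature):.2f}")
# Features with large average influence flag where scrutiny for proxy effects
# or disparate treatment is most needed, without access to source code.
```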
Importantly, this range of transparency is possible. For example, a sentencing algorithm in Pennsylvania is being developed by a public agency, and the algorithms are open to the public for analysis (Smith 2016). Similarly, a company, CivicScape, released its algorithm and data online in order to allow experts to examine the algorithm for biases and provide feedback (Wexler 2017). In fact, Wexler describes two competing risk assessment algorithms—one secret and one disclosed to defense attorneys—and both are competitive in the market. Based on the arguments here, the level and type of transparency would be a design decision and would need to adhere to the norms of the decision context. If a firm does not wish to be transparent about its algorithm, it need not compete in a market focused on pivotal decisions allocating social goods with due process norms.
Implications for Ethical Decision-Making Theory
Just as ethical decision making offers lessons for algorithmic decisions, so too does acknowledging the value-laden role of algorithms in decisions have implications for scholarship on decision making. First, more work is needed to understand how individuals make sense of the algorithm as contributing to the decision and the degree of perceived distributive and procedural fairness in an algorithmic decision. For example, Newman et al. (2016) empirically examine how algorithmic decisions within a firm are perceived as fair or unfair by employees. Recent work by Derek Bambauer seeks to understand the conditions under which algorithmic decisions are accepted by consumers (Bambauer 2017).
Algorithms will also impact the ability of the human actors within the decision to make ethical decisions. Group decision making and the ability of individuals to identify ethical issues and contribute to a discussion could offer a road map for researching the impact of algorithms as members of a group decision (e.g., giving voice to values; Arce and Gentile 2015). While augmented labor with robots is regularly examined, we must next consider the ethics and accountability of algorithmic decisions and how individuals are impacted by being part of an algorithmic decision-making process alongside non-human actors.
Fiduciary Duties of Coders and Firms
The breadth and importance of the value-laden decisions of algorithms suggest greater scrutiny of designers and developers of algorithms used for pivotal decisions. If algorithms act as silent structuring agents deciding who has access to social goods and whose rights are respected, as is argued here, algorithmic decisions would need oversight akin to civil engineers building bridges, CPAs auditing firms, and lawyers representing clients in court. Similar to calls for Big Data review boards (Calo 2013), some algorithmic decisions may need a certified professional involved. Such a professionalized or certified programmer would receive not only technical training but also courses on the ethical implications of algorithms. As noted by Martin (2015), many data analytics degrees do not fall under engineering schools and do not have required ethics courses or professional certification.
Research on Algorithms
Finally, firms should do more to support research on algorithms. Researchers and reporters run afoul of the CFAA, the Computer Fraud and Abuse Act, when performing simple tests to identify unjust biases in algorithms (Diakopoulos 2015; Diakopoulos and Koliska 2017). While the CFAA was designed to curtail unauthorized access to a protected computer, the act is now used to stop researchers from systematically testing the output and services of websites based on different user types (Kim 2016). For example, researchers can violate the current version of the CFAA when changing a mock user profile to see whether Facebook’s NewsFeed shows different results based on gender (Sandvig et al. 2016), whether AirBnB offers different options based on the race of the user, or whether Google search results are biased (Datta et al. 2015). And firms can make researchers’ jobs harder even without the CFAA. After Sandvig et al. published their analysis of Facebook’s NewsFeed, the company modified the algorithm to render the research technique used ineffective. Such tactics, whether invoking the CFAA or obscuring algorithms, make it harder for researchers to hold corporations accountable for their algorithmic decisions. Modifying the CFAA is one important mechanism to help researchers.
Conclusion
Algorithms impact whether and how individuals have access to social goods and rights, and how algorithms are developed and implemented within managerial decision making is critical for business ethics to understand and research. We can hold firms responsible for an algorithm’s acts even when the firm claims the algorithm is complicated and difficult to understand. Here, I argue that the deference afforded to algorithms, together with the outsized responsibility they are given for decisions, constitutes a design problem to be addressed rather than a natural outcome of identifying the value-laden-ness of algorithms.