Informed by the literature and significant practitioner experience, the following preliminary conceptual model is offered as a framework for the proposed qualitative studies (Fig. 1).
A constructivist grounded theory method (Charmaz 2014) formed the basis for our literature review and subsequent development of the conceptual model. This approach provides for methodological flexibility, dismissing certain rules, recipes, and requirements that force preconceived ideas and theories on acquired data. Instead, it invites the researcher to draw interpretive understandings and “follow leads that [researchers] define in the data, or design another way of collecting data to pursue our initial interests” (p. 32). The method is further substantiated as a precursor to our proposed qualitative research study in which a goal of data collection is to elucidate the meanings, beliefs, and values—the lived experiences—of research participants (Maxwell 2013).
Our systematic literature review utilizes the dynamic, reflexive, and integrative (DRI) zipper framework (El Hussein et al. 2017) that supports the intent of the constructivist grounded theory method. Classical and contemporary grounded theorists utilize systematic exploration to audit and appraise acquired data (Corbin and Strauss 2015; Stebbins 2001). The DRI zipper framework extends this interpretation to provide “traceable evidence to support [stated] generalizations and to justify the need to conduct the study” (El Hussein et al. 2017, p. 1207). Therefore, we approach the literature review as a concatenated activity (Stebbins 2001) that incorporates theoretical sensitivity (Glaser and Strauss 1999) to think “across fields and disciplines …. without letting it stifle [our] creativity or strangle [our] theory” (Charmaz 2014, p. 308).
Introduction to the literature review
We identify the relevant literature informing our conceptual model by drawing from the following key theories, which transcend the fields of management, economics, and moral and political philosophy: behavioral theory of the firm (Cyert and March 1992), organizational decision-making theory (March 1994; March and Simon 1993 [1958]), ethical pluralism (James 2016 [1908]; Ross 2002 [1930]), and social contract theory (Hobbes 1982 [1651]; Locke 1980 [1690]; Rawls 1971).
We begin with a construction of the legal constraints and ethical considerations within which firms struggle to balance the pressures and temptations that ultimately define the organizational factors influencing the decision-making process. Kaptein (2017) models the dimensions, conditions, and consequences of ethical struggles, proposing that the greater the ethical gap between opposing forces, the greater the struggle required. The multilevel model facilitates mutual interaction between individual and organizational units of analysis, allowing interdependent struggle through interactive combat. Nissenbaum’s (2010) theoretical framework of contextual integrity embraces the dynamic nature of the resulting ethical norms, bridging organizational and technology factors by contextualizing the extent to which algorithmic models use flows of personal information in ways that conform to consumers’ expectations. Accordingly, the process of personalized price setting depends on both algorithmic models and human judgment, derived in part from a consideration of whether personal data are properly appropriated for use in a specific context.
Having described the broad theoretical framework that informs our conceptual model, we now briefly discuss each elemental component.
Legal constraints: conceptual foundations
The tenets of economic liberalism emerged from the Enlightenment era to repudiate the feudal privileges and traditions of aristocracy. Smith (1975 [1776]) surmises that a laissez-faire philosophy incorporating minimal government intervention leads businesses, via unseen market forces, to distribute wealth “in the proportion most agreeable to the interest of the whole society” (p. 630). Recognizing the complexity of a knowledge-based society, Hayek (2011 [1960]) places boundary conditions upon Smith’s invisible hand, arguing for a modest yet limited government that creates and enforces a general rule of law serving as the guardian of individuals’ personal freedom. Liberty in the contemporary era is thus the result of an empirical and evolutionary view of politics wherein government creates and preserves the conditions that allow free markets to flourish. Friedman extends a somewhat tepid supposition, concluding that, on a case-by-case basis, government intervention may mitigate negative externalities (or, as he terms them, “neighborhood effects”), but cautions that remedying a market failure with legislative action likely increases other external costs (1978, 2012 [1962], pp. 30–32). Common within the economic liberalist tradition is the preservation of laissez-faire capitalism, yet many foundational proponents recognize limited government intervention as an antecedent to preserving individual liberty. The conditions under which, and the extent to which, the political environment should intervene in free markets remain matters of considerable debate.
Antitrust as a legal constraint
Within the United States, the Department of Justice Antitrust Division maintains investigatory, enforcement, and prosecutorial jurisdiction over statutes related to both the illegal commercial restraint of fair competition and monopolistic practices (United States Department of Justice 2018, pp. II-1–II-25). Of particular importance, the Robinson-Patman Act (1936) specifically prohibits certain forms of wholesale discriminatory pricing, yet Robert Bork persuasively argues it is “the misshapen progeny of intolerable draftsmanship coupled to wholly mistaken economic theory” (1978, p. 382). At issue is the Act’s protection of competitors rather than competition—an antithetical and misguided approach to antitrust policy—by making unlawful the discrimination “in price between different purchasers of commodities of like grade and quality … and where the effect of such discrimination may be substantially to lessen competition or tend to create a monopoly in any line of commerce” (15 U.S.C. § 13(a)). Consequently, the phenomenon considered by the Act is not price discrimination but rather price difference, and it fails to consider the legitimate price differentials wholesalers may offer to their customers that in turn drive lower prices for consumers (Blair and DePasquale 2014). Bulmash’s (2012) novel empirical study of court cases involving the Robinson-Patman Act demonstrates how “secondary line” injury claims (brought by purchasers of price-discriminated goods operating in a resale context) can foster coordinated competition among suppliers, harming both consumers and competition. Despite historical recommendations for significant revision (Neal et al. 1968) and a contemporary call for outright repeal (Antitrust Modernization Commission 2007), the Act persists, even as prosecutorial application in “primary line” injury claims (brought by competitors of the price-discriminating firm) has been widely abandoned.
While antitrust law largely seeks to prevent second-degree price discrimination achieved through the “acquisition or exercise of market power” (Fisher III 2007, p. 14), Kochelek (2009) considers whether the Sherman Act indirectly applies in those cases where data-mining-based price discrimination may exert monopolistic or oligopolistic restraints on trade (15 U.S.C. §§ 1–2). Though disfavoring such practices, Kochelek observes that “data-mining-based price discrimination schemes fall into a gap between antitrust doctrine and the policies underlying the doctrine … [resulting in] reduce[d] consumer welfare, waste[d] resources, and reduce[d] allocative efficiency in exchange for increased producer profits that are insufficient to justify their cost” (2009, p. 535). Miller (2014) concurs, arguing that existing antitrust doctrine limits a single firm only to the extent that it exerts market power to raise price levels for most customers, whereas data-mining-based price discrimination specifically targets unlikely defectors with higher prices without requiring widespread market influence. Effectively, classical antitrust laws are incompatible with modern methods of price discrimination.
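To make the mechanism concrete, the following minimal sketch (in Python) illustrates the kind of targeting Kochelek and Miller describe: a pricing rule that applies a markup only to customers a model predicts are unlikely to defect to a competitor. The field names, threshold, and markup are hypothetical assumptions of ours for illustration, not a description of any firm’s actual system.

```python
from dataclasses import dataclass


@dataclass
class Customer:
    customer_id: str
    predicted_defection_prob: float  # output of a hypothetical churn/defection model


def personalized_price(customer: Customer, list_price: float,
                       markup: float = 0.10, defection_threshold: float = 0.20) -> float:
    """Charge a premium only to customers predicted unlikely to switch sellers.

    Stylized illustration: no market-wide power is needed, because the
    increase is confined to buyers the model expects will not defect.
    """
    if customer.predicted_defection_prob < defection_threshold:
        return round(list_price * (1 + markup), 2)  # predicted "unlikely defector" pays more
    return list_price  # mobile or price-sensitive customer keeps the list price


if __name__ == "__main__":
    loyal = Customer("A-001", predicted_defection_prob=0.05)
    mobile = Customer("A-002", predicted_defection_prob=0.60)
    print(personalized_price(loyal, 100.0))   # 110.0
    print(personalized_price(mobile, 100.0))  # 100.0
```

The point of the sketch is precisely the gap Kochelek identifies: the premium is confined to buyers predicted to stay regardless, so no market-wide exercise of power ever occurs.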
Notwithstanding this limitation, the specter of seemingly unintentional collusion could arise from algorithms that “are designed independently by competitors to include decisional parameters that react to other competitors’ decisions in a way that strengthens or maintains a joint coordinated outcome …. and the combination of these independent coding decisions leads to higher prices in the market” (Gal 2019, p. 19). As opposed to first-generation ‘adaptive’ algorithms that estimate and optimize price objectives subject to prior data, second-generation ‘learning’ algorithms use machine learning to condition their strategies upon experience (Calvano et al. 2019). Over time, independent algorithms may conclude that collusion is profitable, ultimately treating competitors’ behavior as one among many changing market environmental variables. A small number of such cases have progressed through the judicial process (notably, “U.S. v. David Topkins,” 2015), but each is ultimately the result of algorithms designed by programmers (at the direction of managers) with collusive intent. Calvano et al. (2019) posit that, given learning algorithms’ capacity to outperform humans in other contexts, they may arrive at collusive pricing that is hard to detect, presenting significant challenges to effective antitrust regulation.
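The collusive potential of second-generation ‘learning’ algorithms can be sketched with a toy simulation in the spirit of Calvano et al. (2019): two independent Q-learning agents repeatedly set prices, each conditioning only on the previously observed price pair and its own realized profit, with no communication and no collusive intent coded in. The price grid, demand function, and learning parameters below are our own assumptions; whether the agents settle above the competitive price depends on those parameters and the training horizon, so this is an illustration of the mechanism rather than a reproduction of their results.

```python
import itertools
import random

PRICES = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]  # discrete price grid (assumed)
COST = 1.0                                # marginal cost (assumed)


def demand(own_price: float, rival_price: float) -> float:
    """Toy differentiated demand: buyers favor the cheaper seller (assumed form)."""
    return max(0.0, 2.0 - own_price + 0.5 * (rival_price - own_price))


def profit(own_price: float, rival_price: float) -> float:
    return (own_price - COST) * demand(own_price, rival_price)


class QLearner:
    """Independent Q-learning agent whose state is last period's price pair."""

    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        states = itertools.product(range(len(PRICES)), repeat=2)
        self.q = {s: [0.0] * len(PRICES) for s in states}

    def greedy(self, state):
        return max(range(len(PRICES)), key=lambda a: self.q[state][a])

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randrange(len(PRICES))  # occasional exploration
        return self.greedy(state)

    def learn(self, state, action, reward, next_state):
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])


def simulate(periods=200_000, seed=0):
    random.seed(seed)
    a, b = QLearner(), QLearner()
    state = (0, 0)
    for _ in range(periods):
        ia, ib = a.act(state), b.act(state)
        pa, pb = PRICES[ia], PRICES[ib]
        next_state = (ia, ib)
        a.learn(state, ia, profit(pa, pb), next_state)
        b.learn(state, ib, profit(pb, pa), next_state)
        state = next_state
    # prices each agent would now choose; these may sit above the competitive level
    return PRICES[a.greedy(state)], PRICES[b.greedy(state)]


if __name__ == "__main__":
    print(simulate())
```

Nothing in the code instructs either agent to coordinate; any supra-competitive outcome emerges only from each agent reacting to the other’s past prices, which is exactly the detection problem the antitrust literature flags.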
Proposition 1
Antitrust factors are not legal constraints presently considered by firms that deploy algorithmic pricing models.
Antidiscrimination as a legal constraint
The extent to which discriminatory schemes in pricing are addressed in US federal law is largely embodied within the Civil Rights Act of 1964, which mandates that “[A]ll persons shall be entitled to the full and equal enjoyment of the goods, services, facilities, privileges, advantages, and accommodations of any place of public accommodation, … without discrimination or segregation on the ground of race, color, religion, or national origin” (42 U.S.C. § 2000a). Spurred by California’s Unruh Civil Rights Act (Cal. Civ. Code § 51), state-level legislation has extended such equal protection to include gender, language, and other factors easily detected and exploited by a price discrimination algorithm. Yet evidence suggests that even when states expand civil and equal rights doctrine, illegal discriminatory pricing schemes persist and prohibitions are broadly unenforced (Massachusetts Post Audit and Oversight Bureau 1997). Federal opinion is mixed: the Federal Trade Commission offers plentiful examples of how big data can exploit underserved populations (2016, pp. 9–12), while the Executive Office of the President counters by stipulating that “if historically disadvantaged groups are more price-sensitive than the average consumer, profit-maximizing differential pricing should work to their benefit” in competitive markets (2015, p. 17). Distinguishing between the aforementioned disparate treatment and disparate impact, the Presidential report “suggests that policies to prevent inequitable application of big data should focus on risk-based pricing in high-stakes markets such as employment, insurance, or credit provision” (p. 17).
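Because a pricing algorithm need not ingest race, gender, or national origin to produce group-level price differences, the disparate treatment/disparate impact distinction can be illustrated with a simple audit: compare the prices an algorithm offers across protected groups and flag large disparities for investigation. The sketch below uses invented field names and a crude max/min ratio; it indicates the kind of check a firm or regulator might run, not a legally sufficient test under any statute.

```python
from collections import defaultdict
from statistics import mean


def group_price_disparity(offers, group_key="group", price_key="price"):
    """Average offered price per group and the max/min ratio across groups.

    A ratio well above 1.0 signals a potential disparate impact worth
    investigating, even if no protected attribute was a model input.
    (Illustrative audit only; not a legal standard.)
    """
    by_group = defaultdict(list)
    for offer in offers:
        by_group[offer[group_key]].append(offer[price_key])
    averages = {g: mean(prices) for g, prices in by_group.items()}
    ratio = max(averages.values()) / min(averages.values())
    return averages, ratio


if __name__ == "__main__":
    offers = [
        {"group": "A", "price": 102.0}, {"group": "A", "price": 98.0},
        {"group": "B", "price": 121.0}, {"group": "B", "price": 117.0},
    ]
    print(group_price_disparity(offers))  # ({'A': 100.0, 'B': 119.0}, 1.19)
```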
The European Union extends legislation to prohibit “discrimination on grounds of nationality of the recipient or national or local residence” (Directive 2006/123/EC, Article 20), as demonstrated by a 2016 settlement between the European Commission and Disneyland Paris resolving allegations of inconsistent booking practices across member states (Abele 2016). Despite enforcement efforts, a 2016 review of 10,537 e-commerce websites conducted by the European Commission (European Commission 2016) found “geo-blocking” practices in 63% of cases, effectively limiting cross-border commerce. The resulting regulation “seeks to address direct, as well as indirect discrimination” (Regulation 2018/302) and, while not prohibiting price discrimination per se, eliminates one conduit through which such practices are facilitated.
Proposition 2
Antidiscrimination doctrine is a legal constraint considered by firms that deploy algorithmic pricing models.
Data privacy as a legal constraint
As described by Acquisti et al., “personal privacy is rapidly emerging as one of the most significant public policy issues” (2016, p. 485), and the social and behavioral science literature concludes that individual concerns are driven by the uncertainty, context dependence, and malleability of privacy preferences (Acquisti et al. 2015). The Identity Theft and Assumption Deterrence Act of 1998 (18 U.S.C. § 1028) addresses the surreptitious theft of personal data but largely depends upon a consumer-initiated complaint to launch an investigative inquiry. To spur proactivity in those cases arising from organizational data vulnerabilities, individual states have adopted disclosure laws requiring firms to report security and privacy breaches, yet evidence indicates this leads to only a minimal decrease in widespread data theft (Romanosky et al. 2011). Several studies further indicate that the financial impact of firm disclosures of security and data privacy breaches is negligible (Acquisti et al. 2006; Campbell et al. 2003). This body of research suggests that while legal doctrine and disclosure requirements may modestly reduce data theft, financial impacts to the firm are minimal, and the graver concern for data privacy is the burden placed on consumers, who wrestle with an apparent dichotomy between privacy attitudes and privacy behaviors (Berendt et al. 2005).
A 2009 study indicates that American consumers overwhelmingly reject behavioral targeting, yet hold widespread erroneous beliefs about the breadth and durability of existing privacy laws (Turow et al. 2009). While online users often seek anonymity at the application level (e.g., a website), broad anonymity is subject to technology and governmental factors that make it hard to achieve (Kang et al. 2013). Owing to the complexity of coexisting factors, this observed dichotomy is likely attributable to consumer perceptions of risk versus trust (Norberg et al. 2007) and to behavioral heuristics such as the endowment effect (Acquisti et al. 2013). Roberds and Schreft (2009) even argue that some loss of data privacy (and, by analogy, some degree of identity theft) enables the efficient sharing of information necessary for modern payment systems. The debate over how best to preserve privacy while acknowledging the benefit of information sharing drives a considerable wedge between proponents of regulation and of self-regulation of consumer data. At one extreme, Solove recommends a regulatory “architecture that establishes control over the data security practices of institutions and that affords people greater participation in the uses of their information” (2003, p. 1266). The template for such regulation could model the OECD Privacy Framework (2013), requiring radical continuous transparency about what information is being shared, a participatory interface for consumers to verify, amend, limit, or remove erroneous information, and accountability mandates for entrusted entities. At the opposite extreme, Rubin and Lenard (2002) argue forcefully for industry self-regulation, pointing to their study indicating that the imposition of regulation adds unnecessary economic costs absent justification by an existing market failure.
The United States and European Union have approached the data privacy debate from different angles, with the latter opting to enact a considerable regulatory framework in the 2016 General Data Protection Regulation (GDPR) (Regulation 2016/679). The GDPR broadens its scope far beyond the 1995 EU Data Protection Directive (Directive 95/46/EC), reaching even foreign firms that collect, use, and maintain the personal data of EU citizens. The regulation obliges both controllers (GDPR Article 4(7)) and processors (GDPR Article 4(8)) of personal data to meet certain quality requirements, including demonstrable consent of the subject for a specified and legitimate purpose (GDPR Articles 5(1) & 6(1)). Inge Graef succinctly describes the contextual requirements for price discrimination: “…in order to engage in personalized pricing, which is a form of profiling, a controller must have the explicit consent of the data subject involved. Once the controller has obtained consent of the data subject and has met the data quality requirements, it is free to engage in personalized pricing under data protection law” (2018, p. 551).
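Graef’s reading translates naturally into a gating check in a pricing system: unless a record of explicit, purpose-specific consent exists and has not been withdrawn, the consumer receives the uniform list price rather than a profiled one. The sketch below is our own illustration of that sequencing, with hypothetical record fields; it is not a statement of what the GDPR requires in code.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str      # e.g., "personalized_pricing"
    explicit: bool    # an affirmative, unambiguous act was recorded
    withdrawn: bool = False


def consented(record: Optional[ConsentRecord], purpose: str) -> bool:
    """True only if explicit, purpose-specific consent exists and still stands."""
    return (record is not None and record.explicit
            and not record.withdrawn and record.purpose == purpose)


def quoted_price(subject_id: str, list_price: float,
                 consent_lookup: Callable[[str], Optional[ConsentRecord]],
                 personalize: Callable[[str, float], float]) -> float:
    """Serve a profiled price only behind the consent gate; otherwise the list price."""
    record = consent_lookup(subject_id)
    if consented(record, "personalized_pricing"):
        return personalize(subject_id, list_price)  # profiling permitted
    return list_price                               # no valid consent: uniform price


if __name__ == "__main__":
    store = {"u1": ConsentRecord("u1", "personalized_pricing", explicit=True)}
    discount = lambda _uid, p: p * 0.95
    print(quoted_price("u1", 50.0, store.get, discount))  # 47.5
    print(quoted_price("u2", 50.0, store.get, discount))  # 50.0
```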
Proposition 3
Data privacy doctrine is a legal constraint considered by firms that deploy algorithmic pricing models.
Ethical considerations: conceptual foundations
In a world increasingly observed by a triangulation of biometric surveillance, embedded sensors, and dynamic decision-making algorithms, it is unsurprising that academic discourse spars over the degree to which such panoptic technologies pose a moral hazard risking an inherent loss of freedom (Gandy 1993; Reiman 1995). The specter of Bentham’s Panopticon (2015 [1843]), a prison supervised by an insidious all-knowing overseer, seems oddly applicable to our time, in a way reminiscent of Foucault (1979) and Orwell (2017 [1949]). As it relates to price discrimination, fairness is an ethical framework often considered. Xia et al. (2004) outline a conceptual framework capturing perceptions of price fairness, concluding that personalized pricing approaches may diminish trust if not mitigated by factors such as product differentiation. Garbarino and Lee’s (2003) multidimensional view of trust supports this assertion, concluding that such pricing schemes reduce the trust consumers place in the benevolence of the firm. Yet what constitutes “fairness” is subjectively defined, and firms have an incentive to reframe economic exchanges (e.g., transforming personalized pricing into a norm) in such a way as to make them seem fairer (Kahneman et al. 1986).
Other ethical considerations addressed by the literature include the degree to which deception (or perceived deception) creates individual harm. Analogously, information asymmetry and negative externalities caused by price discrimination can result in widespread social injustice. Although now more than a decade old, a research study conducted at the University of Pennsylvania in 2005 revealed that a majority of US adults recognize their online behavior is tracked, but 65% believed they knew what is required to prevent exploitation of these data. When participants took a short true/false assessment, however, no statistical difference was found between those expressing and those lacking such confidence, and overall scores were abysmally poor, averaging 63–72% incorrect on questions relating to legal rights and privacy policy (Turow et al. 2005). Given this wide perceptual schism, it is unsurprising that concerns of deception, fairness, and social justice have attracted widespread attention.
Deception as an ethical consideration
US legal doctrine articulates that a seller’s non-disclosure of a fact (e.g., that a lower price may exist via alternative channels) is equivalent to an assertion that the fact does not exist only in severely limited cases, notably where the fact “would correct a mistake of the other party as to a basic assumption on which that party is making the contract” (Restatement (Second) of Contracts § 161(b), 1981). As elaborated by Miller (2014), unless a seller has publicly advertised a price, a buyer is rarely positioned to claim that uniformity of prices represented the grounding assumption upon which they entered into the transaction.
The rise of algorithmic pricing to exploit individual preferences and behavioral discrimination increases the magnitude of information asymmetry and, to the extent that pricing is personalized, minimizes the benefit consumers may receive from price comparison websites (Kannan and Kopalle 2001). Danna and Gandy Jr. (2002) illuminate the process of data mining to demonstrate how algorithmic decision-making may segment high-value from low-value customers, allowing firms to maximize the difference between customer acquisition cost and lifetime value. Yet “techniques that create and exploit consumers’ high search costs undermine the ability to compare prices and can lower overall welfare and harm consumers” (Miller 2014, p. 80), leading some to recommend broadened disclosure laws to inform consumers when individual data profiles are mined to personalize pricing. In particular, Solove and Hoofnagle (2006) conceptualize a “Model Regime” to mitigate gaps and limitations in existing legal doctrine, ultimately arguing for widespread expansion of the Fair Credit Reporting Act (15 U.S.C. § 1681 et seq.) to apply whenever data brokers use personal information—not altogether dissimilar from the European Union’s GDPR requirements.
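The segmentation Danna and Gandy Jr. describe can be reduced, in stylized form, to a comparison of estimated customer lifetime value against acquisition cost. The sketch below uses a standard discounted-margin approximation of lifetime value with parameter values we have assumed; production data-mining pipelines are far more elaborate, but the decision logic is the same in kind.

```python
def lifetime_value(annual_margin: float, retention_rate: float,
                   discount_rate: float = 0.10, horizon_years: int = 5) -> float:
    """Discounted expected margin over a finite horizon (simplified CLV formula)."""
    return sum(
        annual_margin * (retention_rate ** t) / ((1 + discount_rate) ** t)
        for t in range(1, horizon_years + 1)
    )


def segment(annual_margin: float, retention_rate: float,
            acquisition_cost: float, multiple: float = 3.0) -> str:
    """Label a customer high value when estimated CLV clears a multiple of acquisition cost."""
    clv = lifetime_value(annual_margin, retention_rate)
    return "high value" if clv >= multiple * acquisition_cost else "low value"


if __name__ == "__main__":
    print(segment(annual_margin=200.0, retention_rate=0.90, acquisition_cost=100.0))  # high value
    print(segment(annual_margin=60.0, retention_rate=0.50, acquisition_cost=100.0))   # low value
```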
Some seek to address discriminatory pricing within the framework of the Uniform Commercial Code’s unconscionability provision (U.C.C. § 2-302), a roundabout alternative to ineffective antitrust regulation (e.g., the Robinson-Patman Act) that would label price discrimination an egregiously deceptive trade practice (Darr 1994; Klock 2002). Yet the practicality of such an application is questionable, as it yields to the freedom to contract, in which the parties seek to maximize their utility within the transaction. Klock acknowledges this limitation, yet points to seemingly applicable state-level statutes (Ala. Code § 8-31 et seq.; Tex. Bus. & Com. Code § 17 et seq.) that could prohibit unconscionable discriminatory pricing actions. Aside from questionable applicability (e.g., the Alabama law applies only during a state of emergency), the widespread practice of price discrimination among online merchants across interstate boundaries makes the application of state statutes unlikely.
Proposition 4
Deception is an ethical framework considered by firms that deploy algorithmic pricing models.
Fairness as an ethical consideration
Price fairness in a sales transaction has been defined in the literature as “the extent to which sacrifice and benefit are commensurate for each party involved” (Bolton et al. 2003, p. 475), premised on a principle of dual entitlement stipulating that buyers are entitled to the terms of a reference transaction and sellers are entitled to a reference profit (Kimes 1994). To determine the perceived fairness of the reference profit, buyers often rely upon past prices (Briesch et al. 1997; Jacobson and Obermiller 1990), in-store comparative prices (Rajendran and Tellis 1994), and competitive prices (Kahneman et al. 1986). Yet it has been demonstrated that consumers underestimate inflation, ignore in-store comparative cost categories beyond cost of goods sold, and attribute competitive price differences to profit rather than cost (Bolton et al. 2003). Regardless of the attribution, consumers are far less likely to make a purchase when they perceive the price as both unfair and personally unfavorable (Richards et al. 2016).
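Stated compactly, dual entitlement implies that a price increase is judged fair when it merely defends the seller’s reference profit against a cost increase, and unfair when it extracts gain beyond that. The toy function below encodes this judgment for a single reference transaction; it is our own simplification of the principle as discussed by Kahneman et al. (1986) and Bolton et al. (2003), not a validated measurement model.

```python
def perceived_fairness(new_price: float, reference_price: float,
                       seller_cost_change: float) -> str:
    """Classify a price change under a stylized dual-entitlement rule.

    Buyers are assumed entitled to the reference transaction and sellers to
    their reference profit: passing a cost increase through is read as fair,
    while raising price beyond the cost increase is read as unfair.
    """
    price_change = new_price - reference_price
    if price_change <= 0:
        return "fair (no loss imposed on the buyer)"
    if price_change <= max(seller_cost_change, 0.0):
        return "fair (protects the seller's reference profit)"
    return "unfair (gain extracted beyond the cost increase)"


if __name__ == "__main__":
    print(perceived_fairness(new_price=12.0, reference_price=10.0, seller_cost_change=2.0))
    print(perceived_fairness(new_price=12.0, reference_price=10.0, seller_cost_change=0.0))
```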
To the extent that different prices result from personalized or dynamic pricing, Garbarino and Lee (2003) empirically demonstrate customers’ loss of benevolent trust in the firm. Fisher (2007) and Cox (2001), for example, illustrate the exceptional wrath incurred by Amazon and Microsoft when loyal, trusting customers learned of discriminatory schemes offering new buyers most-favored pricing opportunities. Such recognition by technology-savvy customers drives their adoption of watchdog tools (Mikians et al. 2012) and false identities (Acquisti and Varian 2005) to seek the lowest possible prices. Trust is thereby replaced with attentive suspicion and preemptive measures to avoid discrimination.
Proposition 5
Fairness is an ethical framework considered by firms that deploy algorithmic pricing models.
Social justice as an ethical consideration
Walzer (1983) argues that distributive justice is a practice of allocating goods appropriately given a complex equality. An injustice occurs when that complex equality is disrupted by a tyrannical force that unhinges the pluralist trinity of free exchange, desert (being deserving), and need. Miller links the concept of social justice to price discrimination: “[W]e are not talking about the distribution of products or services as such. Rather, we are discussing the distribution of price advantages and disadvantages—the opportunities to buy the same product at a discount or exclusion from these opportunities. Price discrimination challenges the idea of free market exchange as a just distributional criterion for goods in the market sphere because it distorts the price system” (Miller 2014, p. 91).
On the one hand, most online users freely provide private information through credit card transactions, social media outlets, online services, and loyalty card patronage, yet the inequity between consumer costs and firm benefits represents a market inefficiency imposing considerable negative externalities on the marketplace (Baumer et al. 2003; Bergelson 2003). Those consumers identified by a firm as “low value” may give private information more frequently, but find themselves at a relative disadvantage to “high-value” customers who contribute very little. On the other hand, if volunteering private information yields a significant benefit, those withholding it encounter the issue of adverse selection wherein negative consequences accrue to non-participants. The issue broadens when fueled by vast quantities of information available without voluntary disclosure—constructed from opaque integrated information systems that facilitate the unjust allocation of price advantages.
Proposition 6
Social justice is an ethical framework considered by firms that deploy algorithmic pricing models.