Introduction

Pricing is a tremendously important component of the organizational marketing mix (Borden 1964). Decision-making within the pricing function has evolved from an individual clerical task driven by instinct to a cross-functional team activity that drives strategic value (Mitchell 2012). Evidence supports the assertion that developing significant pricing capabilities positively impacts overall firm performance (Dutta et al. 2003). Liozu and Hinterhuber (2014) suggest a causal model of specific capabilities that includes “robust internal processes related to price setting, pricing tools, and training” (p. 153). Such processes could incorporate a combination of data-mining techniques to harness inferential analytics (Furnas 2012) and information asymmetries to exploit behavioral discrimination (Ezrachi and Stucke 2016a).

A paucity of descriptive research obscures the extent to which executives implement pricing strategies consistent with theoretical models and determinants (Noble and Gruca 1999; Rao and Kartono 2009). Data science has enabled “black box” algorithmic decision-making (Pasquale 2013), replete with technical complexity and secrecy, representing an evolution from classical pricing approaches and making a compelling case for price conditioning in certain industries (Acquisti and Varian 2005). As consumers develop an awareness of discriminatory pricing practices, a “culture of suspicion and envy” emerges, illuminating a growing social concern (Turow 2006, p. 182). The specific mechanisms underpinning the competitive necessity of price discrimination operate largely in secret, beyond the current reach of effective regulation and largely absent from academic discourse. Miller (2014, pp. 99–101) makes an admirable first effort to codify various ethical concerns related to common forms of price discrimination, and this paper seeks to build upon his call for public debate.

Throughout this paper, we refer to algorithms, algorithmic models, and algorithmic decision-making interchangeably, although we acknowledge a technical distinction between these terms. For simplicity, we use these terms to describe the broader ecosystem of four interrelated algorithm technologies (Treleaven et al. 2019):

  • Artificial intelligence (AI): the program that interprets and analyzes the data without explicit coding, makes predictions, and executes decisions to maximize a utility function (or goal). We limit our scope of AI to the subset of machine learning, in which programs learn and adapt when exposed to new data (a minimal illustrative sketch follows this list).

  • Blockchain: provides the data processing and storage infrastructure via a decentralized and distributed ledger of blocks (records) linked using cryptography. Blockchain also enables the creation and execution of smart contracts that guarantee performance of credible transactions.

  • Internet of Things (IoT): provides raw data through the extension of enabled processors, sensors, and communications hardware into physical devices and everyday objects. IoT devices communicate through a shared access point or directly with other connected devices.

  • Big data analytics: the process of cleaning, structuring, and analyzing (mining) raw data to discern patterns, preferences, and trends. AI relies on the data prepared through analytics to learn how to make better decisions.
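To ground the machine-learning subset referenced in the AI bullet above, the following minimal sketch (entirely hypothetical; the class, names, and parameters are ours for illustration, not drawn from any cited system) shows a program that is never explicitly coded with pricing rules, yet changes its behavior as each new observation arrives:

```python
import math

# Minimal, hypothetical sketch of "learning to change when exposed to
# new data": an online logistic regression that estimates the
# probability of purchase at a given price, updating its parameters
# with every observed transaction rather than through explicit coding.

class OnlinePurchaseModel:
    def __init__(self, learning_rate=0.05):
        self.w = 0.0    # learned price sensitivity
        self.b = 0.0    # learned baseline propensity
        self.lr = learning_rate

    def predict(self, price):
        """P(purchase | price) under the current parameters."""
        return 1.0 / (1.0 + math.exp(-(self.w * price + self.b)))

    def update(self, price, purchased):
        """One stochastic gradient step on the log-loss."""
        error = self.predict(price) - (1.0 if purchased else 0.0)
        self.w -= self.lr * error * price
        self.b -= self.lr * error

model = OnlinePurchaseModel()
for price, purchased in [(9.99, True), (14.99, False), (11.49, True)]:
    model.update(price, purchased)       # the model adapts to new data
print(round(model.predict(12.99), 3))    # updated purchase probability
```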

Popular literature is replete with examples debating the extent to which opaque and non-auditable algorithms should require regulation and oversight (Bostrom 2014; O’Neil 2016; Pasquale 2015). Accordingly, academia must more robustly examine the potential anticompetitive dynamics and possible predatory abuses where our locations and interactions can be quantified to accurately model our behaviors and preferences (Ezrachi and Stucke 2016b; Mayer-Schönberger and Cukier 2013; Papandropoulos 2007). Perhaps more poignant for analysis are instances where the lack of transparency in algorithmically determined personalized pricing creates a potential for exclusion facilitating actual discrimination (Executive Office of the President 2015; Pasquale 2015).

Brunk (2012) operationalizes a construct for consumer perceived ethicality (CPE) to describe individual perception of moral organizational disposition, acknowledging this may differ from actual organizational behavior. The research concludes “that, contrary to philosophical scholars’ exclusively consequentialist (teleological) or non-consequentialist (deontological) positions, a consumer’s ethical judgment of a company or brand can be a function of both evaluation principles, sometimes applied simultaneously” (p. 562). The pluralistic implication of this construct provides one possible reconciliation of consumers’ paradoxical attitudes toward privacy versus their demonstrated ‘anti’-privacy behaviors (Brown 2001; Norberg et al. 2007). Despite outward attitudes of concern about infringement of personal private data, anecdotal and empirical evidence suggests that consumers willingly divulge such information in return for a relatively small immediate benefit. Acquisti (2004) concludes “it is unlikely that individuals can act rationally in the economic sense when facing privacy sensitive decisions …. [and] even ‘sophisticated’ individuals may under certain conditions become privacy ‘myopic’” (p. 22).

While government policymakers might favor regulatory oversight on evidence of collective privacy concerns, contradictory behaviors weaken the justification for such action. Kokolakis (2017) conducts a laudable review of the literature to tentatively resolve the paradox, concluding that the contradictory research findings explaining the relationship between privacy attitudes and behaviors suggest a complex rather than paradoxical phenomenon. Categorization of research findings depends upon the theoretical background, methodological approach, and specific context of study. Notably, Kokolakis’ literature review explicitly excluded the body of literature relating to ethical and legal aspects of the information privacy paradox. We contend that in consideration of a complex system, the corpus of literature under review should be inclusive of probable ethical considerations and legal constraints.

Drawing on extant literature and supported by extensive practitioner experience, we propose a concept model to describe how personalized price levels may derive from a combination of organizational and technology factors. The model retains its grounding in liberal economic theory, wherein businesses may establish or change price levels freely within a subjective and utility-based conception of value (Bigwood 2003). Yet competition requires competitors to act independently, and Gal (2019) suggests that “with the advent of algorithms and the digital economy, it is becoming technologically possible for computer programs to autonomously coordinate prices and trade terms” (p. 18). Potential legal constraints emerge when “algorithms are designed independently by competitors to include decisional parameters that react to other competitors’ decisions in a way that strengthens or maintains a joint coordinated outcome” (p. 19). Businesses race to convert mountains of data into generative insights to improve personalization-centered retail practices. They deploy internally developed and acquired technology factors that enable the marketing function to bridge from customer segmentation to individual personalization. Turow (2017) alludes to the consideration of ethical constructs as retailers use algorithms as a tool of social discrimination, sending customers “different messages, even different prices, based on profiles retailers have created about them—profiles that the individuals probably don’t know exist and, if they could read them, might not even agree with” (pp. 247–248).

The ethical and legal constructs we have adopted into our concept model are informed by Nissenbaum’s contextual integrity theory, positing that technology-based practices affecting flows of personal data evolve within distinctive contexts of informational norms (Nissenbaum 2010). Miller (2014) adopts this view within the specific context of using personal information to fuel algorithms that explore legal and ethical principles in retail pricing, yet this analysis predates the emerging ubiquity of both artificial intelligence and data science methods that enable autonomous decision-making within the pricing function. Deployment of algorithms to tailor pricing and promotions in real time is the most commonly reported use case (40%) among retailers already adopting AI technology. Projected increases in hiring data scientists approach 50% over the next 3 years as companies seek to integrate direct and indirect channels of personal data to better train AI-based algorithmic models on unified platforms leveraging data-driven capabilities (Deloitte Consulting & Salesforce 2018). Our concept model lays the foundation for future empirical studies to evaluate how companies address considerations of ethics and mitigate legal constraints by using algorithmic pricing models in conjunction with human judgment.

Research questions

Transdisciplinary research encircles technology factors, ethical considerations, and legal constraints as a triumvirate defining the broad context within which algorithmic models set (often dynamic) personalized pricing levels. We propose a concept model to suggest how firms reconcile the predictive value of algorithmic decision-making with organizational factors offering human judgment in consideration of consumer ethical interests and the legal and regulatory policy consequences of informational wrongdoing. We further propose multiphased qualitative research based on consumer focus groups and embedded case studies at firms utilizing algorithmic models to make pricing decisions. Our objective from this research is to address the following research questions:

  1. What factors contribute to consumers’ ethical and legal schemata when confronting situational contexts of algorithmic personalized pricing?

  2. What contributes to the organizational schema that firms consider when adopting algorithmic price-setting models? What feedback mechanisms impact the price-changing decision?

Key implications of our research may extend to both the scholarly and practitioner communities to provide generative insights about the extent of congruence between firm practices and consumer perceptions of personalized pricing using algorithmic models. We seek to address the paucity of empirical literature that considers the complexity of organizational and technological factors concurrent with algorithmic decision-making. We further anticipate contributing to a call by policymakers to determine suitable remedies where secrecy of black box algorithms may enable harmful use of aggregated personal information to make pricing decisions.

Conceptual model and literature review

Informed by the literature and significant practitioner experience, the following preliminary conceptual model is offered as a framework for the proposed qualitative studies (Fig. 1).

Fig. 1 Preliminary conceptual model

A constructivist grounded theory method (Charmaz 2014) formed the basis for our literature review and subsequent development of the conceptual model. This approach provides for methodological flexibility, dismissing certain rules, recipes, and requirements that force preconceived ideas and theories on acquired data. Instead, it invites the researcher to draw interpretive understandings and “follow leads that [researchers] define in the data, or design another way of collecting data to pursue our initial interests” (p. 32). The method is further substantiated as a precursor to our proposed qualitative research study in which a goal of data collection is to elucidate the meanings, beliefs, and values—the lived experiences—of research participants (Maxwell 2013).

Our systematic literature review utilizes the dynamic, reflexive, and integrative (DRI) zipper framework (El Hussein et al. 2017) that supports the intent of the constructivist grounded theory method. Classical and contemporary grounded theorists utilize systematic exploration to audit and appraise acquired data (Corbin and Strauss 2015; Stebbins 2001). The DRI zipper framework extends this interpretation to provide “traceable evidence to support [stated] generalizations and to justify the need to conduct the study” (El Hussein et al. 2017, p. 1207). Therefore, we approach the literature review as a concatenated activity (Stebbins 2001) that incorporates theoretical sensitivity (Glaser and Strauss 1999) to think “across fields and disciplines …. without letting it stifle [our] creativity or strangle [our] theory” (Charmaz 2014, p. 308).

Introduction to the literature review

We identify the relevant literature informing our conceptual model by drawing from the following key theories transcending fields of management, economics, and moral and political philosophy: behavioral theory of the firm (Cyert and March 1992), organizational decision-making theory (March 1994; March and Simon 1993 [1958]), ethical pluralism (James 2016 [1908]; Ross 2002 [1930]), and social contract theory (Hobbes 1982 [1651]; Locke 1980 [1690]; Rawls 1971).

We begin with a construction of the legal constraints and ethical considerations within which firms struggle to balance pressures and temptations that ultimately define the organizational factors influencing the decision-making process. Kaptein (2017) models the dimensions, conditions, and consequences of ethics struggles, proposing that greater ethical gaps created between opposing forces consequently require greater struggle. The multilevel model facilitates mutual interaction between individual and organizational units of analysis, allowing interdependent struggle through interactive combat. Nissenbaum’s (2010) theoretical framework of contextual integrity embraces the dynamicity of resulting ethical norms, thus bridging between organizational and technology factors by contextualizing the extent to which algorithmic models use flows of personal information that conform to consumers’ expectations. Accordingly, the process of personalized price setting depends both on algorithmic models and human judgment derived, in part, from a consideration of whether personal data are properly appropriated for use in a specific context.

Having described the broad theoretical framework from which our conceptual model is informed, we now briefly discuss each elemental component.

Legal constraints: conceptual foundations

The tenets of economic liberalism emerged from the Enlightenment era to repudiate the feudal privileges and traditions of aristocracy. Smith (1975 [1776]) surmises that a laissez-faire philosophy incorporating minimal government intervention leads businesses, via unseen market forces, to distribute wealth “in the proportion most agreeable to the interest of the whole society” (p. 630). Recognizing the complexity of a knowledge-based society, Hayek (2011 [1960]) places some boundary conditions upon Smith’s invisible hand by arguing for modest yet limited government to create and enforce a general rule of law serving as individuals’ guardian of personal freedom. Liberty in the contemporary era is the result of an empirical and evolutionary view of politics wherein government creates and preserves the conditions to allow free markets to flourish. Friedman extends a somewhat tepid supposition, concluding that on a case-by-case basis, government intervention may mitigate negative externalities (or, as he terms them, “neighborhood effects”), but cautions that mediating a market failure with legislative action likely increases other external costs (1978, 2012 [1962], pp. 30–32). Common within the economic liberalist tradition is the preservation of laissez-faire capitalism, yet many foundational proponents recognize limited government intervention as an antecedent to preserve individual liberty. The conditions under which, and the extent to which, the political environment should intervene in free markets remain a matter of considerable debate.

Antitrust as a legal constraint

Within the United States, the Department of Justice Antitrust Division maintains investigatory, enforcement, and prosecutorial jurisdiction over statutes related to both the illegal commercial restraint of fair competition and monopolistic practices (United States Department of Justice 2018, pp. II-1–II-25). Of particular importance, the Robinson-Patman Act (1936) specifically prohibits certain forms of wholesale discriminatory pricing, yet Robert Bork persuasively argues it is “the misshapen progeny of intolerable draftsmanship coupled to wholly mistaken economic theory” (1978, p. 382). At issue is the Act’s protection of competitors rather than competition—an antithetical and misguided approach to antitrust policy—by making unlawful the discrimination “in price between different purchasers of commodities of like grade and quality … and where the effect of such discrimination may be substantially to lessen competition or tend to create a monopoly in any line of commerce” (15 U.S.C. § 13(a)). Consequently, the phenomenon considered by the Act is not price discrimination, but rather, price difference—and it fails to consider those legitimate price differentials wholesalers may offer to their customers that in turn drive lower prices for consumers (Blair and DePasquale 2014). In a novel empirical study of court cases involving the Robinson-Patman Act, Bulmash (2012) demonstrates how “secondary line” injury claims (brought by purchasers of price-discriminated goods operating in a resale context) can produce coordinated competition among suppliers, harming both consumers and competition. Despite historical recommendations for significant revision (Neal et al. 1968) and a contemporary call for outright repeal (Antitrust Modernization Commission 2007), the Act persists, even as prosecutors have largely abandoned its application in “primary line” injury claims (brought by competitors of the price-discriminating firm).

While antitrust law largely seeks to prevent second-degree price discrimination through the “acquisition or exercise of market power” (Fisher 2007, p. 14), Kochelek (2009) considers whether the Sherman Act indirectly applies in those cases where data-mining-based price discrimination may exert monopolistic or oligopolistic restraints on trade (15 U.S.C. §§ 1-2). While disfavoring such practices, Kochelek observes that “data-mining-based price discrimination schemes fall into a gap between antitrust doctrine and the policies underlying the doctrine … [resulting in] reduce[d] consumer welfare, waste[d] resources, and reduce[d] allocative efficiency in exchange for increased producer profits that are insufficient to justify their cost” (2009, p. 535). Miller (2014) concurs, arguing that existing antitrust doctrine serves to limit a single firm to the extent it exerts market power to raise price levels for most customers, yet data-mining-based price discrimination specifically targets unlikely defectors with higher prices without requiring widespread market influence. Effectively, classical antitrust laws are incompatible with modern methods of price discrimination.

Notwithstanding this limitation, the specter of seemingly unintentional collusion could result from algorithms that “are designed independently by competitors to include decisional parameters that react to other competitors’ decisions in a way that strengthens or maintains a joint coordinated outcome …. and the combination of these independent coding decisions leads to higher prices in the market” (Gal 2019, p. 19). As opposed to first-generation ‘adaptive’ algorithms that estimate and optimize price objectives subject to prior data, second-generation ‘learning’ algorithms use machine learning to condition their strategies upon experience (Calvano et al. 2019). Over time, independent algorithms may conclude that collusion is profitable, ultimately factoring competitive behavior as one among many changing market environmental variables. A small number of such cases have progressed through the judicial process (notably, “U.S. v. David Topkins,” 2015), but each is ultimately the result of algorithms designed by programmers (at the direction of managers) with collusive intent. Calvano et al. (2019) posit that the nature of learning algorithms to outperform humans in other contexts may result in collusive pricing that is hard to detect, presenting significant challenges to effective antitrust regulation.
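Calvano et al. (2019) study ‘learning’ algorithms of the Q-learning family. The sketch below is a deliberately simplified, hypothetical rendering of that mechanism (the price grid, demand function, and learning parameters are ours): the agent conditions its price on the rival’s last observed price and updates its strategy from realized profit, with no collusive intent coded anywhere.

```python
import random

# Simplified, hypothetical sketch of a second-generation "learning"
# pricing agent: tabular Q-learning over a discrete price grid,
# conditioning on the rival's last observed price. Any coordination
# would emerge from repeated interaction, not from the code itself.

PRICES = [1.0, 1.5, 2.0]                  # discretized price grid
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning, discount, exploration

q = {(s, a): 0.0 for s in range(len(PRICES)) for a in range(len(PRICES))}

def choose(state):
    # Epsilon-greedy: mostly exploit learned values, occasionally explore.
    if random.random() < EPSILON:
        return random.randrange(len(PRICES))
    return max(range(len(PRICES)), key=lambda a: q[(state, a)])

def profit(my_action, rival_action):
    # Toy demand: sales fall in own price and rise in the rival's price.
    my_p, rival_p = PRICES[my_action], PRICES[rival_action]
    return my_p * max(0.0, 2.0 - my_p + 0.5 * rival_p)

state = 0                                  # index of rival's last price
for _ in range(10_000):
    action = choose(state)
    rival = random.randrange(len(PRICES))  # stand-in for rival behavior
    reward = profit(action, rival)
    best_next = max(q[(rival, a)] for a in range(len(PRICES)))
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = rival                          # rival's price becomes next state
```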

Proposition 1

Antitrust factors are not legal constraints presently considered by firms that deploy algorithmic pricing models.

Antidiscrimination as a legal constraint

The extent to which discriminatory schemes in pricing are addressed in US federal law is largely embodied within the Civil Rights Act of 1964 mandating that “[A]ll persons shall be entitled to the full and equal enjoyment of the goods, services, facilities, privileges, advantages, and accommodations of any place of public accommodation, … without discrimination or segregation on the ground of race, color, religion, or national origin” (42 U.S.C. § 2000a). Spurred by California’s Unruh Civil Rights Act (Cal. Civ. Code, § 51), state-level legislation has extended such equal protection to include gender, language, and other factors easily detected and exploited by a price discrimination algorithm. Yet evidence suggests that even when states expand civil and equal rights doctrine, illegal discriminatory pricing schemes persist and prohibitions are broadly unenforced (Massachusetts Post Audit and Oversight Bureau 1997). Federal opinion is mixed—the Federal Trade Commission offers plentiful examples of how big data can exploit underserved populations (2016, pp. 9–12), while the Executive Office of the President offers a counterpoint, stipulating that “if historically disadvantaged groups are more price-sensitive than the average consumer, profit-maximizing differential pricing should work to their benefit” in competitive markets (2015, p. 17). Distinguishing between the aforementioned disparate treatment and disparate impact, the Presidential report “suggests that policies to prevent inequitable application of big data should focus on risk-based pricing in high-stakes markets such as employment, insurance, or credit provision” (p. 17).

The European Union extends legislation to prohibit “discrimination on grounds of nationality of the recipient or national or local residence” (Directive 2006/123/EC, Article 20), as demonstrated by a 2016 settlement between the European Commission and Disneyland Paris over allegations of inconsistent booking practices across member states (Abele 2016). Despite enforcement efforts, a sweeping 2016 review of 10,537 e-commerce websites conducted by the European Commission (European Commission 2016) found the practice of “geo-blocking” in 63% of cases, effectively limiting interstate commerce. The resulting regulation “seeks to address direct, as well as indirect discrimination” (Regulation 2018/302), and, while not prohibiting price discrimination per se, eliminates one conduit through which such practices are facilitated.

Proposition 2

Antidiscrimination doctrine is a legal constraint considered by firms that deploy algorithmic pricing models.

Data privacy as a legal constraint

As described by Acquisti et al., “personal privacy is rapidly emerging as one of the most significant public policy issues” (2016, p. 485), and both social and behavioral science literature conclude that individual concerns are driven by uncertainty, context dependence, and malleability of privacy preferences (Acquisti et al. 2015). The Identity Theft and Assumption Deterrence Act of 1998 (18 U.S.C. § 1028) addresses the surreptitious theft of personal data, but largely depends upon a consumer-initiated complaint to launch an investigative inquiry. To spur proactivity in those cases arising from organizational data vulnerabilities, individual states have adopted disclosure laws requiring firms to report security and privacy breaches, yet evidence indicates this leads to only a minimal decrease in widespread data theft (Romanosky et al. 2011). Several studies further indicate that the financial impacts to firms disclosing security and data privacy breaches are negligible (Acquisti et al. 2006; Campbell et al. 2003). This breadth of research suggests that while legal doctrine and disclosure requirements may minimally reduce data theft, financial impacts to the firm are minimal and the graver concern for data privacy is the burden placed on consumers to wrestle with an apparent dichotomy between privacy attitudes and privacy behaviors (Berendt et al. 2005).

A 2009 study indicates that American consumers overwhelmingly reject behavioral targeting, yet express widespread erroneous views about the breadth and durability of existing privacy laws (Turow et al. 2009). While online users often seek anonymity at the application level (e.g., a website), broad anonymity is subject to technology and governmental factors that make it difficult to achieve (Kang et al. 2013). Owing to the complexity of coexisting factors, this observed dichotomy is likely attributable to consumer perceptions of risk versus trust (Norberg et al. 2007) and behavioral heuristics (e.g., endowment effect) (Acquisti et al. 2013). Roberds and Schreft (2009) even argue that some loss of data privacy (and, by analogy, some degree of identity theft) enables the efficient sharing of information necessary for our modern payment systems. The debate over how best to preserve privacy while acknowledging the benefit of information sharing drives a considerable wedge between proponents of regulation and proponents of self-regulation of consumer data. At one extreme, Solove recommends a regulatory “architecture that establishes control over the data security practices of institutions and that affords people greater participation in the uses of their information” (2003, p. 1266). The template for such regulation could model the OECD Privacy Framework (2013) requiring radical continuous transparency about what information is being shared, a participatory interface for consumers to verify, amend, limit, or remove erroneous information, and accountability mandates for entrusted entities. At the opposite extreme, Rubin and Lenard (2002) argue forcefully for industry self-regulation, pointing to their study indicating that imposition of regulation adds unnecessary economic costs absent justification of an existing market failure.

The United States and European Union have approached the data privacy debate from different angles, with the latter opting to enact a considerable regulatory framework with the 2016 General Data Protection Regulation (GDPR) (Regulation 2016/679). The GDPR broadened its scope far beyond the 1995 EU Data Protection Directive (Directive 95/46/EC), widely targeting even foreign firms that collect, use, and maintain personal data of EU citizens. The regulation obliges both controllers (GDPR Article 4(7)) and processors (GDPR Article 4(8)) of personal data to meet certain quality requirements including demonstrable consent of the subject for a specified and legitimate purpose (GDPR Articles 5(1) & 6(1)). Inge Graef succinctly describes the contextual requirements for price discrimination: “…in order to engage in personalized pricing, which is a form of profiling, a controller must have the explicit consent of the data subject involved. Once the controller has obtained consent of the data subject and has met the data quality requirements, it is free to engage in personalized pricing under data protection law” (2018, p. 551).

Proposition 3

Data privacy doctrine is a legal constraint considered by firms that deploy algorithmic pricing models.

Ethical considerations: conceptual foundations

In a world increasingly observed by a triangulation of biometric surveillance, embedded sensors, and dynamic decision-making algorithms, it is unsurprising that academic discourse spars over the degree to which such panoptic technologies pose a moral hazard risking an inherent loss of freedom (Gandy 1993; Reiman 1995). The specter of Bentham’s Panopticon (2015 [1843]) as reference to a prison supervised by an insidious all-knowing overseer seems oddly applicable to our time, in a way reminiscent of Foucault (1979) and Orwell (2017 [1949]). Fairness is an ethical framework often considered in relation to price discrimination. A conceptual framework capturing the perceptions of price fairness is nicely outlined by Xia et al. (2004), concluding that personalized pricing approaches may diminish trust if unmitigated by certain factors like product differentiation. A multidimensional view of trust by Garbarino and Lee (2003) supports this assertion, concluding that such pricing schemes reduce the trust consumers perceive in the benevolence of the firm. Yet, what constitutes “fairness” is subjectively defined, and firms have an incentive to reframe economic exchanges (e.g., transforming personalized pricing into a norm) in such a way to make them seem more fair (Kahneman et al. 1986).

Other ethical considerations addressed by the literature include the degree to which deception (or perceived deception) creates individual harm. Analogously, information asymmetry and negative externalities caused by price discrimination can result in widespread social injustice. Although now over a decade old, a 2005 University of Pennsylvania study revealed that a majority of US adults recognize their online behavior is tracked, while 65% believed they knew what was required to prevent exploitation of these data. When participants took a short true/false assessment, however, no statistical difference was noted between those expressing and those lacking such confidence—and the overall scores were abysmally poor, averaging 63–72% incorrect on questions relating to legal rights and privacy policy (Turow et al. 2005). Given this wide perceptual schism, it is unsurprising that concerns of deception, fairness, and social justice have attracted widespread attention.

Deception as an ethical consideration

US legal doctrine articulates that a seller’s non-disclosure of a fact (e.g., that a lower price may exist via alternative channels) is only equivalent to an assertion that the fact does not exist in severely limited cases, notably where the fact “would correct a mistake of the other party as to a basic assumption on which that party is making the contract” (Restatement (Second) of Contracts § 161(b), 1981). As elaborated by Miller (2014), unless a seller has publicly advertised a price, a buyer is rarely positioned to claim that uniformity of prices represented the grounding assumption upon which they entered into the transaction.

The rise of algorithmic pricing to exploit individual preferences and behavioral discrimination increases the magnitude of information asymmetry, and to the extent that pricing is personalized, minimizes the benefit consumers may receive from price comparison websites (Kannan and Kopalle 2001). Danna and Gandy Jr. (2002) illuminate the process of data mining to demonstrate how algorithmic decision-making may segment high-value from low-value customers, allowing firms to better maximize the difference between customer acquisition cost and lifetime value. Yet “techniques that create and exploit consumers’ high search costs undermine the ability to compare prices and can lower overall welfare and harm consumers” (Miller 2014, p. 80) leading some to recommend that broadened disclosure laws are necessary to inform consumers when individual data profiles are mined to personalize pricing. In particular, Solove and Hoofnagle (2006) conceptualize a “Model Regime” to mitigate gaps and limitations in existing legal doctrine, ultimately arguing for widespread expansion of the Fair Credit Reporting Act (15 U.S.C. § 1681 et seq.) to apply whenever data brokers use personal information—not altogether dissimilar from the European Union’s GDPR requirements.
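The segmentation process Danna and Gandy Jr. describe can be pictured very simply. In the hypothetical sketch below (all figures invented for illustration), customers are ranked by estimated lifetime value net of acquisition cost and sorted into the tiers that drive differential treatment:

```python
# Hypothetical sketch of high-/low-value customer segmentation:
# rank by estimated lifetime value net of acquisition cost, then
# split against an arbitrary threshold that drives targeting.

customers = [
    {"id": "a", "lifetime_value": 1200.0, "acquisition_cost": 150.0},
    {"id": "b", "lifetime_value": 300.0,  "acquisition_cost": 220.0},
    {"id": "c", "lifetime_value": 800.0,  "acquisition_cost": 90.0},
]

def net_value(c):
    return c["lifetime_value"] - c["acquisition_cost"]

THRESHOLD = 500.0
high_value = [c["id"] for c in customers if net_value(c) >= THRESHOLD]
low_value  = [c["id"] for c in customers if net_value(c) < THRESHOLD]
print(high_value, low_value)   # ['a', 'c'] ['b']
```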

Some seek to address discriminatory pricing within the framework of the Uniform Commercial Code’s unconscionability provision (U.C.C. § 2-302), serving as a roundabout alternative to ineffective antitrust regulation (e.g., the Robinson-Patman Act) by labeling price discrimination as an egregiously deceptive trade practice (Darr 1994; Klock 2002). Yet the practicality of such application is questionable, as it yields to the freedom of contract, in which multiple parties seek to maximize their utility within the transaction. Klock concurs with this limitation, yet points to seemingly applicable state-level statutes (Ala. Code § 8-31 et seq.; Tex. Bus. & Com. Code § 17 et seq.) that could prohibit unconscionable discriminatory pricing actions. Aside from questionable applicability (e.g., the Alabama law applies only during a state of emergency), the widespread practice of price discrimination among online merchants across interstate boundaries makes the application of state statute unlikely.

Proposition 4

Deception is an ethical framework considered by firms that deploy algorithmic pricing models.

Fairness as an ethical consideration

Price fairness in a sales transaction has been defined in literature as “the extent to which sacrifice and benefit are commensurate for each party involved” (Bolton et al. 2003, p. 475) premised on a principle of dual entitlement stipulating that buyers are entitled to the terms of a reference transaction and sellers are entitled to a reference profit (Kimes 1994). To determine the perceived fairness of the reference profit, buyers often rely upon past prices (Briesch et al. 1997; Jacobson and Obermiller 1990), in-store comparative prices (Rajendran and Tellis 1994), and competitive prices (Kahneman et al. 1986). Yet research demonstrates that consumers underestimate inflation, ignore in-store comparative cost categories beyond cost of goods sold, and attribute competitive price differences to profit rather than cost (Bolton et al. 2003). Regardless of the attribution, consumers are far less likely to make a purchase when they perceive the price as both unfair and personally unfavorable (Richards et al. 2016).
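One way to formalize the dual-entitlement principle (the notation below is ours, not the cited authors’): let $p_r$ and $c_r$ denote the price and unit cost of the reference transaction, and let $p$ and $c$ denote the new price and cost.

```latex
% A possible formalization of dual entitlement (our notation):
\pi_r = p_r - c_r                                % seller's entitled reference profit
\text{fair: } \; p - c \le \pi_r                 % increase merely passes through cost
\text{unfair: } \; p > p_r \text{ while } c = c_r % purely demand-driven increase
```

On this reading, cost-justified increases preserve both parties’ entitlements, whereas raising price on unchanged costs (including through personalized willingness-to-pay estimates) violates the buyer’s entitlement to the terms of the reference transaction.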

To the extent that different prices result from personalized or dynamic pricing, Garbarino and Lee (2003) empirically demonstrate customers’ resulting loss of benevolent trust in the firm. Fisher (2007) and Cox (2001), for example, illustrate the exceptional wrath incurred by Amazon and Microsoft when loyal, trusting customers learned of discriminatory schemes offering new buyers most-favored pricing opportunities. Such recognition by technology-savvy customers drives their adoption of watchdog tools (Mikians et al. 2012) and false identities (Acquisti and Varian 2005) to seek the lowest possible prices. Trust is thereby replaced with attentive suspicion and preemptive measures to avoid discrimination.

Proposition 5

Fairness is an ethical framework considered by firms that deploy algorithmic pricing models.

Social justice as an ethical consideration

Walzer (1983) argues that distributive justice is a practice of allocating goods appropriately given a complex equality. An injustice occurs when that complex equality is disrupted by a tyrannical force that unhinges the pluralist trinity of free exchange, desert (being deserving), and need. Miller links the concept of social justice to price discrimination: “[W]e are not talking about the distribution of products or services as such. Rather, we are discussing the distribution of price advantages and disadvantages—the opportunities to buy the same product at a discount or exclusion from these opportunities. Price discrimination challenges the idea of free market exchange as a just distributional criterion for goods in the market sphere because it distorts the price system” (Miller 2014, p. 91).

On the one hand, most online users freely provide private information through credit card transactions, social media outlets, online services, and loyalty card patronage, yet the inequity between consumer costs and firm benefits represents a market inefficiency imposing considerable negative externalities on the marketplace (Baumer et al. 2003; Bergelson 2003). Those consumers identified by a firm as “low-value” may give private information more frequently, but find themselves at a relative disadvantage to “high-value” customers who contribute very little. On the other hand, if volunteering private information yields a significant benefit, those withholding it encounter the issue of adverse selection wherein negative consequences accrue to non-participants. The issue broadens when fueled by vast quantities of information available without voluntary disclosure—constructed from opaque integrated information systems that facilitate the unjust allocation of price advantages.

Proposition 6

Social justice is an ethical framework considered by firms that deploy algorithmic pricing models.

Discussion

Aside from individual negotiation as a tool for personalized pricing, loyalty card programs are considered an early form of using IT systems to provide real-time individualized point-of-sale promotions (Henschen 2011). These early efforts expedited big data’s foray into the retail sector, but had limited applicability to future purchases. With the proliferation of big data analytics and artificial intelligence, companies shifted their technological approach to exploit consumers’ preferences shared within the online environment, providing fertile opportunities to offer mass customized offerings with personalized pricing (Ghose and Huang 2009). Current technology extends into the former realm of science fiction where in-store video monitors use facial recognition (McDonald 2015) and Wi-Fi triangulation maps foot traffic throughout a venue (Kopytoff 2014). The data are tracked, mined, aggregated at an individual customer level, and used to optimize behavioral advertisements and personalize pricing (Ezrachi and Stucke 2016b).

Beyond the retail context, broad dissemination of consumer data increases individual exposure to potentially predatory pricing practices that are currently unregulated (most notably within the United States). Genetic test-kit provider 23andMe is particularly notable for sharing its accumulated data with pharmaceutical partners Genentech (Herper 2015) and GlaxoSmithKline (Roland 2019). A reported 80% of customers explicitly consent to sharing their private (albeit anonymized) data for research purposes. Studies demonstrate how individuals learning they carry genetic markers indicative of high risk for certain diseases purchase long-term care insurance to mitigate the potential risk (Taylor et al. 2010; Zick et al. 2005). The resulting information asymmetry between insurance carriers and their customers could impact pricing of insurance premiums. Erlich and Narayanan (2014) demonstrate the bi-directionality of information asymmetry by illustrating the relative ease with which a technically sophisticated adversary can exploit certain techniques using quasi-identifiers or metadata to trace individual identity. While in the United States the Genetic Information Nondiscrimination Act (a.k.a. GINA, Pub. L. No. 110-233) prohibits the use of genetic information when underwriting health insurance premiums, no federal antidiscrimination protection exists for life or long-term care insurance. Klitzman (2018) proposes some possible solutions, but acknowledges that each results in an uncertainty underlying the price level.

The dynamic element

Implementation of a dynamic element in conjunction with targeted pricing in competitive industries can yield additional profit (Chen and Zhang 2009), but can also lead to peril for companies, as consumers (and governments) are wary of issues of “fairness” in relation to price discrimination (UK Competition and Markets Authority 2015). Rayna et al. (2015) describe how consumer willingness to disclose information can enable the design of personalized pricing that is economically advantageous to both the firm and the customer, but in many cases, consumers lack a robust awareness of whether, when, and how their information is being used to set prices (Consumers Council of Canada 2017). Ezrachi and Stucke (2016a) introduce a thought-provoking suggestion: the delineation between personalized and dynamic pricing will disappear as firms use behavioral discrimination to accelerate information asymmetry and increase consumers’ search costs for a “true” market price.

Researchers at Boston Consulting Group (BCG) (2019) tout the inevitability of “progressive pricing,” a variant of dynamic pricing “in which value is shared with rather than extracted from [customers]” (p. 8). Progressive pricing builds upon price optimization and yield management, using algorithmic models to capture additional consumer surplus both by raising prices for those with a higher willingness to pay and by lowering prices to increase penetration into previously unserved market segments. This strategy of “optimizing a continuum of prices” (p. 3) is gaining traction to expand markets and increase profits. A large retail bank uses the approach to personalize interest rates on checking account deposits by monitoring customers’ cash transfers to secondary savings banks and deploying algorithms as a predictive barometer of the interest rate necessary to prevent such withdrawals. Ridesharing services are renowned for surge pricing, pool discounts, and leveraging algorithms to offer nudges that target specific customers’ sensitivities to promotional pricing incentives (Ke 2018). BCG claims that an imperative of progressive pricing is that “firms must redefine fairness and make the case that the criteria that drive price differences are fair …. [o]ne example is supply and demand” (2019, p. 8). Yet redefining fairness can be difficult when an algorithm mistakes misfortune for missed opportunity. Over the 2 days of November 8–9, 2018, following the outbreak of California’s deadliest wildfire, online price trackers noted that the Amazon.com direct (not third-party) prices for a First Alert fire extinguisher and a ResQLadder fire escape ladder increased by 18.3% and 22%, respectively. A spokesperson for Amazon.com dismissed the possibility of surge pricing, yet no price fluctuations were reported for similar products listed on Amazon.co.uk (Ke 2018). BCG reports that “progressive pricing is inevitable …. to any individual customer or segment of one who opts in to share personal real-time data” (pp. 6–7), underscoring the importance of examining the degree of congruence between consumer perceptions of ethical or legal pricing practices and the algorithmic decision-making processes deployed by firms to set prices.
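A stylized sketch helps make the “continuum of prices” logic concrete (the parameters below are hypothetical and imply no actual BCG model): each customer is quoted a price anchored to an estimated willingness to pay (WTP), capturing surplus from high-WTP buyers while offering below-list prices to previously unserved segments.

```python
# Stylized, hypothetical sketch of progressive pricing: quote each
# customer a price along a continuum anchored to estimated willingness
# to pay (WTP), rather than a single list price.

LIST_PRICE = 100.0
UNIT_COST = 60.0
SURPLUS_SHARE = 0.5   # fraction of estimated surplus captured by the firm

def progressive_price(estimated_wtp):
    # Never price below cost; otherwise move from cost toward WTP,
    # leaving (1 - SURPLUS_SHARE) of the surplus with the customer.
    return max(UNIT_COST, UNIT_COST + SURPLUS_SHARE * (estimated_wtp - UNIT_COST))

for wtp in (70.0, 100.0, 150.0):
    print(wtp, "->", progressive_price(wtp))
# 70.0  -> 65.0   (below list: expands the served market)
# 100.0 -> 80.0
# 150.0 -> 105.0  (above list: captures surplus from high-WTP buyers)
```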

Algorithmic development

The evolution from revenue management in targeted industries (e.g., airlines, car rentals, and so on) to scenarios where firms simultaneously price abundant varieties of goods and services requires a cost-effective AI-based model incorporating flexible demand modeling and a revenue optimization algorithm (Shakya et al. 2010). Given the preponderance of personalized pricing in online marketplaces, revenue optimization further requires consideration of supply availability (in nearby warehouses) and fulfillment costs to the customer’s doorstep (Lei et al. 2018). The ubiquity of embedding sensors into IoT devices is driving the collection of massive data relating to consumer behavior (Manyika et al. 2011), creating a third-party market for data aggregation and opportunities for firms to maintain a competitive advantage through the evolution from ‘adaptive’ to ‘learning’ pricing algorithms that exploit real-time active improvements to optimize decision-making (Ezrachi and Stucke 2016b).
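In its simplest form, the pairing of flexible demand modeling with a revenue optimization algorithm reduces to estimating demand as a function of price and searching that function for the revenue-maximizing point. The sketch below is a minimal illustration (the linear demand parameters are hypothetical stand-ins for whatever a firm's estimation pipeline would produce) that also nets out a per-unit fulfillment cost, per the online-marketplace considerations above:

```python
# Minimal sketch of revenue optimization over a fitted demand model.
# The linear demand parameters are hypothetical placeholders for the
# output of a real demand-estimation pipeline.

def demand(price, intercept=200.0, slope=-8.0):
    """Fitted demand model: expected units sold at a given price."""
    return max(0.0, intercept + slope * price)

def optimize_price(candidates, fulfillment_cost=2.0):
    """Grid search for the price maximizing expected net revenue,
    accounting for per-unit cost of fulfillment to the customer."""
    return max(candidates, key=lambda p: (p - fulfillment_cost) * demand(p))

grid = [5.0 + 0.5 * i for i in range(41)]   # candidate prices 5.00-25.00
best = optimize_price(grid)
print(best, demand(best))                   # 13.5 92.0
```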

The transition to machine learning as the basis for algorithmic modeling requires a substantial increase in available data for training, validation, and testing (Heath 2018). Yet raw data used for training reflect human bias—whether explicit or by omission. A Google image search for “parents” yields photos of predominantly heterosexual couples, and a similar search for “nurse” results in a preponderance of women (Steimer 2018). Etlinger (2017) describes how algorithmic bias seeps into the popular press, citing several sensationalist news headlines from September 2017: “Computers can tell if you’re gay from photos,” “Google allowed advertisers to target people searching racist photos,” and “Amazon suggests users purchase dangerous item combinations” (p. 8). A McKinsey Quarterly report (2019) describes five key pain points that businesses should address to mitigate AI-based risks, emphasizing a distinction between factors that enable AI (data, technology, and security) and those inherent to the execution of AI (algorithmic models and human–computer interaction). Control mechanisms remain nascent, but center upon structured identification of the most critical risks, execution of broad enterprise-level governance, and reinforcement of specific protocols within critical infrastructure. Admittedly, incorporation of controls can decrease the predictive power of algorithmic models, but it concurrently increases transparency to improve confidence in human judgment.

Reactions from the public sector

European governmental bodies are leading efforts to regulate privacy rights and ensure well-functioning markets. Whereas it has been asserted that the European Union’s General Data Protection Regulation (GDPR) applies equally to personalized pricing (Zuiderveen Borgesius and Poort 2017), no precedent has yet been established at the Court of Justice of the European Union. The Consumers Council of Canada (2017) has outlined a comprehensive Consumer Protection Framework, advocating consumer education as a mitigating factor against dynamic pricing, while also acknowledging that consumers exhibit false overconfidence and a constrained ability to adequately convey their privacy needs. The USA has perhaps the least well-defined official articulation in response to the increased prevalence of personalized pricing, suggesting that while monitoring the application of big data is necessary, most issues of competition and consumer privacy abuse can be managed through existing legal frameworks and regulatory policies (Executive Office of the President 2015).

As public-sector regulations begin emerging to address legal and ethical issues relating to data privacy, scholars debate the extent to which AI as a broader concept should fit into various frameworks of oversight. Treleaven et al. (2019) propose a multidimensional framework consisting of “when-to regulation, where-to jurisdiction, and how-to technology” (pp. 32–33). They further advocate for technology-oriented solutions that include algorithm testing (verification and cross-validation), certification (comprehensive audit), and circuit breakers (fail-safe mechanisms deployed during unexpected system difficulties). The European Commission (2019) has established an independent High-Level Expert Group on Artificial Intelligence (AI HLEG) to support the implementation of a vision enabling Europe to emerge as the world leader in ethical AI and related technologies. Central to this vision is “trustworthy AI” (p. 5), which must be lawful, ethical, and robust (in both the technical context and the social sense in which AI shall not cause unintentional harm). The emerging field of explainable AI (XAI) may hold significant value by eliminating the black box stigma associated with algorithmic modeling. However, the nature of machine learning (more specifically, neural network and deep learning approaches) makes interpretability a significant challenge.
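Treleaven et al.’s “circuit breakers” can be pictured as a thin fail-safe layer wrapped around the pricing model. The sketch below is our own hypothetical construction of that idea (the bounds and thresholds are invented policy parameters): anomalous algorithmic price proposals are held at the prior price and escalated to human review rather than executed autonomously.

```python
# Hypothetical sketch of a "circuit breaker" fail-safe around an
# algorithmic pricing model: out-of-bounds or abnormally large price
# moves are blocked and flagged for human review.

MAX_STEP = 0.10   # assumed policy: max fractional move per decision

def circuit_breaker(current_price, proposed_price, floor, ceiling):
    """Return (price_to_publish, needs_human_review)."""
    # Trip on out-of-bounds proposals (e.g., surge during an emergency).
    if not (floor <= proposed_price <= ceiling):
        return current_price, True
    # Trip on abnormally large single-step moves.
    if abs(proposed_price - current_price) / current_price > MAX_STEP:
        return current_price, True
    return proposed_price, False

price, review = circuit_breaker(
    current_price=29.99, proposed_price=36.49, floor=19.99, ceiling=39.99)
print(price, review)   # 29.99 True -- held at prior price, flagged
```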

Conclusion: a research challenge

The pricing function grows increasingly complex as algorithmic models utilize personal data to make sophisticated predictions that optimize price levels. When deployed in conjunction with human judgment and reasonable controls that account for ethical considerations and legal constraints, artificial intelligence at the heart of decision-making can provide win–win outcomes for both organizations and consumers. Without relevant oversight, as countless dystopian headlines attest, algorithms are subject to bias, discriminatory predictions, third-party tampering, and autonomous collusion. Our conceptual model, informed by extant literature and practitioner experience, seeks to build a foundation from which subsequent qualitative research can compare consumers’ ethical and legal schemata when confronting algorithmic personalized pricing with the organizational schema considered by firms adopting such practices.

Accordingly, we conclude our discussion by outlining a qualitative study that uses our preliminary conceptual model to address our core research questions.

Proposed study

To begin examining these crucial issues, we propose a two-phase qualitative study using a hermeneutic phenomenological analysis method to determine what constraints, considerations, and factors influence the decision schema in the context of using algorithmic pricing models. Phenomenological analysis is an appropriate method for the researcher to describe what is real through the direct lived experience and essence of participants’ perceptions (Merleau-Ponty 1962). Gadamer (1976) elaborates that hermeneutic reflection requires that researcher prejudgments and preunderstanding remain “constantly at stake” (p. 38) so that prejudices may be set aside based on “what the text says to us” (p. xviii).

The first phase of this study will use a focus group technique (Krueger 1994) to examine the attitudes and perceptions of retail consumers toward the ethical considerations of deception, fairness, and social justice in pricing. The second phase of this study will couple the legal constraints hypothesized by the conceptual model to those emergent ethical considerations from the consumer focus groups. A series of semi-structured interviews will illustrate the lived experiences of senior pricing managers and top executives who make price-setting or price-changing decisions. Transcripts from the interviews will be coded (through open, axial, and selective phases) to elicit common themes and categories that will serve to validate or refute our propositions. The overall objective of this research effort is to reveal whether firms avoid potential unethical or illegal behavioral targeting through a manifested process within their pricing function.

Implications for future research

Without speculating toward the far future, there are some natural extensions of algorithmic pricing techniques that warrant further research. As Ezrachi and Stucke (2016b) note, the appeal for ex-ante monitoring of algorithm design may become feasible as auditors develop a higher degree of sophistication. A further consideration could acknowledge the use of countervailing technologies to disrupt the cycle of algorithmic self-learning or negate its impact altogether.

Many questions deserve consideration following the implementation of GDPR. It is possible that consumers’ behaviors toward reviewing privacy policies have changed, or that third parties have utilized transparency disclosures to illuminate the prevalence and salience of personalized pricing algorithms. There also remains a dearth of literature on the potential discriminatory impacts of personalized pricing—particularly in combination with big data analytics, AI, and machine learning algorithms. Finally, recognizing the growing complexity and creeping normality of personalized pricing, it seems that further consideration of consumer attitudes toward discriminatory pricing techniques is warranted.