Abstract
Despite offering substantial opportunities to tailor services to consumers' wants and needs, artificial intelligence (AI) technologies often come with ethical and operational challenges. One salient instance of such challenges emerges when vulnerable consumers (consumers who temporarily or permanently lack resource access or control) are unknowingly discriminated against or excluded from the marketplace. By integrating the literature on consumer vulnerability, AI for social good, and the calls for rethinking marketing for a better world, the current work develops a framework for leveraging AI technologies to detect, better serve, and empower vulnerable consumers. Specifically, our AID framework advocates designing AI technologies that make services more accessible, interactively optimize customer experiences and journeys, and dynamically improve consumer decision-making. Adopting a multi-stakeholder perspective, we also discuss the respective implications for researchers, managers, consumers, and public policy makers.
Widespread application of artificial intelligence (AI) is fundamentally reshaping the way companies conduct business and provide their services (Davenport et al., 2020; Huang & Rust, 2018, 2021a, b; Kopalle et al., 2022; Mariani et al., 2022; van Doorn et al., 2023). Consumers are increasingly encountering AI in a broad range of personal, social, and professional contexts, for instance, when utilizing AI-driven products and services (e.g., customer support chatbots, intelligent personal assistants). Due to its advantages, including increased efficiency and cost reduction (Brynjolfsson et al., 2023; Huang & Rust, 2021a; Kunz & Wirtz, 2023), the adoption of AI technologies by companies is increasing rapidly. Global corporate investments in AI surpassed $100 billion in 2020 (Zhang et al., 2022), and this growth trajectory is continuing, reaching $176.47 billion in 2021, compared to $5.23 billion invested in 2013 (Zhang et al., 2022).
AI applications in the marketing and service industries mainly concentrate on two approaches: fine-grained, data-driven tailoring of offerings and interactions to customers' wants, needs, and interests that is either marketer-initiated (i.e., personalization) or customer-initiated (i.e., customization). Even though service research (e.g., AI-based, personalized communication; Mende et al., 2023) and practice (e.g., AI-powered digital visual assistants for instantaneous image-to-text generation; Be My Eyes, 2023) have recently started to consider the potential of AI technologies to better serve vulnerable consumers, the extensive advantages AI technologies can offer remain underutilized. Moreover, many companies may be missing an opportunity to serve their vulnerable consumers more effectively, due to the prevalent misconception that consumer vulnerability is a status confined to a relatively small consumer segment.
Contrary to this misconception, consumer vulnerability is defined as a dynamic state of powerlessness (Baker et al., 2005) and susceptibility to harm (Hill & Sharma, 2020; Salisbury et al., 2023), which can pertain to any consumer. Considering the proliferation of national and supranational laws and regulations aiming to protect vulnerable consumers, it has become a pressing imperative for companies to learn how best to interact with these consumers. Since consumer vulnerability is often conceptualized as a state and transcends consumers’ status (e.g., visually impaired, elderly, or obese consumers; Baker et al., 2005; Hill & Sharma, 2020), it can pose challenges for companies to detect vulnerable consumers, to address their unique needs when serving them, to prevent or mitigate potential discrimination and inequalities, and to promote social justice. Fortunately, technological advancements, particularly in the realm of AI, have empowered companies to adopt socially responsible business practices and improve their services to better cater to vulnerable consumers.
The adoption of and investment in AI technologies to better serve vulnerable consumers can be resource-intensive for companies and necessitate substantial time and financial commitment. Such endeavors may also entail managing trade-offs between a company’s economic objectives (e.g., greater estimated profits by serving “mainstream” consumers, cost of developing new systems for potentially small consumer segments) and their social objectives (e.g., better serving vulnerable consumers, countering discrimination and inequalities, fostering social justice). Given the highly dynamic state of consumer vulnerability, however, it becomes apparent that any “mainstream” consumer could experience vulnerability under specific circumstances. Thus, being equipped to adeptly identify and serve vulnerable consumers could benefit the entire customer base. Furthermore, overlooking the (dynamically) changing needs of consumers when they experience vulnerability can yield adverse effects on customer satisfaction, word-of-mouth behavior, and corporate reputation. Overall, the business case, that is, the alignment and simultaneous achievement of economic and social objectives (Siltaloppi et al., 2020; Van der Byl & Slawinski, 2015), becomes more likely, and upfront investments in developing AI technologies are likely to be amortized as well.
In response to the calls for leveraging both marketing (e.g., Chandy et al., 2021; Madan et al., 2023; Mende & Scott, 2021) and AI for social good and sustainable development (e.g., Cowls et al., 2021; Du & Sen, 2023; Floridi et al., 2018, 2020; Vinuesa et al., 2020), the current paper aims to explore the role of AI in enhancing services (and outcomes) for vulnerable consumers. It also seeks to offer guidance to businesses on best practices for utilizing AI in interactions with vulnerable consumers. In doing so, we develop a framework that conceptualizes the key qualities of AI technologies in relation to serving vulnerable consumers. Specifically, our AID framework highlights how accessible, interactive, and dynamic AI technologies can empower vulnerable consumers by providing accessible services, optimizing service experiences, and enhancing consumer decision-making.
Thereby, our work makes two main contributions to the literature. First, previous research in marketing has predominantly focused on studying mainstream consumers and their experiences with AI technologies (e.g., Castelo et al., 2019a, b, 2023; Longoni & Cian, 2022). To the best of our knowledge, however, this literature has neglected to study potentially marginalized, vulnerable consumer groups and their interactions with AI technologies, or has taken a limited status-based perspective (e.g., race; Poole et al., 2021). Moreover, this line of work has primarily examined consumers’ responses to AI (i.e., demand-side perspective), but has overlooked how companies can effectively design and integrate AI into their services (i.e., supply-side perspective). To address this gap, our AID framework combines insights from research on the psychology of AI (e.g., Longoni et al., 2019; Puntoni et al., 2021), the literature on consumer vulnerability (e.g., Baker et al., 2005; Hill & Sharma, 2020), and scholarly work on AI for social good (e.g., Cowls et al., 2021; Floridi et al., 2018, 2020). In response to the calls for rethinking marketing for a better world and the greater good (Chandy et al., 2021; Madan et al., 2023; Mende & Scott, 2021), we illustrate how companies can harness AI technologies to empower vulnerable consumers and deliver socially beneficial outcomes by mitigating digital inequalities and the digital divide (e.g., Lythreatis et al., 2022; Ragnedda, 2018; Wei et al., 2011), thereby aligning with the Sustainable Development Goals 3 (SDG 3) “Good Health and Well-Being,” and 10 (SDG 10) “Reducing Inequalities” (Cowls et al., 2021; Vinuesa et al., 2020). This contribution also carries importance as the current number of AI initiatives targeting SDG 10 remains limited (Cowls et al., 2021)—despite the potential for pursuing SDG 10 to yield compounding positive effects on all other goals (Lusseau & Mancini, 2019) and AI’s ability to act as a catalyst for achieving the SDGs (Du & Sen, 2023; Vinuesa et al., 2020). Overall, companies have the opportunity not only to benefit society at large by better serving vulnerable consumers, thereby reducing inequalities, and enhancing well-being, but also to uphold the ethical principle of justice when developing technology-based services.
Second, answering the call for marketing research to acknowledge the interrelatedness of stakeholders (Hillebrand et al., 2015), our work provides a multi-stakeholder perspective and addresses the implications and challenges tied to the design, development, and deployment of AI technologies for various stakeholders, including researchers, managers, consumers, and policy makers. Despite the increasing adoption of AI technologies for engaging with vulnerable consumers, many companies face difficulties in designing or adapting their service offerings to aptly address the needs of vulnerable consumers. This challenge is further amplified by the escalating number of laws and regulations aimed at protecting these consumers (Financial Conduct Authority, 2021; International Organization for Standardization, 2022). To address this issue, the present paper introduces a simple yet effective framework for designing and developing AI-driven service applications and systems. Specifically, our AID framework aims to assist companies in designing their AI technologies to effectively cater to the unique requirements of vulnerable consumers and to enhance the customer experience at different touchpoints along the customer journey.
The remainder of the paper is structured as follows. After shedding light on the consumer vulnerability concept and vulnerable consumers in the age of AI, we illustrate our AID framework and discuss implications and challenges for researchers, managers, consumers, and policy makers.
Consumer vulnerability
Consumer vulnerability is defined as “a dynamic state that varies along a continuum as people experience more or less susceptibility to harm, due to varying conditions and circumstances” (Salisbury et al., 2023, p. 6), which inhibits their optimal functioning and individual agency in the marketplace (Baker et al., 2005). Hence, consumer vulnerability constitutes a state rather than a fixed status (Baker et al., 2005; Hill & Sharma, 2020) and cannot be simply equated with specific demographic or personal characteristics (e.g., elderly, blind, lower income, illiterate), stigmatization, or unmet needs. Instead, any consumer can experience vulnerability in the marketplace, regardless of their status or demographic characteristics (Shultz & Holbrook, 2009; Wünderlich et al., 2020).
For a multitude of reasons, millions of consumers experience vulnerability every day, which subsequently affects their decision-making capacity, purchase decisions, and overall behaviors. For example, according to the Financial Lives 2022 survey, 47% of adults in the United Kingdom exhibited one or more characteristics of vulnerability (i.e., health, resilience, capability, life events; Financial Conduct Authority, 2022). A poignant instance of a life event that can render consumers vulnerable is the loss of a loved one: Amidst the myriad formalities, including collecting life insurance or arranging a funeral, those grieving may also feel overwhelmed as they grapple with bereavement. For example, they can struggle to comprehend information presented to them or even to make decisions. Beyond the impact of consumers’ moods and psychological states (e.g., going through depression after a painful break-up), vulnerability can also temporarily arise from other conditions, such as short-term financial constraints (e.g., prior to receiving wages) or a lack of self-control (e.g., engaging in compulsive buying on Black Friday, stress eating). Importantly, these conditions can lead to vulnerability in distinct ways and dynamically shape consumers’ functioning within the marketplace.
The concept of consumer vulnerability is not without controversy, and recent research has distilled limited access to and control over resources as the key antecedents of consumer vulnerability, both of which can manifest through personal experience or observation (Hill & Sharma, 2020; Pavia & Mason, 2014). In essence, consumer vulnerability is “a state in which consumers are subject to harm because their access to and control over resources is restricted in ways that significantly inhibit their abilities to function in the marketplace” (Hill & Sharma, 2020, p. 554). Table 1 provides an overview, along with examples, of the limited resources and restricted control linked with consumer vulnerability, operating at the individual (i.e., consumers’ self-related assets, abilities, and sources of control), interpersonal (i.e., social factors, interactions, and sources of control), and structural (i.e., marketplace and external factors and sources of control) levels.
Conceptualizing consumer vulnerability as a state recognizes its potential to vary in terms of intensity (e.g., more vs. less extreme states of vulnerability), nature (e.g., physical, psychological), and duration (e.g., temporary vs. permanent; Hill & Sharma, 2020). Furthermore, while certain resource-related antecedents of consumer vulnerability are more directly (e.g., lack of material/financial resources due to economic downturns) or easily detectable (e.g., lack of physical resources due to physical impairments), others might not be immediately apparent (e.g., family breakdown, depression, addictions). Additionally, it is important to acknowledge that constraints on resource access and control can extend to the social network (e.g., family members nursing or taking care of vulnerable consumers) or support group of the original (i.e., primary) vulnerable person (Pavia & Mason, 2014). Secondary vulnerability results from the experiences of vulnerability faced by the primary consumer and can lead to diverse service needs of secondary vulnerable consumers, whether other-related (e.g., concern for the primary consumer’s well-being) or self-related (e.g., emotional support, provision of adequate information; Leino et al., 2021). Given the interrelatedness of needs and well-being between primary and secondary consumers, it becomes imperative for companies to understand the holistic spectrum of antecedents and consequences of vulnerabilities, while catering to the needs of both consumer groups (Leino, 2017; Leino et al., 2021).
Due to the dynamic and broad nature of vulnerability, companies often encounter difficulties in efficiently and effectively identifying and serving vulnerable customers. For instance, a recent survey conducted among equity release advisers revealed that only 12% of the advisers considered it easy to identify vulnerable clients, despite 84% of them stating that identifying vulnerable consumers is one of their biggest priorities (Jones, 2020). Historically, companies predominantly relied on status-based criteria such as demographics (e.g., elderly, low-income groups) or self-reported indicators (e.g., consumers explicitly identifying themselves as vulnerable). However, with advancements in AI technologies, novel approaches have surfaced for detecting these consumers and mitigating their vulnerabilities, enabling companies to address consumer vulnerabilities effectively and efficiently. For example, AI-powered customer base analysis (Valendin et al., 2022) can facilitate the generation of predictive models to gauge the likelihood of consumer vulnerability across various service contexts and to suggest tailored intervention strategies based on the calculated risk and identified category of vulnerability. Nevertheless, the implementation of AI technologies can necessitate substantial resources (e.g., time, labor, money) and potentially entail significant alterations to services and marketplaces, leading to tensions between a company’s economic objectives and its commitment to consumer well-being. Thus, a pertinent question arises: Why should companies pay attention to and invest in mitigating consumer vulnerability and integrating AI systems when managing customer relationships?
First, companies have a legal obligation to treat vulnerable consumers fairly, as outlined by consumer rights and regulation (Larsen & Lawson, 2013). For instance, the European Union (EU) Unfair Commercial Practices Directive (Article 5) prohibits commercial practices that distort the economic behavior of vulnerable consumers, and the EU Artificial Intelligence Act bars the “use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability” (European Parliament, 2023). Moreover, there are industry-specific regulations in place to ensure the fair treatment of vulnerable consumers, for instance, by the International Organization for Standardization (ISO 22458:2022) or the United Kingdom’s Financial Conduct Authority (Financial Conduct Authority, 2021; International Organization for Standardization, 2022). The latter states that firms should take action (a) to understand the needs of vulnerable consumers, (b) to develop the right skills and capabilities of their staff to recognize and respond to these needs, (c) to respond to these needs throughout product design, flexible service provision, and communications, and (d) to monitor and assess whether needs are met and responded to. Notably, advancements in AI technologies provide companies with significant avenues to actively contribute to and support all of these actions.
Second, by serving vulnerable consumers better, companies can reap substantial financial benefits. While it involves a short-term investment, this approach can yield long-term benefits by meticulously addressing the dynamically changing wants and needs of consumers who might be subject to primary or secondary vulnerability. As a result, companies stand to improve customer satisfaction and loyalty, ultimately fueling profitability and bolstering brand equity. Given the closely intertwined nature of customer satisfaction and loyalty, additional positive spillover and multiplier effects can be expected to emerge (Leino et al., 2021).
Third, granting vulnerable consumers marketplace access and alleviating consumer vulnerability carries significant societal implications: it directly contributes to improving societal well-being and reduces inequalities in market participation, healthcare, and employment, among other benefits (Wünderlich et al., 2020). Companies that strive to mitigate, resolve, or at best prevent consumer vulnerability play a role in mitigating social inequalities, thereby actively contributing to a central antecedent of consumer well-being—social justice (Anderson et al., 2013; Fisk et al., 2018; Johns & Davey, 2019). Importantly, justice plays a pivotal role in the development and deployment of AI (Floridi et al., 2018; Jobin et al., 2019; Morley et al., 2020). The justice principle advocates fairness, the avoidance of unwanted biases and discrimination, the equitable sharing of benefits, and the cultivation of solidarity (Jobin et al., 2019; Morley et al., 2020; Thiebes et al., 2021), all working to ultimately strengthen social cohesion (Jobin et al., 2019).
Overall, companies can face tensions between the social objective of better accounting for the dynamic states of consumer vulnerability to foster social justice and consumer well-being, and economic objectives (e.g., profitability; Hahn et al., 2010; Van der Byl & Slawinski, 2015). Our work highlights that addressing consumer vulnerability does not necessarily imply an enduring trade-off or a zero-sum situation. While some consumers may be more predisposed to experiencing vulnerability in their daily lives, vulnerability can affect any consumer, both directly and indirectly. Accordingly, companies should be equipped to navigate the dynamic occurrence of vulnerability among consumers (i.e., primary vulnerability) and within their social networks and support groups (i.e., secondary vulnerability), and consider integrating AI technologies into consumer interactions to detect and serve them more effectively and efficiently. We further argue that the high degree of service personalization and customer centricity suggested by our AID framework not only precludes the “alienation of mainstream consumers,” but, conversely, can yield substantial benefits for them. Hence, companies’ socially responsible efforts in mitigating consumer vulnerability can lead to positive responses from all consumers, irrespective of their vulnerability state, while also capitalizing on win–win outcomes facilitated by AI’s integration within services, including profit generation for companies, enhanced reputation and corporate image, and doing good for society as a whole (Chandy et al., 2021).
Before we present our AID framework, we briefly illustrate the role of AI in marketing and consumers’ responses to it.
(Vulnerable) Consumers in the age of AI
AI can be understood as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (Kaplan & Haenlein, 2019, p. 17). With its extensive potential for personalization and customization, AI is progressively integrated into numerous marketing activities including decisions about products, services, prices, communication, and distribution (i.e., the marketing mix; Davenport et al., 2020; Huang & Rust, 2021a) throughout the entire customer journey (Hoyer et al., 2020) and the process of service creation, delivery, and interaction (Huang & Rust, 2021b). As a result of companies embracing AI, its influence extends to shaping how consumers think, feel, and behave (Puntoni et al., 2021). Today, AI technologies enable tracking consumers’ fitness activities, giving consumers recommendations on what to buy, responding to consumers’ requests and complaints, or even preparing their cocktails at a bar.
In response to AI’s transformative influence on business and daily activities, marketing scholars have been investigating the role of AI in shaping consumer experiences. As AI wields substantial power to (re-)shape business and social environments, personal interactions, workplaces, and human agency, it presents valuable opportunities for companies with respect to marketing strategy and actions (e.g., Huang & Rust, 2021a), services (e.g., Blut et al., 2021; Huang & Rust, 2021b; Mende et al., 2019; Xiao & Kumar, 2021), retailing (e.g., de Bellis & Venkataramani Johar, 2020; Guha et al., 2021; Shankar, 2018), customer experience (e.g., Grewal et al., 2020c; Hoyer et al., 2020; Puntoni et al., 2021), relationships (e.g., Libai et al., 2020), and engagement (e.g., Kumar et al., 2019). Despite the evident benefits of AI-driven personalization and customization, it is important to acknowledge that AI can treat consumer segments or individual consumers differently on the basis of demographic, psychological, and economic factors (Du & Sen, 2023), thereby potentially giving rise to consumer vulnerability. Such potentially discriminatory marketing methods raise substantial ethical concerns (e.g., De Bruyn et al., 2020; Du & Xie, 2021; Hermann, 2022), particularly for vulnerable consumers (Argawal et al., 2020). Notably, AI technologies that impact or exploit consumer vulnerability could lead to manipulation, decrease autonomy, or change consumer behavior in ways that are not in consumers’ best interest (Strümke et al., 2023).
Marketing researchers have been increasingly studying consumers’ reactions to algorithmic versus human decision-making and the underlying psychological processes. The overarching finding within this body of literature is that consumers’ reactions towards algorithms and AI versus humans depend on various factors. For example, consumers react less positively when AI makes morally relevant trade-offs (Dietvorst & Bartels, 2022), when it makes favorable (vs. unfavorable) decisions about them (Yalcin et al., 2022), when it makes offers that are worse than expected (Garvey et al., 2023), or when its use is perceived as being motivated by firm benefits at the expense of customer benefits (Castelo et al., 2023). Moreover, consumers are found to perceive algorithms and AI to be less authentic (Jago, 2019; Jago et al., 2022) and less moral (Bigman & Gray, 2018; Giroux et al., 2022) than a human, and to neglect their unique characteristics (Longoni et al., 2019). Conversely, there are also circumstances under which consumers react more positively towards AI. For instance, consumers respond positively to AI in embarrassing service encounters (Holthöwer & van Doorn, 2023; Pitardi et al., 2022), when their needs are more certain (Zhu et al., 2022), and when the task at hand is objective (vs. subjective; Castelo et al., 2019a, b).
To the best of our knowledge, however, empirical (demand-side) marketing research on AI and algorithmic decision-making has either neglected to investigate vulnerable consumers’ reactions to and interactions with AI technologies or has adopted a limited status-based perspective on consumer vulnerability by concentrating on specific groups of vulnerable consumers. For instance, previous research demonstrated that intelligent personal assistants can help consumers with disabilities regain independence and freedom (Ramadan et al., 2021; Vieira et al., 2022). Conversely, older consumers can feel socially excluded and inadequately skilled when using retail technology autonomously (Pantano et al., 2022). Moreover, the use of AI-based service providers that match vulnerability attributes (i.e., obesity) can inadvertently offend consumers, as they might perceive that AI, compared to fellow humans, cannot relate to their (human) experience of living with such vulnerability attributes (Mende et al., 2023).
In our AID framework, we aim to complement prior work by providing a supply-side perspective and strategic guidance for companies to leverage AI technologies, thereby improving vulnerable consumers’ interactions with AI and ultimately enhancing their well-being. Specifically, we argue that AI can aid vulnerable consumers, be harnessed for social good, and reduce inequalities when the technology itself, and the services it powers, are accessible, interactive, and dynamic. In the following, we delve into these three qualities and discuss how they should shape the development and deployment of AI for service provision and innovation.
Designing AI in service to aid vulnerable consumers: AID framework
AI technologies in service can be a double-edged sword for vulnerable consumers, with their effectiveness hinging on how and why they are implemented. On the one hand, there may be barriers to adoption (e.g., ease of use, affordability, technology readiness, perceived risk; de Bellis & Venkataramani Johar, 2020; Lee & Coughlin, 2015), which can be amplified by the inherently high-tech nature of AI. For example, a vulnerable consumer might hesitate to interact with a simple chatbot due to its standardized responses or its perceived lack of receptiveness to their diminished material (e.g., recent job loss) or emotional (e.g., grief) resources. On the other hand, when implemented adeptly, AI has the potential to render services more accessible, to interactively ameliorate customer experiences and journeys in services, and to dynamically improve consumer decision-making. Returning to the earlier example, AI can assist service personnel in identifying vulnerable consumers by detecting subtle cues (i.e., “feeling AI;” Huang & Rust, 2021a, b) that human employees might overlook and by guiding them in devising a suitable and effective strategy for catering to the needs of vulnerable consumers.
In the following, we outline how AI technology can improve vulnerable consumers’ experiences and introduce our AID framework. In doing so, we adopt a conceptualization of AI attributes that aligns with the sequential steps of the service process and the customer journey, along with the different levels of digital inequalities. Specifically, and as shown in Fig. 1, we propose that accessible AI technologies and services are the precondition for interactive customer experiences and journeys, which, in turn, allow service providers to dynamically assist vulnerable consumers in making beneficial decisions. These qualities further address the three levels of digital inequalities, that is, inequalities in access (first level), uses (second level), and outcomes (third level; Lutz, 2019; Ragnedda, 2018; Wei et al., 2011).
Accessible
The increasing reliance on AI technologies in services (e.g., Blut et al., 2021; Huang & Rust, 2021b) can unintentionally contribute to the technology and digital inequality phenomenon or the digital divide (i.e., societal-level inequalities of digital access; Fisk et al., 2022; or first-level digital inequality; Wei et al., 2011), particularly affecting vulnerable consumers (Argawal et al., 2020; Grewal et al., 2020a; Lu & Sinha, 2023; Pantano et al., 2022). Therefore, it is imperative that AI technologies do not themselves become barriers to service access and sources of vulnerability (i.e., structural antecedents of consumer vulnerability), but rather are intentionally designed to facilitate access. Within the scope of our work, accessible refers to the unhampered possibility and capability to use and engage with AI technologies in services. The initial step towards addressing and alleviating general and technology-induced access barriers to services is to identify and predict potential states or antecedents of consumer vulnerability.
Service agents or frontline employees are often unaware when they are interacting with vulnerable consumers. According to a survey conducted by the Data & Marketing Association’s Contact Centre Council, only 4% of service agents indicated that they always recognize when they speak with vulnerable consumers (Lee & Workman, 2018). Detecting vulnerable consumers can be challenging due to various factors, such as time constraints, subtle cues exhibited by consumers, or simply a lack of attention. AI technologies, however, can overcome these (human) shortcomings by leveraging their computational power and machine learning algorithms to perform real-time analysis of consumer responses for more accurate and swift identification and prediction of vulnerability states. For example, companies like Aveni (Aveni, 2022) and Key (Key, 2022) incorporate AI solutions that employ Natural Language Processing (NLP) and big datasets to analyze customer interactions and determine the presence and type of consumer vulnerability. In instances where a consumer introduces flagged topics (e.g., anxiety, avolition) during a conversation or exhibits flagged behavioral patterns (e.g., signals of continuing confusion), an alert is sent to the customer representative. Figure 1 depicts a simplified illustration of the consumer vulnerability identification and prediction process. AI technologies analyze consumer data related to material, physical, cognitive, and emotional resources, or proxies thereof (e.g., visual data such as movements; conversational/language data and response behavior; numerical data such as purchase history or transactional data). These data are then compared against pre-defined default values (of what is deemed “mainstream”), and any deviations or anomalies are flagged to identify and/or make model-based predictions of consumer vulnerability. The outcomes of this process help determine whether oversight by, and the involvement of, human service employees is necessary for consumers who need special attention and/or treatment. Just as learning and feedback loops are pivotal to AI systems, the identification and prediction outcomes should similarly guide and continually enhance the process of identifying and predicting consumer vulnerability. Data on factors that are likely to induce or contribute to consumer vulnerability also hold significant informative and strategic value for companies to prevent and proactively address (future) instances of consumer vulnerability.
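To make this flagging and routing logic concrete, the following minimal sketch (in Python) illustrates how observed interaction signals could be compared against pre-defined default values and escalated to a human employee once deviations accumulate. All signal names, baselines, and thresholds are hypothetical assumptions for illustration; the sketch simplifies the process described above and does not reproduce any vendor’s system.

```python
from dataclasses import dataclass, field

# Hypothetical "mainstream" baselines and tolerated deviations; a real
# system would calibrate these from historical customer data.
BASELINE = {
    "repeat_questions_per_call": 1.0,   # signals of continuing confusion
    "negative_sentiment_score": 0.2,    # 0 = neutral, 1 = highly negative
    "weekly_spending_change_pct": 0.0,  # proxy for material resources
}
TOLERANCE = {
    "repeat_questions_per_call": 2.0,
    "negative_sentiment_score": 0.3,
    "weekly_spending_change_pct": 40.0,
}

@dataclass
class Interaction:
    customer_id: str
    signals: dict = field(default_factory=dict)

def flag_deviations(interaction: Interaction) -> list[str]:
    """Flag any signal deviating from its baseline by more than the
    tolerated amount (a simple stand-in for model-based prediction)."""
    return [
        name
        for name, value in interaction.signals.items()
        if name in BASELINE and abs(value - BASELINE[name]) > TOLERANCE[name]
    ]

def route(interaction: Interaction) -> str:
    """Decide whether a human service employee should be involved."""
    flags = flag_deviations(interaction)
    if len(flags) >= 2:
        return "alert_human_agent"  # human oversight for special attention
    if flags:
        return "monitor_and_log"    # feeds the learning and feedback loop
    return "standard_service"

# A caller who keeps repeating questions and sounds distressed:
print(route(Interaction("c-102", {
    "repeat_questions_per_call": 4.0,  # deviation 3.0 > 2.0 -> flagged
    "negative_sentiment_score": 0.7,   # deviation 0.5 > 0.3 -> flagged
})))  # -> alert_human_agent
```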
AI technologies, in combination with the abundance of available digital, mobile, and transaction data, have the potential to identify consumer groups with psychological and emotional impairments by predicting both consumers’ psychological traits (variability across consumers, such as personality traits, values, and cognitive styles) and states (variability within consumers over time, such as mood, emotions, and attention; e.g., Gladstone et al., 2019; Matz & Netzer, 2017; Matz et al., 2017; Stachl et al., 2020; Youyou et al., 2015). This enables “an unprecedented understanding of consumers’ unique needs as they relate to the situation-specific expressions of more stable motivations and preferences” (Matz & Netzer, 2017, p. 9). Recent advances in large language models have even demonstrated the potential for predicting and diagnosing diseases, such as dementia (e.g., Agbavor & Liang, 2022). In addition to vulnerability identification and prediction, AI-driven solutions like intelligent voice assistants can mitigate access barriers. They empower blind or visually impaired consumers to recognize objects, barcodes, and products (e.g., Aipoly, 2022; Microsoft, 2022), perceive the visual world through an auditory experience (e.g., Microsoft, 2022), and access and browse websites (e.g., User Way, 2022). These solutions address limitations in individual resources that hinder online and offline service access and participation in the marketplace (e.g., digital illiteracy, visual impairments) or control over them (e.g., difficulties in obtaining or assimilating information).
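To illustrate the idea of predicting a transient vulnerability state from digital-footprint data mentioned above, the sketch below fits a simple classifier on invented behavioral features. The features, the training data, and the availability of self-reported mood labels are assumptions made purely for illustration; deployed systems would rely on far richer data and validated labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: consumers; columns: [late-night app use (hrs/day), days since last
# social contact, week-over-week spending change (%)]. Invented data.
X = np.array([
    [0.5, 1, 5], [2.5, 9, -40], [0.2, 2, 10], [3.0, 12, -55], [1.0, 3, 0],
])
y = np.array([0, 1, 0, 1, 0])  # 1 = later self-reported low mood (assumed label)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[2.8, 10, -35]])[0, 1]
print(f"Estimated probability of a low-mood state: {risk:.2f}")
```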
AI-enabled access to services constitutes a pivotal step in remedying consumer vulnerability (Shultz & Holbrook, 2009). Offering access to vulnerable consumers provides companies with benefits that extend beyond expanding their customer base and increasing marketplace participation. First, the potential for using and interacting with AI technologies in service can enhance customer satisfaction and loyalty (e.g., Mehta et al., 2022) for both primary and secondary vulnerable consumers. Second, the unbiasedness, validity, and accuracy of AI predictions rest upon the quality, integrity, and representativeness of the input data (e.g., Barredo Arrieta et al., 2020; Morley et al., 2020). For instance, healthcare communication and services may predominantly cater to older consumers, while certain healthcare issues increasingly affect younger consumers and their significant others (i.e., secondary vulnerability). Similarly, mental health and psychological distress issues might be disproportionately linked to specific demographic characteristics, such as education level, socio-economic status, or employment status. By granting access to vulnerable consumers, companies can gather relevant purchase and interaction data, enriching their customer databases in terms of scale, scope, and attributes. Consequently, they can mitigate potential underrepresentation or misrepresentation of vulnerable consumers in the data employed to train AI models, prevent biased predictions and prejudiced treatments of vulnerable consumers, and eventually uphold the justice principle of AI ethics.
Interactive
Recognizing that reducing service access barriers is just the initial step, it is essential to complement this with the integration of interactive AI service technologies that facilitate, smooth, and simplify the vulnerable customer journey across all touchpoints, considering their limited resource access and control (see Fig. 1; Wünderlich et al., 2020). In essence, interactive implies responsive communication and personalized treatments that accommodate the specific and dynamic manifestations of consumer vulnerability. Since the state-based nature of consumer vulnerability implies that vulnerability can arise during service interactions, continuous efforts are required for its identification and prediction. For example, AI systems can analyze customers’ voices to predict their mood based on factors such as their music preferences (e.g., Halbauer & Klarmann, 2022) or voice characteristics (e.g., a trembling voice), and provide them with emotional support (e.g., Gelbrich et al., 2021; Sharma et al., 2023). Solutions like NICE Enlighten (NICE, 2022) can leverage AI and comprehensive interaction data to identify states of vulnerability (e.g., a lack of financial resources), and provide frontline employees with coaching and guidance on how to interact with the respective customers. Similarly, companies can deploy AI-enabled human enhancement technologies (e.g., augmented hearing, augmented vision, emotion detection) for frontline employees, enabling them to improve interactions with physically impaired consumers and to better respond to the emotions of vulnerable consumers (e.g., Grewal et al., 2020c; Henkel et al., 2020; Marinova et al., 2017; Sharma et al., 2023).
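A minimal sketch of how voice characteristics could feed into real-time coaching for frontline employees follows; the features, the 150-words-per-minute norm, and the cut-offs are invented, and a deployed system would use trained models rather than fixed rules.

```python
import statistics

def estimate_distress(pitch_hz: list[float], words_per_minute: float) -> float:
    """Crude distress score from voice features (illustrative only): high
    pitch variability (e.g., a trembling voice) and unusually slow or
    fast speech both raise the score."""
    pitch_variability = statistics.pstdev(pitch_hz)     # Hz
    pace_deviation = abs(words_per_minute - 150) / 150  # 150 wpm as rough norm
    return min(1.0, 0.01 * pitch_variability + pace_deviation)

def coach_agent(distress: float) -> str:
    """Translate the estimate into coaching guidance for the employee."""
    if distress > 0.7:
        return "Slow down, acknowledge emotions, offer to pause or call back."
    if distress > 0.4:
        return "Use simple language and confirm understanding at each step."
    return "Proceed with the standard script."

print(coach_agent(estimate_distress([180, 230, 170, 260], words_per_minute=95)))
```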
Moreover, interactive AI technologies implemented in-store, such as smart shelves, self-checkouts, price tags, and augmented reality (e.g., Dekimpe et al., 2020; Grewal et al., 2020b; Thorun & Diels, 2020; van Esch et al., 2021) can streamline service processes for vulnerable consumers, foster their autonomy, and improve the overall customer experience. In a similar way, harnessing AI-driven service robots to assist and support physically, cognitively, emotionally, or mentally impaired consumers can provide vulnerable consumers with a range of communication options (e.g., in written or oral form, by choosing options on digital screens), enabling them to express themselves effectively and efficiently. Importantly, AI-powered service robots are not bound by the same time or effort restrictions as human service employees, and the marginal costs of additional resource investment in customer interactions are likely to be lower for service robots. Lastly, interactive AI-driven service agents (e.g., chatbots) equipped with social intelligence can help replenish social support and emotional resources by engaging vulnerable consumers through corporate social media and other digital communication channels (Fletcher-Brown et al., 2021; Gelbrich et al., 2021; Pantano & Scarpi, 2022; Sharma et al., 2023).
Taken together and as illustrated in Fig. 1, AI technologies integrated into service should be designed to be interactive, accommodating the unique needs of vulnerable consumers along the customer journey (Argawal et al., 2020), and preventing (second-level digital) inequality in technology use (Wei et al., 2011). Moreover, AI involvement in service interactions can be considered as a strategic tool to empower vulnerable consumers (e.g., Hill & Sharma, 2020; Yap et al., 2021). Specifically, interactive AI technologies within services can enhance vulnerable consumers’ perception that their vulnerability can be (partly) alleviated within the specific service context (Pavia & Mason, 2014). This empowerment enables them to regain a degree of control over their resources by receiving tailored treatment aligned with their needs (Hill & Sharma, 2020). A positive ripple effect is that the burden and effort for secondary vulnerable consumers can be reduced. The benefits of interactive and consumer-centric AI technologies can also positively influence adoption behavior, overcoming potential negative responses to AI delineated above, for instance, when consumers’ unique characteristics are neglected (Longoni et al., 2019) or when offers are worse than expected (Garvey et al., 2023). Reliance on interactive AI service technologies can become habitual, leading to increased trust, customer retention, and the cultivation of positive customer relationships (Libai et al., 2020).
Dynamic
Consumer decisions, including those made by vulnerable consumers, are often prone to cognitive and behavioral biases (Dowling et al., 2020). The resource impairment experienced by vulnerable consumers can further compromise their decision-making process, particularly, when they do not fully understand their own preferences and what is in their best interest, and lack the knowledge, skills, or freedom to act on their preferences (Ringold, 2005; Shultz & Holbrook, 2009). Given that digital choice architectures are data-driven, dynamically adjustable, and personalizable (Helberger et al., 2022), adopting a flexible and state-dependent (i.e., dynamic) approach can assist vulnerable consumers, leading to better decision-making and mitigating inequality (Wei et al., 2011). First, to mitigate negative responses to AI technologies in situations of uncertain needs (Zhu et al., 2022), AI technologies can play a role in dynamically assessing the needs of vulnerable consumers. Addressing these needs requires anticipating the scale, scope, presentation, and comprehensibility of information needed across different service settings. This approach ensures that vulnerable consumers are not merely provided with more information, but with better information tailored to their unique requirements (Thorun & Diels, 2020).
Second, AI technologies or service employees empowered by AI can dynamically support and assist vulnerable consumers in making (more) beneficial decisions. For instance, the British consulting company Capita deploys AI-driven real-time conversational analysis and assistance (“assisted customer conversation technology”), which has improved its success in identifying at-risk customers by 30.4%. This approach leverages machine learning algorithms to analyze factors such as consumers’ tone, pitch, pace, and the nature of the conversation (e.g., sudden change in behavior, emotionally withdrawn behaviors, and inconsistent or erratic communication) to detect vulnerable consumers (e.g., financial vulnerability, emotional distress; Capita, 2022, 2023). Following the analysis, the algorithm provides instantaneous customized guidance to customer service agents, advising whether special measures should be taken and suggesting personalized solutions based on consumers’ queries and needs. Examples include special conditions or offerings for consumers mentioning financial difficulties, or discussing each option, with its pros and cons, in simple terms with consumers with limited mental capacity (Capita, 2022, 2023). Similarly, Key utilizes its NLP-based consumer vulnerability insights to match vulnerable consumers with the most suitable service agent and to tailor services to their specific needs. Additionally, harnessing AI technologies in conjunction with consumers’ financial and spending data has the potential to assist consumers with setting individual budgets, thereby positively shaping their spending behavior and addressing potential challenges arising from their limited control over financial resources (Lukas & Howard, 2023).
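The guidance step described above can be thought of as a mapping from a detected vulnerability category to suggested agent actions. The sketch below illustrates this at a toy scale; the category names and suggested actions are invented and do not reproduce Capita’s or Key’s actual rules.

```python
# Illustrative mapping from a detected vulnerability category to
# suggested agent actions (all entries invented for the sketch).
GUIDANCE = {
    "financial_difficulty": [
        "Offer a hardship payment plan or special conditions.",
        "Avoid upselling; summarize costs in plain figures.",
    ],
    "limited_mental_capacity": [
        "Discuss each option, with its pros and cons, in simple terms.",
        "Offer a written follow-up and involve a trusted contact if requested.",
    ],
    "emotional_distress": [
        "Acknowledge the situation before moving to the request.",
        "Route to an agent trained in bereavement or crisis conversations.",
    ],
}

def suggest_actions(category: str) -> list[str]:
    """Return tailored guidance, falling back to the standard script."""
    return GUIDANCE.get(category, ["Proceed with standard service."])

for step in suggest_actions("financial_difficulty"):
    print(step)
```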
To ensure the agency and autonomy of vulnerable consumers, both AI technologies and service employees should opt for transparent interventions that “target individual cognitive and motivational competencies rather than immediate behavior (which is the target of nudges) and aim to empower people to make better decisions for themselves in accordance with their own goals and preferences” (Kozyreva et al., 2020, p. 129). These interventions, known as “boosts,” are distinct from nudges, as they do not alter the choice architecture that consumers encounter, nor do they merely present pertinent and accurate information (Hertwig, 2017; Hertwig & Grüne-Yanoff, 2017). Instead, they aim to foster and enhance human agency and consumer autonomy, and still necessitate individuals’ active cooperation, and inherently require transparency (Kozyreva et al., 2020).
In the service realm, a consumer boost is conceptualized as a “context-specific and individualized intervention into consumers’ cognitive processes that aims at developing their operant resources to facilitate the efficient co-creation of transformative value” (Bieler et al., 2022, p. 34). This approach aligns with strength-based strategies for addressing vulnerability, which involve uncovering consumers’ capabilities to exert control over their vulnerability states through skill development, allowing them to become “masters of their own destiny” (Fisk et al., 2022, p. 13). Boosts can take various forms, including mini-tutorials (e.g., Madan et al., 2023) or simple decision trees that utilize yes–no questions (Gigerenzer & Gaissmaier, 2011). These interventions are designed to enhance vulnerable consumers’ marketplace literacy (Ringold, 2005) and restore their control over resources (e.g., making financially beneficial decisions or engaging in healthier and more sustainable consumption). Prior research has shown that consumers seek more variety when interacting with AI-driven (vs. human) service agents (e.g., Zhang et al., 2022). Companies can therefore leverage this promising tendency to encourage vulnerable consumers to consider healthier or more sustainable options.
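To illustrate the simple yes–no decision trees mentioned above, consider the following sketch of a fast-and-frugal budgeting boost; the questions, their ordering, and the advice are hypothetical. The tree exits at the first “no” with a concrete skill-building tip, aiming to build decision competence rather than alter the choice set.

```python
def budget_boost(answers: list[bool]) -> str:
    """Walk a fast-and-frugal tree of yes-no questions (illustrative content)
    and return skill-building advice at the first "no"."""
    tree = [
        ("Do you know your total monthly income after tax?",
         "Start by listing all income sources for one month."),
        ("Are your fixed costs (rent, utilities, debt) below 60% of income?",
         "List fixed costs first and consider renegotiating the largest one."),
        ("Do you have one month of expenses set aside?",
         "Put aside a small fixed amount on each payday before spending."),
    ]
    for (question, advice), answer in zip(tree, answers):
        if not answer:
            return f"{question} -> {advice}"
    return "You are on track; revisit these questions after major life events."

print(budget_boost([True, False, True]))
# -> advice on fixed costs, since the second answer was "no"
```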
The provision of dynamic assistance through AI technologies in services inherently involves a learning and feedback-loop rationale. In this context, the reactions, decisions, and behaviors of vulnerable consumers serve as data inputs for AI technologies to dynamically adjust and optimize the processes of identifying, predicting, and addressing consumer vulnerability, and ideally preventing future instances of consumer vulnerability. For example, RecordSure’s AI (RecordSure, 2022) analyzes conversational customer interactions to both improve services and update its list of potential signs of consumer vulnerability. Similarly, Capita’s AI technology automatically categorizes, transcribes, and analyzes customer conversations to identify root causes of positive or negative conversations, aiming to inform and improve future conversations and customer service (Capita, 2023).
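The feedback-loop rationale can be sketched as follows: signals flagged in past conversations, later labeled by human reviewers as confirmed or not, are used to retire signals that mostly generate false alarms. The data structures, signal names, and precision threshold are illustrative assumptions.

```python
from collections import Counter

def update_signal_list(conversations, current_signals, min_precision=0.6):
    """Keep only signals whose flags were sufficiently often confirmed as
    actual vulnerability by human review; retire the rest."""
    hits, totals = Counter(), Counter()
    for convo in conversations:  # each: {"signals": [...], "confirmed": bool}
        for signal in convo["signals"]:
            totals[signal] += 1
            if convo["confirmed"]:
                hits[signal] += 1
    return [
        s for s in current_signals
        if totals[s] == 0 or hits[s] / totals[s] >= min_precision
    ]

signals = ["trembling_voice", "repeated_questions", "mentions_of_loss"]
history = [
    {"signals": ["trembling_voice"], "confirmed": True},
    {"signals": ["repeated_questions"], "confirmed": False},
    {"signals": ["repeated_questions"], "confirmed": False},
]
print(update_signal_list(history, signals))
# -> ['trembling_voice', 'mentions_of_loss'] (repeated_questions retired)
```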
After presenting the constituents of our AID framework, we now sketch the multi-stakeholder implications for researchers, managers, policy makers, and consumers.
Multi-stakeholder implications
Implications for marketing researchers
While there has been some research exploring access to and control over specific resources as antecedents of consumer vulnerability (e.g., financial resources; Mogaji et al., 2020; Salisbury et al., 2023), to the best of our knowledge, prior research on consumer responses to AI has generally overlooked the state-based nature of consumer vulnerability. The conceptualization of consumer vulnerability as a state rather than a status, along with the diverse range of individual, interpersonal, and structural factors that contribute to vulnerability, gives rise to a multitude of questions for marketing research. This is particularly relevant given the inconclusive findings regarding consumer responses to AI (e.g., Logg et al., 2019; Longoni et al., 2019). In analogy to our AID typology, marketing research can also center around the AID attributes.
Accessible
When studying vulnerable consumers, researchers must give special consideration to accessible and inclusive research designs as well as to methodologically sound and ethically appropriate research conduct (Carlini & Robertson, 2023). It is crucial to adapt research designs to accommodate the unique characteristics of vulnerable consumer groups, which necessitates a re-evaluation of all aspects of the research project, including recruitment and sampling, data collection and analysis, and the reporting and dissemination of findings (e.g., Carlini & Robertson, 2023; Dodds et al., 2023; Lewis et al., 2023). This may involve (a) reframing research issues and questions (e.g., defining consumers by their strengths instead of their “deficits,” adopting an “at-potential” focus instead of an “at-risk” focus), (b) adjusting language correspondingly, (c) redesigning research methods, and (d) relating to and empathizing with vulnerable consumers (Russell-Bennett et al., 2023). To foster the latter, it is important to directly and actively involve vulnerable consumers in the research process (“co-design,” Russell-Bennett et al., 2023; “consumer partnerships in research,” Carlini & Robertson, 2023). Researchers should also be mindful of existing institutionalized social structures and consider how the research process can create meaning and consequently influence outcomes (i.e., reflexivity; Russell-Bennett et al., 2023; Vink & Koskela-Huotari, 2022) in order to achieve value co-creation and reform social structures instead of simply reproducing them (Vink & Koskela-Huotari, 2022). Furthermore, accessibility extends to the dissemination of research findings. Considering the potential of these findings to contribute to social justice, consumer well-being, and hence sustainable development (i.e., SDGs 3 and 10), knowledge dissemination should go beyond the scientific community and target managers and policy makers in particular.
Interactive
Another important stream of research relates to the interactions between humans (i.e., primary and secondary vulnerable consumers as well as service employees) and AI technologies. A focal research question pertains to the impact of different states of consumer vulnerability on customer responses to AI in various service contexts. Factors such as need uncertainty (Zhu et al., 2022) and uniqueness neglect (Longoni et al., 2019) have been identified as contributing to negative reactions to AI and can hold particular relevance in the context of consumer vulnerability. However, given that anyone can experience consumer vulnerability at any given time, further research is needed to unravel the complex interactions between AI technologies and multi-layered aspects of consumer vulnerability, which can vary in terms of intensity, subjectivity, and duration. For example, reactions to AI technologies might differ based on whether consumer vulnerability is related to individual, interpersonal, or structural resources as well as the level of control individuals have over these resources.
In this context, one fruitful research area is to study vulnerable consumers’ responses to different degrees of anthropomorphism in AI technologies. Existing work demonstrates that consumers generally exhibit more positive reactions to anthropomorphized AI (e.g., Blut et al., 2021; Yalcin et al., 2022). Given that human-like cues can play a pivotal role in eliciting active social responses, the development of more humanized AI technologies could prove to be more effective in addressing interpersonal antecedents of consumer vulnerability that are primarily social in nature. Additionally, vulnerable consumers’ mindset (i.e., competitive vs. collaborative), their perceived psychological closeness to AI, emotional state, and perceived autonomy can also influence how they react to anthropomorphized AI technologies (e.g., Crolic et al., 2022; Fronczek et al., 2023; Han et al., 2023). Moreover, the presence of human service employees can undermine the importance of AI anthropomorphism (van Doorn et al., 2023). Therefore, future research should examine how vulnerable consumers react to humanized AI systems in relation to different vulnerability antecedents and states, individual consumer characteristics, and the interaction between human and AI service provision.
Another important research area that is currently underexplored in the context of AI technologies is secondary consumer vulnerability. The experiences of vulnerable consumers with AI systems can also have an impact on their social network and support groups. Well-designed AI systems can lead to more positive experiences for those who care for vulnerable consumers (e.g., due to effort reduction), but they may also threaten their self-identity as caregivers if they perceive AI technologies as better suited to assist and support primary vulnerable consumers. Furthermore, there could be instances where the needs of secondary consumers deviate from those of primary consumers (Leino et al., 2021), potentially leading to differential treatment. We therefore encourage researchers to investigate how secondary vulnerable consumers interact with and respond to AI technologies in services that are designed to empower primary vulnerable consumers.
Finally, marketing research can offer insights into the relationships, interactions, and collaboration between human service employees and AI technologies, particularly concerning fear of replacement and human job performance (Vorobeva et al., 2022), perceived levels of complementary and overlapping knowledge and skills between humans and AI in service encounters (Huang & Rust, 2022), and consumer responses to mixed human-AI service teams (van Doorn et al., 2023). These investigations can contribute to a deeper and more nuanced understanding of the dynamics between humans and AI in service settings, along with offering practical and timely insights for both employees and vulnerable consumers.
Dynamic
Just as AI technologies should be dynamic, companies’ strategic and operational thinking and actions should also be dynamic and adaptive. Accordingly, future research should explore how companies can effectively position and communicate their endeavors to empower vulnerable consumers through the use of AI technologies. One viable approach to enhance credibility is to communicate the benefits for the company, for consumers, and for society as a whole, adopting a “doing well by doing good” approach (Wallach & Popovich, 2023). By highlighting the benefits for multiple stakeholders, potential negative consumer responses, which may arise from perceiving the use of AI technologies as solely benefiting firms, could be mitigated (e.g., Castelo et al., 2023).
In addition, researchers should investigate how companies should handle customer complaints, service failures, and recovery. On the one hand, companies can be particularly blamed and held responsible for service failures related to AI technology (Pavone et al., 2023). On the other hand, vulnerable consumers can be immune to, and not react favorably to, certain service recovery initiatives, including monetary compensation or apologies (Cenophat et al., 2023). Therefore, companies need to identify and dynamically implement appropriate coping and recovery strategies, such as positive emotional responses, conveying warmth, providing explanations, and incorporating human intervention (e.g., Choi et al., 2021; Pavone et al., 2023).
Dynamic and adaptive measures are also required when it comes to the measurement of customer satisfaction, service quality, and service climate. In light of the idiosyncrasies of vulnerable consumers’ service interactions due to the dynamic and state-based nature of consumer vulnerability, a one-size-fits-all approach can be ill-suited and misleading for measurement purposes. Again, we encourage researchers to identify and develop performance indicators that can dynamically account for the complex interactions between AI technologies and multi-layered forms of consumer vulnerability.
Implications for managers
The design, development, and implementation of accessible, interactive, and dynamic AI technologies in service are far from trivial. There are many challenges awaiting managers as they need to consider not only the resource access and control restrictions of vulnerable consumers, but also adhere to the ethical principles of privacy, autonomy, and intelligibility—as illustrated in Fig. 2.
Optimally serving and aiding vulnerable consumers can potentially interfere with these ethical principles. Hence, merely following ethical principles as a tick-box exercise (Hagendorff, 2020) might be ill-suited to address the highly dynamic and state-dependent nature of consumer vulnerability. Instead, the consequences of vulnerability and the corresponding degree of potential harm should be considered when weighing the benefits and costs of deviating from strict ethical principles to better serve vulnerable consumers and maximize their utility. It is essential for managers to meticulously assess these trade-offs and make well-informed decisions that prioritize the best interests of vulnerable consumers, while also balancing their company’s economic and ethical considerations.
Intelligibility
The opacity and black-box nature of AI (e.g., Barredo Arrieta et al., 2020; Cadario et al., 2021; Rai, 2020) present challenges in understanding how AI models function and their underlying data processing algorithms (i.e., intelligibility; Floridi et al., 2018). This lack of intelligibility can hinder vulnerable consumers’ ability to assess the benefits or potential harm of AI technologies, understand the collected data (i.e., privacy), and decide whether to entrust decisions to AI systems (i.e., autonomy). Hence, managers should provide clear and straightforward explanations of how and when vulnerable consumers access, interact with, and are dynamically assisted by AI technologies. To prevent any potential information overload, irritation, or frustration that could undermine service provision following the AID principles, managers need to account for the respective state of vulnerability when determining the simplicity/complexity of explanations (i.e., the “intelligibility-AID trade-off”). Additionally, managers should ensure that vulnerable consumers are explicitly informed when they interact with (or are observed by) AI technologies, enabling them to draw their own conclusions about (un)ethical treatment. This raises important managerial questions about AI disclosure (e.g., Mozafari et al., 2022).
Privacy
Another focal challenge is privacy. While data collection is essential for AI technologies to effectively and efficiently identify and interact with vulnerable consumers, determining the appropriate amount of data to achieve economic and social objectives while upholding ethical principles can be complex. For instance, AI systems might require access to specific attributes (e.g., conversation content, voice pitch) to determine a consumer’s vulnerability state. However, it is crucial to avoid excessive tracking and surveillance of vulnerable consumers (Andrew & Baker, 2021; König et al., 2020) to prevent customer data vulnerability (Martin & Murphy, 2017), especially when dealing with sensitive and “special category” data like biometric or health information. To address these concerns, companies should carefully establish privacy default settings for different types of consumer data (i.e., data protection and privacy by default). Furthermore, privacy practices should be integrated into the design, development, and deployment of AI systems (i.e., privacy by design). However, it is noteworthy that merely refraining from collecting or utilizing sensitive data (i.e., “privacy through unawareness”) does not guarantee privacy, as sensitive attributes often correlate with non-sensitive attributes (Hagendorff, 2019). Thus, adopting a comprehensive privacy strategy that covers the entire customer journey is essential for building customer trust and maintaining support.
Additionally, companies should strive to establish safeguards and monitoring mechanisms throughout the data lifecycle. This entails determining the minimum scale and scope of data (i.e., data minimization) required for identifying, interacting with, and dynamically assisting vulnerable consumers. However, in doing so, companies might face a trade-off between privacy and providing tailored treatment of vulnerable consumers through data-driven AI technologies (i.e., “privacy-AID trade-off;” Rust, 2020). To address this, companies should obtain informed consent from vulnerable consumers or their support group (i.e., secondary vulnerable consumers) and provide comprehensible information about which data are processed, how they are processed, and the associated risks (Felzmann et al., 2020). In certain situations, service firms’ practices regarding customer data usage can be decisive and may influence consumer responses differently. For instance, customers perceive increased data privacy and feel less vulnerable when interacting with firm-owned devices in self-service technologies, depending on data sensitivity and transparency levels (Sohn et al., 2023). In this context, AI technologies can serve as digital privacy assistants, carefully monitoring privacy policies, identifying privacy violations, and disabling privacy-intrusive default settings (Lippi et al., 2019; Thorun & Diels, 2020).
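As a hedged illustration of privacy by default and data minimization, the sketch below filters consumer attributes before they reach an AI service, releasing special-category data only with explicit consent. The attribute names and schema are invented for illustration, and the special-category notion is an assumption loosely modeled on the GDPR’s treatment of sensitive data.

```python
# Purely illustrative sketch: privacy by default and data minimization.
# The AI service receives only attributes it needs, and special-category
# data only with explicit, informed consent. The schema is hypothetical.

SPECIAL_CATEGORY = {"health_status", "biometric_id", "voice_recording"}
REQUIRED_FOR_SERVICE = {"preferred_channel", "accessibility_needs", "voice_recording"}

def minimize(record: dict, consented: set) -> dict:
    """Return only required attributes; special-category data is opt-in."""
    out = {}
    for key, value in record.items():
        if key not in REQUIRED_FOR_SERVICE:
            continue  # data minimization: drop everything not needed
        if key in SPECIAL_CATEGORY and key not in consented:
            continue  # privacy by default: sensitive data requires consent
        out[key] = value
    return out

record = {
    "preferred_channel": "voice",
    "accessibility_needs": "screen_reader",
    "voice_recording": "<audio reference>",  # special category
    "health_status": "chronic_condition",    # not required: always dropped
}

print(minimize(record, consented=set()))
# {'preferred_channel': 'voice', 'accessibility_needs': 'screen_reader'}
print(minimize(record, consented={"voice_recording"}))
# adds 'voice_recording' once explicit consent has been given
```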
Design
To ensure consumer autonomy, opt-in models and designs (i.e., consumers consciously and deliberately decide whether to use or be supported by AI in services) should be preferred (Borenstein & Arkin, 2016). Moreover, service providers should prioritize consumer empowerment through boosts rather than making comprehensive changes to vulnerable consumers’ choice architectures (i.e., nudges). Autonomy and human agency are also important for service employees. Therefore, it is crucial to integrate human oversight and options for human intervention into the service provision process. However, it is important to acknowledge that certain circumstances and states of vulnerability may require more assistance and guidance, entailing less consumer autonomy in order to prevent or mitigate harm (i.e., “autonomy-AID trade-off”).
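A minimal sketch of such an opt-in design with a human-intervention option might look as follows; all identifiers are hypothetical and the flow is deliberately simplified.

```python
# Purely illustrative sketch: an opt-in AI service design with human
# oversight and an intervention option. All identifiers are hypothetical.
from enum import Enum, auto

class Mode(Enum):
    HUMAN_ONLY = auto()   # default: no AI involvement without opt-in
    AI_ASSISTED = auto()

class ServiceSession:
    def __init__(self):
        self.mode = Mode.HUMAN_ONLY  # opt-in model: AI is off by default

    def opt_in_to_ai(self):
        """The consumer consciously and deliberately enables AI support."""
        self.mode = Mode.AI_ASSISTED

    def request_human(self):
        """Escape hatch: the consumer, or an employee exercising oversight,
        can always route the interaction back to a human."""
        self.mode = Mode.HUMAN_ONLY

    def handle(self, message: str) -> str:
        if self.mode is Mode.AI_ASSISTED:
            return f"[AI assistant] processing: {message}"
        return f"[human agent] handling: {message}"

session = ServiceSession()
print(session.handle("I need help with my bill"))  # human by default
session.opt_in_to_ai()
print(session.handle("Show me cheaper tariffs"))   # AI only after opt-in
session.request_human()                            # intervention option
```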
Companies can address ethical challenges through a holistic co-design approach. First, ethicists or personnel with relevant ethical expertise should be involved in the design and development process, adopting an embedded ethics approach (Bonnemains et al., 2018; McLennan et al., 2020). These experts can identify ethical concerns that marketers, data scientists, and AI developers might overlook. A senior-level working group composed of technologists/developers, legal/compliance experts, ethicists, and business leaders can be formed to pinpoint potential sources of ethical issues and devise practical solutions (Blackman & Ammanath, 2022). In some companies, positions such as AI ethicist already exist under different titles: Data Privacy and Ethics Lead at Qantas, Director of Responsible Innovation & Responsible Innovation Manager at Meta/Facebook, Chief Ethical and Humane Use Officer at Salesforce, Chief AI Ethics Officer and Managing Director & Partner at BCG, or Microsoft Chief Responsible AI Officer (Minevich, 2021).
Second, involving vulnerable consumers in the design and development process is crucial (Dietrich et al., 2017) to raise awareness about their special needs, specific resource impairments and suitable remedies. Through active participation, vulnerable consumers can become value co-creators (Danaher et al., 2023; Fisk et al., 2022). This participatory approach not only generates valuable ideas for service improvement but also conveys a powerful message that companies value and respect their inputs, strengthening the sense of empowerment among vulnerable consumers (e.g., Auh et al., 2019).
Governance
To formalize and institutionalize the co-design process, companies should establish robust governance mechanisms that ensure ethical AI design, development, and deployment (Mökander & Floridi, 2021; Mökander et al., 2022). This can involve determining a set of guiding ethical principles, incorporating ethicists, and appointing an ethics board (Eitel-Porter, 2021). Among other things, companies can decide whether and which new services should go through an ethical risk due-diligence process during the design stage or prior to deployment (Blackman & Ammanath, 2022). To support these efforts, companies can consider implementing structures that oversee decision-making processes, maintain documentation, provide company-wide ethics training, conduct stress tests for governance structures, and establish appropriate metrics to monitor and ensure compliance (Eitel-Porter, 2021). Such metrics can lay the foundation for internal or external auditing mechanisms (Floridi et al., 2018; Mökander & Floridi, 2021; Mökander et al., 2022). Similarly, risk assessment and management processes that evaluate threats to companies and stakeholders, as well as existing safeguards, can provide reference points for both internal and external auditing (Clarke, 2019a). To ensure transparency and inclusivity, involving vulnerable consumers as stakeholders in participatory audit conceptualization and implementation processes can prevent “closed-door compliance” and foster openness (Krafft et al., 2021).
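To illustrate how an ethical risk due-diligence gate might be encoded in a deployment pipeline, consider the following sketch; the checklist items and pass criterion are illustrative assumptions, and a real process would be defined by the ethics board and subjected to internal or external audits.

```python
# Purely illustrative sketch: an ethical risk due-diligence gate run
# before deploying a new AI service. Checklist items and the pass
# criterion are hypothetical placeholders.
CHECKLIST = [
    ("vulnerable consumers involved in co-design review", True),
    ("privacy-by-default settings verified", True),
    ("opt-in and human-intervention paths tested", True),
    ("explanations available across all channels", False),  # example failure
]

def due_diligence_gate(checklist) -> bool:
    """Block deployment unless every review item passed, logging failures
    so auditors have a documented trail."""
    failures = [item for item, passed in checklist if not passed]
    for item in failures:
        print(f"BLOCKED: unresolved ethics-review item: {item}")
    return not failures

if due_diligence_gate(CHECKLIST):
    print("Deployment approved by ethics review.")
else:
    print("Deployment deferred pending remediation.")
```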
Implications for consumers
With the growing integration of AI technologies into services, understanding their implications for vulnerable consumers becomes increasingly significant. In the following sections, we delve into several key implications that can emerge from vulnerable consumers’ interactions with AI technologies.
Consumer autonomy
The ethical challenges faced by companies and managers, as outlined earlier, also have direct implications for vulnerable consumers. First, AI technologies in service can influence vulnerable consumers’ autonomy (André et al., 2018), that is, their ability to make independent decisions, free from external influences imposed by other agents (Wertenbroch et al., 2020). As AI technologies and frontline employees equipped with human-enhancement tools offer personalized information or assistance, vulnerable consumers principally delegate decisions to AI technologies during the information collection, consideration-set formation, and even decision-making stages. This delegation can be beneficial in terms of resource efficiency (e.g., time, cognitive resources), tailored support, and content. It could, however, also have downsides, potentially compromising the well-being of vulnerable consumers if they become overly reliant on AI systems (Banker & Khetani, 2019). To mitigate these risks, vulnerable consumers need a certain level of understanding of how AI technologies work (i.e., intelligibility), which can also help to counteract any tendency to avoid algorithms and AI (e.g., Cadario et al., 2021). In this context, providing simple explanations of how AI functions is likely more effective and satisfactory than comprehensive ones that risk information overload, irritation, and frustration (Rai, 2020), particularly for cognitively impaired or digitally illiterate consumers. Thus, vulnerable consumers (or secondary vulnerable consumers on their behalf) should be able to request such explanations across all (online and offline) channels, either from AI technologies or human service employees.
Potential backlash
The introduction of AI technologies in services and their potential to enhance vulnerable consumers’ abilities can raise concerns about an ethical double standard. For example, non-vulnerable consumers may perceive it as less fair for vulnerable consumers to benefit from AI technologies than for themselves (Williams & Steffel, 2014). In extreme cases, vulnerable consumers might be viewed as robotic and lacking humanness (i.e., dehumanization; Haslam & Loughnan, 2014), especially if their (cognitive) abilities are perceived as enhanced rather than restored (Castelo et al., 2019a, b). To forestall or mitigate such adverse perceptions, it is essential for other consumers to keep the prosocial and restorative nature of such AI technologies in service in mind (Castelo et al., 2019a, b). That is, AI technologies are not deployed to give vulnerable consumers any advantage over “mainstream” consumers but to increase their ability to participate more fully in society.
Secondary consumer vulnerability
Consumer vulnerability can manifest itself as a shared experience within communities (Baker et al., 2007) and can give rise to secondary consumer vulnerability (Pavia & Mason, 2014). In such cases, secondary vulnerable consumers should be aware that AI technologies hold the potential to empower them to better support primary vulnerable consumers, to lighten their responsibilities, and to eventually increase their own well-being. Such awareness can reduce potential reservations regarding AI technology. Despite their altruistic, other-related needs and intentions, secondary vulnerable consumers could misunderstand primary customers’ needs due to false assumptions or changing needs (Leino et al., 2021). Dynamically designed AI technologies can then support them through need identification and recommendations on which actions to take. Under some circumstances, particularly when primary vulnerable consumers are unable to do so themselves, secondary vulnerable consumers have to provide relevant data and consent to enable AI systems to work optimally and support both them and primary vulnerable consumers. Again, a basic understanding of how AI technologies work can help secondary vulnerable consumers effectively leverage AI technologies to support primary vulnerable consumers.
Implications for policy makers
In the realm of AI technologies aimed at helping vulnerable consumers, there are several critical considerations that hold direct implications for policy makers. In what follows, we discuss the multifaceted ethical and legal issues and propose supranational co-regulation as a public policy instrument to address them.
Issues of a principled approach
An embedded ethics and co-design approach for the development and deployment of AI technologies to aid vulnerable consumers can encounter challenges when faced with organizational realities. First, companies’ mere reliance on ethical principles as non-binding guidelines (i.e., soft law) and self-regulatory commitments may lead to ethics shopping (Floridi, 2019a, b), that is, the malpractice of selecting ethical principles that align with existing business practices and justifying them a posteriori (Floridi, 2019b). In other words, companies might “shop” for ethical principles that best match their current business practices, potentially undermining a genuine commitment to ethical practices. Second, companies could engage in “bluewashing”, a term describing the misleading use of claims or superficial measures to project an appearance of heightened ethical conduct (Floridi, 2019b, 2021a, b). Third, companies might use self-regulation as a means to lobby against the development, implementation, and/or enforcement of legal regulations (i.e., ethics lobbying; Floridi, 2019b). Consequently, some scholars argue that the era of self-regulation as the instrument to address ethical challenges is coming to an end (Floridi, 2021b). These issues have important implications for public policy, that is, the conceptualization and implementation of legally binding ethical and socio-legal governance and policies (Kaplan & Haenlein, 2019; Stahl et al., 2021) beyond corporate self-commitments, self-regulation, and non-binding guidelines (Resséeguier & Rodrigues, 2020; Stix, 2021).
From principles to supranational co-regulation
Ethical principles offer a solid foundation, particularly given the time delays inherent in legislative processes addressing rapid and revolutionary technological developments like AI (Häußermann & Lütge, 2022). In this context, the EU’s General Data Protection Regulation (GDPR), which came into effect in 2018, constitutes an illustrative example. Article 25 of the GDPR stipulates data protection/privacy by design—a key concern for the GDPR (Andrew & Baker, 2021). Initially, privacy by design consisted of broad principles that eventually evolved into codified and concrete regulations. This progression underscores that “the realization of an ideal can find a way into law, thereby becoming binding if proven useful” (Felzmann et al., 2020, p. 3344). The EU and the United States (US) have taken proactive steps by providing legal frameworks for AI with the proposal of the EU Artificial Intelligence Act (AIA) and the US National AI Initiative Act, respectively (Floridi, 2021a).
However, national political cultures and legal regimes differ in their constructions of personhood in relation to automation (Jones, 2017), constitutionalism ideologies (Celeste, 2019; De Gregorio, 2021), and approaches to AI regulation and AI for social good (Cath et al., 2018; Roberts et al., 2021). Furthermore, what is considered consumer vulnerability can vary substantially among countries, potentially leading to cross-national discrimination against vulnerable consumers. In this context, consumer law and policy should focus on the “properties and commercial practices of digital choice environments that can render everyone (dispositionally) vulnerable under the right conditions” instead of using status-based characteristics to label “certain groups or individuals as ‘vulnerable’ or non-vulnerable” (Helberger et al., 2022, p. 194). Even within the EU AIA, consumer vulnerability is characterized as static and mainly related to age and physical or mental disabilities. Given these national differences in recognizing and accommodating vulnerable consumers, policy makers have to contemplate supranational and harmonized regulations. Furthermore, national differences in legal frameworks can result in ethics dumping—the export of unethical or even illegal practices to countries with weaker regulations (Floridi, 2019b). This creates a double standard for vulnerable consumers if they are treated differently by AI technologies across countries (or in their home country as compared to foreign countries).
Against this backdrop, supranational, collaborative co-regulatory approaches that engage the relevant stakeholders (particularly, vulnerable consumers or their advocates) across countries and cultures could pave the way for effective legislative processes in AI regulation that account for state-based consumer vulnerability (Clarke, 2019b). Policy makers should also consider collaborative co-regulation because it has the potential to prevent the neglect or overly narrow conceptualization of vulnerable populations, addressing limitations present in current AI ethics frameworks and initiatives like the EU AIA (Schiff et al., 2021).
Conclusion
By synthesizing the consumer vulnerability literature (e.g., Baker et al., 2005; Hill & Sharma, 2020), the scholarly work on the psychology of AI (e.g., Longoni et al., 2019; Puntoni et al., 2021) and AI for social good and sustainable development (e.g., Cowls et al., 2021; Du & Sen, 2023; Floridi et al., 2018, 2020; Vinuesa et al., 2020), along with the increasing calls to rethink marketing for a better world and the greater good (Chandy et al., 2021; Madan et al., 2023; Mende & Scott, 2021), our AID framework illustrates how companies can harness AI technologies to better serve and empower vulnerable consumers (i.e., a supply-side focus on AI). Specifically, we propose that AI technologies can make services more accessible, interactively ameliorate customer experiences and journeys in services, and dynamically improve consumer decision-making.
Throughout this paper, we have argued that granting vulnerable consumers access to the marketplace and improving their interactions and decision-making can yield important advantages: first, companies can benefit from larger customer bases, less biased AI models, higher customer satisfaction, and higher profitability. Second, our AID framework empowers vulnerable consumers by reinstating control over resources, enabling them to profit from marketplace participation, and improving their experiences. Third, these results contribute to broader societal gains, including enhanced social justice, reduced inequalities (i.e., SDG 10), and improved consumer well-being (i.e., SDG 3), aligning with the greater good and sustainable development objectives. The benefits across stakeholders are particularly important in light of the state-based nature of consumer vulnerability. Since “the experience of vulnerability is a reality [that can happen to anyone], but those encountering it do not wish it to be an equilibrium state” (Baker et al., 2005, p. 137), companies that follow our AID framework accept their social responsibility, address technological, digital, and market inequalities, and can mitigate consumer vulnerability. Given the challenge of developing and deploying consumer-centric AI in an ethical and accountable way (Kunz & Wirtz, 2023), collaborative engagement among companies, ethicists, vulnerable consumers, and policy makers becomes imperative in creating globally integrated, equitable marketing systems that reduce consumer vulnerability (Shultz & Holbrook, 2009) and increase individual and societal well-being. As noted, “AI technologies cannot solve all problems, but they can help to address the major challenges… facing humanity today” (Cowls et al., 2021, p. 114), and “marketing can and should be leveraged as a catalyst for positive change” (Mende & Scott, 2021, p. 116).
References
Agbavor, F., & Liang, H. (2022). Predicting dementia from spontaneous speech using large language models. PLOS Digital Health, 1(12), e0000168.
Anderson, L., Ostrom, A. L., Corus, C., Fisk, R. P., Gallan, A. S., Giraldo, M., Mende, M., Mulder, M., Rayburn, S. W., Rosenbaum, M. S., Shirahada, K., & Williams, J. D. (2013). Transformative service research: An agenda for the future. Journal of Business Research, 66(8), 1203–1210.
André, Q., Carmon, Z., Wertenbroch, K., Crum, A., Frank, D., Goldstein, W., Huber, J., van Boven, L., Weber, B., & Yang, H. (2018). Consumer choice and autonomy in the age of artificial intelligence and big data. Customer Needs and Solutions, 5(1–2), 28–37.
Andrew, J., & Baker, M. (2021). The general data protection regulation in the age of surveillance capitalism. Journal of Business Ethics, 168(3), 565–578.
Agarwal, R., Dugas, M., Gao, G., & Kannan, P. K. (2020). Emerging technologies and analytics for a new era of value-centered marketing in healthcare. Journal of the Academy of Marketing Science, 48(1), 9–23.
Auh, S., Menguc, B., Katsikeas, C. S., & Jung, Y. S. (2019). When does customer participation matter? An empirical investigation of the role of customer empowerment in the customer participation–performance link. Journal of Marketing Research, 56(6), 1012–1033.
Aipoly. (2022). Vision AI for the blind and visually impaired. Retrieved August 18, 2022 from https://www.aipoly.com
Aveni. (2022). Powering consumer duty compliance with a machine line of defence. Retrieved August 18, 2022 from https://aveni.ai/consumer-duty/
Baker, S. M., Gentry, J. W., & Rittenburg, T. L. (2005). Building understanding of the domain of consumer vulnerability. Journal of Macromarketing, 25(2), 128–139.
Baker, S. M., Hunt, D., & Rittenburg, T. L. (2007). Consumer vulnerability as a shared experience: Tornado recovery process in Wright, Wyoming. Journal of Public Policy & Marketing, 26(1), 6–19.
Banker, S., & Khetani, S. (2019). Algorithm overdependence: How the use of algorithmic recommendation systems can increase risks to consumer well-being. Journal of Public Policy & Marketing, 38(4), 500–515.
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
Be My Eyes. (2023). See the world together. Retrieved August 19, 2023 from https://www.bemyeyes.com/
Bieler, M., Maas, P., Fischer, L., & Rietmann, N. (2022). Enabling cocreation with transformative interventions: An interdisciplinary conceptualization of consumer boosting. Journal of Service Research, 25(1), 29–47.
Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34.
Blackman, R., & Ammanath, B. (2022, March 21). Ethics and AI: 3 conversations companies need to have. Harvard Business Review. Retrieved August 14, 2023 from https://hbr.org/2022/03/ethics-and-ai-3-conversations-companies-need-to-be-having
Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658.
Bonnemains, V., Saure, C., & Tessier, C. (2018). Embedded ethics: Some technical and ethical challenges. Ethics and Information Technology, 20(1), 41–58.
Borenstein, J., & Arkin, R. (2016). Robotic nudges: The ethics of engineering a more socially just human being. Science and Engineering Ethics, 22(1), 31–46.
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work. NBER Working Paper Series, Working Paper 31161.
Cadario, R., Longoni, C., & Morewedge, C. K. (2021). Understanding, explaining, and utilizing medical artificial intelligence. Nature Human Behaviour, 5(12), 1636–1642.
Capita. (2022). Delivering service with sincerity to vulnerable customers. Retrieved August 24, 2023 from https://www.capita.com/our-thinking/delivering-service-sincerity-vulnerable-customers
Capita. (2023). Creating better outcomes for vulnerable customers. Retrieved August 14, 2023 from https://www.capita.com/expertise/customer-experience/customer-experience-systems-and-software/assisted-customer-conversations
Carlini, J., & Robertson, J. (2023). Consumer partnerships in research (CPR) checklist: A method for conducting market research with vulnerable consumers. International Journal of Market Research, 65(2–3), 215–236.
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019a). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825.
Castelo, N., Schmitt, B., & Sarvary, M. (2019b). Human or robot? Consumer responses to radical cognitive enhancement products. Journal of the Association for Consumer Research, 4(3), 217–230.
Castelo, N., Boegershausen, J., Hildebrand, C., & Henkel, A. P. (2023). Understanding and improving consumer reactions to service bots. Journal of Consumer Research, Advance Online Publication. https://doi.org/10.1093/jcr/ucad023
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the ‘Good Society’: The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528.
Celeste, E. (2019). Digital constitutionalism: A new systematic theorization. International Review of Law, Computers & Technology, 33(1), 76–99.
Cenophat, S., Eisend, M., Bayón, T., & Haas, A. (2023). The role of customer relationship vulnerability in service recovery. Journal of Service Research. https://doi.org/10.1177/10946705231195008
Chandy, R. K., Johar, G. V., Moorman, C., & Roberts, J. H. (2021). Better marketing for a better world. Journal of Marketing, 85(3), 1–9.
Choi, S., Mattila, A. S., & Bolton, L. E. (2021). To err is human(-oid): How do consumers react to robot service failure and recovery? Journal of Service Research, 24(3), 354–371.
Clarke, R. (2019a). Principles and business processes for responsible AI. Computer Law & Security Review, 35(4), 410–422.
Clarke, R. (2019b). Regulatory alternatives for AI. Computer Law & Security Review, 35(4), 398–409.
Cowls, J., Tsamados, A., Taddeo, M., & Floridi, L. (2021). A definition, benchmark and database of AI for social good initiatives. Nature Machine Intelligence, 3(2), 111–115.
Crolic, C., Thomaz, F., Hadi, R., & Stephen, A. T. (2022). Blame the bot: Anthropomorphism and anger in customer–chatbot interactions. Journal of Marketing, 86(1), 132–148.
Danaher, T. S., Danaher, P. J., Sweeney, J. C., & McColl-Kennedy, J. R. (2023). Dynamic customer value cocreation in healthcare. Journal of Service Research. https://doi.org/10.1177/10946705231161758
Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48(1), 24–42.
de Bellis, E., & Venkataramani Johar, G. (2020). Autonomous shopping systems: Identifying and overcoming barriers to consumer adoption. Journal of Retailing, 96(1), 74–87.
De Bruyn, A., Viswanathan, V., Beh, Y. S., Brock, J.K.-U., & von Wangenheim, F. (2020). Artificial intelligence and marketing: Pitfalls and opportunities. Journal of Interactive Marketing, 51, 91–105.
De Gregorio, G. (2021). The rise of digital constitutionalism in the European Union. International Journal of Constitutional Law, 19(1), 41–70.
Dekimpe, M. G., Geyskens, I., & Gielens, K. (2020). Using technology to bring online convenience to offline shopping. Marketing Letters, 31(1), 25–29.
Dietrich, T., Trischler, J., Schuster, L., & Rundle-Thiele, S. (2017). Co-designing services with vulnerable consumers. Journal of Service Theory and Practice, 27(3), 663–688.
Dietvorst, B. J., & Bartels, D. M. (2022). Consumers object to algorithms making morally relevant tradeoffs because of algorithms’ consequentialist decision strategies. Journal of Consumer Psychology, 32(3), 406–424.
Dodds, S., Finsterwalder, J., Prayag, G., & Subramanian, I. (2023). Transformative service research methodologies for vulnerable participants. International Journal of Market Research, 65(2–3), 279–296.
Dowling, K., Guhl, D., Klapper, D., Spann, M., Stich, L., & Yegoryan, N. (2020). Behavioral biases in marketing. Journal of the Academy of Marketing Science, 48(3), 449–477.
Du, S., & Sen, S. (2023). AI through a CSR Lens: Consumer issues and public policy. Journal of Public Policy & Marketing, 42(4), 351–353.
Du, S., & Xie, C. (2021). Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. Journal of Business Research, 129, 961–974.
Eitel-Porter, R. (2021). Beyond the promise: Implementing ethical AI. AI and Ethics, 1(1), 73–80.
European Parliament. (2023). Artificial intelligence act. Retrieved August 22, 2023 from https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html
Felzmann, H., Fosch-Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2020). Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 26(6), 3333–3361.
Financial Conduct Authority. (2021). Guidance for firms on the fair treatment of vulnerable customers. Retrieved May 12, 2023 from https://www.fca.org.uk/publications/finalised-guidance/guidance-firms-fair-treatment-vulnerable-customers
Financial Conduct Authority. (2022). Financial Lives 2022 Survey: Insights on vulnerability and financial resilience relevant to the rising cost of living. Retrieved May 12, 2023 from https://www.fca.org.uk/data/financial-lives-2022-early-survey-insights-vulnerability-financial-resilience
Fisk, R. P., Dean, A. M., Alkire (née Nasr), L., Joubert, A., Previte, J., Robertson, N., & Rosenbaum, M. S. (2018). Design for service inclusion: Creating inclusive service systems by 2050. Journal of Service Management, 29(5), 834–858.
Fisk, R. P., Gallan, A. S., Joubert, A. M., Beekhuyzen, J., Cheung, L., & Russell-Bennett, R. (2022). Healing the digital divide with digital inclusion: Enabling human capabilities. Journal of Service Research. https://doi.org/10.1177/10946705221140148
Fletcher-Brown, J., Turnbull, S., Viglia, G., Chen, T., & Pereira, V. (2021). Vulnerable consumer engagement: How corporate social media can facilitate the replenishment of depleted resources. International Journal of Research in Marketing, 38(2), 518–529.
Floridi, L. (2019a). Establishing the rules for building trustworthy AI. Nature Machine Intelligence, 1(6), 261–262.
Floridi, L. (2019b). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32(2), 185–193.
Floridi, L. (2021a). The European legislation on AI: A brief analysis of its philosophical approach. Philosophy & Technology, 34(2), 215–222.
Floridi, L. (2021b). The end of an era: From self-regulation to hard law for the digital industry. Philosophy & Technology, 34(4), 619–622.
Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26(3), 1771–1796.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People – An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
Fronczek, L. P., Mende, M., Scott, M. L., Nenkov, G. Y., & Gustafsson, A. (2023). Friend or foe? Can anthropomorphizing self-tracking devices backfire on marketers and consumers? Journal of the Academy of Marketing Science, 51(5), 1075–1097.
Garvey, A. M., Kim, T., & Duhachek, A. (2023). Bad news? Send an AI. Good news? Send a human. Journal of Marketing, 87(1), 10–25.
Gelbrich, K., Hagel, J., & Orsingher, C. (2021). Emotional support from a digital assistant in technology-mediated services: Effects on customer satisfaction and behavioral persistence. International Journal of Research in Marketing, 38(1), 176–193.
Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451–482.
Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022). Artificial intelligence and declined guilt: Retailing morality comparison between human and AI. Journal of Business Ethics, 178(4), 1027–1041.
Gladstone, J. J., Matz, S. C., & Lemaire, A. (2019). Can psychological traits be inferred from spending? Evidence from transaction data. Psychological Science, 30(7), 1087–1096.
Grewal, D., Hulland, J., Kopalle, P. K., & Karahanna, E. (2020a). The future of technology and marketing: A multidisciplinary perspective. Journal of the Academy of Marketing Science, 48(1), 1–8.
Grewal, D., Noble, S. M., Roggeveen, A. L., & Nordfalt, J. (2020b). The future of in-store technology. Journal of the Academy of Marketing Science, 48(2), 96–113.
Grewal, D., Kroschke, M., Mende, M., Roggeveen, A. L., & Scott, M. L. (2020c). Frontline cyborgs at your service: How human enhancement technologies affect customer experiences in retail, sales, and service settings. Journal of Interactive Marketing, 51, 9–25.
Guha, A., Grewal, D., Kopalle, P. K., Haenlein, M., Schneider, M. J., Jung, H., Moustafa, R., Hedge, D. R., & Hawkins, G. (2021). How artificial intelligence will affect the future of retailing. Journal of Retailing, 97(1), 28–41.
Hagendorff, T. (2019). From privacy to anti-discrimination in times of machine learning. Ethics and Information Technology, 21(4), 331–343.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120.
Hahn, T., Figge, F., Pinkse, J., & Preuss, L. (2010). Trade-offs in corporate sustainability: You can’t have your cake and eat it. Business Strategy and the Environment, 19(4), 217–229.
Halbauer, I., & Klarmann, M. (2022). How voice retailers can predict customer mood and how they can use that information. International Journal of Research in Marketing, 39(1), 77–95.
Han, B., Deng, X., & Fan, H. (2023). Partners or opponents? How mindset shapes consumers’ attitude toward anthropomorphic artificial intelligence service robots. Journal of Service Research, 26(3), 441–458.
Haslam, N., & Loughnan, S. (2014). Dehumanization and infrahumanization. Annual Review of Psychology, 65, 399–423.
Häußermann, J. J., & Lütge, C. (2022). Community-in-the-Loop: Towards pluralistic value creation in AI, or - Why AI needs business ethics. AI and Ethics, 2(2), 341–362.
Helberger, N., Sax, M., Strycharz, J., & Micklitz, H.-W. (2022). Choice architectures in the digital economy: Towards a new understanding of digital vulnerability. Journal of Consumer Policy, 45(2), 175–200.
Henkel, A. P., Bromuri, S., Iren, D., & Urovi, V. (2020). Half human, half machine – augmenting service employees with AI for interpersonal emotion regulation. Journal of Service Management, 31(2), 247–265.
Hermann, E. (2022). Leveraging artificial intelligence in marketing for social good - An ethical perspective. Journal of Business Ethics, 179(1), 43–61.
Hertwig, R. (2017). When to consider boosting: Some rules for policy-makers. Behavioural Public Policy, 1(2), 143–161.
Hertwig, R., & Grüne-Yanoff, T. (2017). Nudging and boosting: Steering or empowering good decisions. Perspectives on Psychological Science, 12(6), 973–986.
Hill, R. P., & Sharma, E. (2020). Consumer vulnerability. Journal of Consumer Psychology, 30(3), 551–570.
Hillebrand, B., Driessen, P. H., & Koll, O. (2015). Stakeholder marketing: Theoretical foundations and required capabilities. Journal of the Academy of Marketing Science, 43(4), 411–428.
Holthöwer, J., & van Doorn, J. (2023). Robots do not judge: Service robots can alleviate embarrassment in service encounters. Journal of the Academy of Marketing Science, 51(4), 767–784.
Hoyer, W. D., Kroschke, M., Schmitt, B., Kraume, K., & Shankar, V. (2020). Transforming the customer experience through new technologies. Journal of Interactive Marketing, 51, 57–71.
Huang, M.-H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172.
Huang, M.-H., & Rust, R. T. (2021a). Engaged to a robot? The role of AI in service. Journal of Service Research, 24(1), 30–41.
Huang, M.-H., & Rust, R. T. (2021b). A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science, 49(1), 30–50.
Huang, M.-H., & Rust, R. T. (2022). A framework for collaborative artificial intelligence in marketing. Journal of Retailing, 98(2), 209–223.
International Organization for Standardization. (2022). ISO 22458:2022: Consumer vulnerability — Requirements and guidelines for the design and delivery of inclusive service. Retrieved May 12, 2023 from https://www.iso.org/standard/73261.html
Jago, A. S. (2019). Algorithms and authenticity. Academy of Management Discoveries, 5(1), 38–56.
Jago, A. S., Carroll, G. R., & Lin, M. (2022). Generating authenticity in automated work. Journal of Experimental Psychology: Applied, 28(1), 52–70.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Johns, R., & Davey, J. (2019). Introducing the transformative service mediator: Value creation with vulnerable consumers. Journal of Services Marketing, 33(1), 5–15.
Jones, M. L. (2017). The right to a human in the loop: Political constructions of computer automation and personhood. Social Studies of Science, 47(2), 216–239.
Jones, R. (2020). Just 12% of Advisers Find It Easy to Spot Vulnerable Clients. Financial Reporter. https://www.financialreporter.co.uk/finance-news/just-12-of-advisers-find-it-easy-to-spot-vulnerable-clients.html
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25.
Key. (2022). For the life in later life. Retrieved August 18, 2022 from https://www.keyadvice.co.uk/
König, R., Uphues, S., Vogt, V., & Kolany-Raiser, B. (2020). The tracked society: Interdisciplinary approaches on online tracking. New Media & Society, 22(11), 1945–1956.
Kopalle, P. K., Gangwar, M., Kaplan, A., Ramachandran, D., Reinartz, W., & Rindfleisch, A. (2022). Examining artificial intelligence (AI) technologies in marketing via a global lens: Current trends and future research opportunities. International Journal of Research in Marketing, 39(2), 522–540.
Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens versus the Internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103–156.
Krafft, P. M., Young, M., Katell, M., Lee, J. E., Narayan, S., Epstein, M., et al. (2021). An action-oriented AI policy toolkit for technology audits by community advocates and activists. Conference on Fairness, Accountability, and Transparency (FAccT), 772–781.
Kumar, V., Rajan, B., Venkatesan, R., & Lecinski, J. (2019). Understanding the role of artificial intelligence in personalized engagement marketing. California Management Review, 61(4), 135–155.
Kunz, W. H., & Wirtz, J. (2023). Corporate digital responsibility (CDR) in the age of AI: implications for interactive marketing. Journal of Research in Interactive Marketing. https://doi.org/10.1108/JRIM-06-2023-0176
Larsen, G., & Lawson, R. (2013). Consumer rights: An assessment of justice. Journal of Business Ethics, 112(3), 515–528.
Lee, C., & Coughlin, J. F. (2015). Older adults’ adoption of technology: An integrated approach to identifying determinants and barriers. Journal of Product Innovation Management, 32(5), 747–759.
Lee, E., & Workman, J. (2018, June 6). Who are vulnerable consumers and how can you learn to recognise their needs? Data & Marketing Association Contact Centre Council. Retrieved August 14, 2023 from https://dma.org.uk/article/who-are-vulnerable-consumers-and-how-can-you-learn-to-recognise-their-needs
Leino, H. M. (2017). Secondary but significant: Secondary customers’ existence, vulnerability and needs in care services. Journal of Services Marketing, 31(7), 760–770.
Leino, H. M., Hurmerinta, L., & Sandberg, B. (2021). Balancing service inclusion for primary and secondary customers experiencing vulnerabilities. Journal of Services Marketing, 35(6), 692–705.
Lewis, C., Mehmet, M., Quinton, S., & Reynolds, N. (2023). Methodologies for researching marginalised and/or potentially vulnerable groups. International Journal of Market Research, 65(2–3), 147–154.
Libai, B., Bart, Y., Gensler, S., Hofacker, C., Kaplan, A., Kötterheinrich, K., & Kroll, E. B. (2020). Brave new world? On AI and the management of customer relationships. Journal of Interactive Marketing, 51, 44–56.
Lippi, M., Contissa, G., Lagioia, F., Micklitz, H.-W., Palka, P., Sartor, G., et al. (2019). Consumer protection requires artificial intelligence. Nature Machine Intelligence, 1(4), 168–169.
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.
Longoni, C., & Cian, L. (2022). Artificial intelligence in utilitarian vs. hedonic contexts: The “word-of-machine” effect. Journal of Marketing, 86(1), 91–108.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
Lu, F.-C., & Sinha, J. (2023). Understanding retail exclusion and promoting an inclusive customer experience at transforming service encounters. The Journal of Consumer Affairs. https://doi.org/10.1111/joca.12529
Lukas, M. F., & Howard, R. C. (2023). The influence of budgets on consumer spending. Journal of Consumer Research, 49(5), 697–720.
Lusseau, D., & Mancini, F. (2019). Income-based variation in Sustainable Development Goal interaction networks. Nature Sustainability, 2(3), 242–247.
Lythreatis, S., Singh, S. K., & El-Kassar, A.-N. (2022). The digital divide: A review and future research agenda. Technological Forecasting and Social Change, 175, 121359.
Madan, S., Johar, G. V., Berger, J., Chandon, P., Chandy, R., Hamilton, R., John, L. K., Labroo, A. A., Liu, P. J., Lynch, J. G., Jr., Mazar, N., Mead, N. L., Mittal, V., Moorman, C., Norton, M. I., Roberts, J., Soman, D., Viswanathan, M., & White, K. (2023). Reaching for rigor and relevance: Better marketing research for a better world. Marketing Letters, 34(1), 1–12.
Mariani, M. M., Perez-Vega, R., & Wirtz, J. (2022). AI in marketing, consumer research and psychology: A systematic literature review and research agenda. Psychology & Marketing, 39(4), 755–776.
Marinova, D., de Ruyter, K., Huang, M.-H., Meuter, M. L., & Challagalla, G. (2017). Getting smart: Learning from technology-empowered frontline interactions. Journal of Service Research, 20(1), 29–42.
Martin, K. D., & Murphy, P. E. (2017). The role of data privacy in marketing. Journal of the Academy of Marketing Science, 45(2), 135–155.
Matz, S. C., & Netzer, O. (2017). Using big data as a window into consumers’ psychology. Current Opinion in Behavioral Sciences, 18, 7–12.
Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences, 114(48), 12714–12719.
McLennan, S., Fiske, A., Celi, L. A., Müller, R., Harder, J., Ritt, K., Haddadin, S., & Buyx, A. (2020). An embedded ethics approach for AI development. Nature Machine Intelligence, 2(9), 488–490.
Mehta, P., Jebarajakirthy, C., Maseeh, H. I., Anubha, A., Saha, R., & Dhanda, K. (2022). Artificial intelligence in marketing: A meta-analytic review. Psychology & Marketing, 39(11), 2013–2038.
Mende, M., Scott, M. L., van Doorn, J., Grewal, D., & Shanks, I. (2019). Service robots rising: How humanoid robots influence service experiences and food consumption. Journal of Marketing Research, 56(4), 535–556.
Mende, M., & Scott, M. L. (2021). May the force be with you: Expanding the scope for marketing research as a force for good in a sustainable world. Journal of Public Policy & Marketing, 40(2), 116–125.
Mende, M., Scott, M. L., Ubal, V. O., Hassler, C. M. K., Harmeling, C. M., & Palmatier, R. W. (2023). Personalized communication as a platform for service inclusion? Initial insights into interpersonal and AI-based personalization for stigmatized consumers. Journal of Service Research. https://doi.org/10.1177/10946705231188676
Microsoft. (2022). Seeing AI. Retrieved August 18, 2022 from https://www.microsoft.com/en-us/ai/seeing-ai
Minevich, M. (2021). 15 AI ethics leaders showing the world the way of the future. Forbes. Retrieved August 14, 2023 from https://www.forbes.com/sites/markminevich/2021/08/09/15-ai-ethics-leaders-showing-the-world-the-way-of-the-future/?sh=6688a8c36bdf
Mogaji, E., Soetan, T. O., & Kieu, T. A. (2020). The implications of artificial intelligence on the digital marketing of financial services to vulnerable customers. Australasian Journal of Marketing, 29(3), 235–242.
Mökander, J., & Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI. Minds & Machines, 31(2), 323–327.
Mökander, J., Axente, M., Casolari, F., & Floridi, L. (2022). Conformity assessments and post-market monitoring: A guide to the role of auditing in the proposed European AI regulation. Minds and Machines, 32(2), 241–268.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168.
Mozafari, N., Weiger, W. H., & Hammerschmidt, M. (2022). Trust me, I’m a bot – Repercussions of chatbot disclosure in different service frontline settings. Journal of Service Management, 33(2), 221–245.
NICE. (2022). NICE Enlighten AI for vulnerable customers. Retrieved August 18, 2022 from https://www.nice.com/resources/nice-enlighten-ai-for-vulnerable-customers-infographic
Pantano, E., & Scarpi, D. (2022). I, robot, you, consumer: Measuring artificial intelligence types and their effect on consumers emotions in service. Journal of Service Research, 25(4), 583–600.
Pantano, E., Viassone, M., Boardman, R., & Dennis, C. (2022). Inclusive or exclusive? Investigating how retail technology can reduce old consumers’ barriers to shopping. Journal of Retailing and Consumer Services, 68, 103074.
Pavia, T. M., & Mason, M. J. (2014). Vulnerability and physical, cognitive, and behavioral impairment: Model extensions and open questions. Journal of Macromarketing, 34(4), 471–485.
Pavone, G., Meyer-Waarden, L., & Munzel, A. (2023). Rage against the machine: Experimental insights into customers’ negative emotional responses, attributions of responsibility, and coping strategies in artificial intelligence–based service failures. Journal of Interactive Marketing, 58(1), 52–71.
Pitardi, V., Wirtz, J., Paluch, S., & Kunz, W. H. (2022). Service robots, agency and embarrassing service encounters. Journal of Service Management, 33(2), 389–414.
Poole, S. M., Grier, S. A., Thomas, K. D., Sobande, F., Ekpo, A. E., Torres, L. T., Addington, L. A., Weekes-Laidlow, M., & Henderson, G. R. (2021). Operationalizing critical race theory in the marketplace. Journal of Public Policy & Marketing, 40(2), 126–142.
Puntoni, S., Walker Reczek, R., Giesler, M., & Botti, S. (2021). Consumers and artificial intelligence: An experiential perspective. Journal of Marketing, 85(1), 131–151.
Ragnedda, M. (2018). Conceptualizing digital capital. Telematics and Informatics, 35(8), 2366–2375.
Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141.
Ramadan, Z., Farah, M. R., & El Essrawi, L. (2021). From Amazon.com to Amazon.love: How Alexa is redefining companionship and interdependence for people with special needs. Psychology & Marketing, 38(4), 596–609.
Recordsure. (2022). How can AI help support vulnerable customers? Retrieved August 18, 2022 from https://recordsure.com/blog/can-ai-help-support-vulnerable-customers/
Resséeguier, A., & Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society, 7(2), 1–5.
Ringold, D. J. (2005). Vulnerability in the marketplace: Concepts, caveats, and possible solutions. Journal of Macromarketing, 25(2), 202–214.
Roberts, H., Cowls, J., Hine, E., Mazzi, F., Tsamados, A., Taddeo, M., & Floridi, L. (2021). Achieving a ‘Good AI Society’: Comparing the aims and progress of the EU and the US. Science and Engineering Ethics, 27, 68.
Russell-Bennett, R., Kelly, N., Letheren, K., & Chell, K. (2023). The 5R Guidelines for a strengths-based approach to co-design with customers experiencing vulnerability. International Journal of Market Research, 65(2–3), 167–182.
Rust, R. T. (2020). The future of marketing. International Journal of Research in Marketing, 37(1), 15–26.
Salisbury, L. C., Blanchard, S. J., Brown, A. L., Nenkov, G. Y., Hill, R. P., & Martin, K. D. (2023). Beyond income: Dynamic consumer financial vulnerability. Journal of Marketing, 87(5), 657–678.
Schiff, D., Borenstein, J., Biddle, J., & Laas, K. (2021). AI Ethics in the public, private, and NGO sectors: A review of a global document collection. IEEE Transactions on Technology and Society, 2(1), 31–42.
Shankar, V. (2018). How artificial intelligence (AI) is reshaping retailing. Journal of Retailing, 94(4), vi–xi.
Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2023). Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nature Machine Intelligence, 5(1), 46–57.
Shultz, C. J., & Holbrook, M. B. (2009). The paradoxical relationships between marketing and vulnerability. Journal of Public Policy & Marketing, 28(1), 124–127.
Siltaloppi, J., Rajala, R., & Hietala, H. (2020). Integrating CSR with business strategy: A tension management perspective. Journal of Business Ethics, 174(3), 507–527.
Sohn, S., Schnittka, O., & Seegebarth, B. (2023). Consumer responses to firm-owned devices in self-service technologies: Insights from a data privacy perspective. International Journal of Research in Marketing. https://doi.org/10.1016/j.ijresmar.2023.08.003
Stachl, C., Au, Q., Schoedel, R., Gosling, S. D., Harari, G. M., Buschek, D., Völkel, S. T., Schuwerk, T., Oldemeier, M., Ullmann, T., Hussmann, H., Bischl, B., & Bühner, M. (2020). Predicting personality from patterns of behavior collected with smartphones. Proceedings of the National Academy of Sciences, 117(30), 17680–17687.
Stahl, B. C., Andreou, A., Brey, P. A. E., Hatzakis, T., Kirichenko, A., Macnish, K., Shaelou, S. L., Patel, A., Ryan, M., & Wright, D. (2021). Artificial intelligence for human flourishing – Beyond principles for machine learning. Journal of Business Research, 124, 374–388.
Stix, C. (2021). Actionable principles for artificial intelligence policy: Three pathways. Science and Engineering Ethics, 27, 15.
Strümke, I., Slavkovik, M., & Stachl, C. (2023). Against algorithmic exploitation of human vulnerabilities. arXiv. https://doi.org/10.48550/arXiv.2301.04993
Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31(2), 447–464.
Thorun, C., & Diels, J. (2020). Consumer protection technologies: An investigation into the potentials of new digital technologies for consumer policy. Journal of Consumer Policy, 43(1), 177–191.
User Way. (2022). UserWay makes accessibility easy. Retrieved August 18, 2022, from https://userway.org
Valendin, J., Reutterer, T., Platzer, M., & Kalcher, K. (2022). Customer base analysis with recurrent neural networks. International Journal of Research in Marketing, 39(4), 988–1018.
Van der Byl, C. A., & Slawinski, N. (2015). Embracing tensions in corporate sustainability: A review of research from win-wins and trade-offs to paradoxes and beyond. Organization & Environment, 28(1), 54–79.
van Doorn, J., Smailhodzic, E., Puntoni, S., Li, J., Schumann, J. H., & Holthöwer, J. (2023). Organizational frontlines in the digital age: The Consumer-Autonomous Technology–Worker (CAW) framework. Journal of Business Research, 164, 114000.
van Esch, P., Cui, Y., & Jain, S. P. (2021). Stimulating or intimidating: The effect of AI-enabled in-store communication on consumer patronage likelihood. Journal of Advertising, 50(1), 63–80.
Vieira, A. D., Leite, H., Vitoria, A., & Volochtchuk, L. (2022). The impact of voice assistant home devices on people with disabilities: A longitudinal study. Technological Forecasting & Social Change, 184, 121961.
Vink, J., & Koskela-Huotari, K. (2022). Building reflexivity using service design methods. Journal of Service Research, 25(3), 371–389.
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M., & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 233.
Vorobeva, D., El Fassi, Y., Costa Pinto, D., Hildebrand, D., Herter, M. M., & Mattila, A. S. (2022). Thinking skills don’t protect service workers from replacement by artificial intelligence. Journal of Service Research, 25(4), 601–613.
Wallach, K. A., & Popovich, D. (2023). Cause beneficial or cause exploitative? Using joint motives to increase credibility of sustainability efforts. Journal of Public Policy & Marketing, 42(2), 187–202.
Wei, K. K., Teo, H.-H., Chan, H. C., & Tan, B. C. Y. (2011). Conceptualizing and testing a social cognitive model of the digital divide. Information Systems Research, 22(1), 170–187.
Wertenbroch, K., Schrift, R. Y., Alba, J. W., Barasch, A., Bhattacharjee, A., Giesler, M., Knobe, J., Lehmann, D. R., Matz, S. C., Nave, G., Parker, J. R., Puntoni, S., Zheng, Y., & Zwebner, Y. (2020). Autonomy in consumer choice. Marketing Letters, 31(4), 429–439.
Williams, E. F., & Steffel, M. (2014). Double standards in the use of enhancing products by self and others. Journal of Consumer Research, 41(2), 506–525.
Wünderlich, N. V., Hogreve, J., Chowdhury, I. N., Fleischer, H., Mousavi, S., Rötzmeier-Keuper, J., & Sousa, R. (2020). Overcoming vulnerability: Channel design strategies to alleviate vulnerability perceptions in customer journeys. Journal of Business Research, 116, 377–386.
Xiao, L., & Kumar, V. (2021). Robotics for customer service: A useful complement or an ultimate substitute? Journal of Service Research, 24(1), 9–29.
Yalcin, G., Lim, S., Puntoni, S., & van Osselaer, S. M. J. (2022). Thumbs up or down: Consumer reactions to decisions by algorithms versus humans. Journal of Marketing Research, 59(4), 696–717.
Yap, S.-F., Xu, Y., & Tan, L. (2021). Coping with crisis: The paradox of technology and consumer vulnerability. International Journal of Consumer Studies, 45(6), 1239–1257.
Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4), 1036–1040.
Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., Ngo, H., Niebles, J. C., Sellitto, M., Sakhaee, E., Shoham, Y., Clark, J., & Perrault, R. (2022). The AI index 2022 annual report. Stanford Institute for Human-Centered AI, Stanford University.
Zhu, Y., Zhang, J., Wu, J., & Liu, Y. (2022). AI is better when I’m sure: The influence of certainty of needs on consumers’ acceptance of AI chatbots. Journal of Business Research, 150, 642–652.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Dhruv Grewal served as Guest Editor for this article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Hermann, E., Williams, G.Y. & Puntoni, S. Deploying artificial intelligence in services to AID vulnerable consumers. J. of the Acad. Mark. Sci. 52, 1431–1451 (2024). https://doi.org/10.1007/s11747-023-00986-8