Widespread application of artificial intelligence (AI) is fundamentally reshaping the way companies conduct business and provide their services (Davenport et al., 2020; Huang & Rust, 2018, 2021a, b; Kopalle et al., 2022; Mariani et al., 2022; van Doorn et al., 2023). Consumers increasingly encounter AI in a broad range of personal, social, and professional contexts, for instance, when using AI-driven products and services (e.g., customer support chatbots, intelligent personal assistants). Owing to its advantages, including increased efficiency and cost reduction (Brynjolfsson et al., 2023; Huang & Rust, 2021a; Kunz & Wirtz, 2023), corporate adoption of AI technologies is increasing rapidly: global corporate investment in AI surpassed $100 billion in 2020 and reached $176.47 billion in 2021, up from $5.23 billion in 2013 (Zhang et al., 2022).

AI applications in the marketing and service industries mainly concentrate on two approaches: fine-grained, data-driven tailoring of offerings and interactions to customers' wants, needs, and interests, initiated either by the marketer (i.e., personalization) or by the customer (i.e., customization). Even though service research (e.g., AI-based, personalized communication; Mende et al., 2023) and practice (e.g., AI-powered digital visual assistants for instantaneous image-to-text generation; Be My Eyes, 2023) have recently started to consider the potential of AI technologies to better serve vulnerable consumers, the extensive advantages AI technologies can offer remain underutilized. Moreover, many companies may be missing out on an opportunity to serve their vulnerable consumers more effectively, due to a prevalent misconception that consumer vulnerability is a status confined to a relatively small consumer segment.

Contrary to this misconception, consumer vulnerability is defined as a dynamic state of powerlessness (Baker et al., 2005) and susceptibility to harm (Hill & Sharma, 2020; Salisbury et al., 2023) that can pertain to any consumer. Considering the proliferation of national and supranational laws and regulations aiming to protect vulnerable consumers, it has become increasingly imperative for companies to learn how best to interact with these consumers. Because consumer vulnerability is often conceptualized as a state that transcends consumers' status (e.g., visually impaired, elderly, or obese consumers; Baker et al., 2005; Hill & Sharma, 2020), companies can find it challenging to detect vulnerable consumers, to address their unique needs when serving them, to prevent or mitigate potential discrimination and inequalities, and to promote social justice. Fortunately, technological advancements, particularly in the realm of AI, have empowered companies to adopt socially responsible business practices and to improve their services to better cater to vulnerable consumers.

The adoption of, and investment in, AI technologies to better serve vulnerable consumers can be resource-intensive for companies and necessitate substantial time and financial commitment. Such endeavors may also entail managing trade-offs between a company's economic objectives (e.g., greater estimated profits from serving "mainstream" consumers, the cost of developing new systems for potentially small consumer segments) and its social objectives (e.g., better serving vulnerable consumers, countering discrimination and inequalities, fostering social justice). Given the highly dynamic state of consumer vulnerability, however, it becomes apparent that any "mainstream" consumer could experience vulnerability under specific circumstances. Thus, being equipped to adeptly identify and serve vulnerable consumers could benefit the entire customer base. Furthermore, overlooking the (dynamically) changing needs of consumers when they experience vulnerability can adversely affect customer satisfaction, word-of-mouth behavior, and corporate reputation. Overall, the business case, that is, the alignment and simultaneous achievement of economic and social objectives (Siltaloppi et al., 2020; Van der Byl & Slawinski, 2015), becomes more likely, and upfront investments in developing AI technologies are likely to be amortized as well.

In response to the calls for leveraging both marketing (e.g., Chandy et al., 2021; Madan et al., 2023; Mende & Scott, 2021) and AI for social good and sustainable development (e.g., Cowls et al., 2021; Du & Sen, 2023; Floridi et al., 2018, 2020; Vinuesa et al., 2020), the current paper aims to explore the role of AI in enhancing services (and outcomes) for vulnerable consumers. It also seeks to offer guidance to businesses on best practices for utilizing AI in interactions with vulnerable consumers. In doing so, we develop a framework that conceptualizes the key qualities of AI technologies in relation to serving vulnerable consumers. Specifically, our AID framework highlights how accessible, interactive, and dynamic AI technologies can empower vulnerable consumers by providing accessible services, optimizing service experiences, and enhancing consumer decision-making.

Thereby, our work makes two main contributions to the literature. First, previous research in marketing has predominantly focused on studying mainstream consumers and their experiences with AI technologies (e.g., Castelo et al., 2019a, b, 2023; Longoni & Cian, 2022). To the best of our knowledge, however, this literature has neglected to study potentially marginalized, vulnerable consumer groups and their interactions with AI technologies, or has taken a limited, status-based perspective (e.g., race; Poole et al., 2021). Moreover, this line of work has primarily examined consumers' responses to AI (i.e., a demand-side perspective), but has overlooked how companies can effectively design and integrate AI into their services (i.e., a supply-side perspective). To address this gap, our AID framework combines insights from research on the psychology of AI (e.g., Longoni et al., 2019; Puntoni et al., 2021), the literature on consumer vulnerability (e.g., Baker et al., 2005; Hill & Sharma, 2020), and scholarly work on AI for social good (e.g., Cowls et al., 2021; Floridi et al., 2018, 2020). In response to the calls for rethinking marketing for a better world and the greater good (Chandy et al., 2021; Madan et al., 2023; Mende & Scott, 2021), we illustrate how companies can harness AI technologies to empower vulnerable consumers and deliver socially beneficial outcomes by mitigating digital inequalities and the digital divide (e.g., Lythreatis et al., 2022; Ragnedda, 2018; Wei et al., 2011), thereby aligning with Sustainable Development Goal 3 (SDG 3), "Good Health and Well-Being," and SDG 10, "Reduced Inequalities" (Cowls et al., 2021; Vinuesa et al., 2020). This contribution also carries importance as the current number of AI initiatives targeting SDG 10 remains limited (Cowls et al., 2021)—despite the potential for pursuing SDG 10 to yield compounding positive effects on all other goals (Lusseau & Mancini, 2019) and AI's ability to act as a catalyst for achieving the SDGs (Du & Sen, 2023; Vinuesa et al., 2020). Overall, companies have the opportunity not only to benefit society at large by better serving vulnerable consumers, thereby reducing inequalities and enhancing well-being, but also to uphold the ethical principle of justice when developing technology-based services.

Second, answering the call for marketing research to acknowledge the interrelatedness of stakeholders (Hillebrand et al., 2015), our work provides a multi-stakeholder perspective and addresses the implications and challenges tied to the design, development, and deployment of AI technologies for various stakeholders, including researchers, managers, consumers, and policy makers. Despite the increasing adoption of AI technologies for engaging with vulnerable consumers, many companies face difficulties in designing or adapting their service offerings to aptly address the needs of vulnerable consumers. This challenge is further amplified by the escalating number of laws and regulations aimed at protecting these consumers (Financial Conduct Authority, 2021; International Organization for Standardization, 2022). To address this issue, the present paper introduces a simple yet effective framework for designing and developing AI-driven service applications and systems. Specifically, our AID framework aims to assist companies in designing their AI technologies to effectively tailor to the unique requirements of vulnerable consumers and to enhance the customer experience at different touchpoints along the customer journey.

The remainder of the paper is structured as follows. After shedding light on the consumer vulnerability concept and vulnerable consumers in the age of AI, we illustrate our AID framework and discuss implications and challenges for researchers, managers, consumers, and policy makers.

Consumer vulnerability

Consumer vulnerability is defined as “a dynamic state that varies along a continuum as people experience more or less susceptibility to harm, due to varying conditions and circumstances” (Salisbury et al., 2023, p. 6), which inhibits their optimal functioning and individual agency in the marketplace (Baker et al., 2005). Hence, consumer vulnerability constitutes a state rather than a fixed status (Baker et al., 2005; Hill & Sharma, 2020) and cannot be simply equated with specific demographic or personal characteristics (e.g., elderly, blind, lower income, illiterate), stigmatization, or unmet needs. Instead, any consumer can experience vulnerability in the marketplace, regardless of their status or demographic characteristics (Shultz & Holbrook, 2009; Wünderlich et al., 2020).

For a multitude of reasons, millions of consumers experience vulnerability every day, which subsequently affects their decision-making capacity, purchase decisions, and overall behavior. For example, according to the Financial Lives 2022 survey, 47% of adults in the United Kingdom exhibited one or more characteristics of vulnerability (i.e., health, resilience, capability, life events; Financial Conduct Authority, 2022). A poignant instance of a life event that can render consumers vulnerable is the loss of a loved one: Amidst the myriad formalities, including life insurance collection or funeral arrangements, those grieving may also feel overwhelmed as they grapple with bereavement. They may struggle, for instance, to comprehend information presented to them or even to make decisions. Beyond the impact of consumers' moods and psychological states (e.g., going through depression after a painful break-up), vulnerability can also temporarily arise from other conditions, such as short-term financial constraints (e.g., prior to receiving wages) or a lack of self-control (e.g., engaging in compulsive buying on Black Friday, stress eating). Importantly, these conditions can lead to vulnerability in distinct ways and dynamically shape consumers' functioning within the marketplace.

The concept of consumer vulnerability is not without controversy, and recent research has distilled limited access to and limited control over resources as its key antecedents, both of which can manifest through personal experience or observation (Hill & Sharma, 2020; Pavia & Mason, 2014). In essence, consumer vulnerability is "a state in which consumers are subject to harm because their access to and control over resources is restricted in ways that significantly inhibit their abilities to function in the marketplace" (Hill & Sharma, 2020, p. 554). Table 1 provides an overview, along with examples, of the limited resources and restricted control linked with consumer vulnerability, operating at the individual (i.e., consumers' self-related assets, abilities, and sources of control), interpersonal (i.e., social factors, interactions, and sources of control), and structural (i.e., marketplace and external factors and sources of control) levels.

Table 1 Categorization and examples of consumer vulnerability

Conceptualizing consumer vulnerability as a state recognizes its potential to vary in intensity (e.g., more vs. less extreme states of vulnerability), nature (e.g., physical, psychological), and duration (e.g., temporary vs. permanent; Hill & Sharma, 2020). Furthermore, while certain resource-related antecedents of consumer vulnerability are more directly (e.g., lack of material/financial resources due to economic downturns) or easily detectable (e.g., lack of physical resources due to physical impairments), others might not be immediately apparent (e.g., family breakdown, depression, addictions). Additionally, it is important to acknowledge that constraints on resource access and control can extend to the social network (e.g., family members nursing or taking care of vulnerable consumers) or support group of the original (i.e., primary) vulnerable person (Pavia & Mason, 2014). Secondary vulnerability results from the experiences of vulnerability faced by the primary consumer and can lead to diverse service needs of secondary vulnerable consumers, whether other-related (e.g., concern for the primary consumer's well-being) or self-related (e.g., emotional support, provision of adequate information; Leino et al., 2021). Given the interrelatedness of needs and well-being between primary and secondary consumers, it becomes imperative for companies to understand the holistic spectrum of antecedents and consequences of vulnerabilities, while catering to the needs of both consumer groups (Leino, 2017; Leino et al., 2021).

Due to the dynamic and broad nature of vulnerability, companies often encounter difficulties in efficiently and effectively identifying and serving vulnerable customers. For instance, a recent survey conducted among equity release advisers revealed that only 12% of the advisers considered it easy to identify vulnerable clients, despite 84% of them stating that identifying vulnerable consumers is one of their biggest priorities (Jones, 2020). Historically, companies predominantly relied on status-based criteria such as demographics (e.g., elderly, low-income groups) or self-reported indicators (e.g., consumers explicitly identifying themselves as vulnerable). However, with advancements in AI technologies, novel approaches have surfaced for detecting these consumers and mitigating their vulnerabilities, enabling companies to address consumer vulnerabilities effectively and efficiently. For example, AI-powered customer base analysis (Valendin et al., 2022) can facilitate the generation of predictive models to gauge the likelihood of consumer vulnerability across various service contexts and to suggest tailored intervention strategies based on the calculated risk and identified category of vulnerability. Nevertheless, the implementation of AI technologies can necessitate substantial resources (e.g., time, labor, money) and potentially entail significant alterations to services and marketplaces, leading to tensions between a company's economic objectives and its commitment to consumer well-being. Thus, a pertinent question arises: Why should companies pay attention to and invest in mitigating consumer vulnerability and integrating AI systems when managing customer relationships?

First, companies have a legal obligation to treat vulnerable consumers fairly, as outlined by consumer rights and regulation (Larsen & Lawson, 2013). For instance, the European Union (EU) Unfair Commercial Practices Directive (Article 5) prohibits commercial practices that distort the economic behavior of vulnerable consumers, and the EU Artificial Intelligence Act bars the "use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability" (European Parliament, 2023). Moreover, there are industry-specific regulations in place to ensure the fair treatment of vulnerable consumers, for instance, by the International Organization for Standardization (ISO 22458:2022) or the United Kingdom's Financial Conduct Authority (Financial Conduct Authority, 2021; International Organization for Standardization, 2022). The latter states that firms should take action (a) to understand the needs of vulnerable consumers, (b) to develop the right skills and capabilities of their staff to recognize and respond to these needs, (c) to respond to these needs throughout product design, flexible service provision, and communications, and (d) to monitor and assess whether these needs are met and responded to. Notably, advancements in AI technologies provide companies with significant avenues to actively contribute to and support all of these actions.

Second, by serving vulnerable consumers better, companies can reap substantial financial benefits. While this approach involves a short-term investment, it can yield long-term benefits by meticulously addressing the dynamically changing wants and needs of consumers who might be subject to primary or secondary vulnerability. As a result, companies stand to improve customer satisfaction and loyalty, ultimately fueling profitability and bolstering brand equity. Given the closely intertwined nature of customer satisfaction and loyalty, additional positive spillover and multiplier effects can be expected to emerge (Leino et al., 2021).

Third, granting vulnerable consumers marketplace access and alleviating consumer vulnerability carries significant societal implications: it directly contributes to improving societal well-being and reduces inequalities in market participation, healthcare, and employment, among other benefits (Wünderlich et al., 2020). Companies that strive to mitigate, resolve, or at best prevent consumer vulnerability play a role in mitigating social inequalities, thereby actively contributing to a central antecedent of consumer well-being—social justice (Anderson et al., 2013; Fisk et al., 2018; Johns & Davey, 2019). Importantly, justice plays a pivotal role in the development and deployment of AI (Floridi et al., 2018; Jobin et al., 2019; Morley et al., 2020). The justice principle advocates fairness, the avoidance of unwanted biases and discrimination, the equitable sharing of benefits, and the cultivation of solidarity (Jobin et al., 2019; Morley et al., 2020; Thiebes et al., 2021), all working to ultimately strengthen social cohesion (Jobin et al., 2019).

Overall, companies can face tensions between the social objective of better accounting for the dynamic states of consumer vulnerability to foster social justice and consumer well-being, and economic objectives (e.g., profitability; Hahn et al., 2010; Van der Byl & Slawinski, 2015). Our work highlights that addressing consumer vulnerability does not necessarily imply an enduring trade-off or a zero-sum situation. While some consumers may be more predisposed to experiencing vulnerability in their daily lives, vulnerability can affect any consumer, both directly and indirectly. Accordingly, companies should be equipped to navigate the dynamic occurrence of vulnerability among consumers (i.e., primary vulnerability) and their social networks and support groups (i.e., secondary vulnerability), and should consider integrating AI technologies into consumer interactions to detect and serve vulnerable consumers more effectively and efficiently. We further argue that the high degree of service personalization and customer centricity suggested by our AID framework not only precludes the "alienation of mainstream consumers" but, conversely, can yield substantial benefits for them. Hence, companies' socially responsible efforts in mitigating consumer vulnerability can elicit positive responses from all consumers, irrespective of their vulnerability state, while also capitalizing on win–win outcomes facilitated by AI's integration within services, including profit generation for companies, enhanced reputation and corporate image, and doing good for society as a whole (Chandy et al., 2021).

Before we present our AID framework, we briefly illustrate the role of AI in marketing and consumers’ responses to it.

(Vulnerable) Consumers in the age of AI

AI can be understood as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation" (Kaplan & Haenlein, 2019, p. 17). With its extensive potential for personalization and customization, AI is progressively integrated into numerous marketing activities, including decisions about products, services, prices, communication, and distribution (i.e., the marketing mix; Davenport et al., 2020; Huang & Rust, 2021a), throughout the entire customer journey (Hoyer et al., 2020) and the process of service creation, delivery, and interaction (Huang & Rust, 2021b). As a result of companies embracing AI, its influence extends to shaping how consumers think, feel, and behave (Puntoni et al., 2021). Today, AI technologies track consumers' fitness activities, give consumers recommendations on what to buy, respond to consumers' requests and complaints, and even prepare their cocktails at a bar.

In response to AI's transformative influence on business and daily activities, marketing scholars have been investigating the role of AI in shaping consumer experiences. As AI wields substantial power to (re-)shape business and social environments, personal interactions, workplaces, and human agency, it presents valuable opportunities for companies with respect to marketing strategy and actions (e.g., Huang & Rust, 2021a), services (e.g., Blut et al., 2021; Huang & Rust, 2021b; Mende et al., 2019; Xiao & Kumar, 2021), retailing (e.g., de Bellis & Venkataramani Johar, 2020; Guha et al., 2021; Shankar, 2018), customer experience (e.g., Grewal et al., 2020c; Hoyer et al., 2020; Puntoni et al., 2021), relationships (e.g., Libai et al., 2020), and engagement (e.g., Kumar et al., 2019). Despite the evident benefits of AI-driven personalization and customization, it is important to acknowledge that AI can treat consumer segments or individual consumers differently on the basis of demographic, psychological, and economic factors (Du & Sen, 2023), thereby potentially giving rise to consumer vulnerability. Such potentially discriminatory marketing methods raise substantial ethical concerns (e.g., De Bruyn et al., 2020; Du & Xie, 2021; Hermann, 2022), particularly for vulnerable consumers (Argawal et al., 2020). Notably, AI technologies that impact or exploit consumer vulnerability could manipulate consumers, decrease their autonomy, or change their behavior in ways that are not in their best interest (Strümke et al., 2023).

Marketing researchers have been increasingly studying consumers' reactions to algorithmic versus human decision-making and the underlying psychological processes. The overarching finding within this body of literature is that consumers' reactions towards algorithms and AI versus humans depend on various factors. For example, consumers react less positively when AI makes morally relevant trade-offs (Dietvorst & Bartels, 2022), when it makes favorable (vs. unfavorable) decisions about them (Yalcin et al., 2022), when it makes offers that are worse than expected (Garvey et al., 2023), or when its use is perceived as being motivated by firm benefits at the expense of customer benefits (Castelo et al., 2023). Moreover, consumers are found to perceive algorithms and AI as less authentic (Jago, 2019; Jago et al., 2022) and less moral (Bigman & Gray, 2018; Giroux et al., 2022) than humans, and as neglecting their unique characteristics (Longoni et al., 2019). Conversely, there are also circumstances under which consumers react more positively towards AI. For instance, consumers respond positively to AI in embarrassing service encounters (Holthöwer & van Doorn, 2023; Pitardi et al., 2022), when their needs are more certain (Zhu et al., 2022), and when the task at hand is objective (vs. subjective; Castelo et al., 2019a, b).

To the best of our knowledge, however, empirical (demand-side) marketing research on AI and algorithmic decision-making has primarily either neglected to investigate vulnerable consumers' reactions to and interactions with AI technologies or adopted a limited, status-based perspective on consumer vulnerability by concentrating on specific groups of vulnerable consumers. For instance, previous research has demonstrated that intelligent personal assistants can help consumers with disabilities regain independence and freedom (Ramadan et al., 2021; Vieira et al., 2022). Conversely, older consumers can feel socially excluded and inadequately skilled when using retail technology autonomously (Pantano et al., 2022). Moreover, the use of AI-based service providers that match vulnerability attributes (i.e., obesity) can inadvertently offend consumers, as they might perceive that AI, compared to fellow humans, cannot relate to their (human) experience of living with such vulnerability attributes (Mende et al., 2023).

In our AID framework, we aim to complement prior work by providing a supply-side perspective and strategic guidance for companies to leverage AI technologies, thereby improving vulnerable consumers' interactions with AI and ultimately enhancing their well-being. Specifically, we argue that AI can aid vulnerable consumers, be harnessed for social good, and reduce inequalities when both the technology itself and the services it powers are accessible, interactive, and dynamic. In the following, we delve into these three qualities and discuss how they should shape the development and deployment of AI for service provision and innovation.

Designing AI in service to aid vulnerable consumers: AID framework

AI technologies in service can be a double-edged sword for vulnerable consumers, with their effectiveness hinging on how and why they are implemented. On the one hand, there may be barriers to adoption (e.g., ease of use, affordability, technology readiness, perceived risk; de Bellis & Venkataramani Johar, 2020; Lee & Coughlin, 2015), which can be amplified by the inherently high-tech nature of AI. For example, a vulnerable consumer might hesitate to interact with a simple chatbot due to its standardized responses or its perceived lack of receptiveness to their diminished material (e.g., after a recent job loss) or emotional (e.g., grief) resources. On the other hand, when implemented adeptly, AI has the potential to render services more accessible, to interactively ameliorate customer experiences and journeys in services, and to dynamically improve consumer decision-making. Returning to the earlier example, AI can support service personnel in identifying vulnerable consumers by detecting subtle cues (i.e., "feeling AI;" Huang & Rust, 2021a, b) that human employees might overlook, and by guiding them in devising a suitable and effective strategy for catering to the needs of vulnerable consumers.

In the following, we outline how AI technology can improve vulnerable consumers' experiences and introduce our AID framework. In doing so, we adopt a conceptualization of AI attributes that aligns with the sequential steps of the service process and the customer journey, along with the different levels of digital inequalities. Specifically, and as shown in Fig. 1, we propose that accessible AI technologies and services are the precondition for interactive customer experiences and journeys, which, in turn, allow service providers to dynamically assist vulnerable consumers in making beneficial decisions. These qualities further address the three levels of digital inequalities, that is, inequalities in access (first level), uses (second level), and outcomes (third level; Lutz, 2019; Ragnedda, 2018; Wei et al., 2011).

Fig. 1 AID framework and the identification and prediction process of consumer vulnerability

Accessible

The increasing reliance on AI technologies in services (e.g., Blut et al., 2021; Huang & Rust, 2021b) can unintentionally contribute to the technology and digital inequality phenomenon or the digital divide (i.e., societal-level inequalities of digital access; Fisk et al., 2022; or first-level digital inequality; Wei et al., 2011), particularly affecting vulnerable consumers (Argawal et al., 2020; Grewal et al., 2020a; Lu & Sinha, 2023; Pantano et al., 2022). Therefore, it is imperative that AI technologies do not themselves become barriers to service access or sources of vulnerability (i.e., structural antecedents of consumer vulnerability), but rather are intentionally designed to facilitate access. Within the scope of our work, accessible refers to the unhampered possibility and capability to use and engage with AI technologies in services. The initial step towards addressing and alleviating general and technology-induced access barriers to services is to identify and predict potential states or antecedents of consumer vulnerability.

Service agents or frontline employees are often unaware that they are interacting with vulnerable consumers. According to a survey conducted by the Data & Marketing Association's Contact Centre Council, only 4% of service agents indicated that they always recognize when they speak with vulnerable consumers (Lee & Workman, 2018). Detecting vulnerable consumers can be challenging due to various factors, such as time constraints, subtle cues exhibited by consumers, or simply a lack of attention. AI technologies, however, can overcome these (human) shortcomings by leveraging their computational power and machine learning algorithms to perform real-time analysis of consumer responses for more accurate and swift identification and prediction of vulnerability states. For example, companies like Aveni (Aveni, 2022) and Key (Key, 2022) incorporate AI solutions that employ Natural Language Processing (NLP) and big datasets to analyze customer interactions and determine the presence and type of consumer vulnerability. When a consumer raises flagged topics (e.g., anxiety, avolition) during a conversation or exhibits flagged behavioral patterns (e.g., signals of continuing confusion), an alert is sent to the customer representative. Figure 1 depicts a simplified illustration of the consumer vulnerability identification and prediction process. AI technologies analyze consumer data related to material, physical, cognitive, and emotional resources, or proxies thereof (e.g., visual data like movements, conversational/language data and response behavior, numerical data like purchase behavior/history or transactional data). These data are then compared against pre-defined default values (of what is deemed "mainstream"), and any deviations or anomalies are flagged to identify and/or make model-based predictions of consumer vulnerability. The outcomes of this process help determine whether oversight by, and the involvement of, human service employees is necessary for consumers who need special attention and/or treatment. Just as learning and feedback loops are pivotal to AI systems in general, the identification and prediction outcomes should continually guide and enhance the process of identifying and predicting consumer vulnerability. Data on factors that are likely to induce or contribute to consumer vulnerability also hold significant informative and strategic value for companies seeking to prevent and proactively address (future) instances of consumer vulnerability.
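To make this flagging logic concrete, the following minimal sketch illustrates the pattern in Python: conversational topics and behavioral signals are compared against pre-defined "mainstream" baseline values, and any deviation raises an alert for human oversight. All topic lists, feature names, and thresholds are hypothetical illustrations of the general pattern, not the actual systems of vendors such as Aveni or Key.

```python
# Illustrative sketch (hypothetical cues and thresholds): flag deviations from
# pre-defined "mainstream" baselines and escalate to a human representative.
from dataclasses import dataclass

FLAGGED_TOPICS = {"anxiety", "avolition", "bereavement", "debt"}  # assumed list

@dataclass
class InteractionSignals:
    topics: set              # topics extracted from the conversation (e.g., via NLP)
    repeat_questions: int    # proxy for continuing confusion
    response_delay_s: float  # proxy for cognitive or emotional load

BASELINE = {"repeat_questions": 2, "response_delay_s": 8.0}  # assumed defaults

def vulnerability_alerts(s: InteractionSignals) -> list:
    """Return reasons why an interaction should be escalated for human oversight."""
    reasons = [f"flagged topic: {t}" for t in s.topics & FLAGGED_TOPICS]
    if s.repeat_questions > BASELINE["repeat_questions"]:
        reasons.append("behavioral pattern: continuing confusion")
    if s.response_delay_s > BASELINE["response_delay_s"]:
        reasons.append("behavioral pattern: unusually slow responses")
    return reasons

signals = InteractionSignals(topics={"debt", "weather"}, repeat_questions=4,
                             response_delay_s=3.0)
for reason in vulnerability_alerts(signals):
    print(reason)  # each reason would be surfaced to the customer representative
```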

AI technologies, in combination with the abundance of available digital, mobile, and transaction data, have the potential to identify consumer groups with psychological and emotional impairments by predicting both consumers' psychological traits (variability across consumers, such as personality traits, values, and cognitive styles) and states (variability within consumers over time, such as mood, emotions, and attention; e.g., Gladstone et al., 2019; Matz & Netzer, 2017; Matz et al., 2017; Stachl et al., 2020; Youyou et al., 2015). This enables "an unprecedented understanding of consumers' unique needs as they relate to the situation-specific expressions of more stable motivations and preferences" (Matz & Netzer, 2017, p. 9). Recent advances in large language models have even demonstrated the potential for predicting and diagnosing diseases, such as dementia (e.g., Agbavor & Liang, 2022). In addition to vulnerability identification and prediction, AI-driven solutions like intelligent voice assistants can mitigate access barriers. They empower blind or visually impaired consumers to recognize objects, barcodes, and products (e.g., Aipoly, 2022; Microsoft, 2022), to perceive the visual world through an auditory experience (e.g., Microsoft, 2022), and to access and browse websites (e.g., User Way, 2022). These solutions address limitations in individual resources that hinder online and offline service access and participation in the marketplace (e.g., digital illiteracy, visual impairments) or control over those resources (e.g., difficulties in obtaining or assimilating information).
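As a stylized illustration of such model-based prediction, the sketch below fits a toy classifier on hypothetical behavioral proxies for psychological states; any real deployment would require representative, consented data, validated labels, and careful bias auditing.

```python
# Toy sketch of model-based vulnerability-state prediction (hypothetical
# features and labels; for illustration only, not a validated screening tool).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [overdraft_days, late_night_sessions, avg_message_sentiment]
X_train = np.array([[0, 1, 0.8],
                    [12, 9, -0.4],
                    [2, 3, 0.1],
                    [20, 14, -0.7]])
y_train = np.array([0, 1, 0, 1])  # 1 = elevated likelihood of a vulnerability state

model = LogisticRegression().fit(X_train, y_train)
risk = model.predict_proba([[15, 10, -0.5]])[0, 1]
print(f"predicted vulnerability risk: {risk:.2f}")  # could gate human follow-up
```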

AI-enabled access to services constitutes a pivotal step in remedying consumer vulnerability (Shultz & Holbrook, 2009). Offering access to vulnerable consumers provides companies with benefits that extend beyond enlarging their customer base and increasing marketplace participation. First, the potential for using and interacting with AI technologies in service can enhance customer satisfaction and loyalty (e.g., Mehta et al., 2022) for both primary and secondary vulnerable consumers. Second, the unbiasedness, validity, and accuracy of AI predictions rest upon the quality, integrity, and representativeness of the input data (e.g., Barredo Arrieta et al., 2020; Morley et al., 2020). For instance, healthcare communication and services may predominantly cater to older consumers, while certain healthcare issues increasingly affect younger consumers and their significant others (i.e., secondary vulnerability). Similarly, mental health and psychological distress issues might be disproportionately linked to specific demographic characteristics, such as education level, socio-economic status, or employment status. By granting access to vulnerable consumers, companies can gather relevant purchase and interaction data, enriching their customer databases in terms of scale, scope, and attributes. Consequently, they can mitigate the potential underrepresentation or misrepresentation of vulnerable consumers in the data used to train AI models, prevent biased predictions and prejudiced treatment of vulnerable consumers, and ultimately uphold the justice principle of AI ethics.

Interactive

Recognizing that reducing service access barriers is just the initial step, it is essential to complement this with the integration of interactive AI service technologies that facilitate, smooth, and simplify the vulnerable customer journey across all touchpoints, considering consumers' limited resource access and control (see Fig. 1; Wünderlich et al., 2020). In essence, interactive implies responsive communication and personalized treatment that accommodate the specific and dynamic manifestations of consumer vulnerability. Because the state-based nature of consumer vulnerability implies that vulnerability can arise during service interactions themselves, continuous identification and prediction efforts are required. For example, AI systems can analyze customers' voices to predict their mood based on factors such as their music preferences (e.g., Halbauer & Klarmann, 2022) or voice characteristics (e.g., a trembling voice), and provide them with emotional support (e.g., Gelbrich et al., 2021; Sharma et al., 2023). Solutions like NICE Enlighten (NICE, 2022) can leverage AI and comprehensive interaction data to identify states of vulnerability (e.g., a lack of financial resources) and provide frontline employees with coaching and guidance on how to interact with the respective customers. Similarly, companies can deploy AI-enabled human enhancement technologies (e.g., augmented hearing, augmented vision, emotion detection) for frontline employees, enabling them to improve interactions with physically impaired consumers and to better respond to the emotions of vulnerable consumers (e.g., Grewal et al., 2020c; Henkel et al., 2020; Marinova et al., 2017; Sharma et al., 2023).
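The coaching pattern described above can be sketched as a simple mapping from paralinguistic and topical cues to guidance hints shown to the frontline employee. The feature names and rules below are illustrative assumptions, not NICE Enlighten's actual logic.

```python
# Hedged sketch: map (hypothetical) real-time conversation features to a
# coaching hint for the frontline employee, rather than to an automated action.
def coaching_hint(voice_tremor: float, pitch_variance: float, topic: str) -> str:
    if voice_tremor > 0.6:
        return "Customer may be distressed: slow down and acknowledge emotions."
    if topic == "missed_payment":
        return "Possible financial strain: proactively mention hardship options."
    if pitch_variance < 0.1:
        return "Flat affect detected: check understanding with open questions."
    return "No special guidance."

# Example: a calm voice, but the conversation concerns a missed payment.
print(coaching_hint(voice_tremor=0.2, pitch_variance=0.4, topic="missed_payment"))
```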

Moreover, interactive AI technologies implemented in-store, such as smart shelves, self-checkouts, price tags, and augmented reality (e.g., Dekimpe et al., 2020; Grewal et al., 2020b; Thorun & Diels, 2020; van Esch et al., 2021), can streamline service processes for vulnerable consumers, foster their autonomy, and improve the overall customer experience. In a similar way, harnessing AI-driven service robots to assist and support physically, cognitively, emotionally, or mentally impaired consumers can provide vulnerable consumers with a range of communication options (e.g., in written or oral form, or by choosing options on digital screens), enabling them to express themselves effectively and efficiently. Importantly, AI-powered service robots are not bound by the same time or effort restrictions as human service employees, and the marginal costs of additional resource investment in customer interactions are likely to be lower for service robots. Lastly, interactive AI-driven service agents (e.g., chatbots) equipped with social intelligence can help replenish social support and emotional resources by engaging vulnerable consumers through corporate social media and other digital communication channels (Fletcher-Brown et al., 2021; Gelbrich et al., 2021; Pantano & Scarpi, 2022; Sharma et al., 2023).

Taken together, and as illustrated in Fig. 1, AI technologies integrated into service should be designed to be interactive, accommodating the unique needs of vulnerable consumers along the customer journey (Argawal et al., 2020) and preventing (second-level digital) inequality in technology use (Wei et al., 2011). Moreover, AI involvement in service interactions can be considered a strategic tool to empower vulnerable consumers (e.g., Hill & Sharma, 2020; Yap et al., 2021). Specifically, interactive AI technologies within services can enhance vulnerable consumers' perception that their vulnerability can be (partly) alleviated within the specific service context (Pavia & Mason, 2014). This empowerment enables them to regain a degree of control over their resources by receiving tailored treatment aligned with their needs (Hill & Sharma, 2020). A positive ripple effect is that the burden and effort for secondary vulnerable consumers can be reduced. The benefits of interactive and consumer-centric AI technologies can also positively influence adoption behavior, overcoming the potential negative responses to AI delineated above, for instance, when consumers' unique characteristics are neglected (Longoni et al., 2019) or when offers are worse than expected (Garvey et al., 2023). Reliance on interactive AI service technologies can become habitual, leading to increased trust, customer retention, and the cultivation of positive customer relationships (Libai et al., 2020).

Dynamic

Consumer decisions, including those made by vulnerable consumers, are often prone to cognitive and behavioral biases (Dowling et al., 2020). The resource impairment experienced by vulnerable consumers can further compromise their decision-making, particularly when they do not fully understand their own preferences and what is in their best interest, or lack the knowledge, skills, or freedom to act on their preferences (Ringold, 2005; Shultz & Holbrook, 2009). Given that digital choice architectures are data-driven, dynamically adjustable, and personalizable (Helberger et al., 2022), adopting a flexible and state-dependent (i.e., dynamic) approach can assist vulnerable consumers, leading to better decision-making and mitigating (third-level digital) inequality in outcomes (Wei et al., 2011). First, to mitigate negative responses to AI technologies in situations of uncertain needs (Zhu et al., 2022), AI technologies can play a role in dynamically assessing the needs of vulnerable consumers. Addressing these needs requires anticipating the scale, scope, presentation, and comprehensibility of information needed across different service settings. This approach ensures that vulnerable consumers are not merely provided with more information, but with better information tailored to their unique requirements (Thorun & Diels, 2020).

Second, AI technologies, or service employees empowered by AI, can dynamically support and assist vulnerable consumers in making (more) beneficial decisions. For instance, the British consulting company Capita deploys AI-driven real-time conversational analysis and assistance ("assisted customer conversation technology"), which has improved its success in identifying at-risk customers by 30.4%. This approach leverages machine learning algorithms that analyze factors such as consumers' tone, pitch, pace, and the nature of the conversation (e.g., a sudden change in behavior, emotionally withdrawn behavior, or inconsistent and erratic communication) to detect vulnerable consumers (e.g., financial vulnerability, emotional distress; Capita, 2022, 2023). Following the analysis, the algorithm provides instantaneous customized guidance to customer service agents, advising whether special measures should be taken and suggesting personalized solutions based on consumers' queries and needs. Examples include offering special conditions for consumers who mention financial difficulties, or discussing each option's pros and cons in simple terms with consumers with limited mental capacity (Capita, 2022, 2023). Similarly, Key utilizes its NLP-based consumer vulnerability insights to match vulnerable consumers with the most suitable service agent and to tailor services to their specific needs. Additionally, AI technologies can, in conjunction with consumers' financial and spending data, assist consumers in setting individual budgets, thereby positively shaping their spending behavior and addressing potential challenges arising from their limited control over financial resources (Lukas & Howard, 2023).
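The agent-matching idea attributed to Key can be sketched as routing a detected vulnerability category to the agent best equipped to handle it. Agent identifiers, skill categories, and scores below are hypothetical illustrations, not Key's actual system.

```python
# Illustrative routing sketch: pick the agent with the highest (hypothetical)
# skill score for the detected vulnerability category.
AGENT_SKILLS = {
    "agent_a": {"financial_hardship": 0.9, "bereavement": 0.3},
    "agent_b": {"financial_hardship": 0.4, "bereavement": 0.8},
}

def route(category: str) -> str:
    """Return the agent best suited to the detected vulnerability category."""
    return max(AGENT_SKILLS, key=lambda agent: AGENT_SKILLS[agent].get(category, 0.0))

print(route("bereavement"))  # -> agent_b
```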

To ensure the agency and autonomy of vulnerable consumers, both AI technologies and service employees should opt for transparent interventions that "target individual cognitive and motivational competencies rather than immediate behavior (which is the target of nudges) and aim to empower people to make better decisions for themselves in accordance with their own goals and preferences" (Kozyreva et al., 2020, p. 129). These interventions, known as "boosts," are distinct from nudges in that they do not alter the choice architecture that consumers encounter, nor do they merely present pertinent and accurate information (Hertwig, 2017; Hertwig & Grüne-Yanoff, 2017). Instead, they aim to foster and enhance human agency and consumer autonomy, necessitate individuals' active cooperation, and inherently require transparency (Kozyreva et al., 2020).

In the service realm, a consumer boost is conceptualized as a "context-specific and individualized intervention into consumers' cognitive processes that aims at developing their operant resources to facilitate the efficient co-creation of transformative value" (Bieler et al., 2022, p. 34). This approach aligns with strength-based strategies for addressing vulnerability, which involve uncovering consumers' capabilities to exert control over their vulnerability states through skill development, allowing them to become "masters of their own destiny" (Fisk et al., 2022, p. 13). Boosts can take various forms, including mini-tutorials (e.g., Madan et al., 2023) or simple decision trees that utilize yes–no questions (Gigerenzer & Gaissmaier, 2011), as sketched below. These interventions are designed to enhance vulnerable consumers' marketplace literacy (Ringold, 2005) and restore their control over resources (e.g., making financially beneficial decisions or engaging in healthier and more sustainable consumption). Prior research has shown that consumers seek more variety when interacting with AI-driven (vs. human) service agents (e.g., Zhang et al., 2022). Companies can therefore leverage this promising tendency to encourage vulnerable consumers to consider healthier or more sustainable options.
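To illustrate, a decision-tree boost can be implemented as a transparent sequence of yes-no questions that builds a consumer skill without altering the choice architecture; the budgeting questions below are a hypothetical example of such a tree.

```python
# Sketch of a yes-no decision-tree boost (cf. Gigerenzer & Gaissmaier, 2011):
# transparent questions supporting a skill, here a hypothetical budgeting check.
TREE = {
    "q": "Is this purchase planned in your budget?",
    "yes": {"q": "Can you pay for it without credit?",
            "yes": "Go ahead.",
            "no": "Consider waiting until payday."},
    "no": {"q": "Will you still want it in 30 days?",
           "yes": "Add it to next month's budget.",
           "no": "Skip it."},
}

def run_boost(node, answers):
    """Walk the tree with a list of True/False answers and print the advice."""
    while isinstance(node, dict):
        print(node["q"])
        node = node["yes" if answers.pop(0) else "no"]
    print(node)

run_boost(TREE, [True, False])  # -> "Consider waiting until payday."
```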

The provision of dynamic assistance through AI technologies in services inherently involves a learning and feedback-loop rationale. In this context, the reactions, decisions, and behaviors of vulnerable consumers serve as data inputs for AI technologies to dynamically adjust and optimize the processes of identifying, predicting, and addressing consumer vulnerability, and ideally of preventing future instances of it. For example, RecordSure's AI (RecordSure, 2022) analyzes conversational customer interactions both to improve services and to update its list of potential signs of consumer vulnerability. Similarly, Capita's AI technology automatically categorizes, transcribes, and analyzes customer conversations to identify the root causes of positive or negative conversations, aiming to inform and improve future conversations and customer service (Capita, 2023).
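The feedback-loop rationale can be sketched as follows: outcomes of past escalations reinforce the cue list used for future identification. The data structures are hypothetical illustrations of the pattern, not RecordSure's or Capita's implementations.

```python
# Sketch of a learning/feedback loop: confirmed escalations reinforce the cues
# used to identify vulnerability in future conversations (hypothetical data).
from collections import Counter

cue_counts = Counter()

def record_outcome(cues_present, escalation_was_warranted):
    """Reinforce cues that co-occurred with a confirmed vulnerability state."""
    if escalation_was_warranted:
        cue_counts.update(cues_present)

def updated_cue_list(min_support=3):
    """Cues seen often enough in confirmed cases join the active watch list."""
    return [cue for cue, n in cue_counts.items() if n >= min_support]

for _ in range(3):
    record_outcome(["trembling voice", "mentions debt"], escalation_was_warranted=True)
print(updated_cue_list())  # -> ['trembling voice', 'mentions debt']
```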

After presenting the constituents of our AID framework, we now sketch the multi-stakeholder implications for researchers, managers, policy makers, and consumers.

Multi-stakeholder implications

Implications for marketing researchers

While there has been some research exploring access to and control over specific resources as antecedents of consumer vulnerability (e.g., financial resources; Mogaji et al., 2020; Salisbury et al., 2023), to the best of our knowledge, prior research on consumer responses to AI has generally overlooked the state-based nature of consumer vulnerability. The conceptualization of consumer vulnerability as a state rather than a status, along with the diverse range of individual, interpersonal, and structural factors that contribute to vulnerability, gives rise to a multitude of questions for marketing research. This is particularly relevant given the inconclusive findings regarding consumer responses to AI (e.g., Logg et al., 2019; Longoni et al., 2019). Mirroring our AID framework, marketing research can likewise center on the three AID attributes.

Accessible

When studying vulnerable consumers, researchers must give special consideration to accessible and inclusive research designs as well as to methodologically sound and ethically appropriate research conduct (Carlini & Robertson, 2023). It is crucial to adapt research designs to accommodate the unique characteristics of vulnerable consumer groups, which necessitates a re-evaluation of all aspects of a research project, including recruitment and sampling, data collection and analysis, and the reporting and dissemination of findings (e.g., Carlini & Robertson, 2023; Dodds et al., 2023; Lewis et al., 2023). This may involve (a) reframing research issues and questions (e.g., defining consumers by their strengths instead of their "deficits," adopting an "at-potential" focus instead of an "at-risk" focus), (b) adjusting language correspondingly, (c) redesigning research methods, and (d) relating to and empathizing with vulnerable consumers (Russell-Bennett et al., 2023). To foster the latter, it is important to directly and actively involve vulnerable consumers in the research process ("co-design," Russell-Bennett et al., 2023; "consumer partnerships in research," Carlini & Robertson, 2023). Researchers should also be mindful of existing institutionalized social structures and consider how the research process can create meaning and consequently influence outcomes (i.e., reflexivity; Russell-Bennett et al., 2023; Vink & Koskela-Huotari, 2022) in order to achieve value co-creation and reform social structures instead of simply reproducing them (Vink & Koskela-Huotari, 2022). Furthermore, accessibility extends to the dissemination of research findings. Considering the potential of these findings to contribute to social justice, consumer well-being, and hence sustainable development (i.e., SDGs 3 and 10), knowledge dissemination should go beyond the scientific community and target managers and policy makers in particular.

Interactive

Another important stream of research relates to the interactions between humans (i.e., primary and secondary vulnerable consumers as well as service employees) and AI technologies. A focal research question pertains to the impact of different states of consumer vulnerability on customer responses to AI in various service contexts. Factors such as need uncertainty (Zhu et al., 2022) and uniqueness neglect (Longoni et al., 2019) have been identified as contributing to negative reactions to AI and can hold particular relevance in the context of consumer vulnerability. However, given that anyone can experience consumer vulnerability at any given time, further research is needed to unravel the complex interactions between AI technologies and multi-layered aspects of consumer vulnerability, which can vary in terms of intensity, subjectivity, and duration. For example, reactions to AI technologies might differ based on whether consumer vulnerability is related to individual, interpersonal, or structural resources as well as the level of control individuals have over these resources.

In this context, one fruitful research area is to study vulnerable consumers' responses to different degrees of anthropomorphism in AI technologies. Existing work has demonstrated that consumers generally react more positively to anthropomorphized AI (e.g., Blut et al., 2021; Yalcin et al., 2022). Given that human-like cues can play a pivotal role in eliciting active social responses, the development of more humanized AI technologies could prove more effective in addressing interpersonal antecedents of consumer vulnerability that are primarily social in nature. Additionally, vulnerable consumers' mindset (i.e., competitive vs. collaborative), their perceived psychological closeness to AI, their emotional state, and their perceived autonomy can also influence how they react to anthropomorphized AI technologies (e.g., Crolic et al., 2022; Fronczek et al., 2023; Han et al., 2023). Moreover, the presence of human service employees can undermine the importance of AI anthropomorphism (van Doorn et al., 2023). Therefore, future research should examine how vulnerable consumers react to humanized AI systems in relation to different vulnerability antecedents and states, individual consumer characteristics, and the interaction between human and AI service provision.

Another important research area that is currently underexplored in the context of AI technologies is secondary consumer vulnerability. The experiences of vulnerable consumers with AI systems can also have an impact on their social network and support groups. Well-designed AI systems can lead to more positive experiences for those who care for vulnerable consumers (e.g., due to effort reduction), but they may also threaten their self-identity as caregivers if they perceive AI technologies as better suited to assist and support primary vulnerable consumers. Furthermore, there could be instances where the needs of secondary consumers deviate from those of primary consumers (Leino et al., 2021), potentially leading to differential treatment. We therefore encourage researchers to investigate how secondary vulnerable consumers interact with and respond to AI technologies in services that are designed to empower primary vulnerable consumers.

Finally, marketing research can offer insights into the relationships, interactions, and collaboration between human service employees and AI technologies, particularly concerning fear of replacement and human job performance (Vorobeva et al., 2022), perceived levels of complementary and overlapping knowledge and skills between humans and AI in service encounters (Huang & Rust, 2022), and consumer responses to mixed human-AI service teams (van Doorn et al., 2023). These investigations can contribute to a deeper and more nuanced understanding of the dynamics between humans and AI in service settings, along with offering practical and timely insights for both employees and vulnerable consumers.

Dynamic

Just as AI technologies should be dynamic, companies' strategic and operational thinking and actions should also be dynamic and adaptive. Accordingly, future research should explore how companies can effectively position and communicate their endeavors to empower vulnerable consumers through the use of AI technologies. One viable approach to enhance credibility is to communicate the benefits for the company, for consumers, and for society as a whole, adopting a "doing well by doing good" approach (Wallach & Popovich, 2023). By highlighting the benefits for multiple stakeholders, companies could mitigate potential negative consumer responses that may arise from perceiving the use of AI technologies as solely benefiting firms (e.g., Castelo et al., 2023).

Researchers should also investigate how companies should handle customer complaints, service failures, and service recovery. On the one hand, companies can be particularly blamed and held responsible for service failures related to AI technology (Pavone et al., 2023). On the other hand, vulnerable consumers can be immune to, and not react favorably to, certain service recovery initiatives, including monetary compensation or apologies (Cenophat et al., 2023). Therefore, companies need to identify and dynamically implement appropriate coping and recovery strategies, such as positive emotional responses, conveying warmth, providing explanations, and incorporating human intervention (e.g., Choi et al., 2021; Pavone et al., 2023).

Dynamic and adaptive measures are also required when it comes to the measurement of customer satisfaction, service quality, and service climate. In light of the idiosyncrasies of vulnerable consumers' service interactions, which stem from the dynamic and state-based nature of consumer vulnerability, a one-size-fits-all approach to measurement can be ill-suited and misleading. Again, we encourage researchers to identify and develop performance indicators that can dynamically account for the complex interactions between AI technologies and multi-layered forms of consumer vulnerability.

Implications for managers

The design, development, and implementation of accessible, interactive, and dynamic AI technologies in service are far from trivial. There are many challenges awaiting managers as they need to consider not only the resource access and control restrictions of vulnerable consumers, but also adhere to the ethical principles of privacy, autonomy, and intelligibility—as illustrated in Fig. 2.

Fig. 2 Guiding principles for AI technology development and deployment

Optimally serving and aiding vulnerable consumers can potentially interfere with these ethical principles. Hence, merely following ethical principles as a tick-box exercise (Hagendorff, 2020) might be ill-suited to address the highly dynamic and state-dependent nature of consumer vulnerability. Instead, the consequences of vulnerability and the corresponding degree of potential harm should be considered when weighing the benefits and costs of deviating from strict ethical principles to better serve vulnerable consumers and maximize their utility. It is essential for managers to meticulously assess these trade-offs and make well-informed decisions that prioritize the best interests of vulnerable consumers, while also balancing their company’s economic and ethical considerations.

Intelligibility

The opacity and black-box nature of AI (e.g., Barredo Arrieta et al., 2020; Cadario et al., 2021; Rai, 2020) present challenges in understanding how AI models and their underlying data-processing algorithms function (i.e., intelligibility; Floridi et al., 2018). This lack of intelligibility can hinder vulnerable consumers' ability to assess the benefits or potential harm of AI technologies, to understand what data are collected (i.e., privacy), and to decide whether to entrust decisions to AI systems (i.e., autonomy). Hence, managers should provide clear and straightforward explanations of how and when vulnerable consumers access, interact with, and are dynamically assisted by AI technologies. To prevent any information overload, irritation, or frustration that could undermine service provision following the AID principles, managers need to account for the respective state of vulnerability when determining the simplicity or complexity of explanations (i.e., an "intelligibility-AID trade-off"). Additionally, managers should ensure that vulnerable consumers are explicitly informed when they interact with (or are observed by) AI technologies, enabling them to draw conclusions about (un)ethical treatment. This raises important managerial questions about AI disclosure (e.g., Mozafari et al., 2022).

Privacy

Privacy constitutes another focal challenge. While data collection is essential for AI technologies to effectively and efficiently identify and interact with vulnerable consumers, determining the appropriate amount of data to achieve economic and social objectives while upholding ethical principles can be complex. For instance, AI systems might require access to specific attributes (e.g., conversation, voice pitch) to determine a consumer’s vulnerability state. However, it is crucial to avoid excessive tracking and surveillance of vulnerable consumers (Andrew & Baker, 2021; König et al., 2020) to prevent customer data vulnerability (Martin & Murphy, 2017), especially when dealing with sensitive and “special category” data like biometric or health information. To address these concerns, companies should carefully establish privacy default settings for different types of consumer data (i.e., data protection/privacy by default). Furthermore, privacy practices should be integrated into the design, development, and deployment of AI systems (i.e., privacy by design). However, it is noteworthy that merely refraining from collecting or utilizing sensitive data (i.e., “privacy through unawareness”) does not guarantee privacy, as sensitive attributes often correlate with non-sensitive attributes (Hagendorff, 2019). Thus, adopting a comprehensive privacy strategy that covers the entire customer journey is essential for building customer trust and maintaining support.
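
To make privacy by default more concrete, the following minimal sketch (with hypothetical data categories and names) illustrates a privacy-protective collection floor in which potential vulnerability cues and “special category” data remain uncollected unless explicitly opted into:

```python
# Conceptual sketch only (hypothetical data categories): privacy-by-default
# collection settings in which sensitive categories stay off unless the
# consumer (or a supporter on their behalf) explicitly opts in.
DEFAULT_COLLECTION = {
    "interaction_logs": True,   # needed for basic service provision
    "voice_pitch": False,       # potential vulnerability cue: off by default
    "biometric_data": False,    # "special category" data: off by default
    "health_data": False,       # "special category" data: off by default
}


def effective_settings(opt_ins: set) -> dict:
    # Opt-ins can only enable collection; the defaults form the protective floor.
    return {category: enabled or category in opt_ins
            for category, enabled in DEFAULT_COLLECTION.items()}


print(effective_settings({"voice_pitch"}))
# -> biometric_data and health_data remain False without an explicit opt-in
```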

Additionally, companies should strive to establish safeguards and monitoring mechanisms throughout the data lifecycle. This entails determining the minimum scale and scope of data (i.e., data minimization) required for identifying, interacting with, and dynamically assisting vulnerable consumers. However, in doing so, companies might face a trade-off between privacy and providing tailored treatment of vulnerable consumers through data-driven AI technologies (i.e., “privacy-AID trade-off;” Rust, 2020). To address this, companies should obtain informed consent from vulnerable consumers or their support group (i.e., secondary vulnerable consumers) and provide comprehensible information about which data are processed, how they are processed, and the associated risks (Felzmann et al., 2020). In certain situations, service firms’ practices regarding customer data usage can be decisive and may influence consumer responses differently. For instance, customers perceive increased data privacy and feel less vulnerable when interacting with firm-owned devices in self-service technologies, depending on data sensitivity and transparency levels (Sohn et al., 2023). In this context, AI technologies can serve as digital privacy assistants, carefully monitoring privacy policies, identifying privacy violations, and disabling privacy-intrusive default settings (Lippi et al., 2019; Thorun & Diels, 2020).
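
Data minimization can likewise be sketched in a few lines. The purpose-to-attribute mapping below is hypothetical; the point is simply that every attribute not strictly required for a declared service purpose is discarded before processing:

```python
# Conceptual sketch only (hypothetical purposes and attributes): a
# data-minimization filter that retains only the attributes required
# for a declared service purpose.
MINIMAL_ATTRIBUTES = {
    "identify_vulnerability_state": {"conversation_text"},
    "handle_complaint": {"conversation_text", "order_id"},
}


def minimize(record: dict, purpose: str) -> dict:
    allowed = MINIMAL_ATTRIBUTES.get(purpose, set())
    return {key: value for key, value in record.items() if key in allowed}


record = {"conversation_text": "...", "order_id": "A-17", "voice_pitch": 0.82}
print(minimize(record, "handle_complaint"))  # voice_pitch is never retained
```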

Design

To ensure consumer autonomy, opt-in models and designs (i.e., consumers consciously and deliberately decide whether to use or be supported by AI in services) should be preferred (Borenstein & Arkin, 2016). Moreover, service providers should prioritize consumer empowerment through boosts rather than making comprehensive changes to vulnerable consumers’ choice architectures (i.e., nudges). Autonomy and human agency are also important for service employees. Therefore, it is crucial to integrate human oversight and options for human intervention into the service provision process. However, it is important to acknowledge that certain circumstances and states of vulnerability may require more assistance and guidance, resulting in less consumer autonomy in order to prevent or mitigate harm (i.e., “autonomy-AID trade-off”).
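
The opt-in logic can be illustrated with a brief sketch (hypothetical class and method names): AI assistance stays inactive until the consumer deliberately enables it, and a human handover remains available throughout.

```python
# Conceptual sketch only (hypothetical names): an opt-in gate in which AI
# assistance is inactive by default and a human handover is always available.
class AssistanceSession:
    def __init__(self):
        self.ai_enabled = False  # opt-in default: no AI support

    def opt_in(self):
        # The consumer consciously and deliberately enables AI support.
        self.ai_enabled = True

    def assist(self, request: str) -> str:
        if not self.ai_enabled:
            return "Routing to a human agent (AI support not enabled)."
        return f"AI suggestion for: {request} (say 'human' to switch at any time)"


session = AssistanceSession()
print(session.assist("renew my contract"))  # handled by a human by default
session.opt_in()
print(session.assist("renew my contract"))
```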

Companies can address ethical challenges through a holistic co-design approach. First, ethicists or personnel with relevant ethical expertise should be involved in the design and development process, adopting an embedded ethics approach (Bonnemains et al., 2018; McLennan et al., 2020). These experts can identify ethical concerns that marketers, data scientists, and AI developers might overlook. A senior-level working group composed of technologists/developers, legal/compliance experts, ethicists, and business leaders can be formed to pinpoint potential sources of ethical issues and devise practical solutions (Blackman & Ammanath, 2022). In some companies, positions such as AI ethicist already exist under different titles: Data Privacy and Ethics Lead at Qantas, Director of Responsible Innovation & Responsible Innovation Manager at Meta/Facebook, Chief Ethical and Humane Use Officer at Salesforce, Chief AI Ethics Officer and Managing Director & Partner at BCG, or Chief Responsible AI Officer at Microsoft (Minevich, 2021).

Second, involving vulnerable consumers in the design and development process is crucial (Dietrich et al., 2017) to raise awareness of their special needs, specific resource impairments, and suitable remedies. Through active participation, vulnerable consumers can become value co-creators (Danaher et al., 2023; Fisk et al., 2022). This participatory approach not only generates valuable ideas for service improvement but also conveys a powerful message that companies value and respect their input, strengthening the sense of empowerment among vulnerable consumers (e.g., Auh et al., 2019).

Governance

To formalize and institutionalize the co-design process, companies should establish robust governance mechanisms that ensure ethical AI design, development, and deployment (Mökander & Floridi, 2021; Mökander et al., 2022). This can involve determining a set of guiding ethical principles, incorporating ethicists, and appointing an ethics board (Eitel-Porter, 2021). Among other things, companies can decide whether and which new services should go through an ethical risk due-diligence process during the design stage or prior to deployment (Blackman & Ammanath, 2022). To support these efforts, companies can consider implementing structures that oversee decision-making processes, maintain documentation, provide company-wide ethics training, conduct stress tests for governance structures, and establish appropriate metrics to monitor and ensure compliance (Eitel-Porter, 2021). Such metrics can lay the foundation for internal or external auditing mechanisms (Floridi et al., 2018; Mökander & Floridi, 2021; Mökander et al., 2022). Similarly, risk assessment and management processes that evaluate threats to companies and stakeholders, as well as existing safeguards, can provide reference points for both internal and external auditing (Clarke, 2019a). To ensure transparency and inclusivity, involving vulnerable consumers as stakeholders in participatory audit conceptualization and implementation processes can prevent “closed-door compliance” and foster openness (Krafft et al., 2021).
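
As a simple illustration of how such metrics and documentation could support auditing, consider the following sketch (all fields hypothetical): each AI-driven decision is logged with its purpose and the data used, so that internal or external auditors can later verify data minimization and compliance.

```python
# Conceptual sketch only (hypothetical fields): an audit trail that records
# each AI-driven decision with its purpose and the data used, providing a
# basis for internal or external audits.
import datetime
import json

AUDIT_LOG = []


def log_decision(system, purpose, data_used, outcome):
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "purpose": purpose,
        "data_used": data_used,  # enables data-minimization checks during audits
        "outcome": outcome,
    })


log_decision("support_bot", "handle_complaint", ["conversation_text"], "refund_offered")
print(json.dumps(AUDIT_LOG, indent=2))
```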

Implications for consumers

With the growing integration of AI technologies into services, understanding their implications for vulnerable consumers becomes increasingly important. In the following sections, we delve into several key implications that can emerge from vulnerable consumers’ interactions with AI technologies.

Consumer autonomy

The ethical challenges faced by companies and managers, as outlined earlier, also have direct implications for vulnerable consumers. First, AI technologies in service can influence vulnerable consumers’ autonomy (André et al., 2018), that is, their ability to make independent decisions, free from external influences imposed by other agents (Wertenbroch et al., 2020). As AI technologies and frontline employees equipped with human-enhancement tools offer personalized information or assistance, vulnerable consumers essentially delegate decisions to AI technologies during the information collection, consideration set formation, and even decision-making stages. This delegation can be beneficial in terms of resource efficiency (e.g., time, cognitive resources), tailored support, and content. It could, however, also have downsides, potentially compromising the well-being of vulnerable consumers if they become overly reliant on AI systems (Banker & Khetani, 2019). To mitigate these risks, vulnerable consumers need a certain level of understanding of how AI technologies work (i.e., intelligibility), which can also help to counteract any tendency to avoid algorithms and AI (e.g., Cadario et al., 2021). In this context, providing simple explanations of how AI functions is likely more effective and satisfactory than comprehensive ones that risk information overload, irritation, and frustration (Rai, 2020), particularly for cognitively impaired or digitally illiterate consumers. Thus, vulnerable consumers (or secondary vulnerable consumers on their behalf) should be able to request such explanations across all (online and offline) channels, either from AI technologies or from human service employees.

Potential backlash

The introduction of AI technologies in services and their potential to enhance vulnerable consumers’ abilities can raise concerns about an ethical double standard. For example, non-vulnerable consumers may perceive it as less fair for vulnerable consumers to benefit from AI technologies than for themselves (Williams & Steffel, 2014). In extreme cases, vulnerable consumers might be viewed as robotic and lacking humanness (i.e., dehumanization; Haslam & Loughnan, 2014), especially if their (cognitive) abilities are perceived as enhanced rather than restored (Castelo et al., 2019a, b). To forestall or mitigate such adverse perceptions, it is essential for other consumers to keep in mind the prosocial and restorative nature of such AI technologies in service (Castelo et al., 2019a, b). That is, AI technologies are deployed not to give vulnerable consumers any advantage over “mainstream” consumers but to increase their ability to participate more fully in society.

Secondary consumer vulnerability

Consumer vulnerability can manifest itself as a shared experience within communities (Baker et al., 2007) and can give rise to secondary consumer vulnerability (Pavia & Mason, 2014). In such cases, secondary vulnerable consumers should be aware that AI technologies hold the potential to empower them to better support primary vulnerable consumers, to lighten their responsibilities, and to eventually increase their own well-being; such awareness can reduce potential reservations regarding AI technology. Despite their altruistic, other-related needs and intentions, secondary vulnerable consumers could misunderstand primary consumers’ needs due to false assumptions or changing needs (Leino et al., 2021). Dynamically designed AI technologies can then support them through need identification and recommendations on which actions to take. Under some circumstances, particularly when primary vulnerable consumers are unable to do so themselves, secondary vulnerable consumers have to provide relevant data and consent to enable AI systems to work optimally and support both them and primary vulnerable consumers. Again, a basic understanding of how AI technologies work can help secondary vulnerable consumers to effectively leverage AI technologies in support of primary vulnerable consumers.

Implications for policy makers

In the realm of AI technologies aimed at helping vulnerable consumers, there are several critical considerations that hold direct implications for policy makers. In what follows, we discuss the multifaceted ethical and legal issues and propose supranational co-regulation as a public policy instrument to address them.

Issues of a principled approach

An embedded ethics and co-design approach for the development and deployment of AI technologies to aid vulnerable consumers can encounter challenges when faced with organizational realities. First, companies’ mere reliance on ethical principles as non-binding guidelines (i.e., soft law) and self-regulatory commitments may lead to ethics shopping (Floridi, 2019a, b), that is, the malpractice of selecting ethical principles that align with existing business practices and justifying them a posteriori (Floridi, 2019b). In other words, companies might “shop” for ethical principles that best match their current business practices, potentially undermining a genuine commitment to ethical conduct. Second, companies could engage in “bluewashing,” a term describing the misleading use of claims or superficial measures to project an appearance of heightened ethical conduct (Floridi, 2019b, 2021a, b). Third, companies might use self-regulation as a means to lobby against the development, implementation, and/or enforcement of legal regulations (i.e., ethics lobbying; Floridi, 2019b). Consequently, some scholars argue that the era of self-regulation as the instrument to address ethical challenges is coming to an end (Floridi, 2021b). These issues have important implications for public policy, that is, the conceptualization and implementation of legally binding ethical and socio-legal governance and policies (Kaplan & Haenlein, 2019; Stahl et al., 2021) beyond corporate self-commitments, self-regulation, and non-binding guidelines (Resséeguier & Rodrigues, 2020; Stix, 2021).

From principles to supranational co-regulation

Ethical principles offer a solid foundation, particularly given the time delays inherent in legislative processes addressing rapid and revolutionary technological developments like AI (Häußermann & Lütge, 2022). In this context, the EU’s General Data Protection Regulation (GDPR), which came into effect in 2018, constitutes an illustrative example. Article 25 of the GDPR stipulates data protection/privacy by design, a key concern for the GDPR (Andrew & Baker, 2021). Initially, privacy by design consisted of broad principles that eventually evolved into codified and concrete regulations. This progression underscores that “the realization of an ideal can find a way into law, thereby becoming binding if proven useful” (Felzmann et al., 2020, p. 3344). The EU and the United States (US) have taken proactive steps by providing legal frameworks for AI with the proposal of the EU Artificial Intelligence Act (AIA) and the US National AI Initiative Act, respectively (Floridi, 2021a).

However, national political cultures and legal regimes differ in their constructions of personhood in relation to automation (Jones, 2017), constitutionalism ideologies (Celeste, 2019; De Gregorio, 2021), and approaches to AI regulation and AI for social good (Cath et al., 2018; Roberts et al., 2021). Furthermore, what is considered consumer vulnerability can vary substantially among countries, potentially leading to cross-national discrimination against vulnerable consumers. In this context, consumer law and policy should focus on the “properties and commercial practices of digital choice environments that can render everyone (dispositionally) vulnerable under the right conditions” instead of using status-based characteristics to label “certain groups or individuals as ‘vulnerable’ or non-vulnerable” (Helberger et al., 2022, p. 194). Even within the EU AIA, consumer vulnerability is characterized as static and mainly related to age and physical or mental disabilities. Given these national differences in recognizing and accommodating vulnerable consumers, policy makers have to contemplate supranational and harmonized regulations. Furthermore, national differences in legal frameworks can result in ethics dumping—the export of unethical or even illegal practices to countries with weaker regulations (Floridi, 2019b). This creates a double standard for vulnerable consumers if they are treated differently by AI technologies across countries (or at home as compared to foreign countries).

Against this backdrop, supranational, collaborative co-regulatory approaches that engage the relevant stakeholders (particularly vulnerable consumers or their advocates) across countries and cultures could pave the way for effective legislative processes in AI regulation that account for state-based consumer vulnerability (Clarke, 2019b). Policy makers can also consider collaborative co-regulation for its potential to prevent the neglect or overly narrow conceptualization of vulnerable populations, addressing limitations present in current AI ethics frameworks and initiatives such as the EU AIA (Schiff et al., 2021).

Conclusion

By synthesizing the consumer vulnerability literature (e.g., Baker et al., 2005; Hill & Sharma, 2020), the scholarly work on the psychology of AI (e.g., Longoni et al., 2019; Puntoni et al., 2021) and AI for social good and sustainable development (e.g., Cowls et al., 2021; Du & Sen, 2023; Floridi et al., 2018, 2020; Vinuesa et al., 2020), along with the increasing calls to rethink marketing for a better world and the greater good (Chandy et al., 2021; Madan et al., 2023; Mende & Scott, 2021), our AID framework illustrates how companies can harness AI technologies to better serve and empower vulnerable consumers (i.e., a supply-side focus on AI). Specifically, we propose that AI technologies can make services more accessible, interactively ameliorate customer experiences and journeys in services, and dynamically improve consumer decision-making.

Throughout this paper, we have argued that granting vulnerable consumers access to the marketplace and improving their interactions and decision-making can yield important advantages. First, companies can benefit from larger customer bases, less biased AI models, higher customer satisfaction, and higher profitability. Second, our AID framework empowers vulnerable consumers by reinstating control over resources, enabling them to profit from marketplace participation, and improving their experiences. Third, these results contribute to broader societal gains, including enhanced social justice, reduced inequalities (i.e., SDG 10), and improved consumer well-being (i.e., SDG 3), aligning with the greater good and sustainable development objectives. The benefits across stakeholders are particularly important in light of the state-based nature of consumer vulnerability. Since “the experience of vulnerability is a reality [that can happen to anyone], but those encountering it do not wish it to be an equilibrium state” (Baker et al., 2005, p. 137), companies that follow our AID framework accept their social responsibility, address technology, digital, and market inequalities, and can mitigate consumer vulnerability. Given the challenge of developing and deploying consumer-centric AI in an ethical and accountable way (Kunz & Wirtz, 2023), collaborative engagement among companies, ethicists, vulnerable consumers, and policy makers becomes imperative in creating globally integrated, equitable marketing systems that reduce consumer vulnerability (Shultz & Holbrook, 2009) and increase individual and societal well-being. As noted, “AI technologies cannot solve all problems, but they can help to address the major challenges… facing humanity today” (Cowls et al., 2021, p. 114), and “marketing can and should be leveraged as a catalyst for positive change” (Mende & Scott, 2021, p. 116).