Introduction

Consumers today are constantly exposed to personalized content, such as social media posts, search results, movie or music recommendations, and political messages (Kozyreva et al., 23). Targeted advertising, a common practice among businesses that offer free digital content, also relies heavily on personalization (Boerman et al., 7). Companies use vast amounts of personal data to train algorithms that predict consumers' likes, clicks, purchases, and even voting decisions. While personalization can be convenient and helpful for consumers, it also carries significant risks. One major risk is the creation of filter bubbles, where individuals are only exposed to content that aligns with their existing preferences and beliefs (Wagner & Eidenmuller, 35). This can reinforce existing biases and limit the diversity of recommendations and information. Personalization can also be used for manipulation, by presenting selected messages in a format and at a time that is most likely to influence the target (e.g., Martin, 27). Additionally, the collection of large amounts of personal data for personalization purposes may lead to privacy violations if the data are mishandled, leaked, or sold to third parties (generally on privacy harms, see Citron & Solove, 11).

In order for a company’s data collection and processing for personalization purposes to be deemed lawful within the European Union (EU), it must adhere to the requirements set forth by the General Data Protection Regulation (GDPR). In drafting the GDPR, lawmakers aimed to find a balance between safeguarding consumers’ fundamental rights and serving the interests of companies and the public in developing better products and services based on personal data (Recital 4 of the GDPR, Lynskey, 25). This was achieved by establishing a variety of rules and principles, including a list of legal bases for data processing, with consent being just one of them (Article 6 of the GDPR).

Determining a legal basis for the collection and processing of consumer personal data for personalization purposes poses significant challenges. The option of relying on the data controller’s legitimate interest is constrained by a restrictive stance from the Court of Justice of the European Union (CJEU), emphasizing that consumers’ fundamental rights to data privacy override companies’ economic interests in the use of consumer data (C-252/21 para.117). Consequently, businesses primarily have two viable alternatives: relying on the main contract or obtaining separate consumer consent as a legal basis for data processing for personalization purposes.

When a business customizes its service, such as recommending movies or music based on predictions of consumer preferences, relying on the main contract as the legal basis for personal data processing appears in line with the GDPR. However, for services processing personal data to personalize advertising, which is displayed in addition to the main service provided (e.g., social media), distinguishing between these two legal bases is less clear. The difficulty increases when considering the different contexts in which companies use personalization. For instance, shopping platforms like Amazon provide personalized services by suggesting promotions tailored to consumers' browsing history or past purchases. These platforms also display personalized third-party advertisements for products based on similar metrics (Section 3, Amazon Privacy Notice; Amazon, 1). Music apps such as Spotify collect personal data, like listening history, to curate personalized song or playlist recommendations. Additionally, they display advertisements for third-party products to non-Premium listeners, using data such as geo-location, in order to generate revenue for the app (Section 4, Spotify Privacy Policy; Spotify, 31). If, however, the consumer decides to pay for the Premium subscription, ads are no longer generated, but the personalized song recommendations remain in place. The same model applies to online newspapers, where articles are recommended based on past readings and interests, while third-party advertisements are tailored to browsing history or location (e.g., The Guardian Privacy Policy; The Guardian, 32). Given these practices, it remains uncertain whether consumer acceptance of the main contract alone would suffice to personalize both the main services and advertisements, or whether separate consumer consent would be required.

Determining which of these legal bases applies has implications for consumers and companies alike. If a company needs to secure consumer consent for using their data in personalized advertising, there is a potential risk of reducing the pool of consumers from whom data are collected and who are exposed to personalized advertisements. This reduction could have a direct impact on the company's revenue, particularly when the primary service is offered for free and the company relies on personalized advertising as its central source of income. On the other hand, relying on the contract as a legal basis might curtail consumers' control over their data. Balancing these considerations is essential to ensure both legal compliance and respect for individual data rights.

The aim of our study is to investigate a crucial question for both businesses and regulators: Do consumers differentiate between the collection and processing of their data for personalized advertisements versus personalized services? Additionally, does this differentiation vary depending on whether the main service is provided for free, thus requiring the use of targeted ads to generate revenue? Understanding consumers' perspectives is crucial, given that the GDPR is designed to safeguard consumers' fundamental right to the protection of personal data. If consumers dislike the processing of their personal data for the personalization of services as much as they dislike it for personalized advertising, excluding the former from the consent requirement would remove consumer control in situations where consumers may prefer to retain it. Additionally, the fact that a consumer signs up to receive services such as music or movie streaming does not mean that they wish to have those services personalized based on their personal data. Through our research, we seek to shed light on this important issue and offer insights that can facilitate the harmonization of current data protection regulations with consumer attitudes, thereby assisting legislators, courts, and enforcers in achieving this objective.

Much of the existing research on consumer attitudes towards personalization has primarily focused on their perceptions of data sharing concerning personalized advertisements (e.g., DeKeyzer et al., 13; Gironda & Korgaonkar, 19; Lee et al., 24; Serrano-Malebran & Arenas-Gaitán, 29; Zhu & Chang, 39). Additionally, studies have highlighted that consumers are influenced by both general privacy concerns and the specific context of personalization (Bol et al., 9; Hüttel et al., 20; Kozyreva et al., 23). Importantly, research suggests that individuals' attitudes towards data processing for personalization may vary depending on the nature of the service involved, with greater acceptance observed in commercial advertising contexts compared to political messaging (Kozyreva et al., 23). However, while these findings imply that the type of personalization could impact individuals' willingness to share their data, existing studies have not provided insight into whether the legal distinction between personalized services and advertisements influences this dynamic. Moreover, because these studies did not keep the content of personalization constant (such as the type of service offered), the observed differences in attitudes may reflect the perceived usefulness of the content rather than the type of personalization itself.

This highlights a significant knowledge gap regarding consumers’ perceptions of personalized services, particularly concerning whether their willingness to share data with platforms varies based on the purpose of personalization. Importantly, although both privacy calculus and privacy as contextual integrity theories would predict that people would differentiate between different types of personalization, previous research has not explored whether consumers would be more inclined to share their data for a personalized in-app experience compared to personalized third-party advertisements. This gap is notable given the legal distinctions between these two purposes within the operational framework of such services. Thus, by conducting three vignette experiments involving music, shopping, and news apps offering either personalized services or advertisements, this paper aims to analyse whether consumers’ willingness to share their data differs depending on the type of personalization.

Background and Hypotheses

Legal Background

The use of personal data for targeted advertising, especially by free social media platforms, has raised a number of legal issues. This paper focuses on the use of personal data from the perspective of the GDPR, but a number of legal issues arise from a contractual perspective too. For instance, Durovic and Poon (15) discuss how the limitation of liability to the amount actually paid by the user might potentially be deemed unfair, especially considering that these services operate on a pay-by-data model.

Turning to the GDPR, it aims to safeguard consumers' fundamental right to the protection of their personal data by establishing several principles that data controllers must adhere to when processing consumer data. Of particular relevance to our study is the legality principle, which stipulates that data must be processed lawfully, based on one of the legal bases outlined in Art. 6 GDPR. In the context of personalization through automated decision-making (Art. 22), the two relevant legal bases are the data subject's consent (Art. 6(1)(a)) and the processing of data necessary for the performance of the contract (Art. 6(1)(b)). For consent to serve as a valid legal basis, it must be freely given, specific, informed, and unambiguous. This means that it must be provided in plain and intelligible language and that it must be as easy to withdraw as it was to give (Art. 7). It also means that consent cannot be given via pre-ticked boxes or silence (Recital 32), thus requiring an affirmative action by the consumer. In contrast, collection and processing based on the necessity for contract performance requires that there is a main contract that the data subject consented to, for example, by agreeing to the terms and conditions (T&C) of the data controller's site.

The issue of which legal basis is applicable to the processing of personal data for the purpose of personalization has been a topic of intense debate, culminating in the recent Binding Decision 3/2022 of the European Data Protection Board (EDPB) against Meta (European Data Protection Board, 16). Meta contended that, when processing users' data for the personalization of advertising, it was relying on Art.6(1)(b) as its legal basis, since consumers contract with the platform through the T&Cs and a separate Privacy Policy explains the processing of data for personalized advertising, which is necessary for the platform to generate revenue (as consumers make no monetary payment). The EDPB rejected this argument. In the decision, it addressed the conditions under which data collectors can process personal data for personalization purposes based on the necessity for contract performance. It emphasized that the protection of consumers' personal data is a fundamental right, so that data cannot be traded as a commodity by contract. As a result, when consumers' interests in the protection of their data conflict with the controller's economic interest in its collection (as was the case for Meta), the former takes precedence (European Data Protection Board, 16, para.101). Furthermore, the EDPB found that, considering the controller's interests, Meta had less intrusive alternatives available for generating revenue than behavioural advertising. Consequently, the EDPB determined that behavioural advertising was not necessary for the performance of the contract in this case (European Data Protection Board, 16, para.121).

In addition, the EDPB established that the test of the necessity for contract performance is objective and based on the contract’s fundamental objective and substance (European Data Protection Board, 16, para.112). Since the primary purpose of Facebook is user communication, the processing of personal data for advertising purposes was not deemed necessary for the contract. As a result, Meta cannot rely on Art.6(1)(b) as its legal basis when processing data for personalization of advertisement. The EDPB reinforced its findings by citing data subjects’ rights under Art.21(2) and (3), which provide that individuals have the right to object to profiling for marketing purposes at any time (European Data Protection Board, 16, para.122). Therefore, if Art.6(1)(b) applied to a service such as Meta’s, it would violate consumers’ rights under Art.21. The EDPB also reached a similar conclusion in Binding Decision 5/2022 against WhatsApp, where it found that Art.6(1)(b) cannot be used as a legal basis when processing data for the purpose of “service improvement and security” (European Data Protection Board, 17).

It appears that Article 6(1)(b) would only be applicable to the collection of personal data for personalization purposes in situations where the platform commits to providing the consumer with personalized content, such as a streaming service like Disney+. In this case, the platform is delivering a personalized service that the consumer contracts for, making the processing of consumer data "objectively indispensable" for the fulfilment of the platform's contractual obligation (C-252/21, para.98). The EDPB further emphasizes the importance of the transparency and fairness principles in Art.5(1)(a) in selecting an appropriate legal basis that recognizes the data subject's reasonable expectation regarding the collection of their data in the specific context (European Data Protection Board, 17, para.99). Therefore, if the consumer reasonably anticipates receiving personalized content from the contracted service, Art.6(1)(b) is an appropriate legal basis. However, if personalization pertains to third-party content, such as advertising, then Art.6(1)(a)—consent of the data subject—appears to be the appropriate legal basis, provided that it fulfils the requirements of Art.7.

Notwithstanding the above, there is still some confusion around services that are offered for free, but require users to provide their data in exchange for access to the service, such as social media apps. In these cases, personal data are used to generate revenue through advertising, which may make it seem as if the processing of consumer data is necessary for the contract. However, the recent EDPB decision against Meta challenges this assumption. So, is consumer consent always required for collection and processing of data for personalization of advertising on social media platforms that are offered for free, given that the primary purpose of the service is not personalized marketing? Moreover, should consumers not have to “pay” with their personal data for the service? These are among the preliminary questions posed in the Case C-446/21, which asks whether Art.6(1)(b) can replace Art.6(1)(a) in conjunction with Art.7 as a legal basis for “free” platforms.

What are the implications of this legal debate for consumers? It means that when their data are processed for advertising purposes, they will be required to provide their consent, even if a company relies on personalized advertising to generate revenue and thereby offer the main service for free. If they decide to withdraw their consent, they can still access the service, which will continue to display ads; these will not be as intrusive as personalized ones, being generated instead on the basis of a general attribute (for example, geo-location). However, for personalized services that are the main subject of a contract, such as movies or music offered by streaming platforms, consumers will not be asked for their consent. The provider of these services can legally process personal data for personalization purposes when a consumer agrees to the terms and conditions upon signing up for the service. While this distinction is legally important, it raises questions about how consumers feel about sharing their data for personalized advertising versus personalized services. Does their willingness to share data differ between these two types of personalization?

Behavioural Background and Hypotheses

To understand consumers’ attitudes towards personalization, past research has largely focused on their perception of personalized advertising. Studies show that consumers generally find personalized advertising useful, which positively impacts their purchase intention and outweighs their privacy concerns (Gironda & Korgaonkar, 19; Serrano-Malebran & Arenas-Gaitán, 29). The perceived usefulness and relevance of personalized ads have been found to mediate the negative effects of perceived intrusiveness (Lee et al., 24; Zhu & Chang, 39), although this effect may vary depending on the level of personalization, with high personalization levels being associated with higher perceived intrusiveness (DeKeyzer et al., 13). Overall, personalization has become an expected and welcomed practice by consumers, depending on the context. For instance, consumers may tolerate personalized ads on Facebook because they see it as an inevitable part of using the platform (Van den Broeck et al., 33).

The above overview shows that in general, personalization is welcomed by consumers, although attitudes towards it are not homogenous. Importantly, differing contexts alter consumer attitudes and their willingness to share data. It has been shown, for example, that privacy risk perception is the strongest predictor of consumers’ willingness to disclose personal information for personalization purposes, particularly in the healthcare and commerce sectors, but not as much in the news context (Bol et al., 9). These findings indicate that consumers’ attitudes toward privacy are diverse and can vary depending on the type of personalization or data collected (Kozyreva et al., 23).

From a theoretical standpoint, heterogeneity across contexts can be explained by two theories, which form the basis of our study: the privacy calculus theory and the privacy as contextual integrity theory. Each will be discussed in turn before formulating the hypotheses.

First, the use of personalized services involves a trade-off between personalization and privacy, as consumers weigh the benefits of personalization against the costs of sharing their personal data with different communication partners, which in this context are digital service providers (Boerman et al., 8). This trade-off is captured by the privacy calculus theory, which suggests that consumers assess the value they receive from disclosing their personal data against the potential losses associated with the disclosure (Gironda & Korgaonkar, 19). Losses may include the intrusiveness of the personalization practice, such as the collection of sensitive data or third-party data access, while benefits relate to the usefulness of personalization, such as more relevant advertisements or lower prices (often even free) (Karwatzki et al., 22). The willingness to share data for personalization purposes depends then on whether the perceived value of personalization exceeds the perceived privacy costs, leading to a personalization-privacy trade-off.

Secondly, privacy concerns are context-dependent and vary depending on the type of personalization, data, and information (Kozyreva et al., 23). These variations can be explained by the theory of contextual integrity, which holds that people are willing to disclose their personal data as long as the information flow is appropriate. What is considered an appropriate information flow depends on each consumer's privacy expectations and involves factors such as the actors involved (senders, subject, and recipient), the type of information, and the transmission principle (Martin & Nissenbaum, 28). Importantly, these expectations will vary depending on the context. For instance, we may find it appropriate to share our health information (type of information) with a doctor (the actor) for the purpose of treatment, under the condition that this information will not be shared with anyone other than our family (transmission principle).

The way in which those norms influence consumers' willingness to share their data has been well documented in the literature. For instance, the information type affects consumers' willingness to share their data, which differs depending on how the personal data are described (Winegar & Sunstein, 36). Another example comes from studies on the willingness to share geolocation data. Specifically, Benisch et al. (6) showed that this willingness may depend on the time of day and day of the week, as well as the actual location. In the context of personalization, this indicates that the willingness to share data for personalization will depend on the type of data used and the data collector, as well as the exact conditions of the data processing, including its purpose.

In our study, we aim to explore whether the different types of personalization offered by companies (services vs. advertisements) affect consumers' willingness to share personal data. Both the theory of privacy as contextual integrity and privacy calculus theory predict that willingness to share will indeed differ between the two types of personalization.

Starting with contextual integrity, the two types of personalization have different purposes, while the data type and actors largely remain the same. While the purpose of data collection for personalized services is to provide a tailored experience to the consumer, its purpose in the advertisement model is to provide revenue to the company. The question is then whether consumers perceive this information flow as appropriate in the consumer-business interaction context. Here, past research has shown that consumers perceive it as less of a violation of their privacy expectations when a retailer collects and uses their information to provide them with recommendations than when the information is sold to a tracking company (Martin & Nissenbaum, 28). In the context of smart home devices, it has been shown that using the data for advertising purposes negatively affects acceptability, whereas this is not the case when data are used for offering price discounts or developing new features for the device (Apthorpe et al., 3). This suggests that users will find it less acceptable for a business to collect and use their data for the personalization of advertising than for the personalization of services that they explicitly contracted for.

Secondly, looking at the privacy calculus, the two types of personalization have different benefits and costs. In order to provide personalized services, businesses typically do not need to share consumers' personal data with third parties. There are, of course, some exceptions: online news services, for instance, may display news articles from third parties and share personal data with them. This general lack of third-party involvement could be viewed as a cost reduction for consumers and diminish potential privacy concerns. In contrast, personalized advertising often involves the sharing of personal data with third-party advertisers, which has been shown to generate more negative attitudes towards the platform, and thus reflects a greater cost in the privacy calculus (Boerman et al., 8).

Given the differences in transmission principles and their appropriateness in a given context, as well as in benefits and costs, we hypothesize that consumers will be more willing to share their personal information with services that collect data for personalized services than for personalized third-party advertisements. Therefore, our first hypothesis is as follows:

  • H1: Consumers are more willing to share their personal information with a provider of personalized services than with one that collects data for personalized third-party advertisements.

Another question we aim to address in our study is whether the difference in willingness to share between personalized advertisements and services depends on the price of the main service, specifically whether it is paid for or offered for free.

As mentioned above, from the perspective of privacy calculus, it will be the additional cost of data sharing for advertising purposes compared to personalization of services that will make consumers less willing to share their data for personalized advertising. Previous research, however, has shown that people expect free services to engage in more privacy-intrusive practices than paid services (Bamberger et al., 5). In other words, there seems to be a common belief that when obtaining a paid product, we pay for the protection of our privacy. This suggests that although both paid and free services engage in personalized advertising, consumers may perceive the paid one as more protective of their privacy and, therefore, less costly than the free service. Therefore, they should be more willing to share their data for personalized advertising when the product costs money rather than when it is offered for free. This would imply that the difference between the willingness to share data for personalized advertising and for personalized services should be larger when services are offered for free than when they are offered in exchange for money. Yet, research on zero-price and “pseudo-free” offers suggests that this relationship might be reversed.

Previous studies have shown, first, that when the monetary price is zero, consumers tend to overestimate the benefits of the service (Shampanier et al., 30) and underestimate the costs (Hüttel et al., 20). Second, people react differently to services offered for "free" but involving non-monetary costs, such as the collection of personal data, than to paid services (Hüttel et al., 20; Dallas & Morwitz, 12). Consequently, if consumers are required to share their personal data in exchange for a free service, they are likely to perceive these costs as smaller than when they are asked to share their personal data in addition to paying for the service. At the same time, they may tend to overestimate the benefits of personalization whenever it comes with a free offer. This suggests that, in the privacy calculus, the costs will be perceived as smaller and the benefits as larger when personalized advertising comes with a free rather than a paid offer.

From the perspective of privacy as contextual integrity, people's expectations regarding the appropriate information flow may differ between free and paid exchanges. Indeed, research on zero-price effects suggests that people tend to apply social rather than market norms to exchanges that do not involve monetary payments (Ariely et al., 4). Further research has shown that social norms such as reciprocity and fairness mediate the effect of zero prices on people's willingness to accept free offers, even those involving non-monetary payments (Dallas & Morwitz, 12; Hüttel et al., 20). In addition, though relying on a small sample of respondents, qualitative research has shown that, in the context of social media platforms offered to consumers for free, personalized advertising is both expected and accepted (Van den Broeck et al., 33). This suggests that, in contrast to the retail and smart device contexts where the product involves a monetary price (Apthorpe et al., 3; Martin & Nissenbaum, 28), in the context of free services, sharing data for the purpose of personalized advertising may be both expected and found appropriate.

Given all this, we hypothesize that:

  • H2: The difference in willingness to share information between the two types of personalization will be larger when the service is paid for than when it is offered for free.

According to the theoretical frameworks we relied on when formulating our hypotheses, both expectations and perceptions of fairness (Dallas & Morwitz, 12) might play a mediating role in the impact of the type of personalization, as well as its interaction with the price of the app, on the willingness to share data. As we did not establish a precise theoretical model of the relationship between our treatments, these variables, and our main variable of interest, i.e., the willingness to share, we refrained from formulating precise hypotheses as to those effects.

Methods

Participants and Design

We recruited 3,473 participants (age: M = 34.1, SD = 13.4; 50% female) from Prolific for our study. Thirty-seven observations were excluded from the analysis due to two failed attention checks. The survey was administered using Qualtrics, and participants took approximately 5–6 min to complete all questions. Participants were rewarded with £0.75 (≈ €0.84) for their participation. To ensure transparency and to guard against reporting biases, we pre-registered our hypotheses on the Open Science Framework.

We relied on a convenience sample, where participants completed the study on a first-come, first-served basis, while being randomly assigned to one of the experimental conditions. Table 2 in Appendix 2 presents the demographic distribution in our sample. Although the gender distribution was balanced, the majority of participants (79%) fell within the 18–44 age group. Additionally, the majority of participants (76%) identified themselves as residing in either the EU or UK. Lastly, most participants (60%) reported earning below £40k.

The experiment involved presenting participants with an offer for a mobile application, with separate studies conducted for a music app in November 2022 and for shopping and news apps in December 2022. The three studies, each involving a different type of app, were conducted to ensure the generalizability of our results across various contexts. Hypotheses were pre-registered independently for each app. A 2 × 2 between-subject design was employed for each application, manipulating whether the app offered personalized services or personalized advertisements (Personalised Service vs. Personalised Advertisement treatments), as well as the price of the app (Free vs. Paid treatments). In the Personalised Advertisement treatment, the app was described as collecting personal information such as browser history, date of birth, and email address to display third-party advertisements related to music, shopping, or news. In the Personalised Service treatment, the app collected the same personal information to personalize its services (e.g., "suggest newspapers and stories that suit your personal interests"). The only difference between the two types of personalization was whether it pertained to the main service provided within the app or to third-party advertisements. This design feature ensured that any differences found between the treatments were not due to participants perceiving the usefulness of personalized content differently, which could increase the benefits of data sharing and affect the privacy calculus. The aim of this design was to isolate the impact of the type of personalization on participants' willingness to share personal data.

Based on a pilot study conducted in May 2022, we anticipated a small effect size of d = 0.3 in the Paid treatments. To achieve 95% statistical power at a standard statistical significance level of .05, we estimated that a sample of 578 observations would be necessary in the Paid treatments. As we did not have any pilot data to predict the size of the interaction effect, we decided to collect the same number of observations in the Free treatments to ensure adequate statistical power. Therefore, we recruited over 1130 observations for each app study.
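For concreteness, these numbers match a standard two-sample power calculation: with d = 0.3, a two-sided α of .05, and 95% power, roughly 289 participants per group are needed, i.e., about 578 observations across the two Paid cells. Below is a minimal sketch of that calculation, assuming a two-sided, two-sample t-test as the target analysis (an assumption on our part; the exact test behind the calculation is not restated in the text):

```python
# Sketch: sample size needed to detect d = 0.3 at alpha = .05 with 95% power,
# assuming a two-sided, two-sample t-test with equal group sizes.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.95)
print(round(n_per_group))      # ~289 participants per group
print(2 * round(n_per_group))  # ~578 observations across the two Paid treatments
```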

Procedure and Materials

Participants were first informed about the general purpose of the study. They were then asked to read and provide consent to the terms of their participation. If they did not consent, they were not able to continue with the experiment. After obtaining consent, participants were asked to provide their Prolific ID and complete an attention task that required them to click on the screen at least three times before proceeding. Those who failed the attention check were immediately notified and given the option to quit the study. Next, participants were informed that they would be presented with an offer for a mobile app and were asked a series of questions about it. Participants were randomly assigned to different treatments and presented with the corresponding offer. Appendices 1 and 3 include all scenarios and questions.

We aimed to investigate two primary questions in our study: whether participants are willing to use the app, and whether they are willing to share their personal information with it. In the study with the music app, the order of these two questions was randomized. However, in the studies with the shopping and news apps, we first asked participants about their willingness to share personal information. Participants responded to these questions using a 7-point Likert scale, ranging from "Extremely unwilling" to "Extremely willing."

Next, participants were presented with seven statements to assess their perception of the fairness of the offer (i.e., The offer I was presented with was fair/questionable/justified/honest/unfair/rip-off/suspicious). The order of these statements was randomized, and participants indicated their level of agreement with each statement using a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Additionally, we measured participants’ expectations regarding the app’s data protection practices by presenting them with seven statements (eight for the shopping and news apps) describing various practices (e.g., “I expect that the app will share my personal information with third parties.” or “I expect that the app will sell my personal information to third-parties.”) Participants indicated their level of agreement with each statement on a 7-point Likert scale ranging from strongly disagree to strongly agree. Again, the order of the statements was randomized. Here, we also included a second attention check in which we reminded participants to pay attention and instructed them to click “Somewhat Disagree.”

At the end of the survey, we included two questionnaires. In the first one, we measured general privacy concerns with five items on a 7-point Likert scale (e.g., “Compared to others, I am more sensitive about the way companies handle my personal information” or “Compared to others, I tend to be more concerned about threats to my information privacy.”). The second questionnaire measured reciprocity aversion with 11 items on a 5-point Likert scale (e.g., “I’ll never ask for help if I don’t have to, to avoid owing others.” or “Usually I don’t accept favours unless I am sure I can pay back all the favours quickly.”) Finally, we asked participants a few demographic questions (age, gender, financial situation, language).

Results

Our experiment included three types of apps offering shopping, news, and music services. While it is reasonable to anticipate variations in our measures of interest across these three contexts, our focus was not on comparing general differences between the three apps. Instead, our aim was to examine the impact of the type of personalization and the price of the app across these contexts. As we did not formulate specific hypotheses regarding differences between these apps and did not anticipate any interaction between the type of the app, the type of personalization, and/or the price of the service, we present our analysis separately for each app.

Willingness to Share Personal Information

The results showed that most participants were not willing to share their data with any of the mobile applications for the purpose of personalization, with a median response of 3 on a 7-point Likert scale (“Slightly unwilling”). Specifically, in the music app condition, the average response was 3.64 (SD = 1.72), while in the shopping app and news app conditions, the average responses were 3.35 (SD = 1.75) and 3.19 (SD = 1.68), respectively. This indicates that the majority of participants answered that they were “Extremely unwilling,” “Unwilling,” or “Slightly unwilling” to share their data with any of the apps.

None of the applications showed a statistically significant difference in willingness to share personal information between personalized services (music app: Mdn = 3, M = 3.68, SD = 1.71; shopping app: Mdn = 3, M = 3.44, SD = 1.79; news app: Mdn = 3, M = 3.23, SD = 1.68) and personalized advertisements (music app: Mdn = 3, M = 3.59, SD = 1.72; shopping app: Mdn = 3, M = 3.26, SD = 1.70; news app: Mdn = 3, M = 3.14, SD = 1.68), with z-scores above −1.48 and p-values above .14. The same pattern emerged when testing for simple effects of personalization type separately in the Paid and Free treatments within each app (z > −1.36, p > .18), except for the Paid Shopping app (z = −2.00, p = .045). Overall, these results suggest that participants' willingness to share personal information did not significantly differ between personalized services and personalized advertisements, thereby not supporting hypothesis 1 (Fig. 1).
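The z statistics reported above are consistent with nonparametric rank-based comparisons of the two treatment groups. As an illustration only, here is a minimal sketch of such a comparison for a single app, assuming a two-sided Wilcoxon rank-sum (Mann-Whitney) test; the file and column names are hypothetical:

```python
# Sketch: rank-based comparison of willingness to share between the two
# personalization treatments for one app. File and column names are hypothetical.
import pandas as pd
from scipy.stats import ranksums

df = pd.read_csv("music_app.csv")  # hypothetical data file
service = df.loc[df["treatment"] == "Personalised Service", "willingness"]
ads = df.loc[df["treatment"] == "Personalised Advertisement", "willingness"]

stat, p = ranksums(ads, service)  # returns the z statistic and two-sided p-value
print(f"z = {stat:.2f}, p = {p:.3f}")
```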

Fig. 1 Willingness to share personal information depending on the type of personalization, price, and app type. Note: Violin plots and box plots display the distribution of participants' responses to the question about their willingness to share personal information, measured on a 7-point Likert scale where 1 represents "Extremely unwilling" and 7 represents "Extremely willing." The middle line in box plots marks the median. The box ranges from the 25th to the 75th percentile, while the whiskers indicate the minimum and maximum values.

Our planned linear regression analysis, as indicated in Table 1 (Model 2), revealed no statistically significant interaction between the application price and the type of personalization, thus providing no support for hypothesis 2. However, we did observe a main effect of price for all applications except the shopping app: participants were more willing to share their data if the app was offered for free than if it was paid for (see Table 1, Model 1). The results remain robust after running a logistic regression analysis to account for the bimodal distribution of participants' responses (see Table 4 in Appendix 4).
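For concreteness, the interaction specification of Table 1 (Model 2) can be sketched as follows, fit separately for each app. This is an illustrative reconstruction with hypothetical variable names, not the study's analysis code:

```python
# Sketch: willingness to share regressed on personalization type, price,
# and their interaction (cf. Table 1, Model 2). Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("music_app.csv")  # hypothetical data file, one per app
model = smf.ols("willingness ~ C(personalization) * C(price)", data=df).fit()
print(model.summary())  # the interaction row corresponds to the H2 test
```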

Table 1 Linear regression analysis on the willingness to share personal data

We do not have theoretical reasons to believe that people would react differently to different types of personalization depending on demographic characteristics. However, although our samples differed with respect to demographic features, controlling for individual characteristics of participants, including age, gender, financial situation, country of residence (EU, UK, or Other), and general privacy concerns, did not change our results (see Table 1, Model 3). Control analyses showed that older participants were significantly less willing to share their personal data with all three mobile applications, and that higher levels of privacy concern were associated with lower willingness to share personal data (see Table 1, Model 3).

Finally, an additional analysis comparing willingness to share data across the three apps showed that participants' willingness to share their data depended on the type of app: participants were less willing to share their data with the news app, and more willing to share their data with the music app, than with the shopping app (News vs. Shopping: z = 2.13, p = .03; Shopping vs. Music: z = −4.10, p < .001). However, even when combining all datasets (N = 3,436), we found no statistically significant difference in willingness to share between the Personalised Service and Personalised Advertisement treatments: z = −1.79, p = .07.

Fairness and Expectations

Our experiments yield compelling evidence that participants do not distinguish between their willingness to share data for personalized advertisements and services. Additionally, there is no interaction between the price of the app and the type of personalization. However, we do observe that participants are more inclined to share data with free apps than with paid ones. To gain a deeper understanding of participants' attitudes toward these offers, we conducted an exploratory analysis of participants' responses to two questionnaires: one assessing their perceptions of the fairness of the offer, and the other exploring their expectations of data processing practices.

We calculated participants’ fairness perception score by averaging their responses to seven statements that describe the offer as fair, questionable, justified, honest, unfair, rip-off, and suspicious (see Fig. 2). To account for reverse scoring, we recoded four items (questionable, unfair, rip-off, and suspicious). The quality of all constructs is reported in Table 3 in Appendix 3.

Fig. 2 Fairness perception of the offer depending on the type of personalization, price, and app type. Note: Violin plots and box plots display the distribution of participants' perception of fairness regarding the offer, with higher values indicating higher fairness and lower values indicating lower fairness. The middle line in box plots marks the median. The box ranges from the 25th to the 75th percentile, while the whiskers indicate the minimum and maximum values. The dots indicate the outliers.

We compared fairness perception scores between the two types of personalization and found no statistically significant differences across the three apps (based on p-values adjusted for multiple comparisons using Bonferroni correction). However, we found a main effect of app price on fairness perception, with participants perceiving the free versions of all three apps as fairer than the paid versions (z > 4.9, p < .001 adjusted for multiple comparisons using Bonferroni correction).

The expectations variable was created by taking the mean of participants' responses to seven (eight for the news and shopping apps) items that describe various data collection and processing practices of the business offering the app (see Fig. 3). On average, participants expect the business to use their data for various purposes, ranging from benign ones, like enhancing services, to more problematic ones, like selling data to third parties (M = 5.33, SD = 1.11). Our study revealed that participants' expectations regarding data treatment did not differ significantly between free and paid apps (z > −1.52, p > .13). However, for the news app, the type of personalization implemented by the app did impact participants' expectations (music app: z = 2.19, p = .34; news app: z = 3.22, p = .015; shopping app: z = 1.52, p = 1; p-values adjusted for multiple comparisons using Bonferroni correction). Specifically, when the news app personalizes advertising, participants expect it to engage in more privacy-intrusive practices than when it personalizes services.
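As a note on the adjustment: a Bonferroni-corrected p-value is min(k × p, 1) over k comparisons, which is why the shopping-app value is reported as exactly 1. A minimal sketch, with raw p-values that are illustrative back-calculations rather than reported data:

```python
# Sketch: Bonferroni adjustment over the three per-app comparisons (k = 3).
# Raw p-values below are illustrative, chosen only to reproduce the adjusted
# values cited in the text; they are not the study's reported data.
from statsmodels.stats.multitest import multipletests

raw_p = [0.113, 0.005, 0.42]  # music, news, shopping (illustrative)
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print(p_adj.round(3))  # [0.339 0.015 1.   ] -- cf. p = .34, .015, and 1
```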

Fig. 3 Expectations about data practices depending on the type of personalization, price, and app type. Note: Violin plots and box plots display the distribution of participants' expectations regarding the app's privacy practices, with higher values indicating an expectation of more privacy-intrusive practices and lower values indicating an expectation of less privacy-intrusive practices. The middle line in box plots marks the median. The box ranges from the 25th to the 75th percentile, while the whiskers indicate the minimum and maximum values. The dots indicate the outliers.

Discussion and Limitations

Key Findings

Consumers are constantly exposed to personalized content, such as political campaigns, news, entertainment services, and advertising. While some of this content is the primary service offered by a business, as in the case of movie streaming services that personalize movie suggestions, it is less clear whether personalization of advertising or political messages is part of the main service provided under the contract between a business and a consumer who signs up for social networks, online news, or movie streaming services. Determining whether personalization of advertising is necessary for the fulfilment of the contract is crucial from a GDPR perspective, as processing personal data to personalize services that are part of a contract does not require separate consent. However, when personalization is not part of a contract, businesses must obtain consumers’ consent before collecting and processing their data for that purpose. This requirement is especially important for companies that offer free products and services and rely on personalized advertising to generate revenue.

In this study, we aimed to investigate whether the legal distinction between personalized services and personalized advertising is reflected in consumers’ attitudes towards sharing their data with businesses implementing these types of personalization. Surprisingly, our findings show that consumers’ views do not align with the legal distinction. Our results indicate that consumers’ willingness to share their personal data does not vary depending on whether the data are used for personalized advertising or personalized services, even when the price of the product is considered. Overall, a majority of participants expressed unwillingness to share their data with any of the app providers. Notably, despite the lack of differences in willingness to share data based on the type of personalization, consumers expect companies that rely on personalized advertising to engage in more privacy-intrusive practices compared to those providing personalized services.

Our study also found that consumers differ in their willingness to share their data based on the price of the app. They are more likely to share their data with free mobile applications than with paid ones, and they perceive offers for free apps as fairer than offers for paid apps, regardless of the type of personalization used to generate revenue. This finding is noteworthy, as some have argued that data should not be considered payment for free services (see, for instance, European Data Protection Supervisor, 18). Our study shows that consumers view transactions where they provide their data in exchange for a free service as fair, or at least fairer than when paid apps employ the same practice. Consumers appear to be averse to the collection and processing of their data for personalization purposes in addition to paying a monetary price for the product. This holds true regardless of the type of personalization used. Our findings highlight the need for increased scrutiny of paid digital content, especially since recent studies indicate that paid applications and services are just as likely to engage in privacy-intrusive practices as free ones (Bamberger et al., 5).

Recommendations

Based on our findings, we recommend that the processing of personal data for any type of personalization should be based on a clear and informed consent of the data subject. Simply signing up for music, movie streaming, news, or shopping services does not necessarily mean that consumers are willing to share their data to receive personalized services. Our study shows that consumers are equally reluctant to share their data for personalization of services or advertisements. To better align the law with consumers’ attitudes, companies should only be permitted to process consumer data for personalization purposes with their explicit consent. This means that companies must obtain consumers’ clear and unambiguous consent before processing their data for personalization, whether for services or advertisements. In the absence of such consent, companies should not be permitted to use personal data for any personalization purposes.

Such an approach would also align with the literature regarding the influence of website choice architecture on consumers' privacy choices (van Ooijen & Vrabec, 34). For instance, consumers are more likely to give up their personal data when disclosure is presented as a default option by the website (Johnson et al., 21). To combat such practices, the GDPR explicitly provides that the use of pre-ticked boxes does not constitute valid consent (Recital 32) and embeds the principle of "privacy by default." This requires that an individual's personal data are not made accessible without the individual's intervention and that only data necessary for a specific purpose are collected (Art.25(2) GDPR). While these measures could be seen as addressing the manipulation of choice architecture to sway consumers' privacy decisions, they are only applicable when the legal basis for data processing is consent. In cases where the data collector asserts that processing is necessary for contractual performance and does not seek separate consent, these measures do not apply (van Ooijen & Vrabec, 34). Thus, considering that our results suggest that consumers' willingness to share is homogenous across the two types of personalization, requiring consent across the board would help remedy the current discrepancy in protection.

Businesses may oppose the suggested reform, arguing that it could ultimately harm consumers by preventing the development of new products and services that would benefit them. They may also claim that they will not be able to offer free services, as they rely on consumer data for personalized targeted advertising to generate revenue. While the first argument may be challenging to refute, the second one has already been addressed by the EDPB in its recent decision on Meta. The EDPB argued that Meta can earn revenue through context-based advertising, which is less invasive and does not require extensive data collection and processing (European Data Protection Board, 16, para.121). Although our study does not aim to assess the validity of these arguments, it provides helpful guidance that can help determine which approach aligns better with consumer attitudes.

Limitations

While our study provides valuable insights, there are limitations to our conclusions about the impact of personalization type on willingness to share personal data. Specifically, we found a null effect, meaning that we did not observe a significant difference in willingness to share data based on the type of personalization used by the mobile application. Interpreting null effects can be challenging because it is unclear whether the lack of effect reflects the real world or is a result of methodological limitations. To address these challenges, we took several steps. First, we conducted a well-powered study with a large enough sample size to detect a small effect. Second, we tested the effect across three different types of applications and still did not observe a significant difference. Finally, we combined all of the data and found no evidence for an effect of personalization type on willingness to share data with mobile applications.

The second limitation of our study is that participants made hypothetical decisions, and therefore, our results may not reflect their actual choices. Yet, from a policy perspective, it might be more valuable to follow people’s declared attitudes rather than their choices, which may be malleable. Contrary to the revealed preferences assumption, people’s actual choices may not be the best measure of their preferences, in particular, when they are affected by various contextual features. Therefore, our study provides insight into people’s attitudes towards data sharing and personalization in specific contexts, which can better inform policy decisions.

The hypothetical nature of our study could also have increased the possibility that participants did not fully understand or carefully consider the distinction between personalization of services and advertisements as described in our scenarios. While future studies could control for this by including a manipulation check asking participants about the details of the scenario, such a possibility is rather unlikely given the design details and our results.

First, participants’ engagement in the study was incentivized in two ways—they were rewarded for taking part, and the reward was conditional upon passing two attention checks. Failing both attention checks led to exclusion from the payments. Additionally, submissions were rejected on Prolific, which decreases participants’ chances of being invited to future studies. These two design features should ensure that participants carefully read the scenarios.

Second, although the difference in the price of the apps was only briefly indicated in the scenarios, we found significant differences in participants’ willingness to share their data, as well as in their perception of fairness, depending on the price of the app. This suggests that participants did read the scenarios carefully enough to detect such small differences.

We also observed differences in expectations regarding data processing practices depending on the type of personalization. Specifically, participants expected news apps to engage in more privacy-intrusive activities when offering personalized advertising compared to when offering personalized services. This indicates that consumers have varying expectations regarding what companies may do with their data, depending on whether they personalize services or advertisements, suggesting that they are indeed able to distinguish between the two.

Finally, the subtle difference in the description of personalization type is a feature, not a bug, of our experimental design. We wanted to ensure that the core difference between the two personalization types is whether they are provided as part of the service or as an advertisement, while keeping all other features such as the content of personalization constant to ensure that participants’ willingness to share their personal data is not affected or does not interact with participants’ perception of the usefulness of the content.

Implications and Future Directions

Our findings indicate that merely having different expectations is insufficient to alter willingness to share data. We can only speculate on potential reasons for the lack of differentiation between the willingness to share data for personalized advertisements and services. The gap between expectations regarding privacy practices and actual willingness to share data suggests that the differences in concerns and perceived costs related to privacy loss may not be significant enough to affect the willingness to share. Alternatively, factors such as a general reluctance to share data for personalization purposes, regardless of data controllers’ practices, could be influencing the outcome. Thus, the results do not directly contradict the privacy calculus theory. It is possible that participants hold expectations about other types of privacy practices not captured in our study, which affect the perceived costs associated with data collection for service personalization. Furthermore, our results suggest that people’s perceptions of appropriate information flows in a given context may be more nuanced. Specifically, the use of data for personalized services may not necessarily be perceived as appropriate, even in a business-consumer relationship.

One potential factor that may play a role in people’s unwillingness to share data for personalization is “privacy fatigue”—a sense of weariness toward privacy issues, in which individuals believe there are no effective means of managing their personal information on the Internet (Choi et al., 10). According to this perspective, whether personal data are used for the personalization of services or advertisements does not matter to consumers since they are perceived to be outside of their control. However, this explanation is unlikely to fully account for the observed patterns in our study for two reasons. First, we observed that people are generally unwilling to share their personal data, which is not in line with the “privacy fatigue” account. Second, “privacy fatigue” may well explain people’s behaviour online, but it may not fully capture their attitudes, which may still reflect high privacy concerns.

Conclusions

Policymakers face a challenging task of balancing the interests of businesses and consumers when it comes to protecting personal data. To achieve this balance, various legal bases for processing consumer data have been introduced. For instance, Article 6 of the GDPR permits companies to process consumer data lawfully when such processing is necessary for the performance of a contract (Article 6(1)(b) GDPR) or for the legitimate interests pursued by the controller (Article 6(1)(f) GDPR). However, such processing can limit consumers’ control over their data. In these instances, companies do not need to obtain separate consent from consumers, which may pose risks to the latter’s privacy.

The question of which legal basis to rely on for personal data processing is a contentious one, particularly when it comes to personalization. While processing personal data for the purpose of personalizing music, movies, or news may be seen as necessary to perform a contract, the same cannot be said for personalization of advertising. In this paper, we explored whether the legal distinctions map to people’s perceptions, more specifically, their willingness to share data for personalization of advertisement and services.

Our research reveals that consumers do not differentiate between the two types of personalization, and most are hesitant to share personal data for any form of personalization. We contribute to previous research on personalization by demonstrating that even seemingly benign and appropriate personalization, such as the personalization of services, can elicit people’s unwillingness to share their data, akin to personalization that is perceived as problematic, such as personalized advertisements. Thus, we illustrate that information flows that may appear suitable in a given context can still face opposition from users. Consequently, our findings carry significant implications for both businesses and policymakers.

If policymakers aim to consider people's willingness to share their data when drafting and enforcing laws, they should offer consumers the option to opt out of data sharing, even if it is for personalizing services integral to a contract. Consumers should be able to access non-personalized services if they prefer not to share their data for that purpose. This message is also significant for businesses, as offering such an option when providing services may be desirable. While personalized experiences may appeal to many consumers, some may opt for non-personalized services due to privacy concerns.