1 Introduction

A plethora of empirical studies aims to explain human privacy behavior, many of which focus on the so-called privacy paradox, i.e., the discrepancy between stated privacy concerns and actual privacy behavior (for an overview, the reader is referred to, e.g., [15]). Several theoretical explanations have been proposed for this phenomenon so far; however, there is no agreed-upon theoretical framework that correctly predicts and explains human privacy behavior. While the privacy paradox has received considerable attention in the usable privacy research field, this attitude-behavior gap is not new in psychological research and has been investigated in other application areas such as health behavior for decades [3, 8, 25, 28, 30]. Hence, it could be worthwhile to consider theoretical models of human behavior stemming from other research contexts for privacy research as well, as these might provide novel explanations for the privacy-specific attitude-behavior gap (i.e., the privacy paradox) and add valuable factors for predicting privacy behavior, which can serve as a basis for designing privacy-supportive interventions.

The aim of this chapter is thus to summarize theoretical frameworks for explaining and predicting human behavior that could add to our understanding of user privacy behavior. Some of these concepts have already been investigated in depth in the privacy context, while others originate from other fields, such as health or work psychology, and have not been applied to the privacy context yet. The list of models is not exhaustive; rather, we selected models that have either been applied extensively in privacy research or that hold promising potential to add to the existing privacy models and stimulate novel insights.

2 Homo Economicus

The concept of the homo economicus [13, 33], originating from economic theory, forms the basis of the privacy calculus model [24], which is well known among privacy researchers. This behavioral model is based on the idea that people act purely rationally and pursue the goal of maximizing their benefit in all their actions. To this end, the advantages and disadvantages of a decision are weighed against each other, and the behavior with more positive than negative consequences is chosen (see Fig. 1). In the case of privacy, for example, social benefits like feeling connected to one’s social contacts can outweigh the downsides of using a privacy-threatening messenger or social network. On the other hand, potentially severe consequences of a privacy breach, such as the possibility of sensitive health information becoming publicly known, can discourage information sharing in this area.

Fig. 1
A block diagram of the homo economicus model. The positive consequences and negative consequences together lead to behavior.

The homo economicus model

Although the privacy calculus and the underlying model of the homo economicus offer intuitive and easy-to-understand explanations for human behavior, they fall short in explaining how and which consequences are evaluated by the users. While positive consequences, such as social inclusion or free and easy access to services, might be relatively easy to identify in a specific context, negative consequences are usually fuzzier and harder to pinpoint. For example, these can include the psychological burden of feeling surveilled, a vague perception of various risks that might become relevant in the future, or additional costs in terms of time and money for using more privacy-preserving technologies. Furthermore, the model does not make any assumptions about how the benefits and disadvantages are weighted by individual users. Hence, the model of the homo economicus seems intuitive for explaining behavior retrospectively—e.g., the user decided to participate in a social network because the advantages of doing so were perceived to be greater than the potential negative consequences—but it fails at predicting future behavior. More refined models are needed to also map the various factors that determine how positive and negative consequences are perceived by different users in different situations and how these translate into actual behavior.
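To make the weighing step concrete, the following is a minimal sketch of the privacy calculus as an additive utility comparison. The factor names, values, and the additive form are purely illustrative assumptions on our part; the model itself leaves the identification and weighting of consequences unspecified.

```python
# Minimal sketch of the privacy calculus as an additive utility weighing.
# Factor names and values are illustrative; the model itself does not
# specify how consequences are identified or weighted.

def privacy_calculus(benefits: dict[str, float],
                     costs: dict[str, float]) -> bool:
    """Perform the behavior iff perceived benefits outweigh perceived costs."""
    return sum(benefits.values()) > sum(costs.values())

# Example: joining a privacy-threatening social network.
benefits = {"social_inclusion": 0.8, "free_access": 0.5}
costs = {"feeling_surveilled": 0.4, "vague_future_risks": 0.3}
print(privacy_calculus(benefits, costs))  # True: the user joins
```

The sketch makes the model's retrospective character visible: any observed choice can be rationalized by picking suitable values, but nothing in the model constrains these values in advance.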

3 Antecedents \(\rightarrow \) Privacy Concerns \(\rightarrow \) Outcomes (APCO) Model

The APCO model [29, 36] was developed based on reviews of the privacy literature [7]. It focuses on privacy concerns (also referred to as beliefs, attitudes, perceptions), which are directly and independently influenced by antecedents (see Fig. 2). According to the model, privacy concerns are a function of previous privacy experiences (e.g., users who have had bad experiences in the past tend to have greater privacy concerns), privacy awareness (e.g., if users are not at all aware that data is being collected from them in a certain situation, they will have fewer privacy concerns), demographic factors such as age or gender (the empirical evidence on the relationship between demographic factors and privacy is very mixed, however, so we will refrain from specifying a concrete direction of effect here), personality factors (here, too, the evidence is rather mixed), and culture or corporate climate (in some cultures, for example, more value is placed on privacy protection than in others). The privacy concerns of the users in turn affect the outcomes in the form of regulations, behavior (including data disclosure), and trust (e.g., towards the data collecting entity). Trust depends on the content provided in the privacy notice, which, for example, provides information about what data the entity claims to collect and how the collected data is protected. Furthermore, the privacy calculus is considered for the concrete decision for or against a certain behavior (see Sect. 2), i.e., the weighing of costs and benefits of this behavior.

Fig. 2
A block diagram of the APCO model. The antecedents of privacy experiences, privacy awareness, demographics, personality, and culture lead to privacy concerns. The privacy concerns lead to the outcomes regulations, behavioral reactions, and trust. Trust is based on the privacy notice.

The APCO model

The APCO model has been widely criticized since its publication [7], among other things because important psychological processes such as cognitive biases and bounded rationality are not taken into account. Hence, revised versions of the APCO model have been proposed [7, 10]. Still, the APCO model considers various factors that are internal and external to the user. In this respect, the APCO model is superior to the privacy calculus, of which it is a direct extension, but it is nevertheless only suitable to a very limited extent for explaining or predicting privacy behavior, since important factors are not considered and the current state of studies on the various influencing factors is rather inconclusive.
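To illustrate the model's two-stage structure (antecedents feed privacy concerns, which in turn feed outcomes), consider the following sketch. The selection of antecedents and all functional forms are hypothetical; as noted above, the evidence on demographics and personality is too mixed to assign directions of effect, so these are omitted here.

```python
# Hypothetical sketch of the APCO structure: antecedents -> concerns -> outcomes.
from dataclasses import dataclass

@dataclass
class Antecedents:
    bad_experiences: float   # prior privacy violations experienced (0..1)
    awareness: float         # awareness that data is being collected (0..1)
    cultural_value: float    # value placed on privacy in one's culture (0..1)

def privacy_concerns(a: Antecedents) -> float:
    # Only antecedents with a comparatively clear direction of effect are
    # used; equal weighting is an arbitrary simplification.
    return (a.bad_experiences + a.awareness + a.cultural_value) / 3

def disclosure(concerns: float, trust: float) -> float:
    # Outcome: concerns dampen disclosure, trust (shaped by the privacy
    # notice) raises it. Clamped to [0, 1].
    return max(0.0, min(1.0, 0.5 - 0.4 * concerns + 0.4 * trust))

a = Antecedents(bad_experiences=0.7, awareness=0.9, cultural_value=0.6)
print(disclosure(privacy_concerns(a), trust=0.3))  # low disclosure likelihood
```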

4 Theory of Planned Behavior

The theory of planned behavior is among the most popular models that explain human behavior in psychological research [4, 9, 26, 37]. As the name already suggests, this theory aims to explain only deliberate, i.e., planned behavior, and is less suited to explain automatic or reflexive behavior. It postulates that the users’ behavioral intention, e.g., to provide their data, is mainly affected by their attitude towards this behavior (i.e., do these users think it is a good idea to provide their data in general), the perceived social norm of this behavior (i.e., does their close social circle think it is a good idea to provide this data or to provide data in general), and their perceived behavioral control (see Fig. 3). The latter distinguishes this theory from its predecessor, the theory of reasoned action [5], in which this factor was not considered.

Fig. 3
A block diagram of the theory of planned behavior. The attitude, subjective social norm, and perceived behavioral control affect the behavioral intention, which in turn leads to behavior. The perceived behavioral control also directly affects the behavior.

The theory of planned behavior

However, behavioral control might be an important factor—for instance, it seems reasonable that someone who thinks it is a good idea to protect their private communication by using end-to-end encryption (E2EE), and who further thinks that the people close to them consider this a great idea, is still not likely to use E2EE if they feel unable to implement it at all. Perceived behavioral control can depend on internal factors, such as knowledge or self-efficacy, but also on external resources, such as time, money, or autonomy. For example, someone who is employed in an organization may not have the authority to decide whether their colleagues should also implement E2EE; yet, they cannot send encrypted emails unless the recipients have also implemented E2EE. The head of the company, on the other hand, might require their employees to use E2EE, in which case the employees have little perceived behavioral control to decide against using it. Thus, perceived behavioral control is assumed not only to affect behavioral intention but also to have a direct effect on behavior, as users are sometimes forced to act against their intention due to external factors.
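The structure of the theory lends itself to a simple weighted formulation, sketched below. This is our own illustration; the theory does not fix numeric weights, which would have to be estimated empirically for a given behavior and population.

```python
# Sketch of the theory of planned behavior with illustrative weights.

def intention(attitude: float, subjective_norm: float,
              perceived_control: float,
              w=(0.4, 0.3, 0.3)) -> float:
    """Behavioral intention as a weighted sum of the three predictors."""
    return w[0] * attitude + w[1] * subjective_norm + w[2] * perceived_control

def behavior(intention_strength: float, perceived_control: float) -> float:
    # Perceived behavioral control also acts directly on behavior:
    # low control caps what intention alone can achieve.
    return min(intention_strength, perceived_control)

# E2EE example: positive attitude (0.9) and social norm (0.8), but the
# user feels unable to set up E2EE (control 0.2) -> little actual use.
print(behavior(intention(0.9, 0.8, 0.2), perceived_control=0.2))  # 0.2
```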

The theory of planned behavior further accounts for the attitude-behavior gap referred to as the privacy paradox [15, 22], i.e., the fact that people are often willing to do something (e.g., better protect their privacy, delete their Facebook account) but fail to actually do so, whether out of apathy or, for example, because they keep postponing the respective action for other reasons. The fact that this phenomenon, which is richly explored in other areas such as health behavior [3, 8, 25, 28, 30], was little known among privacy researchers may have led to an overestimation of it in the field of privacy research. Hence, while it is important to include the attitude-behavior gap in models aimed at explaining privacy behavior, there might be several other influencing factors in the case of privacy behavior apart from this gap. In the following sections, we will thus explore further behavioral models that, to the best of our knowledge, have not yet been widely applied to privacy research.

5 Cognitive Consistency Theories

Cognitive consistency theories [1, 11, 12] describe the fact that people strive to avoid inconsistencies in their attitudes, beliefs, intentions, and actions, i.e., they strive for consistency among these factors. According to these theories, contradictions between behavior and attitudes lead to cognitive dissonance, which is perceived as unpleasant. To resolve this dissonance, people therefore adjust either their behavior or their attitude (see Fig. 4).

Fig. 4
A block diagram of cognitive consistency theories. General and specific privacy concerns lead to a negative impression. The positive features of the application lead to a positive impression. The negative and positive impressions lead to dissonance, which leads to an adjustment of perception or behavior.

Cognitive consistency theories

In terms of privacy behavior, this could, e.g., look as follows: Users feel a general level of privacy concerns, which is reflected in specific privacy concerns (related to the data disclosed during the interaction) when interacting with a concrete application. This leads to a negative perception of the application due to privacy concerns, i.e., the privacy-threatening potential of the application. On the other hand, the application may also provide various positive features, e.g., offer very useful functions or a good user experience. This leads to a positive perception of the application. The two contradictory perceptions of the application (negative because it is privacy-threatening and positive because of the functionalities provided) lead to cognitive dissonance. The users do not know how to act in the situation, since the two contradictory perceptions suggest to them at the same time that they should use the application and that they should refrain from using it. To resolve this dissonance, the perception of the application is adjusted, either by relativizing the specific privacy concerns (along the lines of “Although I am in principle against the disclosure of this kind of information, it is not so bad with the present application because…”) or by correcting the positive perception of the application’s functions downward (“The application is not really that great after all, because…”). The impression of the application adjusted in this way can then be translated directly into consistent behavior.

In the long term, however, a decision to use an application or to disclose data, i.e., behavior that is not privacy-preserving, can also lead to a dissonance between behavior and general privacy concerns. In these cases, either the behavior or the general privacy concerns can be adjusted, which can potentially lead to a gradual weakening of existing privacy concerns.
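As a toy illustration of this dissonance-reduction mechanism, the following sketch adjusts the weaker of two conflicting impressions until they are compatible. The adjustment rule, step size, and tolerance are our own assumptions; the theories themselves do not quantify dissonance.

```python
# Toy sketch of dissonance reduction between two conflicting impressions
# of an application (both on a 0..1 scale). Rule and magnitudes are
# illustrative assumptions, not part of the original theories.

def resolve_dissonance(positive: float, negative: float,
                       tolerance: float = 0.3) -> tuple[float, float]:
    """Weaken the less dominant impression until the conflict is bearable."""
    while min(positive, negative) > tolerance:
        if positive >= negative:
            negative -= 0.1   # relativize the specific privacy concerns
        else:
            positive -= 0.1   # talk down the application's usefulness
    return positive, negative

# Useful application (0.8) vs. strong privacy concerns (0.7): the
# concerns are relativized until a consistent impression remains.
print(resolve_dissonance(0.8, 0.7))  # -> (0.8, ~0.3)
```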

Like the homo economicus model, cognitive consistency theories explain privacy behavior in a rather post hoc manner. The theories offer an explanation beyond the rational model of homo economicus for seemingly inconsistent expressions of general privacy concerns and concrete privacy behavior, i.e., the privacy paradox, by taking well-studied psychological processes into account. Nevertheless, this model also does not allow for the prediction of privacy behavior, since adjustments for the purpose of establishing consistency may refer to different cognitions as well as behaviors. Moreover, in contrast to the theory of planned behavior, no concrete external factors such as social influence are considered.

6 Transactional Model of Stress and Coping

The transactional model of stress and coping [23] aims to explain in which circumstances a person experiences stress. Behind this lies the classic assumption from work psychology that not every person reacts in the same way to a stressor and that a given stressor may or may not lead to stress depending on personal conditions (see Fig. 5).

Fig. 5
A flow diagram of stress and coping. The steps are as follows. 1. Stressor. 2. Perception filter. 3. Primary appraisal: positive, dangerous, or irrelevant. 4. Secondary appraisal: sufficient or insufficient resources. 5. Stress. 6. Coping. 7. Emotion focused, problem focused. 8. Reappraisal.

The transactional model of stress and coping

A stressor occurring in the environment—for example, the collection of behavioral data via cookies for the purpose of delivering personalized advertising—must first pass through the users’ perception filter, i.e., be registered by them in the first place. Here, for example, users who visit websites from an EU country will have a higher probability of perceiving this stressor, as they are usually made aware of the use of these cookies via a cookie consent notice. However, individuals who are fundamentally more privacy-aware are also more likely to register such a stressor than individuals who are indifferent to online tracking.

If the stressor is registered by the users, a primary appraisal takes place: the stressor is classified as positive, dangerous, or irrelevant. This classification depends both on the nature of the stressor (e.g., using cookies for the purpose of serving personalized advertising results in the collection of far more sensitive data than using cookies for the purpose of generating statistical analyses of website usage) and on the attitudes of the individual (in this case, for example, the importance given to the protection of personal data). If the stressor is evaluated as positive, e.g., because the users would like to receive personalized advertising, or as irrelevant, because the users are neither positive nor negative about the process, the process ends at this point. Only if the stressor is evaluated as dangerous does a secondary appraisal follow, in which the users check to what extent they have resources to react to the stressor. If the users have sufficient resources (e.g., technical knowledge, time, an interface that allows them to refuse the use of cookies for the purpose of displaying personalized advertising), they neutralize the stressor by using these resources. However, if the secondary appraisal turns out to be negative, i.e., the users conclude that they do not have enough resources (for example, because the cookie consent notice is a content blocker that requires consent to the use of all cookies in order to visit the desired page, too little time or knowledge is available to make the required settings, and/or additional dark patterns—see also the chapter “The Hows and Whys of Dark Patterns: Categorizations and Privacy”—have been used), stress arises.

The users now attempt to deal with this stress by following either an emotion-focused/appraisal-focused or a problem-focused coping strategy. In the former, an attempt is made to reduce the negative emotions, e.g., by distraction, or to change one’s relation to the situation, e.g., by the users convincing themselves that the acquisition of their data in this case is bearable. Only problem-focused coping leads to privacy-protecting action. Here, an attempt is made to build up additional resources, e.g., by asking another person for help, acquiring additional knowledge through research, or installing a technical assistance tool. Finally, a reappraisal of the stressor takes place, reflecting on how successful the coping was. Based on this, it is possible that the same stressor will no longer be perceived as stress-generating in the future if the users realize that they now have sufficient resources to counter it.
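The appraisal sequence described above can be summarized as a simple decision flow. The following sketch is our own formalization; the outcome labels and the numeric resource/demand comparison are illustrative, as the model itself is qualitative.

```python
# Illustrative decision flow for the transactional model's appraisals.
from enum import Enum

class Outcome(Enum):
    NOT_PERCEIVED = "stressor never passes the perception filter"
    NO_THREAT = "appraised as positive or irrelevant; process ends"
    NEUTRALIZED = "appraised as dangerous, but resources suffice"
    STRESS = "resources insufficient; coping (emotion- or problem-focused)"

def appraise(perceived: bool, primary: str,
             resources: float, demand: float) -> Outcome:
    if not perceived:                 # perception filter
        return Outcome.NOT_PERCEIVED
    if primary != "dangerous":        # primary appraisal
        return Outcome.NO_THREAT
    if resources >= demand:           # secondary appraisal
        return Outcome.NEUTRALIZED
    return Outcome.STRESS

# An EU user notices a cookie banner and judges the tracking dangerous,
# but dark patterns push the demand beyond their time and knowledge:
print(appraise(True, "dangerous", resources=0.3, demand=0.7))
```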

Although the transactional model of stress and coping does not directly seek to explain behavior, but only the genesis of and coping with (negative) stress, we believe it adds value to privacy research. It emphasizes the otherwise easily overlooked fact that a stimulus—such as the collection of private data—must first pass through a person’s perception filter, which does not happen as a matter of course in times of ubiquitous data collection. Subsequently, this data collection must be classified as dangerous—this factor is also present in many other behavioral models. However, the model offers new input on dealing with insufficient resources. On the one hand, it illustrates that users experience stress in the nowadays common situation of being overwhelmed when dealing with the collection of their private data—a circumstance that is potentially detrimental to health and has so far received little attention in the public debate on data protection. On the other hand, it captures in a formal model the often ineffective coping via reappraisal of the situation. However, it does not explain how users can be persuaded to adopt a more goal-directed, problem-focused coping strategy. At this point, it should be emphasized that emotion-focused coping is by no means a bad option per se, because by reducing emotional stress, it allows users to shift into a more positive mindset that can facilitate better, problem-focused coping with the stressor. It is only harmful if emotion-focused coping is not combined with a problem-focused coping mechanism, because then the problem itself cannot be solved.

7 Rubicon Model

The Rubicon model [2] is a classic motivation model from psychology that distinguishes between different phases of action. Similar to the theory of planned behavior, the choice of action goals (“Behavioral Intention” in the theory of planned behavior) and the realization of these action goals (“Behavior”) are considered separately (see Fig. 6).

Fig. 6
A flow diagram of the Rubicon model. The steps are as follows. 1. Evaluate. 2. Plan. 3. Act. 4. Reflect. Evaluate and reflect come under motivation. Plan and act come under volition.

The Rubicon model

The first phase (evaluation) describes the weighing of different action goals. Here the model assumes that people have more potential action goals than they can realize and therefore must weigh up which goals (a) are particularly desirable and (b) have a good chance of being achieved with a realistic use of resources. Hence, this is a primarily motivational phase. The conclusion of this evaluation phase is the formulation of a concrete action goal—the rather general desire to better protect one’s data could, for example, give rise to the action goal of switching digital communication in a private context to privacy-friendly channels wherever possible. This transition from desire to concrete action goal is referred to as “crossing the Rubicon,” in analogy to Caesar’s crossing of the Rubicon in 49 B.C., with which he instigated a civil war and after which there was literally no turning back. In everyday life, of course, crossing the Rubicon is far less dramatic; here, finality refers to the fact that by setting an action goal, users create a commitment to themselves to reach that goal.

This is followed by the planning phase, in which the weighing of action goals is complete and consideration is given to how the action goal formulated in the previous phase can best be achieved. According to the theory, this is no longer a motivational but a volitional phase. No action is taken at this stage; the users merely make resolutions to act and consider at which points in the implementation of the goal difficulties could arise and how these can best be addressed. It is assumed that users do not act immediately because they first have to wait for favorable opportunities. When potentially favorable opportunities occur (favorable compared to other past and anticipated future opportunities), action initiation occurs, drawing on the pre-determined strategies. For example, users may consider which channels or messengers they no longer want to use in the future and what to replace them with. In addition, they consider the people with whom they would like to communicate via these alternative channels and how the change of communication channel can best be implemented—for example, by selecting messengers that are available free of charge and easy to use. One potential difficulty could be that certain key communication partners may not want to switch channels voluntarily. In this case, the users would be well advised to consider in the planning phase how these communication partners can best be convinced—perhaps by providing them with a newspaper article that deals with the consequences that the exploitation of data from private communications can have for private individuals.

Once an action has been initiated, the users are in the actional phase, in which they attempt to realize the action goal by implementing the actions and strategies defined in the previous phase. Depending on the complexity of the goal and the occurrence of difficulties, it may be necessary to invest considerable effort and to resume interrupted actions several times in order to successfully achieve the goal. For example, users could fail at the action of no longer using certain messengers if they find that a communication partner with whom they would like to remain in digital contact in the future is not willing to change channels. In this case, persistent attempts at persuasion may be necessary if the goal is ultimately to be achieved. The effort that users are willing to make results from their commitment to the action goal, which in turn depends on the attractiveness and feasibility of the goal.

In the last phase, the users reflect on the extent to which they have achieved the set action goal, also taking into account the extent to which the intended positive consequences have actually occurred. In this phase, it may become apparent, for example, that despite successful achievement of the goal, not all the intended positive effects have occurred, or that additional negative effects not considered in advance have arisen. Here, motivational factors are again in the foreground. If the action goal is evaluated as achieved and the subsequent consequences as satisfactory, the action goal is mentally deactivated. If the action goal is judged as not or only insufficiently fulfilled, either the level of aspiration is lowered and the goal is then deactivated, or the action goal is maintained and new actions are planned that are to make the achievement of the action goal possible after all. In the privacy context, this reveals a difficulty: While it is comparatively easy for the users to assess whether the action goal—in our example, the switch to privacy-friendly communication channels—has been achieved, it is almost impossible to assess the associated positive consequences. To do this, users would first have to have an overview of what data is collected about them through the use of privacy-unfriendly and privacy-friendly communication channels, how this data is processed, and what tangible consequences this has for their lives. We know from research that users find it very difficult to assess the last aspect [6, 14, 16–21, 40], while at least users within the scope of the General Data Protection Regulation (GDPR) would in principle be entitled to information on the first two points—but here, too, anecdotal evidence shows that practice in many large companies is unfortunately currently far from providing users with comprehensive information on these aspects, even upon request [31, 38, 39]. Often, therefore, users at this stage are left to speculate about the positive effects of their actions, while the negative effects, for example, in the form of reduced usability or not reaching certain people via digital channels such as messengers, are clearly evident and can thus potentially lead to a change of the action goal towards less privacy-preserving behavior. Similarly, some users find that they have only been able to partially achieve their goal but, in the absence of promising alternative action strategies, find themselves unable to continue pursuing it (with the prospect of successfully achieving it) and therefore lower their aspiration level. Especially in the context of messengers, this case often occurs when users are faced with the seemingly insurmountable obstacle that important communication partners cannot be reached via alternative channels (the so-called walled garden phenomenon [32]).
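The reflection phase in particular can be summarized as a small decision rule, shown in the sketch below together with the four phases. The phase names follow Fig. 6; the transition conditions are our own simplification of the description above.

```python
# Sketch of the Rubicon model's phases and the reflection decision rule.
from enum import Enum, auto

class Phase(Enum):
    EVALUATE = auto()   # motivational: weigh desirability and feasibility
    PLAN = auto()       # volitional: strategies and anticipated obstacles
    ACT = auto()        # volitional: initiate and resume actions
    REFLECT = auto()    # motivational: goal achieved? consequences satisfactory?

def reflect(goal_achieved: bool, consequences_ok: bool,
            promising_strategies_left: bool) -> str:
    if goal_achieved and consequences_ok:
        return "deactivate goal"
    if promising_strategies_left:
        return "maintain goal, return to PLAN"
    return "lower aspiration level, then deactivate goal"

# Walled-garden case: the messenger switch only partially succeeded and
# no promising strategy remains to reach key contacts:
print(reflect(False, False, promising_strategies_left=False))
```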

With its distinction between motivational and volitional phases, the Rubicon model also provides an explanation for the privacy paradox. In addition, it provides a suitable framework for designing interventions that are intended to support users in achieving their goals, such as a more conscious approach to their digital privacy. The model does not make any concrete assumptions about the occurrence of different desires or action goals, desired consequences, successful action strategies, and potential difficulties. It is therefore not suitable for predicting or explaining privacy behaviors. In our opinion, this model is helpful in principle, but it addresses a different context of application than, for example, the theory of planned behavior.

8 Capability, Opportunity, Motivation \(\rightarrow \) Behavior (COM-B) System

The COM-B system is a behavioral system in which capability, opportunity, and motivation interact to generate a certain behavior, which in turn influences these components (see Fig. 7) [27]. For example, suppose a user wants to protect their privacy by using a more privacy-friendly channel to communicate with their friends. According to the COM-B model, the person needs the psychological and physical capability to perform the activity, which includes the required knowledge and skills. If we revisit the example from the previous section, this can mean that the person knows which privacy-friendly channels are available and how to install and use them. Furthermore, the person must be motivated. Motivation is defined as all brain processes that stimulate and control behavior, not just goals and conscious decisions; it includes habitual processes, emotional responses, and analytical decisions. Applied to our example, this can mean that if the person wants to communicate with a friend in a privacy-friendly way, they have to believe that alternative messengers protect privacy (reflective motivation) and then automatically select the privacy-friendly channel in the process of communication (automatic motivation). Last but not least, there have to be appropriate opportunities for a certain behavior to be performed. Opportunity includes all factors external to the individual that enable or trigger the behavior. For our example, this may mean that the person is directly offered a privacy-friendly channel as the first choice for communication.

Fig. 7
A block diagram of the COM-B system. Capability and opportunity lead to motivation. Capability, motivation, and opportunity lead to behavior. The behavior in turn affects capability, motivation, and opportunity.

The COM-B system, figure by Michie et al. [27] licensed under CC BY 2.0

While the COM-B model is a behavioral model, it also provides a basis for designing interventions aimed at changing behavior. According to Michie et al. [27], a particular intervention can change one or more components of the behavioral system. The causal links within the system can reduce or increase the effect of certain interventions by leading to changes elsewhere. The task is to consider what the behavior should be and what components of the behavioral system need to be changed to achieve this. Thus, the model can also serve as a basis for developing interventions to promote privacy-friendly behavior.
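A minimal sketch of the COM-B interplay is given below: behavior occurs only when capability, opportunity, and motivation are all sufficiently present, motivation is itself shaped by capability and opportunity, and performed behavior feeds back into all three components. Thresholds and update magnitudes are illustrative assumptions on our part.

```python
# Illustrative sketch of the COM-B system with its feedback loop.
from dataclasses import dataclass

@dataclass
class ComB:
    capability: float    # e.g., knows how to install a private messenger
    opportunity: float   # e.g., the private channel is offered by default
    motivation: float    # reflective beliefs plus automatic habit

    def behave(self, threshold: float = 0.5) -> bool:
        # Capability and opportunity also feed motivation.
        self.motivation = min(1.0, self.motivation
                              + 0.1 * self.capability
                              + 0.1 * self.opportunity)
        performed = min(self.capability, self.opportunity,
                        self.motivation) > threshold
        if performed:
            # Feedback: performing the behavior strengthens all components
            # (e.g., practice builds skill, habit, and future occasions).
            self.capability = min(1.0, self.capability + 0.05)
            self.opportunity = min(1.0, self.opportunity + 0.05)
            self.motivation = min(1.0, self.motivation + 0.05)
        return performed

user = ComB(capability=0.7, opportunity=0.6, motivation=0.5)
print(user.behave())  # True: all components above the threshold
```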

9 Health Action Process Approach

Originating from health research, the health action process approach (HAPA) [34, 35] describes the factors that bring about a change toward healthier behavior (either by taking up health-promoting activities such as exercise or by quitting unhealthy activities such as smoking). Like the Rubicon model, HAPA distinguishes between a motivational phase and a volitional phase, with intention forming the transition between the two (see Fig. 8).

Fig. 8
An illustration of the HAPA model. Action self-efficacy, outcome expectancies, and risk perception lead to intention, which leads to action and coping planning, which in turn leads to action initiation and maintenance. Coping and recovery self-efficacy as well as self-monitoring also contribute to action initiation and maintenance.

The HAPA model, figure taken from Schwarzer [35] licensed under CC BY-NC-ND 4.0

Motivation may start with a perceived risk (in the privacy context, this could come, e.g., from a conversation with privacy-aware individuals or from hearing media reports about adverse consequences of data disclosure). In the further course, however, perceived risk plays a rather subordinate role and thus serves primarily as a trigger for building motivation. The motivational strength is mainly influenced by the other two variables: outcome expectancies (which, similar to the Rubicon model, amount to a weighing of the potential advantages and disadvantages of a behavior) and self-efficacy (i.e., the extent to which the users are convinced that they can actually perform the behavior). Once the intention for a behavior has been formed, a planning phase follows in this model as well. However, HAPA differentiates between figuring out strategies to perform the actual planned behavior (to revisit the previous example, this could be, e.g., installing and using privacy-preserving messengers and uninstalling privacy-threatening ones), called action planning, and figuring out replacement strategies or strategies to deal with potential obstacles (e.g., “If my mother is not willing to switch to another messenger, I will communicate with her by phone call and email in the future instead”), called coping planning. Self-efficacy again plays a crucial role at this point in terms of the extent to which the behavior can be maintained and potential difficulties dealt with; here, coping self-efficacy is assumed to be distinct from action self-efficacy, i.e., users who have high action self-efficacy do not necessarily have high coping self-efficacy. Conceptually closely related is recovery self-efficacy, which describes the extent to which users can recover from possible setbacks and resume the desired behavior. Also important for maintaining the desired behavior is action control, which is usually achieved via self-monitoring—i.e., the users monitor their own behavior and check whether it is consistent with the targeted behavior.

HAPA combines the distinction between motivational and volitional phases with explanatory factors, providing both an approach to explaining behavior (and the intention-behavior gap) and a theoretical framework for designing interventions. For the latter, the model further distinguishes between individuals who are in the motivational phase (non-intenders), those who have already formed an intention but are still engaged in planning (intenders), and those who are already acting (actors). For non-intenders, successful interventions should focus on risk and resource communication, while intenders are best served by interventions designed to help them plan specific strategies (for primary behavior and dealing with obstacles), and actors benefit most from interventions designed to protect them from potential relapse, for example, by avoiding risky situations.
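This stage-matching logic can be expressed compactly, as in the sketch below. The stage labels follow the text; the mapping of stages to intervention types paraphrases the recommendations above, and the function signature is our own construction.

```python
# Sketch of HAPA's stage-matched intervention logic.

def hapa_stage(has_intention: bool, is_acting: bool) -> str:
    """Classify a user into HAPA's three stages."""
    if is_acting:
        return "actor"
    return "intender" if has_intention else "non-intender"

INTERVENTIONS = {
    "non-intender": "risk and resource communication",
    "intender": "support for action planning and coping planning",
    "actor": "relapse protection, e.g., avoiding risky situations",
}

# A user who intends to switch messengers but has not started acting:
print(INTERVENTIONS[hapa_stage(has_intention=True, is_acting=False)])
```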

10 Conclusion

As of yet, there is no theory or behavioral model that includes all factors contributing to user privacy behavior and that can be used to perfectly predict it. Still, aligning one’s research with theoretical behavior models adds validity and can inspire novel avenues for future work. Particularly models that originate from contexts other than privacy, e.g., health research, can provide valuable input and trigger new perspectives. The transactional model of stress and coping, for example, sheds light on the fact that users who are constantly overwhelmed by the management of their digital privacy can experience stress, which might lead to severe mental and/or physical health problems; a fact that should receive more attention in the public debates around the value of data protection. Further, classic and concise psychological theories such as the theory of planned behavior can provide an intuitively understandable framework for conducting empirical studies. Models focusing on behavior change, such as the HAPA model, can contribute to our understanding of when, why, and how users can alter their privacy behavior towards a more deliberate handling of their private data, and thus form a solid basis for developing privacy interventions.