
1 Introduction

Many goals in life, such as losing weight, passing an exam or paying off a loan, require long-term planning. But while some people stick to their plans, others lack self-control; they eat unhealthy food, delay their studies and take out new loans. In behavioral economics the tendency to change a plan for no apparent reason is known as time-inconsistent behavior. This raises two questions: what causes these inconsistencies, and why do they affect some people more than others? A common explanation is that people make present biased decisions, i.e., they assign disproportionately greater value to the present than to the future. In this simplifying model a person’s behavior is the mere result of her present bias and the setting in which she is placed. However, the interplay between these two factors is intricate and sometimes counter-intuitive, as the following example demonstrates:

Consider two runners Alice and Bob who have two weeks to prepare for an important race. Each week they must choose between two types of workout. Type A always incurs an effort of 1, whereas type B incurs an effort of 3 in the first and 9 in the second week. Since A offers less preparation than B, Alice and Bob’s effort in the final race is 13 if they consistently choose A and 1 if they consistently choose B. Furthermore, A and B are incompatible in the sense that switching between the two will result in an effort of 16 in the final race. Figure 1 models this setting as a directed acyclic graph G with terminal nodes s and t. The intermediate nodes \(v_X\) and \(v_{XY}\) represent a person’s state after completing the workouts \(X,Y \in \{A,B\}\). To move forward with the training, Alice and Bob must perform the tasks associated with the edges of G, i.e., complete workouts and run the race. Looking at G it becomes clear that two consecutive workouts of type B are the most efficient routine in the long run. However, this is not necessarily the routine a present biased person will choose.

For instance, assume that Alice and Bob discount future costs by a factor of \(a = 1/2 - \varepsilon \) and \(b = 1/2 + \varepsilon \) respectively. We call a and b their present bias. At the beginning of the first week Alice and Bob compare different workout routines. From Alice’s perspective two workouts of type A are strictly preferable to two workouts of type B, as she anticipates an effort of \(1 + a(1 + 13) = 8 - 14\varepsilon \) for the former and \(3 + a(9 + 1) = 8 - 10\varepsilon \) for the latter. A similar calculation for Bob shows that he prefers two workouts of type B. Considering that neither Alice nor Bob finds a mix of A and B particularly interesting at this point, we conclude that Alice chooses A in the first week and Bob chooses B. However, come next week, Bob expects an effort of \(1 + 16b = 9 + 16\varepsilon \) for A and \(9 + b = 19/2 + \varepsilon \) for B. Assuming \(\varepsilon \) is small enough, A suddenly becomes Bob’s preferred option and he switches routines. Alice, on the other hand, has no reason to change her mind and sticks to A. As a result she pays much less than Bob during practice and in the final race. This is remarkable considering that her present bias is only marginally different from Bob’s. Moreover, it seems surprising that only Bob behaves inconsistently, although he is less biased than Alice.

1.1 Related Work

Traditional economics and game theory are based on the assumption that people maximize their utility in a rational way. But despite their prevalence, these assumptions disregard psychological aspects of human decision making observed in empirical and experimental research [5]. For instance, time-inconsistent behavior such as procrastination seems paradoxical in the light of traditional economics. Nevertheless, it can be explained readily by a tendency to overestimate immediate utility in long-term planning, see e.g. [13]. By studying such cognitive biases, behavioral economics tries to obtain more realistic economic models.

A significant amount of research in this field has been devoted to temporal discounting in general and quasi-hyperbolic discounting in particular, see [6] for a survey. The quasi-hyperbolic discounting model proposed by Laibson [11] is characterized by two parameters: the present bias \(\beta \in (0,1]\) and the exponential discount rate \(\delta \in (0,1]\). People who plan according to this model have an accurate perception of the present, but scale down any costs and rewards realized \(t \ge 1\) time units in the future by a factor of \(\beta \delta ^t\). To keep our work clearly delineated in scope, we adopt Akerlof’s model of quasi-hyperbolic discounting [1] and make the following two assumptions: First, we focus on the present bias \(\beta \) and set the exponential discount rate to \(\delta = 1\). Secondly, we assume people to be naive in the sense that they are unaware of their present bias and only optimize their current perceived utility when making a decision. Note that Alice and Bob from the previous example behave like agents in Akerlof’s model for a present bias of \(\beta = 1/2 - \varepsilon \) and \(\beta = 1/2 + \varepsilon \) respectively.

Until recently the economic literature lacked a unifying and expressive framework for analyzing time-inconsistent behavior in complex social and economic settings. Kleinberg and Oren closed this gap by modeling the behavior of naively present biased individuals as a planning problem in task graphs like the one depicted in Fig. 1 [8]. We introduce this framework formally in Sect. 2. As a result of Kleinberg and Oren’s work, an active line of research at the intersection of computer science and behavioral economics has emerged. For instance, the graphical model has been used to systematically analyze different types of quasi-hyperbolic discounting agents such as sophisticated agents who are fully or partially aware of their present bias [9] and agents whose present bias varies randomly over time [7]. Furthermore, the graphical model was used to shed light on the interplay between temporal biases and other types of cognitive biases [10].

The graphical model is of particular interest to us as it provides a natural framework for a design problem frequently encountered in behavioral economics. Given a certain social or economic setting, the problem is to improve a time-inconsistent person’s performance via various sorts of incentives, such as monetary rewards, deadlines or penalty fees, see e.g. [12]. Using the graphical model, Kleinberg and Oren demonstrate how a strategic choice reduction can incentivize people to reach predefined goals [8]. To implement their incentives, they simply remove the corresponding edges from the task graph. However, there is a computational drawback to this approach. As we have shown in previous work, an optimal set of edges to remove from a task graph with n nodes is NP-hard to approximate within a factor less than \(\sqrt{n}/3\) [2]. A more general form of incentive that avoids these harsh complexity-theoretic limitations are penalty fees. In the graphical model penalty fees are at least as powerful as choice reduction and admit a polynomial time 2-approximation [3].

Fig. 1. Task graph of the running scenario

1.2 Incentive Design for an Uncertain Present Bias

Frederick, Loewenstein and O’Donoghue have surveyed several attempts to estimate people’s temporal discount functions [6]. But as estimates differ widely across studies and individuals, the difficulty of predicting a person’s temporal discount function becomes apparent. Clearly, this poses a serious challenge for the design of reliable incentives. After all, Alice and Bob’s scenario demonstrates how arbitrarily small changes in the present bias can cause significant changes in a person’s behavior. In this work we address the effects of incomplete information about a person’s present bias under two different notions of uncertainty.

In Sect. 3 we consider naive individuals whose exponential discount rate is \(\delta = 1\), but whose present bias \(\beta \) is unknown. The only prior information we have about \(\beta \) is its membership in some larger set B. Our goal is to construct incentives that are robust with respect to the uncertainty induced by B. More precisely, we are interested in incentives that work well for any present bias contained in B. An alternative perspective is that we try to construct incentives which are not limited to a single person, but serve an entire population of individuals with different present bias values. A simple instance of this problem in which a single task must be partitioned and stretched over a longer period of time has been studied by Kleinberg and Oren [8]. But like most research on incentivizing heterogeneous populations, see e.g. [12], Kleinberg and Oren’s results are restricted to a very specific setting. They themselves suggest the design of more general incentives as a major research direction for the graphical framework [8].

Using penalty fees as our incentive of choice and a fixed reward to keep people motivated, we present the first results in this area. Our contribution is twofold. On the one hand, we try to quantify the conceptual loss of efficiency caused by incomplete knowledge of \(\beta \). For this purpose we introduce a novel concept called price of uncertainty, which denotes the smallest ratio between the reward required by an incentive that accommodates all \(\beta \in B\) and the reward required by an incentive designed for a specific \(\beta \in B\). We present an elegant algorithmic argument to prove that the price of uncertainty is at most 2. Remarkably, this bound holds true independent of the underlying graph G and present bias set B. To complement our result, we construct a family of graphs G and present bias sets B for which the price of uncertainty converges to a value strictly greater than 1. On the other hand, we consider the computational problem of constructing penalty fees that work for all \(\beta \in B\), but require as little reward as possible. Drawing on the same algorithmic ideas we used to bound the price of uncertainty yields a polynomial time 2-approximation. Furthermore, we present a non-trivial proof to show that the decision version of the problem is contained in NP. Since all hardness results of [3] also apply under uncertainty, we know that there is no 1.08192-approximation unless \(\mathrm{P = NP}\).

1.3 Incentive Design for a Variable Present Bias

In Sect. 4 we generalize our notion of uncertainty to individuals whose present bias \(\beta \) may change arbitrarily over time within the set B. This model is inspired by work of Gravin et al. [7], except that we do not rely on the assumption that \(\beta \) is drawn independently from a fixed probability distribution. Instead, our goal is to design penalty fees that work well for all possible sequences of \(\beta \) over time. We believe this to be an interesting extension of the fixed parameter case as the variability of \(\beta \) may capture changes in a person’s temporal discount function caused by unforeseen cognitive biases different from her present bias. As a result we obtain more robust penalty fees.

Again, our contribution is twofold. On the one hand, we introduce the price of variability to quantify the conceptual loss of efficiency caused by unpredictable changes in \(\beta \). Similar to the price of uncertainty, we define this quantity to be the smallest ratio between the reward required by an incentive that accommodates all possible changes of \(\beta \in B\) over time and the reward required by an incentive designed for a specific and fixed \(\beta \in B\). However, unlike the price of uncertainty, the price of variability has no constant upper bound. Instead, the ratio seems closely related to the range \(\tau = \max {B}/\min {B}\) of the set B. By generalizing our algorithm from Sect. 3 we obtain an upper bound of \(1 + \tau \) for the price of variability. To complement this result, we construct a family of graphs G for which the price of variability converges to \(\tau /2\). On the other hand, we consider the computational aspects of constructing penalty fees for a variable \(\beta \). As a result of the unbounded price of variability, we are not able to come up with a polynomial time constant-factor approximation. Instead, we obtain a \({(1 + \tau )}\)-approximation. However, by using a sophisticated reduction from VECTOR SCHEDULING, we prove that no efficient constant-factor approximation is possible unless \(\mathrm{NP = ZPP}\). We conclude our work by studying a curious special case of variability in which individuals may temporarily lose their present bias. For this scenario, which is characterized by the assumption that \(1 \in B\), optimal penalty fees can be computed in polynomial time.

2 The Model

In the following we introduce Kleinberg and Oren’s graphical framework [8]. Let \(G =(V,E)\) be a directed acyclic graph with n nodes that models some long-term project. The start and end states are denoted by the terminal nodes s and t. Furthermore, each edge e of G corresponds to a specific task whose incurred effort is captured by a non-negative cost c(e). To finish the project, a present biased agent must sequentially complete all tasks along a path from s to t. However, instead of following a fixed path, the agent constructs her path dynamically according to the following simple procedure:

When located at any node v different from t, the agent tries to evaluate the minimum cost she needs to pay in order to reach t. For this purpose she considers all outgoing edges (v, w) of her current position v. Because the tasks associated with these edges must be performed immediately, the agent assesses their cost correctly. In contrast, all future tasks, i.e., tasks on a path from v to t not incident to v, are discounted by her present bias of \(\beta \in (0,1]\). As a result, we define her perceived cost for taking (v, w) to be \(d_{\beta }(v,w) = c(v,w) + \beta d(w)\), where d(w) denotes the cost of a cheapest path from w to t. Furthermore, we define \(d_{\beta }(v) = \min \{c(v,w) + \beta d(w) \mid (v,w) \in E\}\) to be the agent’s minimum perceived cost at v. Since the agent is oblivious to her own present bias, she only traverses edges (v, w) for which \(d_{\beta }(v,w) = d_{\beta }(v)\). Ties are broken arbitrarily. Once the agent reaches the next node, she reiterates this process.

To motivate the agent, a non-negative reward r is placed at t. Because the agent must reach t before she can collect r, her perceived reward for reaching t is \(\beta r\) at each node different from t. When located at \(v \ne t\), the agent is only motivated to proceed if \({d_{\beta }(v) \le \beta r}\). Otherwise, if \(d_{\beta }(v) > \beta r\), she quits. We say that G is motivating if she does not quit while constructing her path from s to t. Note that sometimes the agent can construct more than one path from s to t due to ties in the perceived cost of incident edges. In this case, G is considered motivating if she does not quit on any such path.
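This traversal rule is straightforward to simulate. The following minimal Python sketch is our own illustration, not part of the formal framework; the function name agent_walk and the edge-dictionary representation are our choices. It computes the cheapest-path costs d by backward relaxation and then walks the agent from s to t, breaking ties by the first minimizer:

```python
from math import inf

def agent_walk(edges, beta, r, s="s", t="t"):
    # edges: dict mapping (v, w) -> cost c(v, w) of a DAG in which every
    # node lies on an s-t path. Returns (visited nodes, reached t?).
    nodes = {v for e in edges for v in e}

    # d(w): cost of a cheapest path from w to t (backward relaxation;
    # |V| rounds of Bellman-Ford suffice on a DAG).
    d = {v: inf for v in nodes}
    d[t] = 0
    for _ in nodes:
        for (v, w), c in edges.items():
            d[v] = min(d[v], c + d[w])

    walk, v = [s], s
    while v != t:
        # Perceived cost d_beta(v, w) = c(v, w) + beta * d(w) per out-edge;
        # ties are broken arbitrarily (here: first minimizer).
        out = {w: c + beta * d[w] for (u, w), c in edges.items() if u == v}
        v = min(out, key=out.get)
        if out[v] > beta * r:  # d_beta(v) > beta * r: the agent quits
            return walk, False
        walk.append(v)
    return walk, True
```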

For the sake of a clear presentation, we will assume throughout this work that each node of G is located on a path from s to t. This assumption is sensible because the agent can only visit nodes reachable from s. Furthermore, she is not willing to enter nodes that do not lead to the reward at t. Consequently, only nodes that are on a path from s to t are relevant to her behavior. All nodes not satisfying this property can be removed from G in a simple preprocessing step.

2.1 Alice and Bob’s Scenario

To illustrate the model, we revisit Alice and Bob’s scenario. The task graph G is depicted in Fig. 1. Remember that \(a = 1/2 - \varepsilon \) and \(b = 1/2 + \varepsilon \) denote Alice and Bob’s respective present bias. For convenience let \(0 < \varepsilon \le 1/54\). Furthermore, assume that a reward of \(r = 27\) is awarded upon reaching t.

We proceed to analyze Alice and Bob’s walk through G. At their initial position s they must decide whether they move to \(v_A\) or \(v_B\). For this purpose they try to find a path that minimizes the perceived cost. As the more present biased person, Alice’s favorite path is \(s,v_{A},v_{AA},t\) with a perceived cost of \(d_a(s) = d_a(s,v_A) = 8 - 14\varepsilon \). By choice of \(\varepsilon \) this cost is covered by her perceived reward \({ar = 27/2 - 27\varepsilon }\). Consequently, she is motivated to traverse the first edge and moves to \(v_A\). A similar argument shows that Bob moves to \(v_B\). Once they reach their new nodes, Alice and Bob reevaluate their plans. From Alice’s perspective \(v_{A},v_{AA},t\) is still the cheapest path to t. Bob, however, suddenly prefers \(v_{B},v_{AB},t\) to his original plan. Nevertheless, both perceived costs remain covered by the respective perceived rewards, and they move to \(v_{AA}\) and \(v_{AB}\) respectively. At this point the only option is to take the direct edge to t. For Alice the perceived cost at \(v_{AA}\) is sufficiently small to let her reach t. In contrast, Bob’s perceived cost of \({d_b(v_{AB}) = 16}\) exceeds his perceived reward of \(br = 27/2 + 27\varepsilon \) and he quits.
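For illustration, this walk can be reproduced with the agent_walk sketch from above; the node names are our shorthand for Fig. 1, and \(\varepsilon = 1/100\) satisfies the constraint \(0 < \varepsilon \le 1/54\):

```python
eps = 1 / 100  # any 0 < eps <= 1/54 works
edges = {
    ("s", "vA"): 1, ("s", "vB"): 3,                       # week one
    ("vA", "vAA"): 1, ("vA", "vAB"): 9,                   # week two
    ("vB", "vAB"): 1, ("vB", "vBB"): 9,
    ("vAA", "t"): 13, ("vAB", "t"): 16, ("vBB", "t"): 1,  # race
}
print(agent_walk(edges, beta=0.5 - eps, r=27))  # (['s', 'vA', 'vAA', 't'], True)
print(agent_walk(edges, beta=0.5 + eps, r=27))  # (['s', 'vB', 'vAB'], False)
```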

2.2 Cost Configurations

Bob’s behavior in the previous example demonstrates how present biased decisions can deter people from reaching predefined goals. To ensure an agent’s success, it is therefore sometimes necessary to implement external incentives such as penalty fees. In the graphical model, penalty fees allow us to arbitrarily raise the cost of edges in G. More formally, let \(\tilde{c}\) be a so-called cost configuration, which assigns a non-negative extra cost \(\tilde{c}(e)\) to all edges e of G. The result is a new task graph \(G_{\tilde{c}}\), whose edges e have a cost of \(c(e) + \tilde{c}(e)\). A present biased agent navigates through \(G_{\tilde{c}}\) according to the same rules that apply in G. We say that \(\tilde{c}\) is motivating if and only if \(G_{\tilde{c}}\) is. To avoid ambiguity we annotate our notation whenever we consider a specific \(\tilde{c}\), e.g., we write \(d_{\tilde{c}}\) and \(d_{\beta ,\tilde{c}}\) instead of d and \(d_{\beta }\).

We conclude this section with a brief demonstration of the positive effects penalty fees can have in Alice and Bob’s scenario. Let \(\tilde{c}\) be a cost configuration that assigns an extra cost of \(\tilde{c}(v_{B},v_{AB}) = 1/2\) to \((v_{B},v_{AB})\) and \(\tilde{c}(e) = 0\) to all other edges \(e \ne (v_{B},v_{AB})\). Note that G and \(G_{\tilde{c}}\) are identical task graphs except for the cost of \((v_{B},v_{AB})\). Because Alice does not plan to take \((v_{B},v_{AB})\) on her way through G and has even less reason to do so in \(G_{\tilde{c}}\), we know that \(\tilde{c}\) does not affect her behavior. For similar reasons, \(\tilde{c}\) does not affect Bob’s choice to move to \(v_{B}\). However, once Bob has reached \(v_{B}\) his perceived cost of the path \(v_{B},v_{AB},t\) is \(d_{b,\tilde{c}}(v_B, v_{AB}) = 19/2 + 16\varepsilon \), whereas his perceived cost of \(v_{B},v_{BB},t\) is only \(d_{b,\tilde{c}}(v_B, v_{BB}) = 19/2 + \varepsilon \). Since the latter option appears to be cheaper and is covered by his perceived reward, Bob proceeds to \(v_{BB}\) and then onward to t. As a result \(\tilde{c}\) yields a task graph that is motivating for Alice and Bob alike. This is a considerable improvement over the original task graph.
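In terms of the simulation sketch from above, applying \(\tilde{c}\) just means adding the extra cost to the affected edge before rerunning both agents:

```python
# Penalty fee of 1/2 on (vB, vAB); all other extra costs are zero.
fee = {("vB", "vAB"): 0.5}
edges_fee = {e: c + fee.get(e, 0) for e, c in edges.items()}
print(agent_walk(edges_fee, beta=0.5 - eps, r=27))  # Alice: (['s','vA','vAA','t'], True)
print(agent_walk(edges_fee, beta=0.5 + eps, r=27))  # Bob:   (['s','vB','vBB','t'], True)
```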

3 Uncertain Present Bias

In this section we consider agents whose present bias \(\beta \) is uncertain in the sense that our only information about \(\beta \) is its membership in some set \(B \subset (0,1]\). We call B the present bias set. For technical reasons we assume that B can be expressed as the union of constantly many closed subintervals of (0, 1]. This way the intersection of B with a closed interval is either empty or contains an efficiently computable minimal and maximal element. To measure the degree of uncertainty induced by B, we define the range of B as \(\tau = \max {B}/\min {B}\).

3.1 A Decision Problem

Our goal is to construct a cost configuration \(\tilde{c}\) that is motivating for all \(\beta \in B\), but requires as little reward as possible. To assess the complexity of this task, let UNCERTAIN PRESENT BIAS (UPB) be the following decision problem:

Definition 1

(UPB). Given a task graph G, present bias set B and reward \(r > 0\), decide whether a cost configuration \(\tilde{c}\) motivating for all \(\beta \in B\) exists.

If \(\tau = 1\), i.e., B only contains a single present bias parameter, UPB is identical to the decision problem MOTIVATING COST CONFIGURATION (MCC) studied in [3]. Since MCC is NP-complete, UPB must be NP-hard. But unlike MCC it is not immediately clear if UPB is also contained in NP. The reason is that proving \(\mathrm{MCC} \in \mathrm{NP}\) only requires verifying whether a given cost configuration is motivating for a single value of \(\beta \); a property that can be checked in polynomial time [2]. However, proving \(\mathrm{UPB} \in \mathrm{NP}\) requires verifying whether a given cost configuration is motivating for all \(\beta \in B\). Taking into account that B may very well be an infinite set, it becomes clear that we cannot check all values of \(\beta \) individually. Interestingly, we do not have to; checking a finite subset \(B' \subseteq B\) of size \(\mathcal {O}(n^2)\) turns out to be sufficient.

Proposition 1

For any task graph G, reward r and present bias set B a finite subset \(B' \subseteq B\) of size \(\mathcal {O}(n^2)\) exists such that G is motivating for all \(\beta \in B\) if it is motivating for all \(\beta \in B'\).

The above proposition is related to a theorem by Kleinberg and Oren, which bounds the number of paths an agent takes as \(\beta \) varies over (0, 1] by \(\mathcal {O}(n^2)\) [8]. Kleinberg and Oren’s argument not only establishes the existence of \(B'\), but also yields a polynomial time algorithm to construct \(B'\), which in turn implies that \(\mathrm{UPB} \in \mathrm{NP}\). Due to space constraints, we refer to the full version of this paper for a corresponding proof of Proposition 1 as well as all other omitted proofs.
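While the actual construction of \(B'\) is deferred to the full version, the source of finiteness is easy to convey: the perceived cost of an edge is linear in \(\beta \), so the agent's preference between two out-edges of a node can flip at most once, at an efficiently computable breakpoint, and between consecutive breakpoints her entire walk is fixed. The following sketch is our illustration only, not Kleinberg and Oren's construction; it collects these candidate values:

```python
from math import inf

def preference_breakpoints(edges, t="t"):
    # Cheapest-path costs d(w) to t, as in agent_walk.
    nodes = {v for e in edges for v in e}
    d = {v: inf for v in nodes}
    d[t] = 0
    for _ in nodes:
        for (v, w), c in edges.items():
            d[v] = min(d[v], c + d[w])

    # d_beta(v, w) = c(v, w) + beta * d(w) is linear in beta, so the
    # perceived costs of out-edges (v, w) and (v, x) cross at most once,
    # at beta* = (c(v, x) - c(v, w)) / (d(w) - d(x)).
    pts = set()
    for (v, w), cw in edges.items():
        for (u, x), cx in edges.items():
            if u == v and w != x and d[w] != d[x]:
                beta = (cx - cw) / (d[w] - d[x])
                if 0 < beta <= 1:
                    pts.add(beta)
    return sorted(pts)
```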

Corollary 1

UPB is NP-complete.

3.2 The Price of Uncertainty

Since UPB is NP-complete, it makes sense to consider the corresponding optimization problem UPB-OPT. For this purpose, let r(G, B) be the infimum over all rewards admitting a cost configuration motivating for all \(\beta \in B\) and define:

Definition 2

(UPB-OPT). Given a task graph G and present bias set B, determine r(G, B).

Clearly, UPB-OPT must be at least as hard as the optimization version of MCC. Consequently, we know that UPB-OPT has no PTAS and is NP-hard to approximate within a ratio less than 1.08192 [3]. But does the transition from a certain to an uncertain \(\beta \) reduce approximability?

Setting complexity theoretic considerations aside for a moment, an even more general question arises: How does the transition from a certain to an uncertain \(\beta \) affect the efficiency of cost configurations assuming unlimited computational resources? To quantify this conceptual difference in efficiency, we look at the smallest ratio between optimal cost configurations motivating for all \(\beta \in B\) and optimal cost configurations motivating for a specific \(\beta \in B\). We call this ratio the price of uncertainty.

Definition 3

(Price of Uncertainty). Given a task graph G and a present bias set B, the price of uncertainty is defined as \(r(G,B)/\sup \{r(G,\{\beta \}) \mid \beta \in B\}\).

Let us illustrate the price of uncertainty by going back to Alice and Bob’s scenario and assume that \(B = \{a,b\}\) with \(a = 1/2 - \varepsilon \) and \(b = 1/2 + \varepsilon \). In other words, the agent either behaves like Alice or she behaves like Bob, but we do not know which. It is easy to see that in either case the agent minimizes her maximum perceived cost on the way from s to t by taking the path \(P = s,v_B,v_{BB},t\). This minmax cost, which is either \(d_a(v_B, v_{BB}) = 19/2 - \varepsilon \) or \(d_b(v_B, v_{BB}) = 19/2 + \varepsilon \), provides two lower bounds for the necessary reward when divided by the respective present bias. More formally, it holds true that \({r(G,\{a\}) \ge (19/2 - \varepsilon )/(1/2 - \varepsilon )}\) and \({r(G,\{b\}) \ge (19/2 + \varepsilon )/(1/2 + \varepsilon )}\). However, as we have seen in Sect. 2, neither Alice nor Bob is willing to follow P without external incentives. To discourage the agent from leaving P, we assign an extra cost of \(\tilde{c}(s,v_A) = 5\varepsilon \) to \((s,v_A)\), \(\tilde{c}(v_B,v_{AB}) = 1/2 + 16\varepsilon \) to \((v_B,v_{AB})\) and \(\tilde{c}(e) = 0\) otherwise. This extra cost does not affect the agent’s maximum perceived cost along P, which she still experiences at \((v_B,v_{BB})\). As a result, our bounds for \(r(G,\{a\})\) and \(r(G,\{b\})\) are tight and we get \({\sup \{r(G,\{\beta \}) \mid \beta \in B\} = r(G,\{a\})}\). Moreover, because we have used the same cost configuration \(\tilde{c}\) to derive \(r(G,\{a\})\) and \(r(G,\{b\})\), it must hold true that \(r(G,B) = \sup \{r(G,\{\beta \}) \mid \beta \in B\}\), implying that the price of uncertainty in Alice and Bob’s scenario is 1.
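These tightness claims are easy to check numerically. Since the agent's walk does not depend on r (the reward enters only the quitting condition), the smallest motivating reward for a fixed \(\beta \) and a fixed cost configuration is the maximum of \(d_{\beta }(v)/\beta \) over the visited nodes. A sketch on top of agent_walk and the edge dictionary from Sect. 2, ignoring ties (which do not occur here):

```python
from math import inf

def required_reward(edges, beta, s="s", t="t"):
    # Smallest r that keeps the (first-minimizer) agent motivated:
    # max over visited nodes v != t of d_beta(v) / beta.
    walk, _ = agent_walk(edges, beta, r=inf)  # with r = inf she never quits
    nodes = {v for e in edges for v in e}
    d = {v: inf for v in nodes}
    d[t] = 0
    for _ in nodes:
        for (v, w), c in edges.items():
            d[v] = min(d[v], c + d[w])
    return max(min(c + beta * d[w] for (u, w), c in edges.items() if u == v) / beta
               for v in walk[:-1])

tilde = {("s", "vA"): 5 * eps, ("vB", "vAB"): 0.5 + 16 * eps}
edges_u = {e: c + tilde.get(e, 0) for e, c in edges.items()}
a, b = 0.5 - eps, 0.5 + eps
print(required_reward(edges_u, a), (19/2 - eps) / (1/2 - eps))  # both bounds
print(required_reward(edges_u, b), (19/2 + eps) / (1/2 + eps))  # are attained
```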

Algorithm 1. UncertainPresentBiasApprox

3.3 Bounding the Price of Uncertainty

As Alice and Bob’s scenario demonstrates, cost configurations designed for an uncertain \(\beta \) are not necessarily less efficient than those designed for a specific \(\beta \). Therefore one might wonder whether scenarios exist in which a real loss of efficiency is bound to occur, i.e., can the price of uncertainty be greater than 1? The following proposition shows that such scenarios indeed exist.

Proposition 2

There exists a family of task graphs and present bias sets for which the price of uncertainty converges to 1.1.

As the price of uncertainty can be strictly greater than 1, the question of an upper bound arises. Ideally, we would like to design a cost configuration \(\tilde{c}\) motivating for all \(\beta \in B\) assuming the reward is set to \(\varrho r(G,\{b\})\) for some constant factor \(\varrho > 1\) and \(b = \min B\). Clearly, the existence of such a \(\tilde{c}\) would imply a constant bound of \(\varrho \) on the price of uncertainty independent of G and B. Using a generalized version of the approximation algorithm we proposed in [3], it is indeed possible to construct a \(\tilde{c}\) with the desired property for \(\varrho = 2\).

The main idea of UncertainPresentBiasApprox is simple: First, the algorithm computes a value \(\alpha \) such that \(\alpha /b\) is a lower bound on the reward necessary for agents with present bias b, i.e., \(r(G,\{b\}) \ge \alpha /b\). In particular, this bound implies \(\sup \{r(G,\{\beta \}) \mid \beta \in B\} \ge \alpha /b\). Next the algorithm constructs a \(\tilde{c}\) such that a reward of \(2\alpha /b\) is sufficiently motivating for all \(\beta \in B\), i.e., \(r(G,B) \le 2\alpha /b\). As a result the price of uncertainty can be at most 2. In the following we try to convey the intuition behind the algorithm in more detail.

We begin with the computation of \(\alpha \). For this purpose let P be a path minimizing the maximum cost an agent with present bias b perceives on her way from s to t. We call P a minmax path and define \(\alpha = \max \{d_{b}(e) \mid e \in P\}\) to be the maximum perceived edge cost of P. Since cost configurations cannot decrease edge cost, every s-t path in any \(G_{\tilde{c}}\) contains an edge of perceived cost at least \(\alpha \), which an agent with present bias b only traverses if \(br \ge \alpha \). Hence \(\alpha /b\) is a valid lower bound on the reward required for the present bias b, i.e., \(r(G,\{b\}) \ge \alpha /b\).

We proceed with \(\tilde{c}\). The goal is to assign extra cost in such a way that any agent with a present bias \(\beta \in B\) traverses only two kinds of edges. The first kind of edges are those on P. It is instructive to note that each such edge \((v,w) \in P\) is motivating for a reward of \(\alpha /b\) if \(\beta \ge b\). The reason is that

$$\begin{aligned} d_{\beta }(v,w) = \beta \Bigl (\frac{c(v,w)}{\beta } + d(w)\Bigr ) \le \beta \Bigl (\frac{c(v,w)}{b} + d(w)\Bigr ) = \beta \,\frac{d_{b}(v,w)}{b} \le \beta \,\frac{\alpha }{b}. \end{aligned}$$

In particular, P is motivating for each present bias \(\beta \in B\). The second kind of edges are on cheapest paths to t. To identify these edges, the algorithm assigns a designated successor \(\varsigma (v)\) to each node \(v \in V \setminus \{t\}\) such that \((v,\varsigma (v))\) is the initial edge of a cheapest path from v to t. Since we assume t to be reachable from all other nodes of G, at least one suitable successor must exist. By definition of \(\varsigma \), we know that \(P' = v,\varsigma (v),\varsigma (\varsigma (v)),\ldots ,t\) is a cheapest path from v to t. We call \(P'\) the \(\varsigma \)-path of v and \(T = \{(v,\varsigma (v)) \mid v \in V \setminus \{t\}\}\) a cheapest path tree.

Remember that we try to keep agents on the edges of P and T. For this purpose, we assign an extra cost of \(\tilde{c}(e) = 2\alpha /b + 1\) to all other edges. This raises their perceived cost to at least \(2\alpha /b + 1\), a price no agent is willing to pay for a perceived reward of \(2\beta \alpha /b\). However, since we have not assigned any extra cost to T so far, the perceived cost of edges in P and T is unaffected by the current \(\tilde{c}\). In particular, all edges of P are still motivating for a reward of \(\alpha /b\) and any present bias \(\beta \in B\). To keep agents from entering costly \(\varsigma \)-paths \(P' = v,\varsigma (v),\varsigma (\varsigma (v)),\ldots ,t\), we assign an extra cost to the out-edges \((v,\varsigma (v))\) of P, i.e., \(v \in P\) but \(\varsigma (v) \notin P\). The extra cost \(\tilde{c}(v,\varsigma (v))\) is chosen to match the cost of a most expensive edge on \(P'\) between v and the next intersection of \(P'\) and P. It is easy to see that the resulting \(\tilde{c}\) can no more than double the perceived cost of any edge in P, see the proof of Theorem 1 for a precise argument. Furthermore, the perceived cost of any out-edge \((v,\varsigma (v))\) of P is either high enough to keep agents on P or they do not encounter edges exceeding the perceived cost of \((v,\varsigma (v))\) until they reenter P. We conclude that a reward of \(2\alpha /b\) is sufficiently motivating, leading us to one of the central results of our work.
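For concreteness, the following sketch condenses these steps into code. It is our reconstruction from the prose, not a transcript of Algorithm 1; in particular, ties are broken arbitrarily and we read the matched "cost" on a \(\varsigma \)-path as the perceived cost \(d_b\):

```python
from math import inf

def uncertain_present_bias_approx(edges, b, s="s", t="t"):
    # b = min B. Returns (c_tilde, reward) with reward = 2 * alpha / b.
    nodes = {v for e in edges for v in e}

    d = {v: inf for v in nodes}              # cheapest-path costs to t
    d[t] = 0
    for _ in nodes:
        for (v, w), c in edges.items():
            d[v] = min(d[v], c + d[w])
    d_b = {(v, w): c + b * d[w] for (v, w), c in edges.items()}

    # Minmax path P: smallest threshold alpha whose subgraph
    # {e : d_b(e) <= alpha} still connects s to t, plus a path in it.
    def reach(alpha):                        # nodes reaching t via cheap edges
        ok, grew = {t}, True
        while grew:
            grew = False
            for (v, w) in edges:
                if v not in ok and w in ok and d_b[(v, w)] <= alpha:
                    ok.add(v); grew = True
        return ok
    alpha = min(x for x in sorted(set(d_b.values())) if s in reach(x))
    ok = reach(alpha)
    P, v = [s], s
    while v != t:
        v = next(w for (u, w) in edges
                 if u == v and w in ok and d_b[(u, w)] <= alpha)
        P.append(v)
    P_nodes, P_edges = set(P), set(zip(P, P[1:]))

    # Cheapest-path tree T: one successor sigma(v) per node v != t.
    sigma = {v: min((w for (u, w) in edges if u == v),
                    key=lambda w: edges[(v, w)] + d[w])
             for v in nodes - {t}}
    T = {(v, sigma[v]) for v in sigma}

    c_tilde = {e: 0 for e in edges}
    for e in edges:                          # block edges outside P and T
        if e not in P_edges and e not in T:
            c_tilde[e] = 2 * alpha / b + 1
    for v in P[:-1]:                         # out-edges of P into the tree
        if (v, sigma[v]) not in P_edges:
            m, u = 0.0, v                    # most expensive perceived edge
            while True:                      # cost until rejoining P
                m = max(m, d_b[(u, sigma[u])])
                u = sigma[u]
                if u in P_nodes:
                    break
            c_tilde[(v, sigma[v])] = m
    return c_tilde, 2 * alpha / b
```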

Theorem 1

The price of uncertainty is at most 2.

It is interesting to note that UncertainPresentBiasApprox can be executed in polynomial time. Furthermore, in the proof of Theorem 1 we argue that \(\alpha /b \le r(G,B) \le 2\alpha /b\). As a result we have also found an efficient constant factor approximation of UPB-OPT.

Corollary 2

UPB-OPT admits a polynomial time 2-approximation.

4 Variable Present Bias

So far we have considered agents with an unknown but fixed present bias. We now generalize this model to agents whose \(\beta \) may vary arbitrarily within B as they progress through G. It is convenient to think of \(\beta \) as a present bias configuration, i.e., an assignment of present bias values \(\beta (v) \in B\) to the nodes v of G. Whenever the agent reaches a node v, she acts according to the current present bias value \(\beta (v)\). We say that G is motivating with respect to a present bias configuration \(\beta \) if and only if the agent does not quit on a walk from s to t.

To illustrate the consequences of a variable present bias we revisit Alice and Bob’s scenario once more. Recall that the agent in this scenario is either like Alice with a present bias of \(a = 1/2 - \varepsilon \) or like Bob with a present bias of \(b = 1/2 + \varepsilon \), i.e., \(B = \{a,b\}\). But while she had to commit to one present bias before, she is now free to change between a and b. For instance, her present bias could be b at s and \(v_B\), but a otherwise, i.e., \(\beta (v) = b\) for \(v \in \{s,v_B\}\) and \(\beta (v) = a\) for \(v \in V \setminus \{s,v_B\}\). In this case she walks along the same path Bob would take, i.e., \(s,v_B,v_{AB},t\). However, there is a subtle difference. At \(v_{AB}\) the agent behaves like Alice and needs strictly more reward than Bob to remain motivated while traversing \((v_{AB},t)\). Upon closer examination, which we omit here, it is in fact easy to see that the variability of \(\beta \) makes our agent more expensive to motivate than any agent with a fixed present bias from B.
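In the simulation sketch from Sect. 2, a variable present bias amounts to re-drawing \(\beta \) at every node. The run below, which assumes the edge dictionary and \(\varepsilon = 1/100\) from before, reproduces the configuration just discussed; note that a reward of 32.5 still motivates Bob's fixed bias b at \(v_{AB}\) (since \(16 \le 32.5\,b\)) but not the variable agent:

```python
def agent_walk_variable(edges, beta_of, r, s="s", t="t"):
    # Identical to agent_walk, except that the bias is re-drawn at every
    # node: at node v the agent plans and checks motivation with beta_of(v).
    nodes = {v for e in edges for v in e}
    d = {v: float("inf") for v in nodes}
    d[t] = 0
    for _ in nodes:
        for (v, w), c in edges.items():
            d[v] = min(d[v], c + d[w])
    walk, v = [s], s
    while v != t:
        beta = beta_of(v)                    # current bias beta(v) in B
        out = {w: c + beta * d[w] for (u, w), c in edges.items() if u == v}
        v = min(out, key=out.get)
        if out[v] > beta * r:
            return walk, False
        walk.append(v)
    return walk, True

a, b = 0.5 - eps, 0.5 + eps
beta_cfg = lambda v: b if v in ("s", "vB") else a   # b at s and vB, a elsewhere
print(agent_walk_variable(edges, beta_cfg, r=33))    # (['s','vB','vAB','t'], True)
print(agent_walk_variable(edges, beta_cfg, r=32.5))  # quits at vAB: 16 > 32.5 * a
```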

4.1 Computational Considerations

Let G be an arbitrary task graph and B a suitable present bias set. We want to construct a cost configuration \(\tilde{c}\) that is motivating for all present bias configurations \(\beta \in B^V\), but requires as little reward as possible. Using arguments similar to those of Sect. 3, the computational challenges of this task are readily apparent. In particular, the corresponding decision problem VARIABLE PRESENT BIAS (VPB) is equivalent to MCC whenever B only contains a single element.

Definition 4

(VPB). Given a task graph G, present bias set B and reward \(r > 0\), decide whether a cost configuration \(\tilde{c}\) motivating for all \(\beta \in B^V\) exists.

Because MCC is NP-complete [3], it immediately follows that VPB is NP-hard. A proof that \(\mathrm{VPB} \in \mathrm{NP}\) can be found in the full version of this paper.

Corollary 3

VPB is NP-complete.

As it is NP-hard to find optimal cost configurations for general B, we turn to the optimization version of the problem. For this purpose let \(r(G,B^V)\) be the infimum over all rewards admitting a cost configuration \(\tilde{c}\) motivating for all \(\beta \in B^V\) and define VPB-OPT as:

Definition 5

(VPB-OPT). Given a task graph G and present bias set B, determine \(r(G,B^V)\).

Interestingly, approximating VPB-OPT seems to be much harder than UPB-OPT. The reason why the 2-approximation for UPB-OPT, i.e., UncertainPresentBiasApprox, does not work anymore is simple. Recall that the cost configuration \(\tilde{c}\) returned by the algorithm lets the agent take shortcuts along cheapest paths to t. To ensure that these shortcuts do not become too expensive, \(\tilde{c}\) assigns extra cost to their initial edge. This way the perceived cost within a shortcut should not exceed the perceived cost of entering it. As long as the present bias is fixed, this works fine. However, if the present bias can change, the agent may become more biased within a shortcut and require higher rewards to stay motivated. One way to fix this problem is to let the assigned extra cost depend on \(\tau \), i.e., the range of B. More precisely, we multiply the cost assigned in line 9 of Algorithm 1 by \(\tau \) and change line 5 to assign a cost of \(\tilde{c}(e) = (1+\tau )\alpha /b + 1\). As a result we obtain a new algorithm VariablePresentBiasApprox with an approximation ratio of \(1 + \tau \).
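Assuming the sketch of uncertain_present_bias_approx from Sect. 3.3 is in scope, the modified algorithm can even be obtained mechanically from a run of the old one: matched \(\varsigma \)-path costs never exceed \(\alpha \), and \(\alpha < 2\alpha /b + 1\), so the two kinds of extra cost never collide and can be told apart and rescaled. This composition is our shortcut for illustration; in the paper, Algorithm 1 is edited directly:

```python
def variable_present_bias_approx(edges, b, tau, s="s", t="t"):
    # Run the tau = 1 sketch, then rescale: blocked edges carry exactly
    # 2 * alpha / b + 1, matched sigma-path edges carry at most alpha.
    c1, reward1 = uncertain_present_bias_approx(edges, b, s, t)
    alpha_over_b = reward1 / 2
    blocked = 2 * alpha_over_b + 1
    c_tilde = {e: ((1 + tau) * alpha_over_b + 1 if x == blocked else tau * x)
               for e, x in c1.items()}
    return c_tilde, (1 + tau) * alpha_over_b
```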

Theorem 2

VPB-OPT admits a polynomial time \((1+\tau )\)-approximation.

Although VariablePresentBiasApprox yields a good approximation for a moderately variable present bias, it does not provide constant approximation bounds like UncertainPresentBiasApprox. Surprisingly, a sophisticated reduction from VECTOR SCHEDULING (VS) [4] shows that VPB-OPT cannot have an efficient constant factor approximation unless \(\mathrm{ZPP} = \mathrm{NP}\).

Theorem 3

No polynomial time algorithm can approximate VPB-OPT within a constant factor \(\varrho > 1\), unless \(\mathrm {NP} = \mathrm {ZPP}\).

4.2 Occasionally Unbiased Agents

Although VPB is hard to solve in general, a curious special case consisting of all present bias sets B for which \(1 \in B\) is not. Note that agents whose present bias varies within such a B become temporarily unbiased whenever 1 is drawn. For this reason we call these agents occasionally unbiased. A behavioral pattern unique to occasionally unbiased agents is that they may start to walk along a cheapest path whenever their present bias becomes 1. As a result we can reduce VPB to a decision problem we call CRITICAL NODE SET (CNS) for occasionally unbiased agents.

Definition 6

(CNS). Given a task graph G, present bias set B and reward r, decide the existence of a critical node set W.

We consider a node set W critical if the following properties hold: (a) \(s \in W\). (b) Each node \(v \in W\) has a path P to t that only uses nodes of W. (c) All edges e of P satisfy \(d_{b}(e) \le br\) with \(b = \min B\). As it turns out, such a W contains exactly those nodes an occasionally unbiased agent may visit with respect to a motivating cost configuration. This allows us to reduce VPB to CNS.

Proposition 3

If \(1 \in B\), then VPB has a solution if and only if CNS has one.

All that remains to show is that CNS is decidable in polynomial time. A straightforward approach to this simple algorithmic problem is DecideCriticalNodeSet. We therefore conclude that VPB is efficiently solvable for occasionally unbiased agents.

Algorithm 2. DecideCriticalNodeSet
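A possible implementation of this procedure is sketched below, using the edge-dictionary representation from Sect. 2; it is our reading of properties (a) to (c), not a transcript of the pseudocode above. Keep only the edges satisfying property (c) and collect all nodes that still reach t through them; the collected set satisfies (b) and (c) by construction and contains every critical node set, so only property (a) remains to be tested:

```python
from math import inf

def decide_critical_node_set(edges, B_min, r, s="s", t="t"):
    b = B_min
    nodes = {v for e in edges for v in e}
    d = {v: inf for v in nodes}                 # cheapest-path costs to t
    d[t] = 0
    for _ in nodes:
        for (v, w), c in edges.items():
            d[v] = min(d[v], c + d[w])
    # Property (c): keep only edges with d_b(e) = c(e) + b * d(w) <= b * r.
    allowed = [(v, w) for (v, w), c in edges.items() if c + b * d[w] <= b * r]
    W, grew = {t}, True                          # property (b): W reaches t
    while grew:                                  # through allowed edges only
        grew = False
        for (v, w) in allowed:
            if v not in W and w in W:
                W.add(v); grew = True
    return s in W                                # property (a)
```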

Corollary 4

If \(1 \in B\), then VPB can be solved in polynomial time.

4.3 The Price of Variability

To conclude our work, we take a step back from computational considerations and look at the implications of variability from a more general perspective. Our goal is to quantify the conceptual loss of efficiency incurred by going from a fixed and known present bias to an unpredictable and variable one. Similar to the price of uncertainty we define the price of variability as the following ratio.

Definition 7

(Price of Variability). Given a task graph G and a present bias set B, the price of variability is defined as \(r(G,B^V)/\sup \{r(G,\{\beta \}) \mid \beta \in B\}\).

It seems obvious that the price of variability depends closely on the structure of G and B. Nevertheless, we would like to find general bounds for the price of variability much like we did in Sect. 3 for the price of uncertainty. As a first step, it is instructive to note that the price of uncertainty is a natural lower bound for the price of variability. The reason for this is that each cost configuration that motivates an agent whose present bias varies arbitrarily in B must also motivate an agent whose present bias is a fixed value from B. Therefore it holds true that \(r(G,B^V) \ge r(G,B)\), which immediately implies the stated bound. Sometimes this bound is tight. Consider for instance Alice and Bob’s scenario. As we have shown in Sect. 3, it is possible to construct a cost configuration \(\tilde{c}\) verifying a price of uncertainty of 1. Using similar arguments, it is easy to see that \(\tilde{c}\) remains motivating if we allow the present bias to vary, implying an identical price of variability. However, for general instances of G and B this tight relation between the price of uncertainty and the price of variability is lost. In fact, we can show that unlike the price of uncertainty, which has a constant upper bound of 2, the price of variability may become arbitrarily large as the range of B increases.

Proposition 4

There exists a family of task graphs and present bias sets for which the price of variability converges to \(\tau /2\).

Although Proposition 4 implies that the price of variability can become substantially larger than the price of uncertainty, it should be noted that the task graph constructed in the proof of this proposition is close to a worst case scenario. In particular, we can show that the price of variability cannot exceed \(\tau + 1\), which is roughly twice the value obtained by Proposition 4. To verify this upper bound, it is helpful to recall the proof of Theorem 2. In the process of establishing the approximation ratio of VariablePresentBiasApprox we have argued that the cost configuration \(\tilde{c}\) returned by the algorithm motivates any agent with a present bias configuration \(\beta \in B^V\) for a reward of at most \((\tau + 1)r(G,\{\min B\})\). Consequently, it holds true that \(r(G,B^V )\le (\tau + 1)r(G,\{\min B\})\), implying that the price of variability cannot exceed \(\tau + 1\).

Corollary 5

The price of variability is at most \(\tau + 1\).