Non-parametric Test of Time Consistency: Present Bias and Future Bias

Chapter in: Behavioral Economics of Preferences, Choices, and Happiness

Abstract

This chapter reports the elicited time preferences of human subjects in a laboratory setting. The model allows for non-linear utility functions, non-separability between delay and reward, and time inconsistency, including future bias in addition to present bias. In particular, the experiment (1) runs a non-parametric test of time consistency and (2) estimates the form of the time discount function independently of the instantaneous utility function, and (3) the results suggest that many subjects exhibit future bias, indicating an inverse S-shaped time discount function.

The original article first appeared in Games and Economic Behavior 71:456–478, 2011. A newly written addendum has been added to this book chapter.


Notes

  1. For humans, Frederick et al. (2002) provide a comprehensive review of time preference from an economic perspective, while Green and Myerson (2004) provide an overview of studies in psychology. In the experiments on pigeons and rats, the reward is food/water or the access to it. Time preference is referred to as impulsive behavior in the literature. See Monterosso and Ainslie (1999) for a survey.

  2. Halevy (2008) calls this diminishing impatience.

  3. There are a few exceptions. Kirby and Santiesteban (2003) compare u(x) = x with \(u(x) = \sqrt{x}\) and find no significant difference in goodness-of-fit. Andersen et al. (2008), Fernández-Villaverde and Mukherji (2006) and Ida and Goto (2009) assume a constant relative risk-aversion (CRRA) utility function. Rubinstein (2003) does not impose any assumptions. The novel experimental design of Attema et al. (2010) does not require a functional form (it is utility-free). Tanaka et al. (2010) estimate parameters for CRRA utility functions incorporating loss aversion and a probability weighting function.

  4. A trivial example of such a data set is \(\{(x_{i},t_{i}) \mid x_{i} = e^{rt_{i}}\}\).

  5. Frederick et al. (2002) refer to the magnitude effect as one of the six commonly observed anomalies. It is referred to as amount-dependent discounting in the psychology literature. See the extensive survey by Green and Myerson (2004).

  6. For example, \(u(x) = x^{0.42} + 45.9\) can accommodate the anomaly above. That is, u(15)/u(60) = u(3,000)/u(4,000) = 0.95. Masatlioglu and Ok presented this numerical example in an earlier version of their paper.

  7. There is one exception: the research by Benhabib et al. (2010), which allows for a fixed cost of present bias.

  8. Noor (2010) similarly defines a more general time compensation function, \(\varPsi_{s,l}(t)\). \(T(x,x^{\prime})\) is equivalent to \(\varPsi_{x,x^{\prime}}(0)\).

  9. Notice there are two underlying assumptions. One is that subjects are expected utility maximizers and the other is that u is time invariant.

  10. I use only this probability equivalence (PE) method, not a certainty equivalence (CE) method, which elicits the certainty equivalent x for a given lottery \((x^{\prime},p^{\prime})\). Since this experiment intends to examine the correspondence between the time delay and the risk for a pair of fixed rewards, the CE method is not applicable. Note, however, that a systematic bias and a discrepancy between the PE and CE methods are reported in Hershey and Schoemaker (1985).

  11. Recall that subjects are assumed to be EU maximizers. If prospect theory applies here, that is, if subjects transform p into a subjective weight π(p), then the identity above should be \(D(t^{\prime}) =\pi (p^{\prime})\). Note, however, that the estimated time discount function still represents the risk \(p^{\prime}\) corresponding to the given delay \(t^{\prime}\). Thus, this experimental design still integrates the risk and time preferences.

  12. The front end delay (FED) design is used to control for the transaction cost of the rewards and the immediacy effect in recent experimental studies (Andersen et al. 2008; Benhabib et al. 2010; Coller and Williams 1999). With the FED, the earlier option is not paid immediately; instead, it is paid with a short delay (see Harrison and Lau (2005) for a discussion). Although I was aware of this advantage, I did not adopt it, for the following reason. Suppose two delayed options are offered, (x, t) and \((x^{\prime},t^{\prime})\), where \(0 < t < t^{\prime}\). Note that, in theory, the time discount function depends on both timings (see Masatlioglu and Ok 2007). That is, \(D(t,t^{\prime})\) is not necessarily equal to \(D(0,t^{\prime}- t)\) or \(D(0,t^{\prime})/D(0,t)\). Thus, we cannot use that observation to elicit \(D(0,t^{\prime})\). In addition, it is important to keep the symmetric structure between the time and risk preference tasks. As Keren and Roelofsma (1995) and Halevy (2008) argue, the immediacy effect and the certainty effect have several common properties. If that is the case, the immediate reward (t = 0) corresponds to the certain reward (p = 1). It is not clear, however, what p would correspond to a seven-day FED (t = 7).

  13. Prelec and Loewenstein (1991) review and contrast the anomalies in both expected utility theory and discounted utility theory. For example, decreasing impatience (the “common difference effect”) corresponds to the “common ratio effect (anomaly)” in expected utility theory, and present bias is equivalent to the certainty effect anomaly. The similar structure of those anomalies supports my view.

  14. \(\theta\) is introduced to capture any unobservable heterogeneity, or frailty. Assume that the frailty a has a multiplicative effect on the hazard function, h(t | a) = ah(t), and that the unobserved a follows a Gamma distribution, G\((1/\theta,\theta)\). This results in the D given above (Mudholkar et al. 1996). A small simulation sketch of this Gamma-frailty mixture is given after these notes.

  15. Becker et al. (1964).

  16. See Bohm et al. (1997), who find that the BDM mechanism is sensitive to the underlying distribution of valuations.

  17. Since the purpose of the BDM mechanism in this experiment is not to test the mechanism but to make subjects reveal their valuation, I believe that it is appropriate to teach the subjects about the incentive property. After they read the instructions for the time preference part, subjects answer two review questions on the mechanism. Out of 55 subjects, 35 answered both questions correctly and 12 answered one of the questions correctly.

  18. Two subjects asked about the possible range of the delay. I answered that there was a range from which the computer program would choose the proposed delay, but I did not tell them the range. I then repeated that their best response was still to answer the questions truthfully regardless of the range.

  19. In addition, it seemed that these two methods were not always incentive compatible. However, I leave this issue for future research on the methodology, as it calls for a rigorous investigation.

  20. Attema et al. (2010) independently develop another experimental design in the same spirit. For a given pair of rewards \(x < x^{\prime}\), they elicit the length of the interval between the two rewards that makes the two options equally good. Suppose \((x, 0) \sim (x^{\prime},t_{1})\). In the next question, subjects compare \((x, t_{1})\) and \((x^{\prime},t_{2})\), eliciting the \(t_{2}\) that makes \((x,t_{1}) \sim (x^{\prime},t_{2})\). This sequence of adaptive questions yields the shape of the time discount function. See their paper for more detail. Note that, due to its adaptive nature, this method would not be incentive compatible if the reward were real money.

  21. I do not observe a significant order effect in the reported delays. However, the subjects in the last two sessions, who completed the risk preference part first, reported significantly lower minimum acceptable odds than those in the first three sessions. The mean difference is 8.48 percentage points and the p-value of the t-test is 0.051.

  22. The first group consists of four questions, whose reward pairs are ($5, $10), ($5, $15), ($5, $20) and ($5, $25). The next group includes ($10, $15), ($10, $20) and ($10, $25).

  23. It took 145.1 s on average for a subject to complete the time preference task and 144.7 s for the risk preference task. Of 56 subjects, 21 revised their answers in the time preference part and 18 revised their answers in the risk preference part.

  24. There are several implementations of delayed payments. Harrison et al. (2005) have the Danish Ministry of Economic and Business Affairs transfer the delayed payment into the subjects’ bank accounts. Anderhub et al. (2001) and Coller and Williams (1999) give a post-dated check to subjects. Benhabib et al. (2010) send a check to the subject’s mailing address. Tanaka et al. (2010) assign a village leader to deliver future rewards to participants in the village.

  25. There was a subject who answered 366 (days) to all 10 delay questions. The value 366 was the longest delay that subjects could input. I refer to this subject as ID56.

  26. In this analysis, I excluded the data of ID56, since his response always implies future bias no matter what his true time preference is.

  27. Note, however, that for the other 22 subjects the model has little explanatory power (\(R^{2} < 0.05\)). I treat them separately and discuss this later.

  28. The experiment has two treatments. In the first treatment, subjects are asked to choose one of two future rewards. In the second treatment, they are asked to choose between one immediate reward and another future reward. If there is an immediacy effect (present bias) and a premium for accepting any delayed reward instead of an immediate one, then the premium is present only in the second treatment.

  29. I thank an anonymous referee for his/her detailed comments pointing out these issues.

  30. In our study, to minimize such skepticism, subjects were given a postal money order, wrote their names and addresses on the money order and an envelope, and then sealed the money order in the envelope.

  31. I am aware that future bias can be explained by the same psychological process that causes subadditive discounting (Read 2001; Scholten and Read 2006). A subadditive discount function means D(0, t) > D(0, s) × D(s, t) and implies present-biased behavior. Read explains that “when an object or event is subdivided, each part is paid more attention than if it is part of a larger whole (p. 10).” Notice that a similar subadditivity of attention can lead to the opposite, future bias, in this framework: namely, if a subject interprets the difference between \(x_{0}\) and \(x_{1}\) as an object, then the acceptable delay is a function of the difference, i.e., \(\tilde{T}(x_{1} - x_{0}) = T(x_{0},x_{1})\). When there is subadditivity in \(\tilde{T}\), it results in future-bias observations.

  32. Time-related aspects and delay discounting play important roles in clinical decisions. See Bos et al. (2005) and Ortendahl and Fries (2006) for reviews and discussions.

  33. For example, nicotine-dependent (Reynolds et al. 2004) and alcoholic (Petry 2001) individuals have more myopic time preferences than individuals without any addiction.

  34. This result supports one of the main findings of Andersen et al. (2008).

  35. In psychopharmacology, there is extensive research on the relationship between addictive behavior and discounting. Reynolds (2006) and Bickel et al. (2007) provide comprehensive reviews of the literature.

  36. See the extensive survey by Cardinal (2006) for other examples.

  37. See also Kable and Glimcher (2007) for other arguments.

  38. This simple example includes the for-sure option of (100 % 18 days) for illustration. For some other choice tasks, the assigned delay is uncertain for both the left and the right options. For example, in another task, the DM is asked to choose one of the following options: (Left; 50 % 11 days; 50 % 25 days) and (Right; 75 % 11 days; 25 % 39 days). Note that the expected delay of both future options is 18 days.
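The following sketch illustrates the Gamma-frailty mixture mentioned in note 14. It is purely illustrative: the baseline hazard rate (0.04 per day) and the frailty variance θ = 0.8 are assumed values, not estimates from the chapter, and the closed form shown is the standard Gamma-frailty survival function rather than the exact D reported in the main text.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.8            # frailty variance (assumed value for illustration)
t = 30.0               # an example delay, in days
H = 0.04 * t           # cumulative baseline hazard H(t) under an assumed 0.04/day rate

# Frailty a ~ Gamma(shape = 1/theta, scale = theta), so E[a] = 1 and Var[a] = theta.
a = rng.gamma(1.0 / theta, theta, size=1_000_000)

# Mixing the conditional survival exp(-a * H(t)) over the frailty distribution...
monte_carlo = np.exp(-a * H).mean()
# ...matches the closed-form Gamma-frailty survival (1 + theta * H(t))^(-1/theta).
closed_form = (1.0 + theta * H) ** (-1.0 / theta)

print(monte_carlo, closed_form)   # the two numbers should agree to about three decimals
```

With a Weibull baseline hazard, this mixture falls in the generalized Weibull family of Mudholkar et al. (1996) cited in note 14.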

References

  • Ahlbrecht M, Weber M (1997) An empirical study on intertemporal decision making under risk. Manag Sci 43(6):813–826

    Article  Google Scholar 

  • Anderhub V, Güth W, Gneezy U, Sonsino D (2001) On the interaction of risk and time preferences: An experimental study. Ger Econ Rev 2(3):239–253

    Article  Google Scholar 

  • Andersen S, Harrison GW, Lau MI, Rutström EE (2008) Eliciting risk and time preferences. Econometrica 76(3):583–618

    Article  Google Scholar 

  • Attema AE, Bleichrodt H, Rohde KI, Wakker PP (2010) Time-tradeoff sequences for analyzing discounting and time inconsistency. Manag Sci 56(11):2015–2030

    Article  Google Scholar 

  • Becker GM, DeGroot MH, Marschak J (1964) Measuring utility by a single-response sequential method. Behav Sci 9(3):226–232

    Article  Google Scholar 

  • Benhabib J, Bisin A, Schotter A (2010) Present-bias, quasi-hyperbolic discounting, and fixed costs. Games Econ Behav 69(2):205–223

    Article  Google Scholar 

  • Benzion U, Rapoport A, Yagil J (1989) Discount rates inferred from decisions: an experimental study. Manag Sci 35(3):270–284

    Article  Google Scholar 

  • Bickel WK, Miller ML, Yi R, Kowal BP, Lindquist DM, Pitcock JA (2007) Behavioral and neuroeconomics of drug addiction: competing neural systems and temporal discounting processes. Drug Alcohol Depend 90:S85–S91

    Article  Google Scholar 

  • Bohm P (1994) Time preference and preference reversal among experienced subjects: the effects of real payments. Econ J 104(427):1370–1378

    Article  Google Scholar 

  • Bohm P, Lindén J, Sonnegård J (1997) Eliciting reservation prices: becker-DeGroot-Marschak mechanisms vs. markets. Econ J 107:1079–1089

    Article  Google Scholar 

  • Bommier A (2006) Uncertain lifetime and intertemporal choice: risk aversion as a rationale for time discounting. Int Econ Rev 47:1223–1246

    Article  Google Scholar 

  • Bos JM, Postma MJ, Annemans L (2005) Discounting health effects in pharmacoeconomic evaluations: current controversies. Pharmacoecon 23:639–649

    Article  Google Scholar 

  • Cairns JA, van der Pol MM (1997) Constant and decreasing timing aversion for saving lives. Soc Sci Med 45(11):1653–1659

    Article  Google Scholar 

  • Cardinal RN (2006) Neural systems implicated in delayed and probabilistic reinforcement. Neural Netw 19:1277–1301

    Article  Google Scholar 

  • Chapman GB, Winquist JR (1998) The magnitude effect: temporal discount rates and restaurant tips. Psychon Bull Rev 5(1):119–123

    Article  Google Scholar 

  • Chapman GB, Nelson R, Hier DB (1999) Familiarity and time preferences: decision making about treatments for migraine headaches and Crhon’s disease. J Exp Psychol Appl 5:17–34

    Article  Google Scholar 

  • Chesson H, Viscusi WK (2000) The heterogeneity of time-risk tradeoffs. J Behav Decis Mak 13(2):251–258

    Article  Google Scholar 

  • Coller M, Williams MB (1999) Eliciting individual discount rates. Exp Econ 2:107–127

    Article  Google Scholar 

  • Coller M, Harrison GW, Rutström EE (2012) Latent process heterogeneity in discounting behavior. Oxf Econ Pap 64(2):375–391

    Article  Google Scholar 

  • Dasgupta P, Maskin E (2005) Uncertainty and hyperbolic discounting. Am Econ Rev 94(4):1290–1299

    Article  Google Scholar 

  • Eckel C, Engle-Warnick J, Johnson C (2005) Adaptive elicitation of risk preference. Working paper

    Google Scholar 

  • Fernández-Villaverde J, Mukherji A (2006) Can we really observe hyperbolic discouting? Working paper

    Google Scholar 

  • Fischbacher U (2007) z-Tree: Zurich toolbox for ready-made economic experiments. Exp Econ 10(2):171–178

    Article  Google Scholar 

  • Frederick S, Loewenstein G, O’Donoghue T (2002) Time discounting and time preference: a critical review. J Econ Lit 40(2):351–401

    Article  Google Scholar 

  • Green L, Myerson J (1996) Exponential versus hyperbolic discounting of delayed outcomes: risk and wating time. Am Zool 36(4):496–505

    Article  Google Scholar 

  • Green L, Myerson J (2004) A discounting framework for choice with delayed and probabilistic rewards. Psychol Bull 130(5):769–792

    Article  Google Scholar 

  • Green L, Myerson J, McFadden E (1997) Rate of temporal discounting decreases with amount of reward. Mem Cogn 25(5):715–723

    Article  Google Scholar 

  • Halevy Y (2008) Strotz meets Allais: diminishing impatience and the certainty effect. Am Econ Rev 98(3):1145–1162

    Article  Google Scholar 

  • Harrison G, Lau MI, Williams MB (2002) Estimating individual discount rates in Denmark: a field experiment. Am Econ Rev 92:1606–1617

    Article  Google Scholar 

  • Harrison GW, Lau MI (2005) Is the evidence for hyperbolic discounting in humans just an experimental artefact? Behav Brain Sci 28:657

    Article  Google Scholar 

  • Harrison GW, Lau MI, Rutström EE, Sullivan MB (2005) Eliciting risk and time preferences using field experiments: some methodological issues. In: Carpenter J, Harrison GW, List JA (eds) Field experiments in economics. Research in experimental economics, vol 10. JAI Press, Greenwich, pp 125–218

    Chapter  Google Scholar 

  • Hershey JC, Schoemaker PJ (1985) Probability versus certainty equivalence methods in utility measurement: are they equivalent? Manag Sci 31(10):1213–1231

    Article  Google Scholar 

  • Hesketh B (2000) Time perspective in career-related choices: applications of time discounting principles. J Vocat Behav 57:62–84

    Article  Google Scholar 

  • Holcomb JH, Nelson PS (1992) Another experimental look at individual time preference. Ration Soc 4:199–220

    Article  Google Scholar 

  • Holden ST, Shiferaw B, Wik M (1998) Poverty, market imperfections and time preferences of relevance for environmental policy? Env Devel Econ 3:105–130

    Article  Google Scholar 

  • Holt CA, Laury SK (2002) Risk aversion and incentive effects. Am Econ Rev 92(5):1644–1655

    Article  Google Scholar 

  • Ida T, Goto R (2009) Simultaneous measurement of time and risk preferences: stated preference discrete choice modeling analysis depending on smoking behavior. Int Econ Rev 50(4):1169–1182

    Article  Google Scholar 

  • Kable JW, Glimcher PW (2007) The neural correlates of subjective value during intertemporal choice. Nat Neurosci 10:1625–1633

    Article  Google Scholar 

  • Keren G, Roelofsma P (1995) Immediacy and certainty in intertemporal choice. Org Behav Hum Decis Process 63(3):287–297

    Article  Google Scholar 

  • Kinari Y, Ohtake F, Tsutsui Y (2009) Time discounting: declining impatience and interval effect. J Risk Uncertain 39(1):87–112

    Article  Google Scholar 

  • Kirby KN (1997) Bidding on the future: evidence against normative discounting of delayed rewards. J Exp Psychol Gen 126(1):54–70

    Article  Google Scholar 

  • Kirby KN, Maraković NN (1995) Modeling myopic decisions: evidence for hyperbolic delay-discounting within subjects and amounts. Org Behav Hum Decis Process 64(1):22–30

    Article  Google Scholar 

  • Kirby KN, Santiesteban M (2003) Concave utility, transaction costs, and risk in measuring discounting of delayed rewards. J Exp Psychol Learn Mem Cogn 29(1):66–79

    Article  Google Scholar 

  • Kirby KN, Petry NM, Bickel W (1999) Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. J Exp Psychol Gen 128(1):78–87

    Article  Google Scholar 

  • Laibson D (1997) Golden eggs and hyperbolic discounting. Quart J Econ 112(2):443–477

    Article  Google Scholar 

  • Loewenstein G (1987) Anticipation and the valuation of delayed consumption. Econ J 97(387):666–684

    Article  Google Scholar 

  • Loewenstein G, Prelec D (1992) Anomalies in intertemporal choice: evidence and an interpretation. Q J Econ 107(2):573–597

    Article  Google Scholar 

  • Masatlioglu Y, Ok EA (2007) A theory of (relative) discounting. J Econ Theory 137(1):214–245

    Article  Google Scholar 

  • McClure SM, Ericson KM, Laibson DI, Loewenstein G, Cohen JD (2007) Time discounting for primary rewards. J Neurosci 27:5796–5804

    Article  Google Scholar 

  • Monterosso J, Ainslie G (1999) Beyond discounting: possible experimental models of impulse control. Psychopharmacology 146:339–347

    Article  Google Scholar 

  • Mudholkar GS, Srivastava DK, Kollia GD (1996) A generalization of the Weibull distribution and application to the analysis of survival data. J Am Stat Assoc 91:1575–1583

    Article  Google Scholar 

  • Noor J (2010) Time preference data and functional equations. Boston University, working paper

    Google Scholar 

  • Ortendahl M, Fries JF (2006) Discounting and risk characteristics in clinical decision-making. Med Sci Monit 12:RA41–45

    Google Scholar 

  • Petry NM (2001) Delay discounting of money and alcohol in activity using alcoholics, currently abstinent alcoholics, and controls. Psychopharmacology 154:243–250

    Article  Google Scholar 

  • Prelec D (2004) Decreasing impatience: a criterion for non-stationary time preference and “hyperbolic” discounting. Scand J Econ 106(3):511–532

    Article  Google Scholar 

  • Prelec D, Loewenstein G (1991) Decision making over time and under uncertainty: a common approach. Manag Sci 37(7):770–786

    Article  Google Scholar 

  • Rachlin H, Raineri A, Cross D (1991) Subjective probability and delay. J Exp Anal Behav 55(2):233–244

    Article  Google Scholar 

  • Read D (2001) Is time-discounting hyperbolic or subadditive? J Risk Uncertain 23(1):5–32

    Article  Google Scholar 

  • Reuben E, Sapienza P, Zingales L (2010) Time discounting for primary and monetary rewards. Econ Lett 106(2):125–127

    Article  Google Scholar 

  • Reynolds B (2006) A review of delay-discounting research with humans: relations to drug use and gambling. Behav Pharmacol 17:651–667

    Article  Google Scholar 

  • Reynolds B, Richards JB, Hornc K, Karraker K (2004) Delay discounting and probability discounting as related to cigarette smoking status in adults. Behav Process 65:35–42

    Article  Google Scholar 

  • Rubinstein A (2003) “Economics and psychology”? the case of hyperbolic discounting. Int Econ Rev 44(4):1207–1216

    Article  Google Scholar 

  • Rubinstein A (2006) Discussion of “behavioral economics”. In: Blundell R, Newey WK, Persson T (eds) Advances in economics and econometrics. Econometric Society monographs, vol 42. Cambridge University Press, Cambridge, pp 246–257

    Chapter  Google Scholar 

  • Sayman S, Öncüler A (2009) An investigation of time-inconsistency. Manag Sci 55(3):470–482

    Article  Google Scholar 

  • Scholten M, Read D (2006) Discounting by intervals: a generalized model of intertemporal choice. Manag Sci 52(9):1424–1436

    Article  Google Scholar 

  • Stevenson MK (1986) A discounting model for decisions with delayed positive and negative outcomes. J Exp Psych 115:131–154

    Article  Google Scholar 

  • Takeuchi K (2011) Non-parametric test of time consistency: present bias and future bias. Games Econ Behav 71(2):456–478

    Article  Google Scholar 

  • Takeuchi K (2012) Time discounting: the concavity of time discount function: an experimental study. J Behav Econ Financ 5:2–9

    Google Scholar 

  • Tanaka T, Camerer CF, Nguyen Q (2010) Risk and time preferences: linking experimental and household survey data from Vietnam. Am Econ Rev 100(1):557–571

    Article  Google Scholar 

  • Thaler R (1981) Some empirical evidence on dynamic inconsistency. Econ Lett 8:201–207

    Article  Google Scholar 

  • van der Pol M, Cairns J (2001) Estimating time preferences for health using discrete choice. Soc Sci Med 52:1459–1470

    Article  Google Scholar 

  • Wahlund R, Gunarsson J (1996) Mental discounting and financial strategies. J Econ Perspect 17(6):709–730

    Google Scholar 

  • Warner JT, Pleeter S (2001) The personal discount rate: evidence from military downsizing programs. Am Econ Rev 91(1):33–53

    Article  Google Scholar 

  • Yaari ME (1965) Uncertain lifetime, life insurance, and the theory of the consumer. Rev Econ Stud 32:137–150

    Article  Google Scholar 


Acknowledgements

I thank Yan Chen, Fuhito Kojima, Yusufcan Masatlioglu, Daisuke Nakajima, Emre Ozdenoren, Scott Page, Matthew Rabin, Shunichiro Sasaki, Andrew Schotter, Lones Smith, Nathaniel Wilcox and seminar participants at Michigan, Caltech, Hitotsubashi, Amsterdam, the French Economic Association meetings (Lyon), the Japan Economic Association meetings (Tokyo) and the ESA meetings (Osaka, Shanghai and Tucson) for helpful comments and discussions. I thank Benjamin Taylor and Xiao Liu for their excellent research assistance. I especially thank Yan Chen, two anonymous referees and an advisory editor for valuable feedback. All errors are mine. The research support provided by NSF grant SES0339587 to Chen is gratefully acknowledged. This work was also supported by JSPS KAKENHI Grant Number 22730156.

Author information

Correspondence to Kan Takeuchi.


Appendices

Appendix

1.1 Proof

Proof (Proof of Proposition 1)

Assume a subject exhibits decreasing impatience. Choose arbitrary \(w < z \leq x_{1} < x_{2}\). Let \(t_{1} = T(z,x_{1})\), \(t_{2} = T(z,x_{2})\) and \(t_{1}+\delta = T(w,x_{1})\). By transitivity, it follows that \((z,0) \sim (x_{1},t_{1}) \sim (x_{2},t_{2})\). Decreasing impatience implies \((x_{1},t_{1}+\delta ) \prec (x_{2},t_{2}+\delta )\), that is, \((x_{1},T(w,x_{1})) \prec (x_{2},T(z,x_{2}) + T(w,x_{1}) - T(z,x_{1}))\). Substituting \((x_{1},T(w,x_{1})) \sim (x_{2},T(w,x_{2}))\), this yields

$$\displaystyle{(x_{2},T(w,x_{2})) \prec (x_{2},T(z,x_{2}) + T(w,x_{1}) - T(z,x_{1})).}$$

Comparing these two options with the same reward \(x_{2}\), observe \(T(w,x_{2}) > T(z,x_{2}) + T(w,x_{1}) - T(z,x_{1})\), which means submodularity.

Next, assume present bias and that T is submodular. Choose arbitrary \(t_{1} \geq 0\), \(\delta \geq 0\) and \(x_{2} > x_{1} > 0\). Suppose \((x_{1},t_{1}) \sim (x_{2},t_{2})\). We want to show \((x_{1},t_{1}+\delta ) \prec (x_{2},t_{2}+\delta )\). Find \(y < z \leq x_{1}\) such that \(t_{1} = T(z,x_{1})\) and \(\delta = T(y,z)\). By submodularity there exists \(w < y\) such that \(t_{1}+\delta = T(w,x_{1})\). Notice that \(T(w,x_{2}) > T(w,y) + T(y,z) + T(z,x_{2}) > T(y,z) + T(z,x_{2}) =\delta +t_{2}\). Therefore,

\((x_{1},t_{1}+\delta ) \sim (x_{1},T(w,x_{1})) \sim (x_{2},T(w,x_{2})) \prec (x_{2},t_{2}+\delta )\). ⊓⊔
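As a concrete illustration of Proposition 1 (not part of the original proof), the following sketch assumes a separable representation with a hyperbolic discount function \(D(t) = 1/(1 + kt)\), which exhibits decreasing impatience, and a square-root utility; it computes the equivalent delays \(T(x,x^{\prime}) = D^{-1}(u(x)/u(x^{\prime}))\) and checks the submodularity inequality on a grid of rewards. All functional forms and numbers are assumptions chosen purely for illustration.

```python
import itertools

# Illustrative primitives (assumptions, not the chapter's estimates).
k = 0.1
u = lambda x: x ** 0.5                      # concave instantaneous utility
D_inv = lambda d: (1.0 / d - 1.0) / k       # inverse of D(t) = 1 / (1 + k t)

def T(x, x_prime):
    """Equivalent delay: (x, 0) ~ (x_prime, T) under u(x) = D(T) * u(x_prime)."""
    return D_inv(u(x) / u(x_prime))

# Submodularity from the proof: T(w, x2) > T(z, x2) + T(w, x1) - T(z, x1)
# for every w < z <= x1 < x2.
rewards = [5, 10, 15, 20, 25]
submodular = all(
    T(w, x2) > T(z, x2) + T(w, x1) - T(z, x1)
    for w, z, x1, x2 in itertools.combinations(rewards, 4)
)
print("submodular on this grid:", submodular)   # expected: True
```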

Proof (Derivation of Remark 1)

Let b denote the fixed cost of future rewards. b is zero if the reward is paid immediately.

First, it is straightforward that the representation \(V(x,t) = e^{-rt}u(x) - b\) results in \(T(x_{0},x_{1}) + T(x_{1},x_{2}) < T(x_{0},x_{2})\) for any three rewards \(x_{0} < x_{1} < x_{2}\). Notice that \(u(x) = e^{-rT(x,y)}u(y) - b\) for a pair of rewards x < y, and apply this equation to the three combinations of \(x_{0}\), \(x_{1}\) and \(x_{2}\). Eliminate \(u(x_{0})\) and \(u(x_{1})\) from those equations and observe \(u(x_{2})\left [e^{-r[T(x_{0},x_{1})+T(x_{1},x_{2})]} - e^{-rT(x_{0},x_{2})}\right ] = b\). For any positive fixed cost (b > 0), this means \(T(x_{0},x_{1}) + T(x_{1},x_{2}) < T(x_{0},x_{2})\), that is, present bias.

Next, let us show that another representation of the fixed cost, \(V(x,t) = e^{-rt}u(x - b)\), may also lead to the present bias result. Consider \(u(x_{0}) = V(x_{1},T(x_{0},x_{1})) = V(x_{2},T(x_{0},x_{2}))\) and \(u(x_{1} - b) = V(x_{2},T(x_{1} - b,x_{2}))\). Altogether, they yield \(T(x_{0},x_{1}) + T(x_{1} - b,x_{2}) = T(x_{0},x_{2})\). Notice that this equation implies \(T(x_{0},x_{1}) + T(x_{1},x_{2}) < T(x_{0},x_{2})\), since \(T(x_{1} - b,x_{2}) > T(x_{1},x_{2})\) for b > 0. ⊓⊔
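A quick numerical check of the first representation in Remark 1 (illustrative values only; linear utility, r = 0.1 and b = 5 are assumptions, not the chapter's parameters): solving \(u(x) = e^{-rT(x,y)}u(y) - b\) for the equivalent delay gives \(T(x,y) = \frac{1}{r}\ln\frac{u(y)}{u(x)+b}\), and the elicited delays then violate additivity in the direction of present bias.

```python
import math

# Illustrative assumptions: linear utility and exponential discounting with a
# fixed cost b attached to any delayed reward (the first representation above).
r, b = 0.1, 5.0
u = lambda x: float(x)

def T(x, y):
    """Equivalent delay solving u(x) = exp(-r * T) * u(y) - b."""
    return math.log(u(y) / (u(x) + b)) / r

x0, x1, x2 = 10, 20, 40
piecewise = T(x0, x1) + T(x1, x2)   # delay accumulated over the two sub-steps
direct = T(x0, x2)                  # delay elicited in one step
# Present bias: the sum of the sub-delays falls short of the direct delay.
print(round(piecewise, 2), "<", round(direct, 2), "->", piecewise < direct)
```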

1.2 Instruction

Experimental Instruction — T/R

Instruction

You are about to participate in an economics experiment in which you will earn dollars as well as money orders based on the decisions you make. All earnings you make in the experiment are yours to keep. Please do not talk to each other during the experiment. If you have a question, please raise your hand and the experimenter will come and help you.

Overview

  1. This experiment consists of two different parts and a two-part follow-up survey.

  2. In the first part, you will be asked several questions about your timing preferences and will earn a money order. The amount of the money order depends on your answers.

  3. In the second part, you will be offered several lotteries to choose from. If you win any, the cash reward will be paid to you at the end of this experiment.

  4. Note that the two parts are completely independent of one another. That is, your choices and the earnings in one part do not affect those in the other part.

  5. We will read the instruction for each part separately. First, we will read the instruction for the first part and you will complete the first task. Then, we will read the instruction for the second part and you will complete the second task. Finally, we will ask you to fill out some survey questions.

  6. At the end of the experiment, each of you will be informed individually of your earnings for both parts, and you will then get paid.

Part 1: Delayed Payment Decision

In this part, we will pay you with a money order. The money order is issued by the US Postal Service and is redeemable for its face value in cash at any post office. It may also be deposited into your bank account.

Task

You will answer a set of ten questions assuming the following situation:

A money order of $A will be given to you at the end of the experiment.

Alternatively, if you are willing to wait, then instead of $A, we will mail you a money order for $B, which is greater than $A, i.e., $B > $A. Consider the longest acceptable delay for which you would be willing to wait to receive the larger amount.

Then, the question asks you to fill out the blank below:

Q: To me, “receiving $A today” is equally as good as “receiving $B in ___ days.”

You must wait to get the larger amount. Decide what length of delay makes the two options the same to you, and fill in that amount.

Note that “receiving $B in T days” means you expect to receive the money order of $B by mail in T days. The actual amounts of $A and $B vary from question to question.

If you get the $B money order, you will write your mailing address on a stamped envelope, sign the money order and seal it in the envelope. We will then mail the envelope later.

After each one of you answers all ten questions, the computer will randomly select one of the questions. Your actual payment will be based on your answer to the selected question.

Procedure

To determine which of $A or $B you get, the computer will randomly choose a number. It will be generated independently of your answers to the questions. This number will become the actual delay for $B, if you get $B. Call that the proposed delay.

If the proposed delay is longer than your longest acceptable delay, you will not get $B. Instead, you will get $A at the end of the experiment.

If the proposed delay is shorter than or equal to your longest acceptable delay, you will get $B. The proposed delay will be the actual delay. Thus, the $B money order will arrive at your mailing address right after the proposed delay.

Example: (For purposes of illustration, we replace days with weeks.)

Suppose that you were asked the following question.

Q: To me, “receiving $70 today” is equally as good as “receiving $100 in ___ weeks.”

If your answer was 10 weeks, i.e.,

To me, “receiving $70 today” is equally as good as “receiving $100 in 10 weeks,”

then, the computer randomly generates a number. If the number is greater than 10, e.g., if it is 14, then you do not get $100. Instead, you will get $70 today.

If the number is less than or equal to 10, then you will get $100. For example, suppose that the number generated is 8. In this case, you will get $100 in 8 weeks.

Any question?

Strategy:

Note that this procedure is such that your best response is to write down the longest delay for which you are willing to wait to get the larger amount, $B.

We now show that truthful reporting is your best strategy. We will illustrate why you will never be better off sending a false report. Let us work through one example. Say that we offer you two amounts, $70 and $100, and ask you to choose a time T such that you would be indifferent between waiting T weeks to receive $100 and receiving $70 today. Let us just assume, for the sake of argument, that you would be indifferent between receiving $70 today and receiving $100 in 10 weeks. The question is: should you tell us T = 10 when we ask you?

To see why the answer is yes, let us say that you are thinking of not telling us the truth. There are two possible cases, under-reporting or over-reporting. We will show that in either case you might be worse off compared to telling the truth.

  1. Under-reporting can make you worse off.

By reporting any shorter delay than your actual acceptable delay, T, you can never be better off, and sometimes be worse off.

Suppose that you falsely answered by saying that your acceptable delay was only 6 weeks, even though your true acceptable delay was 10 weeks, i.e.,

To me, “receiving $70 today” is equally as good as “receiving $100 in 6 weeks.”

The computer randomly chooses a number to propose a delay. Suppose that the number generated is between 6 and 10, say, it is 9. Since this proposed delay is longer than that you reported, i.e., 9 > 6, you receive $70 today. But, the proposed delay is still shorter than your acceptable delay, and thus you would be willing to wait 9 weeks for $100. Receiving $70 today is worse than receiving $100 in 9 weeks. You lose the opportunity to get the better outcome by falsely reporting shorter delay.

Thus, under-reporting will never make you better off.

What about stating T greater than 10 weeks?

  2. Over-reporting can make you worse off as well.

By reporting any longer delay than your actual acceptable delay, T, you may end up waiting too long.

Suppose that you falsely answered by saying that your acceptable delay was 14 weeks, even though your true acceptable delay was 10 weeks. That is,

To me, “receiving $70 today” is equally as good as “receiving $100 in 14 weeks.”

The computer randomly chooses a number to propose a delay. Suppose that the number generated is between 10 and 14, say, it is 13. Since the proposed delay is shorter than that you reported, i.e., 13 < 14, you will get $100. But, the actual delay, 13 weeks, is longer than your acceptable delay. You end up waiting too long. Thus, you lose the opportunity to get the better outcome by falsely reporting a longer delay.

Thus, over-reporting will never make you better off.

In sum, your best strategy is always to answer the questions truthfully.

Any question?

[the next part starts in a new page in the original format]

Part 2: Lottery Choice

Your earnings in this part will be paid in cash at the end of this experiment.

Task

You will answer a set of ten questions assuming the following situation:

You are given two options:

  1. Receive $Y for sure; or

  2. Play a lottery for $Z, where $Z > $Y, and your odds of winning the lottery are P%.

Consider the lowest acceptable odds of winning with which you would be willing to play the lottery.

In a series of questions, you will be asked to fill out the blank below:

Q: To me, “receiving $Y for sure” is equally as good as “receiving $Z with ___ % chance.”

You need to play a lottery to get the larger amount. Decide what odds of winning make the two options the same to you, and fill in that amount.

The actual amounts of $Y and $Z vary from question to question.

After each one of you answers all ten questions, the computer will select one of the questions at random. Your actual payment will be based on your answer to the selected question.

Payment

To determine your chance of winning the lottery, the computer will randomly choose a number between 0 and 100 %. Each of those numbers will be equally likely to be drawn, and the selected number will be the chance of winning.

If the chance of winning the lottery is less than your lowest acceptable odds of winning, you will not play the lottery. Instead, you will receive $Y for sure.

If the chance of winning the lottery is greater than or equal to your lowest acceptable odds of winning, you will play the lottery. If you win the lottery, you will get $Z; and if you lose, you will get nothing.

Example: (For purposes of illustration, we use different amounts than those actually given to you in the experiment.)

Suppose that you were asked the following question.

Q: To me, “receiving $70 for sure” is equally as good as “receiving $120 with ___ % chance.”

Suppose your answer is 58 %, i.e.,

To me, “receiving $70 for sure” is equally as good as “receiving $120 with 58 % chance.”

Then, the computer randomly generates a number between 0 and 100.

If the number is less than 58, e.g., if it is 23, then you do not get to play the lottery. Thus, you get $70 for sure.

If the number is greater than or equal to 58, then you will play the lottery. For example, suppose that the number generated is 84. In this case, you will play a lottery for $120 and your chance of winning is 84 %. If you win the lottery, you will get $120; and if you lose the lottery you will get nothing.

Any question?

Strategy:

Note that this procedure is such that your best response is to write down the minimum odds with which you are willing to play a lottery for $Z.

We now show that truthful reporting is your best strategy. We will illustrate why you will never be better off sending a false report. Let us work through one example. Say that we offer you two amounts, $70 and $120, and ask you to choose the odds of a lottery, P %. Let us just assume, for the sake of argument, that you would be indifferent between receiving $70 for sure and receiving $120 with a 58 % chance. The question is: should you tell us P = 58 when we ask you?

To see why the answer is yes, let us say that you are thinking of not telling us the truth. There are two possible cases, under-reporting or over-reporting. We will show that in either case you might be worse off compared to telling the truth.

1. Under-reporting can make you worse off. By reporting any lower odds than your actual acceptable odds, you can never be better off, and sometimes be worse off.

Suppose that you falsely answered by saying that your acceptable odds were 43 %, even though your true acceptable odds were 58 %, i.e.,

To me, “receiving $70 for sure” is equally as good as “receiving $120 with 43 % chance.”

The computer randomly chooses a number between 0 and 100 % to determine the chance of winning the lottery. Suppose that the number generated is between 43 and 58, say, it is 51. Since the number generated is greater than that you reported, i.e., 51 > 43, you play the lottery and your odds of winning the lottery are 51 %. But, it is lower than your acceptable odds, and thus playing the lottery is worse than receiving $70 for sure. You end up playing a lottery with unacceptably low odds by falsely reporting lower odds.

Thus, under-reporting will never make you better off.

What about stating P greater than 58 %?

2. Over-reporting can make you worse off as well. By reporting any higher odds than your acceptable odds, you may lose the opportunity to play a lottery even if it is preferred to receiving $70 for sure.

Suppose that you falsely answered by saying that your acceptable odds were 77 %, even though your true acceptable odds were 58 %.

To me, “receiving $70 for sure” is equally as good as “receiving $120 with 77 % chance.”

The computer randomly chooses a number between 0 and 100 % to determine the chance of winning the lottery. Suppose that the number generated is between 58 and 77, say, it is 66. Since the number generated is smaller than that you reported, i.e., 66 < 77, you do not play the lottery and you receive $70 for sure. But, the chance of winning the lottery, 66 %, is greater than your acceptable odds. It means you still prefer playing the lottery to receiving $70 for sure. Thus, you lost the opportunity to get the better outcome by falsely reporting higher odds.

Thus, over-reporting will never make you better off.

In sum, your best strategy is always to answer the questions truthfully.

Any question?

Addendum: Further Analysis

This addendum has been newly written for this book chapter.

2.1 Summary

Takeuchi (2011) separates time preference and risk preference by characterizing the consistency of time preference independently of the utility function. Many experiments have been conducted in the literature to elicit the time discount function D in the following equation:

$$\displaystyle{u(y) = D(x,t) \cdot u(x),}$$

where y is the present value of a future option that pays x at time t, D is the discount function, and u is the instantaneous utility function. Notice that almost all of the experiments adjust the level of x and y to find the present value of a future option and then accumulate observations so that those observations will reveal the property of D.

There is, however, a confounding factor. As long as we try to observe the properties of D by altering the level of payments, we cannot separate variation in D from variation in u.

Takeuchi (2011), therefore, invents a new elicitation method that adjusts the timing t instead of the level of the reward x. My idea results in a theoretical characterization of time consistency based solely on timing (see the definitions and Proposition 1 in Sect. 2). I then test my theory in the experiment and observe not only present bias but also future bias.

Readers should notice that, in accordance with Proposition 1, the test usually has to involve four different reward levels and the corresponding four equivalent delays, while Fig. 4.1 of the paper compares only three reward levels and three equivalent delays for illustration purposes. If there is any sort of fixed cost in the equivalent delay, a test that consists of only three equivalent delays will be biased toward future bias.
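As a minimal sketch of the elicitation idea summarized above (under expected utility and the separable representation \(V(x,t) = D(t)u(x)\); the utility and discount functions below are invented for illustration and are not the chapter's estimates), each reward pair from note 22 yields two answers: the equivalent delay t′ such that receiving x today is as good as receiving x′ in t′ days, and the equivalent winning probability p′ such that receiving x for sure is as good as receiving x′ with chance p′. Both equal u(x)/u(x′), so every pair delivers a point D(t′) = p′ of the discount function without ever specifying u.

```python
import numpy as np

# Invented "true" preferences of a synthetic subject (illustration only).
u = lambda x: x ** 0.7                          # utility, unknown to the analyst
D = lambda t: np.exp(-(t / 40.0) ** 1.5)        # discount function, unknown to the analyst
D_inv = lambda d: 40.0 * (-np.log(d)) ** (1 / 1.5)

# Reward pairs taken from note 22 of the chapter.
reward_pairs = [(5, 10), (5, 15), (5, 20), (5, 25), (10, 15), (10, 20), (10, 25)]

points = []
for x, x_hi in reward_pairs:
    ratio = u(x) / u(x_hi)
    t_report = D_inv(ratio)   # "receiving $x today ~ receiving $x_hi in ___ days"
    p_report = ratio          # "$x for sure ~ $x_hi with ___ % chance" (as a fraction)
    points.append((t_report, p_report))

# Each (t', p') pair is a point on D: D(t') = p', recovered without knowing u.
for t_report, p_report in sorted(points):
    print(f"D({t_report:5.1f} days) = {p_report:.3f}   (true D = {D(t_report):.3f})")
```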

2.2 The Follow-Up Experiment on S-Shape Discount Function

The result indicates that the time discount function may be concave or inverse S-shaped, as shown in Fig. 4.10. Thus, I conduct another experiment to test the concavity of the time discount function.

Fig. 4.10 The inverse S-shaped time discount function. Future bias implies that the time discount function is concave

I invent another non-parametric test to check the convexity of the time discount function in Takeuchi (2012). Figure 4.11 shows one of the simplest choice tasks in the experiment.

Fig. 4.11 Questionnaire sample screenshot (translated from Japanese). If a decision maker chooses the left option (100 % 18 days), then it implies that his/her time discount function is concave around t = 18 and inverse S-shaped

The reward is fixed at 2,000 Japanese yen (JPY), but the delay of the payment is uncertain. The decision-maker (DM) is given two options. If the DM chooses the left option, he or she receives 2,000 JPY in 18 days for sure. If the DM chooses the right option, then the delay is determined to be either 11 or 25 days with the given probabilities, 50 %:50 % in this example. Notice that the expected length of the delay is identical across the two options, namely \(18\ \mbox{days} = \frac{1}{2}(11\ \mbox{days} + 25\ \mbox{days})\).

When the DM chooses the left option over the right one, it implies that

$$\displaystyle{D(x,18)u(x) > 0.5D(x,11)u(x) + 0.5D(x,25)u(x)}$$

where x = 2,000 JPY. This inequality immediately implies that the time discount function is concave around t = 18, regardless of the shape of u (see note 38).
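To see the logic of this choice task in numbers, the sketch below compares the sure 18-day delay against the 50:50 mixture of 11 and 25 days under two candidate discount functions. The parameter values are assumed purely for illustration (they are not estimates from the experiment): a discount function that is concave around t = 18 prefers the sure delay, while a standard exponential (convex) one prefers the mixture. The utility term cancels because the reward is fixed at 2,000 JPY.

```python
import math

# Two candidate discount functions (illustrative parameters only).
D_inverse_s = lambda t: math.exp(-(t / 30.0) ** 2)    # concave near t = 0, convex later
D_exponential = lambda t: math.exp(-0.02 * t)         # convex everywhere

for name, D in [("inverse S-shaped", D_inverse_s), ("exponential", D_exponential)]:
    sure = D(18)                            # left option: 2,000 JPY in 18 days for sure
    mixed = 0.5 * D(11) + 0.5 * D(25)       # right option: 11 or 25 days, equally likely
    choice = "left (sure 18 days)" if sure > mixed else "right (11 or 25 days)"
    print(f"{name:16s}: D(18) = {sure:.3f} vs mixture = {mixed:.3f} -> chooses {choice}")
```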

If the time discount function is inverse S-shaped, the fraction of subjects who indicate concavity of their time preference should be decreasing in the expected delay. The experimental result supports this hypothesis, as shown in Fig. 4.12.

Fig. 4.12 The composition of responses for each group. The proportion of choices indicating concavity is decreasing in the expected delay. This pattern is consistent with the inverse S-shaped time discount function

The inverse S-shaped time discount function fits this observation better than the standard convex time discount function. Most of the subjects reveal concavity of time discounting around t = 7 days, which is consistent with the previous result that many subjects exhibited future bias around t = 2 weeks. Few of the subjects are classified as Convex for the questions whose expected delays are 7, 11 and 18 days, although one third of them are Convex when the expected delay is 49 days.

The result tells us that our perception of time is not necessarily monotone. Time passes slowly around t = 0, probably because we feel that the very near future is part of the present. Then it runs fast, and the discount function changes its shape from concave to convex. It then seems that time runs slowly again in the far future, since it is too far away for us to feel the disutility of any additional delay. The concept of time flow is not rigid but elastic and flexible in our cognition.


Copyright information

© 2016 Springer Japan

Cite this chapter

Takeuchi, K. (2016). Non-parametric Test of Time Consistency: Present Bias and Future Bias. In: Ikeda, S., Kato, H., Ohtake, F., Tsutsui, Y. (eds) Behavioral Economics of Preferences, Choices, and Happiness. Springer, Tokyo. https://doi.org/10.1007/978-4-431-55402-8_4

