Clever enough to tell the truth


We conduct a field experiment on 427 Israeli soldiers who each rolled a six-sided die in private and reported the outcome. For every point reported, the soldier received an additional half-hour early release from the army base on Thursday afternoon. We find that the higher a soldier’s military entrance score, the more honest he is on average. We replicate this finding on a sample of 156 civilians paid in cash for their die reports. Furthermore, the civilian experiments reveal that two measures of cognitive ability predict honesty, whereas general self-report honesty questions and a consistency check among them are of no value. We provide a rationale for the relationship between cognitive ability and honesty and discuss its generalizability.




  1.

    Interestingly, a large literature in personnel psychology debates the usefulness of personality tests as an aid in hiring decisions, job placement and worker evaluation. Most recently, Morgeson et al. (2007) review over 7000 manuscripts on the usefulness of these tests and conclude that they have low predictability of job performance and that alternatives to these self-report measures should be sought. Our findings support this conclusion.

  2.

    Abeler et al. (2014) conduct an experiment on honesty by telephone on a representative sample of German respondents. Other experiments conducted on soldiers include Goette et al. (2012), who compare the in-group cooperativeness and willingness to punish of extant groups of Swiss soldiers with those of randomly formed groups of soldiers, and Lahav et al. (2011), who distribute questionnaires on trains traveling between major Israeli cities to soldiers, teenagers and university students and show that soldiers have higher subjective discount rates than non-soldiers. In a companion paper based on the same soldier experiments, Ruffle and Tobol (2014) show that temporally distancing decisions from the receipt of payment increases honest reporting. Specifically, soldiers who participated in the die-rolling experiment on earlier days of the week reported lower outcomes on average than those who participated closer to the end of the week. Soldiers’ military entrance scores served merely as a control variable in that analysis and were not explored in any depth. In the current paper, we focus on the relationship between military entrance scores and honesty and test the robustness of our findings on a civilian population.

  3.

    To a lesser extent, they also cheat their fellow soldiers: a soldier who leaves the army base early leaves his uncompleted duties to be distributed among those soldiers who remain behind.

  4.

    With the exception of our purposeful oversampling of religious companies, we view our sample of soldiers as representative of the overall population of Israeli soldiers. In fact, in Sect. 3.2 we will see that the distribution of military entrance scores of soldiers in our sample mirrors the overall distribution. What is more, because military service is mandatory for all Israelis (except for the Arabic-speaking and ultra-orthodox Jewish populations, for whom it is optional), our sample constitutes a representative cross-section of society as a whole.

  5.

    Consider the following back-of-the-envelope calculation. The average soldier reported a die outcome of 3.87 (see row 1 of the left panel of Table 1), equivalent to 1.94 h of early release. If we assume, for simplicity, that the median willingness to pay increases linearly with each additional half hour of early release, then the median willingness to pay for 1.94 h equals 58.1 NIS, all for seven minutes of work. Contrast this with combat soldiers’ monthly wage of 700 NIS and non-combat soldiers’ monthly salary of between 300 and 500 NIS, depending on their job. At the time of the experiments, 3.5 NIS equaled 1 USD.
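    The arithmetic above can be checked in a few lines. The sketch below uses only figures stated in the text; the per-half-hour rate is inferred from the reported total (roughly 15 NIS per half hour), not reported in the paper itself.

    ```python
    # Back-of-the-envelope check of the figures in this footnote.
    mean_report = 3.87                 # average reported die outcome
    hours_early = mean_report * 0.5    # each point = half an hour of early release
    implied_rate = 58.1 / mean_report  # implied median WTP per half hour (NIS)

    print(round(hours_early, 2))   # 1.94, as stated in the text
    print(round(implied_rate, 1))  # ~15.0 NIS per half hour (inferred)
    ```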

  6.

    Further evidence against uniformly distributed die outcomes comes from the frequencies of reported 1s and 2s, both significantly less than the 16.67 % expected under a uniform distribution (p < .001 from one-sided binomial tests in both cases). At the same time, the frequencies of 4s and 5s are significantly greater than 16.67 % (p = .04 and p < .001, respectively). Only for the reported 3s and 6s can equality with 16.67 % not be rejected (p = .13 and p = .38, respectively).
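    The one-sided binomial tests reported here are straightforward to reproduce. The sketch below computes the exact lower tail with standard-library Python; the count of reported 1s is hypothetical (the paper reports only p-values), so the numbers illustrate the mechanics rather than the actual data.

    ```python
    from math import comb

    def binom_lower_tail(k, n, p):
        """Exact one-sided p-value P(X <= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    n = 427      # number of soldiers in the sample
    p0 = 1 / 6   # expected frequency of each face under honest reporting
    k = 45       # hypothetical number of reported 1s (not from the paper)

    # Test whether the observed frequency of 1s is significantly below 1/6
    p_value = binom_lower_tail(k, n, p0)
    print(p_value < 0.001)  # True for this hypothetical count
    ```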

  7.

    Incomplete cheating appears to be a robust finding in the emerging literature on cheating regardless of whether the die-rolling paradigm (e.g., Shalvi et al. 2011; Fischbacher and Föllmi-Heusi 2013; Hao and Houser 2013) or some other experimental method is used (e.g., Gneezy 2005; Charness and Dufwenberg 2006; Erat and Gneezy 2012).

  8.

    Interestingly, Daniel Kahneman was largely responsible for developing the structured interview protocol, which remains largely intact to this day (Kahneman 2002).

  9.

    In fact, the kaba exam is only the first of several screening devices used to determine eligibility to become an officer. Only at the end of the first year of military service are additional selection criteria applied to those eligible soldiers with kaba scores of 52 or more, such as the recommendations of commanding officers, a personal interview with the soldier’s commanding officers and a sergeant’s course.

  10.

    Neither the Wilcoxon–Mann–Whitney rank-sum test nor the Kolmogorov–Smirnov test rejects the equality of the female and male distributions of military entrance scores (p = .20 and p = .66, respectively).

  11.

    The Israel Defense Forces do not make publicly available the distribution of military entrance scores.

  12.

    Warner and Pleeter (2001) also observe unique behavior among the two highest categories of entrance exam scores in the U.S. military. In particular, they exploit a natural experiment conducted by the U.S. Department of Defense to reduce military personnel in which mid-career personnel were offered the choice between a lump-sum separation payment and an annuity valued at considerably more in present terms. Personnel belonging to the top groups display lower rates of discount (i.e., more patience) than their peers, as evidenced by their higher likelihood of preferring an annuity to a lump-sum retirement payment.

  13.

    The regressor is expressed as soldier i’s kaba minus 52 for ease of interpretation. Thus, the constant of 3.76 refers to the average die outcome reported by a soldier with a kaba of 52.
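    The interpretation of the centered regressor can be seen with a small ordinary-least-squares sketch. The data below are synthetic, constructed for illustration so that the fitted intercept matches the reported constant of 3.76; only the centering logic is the point.

    ```python
    # Synthetic (kaba, mean die report) pairs, exactly linear for clarity
    kaba = [50, 52, 54, 60, 70]
    report = [3.70, 3.76, 3.82, 4.00, 4.30]

    x = [k - 52 for k in kaba]     # centered regressor: kaba minus 52
    mx = sum(x) / len(x)
    my = sum(report) / len(report)

    # Simple OLS slope and intercept
    beta = sum((a - mx) * (b - my) for a, b in zip(x, report)) \
           / sum((a - mx) ** 2 for a in x)
    alpha = my - beta * mx         # intercept = fitted report at kaba = 52

    print(round(alpha, 2))  # 3.76: predicted die outcome at a kaba of 52
    ```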

  14.

    Also included but not shown are measures of the soldier’s military-unit and military-base peer effects; neither measure is significantly different from zero in any of the regressions. Further included are indicators for the day of the week on which the experiment was conducted, with Sunday as the omitted day; all of the other days of the week are positive and significantly different from zero in all regressions. Both the day-of-the-week indicators and the peer-effects variables are discussed in detail in Ruffle and Tobol (2014).

  15.

    Contrast this with Dreber and Johannesson (2008), who find that men are more likely than women to send deceptive, self-serving messages to their partners in a sender–receiver game modeled after Gneezy (2005).

  16.

    To see why we need to invert the reports for high-kaba soldiers, suppose we wish to determine whether a group that systematically under-reports is more or less honest than a group that systematically over-reports. To render the two distributions of die reports comparable, we first need to invert one of them before performing the appropriate statistical test. Because soldiers with a kaba of 52 or more neither unambiguously under-report nor unambiguously over-report, we compare both their original and their inverted die-report distributions with that of soldiers with a kaba below 52. Both methods lead to the same conclusion: soldiers with high kaba scores are more honest.
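    The inversion in question is just the mirror map r ↦ 7 − r on the faces of a six-sided die. A minimal sketch (the report lists are hypothetical):

    ```python
    def invert(reports):
        """Mirror a distribution of six-sided die reports: 1<->6, 2<->5, 3<->4."""
        return [7 - r for r in reports]

    # Hypothetical reports from a group that systematically over-reports ...
    over_reporters = [6, 6, 5, 4, 6, 5]
    # ... become comparable to a group that systematically under-reports.
    print(invert(over_reporters))  # [1, 1, 2, 3, 1, 2]
    ```

    Applying the map twice recovers the original reports, so inverting either one of the two distributions suffices.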

  17.

    Fischbacher and Föllmi-Heusi (2013) also report the results of a double-anonymous version of their die-rolling experiment in which a subject’s reported die outcome is unknown to other subjects and to the experimenter. They find little difference in the distribution of reported outcomes across anonymity conditions. Mazar et al. (2008) report a similarly negligible difference in the number of matrices subjects claim to have solved when anonymity vis-à-vis the experimenter is added. Reputational concerns may nonetheless be more important in our setting in which the payment is different and a subject’s reported die outcome is observable by both his commanding officer and fellow soldiers with whom he interacts on a daily basis.

  18.

    Our observations that as many as 40 % (.47 × (1 − .15)) of soldiers correctly guessed their kaba and that those who guessed incorrectly were off by “only” 2 points on average are not surprising. Before entering the military, every recruit provides a preference ordering over the military units in which he wishes to serve. Since different units require different kaba thresholds, a recruit’s acceptance to or rejection from his preferred unit(s) provides him with an update about the possible range of his kaba.

  19.

    The military variables “WTP for half-hour early release” and the military-unit peer effects are of course absent from the civilian sample, as are the day-of-the-week indicators, since all civilian participants received payment immediately after participating rather than on Thursday afternoon like the soldier sample.

  20.

    The significance (or lack thereof) of each of these variables is robust to whichever subset of regressors is included in the specification.

  21.

    In question 31, we ask “Which of the following sentences best describes you?” with “a. I always tell the truth,” “b. I almost always tell the truth,” “c. I usually tell the truth,” and “d. I tell the truth when it is convenient for me” as the possible responses. Question 46 reads, “Do you speak the truth in your daily life?” with the set of answers, “a. always,” “b. generally,” “c. sometimes,” and “d. when I stand to gain from it.” The absence of a one-to-one correspondence between the two sets of responses requires us to be liberal in our definition of consistency and minimizes the likelihood of a type-1 error in incorrectly inferring that a subject is lying or inconsistent in responding to the two questions. While 31a corresponds perfectly to 46a, 31b may be consistent with either 46a or 46b; 31c matches 46b or 46c, and 31d may correspond to 46c or 46d. Even with this charitable definition of consistency, we still find that 18 % of subjects unambiguously contradict themselves in responding to the two questions with 57 % of the inconsistent choices being 31a (“I always tell the truth”) and 46c (“sometimes”).
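    The charitable correspondence between the two questions can be written down as a simple lookup table. The sketch below encodes exactly the matches described above; the response labels (“31a”, “46c”, etc.) are ours.

    ```python
    # Responses to question 46 deemed consistent with each response to
    # question 31: 31a matches only 46a; 31b matches 46a or 46b;
    # 31c matches 46b or 46c; 31d matches 46c or 46d.
    CONSISTENT = {
        "31a": {"46a"},
        "31b": {"46a", "46b"},
        "31c": {"46b", "46c"},
        "31d": {"46c", "46d"},
    }

    def is_consistent(ans31, ans46):
        """True if the pair of responses does not contradict itself."""
        return ans46 in CONSISTENT[ans31]

    print(is_consistent("31a", "46a"))  # True
    print(is_consistent("31a", "46c"))  # False: the most common contradiction
    ```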

  22.

    Numerous studies in economics demonstrate that higher cognitive ability predicts a number of desirable traits and outcomes, such as lower risk aversion and more patient time preferences (see, e.g., Frederick 2005 and the references therein as well as Burks et al. 2009, Dohmen et al. 2010, Oechssler et al. 2009). Oechssler et al. (2009) also show that subjects with high CRT scores are less prone than their low CRT-score peers to both the conjunction fallacy and to conservatism in probability updating, while the two groups are equally susceptible to anchoring.

  23.

    Responses to the four self-report honesty questions are coded (as in Online Appendix B) such that higher values correspond to less honesty. The coefficient of −.29 on the self-report honesty variable has a p value of .11; but its negative sign implies that the less truthful a subject claims to be, the lower his reported die outcome.

  24.

    In a within-subject online experiment, Hugh-Jones (2015) finds that honest behavior is positively correlated in a coin-flip and a quiz experiment. Yet, a self-report honesty question about whether lying in one’s self-interest is justifiable fails to predict behavior in either honesty experiment. At the same time, Hugh-Jones also includes self-report questions about whether the respondent had engaged in any one of four ethically questionable actions in the past 12 months (e.g., avoiding the fare on public transport, fabricating information on a job application). Reports of unethical actions do predict dishonesty in both experiments. These findings suggest that questions about actual participation in specific forms of dishonest behavior may be better predictors of dishonesty in incentivized experiments than general self-report questions about honesty.

  25.

    Somewhat relatedly, numerous social psychology studies show that high self-control and the ability to overcome impulses are associated with higher grades, better relationships and interpersonal skills (see, e.g., Tangney et al. 2004 and the references therein). Similarly, Shalvi et al. (2012) show that dishonesty increases when subjects face time pressure in the form of insufficient time to fully contemplate their reporting decision.

  26.

    The forward-looking orientation implicit in this concern for one’s future self-image is consistent with the link found in the literature between CRT scores and more patient time preferences (see the references in footnote 22) and with the theory posited by Gottfredson and Hirschi (1990) that the primary cause of deviancy is low self-control, namely, the tendency of individuals to pursue short-term gratification without consideration of the long-term consequences of their acts.

  27.

    This explanation raises the question whether the same relationship between cognitive ability and honesty would continue to hold in a high-stakes experiment in which the benefit to cheating is considerably higher.

  28.

    Simon (1990) provides a theoretical rationale for the evolutionary success of social norms such as honesty based on docility and an inability to distinguish socially prescribed behaviors that contribute to group fitness from those that reduce individual fitness.


  1. Abeler, J., Becker, A., & Falk, A. (2014). Representative evidence on lying costs. Journal of Public Economics, 113, 96–104.

  2. Arthur, W., & Day, D. V. (1994). Development of a short form for the Raven Advanced Progressive Matrices test. Educational and Psychological Measurement, 54, 394–403.

  3. Azar, O. H., Yosef, S., & Bar-Eli, M. (2013). Do customers return excessive change in a restaurant? A field experiment on dishonesty. Journal of Economic Behavior & Organization, 93, 219–226.

  4. Brooks, C. (2013). Employee theft on the rise and expected to get worse. Business News Daily, June 19, 2013.

  5. Burks, S. V., Carpenter, J. P., Goette, L., & Rustichini, A. (2009). Cognitive skills affect economic preferences, strategic behavior, and job attachment. Proceedings of the National Academy of Sciences, 106(19), 7745–7750.

  6. Charness, G., & Dufwenberg, M. (2006). Promises and partnerships. Econometrica, 74(6), 1579–1601.

  7. Dohmen, T., Falk, A., Huffman, D., & Sunde, U. (2010). Are risk aversion and impatience related to cognitive ability? American Economic Review, 100(3), 1238–1260.

  8. Dreber, A., & Johannesson, M. (2008). Gender differences in deception. Economics Letters, 99(1), 197–199.

  9. Erat, S., & Gneezy, U. (2012). White lies. Management Science, 58(4), 723–733.

  10. Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise: An experimental study on cheating. Journal of the European Economic Association, 11(3), 525–547.

  11. Fosgaard, T. R., Hansen, J. G., & Piovesan, M. (2013). Separating will from grace: An experiment on conformity and awareness in cheating. Journal of Economic Behavior & Organization, 93, 279–284.

  12. Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42.

  13. Gino, F., & Ariely, D. (2012). The dark side of creativity: Original thinkers can be more dishonest. Journal of Personality and Social Psychology, 102(3), 445–459.

  14. Gneezy, U. (2005). Deception: The role of consequences. American Economic Review, 95(1), 384–394.

  15. Goette, L., Huffman, D., & Meier, S. (2012). The impact of social ties on group interactions: Evidence from minimal groups and randomly assigned real groups. American Economic Journal: Microeconomics, 4(1), 101–115.

  16. Gottfredson, M. R., & Hirschi, T. (1990). A general theory of crime. Stanford: Stanford University Press.

  17. Hao, L., & Houser, D. (2013). Perceptions, intentions, and cheating. Unpublished manuscript.

  18. Hartshorne, H., & May, M. A. (1928). Studies in the nature of character, vol. 1: Studies in deceit. New York: Macmillan.

  19. Hugh-Jones, D. (2015). Way to measure honesty: A new experiment and two questionnaires. Unpublished manuscript.

  20. Kahneman, D. (2002). The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2002: Daniel Kahneman, Vernon L. Smith.

  21. Lahav, E., Benzion, U., & Shavit, T. (2011). The effect of military service on soldiers’ time preferences—Evidence from Israel. Judgment and Decision Making, 6(2), 130–138.

  22. Lerer, Z. (2009). Groups of quality: The social history of the IDF selection system. Ph.D. dissertation, Tel Aviv University.

  23. Levitt, S. D. (2006). White-collar crime writ small: A case study of bagels, donuts, and the honor system. American Economic Review Papers and Proceedings, 96(2), 290–294.

  24. Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45(6), 633–644.

  25. Morgeson, F. P., Campion, M. A., Dipboye, R. L., Hollenbeck, J. R., Murphy, K., & Schmitt, N. (2007). Reconsidering the use of personality tests in personnel selection contexts. Personnel Psychology, 60(3), 683–729.

  26. Oechssler, J., Roider, A., & Schmitz, P. W. (2009). Cognitive abilities and behavioral biases. Journal of Economic Behavior & Organization, 72(1), 147–152.

  27. Ones, D. S., Dilchert, S., Viswesvaran, C., & Judge, T. A. (2007). In support of personality assessment in organizational settings. Personnel Psychology, 60(4), 995–1027.

  28. Pruckner, G. J., & Sausgruber, R. (2013). Honesty on the streets: A natural field experiment on newspaper purchasing. Journal of the European Economic Association, 11(3), 661–679.

  29. Raven, J. C. (1936). Mental tests used in genetic studies: The performance of related individuals on tests mainly educative and mainly reproductive. MSc thesis, University of London, London.

  30. Rosenbaum, S. M., Billinger, S., & Stieglitz, N. (2014). Let’s be honest: A review of experimental evidence of honesty and truth-telling. Journal of Economic Psychology, 45, 181–196.

  31. Ruffle, B. J., & Tobol, Y. (2014). Honest on Mondays: Honesty and the temporal distance between decisions and payoffs. European Economic Review, 65, 126–135.

  32. Shalvi, S., Dana, J., Handgraaf, M. J. J., & De Dreu, C. K. W. (2011). Justified ethicality: Observing desired counterfactuals modifies ethical perceptions and behavior. Organizational Behavior and Human Decision Processes, 115, 181–190.

  33. Shalvi, S., Eldar, O., & Bereby-Meyer, Y. (2012). Honesty requires time (and lack of justifications). Psychological Science, 23(10), 1264–1270.

  34. Simon, H. A. (1990). A mechanism for social selection and successful altruism. Science, 250(4988), 1665–1668.

  35. Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72(2), 271–324.

  36. Unger, S. M. (1964). Relation between intelligence and socially-approved behavior: A methodological cautionary note. Child Development, 35(1), 299–301.

  37. Warner, J. T., & Pleeter, S. (2001). The personal discount rate: Evidence from military downsizing programs. American Economic Review, 91(1), 33–53.



Acknowledgments

We thank Johannes Abeler, Yuval Arbel, Ofer Azar, Ronen Bar-El, Bram Cadsby, Danny Cohen-Zada, Leif Danziger, Nadja Dwenger, Naomi Feldman, Lan Guo, Shachar Kariv, Jonathan Mamujee, Mattia Pavoni, Chet Robie, Tata Pyatigorsky-Ruffle, Jonathan Schulz, Ze’ev Shtudiner, Justin Smith, Fei Song, Michal Kolodner-Tobol, Ro’i Zultan, an editor of this journal, David Cooper, two anonymous referees and numerous seminar participants for helpful comments. We are also grateful to Capt. Sivan Levi and Meytal Sasson for research assistance, to Capt. Itamar Cohen for facilitating the soldier experiments and to all of the commanding officers for granting us access to their units. A preliminary version of this paper circulated under the title “Screening for Honesty”.

Author information



Corresponding author

Correspondence to Bradley J. Ruffle.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 190 kb)


About this article


Cite this article

Ruffle, B.J., Tobol, Y. Clever enough to tell the truth. Exp Econ 20, 130–155 (2017).



Keywords

  • Honesty
  • Cognitive ability
  • Soldiers
  • High non-monetary stakes

JEL Codes

  • C93
  • M51