Abstract
We conduct a field experiment on 427 Israeli soldiers who each rolled a six-sided die in private and reported the outcome. For every point reported, the soldier received an additional half-hour early release from the army base on Thursday afternoon. We find that the higher a soldier’s military entrance score, the more honest he is on average. We replicate this finding on a sample of 156 civilians paid in cash for their die reports. Furthermore, the civilian experiments reveal that two measures of cognitive ability predict honesty, whereas general self-report honesty questions and a consistency check among them are of no value. We provide a rationale for the relationship between cognitive ability and honesty and discuss its generalizability.
Notes
Interestingly, a large literature in personnel psychology debates the usefulness of personality tests as an aid in hiring decisions, job placement and worker evaluation. Most recently, Morgeson et al. (2007) review over 7000 manuscripts on the usefulness of these tests and conclude that they have low predictability of job performance and that alternatives to these self-report measures should be sought. Our findings support this conclusion.
Abeler et al. (2014) conduct an experiment on honesty by telephone on a representative sample of German respondents. Other experiments conducted on soldiers include Goette et al. (2012), who compare the in-group cooperativeness and willingness to punish of extant groups of Swiss soldiers with those of randomly formed groups of soldiers, and Lahav et al. (2011), who distribute questionnaires on trains traveling between major Israeli cities to soldiers, teenagers and university students and show that soldiers have higher subjective discount rates than non-soldiers. In a companion paper based on the same soldier experiments, Ruffle and Tobol (2014) show that temporally distancing decisions from the receipt of payment increases honest reporting. Specifically, soldiers who participated in the die-rolling experiment on earlier days of the week reported lower outcomes on average than those who participated closer to the end of the week. Soldiers’ military entrance scores served merely as a control variable in that analysis and were not explored in any depth. In the current paper, we focus on the relationship between military entrance scores and honesty and test the robustness of our findings on a civilian population.
To a lesser extent they also cheat their colleagues, because a soldier who leaves the army base early necessitates that his uncompleted duties be distributed among those soldiers who remain behind.
With the exception of our purposeful oversampling of religious companies, we view our sample of soldiers as representative of the overall population of Israeli soldiers. In fact, in Sect. 3.2 we will see that the distribution of military entrance scores of soldiers in our sample mirrors the overall distribution. What is more, because military service is mandatory for all Israelis (except for the Arabic-speaking and ultra-orthodox Jewish populations, for whom it is optional), our sample constitutes a representative cross-section of society as a whole.
Consider the following back-of-the-envelope calculation. The average soldier reported a die outcome of 3.87 (see row 1 of the left panel of Table 1), equivalent to 1.94 hours of early release. If we assume, for simplicity, that the median willingness to pay increases linearly with each additional half hour of early release, then the median willingness to pay for 1.94 hours equals 58.1 NIS, for seven minutes of work. Contrast this with combat soldiers’ monthly wage of 700 NIS and non-combat soldiers’ monthly salary of between 300 and 500 NIS, depending on their job. At the time of the experiments, 3.5 NIS equaled 1 USD.
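To make the arithmetic explicit (a minimal reconstruction; the per-half-hour valuation of roughly 15 NIS is backed out from the figures quoted here rather than stated directly in this note): 3.87 points × 0.5 h per point ≈ 1.94 h of early release, and 1.94 h (i.e., 3.87 half-hours) × 15 NIS per half-hour ≈ 58.1 NIS.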
Further evidence against uniformly distributed die outcomes comes from the frequencies of reported 1s and 2s, both significantly below the 16.67 % expected under a uniform distribution (p < .001 from one-sided binomial tests in both cases). At the same time, the frequencies of 4s and 5s are significantly greater than 16.67 % (p = .04 and p < .001, respectively). Only for the frequencies of reported 3s and 6s can equality with 16.67 % not be rejected (p = .13 and p = .38, respectively).
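For reference, the one-sided binomial test behind these p values takes the standard form: letting x denote the count of a given reported outcome among the n = 427 reports, the p value in the under-reporting direction is p = Σ_{k=0}^{x} C(n, k) (1/6)^k (5/6)^{n−k}, while the over-reporting direction sums from k = x to n instead.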
Incomplete cheating appears to be a robust finding in the emerging literature on cheating, regardless of whether the die-rolling paradigm (e.g., Shalvi et al. 2011; Fischbacher and Föllmi-Heusi 2013; Hao and Houser 2013) or some other experimental method is used (e.g., Gneezy 2005; Charness and Dufwenberg 2006; Erat and Gneezy 2012).
Interestingly, Daniel Kahneman in large part developed the structured interview protocol, which remains largely intact to this day (Kahneman 2002).
In fact, the kaba exam is only the first of several screening devices used to determine eligibility to become an officer. Only at the end of the first year of military service are additional selection criteria applied to those eligible soldiers with kaba scores of 52 or more, such as the recommendations of commanding officers, a personal interview with the soldier’s commanding officers and a sergeant’s course.
Neither the Wilcoxon–Mann–Whitney rank-sum test nor the Kolmogorov–Smirnov test rejects the equality of the female and male distributions of military entrance scores (p = .20 and p = .66, respectively).
The Israel Defense Forces do not make publicly available the distribution of military entrance scores.
Warner and Pleeter (2001) also observe unique behavior among the two highest categories of entrance exam scores in the U.S. military. In particular, they exploit a natural experiment conducted by the U.S. Department of Defense to reduce military personnel, in which mid-career personnel were offered the choice between a lump-sum separation payment and an annuity valued at considerably more in present-value terms. Personnel belonging to the two top categories display lower discount rates (i.e., more patience) than their peers, as evidenced by their higher likelihood of preferring an annuity to a lump-sum retirement payment.
The regressor is expressed as soldier i’s kaba minus 52 for ease of interpretation. Thus, the constant of 3.76 refers to the average die outcome reported by a soldier with a kaba of 52.
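Schematically, and leaving the slope and control terms unspecified (a sketch of the specification as described here, not the full regression): reported outcome_i = 3.76 + β · (kaba_i − 52) + controls + ε_i, so the constant is the predicted report for a soldier with kaba_i = 52.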
Also included but not shown are measures of the soldier’s military unit and military base peer effects (neither of these measures is significantly different from zero in any of the regressions), as well as indicators for the day of the week on which the experiment was conducted with Sunday as the omitted day (all of the other days of the week are positive and significantly different from zero in all regressions and are discussed in detail, along with the peer effects variables, in Ruffle and Tobol 2014).
To see why we need to invert the reports of high-kaba soldiers, suppose we wish to determine whether a group that systematically under-reports is more or less honest than a group that systematically over-reports. To render the two distributions of die reports comparable, we first need to invert one of them before performing the appropriate statistical test. Because soldiers with a kaba of 52 or more neither unambiguously under-report nor unambiguously over-report, we compare both their original and their inverted die-report distributions with that of soldiers with a kaba below 52. Both methods lead to the same conclusion: soldiers with high kaba scores are more honest.
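Concretely, inverting a die report r presumably amounts to replacing it with 7 − r (so 1 ↔ 6, 2 ↔ 5 and 3 ↔ 4), which maps a distribution of under-reports onto the comparable distribution of over-reports.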
Fischbacher and Föllmi-Heusi (2013) also report the results of a double-anonymous version of their die-rolling experiment in which a subject’s reported die outcome is unknown to other subjects and to the experimenter. They find little difference in the distribution of reported outcomes across anonymity conditions. Mazar et al. (2008) report a similarly negligible difference in the number of matrices subjects claim to have solved when anonymity vis-à-vis the experimenter is added. Reputational concerns may nonetheless be more important in our setting, in which the payment is different (early release rather than cash) and a subject’s reported die outcome is observable by both his commanding officer and the fellow soldiers with whom he interacts on a daily basis.
Our observations that as many as 40 % (.47 × (1 − .15)) of soldiers correctly guessed their kaba and that those who guessed incorrectly were off by “only” 2 points on average are not surprising. Before entering the military, every recruit provides a preference ordering over the military units in which he wishes to serve. Since different units require different kaba thresholds, a recruit’s acceptance to or rejection from his preferred unit(s) provides him with an update about the possible range of his kaba.
The military variables “WTP for half-hour early release” and military unit peer effects are of course absent from the civilian sample, as are the day-of-the-week indicators, since all civilian participants received payment immediately after participating rather than on Thursday afternoon as in the soldier sample.
The significance (or lack thereof) of each of these variables is robust to whichever subset of regressors is included in the specification.
In question 31, we ask “Which of the following sentences best describes you?” with “a. I always tell the truth,” “b. I almost always tell the truth,” “c. I usually tell the truth,” and “d. I tell the truth when it is convenient for me” as the possible responses. Question 46 reads, “Do you speak the truth in your daily life?” with the set of answers, “a. always,” “b. generally,” “c. sometimes,” and “d. when I stand to gain from it.” The absence of a one-to-one correspondence between the two sets of responses requires us to be liberal in our definition of consistency and minimizes the likelihood of a type-1 error in incorrectly inferring that a subject is lying or inconsistent in responding to the two questions. While 31a corresponds perfectly to 46a, 31b may be consistent with either 46a or 46b; 31c matches 46b or 46c, and 31d may correspond to 46c or 46d. Even with this charitable definition of consistency, we still find that 18 % of subjects unambiguously contradict themselves in responding to the two questions with 57 % of the inconsistent choices being 31a (“I always tell the truth”) and 46c (“sometimes”).
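The consistency classification just described amounts to a simple lookup from each response to question 31 to the set of responses to question 46 deemed consistent with it. The following minimal sketch (in Python; the names and the example are ours for illustration, not part of the study’s materials) encodes that charitable mapping and flags contradictory pairs.

# Charitable consistency mapping between question 31 and question 46, as defined above.
CONSISTENT_WITH = {
    "31a": {"46a"},          # "I always tell the truth"  <->  "always"
    "31b": {"46a", "46b"},   # "almost always"            <->  "always" or "generally"
    "31c": {"46b", "46c"},   # "usually"                  <->  "generally" or "sometimes"
    "31d": {"46c", "46d"},   # "when convenient"          <->  "sometimes" or "when I stand to gain"
}

def is_consistent(answer_31, answer_46):
    """Return True if the pair of answers is consistent under the charitable mapping."""
    return answer_46 in CONSISTENT_WITH[answer_31]

# The most common contradiction reported in this note: 31a ("always") with 46c ("sometimes").
print(is_consistent("31a", "46c"))  # False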
Numerous studies in economics demonstrate that higher cognitive ability predicts a number of desirable traits and outcomes, such as lower risk aversion and more patient time preferences (see, e.g., Frederick 2005 and the references therein as well as Burks et al. 2009, Dohmen et al. 2010, Oechssler et al. 2009). Oechssler et al. (2009) also show that subjects with high CRT scores are less prone than their low CRT-score peers to both the conjunction fallacy and to conservatism in probability updating, while the two groups are equally susceptible to anchoring.
Responses to the four self-report honesty questions are coded (as in Online Appendix B) such that higher values correspond to less honesty. The coefficient of −.29 on the self-report honesty variable has a p value of .11, but its negative sign implies that the less truthful a subject claims to be, the lower his reported die outcome.
In a within-subject online experiment, Hugh-Jones (2015) finds that honest behavior is positively correlated across a coin-flip experiment and a quiz experiment. Yet a self-report honesty question about whether lying in one’s self-interest is justifiable fails to predict behavior in either honesty experiment. At the same time, Hugh-Jones also includes self-report questions about whether the respondent had engaged in any of four ethically questionable actions in the past 12 months (e.g., avoiding the fare on public transport, fabricating information on a job application). Reports of unethical actions do predict dishonesty in both experiments. These findings suggest that questions about actual participation in specific forms of dishonest behavior may be better predictors of dishonesty in incentivized experiments than general self-report questions about honesty.
Somewhat relatedly, numerous social psychology studies show that high self-control and the ability to overcome impulses are associated with higher grades, better relationships and interpersonal skills (see, e.g., Tangney et al. 2004 and the references therein). Similarly, Shalvi et al. (2012) show that dishonesty increases when subjects face time pressure in the form of insufficient time to fully contemplate their reporting decision.
The forward-looking orientation implicit in this concern for one’s future self-image is consistent with the link found in the literature between CRT scores and more patient time preferences (see the references in footnote 23) and with the theory posited by Gottfredson and Hirschi (1990) that the primary cause of deviancy is low self-control, namely, the tendency of individuals to pursue short-term gratification without consideration of the long-term consequences of their acts.
This explanation raises the question whether the same relationship between cognitive ability and honesty would continue to hold in a high-stakes experiment in which the benefit to cheating is considerably higher.
Simon (1990) provides a theoretical rationale for the evolutionary success of social norms such as honesty, based on docility and an inability to distinguish socially prescribed behaviors that contribute to group fitness from those that reduce individual fitness.
References
Abeler, J., Becker, A., & Falk, A. (2014). Representative evidence on lying costs. Journal of Public Economics, 113, 96–104.
Arthur, W., & Day, D. V. (1994). Development of a short form for the Raven advanced progressive matrices test. Educational and Psychological Measurement, 54, 394–403.
Azar, O. H., Yosef, S., & Bar-Eli, M. (2013). Do customers return excessive change in a restaurant? A field experiment on dishonesty. Journal of Economic Behavior & Organization, 93, 219–226.
Brooks, C. (2013). Employee theft on the rise and expected to get worse. Business News Daily, June 19, 2013, Retrieved from, http://www.businessnewsdaily.com/4657-employee-theft-rising.html.
Burks, S. V., Carpenter, J. P., Goette, L., & Rustichini, A. (2009). Cognitive skills affect economic preferences, strategic behavior, and job attachment. Proceedings of the National Academy of Sciences, 106(19), 7745–7750.
Charness, G., & Dufwenberg, M. (2006). Promises and partnerships. Econometrica, 74(6), 1579–1601.
Dohmen, T., Falk, A., Huffman, D., & Sunde, U. (2010). Are risk aversion and impatience related to cognitive ability? American Economic Review, 100(3), 1238–1260.
Dreber, A., & Johannesson, M. (2008). Gender differences in deception. Economics Letters, 99(1), 197–199.
Erat, S., & Gneezy, U. (2012). White lies. Management Science, 58(4), 723–733.
Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise: An experimental study on cheating. Journal of the European Economic Association, 11(3), 525–547.
Fosgaard, T. R., Hansen, J. G., & Piovesan, M. (2013). Separating will from grace: An experiment on conformity and awareness in cheating. Journal of Economic Behavior & Organization, 93, 279–284.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42.
Gino, F., & Ariely, D. (2012). The dark side of creativity: Original thinkers can be more dishonest. Journal of Personality and Social Psychology, 102(3), 445–459.
Gneezy, U. (2005). Deception: The role of consequences. American Economic Review, 95(1), 384–394.
Goette, L., Huffman, D., & Meier, S. (2012). The impact of social ties on group interactions: Evidence from minimal groups and randomly assigned real groups. American Economic Journal: Microeconomics, 4(1), 101–115.
Gottfredson, M. R., & Hirschi, T. (1990). A general theory of crime. Stanford: Stanford University Press.
Hao, L., & Houser, D. (2013). Perceptions, intentions, and cheating. Unpublished manuscript.
Hartshorne, H., & May, M. A. (1928). Studies in the nature of character, vol 1: Studies in deceit. New York: Macmillan.
Hugh-Jones, D. (2015). Way to measure honesty: A new experiment and two questionnaires. Unpublished manuscript.
Kahneman, D. (2002). Biographical. The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2002. Retrieved from http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2002/kahneman-bio.html.
Lahav, E., Benzion, U., & Shavit, T. (2011). The effect of military service on soldiers’ time preferences—Evidence from Israel. Judgment and Decision Making, 6(2), 130–138.
Lerer, Z. (2009). Groups of quality: The social history of the IDF selection system. Ph.D. dissertation, Tel Aviv University.
Levitt, S. D. (2006). White-collar crime writ small: A case study of bagels, donuts, and the honor system. American Economic Review Papers and Proceedings, 96(2), 290–294.
Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45(6), 633–644.
Morgeson, F. P., Campion, M. A., Dipboye, R. L., Hollenbeck, J. R., Murphy, K., & Schmitt, N. (2007). Reconsidering the use of personality tests in personnel selection contexts. Personnel Psychology, 60(3), 683–729.
Oechssler, J., Roider, A., & Schmitz, P. W. (2009). Cognitive abilities and behavioral biases. Journal of Economic Behavior & Organization, 72(1), 147–152.
Ones, D. S., Dilchert, S., Viswesvaran, C., & Judge, T. A. (2007). In support of personality assessment in organizational settings. Personnel Psychology, 60(4), 995–1027.
Pruckner, G. J., & Sausgruber, R. (2013). Honesty on the streets: A natural field experiment on newspaper purchasing. Journal of the European Economic Association, 11(3), 661–679.
Raven, J. C. (1936). Mental tests used in genetic studies: the performance of related individuals on tests mainly educative and mainly reproductive. MSc Thesis, University of London, London.
Rosenbaum, S. M., Billinger, S., & Stieglitz, N. (2014). Let’s be honest: A review of experimental evidence of honesty and truth-telling. Journal of Economic Psychology, 45, 181–196.
Ruffle, B. J., & Tobol, Y. (2014). Honest on Mondays: Honesty and the temporal distance between decisions and payoffs. European Economic Review, 65, 126–135.
Shalvi, S., Dana, J., Handgraaf, M. J. J., & De Dreu, C. K. W. (2011). Justified ethicality: Observing desired counterfactuals modifies ethical perceptions and behavior. Organizational Behavior and Human Decision Processes, 115, 181–190.
Shalvi, S., Eldar, O., & Bereby-Meyer, Y. (2012). Honesty requires time (and lack of justifications). Psychological Science, 23(10), 1264–1270.
Simon, H. A. (1990). A mechanism for social selection and successful altruism. Science, 250(4988), 1665–1668.
Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72(2), 271–324.
Unger, S. M. (1964). Relation between intelligence and socially-approved behavior: A methodological cautionary note. Child Development, 35(1), 299–301.
Warner, J. T., & Pleeter, S. (2001). The personal discount rate: Evidence from military downsizing programs. American Economic Review, 91(1), 33–53.
Wikipedia, http://he.wikipedia.org/wiki/.
Acknowledgments
We thank Johannes Abeler, Yuval Arbel, Ofer Azar, Ronen Bar-El, Bram Cadsby, Danny Cohen-Zada, Leif Danziger, Nadja Dwenger, Naomi Feldman, Lan Guo, Shachar Kariv, Jonathan Mamujee, Mattia Pavoni, Chet Robie, Tata Pyatigorsky-Ruffle, Jonathan Schulz, Ze’ev Shtudiner, Justin Smith, Fei Song, Michal Kolodner-Tobol, Ro’i Zultan, an editor of this journal, David Cooper, two anonymous referees and numerous seminar participants for helpful comments. We also are grateful to Capt. Sivan Levi and Meytal Sasson for research assistance, Capt. Itamar Cohen for facilitating the soldier experiments and all of the commanding officers for granting us access to their units. A preliminary version of this paper circulated under the title, “Screening for Honesty”.