Can Honesty Oaths, Peer Interaction, or Monitoring Mitigate Lying?


We introduce several new variants of the dice experiment by Fischbacher and Föllmi-Heusi (Journal of the European Economic Association 11(3):525–547, 2013) to investigate measures to reduce lying. Hypotheses on the relative performance of these treatments are derived from a straightforward theoretical model. In line with previous research, we find that groups of two subjects lied at least to the same extent as individuals, even in a novel treatment in which we assigned one subject the role of the other's monitor. However, our participants hardly lied if they themselves did not benefit and only others did, even when they were in a reciprocal relationship. Thus, we conclude that collaboration on lying mostly happens for personal gain. To mitigate selfish lying, an honesty oath aimed at increasing moral awareness turned out to be effective.


Fig. 1


  1. See also the popular catchword "Hippocratic oath for business", with Cabrera (2003) being only one of many authors using it.

  2.

    This is not to say that the threat of punishment is the only mechanism through which religion works. For example, Lang et al. (2016) show that religious music is a sufficiently effective reminder for religious subjects not to cheat.

  3.

    Blok sees ethical oaths as involving “the intention of a person not only to do something, but also to be the one who is committed to some future course of action.” (Blok 2013, p. 193, italics in original).

  4.

    To check the expected impact of the Banker's Oath, Loonen and Rutgers (2017) conducted a survey among bank employees and their clients. They find that trust in the Banker's Oath "does not seem to be very high", and that bank employees even appear opposed to it.

  5.

    For more extensive reviews of the related literature see, for example, de Bruin (2016), including a comparative evaluation of the MBA Oath, the Economist’s Oath (George DeMartino), the Dutch Banker’s Oath, and various other similar initiatives, or—with a focus on the private sector, especially banking—Boatright (2013).

  6.

    Subjects agreed "to swear upon my honor that, during the whole experiment, I will tell the truth and always provide honest answers" (Jacquemet et al. 2018).

  7.

    Nevertheless, Cleek and Leonard (1998) found that a mere reference to the existence of a corporate code of ethics had an effect no smaller than that of giving details on the code, at least for students imagining working for a fictitious firm.

  8.

    There is a good reason for reserving certain ceremonial elements for very special oaths, such as those in court: Rutgers (2010, 2013) warns against the use of honesty oaths in the private sector, as common misuse there might spread into the public sector and harm the meaning that oaths currently hold in public office. Different kinds of oaths might reduce the likelihood of such a spillover.

  9.

    Twelve observations were not included in the sample because of a deviation from the experimental protocol. In these cases, the role of rolling the die was not assigned to one group member by the group but by the experimenters.

  10.

    Among others, the dice experiments of Fischbacher and Föllmi-Heusi (2013) were also conducted in this way.

  11.

    Wu et al. (2011) provide neuro-economic evidence that engaging in dishonesty purely for the benefit of others can be perceived as morally acceptable. This study supports our assumption that δj does not just consist of the moral costs of cheating (equal to δi) but is indeed diluted by the norm of helping others.

  12.

    When comparing αδi with δi + δj in the denominator, we can conclude that for the two to be equal, α = 1 + δj/δi, which can be at most 2 (since δj ≤ δi).

  13.

    Note that neither a Kolmogorov–Smirnov nor a two-sided Mann–Whitney U test finds significant differences between Direct Reciprocity and Indirect Reciprocity.

  14.

    In the monitoring treatments, the two highest payoffs were overrepresented.

  15.

    In our post-experimental questionnaire, we asked participants, inter alia, for their age. For Baseline, we find a significant negative correlation between the player's payoff and age (Spearman rank correlation coefficient −0.512; p < 0.01). Hence, players' payoffs decrease with increasing age.

  16.

    Moreover, groups' dishonesty appears to be connected to the degree of acquaintance between their members: In a pre-test for this treatment, we asked subjects to rate their acquaintance with their co-player on a 7-point scale between "unknown" and "very close." We found that the degree of acquaintance is highly correlated with the group's payoff (Spearman rank correlation coefficient 0.644; p < 0.01). Additionally, "very close" groups earned significantly more than the other groups (two-sided Mann–Whitney U test: p < 0.01). Hence, groups seem to behave less honestly the more familiar their members are with each other.

  17.

    We define religious groups as groups in which both subjects state that they are religious.

  18.

    Ordered probit regressions that explain the payoff amount and control for treatments, gender, age, siblings, moral, competitiveness, and risk aversion confirm that ceteris paribus religious subjects earn less.

  19.

    In a recent experimental study, Bodenschatz and Irlenbusch (2018) find that group decision-making reduced the bribes that were offered, at least in a repeated setting. However, their design separates the anonymous interaction from detection, which is a chance move in a separate stage, independent of the peer interaction.

  20.

    We owe this point to an anonymous referee.

  21.

    On that subject, Dan Ariely (interviewed by Haas 2016) states that players hardly cheat when you remind them of their moral values, but that they do not remember them even the very next day.

  22.

    Quoted from Boatright (2013), p. 151, who also provides a careful discussion of the less straightforward problems in real bankers’ oaths.

  23.

    Ashforth and Anand (2003) correctly point out that corrupt systems and individuals are mutually reinforcing and that individuals joining a corrupt firm can quickly be indoctrinated into the corrupt system by a "business as usual" mentality. The firm-dynamics aspects of institutionalization and socialization discussed by Ashforth and Anand cannot be directly accounted for in any simple laboratory experiment.

  24.

    To check whether we are the only ones to find our results partly surprising, we asked a different subject pool (100 students from the University of Kassel) in an online questionnaire to estimate our treatment results. We incentivized their guesses: The answers that were closest to our actual result in each treatment received € 10. Subjects of our online survey systematically overrated the effect of monitoring: Whereas people believe that monitoring is able to mitigate lying (they expected an average payoff of € 3.21 in the monitoring treatments vs. € 3.46 in Baseline, Wilcoxon signed rank test: p = 0.077), we do not find this effect in our experimental data (on average, we paid € 3.78 in the pooled monitoring treatments; two-sided Mann–Whitney U test expected vs. actual payoff: p < 0.001). Furthermore, people underestimated lying in the group treatment (they expected € 3.32, actually it was € 4.14, p < 0.001) and in the baseline scenario (€ 3.46 vs. € 3.66, p = 0.040) whereas they slightly, but not significantly, overrated lying in the reciprocity treatments (expected € 3.13 vs. € 2.98, p = 0.881).


  1. Abbink, K. (2000). Fair salaries and the moral costs of corruption. Mimeo: Bonn Econ Discussion Papers, 1/2000.

  2. Abeler, J., Nosenzo, D., & Raymond, C. (2016). Preferences for truth-telling. Mimeo: CESifo Working Paper No. 6087.

  3. Ashforth, B. E., & Anand, V. (2003). The normalization of corruption in organizations. Research in Organizational Behavior, 25, 1–52.


  4. Banfield, E. C. (1975). Corruption as a feature of governmental organization. Journal of Law and Economics, 18(3), 587–605.


  5. Barr, A., & Michailidou, G. (2017). Complicity without connection or communication. Journal of Economic Behavior & Organization, 142, 1–10.


  6. Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of being Watched Enhance Cooperation in a Real-World Setting. Biology Letters, 2, 412–414.


  7. Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H., & Strongman, J. A. (1999). Moral hypocrisy: Appearing moral to oneself without being so. Journal of Personality and Social Psychology, 77(3), 525–537.


  8. Bentham, J. (2005). Swear not at all. Containing an exposure of the mischievousness as well as antichristianity of the ceremony of an oath (1817). London: Elibron Classics.


  9. Blok, V. (2013). The power of speech acts: Reflections on a performative concept of ethical oaths in economics and business. Review of Social Economy, 71, 187–208.


  10. Boatright, J. R. (2013). Swearing to be virtuous: The prospects of a banker’s oath. Review of Social Economy, 71, 140–165.


  11. Bodenschatz, A., & Irlenbusch, B. (2018). Do two bribe less than one? An experimental study on the four-eyes-principle. Applied Economics Letters, forthcoming.

  12. Bussmann, K.-D., Krieg, O., Nestler, C., Salvenmoser, S., Schroth, A., Theile, A., & Trunk, D. (2009). Wirtschaftskriminalität 2009: Sicherheitslage in deutschen Großunternehmen (ed.). Frankfurt am Main: PricewaterhouseCoopers.


  13. Cabrera, A. (2003). A Hippocratic oath for business? Handelsblatt online.

  14. Carlsson, F., Kataria, M., Krupnick, A., Lampi, E., Löfgren, Å, Qin, P., & Sterner, T. (2013). The truth, the whole truth, and nothing but the truth—A multiple country test of an oath script. Journal of Economic Behavior & Organization, 89, 105–121.


  15. Charness, G., & Sutter, M. (2012). Groups make better self-interested decisions. Journal of Economic Perspectives, 26(3), 157–176.


  16. Christens, S., Dannenberg, A., & Sachs, F. (2017) Identification of individuals and groups in a public goods experiment, MAGKS discussion paper 55-2017.

  17. Cleek, M. A., & Leonard, S. L. (1998). Can corporate codes of ethics influence behavior? Journal of Business Ethics, 17(6), 619–630.


  18. Cohen, T. R., Gunia, B. C., Kim-Jung, S. Y., & Murnighan, J. K. (2009). Do groups lie more than individuals? Honesty and deception as a function of strategic self-interest. Journal of Experimental Social Psychology, 45(6), 1321–1324.


  19. Conrads, J., Irlenbusch, B., Rilke, R. M., & Walkowitz, G. (2013). Lying and team incentives. Journal of Economic Psychology, 34, 1–7.


  20. Dana, J., Weber, R. A., & Kuang, J. X. (2007). Exploiting moral wiggle room: Experiments demonstrating an illusory preference for fairness. Economic Theory, 33(1), 67–80.


  21. Danilov, A., Biemann, T., Kring, T., & Sliwka, D. (2013). The dark side of team incentives: Experimental evidence on advice quality from financial service professionals. Journal of Economic Behavior & Organization, 93, 266–272.


  22. de Bruin, B. (2016). Pledging integrity: Oaths as forms of business ethics management. Journal of Business Ethics, 136(1), 23–42.


  23. Egan, M. (2016). 5,300 Wells Fargo employees fired over 2 million phony accounts. CNN Money online.

  24. Erat, S., & Gneezy, U. (2012). White lies. Management Science, 58(4), 723–733.


  25. Fehr, E., & List, J. A. (2004). The hidden costs and returns of incentives—Trust and trustworthiness among CEOs. Journal of the European Economic Association, 2(5), 743–771.


  26. Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics, 114(3), 817–868.


  27. Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise: An experimental study on cheating. Journal of the European Economic Association, 11(3), 525–547.


  28. Fritz, S. (2006). Ökonomischer Nutzen weicher Kennzahlen: (Geld-)Wert von Arbeitszufriedenheit und Gesundheit: Band 38: Mensch, Technik, Organisation (2nd ed.). Zürich: vdf Hochschulverlag an der ETH Zürich.


  29. Garbarino, E., Slonim, R., & Villeval, M. C. (2016). Loss aversion and lying behavior: Theory, estimation and empirical evidence. mimeo: IZA DP No. 10395.

  30. Gino, F., Ayal, S., & Ariely, D. (2009a). Contagion and differentiation in unethical behavior: The effect of one bad apple on the barrel. Psychological Science, 20, 393–398.


  31. Gino, F., Ayal, S., & Ariely, D. (2009b). Out of sight, ethically fine? The effects of collaborative work on individuals’ dishonesty. In Group Conference, Colorado Springs, Colorado.

  32. Gino, F., Ayal, S., & Ariely, D. (2013). Self-serving altruism? The lure of unethical actions that benefit others. Journal of Economic Behavior & Organization, 93, 285–292.


  33. Haas, M. (2016). Wie Trump das moralische Fundament der USA beschädigt [How Trump is damaging the moral foundation of the USA]. Süddeutsche Zeitung.

  34. Hartenberger, U., Lorenz, D., & Lützkendorf, T. (2013). A shared built environment: Professional identity through education and training. Building Research & Information, 41(1), 60–76.


  35. Hoering, S., Kühl, S., & Schulze-Fielitz, A. (2001). Homogenität und Heterogenität in der Gruppenzusammensetzung: Eine mikropolitische Studie über Entscheidungsprozesse in der Gruppenarbeit, Arbeit, 10 (4), 331–351.


  36. Jacquemet, N., Joule, R.-V., Luchini, S., & Shogren, J. F. (2013). Preference elicitation under oath. Journal of Environmental Economics and Management, 65, 110–132.


  37. Jacquemet, N., Luchini, S., Rosaz, J., & Shogren, J. F. (2018). Truth-telling under oath. Management Science, forthcoming.

  38. Jacquemet, N., Luchini, S., Shogren, J. F., & Watson, V. (2016). Using commitment to improve choice experiment survey responses. Working Paper.

  39. Jung, J. C., & Park, S. B. (2017). Case study: Volkswagen’s diesel emissions scandal. Thunderbird International Business Review, 59(1), 127–137.


  40. Kemper, N., Nayga, R. M. Jr., Popp, J., & Bazzani, C. (2016). The effects of honesty oath and consequentiality in choice experiments. In Selected Paper prepared for presentation at the Agricultural & Applied Economics Association’s 2016 AAEA Annual Meeting, Boston, Massachusetts, July 31–August 2.

  41. Knebel, H. (2011). Teamfähigkeit und die Beurteilung von Teamleistungen. In T. R. Hummel & E. Zander (Eds.), Neuere Entwicklungen in ausgewählten Bereichen der Personalpolitik (pp. 107–139). München: Rainer Hampp Verlag.

  42. Kocher, M. G., Schudy, S., & Spantig, L. (2017). I lie? We lie! Why? Experimental evidence on a dishonesty shift in groups, Management Science, August 2017.

  43. Koessler, A.-K., Torgler, B., Feld, L. P., & Frey, B. S. (2018). Commitment to pay taxes: Results from field and laboratory experiments, Freiburger Diskussionspapiere zur Ordnungsökonomik, No. 18/06.

  44. Kreps, D. M. (1997). Intrinsic motivation and extrinsic incentives. The American Economic Review, 87(2), 359–364.


  45. Kretschmer, A. (2002). Maßnahmen zur Kontrolle von Korruption: Eine modelltheoretische Untersuchung, In Arbeitspapiere des Instituts für Genossenschaftswesen der Westfälischen Wilhelms-Universität Münster, 25.

  46. Kroher, M., & Wolbring, T. (2015). Social control, social learning, and cheating: Evidence from lab and online experiments on dishonesty. Social Science Research, 53, 311–324.


  47. Kugler, T., Bornstein, G., Kocher, M. G., & Sutter, M. (2007). Trust between individuals and groups: Groups are less trusting than individuals but just as trustworthy. Journal of Economic Psychology, 28(6), 646–657.


  48. Lang, M., Mitkidis, P., Kundt, R., Nichols, A., Krajcikova, L., & Xygalatas, D. (2016). Music as a sacred cue? Effects of religious music on moral behavior. Frontiers in Psychology, 7(814), 1–8.


  49. Li, S., Bühren, C., Frank, B., & Qin, H. (2015). Group decision making in a corruption experiment: China and Germany compared. Journal of Economics and Statistics, 235, 207–227.


  50. List, J., & Momeni, F. (2017). When corporate social responsibility backfires: Theory and evidence from a natural field experiment. NBER Working Paper 24169.

  51. Loonen, T., & Rutgers, M. (2017). Swearing to be a good banker: Perceptions of the obligatory banker’s oath in the Netherlands. Journal of Banking Regulation, 18(1), 28–47.


  52. Majolo, B., Ames, K., Brumpton, R., Garratt, R., Hall, K., & Wilson, N. (2006). Human friendship favours cooperation in the Iterated Prisoner’s Dilemma. Behaviour, 143, 1383–1395.


  53. Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45, 633–644.


  54. McCord, L. B., Greenhalgh, K., & Magasin, M. (2004). Businesspersons beware: Lying is a crime. Graziadio Business Review, 7(3).

  55. Morgan, P. M., & Tindale, R. S. (2002). Group vs individual performance in mixed-motive situations: Exploring an inconsistency. Organizational Behavior and Human Decision Processes, 87, 44–65.


  56. O’Leary, C., & Pangemanan, G. (2007). The effect of groupwork on ethical decision-making of accountancy students. Journal of Business Ethics, 75(3), 215–228.


  57. Pruckner, G. J., & Sausgruber, R. (2013). Honesty on the streets: A field study on newspaper purchasing. Journal of the European Economic Association, 11, 661–679.


  58. Reuters, S. N. L. (2016). Wells Fargo under criminal investigation in California over phony accounts scandal. Time Business online.

  59. Robbins, S. P., & Judge, T. A. (2013). Organizational behavior (15th edn.). New Jersey: Pearson Education.


  60. Rose-Ackerman, S. (1975). The economics of corruption. Journal of Public Economics, 4, 187–203.


  61. Rothenhäusler, D., Schweizer, N., & Szech, N. (2018). Guilt in voting and public good games. European Economic Review, 101, 664–681.


  62. Rutgers, M. R. (2010). The oath of office as public value guardian. The American Review of Public Administration, 40(4), 428–444.


  63. Rutgers, M. R. (2013). Will the phoenix fly again?. Review of Social Economy, 71, 249–276.


  64. Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, 54–67.


  65. Schikora, J. T. (2011). Bringing the four-eyes-principle to the lab. mimeo: Munich Discussion Paper 2011 (3).

  66. Schulze, G. G., & Frank, B. (2003). Deterrence versus intrinsic motivation: Experimental evidence on the determinants of corruptibility. Economics of Governance, 4, 143–160.


  67. Shalvi, S., Gino, F., Barkan, R., & Ayal, S. (2015). Self-serving justifications: Doing wrong and feeling moral. Current Directions in Psychological Science, 24(2), 125–130.


  68. Soraperra, I., Weisel, O., Zultan, R., Kochavi, S., Leib, M., Shalev, H., & Shalvi, S. (2017). The bad consequences of teamwork. Economics Letters, 160, 12–15.


  69. Sulmasy, D. (1999). What is an oath and why should a physician swear one?. Theoretical Medicine and Bioethics, 20, 329–346.


  70. Utikal, V., & Fischbacher, U. (2013). Disadvantageous lies in individual decisions. Journal of Economic Behavior & Organization, 85, 108–111.


  71. van der Linden, B. (2013). Principles as ‘rules of thumb’: A particularist approach to codes of ethics and an analysis of the Dutch Banking Code. Review of Social Economy, 71(2), 209–227.


  72. Warner, S. L. (1965). Randomized-response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309), 63–69.


  73. Wegge, J. (2004). Führung von Arbeitsgruppen. Göttingen: Hogrefe Verlag.


  74. Weisel, O., & Shalvi, S. (2015). The collaborative roots of corruption. Proceedings of the National Academy of Sciences of the United States of America, 112(34), 10651–10656.


  75. Wouda, J., Bijlstra, G., Frankenhuis, W. E., & Wigboldus, D. H. J. (2017). The collaborative roots of corruption? A replication of Weisel & Shalvi (2015), Collabra: Psychology, 3, 1–3.


  76. Wu, D., Loke, I. C., Xu, F., & Lee, K. (2011). Neural correlates of evaluations of lying and truth-telling in different social contexts. Brain Research, 1389, 115–124.



Author information



Corresponding author

Correspondence to Christoph Bühren.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

Additional information

A previous version was circulated under the title “Lying in the face of monitoring, reciprocity, and commitment”, see Abeler et al. (2016) and Garbarino et al. (2016).



Factor Approximation in Our Theoretical Model of the Utility of Lying

In our "Results" section, we found evidence confirming some of the major predictions we made for behavioral differences between treatments based on our theoretical model of the utility of lying. Ex post, we can use the model for a more detailed interpretation of our findings. To this end, we estimate the key factors of the model from the observed mean payoffs per treatment.

In each treatment, the mean payoff under complete honesty (\(\bar {r}\)) is expected to be approximately €2.50. We therefore assume \(\bar {r}\) = €2.50 for each treatment.
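This benchmark can be verified numerically. The sketch below (in Python; the variable names are ours) assumes, as in the original Fischbacher and Föllmi-Heusi design, that reported numbers 1–5 pay their face value and a reported 6 pays nothing, so a fair die yields €2.50 in expectation under complete honesty:

```python
from fractions import Fraction

# Payoff attached to each face of a fair die, assuming the
# Fischbacher & Föllmi-Heusi scheme: faces 1-5 pay their face
# value in euros, a reported 6 pays nothing.
payoffs = [1, 2, 3, 4, 5, 0]

# Expected payoff under complete honesty: the mean over all six faces.
r_bar = Fraction(sum(payoffs), len(payoffs))
print(float(r_bar))  # 2.5
```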


Baseline

The observed mean payoff in the baseline treatment is \({\bar {p}_{{\text{baseline}}}}\) = €3.66. Assuming that on average all players engaged in the optimal amount of lying, we get:

$${\left( {\frac{1}{{2{{\bar {\delta }}_{{\text{baseline}}}}}}} \right)^2} - \bar {r}={\bar {p}_{{\text{baseline}}}} - \bar {r}$$

with \({\bar {\delta }_{{\text{baseline}}}}\) being the average dislike of lying for oneself in the baseline treatment.

Solving this equation for \({\bar {\delta }_{{\text{baseline}}}}\), we get:

$${\bar {\delta }_{{\text{baseline}}}}=\frac{1}{{2\sqrt {{{\bar {p}}_{{\text{baseline}}}}} }}=0.26$$
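As a numerical check, this estimate can be reproduced directly from the observed mean payoff (a sketch in Python; variable names are ours):

```python
import math

p_baseline = 3.66  # observed mean payoff in Baseline, in euros

# Inverting the first-order condition (1 / (2 * delta))**2 = p_baseline
# yields the average dislike of lying for oneself:
delta_baseline = 1 / (2 * math.sqrt(p_baseline))
print(round(delta_baseline, 2))  # 0.26
```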

Moral Awareness

The observed mean payoff in our moral awareness treatment is \({\bar {p}_{{\text{moral}}}}\) = €2.66. Again, assuming that on average all players engaged in the optimal amount of lying, we get:

$${\left( {\frac{1}{{2{{\bar {\alpha }}_{{\text{moral}}}}{{\bar {\delta }}_{{\text{moral}}}}}}} \right)^2} - \bar {r}={\bar {p}_{{\text{moral}}}} - \bar {r}$$

with \({\bar {\delta }_{{\text{moral}}}}\) being the average dislike of lying for oneself, and \({\bar {\alpha }_{{\text{moral}}}}\) being the factor to which the honesty oath increased moral awareness on average compared to the baseline treatment.

Solving this equation for \({\bar {\alpha }_{{\text{moral}}}}\), we get:

$${\bar {\alpha }_{{\text{moral}}}}=\frac{1}{{2{{\bar {\delta }}_{{\text{moral}}}}\sqrt {{{\bar {p}}_{{\text{moral}}}}} }}$$

Since we already capture the difference in moral awareness between Baseline and Moral Awareness by using the factor \({\bar {\alpha }_{{\text{moral}}}}\), we assume that the average dislike of lying for oneself between those two treatments does not change (\({\bar {\delta }_{{\text{moral}}}}={\bar {\delta }_{{\text{baseline}}}}\)).

Now we can calculate \({\bar {\alpha }_{{\text{moral}}}}\):

$${\bar {\alpha }_{{\text{moral}}}}=\frac{1}{{2{{\bar {\delta }}_{{\text{baseline}}}}\sqrt {{{\bar {p}}_{{\text{moral}}}}} }}=1.17$$

The interpretation of this factor is that the honesty oath increased average moral awareness by about 17%.
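The calculation can be checked in the same way (a sketch; the unrounded baseline estimate of the dislike of lying is used rather than the rounded 0.26):

```python
import math

p_moral = 2.66                              # observed mean payoff under the oath
delta_baseline = 1 / (2 * math.sqrt(3.66))  # ~0.26, carried over from Baseline

# With the dislike of lying assumed unchanged between treatments, the
# oath's entire effect is absorbed by the moral-awareness factor alpha:
alpha_moral = 1 / (2 * delta_baseline * math.sqrt(p_moral))
print(round(alpha_moral, 2))  # 1.17
```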


Monitoring

The observed mean payoff in the monitoring treatments is \({\bar {p}_{{\text{monitor}}}}\) = €3.78. Again, assuming that on average all players engaged in the optimal amount of lying, we get:

$${\left( {\frac{1}{{{{\bar {\alpha }}_{{\text{monitor}}}}{{\bar {\delta }}_{{\text{monitor}}}}}}} \right)^2} - \bar {r}={\bar {p}_{{\text{monitor}}}} - \bar {r}$$

with \({\bar {\delta }_{{\text{monitor}}}}\) being the average dislike of lying for oneself, and \({\bar {\alpha }_{{\text{monitor}}}}\) being the factor to which monitoring increased moral awareness on average compared to the baseline treatment.

Solving this equation for \({\bar {\delta }_{{\text{monitor}}}}\), we get:

$${\bar {\delta }_{{\text{monitor}}}}=\frac{1}{{{{\bar {\alpha }}_{{\text{monitor}}}}\sqrt {{{\bar {p}}_{{\text{monitor}}}}} }}$$

Furthermore, we reason that \(1 \leq {\bar {\alpha }_{{\text{monitor}}}} \leq {\bar {\alpha }_{{\text{moral}}}}\). With this, we can estimate \({\bar {\delta }_{{\text{monitor}}}}\):

$$0.44=\frac{1}{{{{\bar {\alpha }}_{{\text{moral}}}}\sqrt {{{\bar {p}}_{{\text{monitor}}}}} }} \leq {\bar {\delta }_{{\text{monitor}}}} \leq \frac{1}{{\sqrt {{{\bar {p}}_{{\text{monitor}}}}} }}=0.51$$

Comparing this to the baseline treatment, we get:

$$1.69 \times {\bar {\delta }_{{\text{baseline}}}} \leq {\bar {\delta }_{{\text{monitor}}}} \leq 1.98 \times {\bar {\delta }_{{\text{baseline}}}}$$

This means that the average dislike of lying for oneself increased by at least 69% due to the presence of a monitor (while already considering possible effects of an increased moral awareness due to monitoring). Both effects, however, are narrowly overshadowed by the division of moral costs between both players (n = 2), since:

$$\frac{{{{\bar {\alpha }}_{{\text{monitor}}}}{{\bar {\delta }}_{{\text{monitor}}}}}}{n}=\frac{{1.98 \times {{\bar {\delta }}_{{\text{baseline}}}}}}{2}<{\bar {\delta }_{{\text{baseline}}}}$$
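The bounds on the dislike of lying under monitoring follow the same pattern (a sketch; variable names are ours):

```python
import math

p_monitor = 3.78     # observed mean payoff, pooled monitoring treatments
alpha_moral = 1.17   # estimated moral-awareness factor from the oath treatment

# Assuming 1 <= alpha_monitor <= alpha_moral brackets the average
# dislike of lying for oneself in the monitoring treatments:
delta_lo = 1 / (alpha_moral * math.sqrt(p_monitor))  # alpha at its upper bound
delta_hi = 1 / math.sqrt(p_monitor)                  # alpha = 1
print(round(delta_lo, 2), round(delta_hi, 2))  # 0.44 0.51
```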


Reciprocity

Since we find no significant difference between the two reciprocity treatments, we define \({\bar {p}_{{\text{reci}}}}\) = €2.89 as the observed mean payoff in the pooled reciprocity treatments.

Again, assuming that on average all players engaged in the optimal amount of lying, we get:

$${\left( {\frac{{{{\bar {\beta }}_{{\text{reci}}}}}}{{2{{\bar {\delta }}_{{\text{reci}}}}}}} \right)^2} - \bar {r}={\bar {p}_{{\text{reci}}}} - \bar {r}$$

with \({\bar {\delta }_{{\text{reci}}}}\) being the average dislike of lying for another person, and \({\bar {\beta }_{{\text{reci}}}}\) being the factor to which players care about others on average.

Solving this equation for \({\bar {\beta }_{{\text{reci}}}}\), we get:

$${\bar {\beta }_{{\text{reci}}}}=2{\bar {\delta }_{{\text{reci}}}}\sqrt {{{\bar {p}}_{{\text{reci}}}}}$$


Group Decision

The observed mean payoff in the group treatment is \({\bar {p}_{{\text{group}}}}\) = €4.14. Again, assuming that on average all players engaged in the optimal amount of lying, we get:

$${\left( {\frac{{1+{{\bar {\beta }}_{{\text{group}}}}}}{{{{\bar {\delta }}_{{\text{group}};i}}+{{\bar {\delta }}_{{\text{group}};j}}}}} \right)^2} - \bar {r}={\bar {p}_{{\text{group}}}} - \bar {r}$$

with \({\bar {\delta }_{{\text{group}};i}}\) being the average dislike of lying for oneself, \({\bar {\delta }_{{\text{group}};j}}\) being the average dislike of lying for another person, and \({\bar {\beta }_{{\text{group}}}}\) being the factor to which players care about others on average.

Solving this equation for \({\bar {\beta }_{{\text{group}}}}\), we get:

$${\bar {\beta }_{{\text{group}}}}=\left( {{{\bar {\delta }}_{{\text{group}};i}}+{{\bar {\delta }}_{{\text{group}};j}}} \right)\sqrt {{{\bar {p}}_{{\text{group}}}}} - 1$$

Furthermore, we can assume that the factor to which players care about others and their dislike of lying for another person do not change between treatments (\(\bar {\beta }:={\bar {\beta }_{{\text{reci}}}}={\bar {\beta }_{{\text{group}}}}\) and \({\bar {\delta }_j}:={\bar {\delta }_{{\text{reci}}}}={\bar {\delta }_{{\text{group}};j}}\)), since we already consider the number of other players affected by lying (m) and the number of players participating in the decision (n) separately.

This implies that we can equate (3) with (4):

$$2{\bar {\delta }_j}\sqrt {{{\bar {p}}_{{\text{reci}}}}} =\left( {{{\bar {\delta }}_{{\text{group}};i}}+{{\bar {\delta }}_j}} \right)\sqrt {{{\bar {p}}_{{\text{group}}}}} - 1$$

Since we argue that the dislike of lying for oneself is higher than the dislike of lying for another person (\({\bar {\delta }_{{\text{group}};i}} \geq {\bar {\delta }_j}\)), we can reason that:

$$2{\bar {\delta }_j}\sqrt {{{\bar {p}}_{{\text{reci}}}}} \leq 2{\bar {\delta }_{{\text{group}};i}}\sqrt {{{\bar {p}}_{{\text{group}}}}} - 1$$

Solving this inequality for \({\bar {\delta }_j}\), we get:

$${\bar {\delta }_j} \leq {\bar {\delta }_{{\text{group}};i}} \times \sqrt {\frac{{{{\bar {p}}_{{\text{group}}}}}}{{{{\bar {p}}_{{\text{reci}}}}}}} - \frac{1}{{2\sqrt {{{\bar {p}}_{{\text{reci}}}}} }}$$

Here we can only guess \({\bar {\delta }_{{\text{group}};i}}\). However, if we approximate the dislike of lying for oneself in the group treatment with the corresponding value from the baseline treatment (\({\bar {\delta }_{{\text{group}};i}}\approx{\bar {\delta }_{{\text{baseline}}}}\)) we get:

$${\bar {\delta }_j} \leq {\bar {\delta }_{{\text{baseline}}}} \times \sqrt {\frac{{{{\bar {p}}_{{\text{group}}}}}}{{{{\bar {p}}_{{\text{reci}}}}}}} - \frac{1}{{2\sqrt {{{\bar {p}}_{{\text{reci}}}}} }}=0.26 \times \sqrt {\frac{{4.14}}{{2.89}}} - \frac{1}{{2\sqrt {2.89} }}=0.017$$

If this approximation is roughly correct, the inequality has a very intuitive interpretation: the dislike of lying for another person was extraordinarily low (the dislike of lying for oneself was at least 15 times higher than the dislike of lying for another person).

Furthermore, this indicates that the average factor to which players care about others (\(\bar {\beta }\)) was low as well, since:

$$\bar {\beta }={\bar {\beta }_{{\text{reci}}}}=2{\bar {\delta }_{{\text{reci}}}}\sqrt {{{\bar {p}}_{{\text{reci}}}}} =2{\bar {\delta }_j}\sqrt {{{\bar {p}}_{{\text{reci}}}}} =3.4 \times {\bar {\delta }_j} \leq 3.4 \times 0.017=0.058$$
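Both bounds can be reproduced numerically (a sketch; as in the text, the rounded value 0.26 is used for the dislike of lying for oneself in the group treatment):

```python
import math

p_group, p_reci = 4.14, 2.89
delta_group_i = 0.26   # approximated by the Baseline estimate, as in the text

# Upper bound on the dislike of lying for another person:
delta_j = delta_group_i * math.sqrt(p_group / p_reci) - 1 / (2 * math.sqrt(p_reci))
# Implied upper bound on the factor to which players care about others:
beta = 2 * delta_j * math.sqrt(p_reci)
print(round(delta_j, 3), round(beta, 3))  # 0.017 0.058
```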


Summary

Following our theoretical model of the utility of lying, we can interpret our findings in more detail:

  • The honesty oath increased the moral awareness of our participants by about 17% (≈ 20%) on average.

  • The dislike of lying for oneself was at least 15 times higher than the dislike of lying for another person.

  • This was due to the fact that the dislike of lying for another person was extraordinarily low.

  • The degree to which players care about others in the experiment was relatively low.

  • Monitoring increased the dislike of lying for oneself by at least 69% (≈ 70%) (already considering possible effects of increased moral awareness due to monitoring). However, these two monitoring effects combined were still narrowly overshadowed by the division of moral costs between both players.

Rights and permissions


About this article


Cite this article

Beck, T., Bühren, C., Frank, B. et al. Can Honesty Oaths, Peer Interaction, or Monitoring Mitigate Lying? J Bus Ethics 163, 467–484 (2020).



Keywords

  • Lie detection
  • Honesty
  • Moral awareness
  • Reciprocity
  • Group decision
  • Monitoring
  • Dice experiment

JEL Classification

  • C91
  • C92
  • D63
  • H26