The role of information accuracy and justification in bonus allocations

Abstract

Previous literature shows that, when evaluating employees, managers insufficiently differentiate between strong and weak performers, which leads to disadvantageous organizational outcomes. Bol et al. (Acc Organ Soc 51:64–73, 2016) demonstrate in an independent bonus context that when managers can base their evaluations on accurate information, they differentiate more when allocating bonuses, but only when evaluation outcomes are transparent. Our experiment replicates and extends Bol et al. (2016) in a fixed bonus pool context. We investigate the effects of information accuracy and of giving managers the opportunity to write a justification to each of their employees when making bonus allocations, as an alternative way to create transparency. We hypothesize and find that justification increases managers’ differentiation in bonus allocations, but only—as in Bol et al. (2016)—when performance information accuracy is high. With a path analysis we disentangle the underlying process: justification increases managers’ expectations that employees will perceive the differentiation in the bonus allocations as fair, especially when information accuracy is high. These expectations, in turn, are positively related to the degree to which managers differentiate in their bonus allocations.

Notes

  1. We use ‘compression of performance ratings’ and ‘centrality bias’ as synonyms, in line with Bol (2011) and Moers (2005).

  2. For performance evaluations to be perceived as fair, employees should not perceive any difference between their own rewards-to-input ratio and the rewards-to-input ratio of any colleague (Adams 1963, 1965; Golman and Bhatia 2012; Walster et al. 1973). Fair performance evaluations therefore differ from equal performance evaluations, in which the rewards allocated to each employee are the same or similar, irrespective of employees’ input, output or work contributions (Walster et al. 1973). Equal performance evaluations are the result of centrality bias.
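The equity-theory fairness condition in this footnote can be stated formally; a sketch in standard notation (the symbols \(O_i\) and \(I_i\) are our own shorthand, not taken from the paper):

```latex
% Equity condition (Adams 1963, 1965): employee i perceives the
% evaluation as fair when reward-to-input ratios are equal across
% all colleagues j in the reference group:
\frac{O_i}{I_i} = \frac{O_j}{I_j} \qquad \text{for all } i, j.
% Equal (rather than fair) evaluations instead impose
% O_i = O_j regardless of the inputs I_i and I_j.
```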

  3. Performance information accuracy refers to the extent to which information is informative about employees’ effort, i.e. the variability around a point estimate of an employee’s effort level (Bol et al. 2016). Information accuracy is similar to information precision (Banker and Datar 1989).

  4. See Colella et al. (2007) for an overview of the costs and benefits related to pay secrecy.

  5. When evaluating performance, managers may justify their evaluations to their supervisor, their subordinates, or both (Ferris et al. 2008). However, justifying performance evaluations to subordinates is the norm (Libby et al. 2004).

  6. We introduce a voluntary disclosure (the possibility to justify one’s bonus decision), as in Bol et al. (2016), instead of a forced disclosure (a requirement to justify one’s bonus decision) in order to increase the comparability between our study and Bol et al. (2016).

  7. The university at which the experiment took place granted approval for the experiment.

  8. The accuracy of the information used in subjective performance evaluations could theoretically range from ‘no accuracy’ to ‘full accuracy’; in practice, however, these extremes are rare. Totally inaccurate information (‘no accuracy’) does not provide a good basis for a subjective performance evaluation and will therefore not be used. Fully accurate information (‘full accuracy’) is more often associated with formula-based evaluations that explicitly weigh each performance measure than with subjective performance evaluations. However, even under highly or fully accurate information, one could prefer subjective performance evaluations in order to avoid the “‘game-playing’ associated with any formula-based plan, [and] the possibility that bonuses will be paid even when performance is ‘unbalanced’ (i.e., overachievement on some objectives and underachievement on others)” (Ittner et al. 2003).

  9. However, justification is not always beneficial. Ashton (1990) discusses how a justification requirement (as opposed to the justification possibility in our paper) can shift evaluators’ focus from making good evaluations towards making ‘justifiable’ evaluations and good justifications of evaluations. In addition, Bartlett et al. (2014) show how the additional information processing induced by justification can increase judgment biases in the presence of both relevant and irrelevant information, as it increases the processing of all performance measures instead of only the relevant ones.

  10. Informational fairness perceptions refer to the extent to which employees perceive that they receive timely, accurate, and reasonable explanations about decision-making processes or outcomes (Colquitt 2001).

  11. We conduct the experiment in the z-Tree experimental software (Fischbacher 2007).

  12. Firms often use scorecards that only contain performance measures that are common to all business units (Cardinaels and van Veen-Dirks 2010).

  13. Bol et al. (2016) measure differentiation in the bonus allocation by the difference between the bonuses allocated to the strongest and the weakest performer, which is similar to our variable Bonus Range. Bol (2011) measures differentiation in the bonus allocation by the ratio between the standard deviation of the objective performance measures and the standard deviation of the subjective performance ratings provided by a manager to all employees in a reference group. Our variable Differentiation is similar to the measure of Bol (2011), as the objective performance measures are the same in all treatments of our paper and we therefore only focus on the standard deviation of the subjective performance ratings of all employees.
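The two differentiation measures described in this footnote can be illustrated in a short sketch; the bonus amounts below are made up for illustration and are not data from the experiment:

```python
import statistics

# Hypothetical bonuses one manager allocates to five store managers
# (illustrative numbers only, not data from the experiment).
bonuses = [2600, 2100, 2000, 1900, 1400]

# Bonus Range (cf. Bol et al. 2016): difference between the bonuses
# allocated to the strongest and the weakest performer.
bonus_range = max(bonuses) - min(bonuses)

# Differentiation in the spirit of Bol (2011): the standard deviation
# of the subjective ratings. Because the objective performance measures
# are identical across treatments, their standard deviation is constant
# and can be dropped from Bol's ratio.
differentiation = statistics.stdev(bonuses)

print(bonus_range)                     # 1200
print(round(differentiation, 1))       # 430.1
```

A wider range and a larger standard deviation both indicate less compression, i.e. more differentiation between strong and weak performers.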

  14. Every p-value mentioned in this paper is a two-sided p-value.

  15. Even in the Low Accuracy conditions, however, participants thought the performance measures were quite accurate. Their average response to the statement “the three performance measures provided a highly accurate image of the performance of each individual store manager” was 4.13 on a 7-point scale, which is not significantly different from the scale midpoint of 4 (t(145) = 1.089; p = 0.278) but still rather high.

  16. ANOVA detects significant differences between cell means but cannot signal the functional form of the relationship among them. Contrast analysis is a refinement of ANOVA that tests a specific functional form of the relationship among cell means (i.e. a specific pattern predicted for the cell means) and has greater statistical power than conventional ANOVA (Buckless and Ravenscroft 1990). Contrast analysis is therefore more suitable than a conventional factorial ANOVA in the case of an ordinal interaction effect: it provides the power needed to reveal the specific pattern of hypothesis 1, in which we predict that differentiation in the bonus allocation is significantly higher in the High Accuracy—Justification condition than in the other three experimental conditions. We refer to Buckless and Ravenscroft (1990) and Bol et al. (2016) for a more extensive discussion of this matter.
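To make the contrast-coding logic concrete, a pattern in which one cell mean exceeds the other three can be tested with weights (+3, −1, −1, −1). A minimal sketch with made-up data (the numbers are illustrative and are not the experiment’s data):

```python
import statistics

# Contrast analysis (Buckless and Ravenscroft 1990): test the predicted
# pattern that one cell mean exceeds the other three cell means.
groups = {
    "HighAcc_Justification":   [900, 1100, 1000, 1200, 800],
    "HighAcc_NoJustification": [500, 700, 600, 400, 800],
    "LowAcc_Justification":    [600, 500, 700, 400, 550],
    "LowAcc_NoJustification":  [450, 650, 500, 600, 550],
}
# Contrast weights must sum to zero; +3 on the cell predicted highest.
weights = {"HighAcc_Justification": 3, "HighAcc_NoJustification": -1,
           "LowAcc_Justification": -1, "LowAcc_NoJustification": -1}

means = {g: statistics.mean(x) for g, x in groups.items()}
contrast = sum(weights[g] * means[g] for g in groups)

# Pooled within-group variance, i.e. ANOVA's mean square error.
df_error = sum(len(x) - 1 for x in groups.values())
mse = sum(sum((v - means[g]) ** 2 for v in x)
          for g, x in groups.items()) / df_error
# Standard error of the contrast, then the t-statistic.
se = (mse * sum(w ** 2 / len(groups[g]) for g, w in weights.items())) ** 0.5
t_stat = contrast / se
print(round(t_stat, 2))
```

A significantly positive t-statistic supports the predicted ordering of cell means; a conventional factorial ANOVA would spread this test across main and interaction effects and lose power.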

  17. A semi-omnibus F-test on the remaining between-group variance indicates that the mean value of Maximum Bonus is not significantly different across the three remaining experimental conditions (F(2, 288) = 0.080; p = 0.923).

  18. A semi-omnibus F-test on the remaining between-group variance indicates that the mean value of Minimum Bonus is not significantly different across the three remaining experimental conditions (F(2, 288) = 0.364; p = 0.695).

  19. A semi-omnibus F-test on the remaining between-group variance indicates that the mean value of the average bonus allocated to the three mediocre store managers is not significantly different across the three remaining experimental conditions (F(2, 288) = 0.017; p = 0.983).

  20. A semi-omnibus F-test on the remaining between-group variance indicates that the mean value of the standard deviation of the bonus allocated to the three mediocre store managers is not significantly different across the three remaining experimental conditions (F(2, 288) = 0.176; p = 0.838).

  21. Using a seven-point Likert scale, we collected these data at the end of the experiment to avoid drawing participants’ attention to the presence or absence of the justification possibility (which depended on their condition) and to avoid leading them to particular response patterns.

References

  1. Adams, J. S. (1963). Toward an understanding of inequity. Journal of Abnormal and Social Psychology, 67(5), 422–436.

  2. Adams, J. S. (1965). Inequity in social exchange. In Advances in experimental social psychology (Vol. 2, pp. 267–299). Academic Press.

  3. Ahn, T. S., Hwang, I., & Kim, M. I. (2010). The impact of performance measure discriminability on ratee incentives. The Accounting Review, 85(2), 389–417.

  4. Ashton, R. H. (1990). Pressure and performance in accounting decision settings: Paradoxical effects of incentives, feedback, and justification. Journal of Accounting Research, 28, 148–180.

  5. Bailey, W. J., Hecht, G., & Towry, K. L. (2011). Dividing the pie: The influence of managerial discretion extent on bonus pool allocation. Contemporary Accounting Research, 28(5), 1562–1584.

  6. Baker, G. P., Jensen, M. C., & Murphy, K. J. (1988). Compensation and incentives: Practice vs theory. Journal of Finance, 43(3), 593–616.

  7. Bamberger, P., & Belogolovsky, E. (2010). The impact of pay secrecy on individual task performance. Personnel Psychology, 63(4), 965–996.

  8. Bamberger, P., & Belogolovsky, E. (2017). The dark side of transparency: How and when pay administration practices affect employee helping. Journal of Applied Psychology, 102(4), 658.

  9. Banker, R. D., & Datar, S. M. (1989). Sensitivity, precision, and linear aggregation of signals for performance evaluation. Journal of Accounting Research, 27(1), 21–39.

  10. Bartlett, G., Johnson, E., & Reckers, P. (2014). Accountability and role effects in balanced scorecard performance evaluations when strategy timeline is specified. European Accounting Review, 23(1), 143–165.

  11. Belogolovsky, E., & Bamberger, P. A. (2014). Signaling in secret: Pay for performance and the incentive and sorting effects of pay secrecy. Academy of Management Journal, 57(6), 1706–1733.

  12. Bies, R. J., & Shapiro, D. L. (1988). Voice and justification: Their influence on procedural fairness judgments. Academy of Management Journal, 31(3), 676–685.

  13. Bol, J. C. (2008). Subjectivity in compensation contracting. Journal of Accounting Literature, 27, 1–24.

  14. Bol, J. C. (2011). The determinants and performance effects of managers’ performance evaluation biases. The Accounting Review, 86(5), 1549–1575.

  15. Bol, J. C., Kramer, S., & Maas, V. S. (2016). How control system design affects performance evaluation compression: The role of information accuracy and outcome transparency. Accounting, Organizations and Society, 51, 64–73.

  16. Brutus, S. (2010). Words versus numbers: A theoretical exploration of giving and receiving narrative comments in performance appraisal. Human Resource Management Review, 20(2), 144–157.

  17. Buckless, F. A., & Ravenscroft, S. P. (1990). Contrast coding: A refinement of ANOVA in behavioral analysis. The Accounting Review, 65(4), 933–945.

  18. Cardinaels, E., & van Veen-Dirks, P. M. (2010). Financial versus non-financial information: The impact of information organization and presentation in a Balanced Scorecard. Accounting, Organizations and Society, 35(6), 565–578.

  19. Castilla, E. J. (2008). Gender, race, and meritocracy in organizational careers. American Journal of Sociology, 113(6), 1479–1526.

  20. Castilla, E. J. (2015). Accounting for the gap: A firm study manipulating organizational accountability and transparency in pay decisions. Organization Science, 26(2), 311–333.

  21. Colella, A., Paetzold, R. L., Zardkoohi, A., & Wesson, M. J. (2007). Exposing pay secrecy. Academy of Management Review, 32(1), 55–71.

  22. Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86(3), 386–400.

  23. Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O., & Ng, K. Y. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86(3), 425–445.

  24. Dalla Via, N., Perego, P., & Van Rinsum, M. (2019). How accountability type influences information search processes and decision quality. Accounting, Organizations and Society, 75, 79–91.

  25. Ferris, G. R., Munyon, T. P., Basik, K., & Buckley, M. R. (2008). The performance evaluation context: Social, emotional, cognitive, political, and relationship components. Human Resource Management Review, 18(3), 146–163.

  26. Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.

  27. Gibbins, M., & Newton, J. D. (1994). An empirical exploration of complex accountability in public accounting. Journal of Accounting Research, 32(2), 165–186.

  28. Gibbs, M., Merchant, K. A., Van der Stede, W. A., & Vargus, M. E. (2004). Determinants and effects of subjectivity in incentives. The Accounting Review, 79(2), 409–436.

  29. Golman, R., & Bhatia, S. (2012). Performance evaluation inflation and compression. Accounting, Organizations and Society, 37(8), 534–543.

  30. Holmström, B. (1979). Moral hazard and observability. The Bell Journal of Economics, 10(1), 74–91.

  31. Höppe, F., & Moers, F. (2011). The choice of different types of subjectivity in CEO annual bonus contracts. The Accounting Review, 86(6), 2023–2046.

  32. Ittner, C. D., Larcker, D. F., & Meyer, M. W. (2003). Subjectivity and the weighting of performance measures: Evidence from a balanced scorecard. The Accounting Review, 78(3), 725–758.

  33. Kennedy, J. (1993). Debiasing audit judgment with accountability: A framework and experimental results. Journal of Accounting Research, 31(2), 231–245.

  34. Klimoski, R., & Inks, L. (1990). Accountability forces in performance appraisal. Organizational Behavior and Human Decision Processes, 45(2), 194–208.

  35. Landy, F. J., Barnes, J. L., & Murphy, K. R. (1978). Correlates of perceived fairness and accuracy of performance evaluation. Journal of Applied Psychology, 63(6), 751–754.

  36. Langford, P. H. (2003). A one-minute measure of the Big Five? Evaluating and abridging Shafer’s (1999a) Big Five markers. Personality and Individual Differences, 35(5), 1127–1140.

  37. Lawler, E. E. (1967). The multitrait-multirater approach to measuring managerial job performance. Journal of Applied Psychology, 51(5), 369–381.

  38. Lerner, J. S., & Tetlock, P. E. (1999). Accounting for the effects of accountability. Psychological Bulletin, 125(2), 255–275.

  39. Leventhal, G. S. (1980). What should be done with equity theory? In Social exchange (pp. 27–55). Springer US.

  40. Levin, J. (2003). Relational incentive contracts. American Economic Review, 93(3), 835–857.

  41. Libby, R., Bloomfield, R., & Nelson, M. W. (2002). Experimental research in financial accounting. Accounting, Organizations and Society, 27(8), 775–810.

  42. Libby, T., Salterio, S. E., & Webb, A. (2004). The balanced scorecard: The effects of assurance and process accountability on managerial judgment. The Accounting Review, 79(4), 1075–1094.

  43. Maas, V. S., van Rinsum, M., & Towry, K. L. (2012). In search of informed discretion: An experimental investigation of fairness and trust reciprocity. The Accounting Review, 87(2), 617–644.

  44. MacLeod, W. B. (2003). Optimal contracting with subjective evaluation. American Economic Review, 93(1), 216–240.

  45. Merchant, K. A., & Van der Stede, W. A. (2017). Management control systems: Performance measurement, evaluation and incentives (4th ed.). London: Pearson.

  46. Mero, N. P., Guidice, R. M., & Brownlee, A. L. (2007). Accountability in a performance appraisal context: The effect of audience and form of accounting on rater response and behavior. Journal of Management, 33(2), 223–252.

  47. Mero, N. P., & Motowidlo, S. J. (1995). Effects of rater accountability on the accuracy and the favorability of performance ratings. Journal of Applied Psychology, 80(4), 517–524.

  48. Mero, N. P., Motowidlo, S. J., & Anna, A. L. (2003). Effects of accountability on rating behavior and rater accuracy. Journal of Applied Social Psychology, 33(12), 2493–2514.

  49. Moers, F. (2005). Discretion and bias in performance evaluation: The impact of diversity and subjectivity. Accounting, Organizations and Society, 30(1), 67–80.

  50. Pipino, L. L., Lee, Y. W., & Wang, R. Y. (2002). Data quality assessment. Communications of the ACM, 45(4), 211–218.

  51. Prendergast, C., & Topel, R. (1993). Discretion and bias in performance evaluation. European Economic Review, 37(2–3), 355–365.

  52. Rajan, M. V., & Reichelstein, S. (2006). Subjective performance indicators and discretionary bonus pools. Journal of Accounting Research, 44(3), 585–618.

  53. Tetlock, P. E., Skitka, L., & Boettger, R. (1989). Social and cognitive strategies for coping with accountability: Conformity, complexity, and bolstering. Journal of Personality and Social Psychology, 57(4), 632–640.

  54. Walster, E., Berscheid, E., & Walster, G. W. (1973). New directions in equity research. Journal of Personality and Social Psychology, 25(2), 151–176.

  55. Wang, R. Y., & Strong, D. M. (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12(4), 5–33.

  56. Yim, A. T. (2001). Renegotiation and relative performance evaluation: Why an informative signal may be useless. Review of Accounting Studies, 6(1), 77–108.

Acknowledgements

We thank the editor and two anonymous reviewers, Jasmijn Bol, Eddy Cardinaels, John Christensen, Thomas De Groot, Henri Dekker, Sophie De Winne, Kathryn Kadous, Victor Maas, Karl Schuhmacher, Marcel Van Rinsum, Eelke Wiersma, and participants at the European Network for experimental Accounting Research Summer School 2015, at the Amsterdam Research Center in Accounting seminars 7th of March 2016 at Vrije Universiteit Amsterdam, at the Annual Conference for Management Accounting Research 2016, and at the Annual Conference of the European Accounting Association 2016 for helpful comments.

Author information

Corresponding author

Correspondence to Tim Hermans.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Hermans, T., Cools, M. & Van den Abbeele, A. The role of information accuracy and justification in bonus allocations. J Manag Control (2021). https://doi.org/10.1007/s00187-020-00312-1

Keywords

  • Subjective performance evaluation
  • Bonus allocation
  • Information accuracy
  • Justification
  • Centrality bias

JEL Classification

  • J31
  • J33
  • M52
  • M55