Human–Algorithm Collaboration Works Best if Humans Lead (Because it is Fair!)


Abstract

Organizations increasingly deploy autonomous algorithms in pursuit of ever greater efficiency. In today's emerging business model, autonomous algorithms are gradually taking on the role of leading decision-maker, while humans by default become increasingly subordinate to their decisions. We address the question of whether this business perspective is consistent with the kind of collaboration employees want to have with algorithms at work. We explored this question by investigating how humans prefer to collaborate with algorithms when making decisions. In two experimental studies (Study 1, n = 237; Study 2, n = 684), we show that humans consider collaboration with autonomous algorithms unfair when the algorithm leads decision-making, and will even incur high financial costs to avoid it. Our results also show that humans do not want to exclude algorithms entirely but instead prefer a 60–40% human–algorithm partnership. These findings contrast with the position taken by today's emerging business model on automated organizational decision-making. They also support the existence of an implicit theory, held by both present and future employees, that humans should lead and algorithms follow.


Notes

  1. To check whether participants had read the two plan options correctly, we asked, “Please indicate below which strategic plan enables the middle manager to have a voice in this decision” and provided the two plans as response options. For this question, 33 participants incorrectly selected option 1 and were therefore removed from our analyses.

  2. For exploratory reasons, participants were also asked which plan made them feel respected, dignified, and confident, but since our main focus is on understanding cooperation and fairness, we discuss only those measures in the main text.

  3. For exploratory reasons, participants were also asked how much they agreed with the proposed partnership, how satisfied and confident they felt with it, and how effective they perceived it to be, but since our main focus is on understanding cooperation and fairness, we discuss only those measures in the main text.


Author information

Corresponding author

Correspondence to David De Cremer.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Human and Animal Rights Statement

The research involved human participants and was approved by the ethics research board at Cambridge University (Protocol No.: 17/001).

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

See Fig. 7.

Fig. 7

The figure shows the loading bars presented to participants in the experimental paradigm. Each bar between the dotted lines was presented on screen individually as a GIF image in which the dark blue squares moved from left to right in continuous loops for sporadic intervals of time, after which the subsequent loading bar was presented (in order from 1/4 to 4/4). This sequence of presentation created the impression that the system was in the process of connecting participants with other users.
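For readers who wish to reproduce this presentation sequence, the sketch below shows one way it could be implemented. This is a hypothetical console reconstruction, not the authors' actual stimulus code; the bar width, animation speed, and the 2–5 second "sporadic" intervals are all assumptions made for illustration.

```python
import random
import sys
import time

# Hypothetical reconstruction of the paradigm's loading sequence:
# four loading bars shown one after another, each animating for a
# random ("sporadic") interval to suggest an ongoing connection process.

BAR_WIDTH = 20  # number of squares per bar (assumed)

def animate_bar(step: int, total: int, duration: float) -> None:
    """Animate one loading bar for `duration` seconds: a block of
    dark squares loops from left to right in a continuous cycle."""
    end = time.time() + duration
    pos = 0
    while time.time() < end:
        cells = ["." for _ in range(BAR_WIDTH)]
        for offset in range(4):  # the moving block of dark squares
            cells[(pos + offset) % BAR_WIDTH] = "#"
        sys.stdout.write(f"\rConnecting ({step}/{total}) [" + "".join(cells) + "]")
        sys.stdout.flush()
        pos = (pos + 1) % BAR_WIDTH
        time.sleep(0.1)  # frame delay (assumed)

if __name__ == "__main__":
    # Present bars 1/4 through 4/4 in order, each for a random interval.
    for step in range(1, 5):
        animate_bar(step, 4, duration=random.uniform(2.0, 5.0))
        sys.stdout.write("\n")
    print("Connected.")
```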

About this article


Cite this article

De Cremer, D., McGuire, J. Human–Algorithm Collaboration Works Best if Humans Lead (Because it is Fair!). Soc Just Res 35, 33–55 (2022). https://doi.org/10.1007/s11211-021-00382-z

