Abstract
The growing use of algorithm-based decision-making in human resources management has drawn considerable attention from different stakeholders. While prior literature has focused mainly on stakeholders directly affected by HR decisions (e.g., employees), this paper takes a third-party observer perspective and investigates how consumers respond to companies' adoption of algorithm-based HR decision-making. Across five experimental studies, we show that the adoption of algorithm-based (vs. human-based) HR decision-making can induce unfavorable consumer inferences about the company's ethicality (Study 1), because implementing a calculative, data-driven (i.e., algorithm-based) approach to employee-related decisions violates the deontological principle of respectful employee treatment (Study 2). However, this effect was attenuated when consumers held high (vs. low) power distance beliefs (Study 3), when the algorithm assisted (vs. replaced) human decision-makers (Study 4), or when the adoption was framed as employee-oriented (vs. company-oriented) (Study 5). Our findings suggest that consumers are averse to algorithm-based HR decision-making because they perceive it as deontologically problematic, regardless of its decision quality (i.e., accuracy). This paper contributes to the extant understanding of stakeholders' responses to algorithm-based HR decision-making and of consumers' attitudes toward algorithm users.
Notes
The project received IRB approval from the School of Management, Huazhong University of Science and Technology (IRB #: 2020.04.23, Study Title: AI-HRM).
We used MTurk's Qualification Type function to prevent participant overlap across our studies. After each experiment, we assigned all participants the same qualification, and the qualified population received no further invitations.
Funding
The authors gratefully acknowledge financial support from the National Science Foundation of China (71672063, 72072065, 72072152, 71925005, & 72232009), the City University of Hong Kong SRG (7005478 & 7005791), and the Social Science Foundation of Zhejiang Province (21YJRC01ZD).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yan, C., Chen, Q., Zhou, X. et al. When the Automated fire Backfires: The Adoption of Algorithm-based HR Decision-making Could Induce Consumer’s Unfavorable Ethicality Inferences of the Company. J Bus Ethics 190, 841–859 (2024). https://doi.org/10.1007/s10551-023-05351-x