Master and Slave: the Dialectic of Human-Artificial Intelligence Engagement

Abstract

The massive introduction of artificial intelligence (AI) has triggered significant societal concerns, ranging from “technological unemployment” to the dominance of algorithms in the workplace and in everyday life. While AI is made by humans and is therefore dependent on them for its purpose, the increasing capability of AI to carry out productive activities for humans can lead the latter into an unwitting slavish existence. This has become evident, for example, in social media use, where AI programmers tie psychology and persuasion to the human social need for approval and validation in ways that few users can resist. We argue that AI should serve humans, with humans as masters, and not the other way around. Moreover, we propose that virtue ethics can play a role in solidifying the human as master of AI and in guarding against the alternative of AI as the master.

Fig. 1

Data Availability

Not applicable.

Code Availability

Not applicable.

Notes

  1. https://www.oracle.com/it/internet-of-things/what-is-iot/

  2. Another way to distinguish strong from weak AI is that strong AI “seeks not only to think, but to feel and purpose as well, becoming a ‘mind’ and not only a model of one”, while “weak AI” is meant to be at the service of human designs (Botica 2017).

  3. It should be noted that we deliberately use the term “slave” in this essay in the sense of Hegel’s “Master-Slave Dialectic”, without any connection to the practice of slavery, past or present, or to persons with dignity who were or are enslaved.

References

  • Alford, Helen, and Michael Naughton. 2001. Managing as if faith mattered. Notre Dame: University of Notre Dame Press.

  • André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Douglas Frank, William Goldstein, et al. 2018. Consumer choice and autonomy in the age of artificial intelligence and big data. Customer Needs and Solutions 5: 28–37.

  • Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias: There’s software used across the country to predict future criminals, and it’s biased against blacks. ProPublica. Retrieved December 3, 2021, from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  • Appel, H., Alexander Gerlach, and Jan Crusius. 2016. The interplay between Facebook use, social comparison, envy, and depression. Current Opinion in Psychology 9: 44–49. https://doi.org/10.1016/j.copsyc.2015.10.006.

  • Bag, S., J.H.C. Pretorius, S. Gupta, and Y.K. Dwivedi. 2021. Role of institutional pressures and resources in the adoption of big data analytics powered artificial intelligence, sustainable manufacturing practices and circular economy capabilities. Technological Forecasting and Social Change 163: 120420.

  • Banaji, Mahzarin R., and Anthony Greenwald. 2016. Blindspot: Hidden biases of good people. New York: Bantam.

  • Beran, Ondřej. 2018. An attitude towards an artificial soul? Responses to the “Nazi Chatbot”. Philosophical Investigations 41: 42–69.

  • Bird, Jordan J., Anikó Ekárt, and Diego R. Faria. 2018. Learning from interaction: An intelligent networked-based human-bot and bot-bot chatbot system. UK workshop on computational intelligence. Cham: Springer.

  • Bosker, Bianca. 2016. The binge breaker. The Atlantic. https://www.theatlantic.com/magazine/archive/2016/11/the-binge-breaker/501122/

  • Brendel, A.B., M. Mirbabaie, T.B. Lembcke, and L. Hofeditz. 2021. Ethical management of artificial intelligence. Sustainability 13: 1974.

  • Buckner, Cameron, and James Garson. 2019. Connectionism. In The Stanford Encyclopedia of Philosophy, Fall 2019 ed., ed. E.N. Zalta. Stanford: Metaphysics Research Lab, Stanford University.

  • Buer, Sven-Vegard, Jan Ola Strandhagen, and Felix Chan. 2018. The link between industry 4.0 and lean manufacturing: Mapping current research and establishing a research agenda. International Journal of Production Research 56: 2924–2940.

  • Carter, D. 2018. How real is the impact of artificial intelligence? The business information survey 2018. Business Information Review 35 (3): 99–115.

  • Danaher, John. 2016. The threat of algocracy: Reality, resistance and accommodation. Philosophy & Technology 29: 245–268.

  • Danaher, John, and N. McArthur, eds. 2017. Robot sex: Social and ethical implications. Cambridge, MA: MIT Press.

  • Douglas, Heather. 2009. Science, policy, and the value-free ideal. Pittsburgh, PA: University of Pittsburgh Press.

  • Florentine, Sharon. 2016. How artificial intelligence can eliminate bias in hiring. https://www.cio.com/article/3152798/artificialintelligence/how-artificial-intelligence-can-eliminate-biasin-hiring.html.

  • Fogg, B.J. 2003. Persuasive technology: Using computers to change what we think and do. San Francisco: Morgan Kaufmann.

  • Frankish, Keith, and William Ramsey. 2014. The Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press.

  • Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. 2018. When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research 62: 729–754.

  • Grodzinsky, F.S. 2017. Why big data needs the virtues. In Philosophy and computing essays in epistemology, philosophy of mind, logic, and ethics, ed. T.M. Powers, 221–234. Berlin: Springer.

  • Harris, Tristan. 2015. How technology is hijacking your mind — from a magician and Google design ethicist. Thrive Global. https://medium.com/thrive-global/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3

  • Harris, Tristan. 2017. Our minds have been hijacked by our phones. Tristan Harris wants to rescue them. Wired. https://www.wired.com/story/our-minds-have-been-hijacked-by-our-phones-tristan-harris-wants-to-rescue-them/

  • High-Level Expert Group on Artificial Intelligence. 2019. Ethics guidelines for trustworthy AI. Brussels: European Commission.

  • Jarrahi, M.H. 2018. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons 61: 577–586.

  • Khakurel, J., B. Penzenstadler, J. Porras, A. Knutas, and W. Zhang. 2018. The rise of artificial intelligence under the lens of sustainability. Technologies 6 (4): 100.

  • Kim, T.W., and A. Scheller-Wolf. 2019. Technological unemployment, meaning in life, purpose of business, and the future of stakeholders. Journal of Business Ethics 160: 319–337.

  • Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Cass Sunstein. 2018. Discrimination in the age of algorithms. Journal of Legal Analysis 10: 113–174.

  • Lee, Min Kyung. 2016. Algorithmic bosses, robotic colleagues: Toward human-centered algorithmic workplaces. XRDS: Crossroads, The ACM Magazine for Students 23: 42–47.

  • Liao, Yongxin, Fernando Deschamps, Eduardo de Freitas Rocha Loures, and Luiz Felipe Pierin Ramos. 2017. Past, present and future of Industry 4.0: A systematic literature review and research agenda proposal. International Journal of Production Research 55: 3609–3629.

  • Marcus, Gary. 2018. Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631.

  • Matt, Christian, Thomas Hess, and Alexander Benlian. 2015. Digital transformation strategies. Business & Information Systems Engineering 57: 339–343.

  • Mayer-Schönberger, Viktor, and Kenneth Cukier. 2013. Big data: A revolution that will transform how we live, work, and think. Boston: Houghton Mifflin Harcourt.

  • McInnis, Brian, Dan Cosley, Chaebong Nam and Gilly Leshed. 2016. Taking a hit: Designing around rejection, mistrust, risk, and workers’ experiences in Amazon mechanical Turk. In Proceedings of the 2016 Conference on Human Factors in Computing Systems (CHI 2016). ACM.

  • McNamee, Roger. 2019. Zucked: Waking up to the Facebook catastrophe. New York: Penguin.

  • Morin-Major, Julie Katia, Marie-France Marin, Nadia Durand, Nathalie Wan, Robert-Paul Juster, and Sonia Lupien. 2016. Facebook behaviors associated with diurnal cortisol in adolescents: Is befriending stressful? Psychoneuroendocrinology 63: 238–246. https://doi.org/10.1016/j.psyneuen.2015.10.005.

  • Nishant, Rohit, Mike Kennedy, and Jacqueline Corbett. 2020. Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management 53: 102104.

  • O’Neil, Cathy. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.

  • Olteanu, A., C. Castillo, F. Diaz, and E. Kıcıman. 2019. Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data 2: 13.

  • Oztemel, Ercan, and Samet Gursev. 2020. Literature review of industry 4.0 and related technologies. Journal of Intelligent Manufacturing 31: 127–182.

  • Pariser, Eli. 2012. The filter bubble: How the new personalized web is changing what we read and how we think. New York: Penguin.

  • Pearl, Judea, and Dana Mackenzie. 2018. The book of why: The new science of cause and effect. New York: Basic Books.

  • Phillips-Wren, Gloria. 2012. AI tools in decision making support systems: A review. International Journal on Artificial Intelligence Tools 21: 1240005.

  • Raji, Inioluwa, and Joy Buolamwini. 2019. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 Conference on Artificial Intelligence, Ethics, and Society (AIES 2019). AAAI/ACM.

  • Ryan, Tracii, Andrea Chester, John Reece, and Sophia Xenos. 2014. The uses and abuses of Facebook: A review of Facebook addiction. Journal of Behavioral Addictions. https://doi.org/10.1556/JBA.3.2014.016.

  • Sanders, Adam, Chola Elangeswaran, and Jens Wulfsberg. 2016. Industry 4.0 implies lean manufacturing: Research activities in industry 4.0 function as enablers for lean manufacturing. Journal of Industrial Engineering and Management 9: 811–833.

  • Scheutz, Matthias. 2002. Computationalism: The next generation. In Computationalism: New directions, ed. M. Scheutz, 1–21. Cambridge, MA: MIT Press.

  • Schwab, Klaus. 2016. The Fourth Industrial Revolution. Geneva: World Economic Forum.

  • Tabaka, Marla. 2017. Here's what's possibly causing your smartphone separation anxiety. Inc. https://www.inc.com/marla-tabaka/brain-hacking-why-you-have-smartphone-separation-anxiety.html

  • Vallor, Shannon. 2016. Technology and the virtues: A philosophical guide to a future worth wanting. New York: Oxford University Press.

  • Vallor, Shannon, and G.A. Bekey. 2017. Artificial intelligence and the ethics of self-learning robots. In Robot ethics 2.0, ed. P. Lin, K. Abney, and R. Jenkins, 338–353. Oxford: Oxford University Press.

  • Vasconcelos, Marisa, Carlos Cardonha, and Bernardo Gonçalves. 2018. Modeling epistemological principles for bias mitigation in AI systems: An illustration in hiring decisions. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 323–329.

  • Wakefield, Jane. 2016. Microsoft chatbot is taught to swear on Twitter. BBC News. Retrieved April 12, 2018, from http://www.bbc.co.uk/news/technology-35890188

  • Westerman, George, Didier Bonnet, and Andrew McAfee. 2014. The nine elements of digital transformation. MIT Sloan Management Review 55: 1–6.

  • Widyanto, Laura, and Mark Griffiths. 2006. ‘Internet addiction’: A critical review. International Journal of Mental Health and Addiction. https://doi.org/10.1007/s11469-006-9009-9.

Author information

Corresponding author

Correspondence to Benito Teehankee.

Ethics declarations

Conflict of Interest

Not applicable.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Kim, T.W., Maimone, F., Pattit, K. et al. Master and Slave: the Dialectic of Human-Artificial Intelligence Engagement. Humanist Manag J 6, 355–371 (2021). https://doi.org/10.1007/s41463-021-00118-w

  • DOI: https://doi.org/10.1007/s41463-021-00118-w

Keywords

  • Human-artificial intelligence engagement
  • Virtue ethics
  • Human flourishing