Can a Robot Be a Good Colleague?


This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people (e.g. soldiers giving “fallen” military robots military funerals and medals of honor) are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make work meaningful. In discussing whether robots can be good colleagues, the paper compares that question to the more widely discussed questions of whether robots can be our friends or romantic partners. The paper argues that the ideal of being a good colleague has many different parts, and that on a behavioral level, robots can live up to many of the criteria typically associated with being a good colleague. Moreover, the paper also argues that in comparison with the more demanding ideals of being a good friend or a good romantic partner, it is comparatively easier for a robot to live up to the ideal of being a good colleague. The reason for this is that the “inner lives” of our friends and lovers are more important to us than the inner lives of our colleagues.


  1.

    Different kinds of more or less advanced bomb disposal robots have been used for the last 40 years. For a brief history and account of what they do, see Allison (2016). See also Garreau (2007).

  2.

    We should note that we say “he” here because Boomer’s human collaborators viewed Boomer in a gendered way, thinking of the robot as a “he”.

  3.

    More generally, the fabric of society is crucially dependent on good work communities. For example, work-related illnesses put a strain on society, in addition to being burdensome for the working people themselves.

  4.

    Darling also understands a social robot as a robot specifically designed to interact with human beings on a “social level”, such that it can potentially be a “companion” to the human beings it interacts with (Darling 2016, 215–216). The idea of a robot specifically designed to be some sort of companion is a little stronger than what we have in mind when we are asking whether a robot can be a good colleague—at least on an understanding of “companion” that suggests some sort of friendship. But we follow Darling in understanding a social robot as being one that can interact and communicate with human beings on a “social level”, to some extent. Those are the kinds of robots, we think, that stand the best chance of being perceived as colleagues by humans who might work with them.

  5.

    The idea of a responsibility gap refers to a situation in which some morally significant outcome has been brought about for which it appears appropriate to find somebody to hold responsible, but where it is unclear whether there is any particular person or persons who could justifiably be held responsible. For example, if a robot with a significant form of functional autonomy harms a human being, it seems right that somebody should be held responsible for this. But it will not always be clear who exactly it is appropriate to hold responsible. See, e.g., Sparrow (2007) and Nyholm (2018).

  6.

    The website, for example, claims to sell a sex robot called “Roxxxy”, which can become a “true companion”.

  7.

    Having come up with a draft list of criteria, we ran our initial list by three work psychologists at our university, to see whether these conditions fit with what is usually understood as good collegial relationships in workplace psychology. Subsequently, we presented a revised list at a philosophy conference, asking for feedback from the audience attending our presentation (which consisted of around 40–50 people). The audience at that particular conference—a large Dutch philosophy conference—found our list intuitively plausible, and did not suggest any further criteria.

  8.

    One last general remark: we intend this list to have wide application, across various different types of work. But we recognize that depending on what type of work is in question, different criteria may have different importance or priority in terms of what makes for a good colleague within the particular line of work in question. A more specialized discussion—e.g. of what makes somebody a good colleague in an intensive care unit or in a large restaurant kitchen—would make it appropriate to try to rank or assign weights to these different criteria. A more general discussion, such as our present discussion, appears best conducted without an attempt to assign specific weights or rankings to these different criteria.

  9.

    We understand being reliable and being trustworthy as two distinct, but to some extent related, criteria for being a good colleague. Being trustworthy is, for example, a more demanding criterion than being reliable is. For more on the issue of robots and trust, see footnote 16.

  10.

    Another comment one might make about these suggested criteria is that they bring up the question of whether robot colleagues should be given some form of moral and/or legal status. We will very briefly comment on the issue of whether robot colleagues should be treated with some degree of moral concern in our concluding remarks below. But since our main focus in this paper is on whether robots can live up to the ideal of being good colleagues, we will save a more thorough discussion of the moral and legal status—or potential lack thereof—of robot colleagues for another occasion. For related discussion, see Bryson (2018) and Gunkel (2018).

  11.

    When we say that we are especially interested in criteria associated with being a good colleague rather than criteria for being a good friend, we are referring specifically to what we called “virtue friendships” above. There may be significant overlap between what is involved in being a good colleague and what is involved in being a good utility friend: good colleagues are useful to each other, for example, just like good utility friends are useful to each other. At the same time, though, there are also differences between being good colleagues and being good utility friends. Collegial relationships, for example, exist within the context of workplaces, where the colleagues have contracts specifying what their work tasks are, which might include specifications concerning ways in which they need to work together with their colleagues. Utility friendships, as we understand them, are typically not governed by any explicit contracts.

  12.

    We are inspired here by a distinction that John Danaher draws between technical and metaphysical obstacles to the prospects for robots to be able to be our friends. For Danaher’s discussion of the distinction between technical and metaphysical possibilities as those relate to human–robot friendship, see Danaher (2019, p. 11).

  13.

    Some workers interviewed in an empirical study by Sauppé and Mutlu (2015) explicitly asked for the collaborative manufacturing robots they worked with to be equipped with the capability for small talk. In that way, their interaction with their robotic co-workers would be more like working with human colleagues.

  14.

    Human–robot conversation has been extensively studied for several decades. A recent review concludes that “we seem to be still far from our goal of fluid and natural verbal and non-verbal communication between humans and robots” (Mavridis 2015, p. 31). Nevertheless, according to the review, considerable progress is being made. And although there are some tough challenges, there appear to be no in-principle, or metaphysical, obstacles to fluent human–robot conversation.

  15.

    Tay was trained on the inputs from human users. Some of the inputs from human users were racist or otherwise highly inappropriate in nature. The result was that Tay started generating morally inappropriate sentences, based on the human inputs in the training data. See Gunkel (2018).

  16.

    Notably, Groom and Nass (2007) challenge the conceptualization of robots as full-fledged team members, arguing that humans do not sufficiently trust robots. They argue that robots do not have humanlike mental models and consequently cannot share in the team’s mental model that enables the team to work well together. As a result, robots cannot engage in the relevant trust-building interactions in ways that can make human team members come to trust the robots. Especially in safety-critical situations, humans will feel unable to rely on robots, and Groom and Nass seem to view this as an in-principle limitation of robots. In response, we would like to make a few brief remarks. In the first place, a good robotic colleague is not necessarily a full-fledged team member. For example, we can imagine that the manufacturing robots mentioned above, studied by Sauppé and Mutlu (2015), only interact with their direct operators and are good colleagues merely to them. Secondly, it could turn out to be difficult to design robots that will be sufficiently trusted in contexts where the lives of human workers are at considerable risk. In that case, the application context of robotic colleagues would be somewhat restricted, but this would not settle this paper’s question, since there would be many other contexts where robots potentially could be good colleagues. However, the most sensible approach seems to be to suspend judgment and see how trust in robots will develop in future work practices. Lots of research is being done on human–robot trust, for example on ways in which robots could repair human trust (Robinette et al. 2015) or even help teams to moderate interpersonal conflict (Jung et al. 2015). See also Coeckelbergh (2012) and Alaieri and Vellino (2016).

  17.

    For related discussion, see Richard Bright’s interview with the philosopher Keith Frankish (Frankish 2018) “AI and Consciousness”, Interalia Magazine, Issue 39, February 2018, available here: (Accessed on August 21, 2019).

  18.

    We want to emphasize that our claim here is not that people typically have no concern about the inner lives of their colleagues. Our claim is, instead, a comparative claim according to which the inner lives of those who are considered to be good colleagues typically matter less—perhaps even much less—to us than the inner lives of our friends or romantic partners matter to us.

  19.

    David Gunkel argues that anytime that we apply human labels to robots (including labels like “slave” or “servant”) this creates pressure to ask whether any rights—even minimal rights—associated with those labels in the human case would also need to be extended to the robots. See Gunkel’s critical discussion of Bryson (2010) in Gunkel (2018).

  20.

    Many thanks to Hannah Berkers, Pascale Le Blanc, Sonja Rispens, Jason Borenstein, the anonymous reviewers for this journal, and an audience at the sixth Annual OZSW Philosophy Conference in 2018 at Twente University for valuable feedback on this material. This work is part of the research program “Working with or Against the Machine? Optimizing Human–Robot Collaboration in Logistics Warehouses” with Project Number 10024747, which is (partly) financed by the Dutch Research Council (NWO).


  1. Alaieri, F., & Vellino, A. (2016). Ethical decision making in robots: Autonomy, trust and responsibility. In Agah, et al. (Eds.), Social robotics (pp. 159–168). Berlin: Springer.

  2. Allison, P. R. (2016). What does a bomb disposal robot actually do? BBC Future.

  3. Aristotle. (1999). Nicomachean ethics. Indianapolis: Hackett.

  4. Beck, J. (2013). Married to a doll: Why one man advocates synthetic love. The Atlantic.

  5. Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., & Tanaka, F. (2018). Social robots for education: A review. Science Robotics, 3(21), eaat5954.

  6. Bryson, J. J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions (pp. 63–74). London: John Benjamins.

  7. Bryson, J. J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15–26.

  8. Calvo, R. A., D'Mello, S., Gratch, J., & Kappas, A. (2014). Oxford handbook of affective computing. Oxford: Oxford University Press.

  9. Carpenter, J. (2016). Culture and human–robot interactions in militarized spaces. London: Routledge.

  10. Cavallo, F., Semeraro, F., Fiorini, L., Magyar, G., Sinčák, P., & Dario, P. (2018). Emotion modelling for social robotics applications: A review. Journal of Bionic Engineering, 15(2), 185–203.

  11. Coeckelbergh, M. (2010). Artificial companions: Empathy and vulnerability mirroring in human–robot relations. Studies in Ethics, Law, and Technology, 4(3), 1–17.

  12. Coeckelbergh, M. (2012). Can we trust robots? Ethics & Information Technology, 14(1), 53–60.

  13. Danaher, J. (2017). Will life be worth living in a world without work? Science and Engineering Ethics, 23(1), 41–64.

  14. Danaher, J. (2018). Embracing the robot. Aeon.

  15. Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24.

  16. Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In M. Froomkin, R. Calo, & I. Kerr (Eds.), Robot law (pp. 213–232). Cheltenham: Edward Elgar.

  17. Decker, M., Fischer, M., & Ott, I. (2017). Service robotics and human labor: A first technology assessment of substitution and cooperation. Robotics and Autonomous Systems, 87, 348–354.

  18. Elder, A. (2017). Friendships, robots, and social media. London: Routledge.

  19. Ford, M. (2015). Rise of the robots: Technology and the threat of a jobless future. New York: Basic Books.

  20. Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25(3), 305–323.

  21. Frankish, K. (2018). AI and consciousness (R. Bright, Interviewer). Retrieved from

  22. Garber, M. (2013, September 20). Funerals for fallen robots. Retrieved 7 December 2018, from

  23. Garreau, J. (2007). Bots on the ground. Washington Post. Retrieved from

  24. Gheaus, A., & Herzog, L. (2016). The goods of work (other than money!). Journal of Social Philosophy, 47(1), 70–89.

  25. Gombolay, M. C., Gutierrez, R. A., Clarke, S. G., Sturla, G. F., & Shah, J. A. (2015). Decision-making authority, team efficiency and human worker satisfaction in mixed human–robot teams. Autonomous Robots, 39(3), 293–312.

  26. Groom, V., & Nass, C. (2007). Can robots be teammates? Benchmarks in human–robot teams. Interaction Studies, 8(3), 483–500.

  27. Gunkel, D. (2018). Robot rights. Cambridge, MA: The MIT Press.

  28. Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human–robot interaction. Human Factors, 53(5), 517–527.

  29. Harris, J. (2019). Reading the minds of those who never lived. Enhanced beings: The social and ethical challenges posed by super intelligent AI and reasonably intelligent humans. Cambridge Quarterly of Healthcare Ethics, 28(4), 585–591.

  30. Hauskeller, M. (2017). Automatic sweethearts for transhumanists. In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 203–218). Cambridge, MA: The MIT Press.

  31. Heyes, C. (2018). Cognitive gadgets. Cambridge, MA: Harvard University Press.

  32. Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.

  33. Iqbal, T., & Riek, L. D. (2017). Human–robot teaming: Approaches from joint action and dynamical systems. In A. Goswami & P. Vadakkepat (Eds.), Humanoid robotics: A reference (pp. 1–20). Berlin: Springer.

  34. Jung, M. F., Martelaro, N., & Hinds, P. J. (2015). Using robots to moderate team conflict: The case of repairing violations. In Proceedings of the tenth annual ACM/IEEE international conference on human–robot interaction (pp. 229–236). New York: ACM.

  35. Kolodny, N. (2003). Love as valuing a relationship. Philosophical Review, 112(2), 135–189.

  36. Kontzer, T. (2016). Deep learning cuts error rate for breast cancer diagnosis | NVIDIA Blog. Retrieved 27 October 2018, from

  37. Kudina, O., & Verbeek, P. P. (2019). Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy. Science, Technology, & Human Values, 44(2), 291–314.

  38. Kühler, M. (2014). Loving persons: Activity and passivity in romantic love. In C. Maurer (Ed.), Love and its objects (pp. 41–55). London: Palgrave MacMillan.

  39. Levin, J. (2018). Functionalism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.

  40. Levy, D. (2008). Love and sex with robots. London: Harper.

  41. Ljungblad, S., Kotrbova, J., Jacobsson, M., Cramer, H., & Niechwiadowicz, K. (2012). Hospital robot at work: Something alien or an intelligent colleague? In Proceedings of the ACM 2012 conference on computer supported cooperative work (pp. 177–186). New York: ACM.

  42. Lysova, E. I., Allan, B. A., Dik, B. J., Duffy, R. D., & Steger, M. F. (2018). Fostering meaningful work in organizations: A multi-level review and integration. Journal of Vocational Behavior.

  43. Madden, C., & Bailey, A. (2016). What makes work meaningful—Or meaningless. MIT Sloan Management Review, 57(4), 52–61.

  44. Marraffa, M. (2019). Theory of mind. The Internet Encyclopedia of Philosophy, ISSN 2161-0002.

  45. Martela, F., & Riekki, T. J. J. (2018). Autonomy, competence, relatedness, and beneficence: A multicultural comparison of the four pathways to meaningful work. Frontiers in Psychology, 9, 1157.

  46. Mavridis, N. (2015). A review of verbal and non-verbal human–robot interactive communication. Robotics and Autonomous Systems, 63, 22–35.

  47. Nyholm, S. (2018). Attributing agency to automated systems: reflections on human–robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201–1219.

  48. Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. London: Rowman & Littlefield International.

  49. Nyholm, S., & Frank, L. (2017). From sex robots to love robots: Is mutual love with a robot possible? In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 219–244). Cambridge, MA: The MIT Press.

  50. Pettit, P. (2015). The robust demands of the good. Oxford: Oxford University Press.

  51. Plato. (1997). Symposium. In J. Cooper (Ed.), Complete works (pp. 457–505). Indianapolis: Hackett.

  52. Robinette, P., Howard, A. M., & Wagner, A. R. (2015). Timing is key for robot trust repair. In A. Tapus, E. André, J.-C. Martin, F. Ferland, & M. Ammi (Eds.), Social robotics (pp. 574–583). New York: Springer.

  53. Robinette, P., Howard, A. M., & Wagner, A. R. (2017). Effect of robot performance on human–robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems, 47(4), 425–436.

  54. Roessler, B. (2012). Meaningful work: Arguments from autonomy. Journal of Political Philosophy, 20(1), 71–93.

  55. Royakkers, L., & Van Est, R. (2015). Just ordinary robots: Automation from love to war. London: CRC Press.

  56. Sauppé, A., & Mutlu, B. (2015). The social impact of a robot co-worker in industrial settings. In Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 3613–3622). New York: ACM.

  57. Savela, N., Turja, T., & Oksanen, A. (2018). Social acceptance of robots in different occupational fields: A systematic literature review. International Journal of Social Robotics, 10(4), 493–502.

  58. Schwartz, A. (1982). Meaningful work. Ethics, 92(4), 634–646.

  59. Smids, J., Nyholm, S., & Berkers, H. (2019). Robots in the workplace: A threat to—or opportunity for—meaningful work? Philosophy & Technology.

  60. Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  61. Su, N. M., Liu, L. S., & Lazar, A. (2014). Mundanely miraculous: The robot in healthcare. In Proceedings of the 8th nordic conference on human-computer interaction: fun, fast, foundational (pp. 391–400). New York: ACM.

  62. Torta, E., Oberzaucher, J., Werner, F., Cuijpers, R. H., & Juola, J. F. (2012). Attitudes towards socially assistive robots in intelligent homes: Results from laboratory studies and field trials. Journal of Human-Robot Interaction, 1(2), 76–99.

  63. Ward, S. J., & King, L. A. (2017). Work and the good life: How work contributes to meaning in life. Research in Organizational Behavior, 37, 59–82.

  64. You, S., & Robert Jr., L. P. (2018). Human–robot similarity and willingness to work with a robotic co-worker. In Proceedings of the 2018 ACM/IEEE international conference on human–robot interaction (pp. 251–260). New York: ACM.

Author information



Corresponding author

Correspondence to Sven Nyholm.




Cite this article

Nyholm, S., Smids, J. Can a Robot Be a Good Colleague? Sci Eng Ethics 26, 2169–2188 (2020).



  • Robots
  • Colleagues
  • Meaningful work
  • Human–robot interaction
  • Friendship and love