Ethics and Information Technology, Volume 20, Issue 1, pp 5–14

Society-in-the-loop: programming the algorithmic social contract

Iyad Rahwan

Original Paper


Abstract

Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, ‘SITL = HITL + Social Contract.’


Keywords: Ethics · Artificial intelligence · Society · Governance · Regulation



Acknowledgements

I am grateful for financial support from the Ethics & Governance of Artificial Intelligence Fund, as well as support from the Siegel Family Endowment. I am indebted to Joi Ito, Suelette Dreyfus, Cesar Hidalgo, Alex ‘Sandy’ Pentland, Tenzin Priyadarshi and Mark Staples for conversations and comments that helped shape this article. I am grateful to Brett Scott for allowing me to appropriate the term ‘Techno-Leviathan,’ which he originally presented in the context of cryptocurrency (Scott 2014). I thank Deb Roy for introducing me to Walter Lippmann’s ‘The Phantom Public’ and for constantly challenging my thinking. I thank Danny Hillis for pointing to the co-evolution of technology and societal values. I thank James Guszcza for suggesting the term ‘algorithm auditors’ and for other helpful comments.


References

  1. Aldewereld, H., Dignum, V., & Tan, Y.-H. (2014). Design for values in software development. In J. van den Hoven, P. E. Vermaas, & I. van de Poel (Eds.), Handbook of ethics, values, and technological design. Dordrecht: Springer.
  2. Allen, J., Guinn, C. I., & Horvitz, E. (1999). Mixed-initiative interaction. IEEE Intelligent Systems and Their Applications, 14(5), 14–23.
  3. Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120.
  4. Arrow, K. J. (2012). Social choice and individual values. New Haven: Yale University Press.
  5. Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
  6. Baldassarri, D., & Grossman, G. (2011). Centralized sanctioning and legitimate authority promote cooperation in humans. Proceedings of the National Academy of Sciences, 108(27), 11023–11027.
  7. Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2017). Fairness in criminal justice risk assessments: The state of the art. arXiv preprint arXiv:1703.09207.
  8. Binmore, K. (2005). Natural justice. Oxford: Oxford University Press.
  9. Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.
  10. Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.
  11. Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227.
  12. Cakmak, M., Chao, C., & Thomaz, A. L. (2010). Designing interactions for robot active learners. IEEE Transactions on Autonomous Mental Development, 2(2), 108–118.
  13. Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
  14. Callejo, P., Cuevas, R., Cuevas, A., & Kotila, M. (2016). Independent auditing of online display advertising campaigns. In Proceedings of the 15th ACM workshop on hot topics in networks (HotNets) (pp. 120–126).
  15. Castelfranchi, C. (2000). Artificial liars: Why computers will (necessarily) deceive us and each other. Ethics and Information Technology, 2(2), 113–119.
  16. Chen, Y., Lai, J. K., Parkes, D. C., & Procaccia, A. D. (2013). Truth, justice, and cake cutting. Games and Economic Behavior, 77(1), 284–297.
  17. Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1–33.
  18. Conitzer, V., Brill, M., & Freeman, R. (2015). Crowdsourcing societal tradeoffs. In Proceedings of the 2015 international conference on autonomous agents and multiagent systems (pp. 1213–1217). International Foundation for Autonomous Agents and Multiagent Systems.
  19. Crandall, J. W., & Goodrich, M. A. (2001). Experiments in adjustable autonomy. In 2001 IEEE international conference on systems, man, and cybernetics (Vol. 3, pp. 1624–1629). IEEE.
  20. Cuzzillo, T. (2015). Real-world active learning: Applications and strategies for human-in-the-loop machine learning. Technical report, O’Reilly.
  21. Delvaux, M. (2016). Motion for a European Parliament resolution: with recommendations to the commission on civil law rules on robotics. Technical Report (2015/2103(INL)), European Commission.
  22. Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415.
  23. Dinakar, K., Chen, J., Lieberman, H., Picard, R., & Filbin, R. (2015). Mixed-initiative real-time topic modeling & visualization for crisis counseling. In Proceedings of the 20th international conference on intelligent user interfaces (pp. 417–426). ACM.
  24. Etzioni, A., & Etzioni, O. (2016). AI assisted ethics. Ethics and Information Technology, 18(2), 149–156.
  25. Friedman, B. (1996). Value-sensitive design. Interactions, 3(6), 16–23.
  26. Fukuyama, F. (2011). The origins of political order: From prehuman times to the French Revolution. London: Profile Books.
  27. Gates, G., Ewing, J., Russell, K., & Watkins, D. (2015). How Volkswagen’s ‘defeat devices’ worked. New York Times.
  28. Gauthier, D. (1986). Morals by agreement. Oxford: Oxford University Press.
  29. Gürerk, Ö., Irlenbusch, B., & Rockenbach, B. (2006). The competitive advantage of sanctioning institutions. Science, 312(5770), 108–111.
  30. Hamilton, W. D. (1963). The evolution of altruistic behavior. American Naturalist, 97, 354–356.
  31. Haviland, W., Prins, H., McBride, B., & Walrath, D. (2013). Cultural anthropology: The human challenge. Boston: Cengage Learning.
  32. Helbing, D., & Pournaras, E. (2015). Society: Build digital democracy. Nature, 527, 33–34.
  33. Henrich, J. (2004). Cultural group selection, coevolutionary processes and large-scale cooperation. Journal of Economic Behavior & Organization, 53(1), 3–35.
  34. Hern, A. (2016). ‘Partnership on Artificial Intelligence’ formed by Google, Facebook, Amazon, IBM, Microsoft and Apple. The Guardian.
  35. Hobbes, T. (1651). Leviathan, or, The matter, forme, and power of a common-wealth ecclesiasticall and civill. London: Andrew Crooke.
  36. Horvitz, E. (1999). Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 159–166). ACM.
  37. IEEE (2016). Ethically aligned design. Technical report, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.
  38. Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., Van Riemsdijk, M. B., & Sierhuis, M. (2014). Coactive design: Designing support for interdependence in joint activity. Journal of Human-Robot Interaction, 3(1), 43–69.
  39. Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.
  40. Leben, D. (2017). A Rawlsian algorithm for autonomous vehicles. Ethics and Information Technology, 19, 107–115.
  41. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
  42. Letham, B., Rudin, C., McCormick, T. H., Madigan, D., et al. (2015). Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model. The Annals of Applied Statistics, 9(3), 1350–1371.
  43. Levy, S. (2010). The AI revolution is on. Wired.
  44. Lippmann, W. (1927). The phantom public. New Brunswick: Transaction Publishers.
  45. Littman, M. L. (2015). Reinforcement learning improves behaviour from evaluative feedback. Nature, 521(7553), 445–451.
  46. Liu, B. (2012). Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1), 1–167.
  47. Locke, J. (1689). Two treatises of government. Self published.
  48. Markoff, J. (2015). Machines of loving grace. New York: Ecco.
  49. MIT (2017). The moral machine. Retrieved January 1, 2017.
  50. Moulin, H., Brandt, F., Conitzer, V., Endriss, U., Lang, J., & Procaccia, A. D. (2016). Handbook of computational social choice. Cambridge: Cambridge University Press.
  51. National Science and Technology Council Committee on Technology. (2016). Preparing for the future of artificial intelligence. Technical report, Executive Office of the President.
  52. Nisan, N., Roughgarden, T., Tardos, E., & Vazirani, V. V. (2007). Algorithmic game theory (Vol. 1). Cambridge: Cambridge University Press.
  53. Nowak, M., & Highfield, R. (2011). Supercooperators: Altruism, evolution, and why we need each other to succeed. New York: Simon and Schuster.
  54. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown Publishing Group.
  55. O’Reilly, T. (2016). Open data and algorithmic regulation. In B. Goldstein & L. Dyson (Eds.), Beyond transparency: Open data and the future of civic innovation. San Francisco: Code for America Press.
  56. Orseau, L., & Armstrong, S. (2016). Safely interruptible agents. In Uncertainty in artificial intelligence: 32nd conference (UAI).
  57. Pariser, E. (2011). The filter bubble: What the internet is hiding from you. London: Penguin.
  58. Parkes, D. C., & Wellman, M. P. (2015). Economic reasoning and artificial intelligence. Science, 349(6245), 267–272.
  59. Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge: Harvard University Press.
  60. Pentland, A. S. (2013). The data-driven society. Scientific American, 309(4), 78–83.
  61. Pigou, A. C. (1920). The economics of welfare. London: Palgrave Macmillan.
  62. Rawls, J. (1971). A theory of justice. Cambridge: Harvard University Press.
  63. Richerson, P. J., & Boyd, R. (2005). Not by genes alone: How culture transformed human evolution. Chicago: University of Chicago Press.
  64. Rousseau, J.-J. (1762). The social contract.
  65. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211–252.
  66. Scott, B. (2014). Visions of a techno-leviathan: The politics of the bitcoin blockchain. E-International Relations.
  67. Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. Cambridge: MIT Press.
  68. Sheridan, T. B. (2006). Supervisory control. In Handbook of human factors and ergonomics (3rd ed., pp. 1025–1052). Hoboken: Wiley.
  69. Sigmund, K., De Silva, H., Traulsen, A., & Hauert, C. (2010). Social learning promotes institutions for governing the commons. Nature, 466(7308), 861–863.
  70. Skyrms, B. (2014). Evolution of the social contract. Cambridge: Cambridge University Press.
  71. Standing Committee of the One Hundred Year Study of Artificial Intelligence (2016). Artificial intelligence and life in 2030. Technical report, Stanford University.
  72. Sweeney, L. (2013). Discrimination in online ad delivery. Queue, 11(3), 10.
  73. Tambe, M., Scerri, P., & Pynadath, D. V. (2002). Adjustable autonomy for the real world. Journal of Artificial Intelligence Research, 17(1), 171–228.
  74. Thomaz, A. L., & Breazeal, C. (2008). Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence, 172(6), 716–737.
  75. Trivers, R. L. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology, 46, 35–57.
  76. Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Journal on Telecommunications and High Technology Law, 13, 203.
  77. Turchin, P. (2015). Ultrasociety: How 10,000 years of war made humans the greatest cooperators on earth. Chaplin: Beresta Books.
  78. Valentino-DeVries, J., Singer-Vine, J., & Soltani, A. (2012). Websites vary prices, deals based on users’ information. Wall Street Journal, 10, 60–68.
  79. Van de Poel, I. (2013). Translating values into design requirements. In Philosophy and engineering: Reflections on practice, principles and process (pp. 253–266). Dordrecht: Springer.
  80. Young, H. P. (2001). Individual strategy and social structure: An evolutionary theory of institutions. Princeton: Princeton University Press.

Copyright information

© Springer Science+Business Media B.V. 2017

Authors and Affiliations

  1. The Media Lab, Massachusetts Institute of Technology, Cambridge, USA
  2. Institute for Data, Systems and Society, Massachusetts Institute of Technology, Cambridge, USA
