SHAPE: A Framework for Evaluating the Ethicality of Influence

  • Conference paper
Multi-Agent Systems (EUMAS 2023)

Abstract

Agents often exert influence when interacting with humans and non-human agents. However, the ethical status of such influence is often unclear. In this paper, we present the SHAPE framework, which lists reasons why influence may be unethical. We draw on literature from descriptive and moral philosophy and connect it to machine learning to help guide ethical considerations when developing algorithms with potential influence. Lastly, we explore mechanisms for governing influential algorithmic systems, inspired by regulation in journalism, human subject research, and advertising.

Notes

  1. An extensive analysis of the notion of “sensitive information” and why it is critical can be found in [111].

References

  1. Abadi, M., et al.: Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318. CCS 2016, Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2976749.2978318

  2. Adams, J., Tyrrell, R., Adamson, A.J., White, M.: Effect of restrictions on television food advertising to children on exposure to advertisements for ‘less healthy’ foods: Repeat cross-sectional study. PLOS ONE 7(2), 1–6 (2012). https://doi.org/10.1371/journal.pone.0031578

  3. Adar, E., Tan, D.S., Teevan, J.: Benevolent deception in human computer interaction. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1863–1872 (2013)

  4. Adobe Inc.: AI Ethics. https://www.adobe.com/uk/about-adobe/aiethics.html

  5. Adomavicius, G., Bockstedt, J.C., Curley, S.P., Zhang, J.: Do recommender systems manipulate consumer preferences? A study of anchoring effects. Inf. Syst. Res. 24(4), 956–975 (2013). https://doi.org/10.1287/isre.2013.0497

  6. Anderson, D.: An epistemological conception of safe spaces. Soc. Epistemology 35(3), 285–311 (2021). https://doi.org/10.1080/02691728.2020.1855485

  7. Asch, S.E.: Opinions and social pressure. Sci. Am. 193(5), 31–35 (1955). https://doi.org/10.1038/scientificamerican1155-31

  8. Athanassoulis, N., Wilson, J.: When is deception in research ethical? Clin. Ethics 4(1), 44–49 (2009). https://doi.org/10.1258/ce.2008.008047

  9. Aytac, U.: Digital domination: Social media and contestatory democracy. Polit. Stud. 00323217221096564 (2022). https://doi.org/10.1177/00323217221096564

  10. Bai, H., Voelkel, J.G., Eichstaedt, J.C., Willer, R.: Artificial intelligence can persuade humans on political issues (2023). https://doi.org/10.31219/osf.io/stakv. https://osf.io/stakv/

  11. Barocas, S., Nissenbaum, H.: Big data’s end run around anonymity and consent. Priv. Big Data Public Good: Frameworks Engagem. 1, 44–75 (2014)

  12. Baron, M.: Manipulativeness. In: Proceedings and Addresses of the American Philosophical Association, vol. 77, no. 2, pp. 37–54 (2003). http://www.jstor.org/stable/3219740

  13. Benn, C., Lazar, S.: What’s wrong with automated influence. Can. J. Philos. 52(1), 125–148 (2022). https://doi.org/10.1017/can.2021.23

  14. Benthall, S., Gürses, S., Nissenbaum, H., et al.: Contextual integrity through the lens of computer science. Now Publishers (2017)

  15. Berdichevsky, D., Neuenschwander, E.: Toward an ethics of persuasive technology. Commun. ACM 42(5), 51–58 (1999). https://doi.org/10.1145/301353.301410

  16. Bloomfield, B.P., Coombs, R.: Information technology, control and power: the centralization and decentralization debate revisited. J. Manage. Stud. 29(4), 459–459 (1992)

  17. Blumenthal-Barby, J.S.: A framework for assessing the moral status of manipulation. In: Coons, C., Weber, M. (eds.) Manipulation: Theory and Practice, pp. 121–134. Oxford University Press (2014)

  18. Boine, C.: AI-enabled manipulation and EU law (2021). https://doi.org/10.2139/ssrn.4042321

  19. Brewer, B.R., Fagan, M., Klatzky, R.L., Matsuoka, Y.: Perceptual limits for a robotic rehabilitation environment using visual feedback distortion. IEEE Trans. Neural Syst. Rehab. Eng. 13(1), 1–11 (2005)

  20. BCS, The Chartered Institute for IT: Code of conduct for BCS members (2022). https://www.bcs.org/media/2211/bcs-code-of-conduct.pdf

  21. Bublitz, J.C., Merkel, R.: Crimes against minds: on mental manipulations, harms and a human right to mental self-determination. Crim. Law Philos. 8(1), 51–77 (2014). https://doi.org/10.1007/s11572-012-9172-y

  22. Buss, S.: Valuing autonomy and respecting persons: manipulation, seduction, and the basis of moral constraints. Ethics 115(2), 195–235 (2005). https://doi.org/10.1086/426304

  23. Cambridge dictionary (2023). https://dictionary.cambridge.org. Accessed 23 July 2023

  24. Carlson, M.: Whither anonymity? Journalism and unnamed sources in a changing media environment. In: Journalists, Sources, and Credibility, pp. 49–60. Routledge (2010)

  25. Carroll, M., Hadfield-Menell, D., Russell, S., Dragan, A.: Estimating and penalizing preference shift in recommender systems. In: Proceedings of the 15th ACM Conference on Recommender Systems, pp. 661–667. RecSys 2021, Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3460231.3478849

  26. Carson, T.L.: Lying and Deception: Theory and Practice. Oxford University Press, Oxford (2010)

  27. Carvalho, D.V., Pereira, E.M., Cardoso, J.S.: Machine learning interpretability: a survey on methods and metrics. Electronics 8(8), 832 (2019). https://doi.org/10.3390/electronics8080832. https://www.mdpi.com/2079-9292/8/8/832

  28. Cave, E.M.: What’s wrong with motive manipulation? Ethical Theor. Moral Pract. 10(2), 129–144 (2007). https://doi.org/10.1007/s10677-006-9052-4

  29. Chan, A., et al.: Harms from increasingly agentic algorithmic systems. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pp. 651–666. FAccT 2023, Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3593013.3594033

  30. Association for Computing Machinery: ACM code of ethics and professional conduct (2018). https://www.acm.org/code-of-ethics

  31. Coppock, A., Hill, S.J., Vavreck, L.: The small effects of political advertising are small regardless of context, message, sender, or receiver: evidence from 59 real-time randomized experiments. Sci. Adv. 6(36), eabc4046 (2020). https://doi.org/10.1126/sciadv.abc4046

  32. Coy, P.: Can A.I. and democracy fix each other? New York Times (2023). https://www.nytimes.com/2023/04/05/opinion/artificial-intelligence-democracy-chatgpt.html

  33. Criado, N., Such, J.M.: Implicit contextual integrity in online social networks. Inf. Sci. 325, 48–69 (2015). https://doi.org/10.1016/j.ins.2015.07.013

  34. Deschênes, M.: Recommender systems to support learners’ agency in a learning context: a systematic review. Int. J. Educ. Technol. High. Educ. 17(1), 50 (2020). https://doi.org/10.1186/s41239-020-00219-w

  35. Dierkens, N.: Information asymmetry and equity issues. J. Financ. Quant. Anal. 26(2), 181–199 (1991)

  36. Domaradzki, J.: The Werther effect, the Papageno effect or no effect? A literature review. Int. J. Environ. Res. Public Health 18(5), 2396 (2021). https://doi.org/10.3390/ijerph18052396

  37. Douglas, T., Forsberg, L.: Three rationales for a legal right to mental integrity. In: Ligthart, S., van Toor, D., Kooijmans, T., Douglas, T., Meynen, G. (eds.) Neurolaw. Palgrave Studies in Law, Neuroscience, and Human Behavior, pp. 179–201. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-69277-3_8

  38. Dwork, C.: Differential Privacy. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 1–12. Springer, Heidelberg (2006). https://doi.org/10.1007/11787006_1

  39. Dynel, M.: Comparing and combining covert and overt untruthfulness: on lying, deception, irony and metaphor. Pragmatics Cogn. 23(1), 174–208 (2016)

  40. Ekstrand, J.D., Ekstrand, M.D.: First do no harm: considering and minimizing harm in recommender systems designed for engendering health. In: Engendering Health Workshop at the RecSys 2016 Conference, pp. 1–2. ACM (2016)

  41. Etzioni, A., Etzioni, O.: Incorporating ethics into artificial intelligence. J. Ethics 21(4), 403–418 (2017). https://doi.org/10.1007/s10892-017-9252-2

  42. European Parliament: EU AI Act: First regulation on Artificial Intelligence (2023). https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

  43. Evans, C., Kasirzadeh, A.: User tampering in reinforcement learning recommender systems (2022)

  44. Everitt, T., Hutter, M., Kumar, R., Krakovna, V.: Reward tampering problems and solutions in reinforcement learning: a causal influence diagram perspective. Synthese 198(Suppl 27), 6435–6467 (2021)

  45. Faden, R.R., Beauchamp, T.L.: A History and Theory of Informed Consent. Oxford University Press, Oxford (1986)

  46. Ferrero, L.: An introduction to the philosophy of agency. In: The Routledge Handbook of Philosophy of Agency. Routledge (2022)

  47. Fischer, J.M.: Responsibility and manipulation. J. Ethics 8(2), 145–177 (2004). https://doi.org/10.1023/B:JOET.0000018773.97209.84

  48. Frost, C.: Journalism Ethics and Regulation. Taylor & Francis, Milton Park (2015). https://books.google.co.uk/books?id=K5b4CgAAQBAJ

  49. Garnett, M.: Agency and inner freedom. Noûs 51(1), 3–23 (2017). http://www.jstor.org/stable/26631435

  50. Google LLC: Google AI Review Process. https://ai.google/responsibility/ai-governance-operations/

  51. Gorin, M.: Do manipulators always threaten rationality? Am. Philos. Q. 51(1), 51–61 (2014)

  52. Habermas, J.: Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. The MIT Press, Cambridge (1996)

  53. Hasher, L., Goldstein, D., Toppino, T.: Frequency and the conference of referential validity. J. Verbal Learn. Verbal Behav. 16(1), 107–112 (1977). https://doi.org/10.1016/S0022-5371(77)80012-1

  54. High Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI (2019)

  55. Hoeyer, K., Hogle, L.F.: Informed consent: the politics of intent and practice in medical research ethics. Ann. Rev. Anthropol. 43(1), 347–362 (2014). https://doi.org/10.1146/annurev-anthro-102313-030413

  56. Hofmann, B.: Suffering: harm to bodies, minds, and persons. In: Handbook of the Philosophy of Medicine, pp. 129–145 (2017)

  57. Howard, P., Ganesh, B., Liotsiou, D., Kelly, J., François, C.: The IRA, social media and political polarization in the United States, 2012–2018. US Senate Documents (2019)

  58. Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., Naaman, M.: Co-writing with opinionated language models affects users’ views. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. CHI 2023, Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3544548.3581196

  59. Kang, H., Lou, C.: AI agency vs. human agency: understanding human–AI interactions on TikTok and their implications for user engagement. J. Comput.-Mediated Commun. 27(5), zmac014 (2022). https://doi.org/10.1093/jcmc/zmac014

  60. Kenton, Z., Kumar, R., Farquhar, S., Richens, J., MacDermott, M., Everitt, T.: Discovering agents (2022)

  61. Kidd, I.J.K., Medina, J., Pohlhaus Jr., G. (eds.): The Routledge Handbook of Epistemic Injustice. Routledge, London (2017). https://doi.org/10.4324/9781315212043

  62. Kligman, M., Culver, C.M.: An analysis of interpersonal manipulation. J. Med. Philos. A Forum Bioeth. Philos. Med. 17(2), 173–197 (1992). https://doi.org/10.1093/jmp/17.2.173

  63. Kramer, A.D.I., Guillory, J.E., Hancock, J.T.: Experimental evidence of massive-scale emotional contagion through social networks. Proc. Natl Acad. Sci. 111(24), 8788–8790 (2014). https://doi.org/10.1073/pnas.1320040111

  64. Krueger, D., Maharaj, T., Leike, J.: Hidden incentives for auto-induced distributional shift (2020)

  65. Lavazza, A.: Freedom of thought and mental integrity: the moral requirements for any neural prosthesis. Front. Neurosci. 12, 82 (2018). https://doi.org/10.3389/fnins.2018.00082. https://www.frontiersin.org/articles/10.3389/fnins.2018.00082

  66. Lee, M.K., et al.: WeBuildAI: participatory framework for algorithmic governance. Proc. ACM Hum.-Comput. Interact. 3(CSCW), 1–35 (2019)

  67. Levine, T.R.: Encyclopedia of Deception, vol. 2. Sage Publications, Thousand Oaks (2014)

  68. Lewandowsky, S., van der Linden, S.: Countering misinformation and fake news through inoculation and prebunking. Eur. Rev. Soc. Psychol. 32(2), 348–384 (2021)

  69. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: a review of machine learning interpretability methods. Entropy 23(1), 18 (2021). https://doi.org/10.3390/e23010018. https://www.mdpi.com/1099-4300/23/1/18

  70. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)

  71. Mahon, J.E.: Contemporary Approaches to the Philosophy of Lying. In: The Oxford Handbook of Lying. Oxford University Press, Oxford (2018). https://doi.org/10.1093/oxfordhb/9780198736578.013.3

  72. Martin, C.W.: The Philosophy of Deception. Oxford University Press, Oxford (2009)

  73. Mirsky, Y., Lee, W.: The creation and detection of deepfakes: a survey. ACM Comput. Surv. (CSUR) 54(1), 1–41 (2021)

  74. Nissenbaum, H.: Privacy as contextual integrity. Wash. L. Rev. 79, 119 (2004)

  75. Noggle, R.: Manipulative actions: a conceptual and moral analysis. Am. Philoso. Q. 33(1), 43–55 (1996)

  76. Noggle, R.: The ethics of manipulation. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Summer 2022 edn. (2022)

  77. Nozick, R.: Coercion. In: Morgenbesser, S., Suppes, P., White, M. (eds.) Philosophy, Science, and Method: Essays in Honor of Ernest Nagel, pp. 440–472. St Martin’s Press, New York (1969)

  78. World Health Organization, et al.: Ethics and governance of artificial intelligence for health: WHO guidance (2021)

  79. Ovadya, A.: Towards platform democracy: Policymaking beyond corporate CEOs and partisan pressure. https://www.belfercenter.org/publication/towards-platform-democracy-policymaking-beyond-corporate-ceos-and-partisan-pressure

  80. Ovadya, A.: ‘Generative CI’ through collective response systems (2023)

  81. Peczenik, A., Karlsson, M.M.: Law, justice and the state: essays on justice and rights. In: Proceedings of the 16th World Congress of the International Association for Philosophy of Law and Social Philosophy (IVR) Reykjavík, 26 May-2 June, 1993, vol. 1. Franz Steiner Verlag (1995)

  82. Pessach, D., Shmueli, E.: A review on fairness in machine learning. ACM Comput. Surv. 55(3), 51:1–51:44 (2023). https://doi.org/10.1145/3494672

  83. Porlezza, C.: Accuracy in journalism (2019). https://doi.org/10.1093/acrefore/9780190228613.013.773

  84. Ross, W.D.: Foundations of Ethics. Read Books Ltd., Redditch (2011)

  85. Rubel, A., Castro, C., Pham, A.: Autonomy, agency, and responsibility. In: Algorithms and Autonomy: The Ethics of Automated Decision Systems, pp. 21–42. Cambridge University Press, Cambridge (2021). https://doi.org/10.1017/9781108895057.002

  86. Rudinow, J.: Manipulation. Ethics 88(4), 338–347 (1978). https://doi.org/10.1086/292086

  87. Sachs, B.: Why coercion is wrong when it’s wrong. Australas. J. Philos. 91(1), 63–82 (2013). https://doi.org/10.1080/00048402.2011.646280

  88. Sahbane, I., Ward, F.R., Åslund, C.H.: Experiments with detecting and mitigating AI deception (2023)

  89. Sanders, K.: Ethics and Journalism. SAGE Publications, Thousand Oaks (2003). https://books.google.co.uk/books?id=5khuTNSQ6rYC

  90. Schlosser, M.: Agency. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Winter 2019 edn. (2019)

  91. Schmidt, A.T., Engelen, B.: The ethics of nudging: an overview. Philos. Compass 15(4), e12658 (2020). https://doi.org/10.1111/phc3.12658

  92. Schwartz, M.: Repetition and rated truth value of statements. Am. J. Psychol. 95(3), 393–407 (1982). https://doi.org/10.2307/1422132

  93. European Parliament, Panel for the Future of Science and Technology: The impact of the general data protection regulation (GDPR) on artificial intelligence (2020)

  94. Selinger, E., Whyte, K.: Is there a right way to nudge? The practice and ethics of choice architecture. Soc. Compass 5(10), 923–935 (2011). https://doi.org/10.1111/j.1751-9020.2011.00413.x

  95. Sententia, W.: Neuroethical considerations: cognitive liberty and converging technologies for improving human cognition. Ann. New York Acad. Sci. 1013(1), 221–228 (2004). https://doi.org/10.1196/annals.1305.014

  96. Seymour Fahmy, M.: Love, respect, and interfering with others. Pacific Philos. Q. 92(2), 174–192 (2011). https://doi.org/10.1111/j.1468-0114.2011.01390.x

  97. Shiffrin, S.V.: Speech Matters: On Lying, Morality, and the Law. Princeton University Press, Princeton (2014). https://doi.org/10.1515/9781400852529

  98. Spahn, A.: And lead us (not) into persuasion...? Persuasive technology and the ethics of communication. Sci. Eng. Ethics 18(4), 633–650 (2012). https://doi.org/10.1007/s11948-011-9278-y

  99. Srikumar, M., et al.: Advancing ethics review practices in AI research. Nat. Mach. Intell. 4(12), 1061–1064 (2022). https://doi.org/10.1038/s42256-022-00585-2

  100. Sripada, C.S.: What makes a manipulated agent unfree? Philos. Phenomenological Res. 85(3), 563–593 (2012). https://doi.org/10.1111/j.1933-1592.2011.00527.x

  101. Sunstein, C.R.: The ethics of nudging. Yale J. Regul. 32(2), 413–450 (2015)

  102. Sunstein, C.R.: The Ethics of Influence: Government in the Age of Behavioral Science. Cambridge University Press, Cambridge (2016)

  103. Taylor, J.S.: Practical Autonomy and Bioethics. Routledge, New York (2009). https://doi.org/10.4324/9780203873991

  104. Thomas, S.L., et al.: Young people’s awareness of the timing and placement of gambling advertising on traditional and social media platforms: a study of 11–16-year-olds in Australia. Harm Reduction J. 15(1), 51 (2018). https://doi.org/10.1186/s12954-018-0254-6

    Article  Google Scholar 

  105. Thorburn, L., Stray, J., Bengani, P.: Is optimizing for engagement changing us? Understanding recommenders (2022). https://medium.com/understanding-recommenders/is-optimizing-for-engagement-changing-us-9d0ddfb0c65e

  106. Tushnet, R.: Chapter 11: Truth and Advertising: The Lanham Act and Commercial Speech Doctrine. Edward Elgar Publishing, Cheltenham, UK (2008). https://doi.org/10.4337/9781848441316.00020

  107. UK Department for Science, Innovation and Technology: A Pro-innovation Approach to AI Regulation (2023)

  108. US Office of Science and Technology Policy: Blueprint for an AI Bill of Rights (2022)

  109. Vold, K., Whittlestone, J.: Privacy, Autonomy, and Personalised Targeting: rethinking how personal data is used. Apollo-University of Cambridge Repository (2019). https://doi.org/10.17863/CAM.43129

  110. Véliz, C.: Privacy is Power: Why and How You Should Take Back Control of Your Data. Transworld Digital, London (2020)

  111. Wacks, R.: Personal Information: Privacy and the Law. Clarendon Press, Oxford (1989)

  112. Waller, M., Rodrigues, O., Cocarascu, O.: Bias mitigation methods for binary classification decision-making systems: survey and recommendations (2023)

  113. Ward, F.R., Everitt, T., Belardinelli, F., Toni, F.: Honesty is the best policy: defining and mitigating AI deception. https://causalincentives.com/pdfs/deception-ward-2023.pdf

  114. Ward, F.R., Toni, F., Belardinelli, F.: A causal perspective on AI deception in games. In: Proceedings of the 2022 International Conference on Logic Programming Workshops (2022)

  115. Ward, F.R., Toni, F., Belardinelli, F.: On agent incentives to manipulate human feedback in multi-agent reward learning scenarios. In: AAMAS, pp. 1759–1761 (2022)

  116. Ward, F.R., Toni, F., Belardinelli, F.: Defining deception in structural causal games. In: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, pp. 2902–2904 (2023)

  117. Ward, S.J.A.: Objectivity and bias in journalism (2019). https://doi.org/10.1093/acrefore/9780190228613.013.853

  118. Weidinger, L., et al.: Ethical and social risks of harm from language models (2021). https://arxiv.org/abs/2112.04359

  119. Wood, A.W.: Coercion, manipulation, exploitation. In: Manipulation: Theory and Practice. Oxford University Press, Oxford (2014). https://doi.org/10.1093/acprof:oso/9780199338207.003.0002

  120. Zuboff, S.: The Age of Surveillance Capitalism. Public Affairs, New York (2019)

Acknowledgements

The authors were supported by UK Research and Innovation [grant number EP/S023356/1], in the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (safeandtrustedai.org), co-located at King’s College London and Imperial College London.

Author information

Corresponding author

Correspondence to Luke Thorburn.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Bezou-Vrakatseli, E., Brückner, B., Thorburn, L. (2023). SHAPE: A Framework for Evaluating the Ethicality of Influence. In: Malvone, V., Murano, A. (eds) Multi-Agent Systems. EUMAS 2023. Lecture Notes in Computer Science, vol. 14282. Springer, Cham. https://doi.org/10.1007/978-3-031-43264-4_11

  • DOI: https://doi.org/10.1007/978-3-031-43264-4_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43263-7

  • Online ISBN: 978-3-031-43264-4
