Fantastic Interfaces and Where to Regulate Them: Three Provocative Privacy Reflections on Truth, Deception and What Lies Between

Conference paper

In: Digital Transformation of Collaboration (COINs 2019)

Part of the book series: Springer Proceedings in Complexity (SPCOM)

Abstract

Speech interfaces represent a new interactive phenomenon that entails massive personal data processing. The spectrum of legal issues arising from this interaction affects both user privacy and social relationships. This study addresses three potential issues, or ‘provocations’, relating to speech interaction that illustrate the challenges and complexity of this socio-legal domain: (i) the potential for lying; (ii) the possibility of breaching the law; (iii) the ability to interpret an order. It deploys an in-depth analysis of the related legal consequences and implications, with the aim of prompting discussion around these provocative issues. It first provides an overview of the correct hermeneutical approach for framing legal paradigms, highlighting the crucial legal aspects, conceptual approaches and interpretations to be considered when addressing the whole ‘interactive artificial agents’ (IAA) phenomenon. The study adopts the classical Civil Law methodology (a qualitative, top-down analysis). The core of the study then focuses on the three provocations as connected by personal data processing. The goal is to provide a critical legal analysis of those interfaces that could affect the foundation of human socio-legal interrelations. By raising awareness of these controversial aspects, the work contributes to fostering further discussion about interdisciplinary privacy issues that stand at the intersection of Law, the Social Sciences and HCI design, and that cross-pollinate one another.


Notes

  1. Which is defined as the ‘range of ideas tolerated in public discourse, also known as the window of discourse’ [5]. The concept holds that an idea's political viability depends mainly on whether it falls within this range, rather than on politicians' individual preferences. Indeed, according to Overton, the window contains the range of policies that a politician can recommend without appearing too extreme to gain or keep public office in the current climate of public opinion. The range is formed by the stages ‘unthinkable’, ‘radical’, ‘acceptable’, ‘sensible’, ‘popular’ and ‘policy’, which describe the different degrees of public perception.

  2. The relationship between users and service providers for the use of a service provided through an online platform (e.g. a social media platform) falls under the legal regime of contractual relationships. Although service providers allege that the service is provided for free (gratuitousness), this is not true: users pay for the service (counter-compensate it) with their personal data and their attention, and by accepting advertisement, profiling and the sale of their profiles to third parties. Therefore, for the Law, it is wrong to call them ‘users’; they should more appropriately be considered ‘clients’. Note, for instance, that after the Cambridge Analytica case Facebook deleted from its front page the phrase ‘It’s free and always will be’. However, this misconception of the data processing relationship implies treating personal data as commodities (and property), which is not the case. Under the GDPR and Civil Law, one does not own one's personal data; instead, one holds personhood rights over them (to license their exploitation under certain conditions set by the regulatory framework). Privacy rights are fundamental rights and, as such, cannot (and must not) be monetised.

  3. Meaning those contracts whose conditions cannot be negotiated and which essentially work as ‘take it or leave it’ proposals.

  4. Although the benefit of this improvement mostly accrues to service providers, strengthening their power of prediction and of shaping personalised advertisement.

  5. The so-called ‘Analogia Iuris’, i.e. the technique of applying, in the absence of a specific regulatory framework, the legal regime provided for a similar phenomenon; this is, e.g., what happened with airplanes at the beginning of the twentieth century, when navigation rules were borrowed to regulate the new phenomenon.

  6. When the term “Law” is capitalised, it refers to the whole Legal System. When it is not capitalised, it refers to a single law or norm.

  7. That is, legal fictions: because certain situations (acts, facts, conducts) cannot be measured, quantified, defined or proved, the Law assumes that under certain circumstances a fixed outcome follows, called a ‘presumption’. The party subject to the effects of a legal presumption is usually allowed to prove the contrary.

  8. To provide stability of social relationships through enforceability.

  9. It regulates, through standardisation, the technical requirements that a technology must meet in order to achieve the intended effects.

  10. The literature refers to these devices interchangeably as conversational agents, voice user interfaces, speech interfaces, digital or virtual assistants, smart speakers, and so on.

  11. Consider that, in Civil Law systems, the Law must be general and abstract, meaning that it should apply to as many people and embrace as many situations as possible.

  12. We avoid the use of ‘intelligent’ because the Law does not care about the intelligence level of an agent (be it natural or artificial), unless it constitutes incapability; the Law focuses on actions and, above all, on their effects. The quality (legal status) of the agent affects only its imputability. The point for the Law is not how clever an agent is but whether it can be considered a legal subject instead of a legal object. This goes for robots (embodied AIs) as well as for animals [14, 15].

  13. It is instead shaped as a licence agreement of use, paired with privacy policies accepted by adhesion.

  14. According to the GDPR, data processing must be informed by the principle of purpose, strictly linked with the principles of necessity and minimisation. These imply that personal data must be processed for specific purposes only, and that the data processed must be quantitatively and qualitatively minimised to those solely necessary to provide the requested service (a minimal sketch of this idea follows this note).
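
Purely as an illustration of how the purpose, necessity and minimisation principles could look in code, the following sketch filters a request payload down to the fields declared for a given purpose. All field names, purposes and the minimise helper are hypothetical, invented for this example rather than drawn from the paper or from any provider's actual implementation.

```python
# Hypothetical sketch of GDPR-style purpose limitation and data
# minimisation. Purposes, field names and the helper are invented
# for illustration only.

# Each declared purpose maps to the minimal set of fields strictly
# necessary to fulfil it.
ALLOWED_FIELDS_BY_PURPOSE = {
    "voice_command_execution": {"audio_transcript", "device_id"},
    "billing": {"account_id", "payment_token"},
}

def minimise(payload: dict, purpose: str) -> dict:
    """Keep only the fields strictly necessary for the declared purpose."""
    allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose)
    if allowed is None:
        # No declared purpose means no lawful basis for processing.
        raise ValueError(f"no purpose declared for processing: {purpose!r}")
    return {k: v for k, v in payload.items() if k in allowed}

raw = {
    "audio_transcript": "turn on the lights",
    "device_id": "dev-42",
    "contact_list": ["alice", "bob"],  # unnecessary for the request: dropped
}
print(minimise(raw, "voice_command_execution"))
# {'audio_transcript': 'turn on the lights', 'device_id': 'dev-42'}
```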

  15. Users cannot avoid advertisement, nor, usually, can they opt out of personalised ads.

  16. Usually dispatched together, so that users are forced to accept both by adhesion, with no ability to opt out of one or the other, or of single provisions.

  17. For instance, Siri replies ‘I’m not programmed to lie’, while Google Assistant replies ‘I’d never lie to you’.

  18. However, this is merely empirical. The terms and conditions do not cover this situation, and there is no proof that the system shuts down in order to avoid lying or disclosing the truth. It remains a fact, however, that it does so every time for the same set of questions.

  19. GDPR Article 25.

  20. Consider that the goal of legal systems is to provide a twofold tool for social stability (the reliability of socio-legal relationships), represented by foreseeability and certainty.

  21. Articles 5 and 32 of the General Data Protection Regulation (GDPR) [41].

  22. There is no discussion yet in the literature on the issue of knowing partners’ personal information and on the extent of this hypothetical right.

  23. Often (legally) misused to approach self-driving-car scenarios, their liability consequences and the related impact on the programming of AI systems.

  24. Whoever commits the act because they were forced to do so by the necessity of saving themselves or others from a present danger of serious harm is not punishable.

  25. The so-called ‘exculpatory causes’, which are established by the law and identify particular situations whose occurrence makes it lawful to commit what would otherwise be a crime. For instance, self-defence.

  26. GDPR Article 22.

  27. However, consider that the WP29 guidelines [49, 50] underline that this should not be the case and that, instead, human intervention in the decision-making process should be substantial.

  28. And according to GDPR Article 22.

  29. Furthermore, it runs against the general penal principles of Civil Law systems: one cannot be charged for the mere intention to commit a crime and, therefore, cannot be prevented from expressing that intention. For the Law, it is the action that is relevant.

  30. Which, in both Economics and Criminology, falls within rational-choice theory, which posits that humans are reasoning actors who weigh means and ends, costs and benefits, through utilitarian approaches, in order to make a rational choice [52, 53] (a standard formalisation is sketched below).
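
For readers unfamiliar with rational-choice reasoning, the cost-benefit weighing mentioned in this note is commonly formalised along the lines of Becker's expected-utility account of offending. The sketch below is a standard textbook formalisation, not taken from the paper, and the symbols follow the usual conventions rather than the author's.

```latex
% Becker-style expected-utility model of the decision to offend
% (standard textbook formalisation, not from the paper).
% p : perceived probability of being caught and punished
% Y : gain obtained from the offence
% f : monetary equivalent of the punishment
% U : the actor's utility function
\[
  EU_{\text{offend}} = p \, U(Y - f) + (1 - p) \, U(Y)
\]
% The rational actor offends only when this expected utility exceeds
% that of the best lawful alternative, i.e. EU_offend > U_legal.
```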

References

  1. S. Perez, Over a quarter of US adults now own a smart speaker, typically an Amazon Echo (2019), https://techcrunch.com/2019/03/08/over-a-quarter-of-u-s-adults-now-own-a-smart-speaker-typically-an-amazon-echo/. Last accessed 7 Dec 2019

  2. S. Forsdick, Amazon launches Alexa for business, but will office smart speakers take off? (2019), https://www.ns-businesshub.com/technology/amazon-launches-alexa-forbusiness/. Last accessed 7 Dec 2019

  3. K. Noda, Google Home: smart speaker as environmental control unit. Disabil. Rehabil. Assist. Technol. (2017). https://doi.org/10.1080/17483107.2017.1369589

  4. L. Clark, N. Pantidi et al., What makes a good conversation? Challenges in designing truly conversational agents, in 2019 Conference on Human Factors in Computing Systems (CHI 2019). arXiv:1901.06525 [cs.HC] (2019)

  5. J.P. Overton, https://www.mackinac.org/OvertonWindow. Last accessed 7 Dec 2019

  6. T. Wu, Blind spot: the attention economy and the law, Columbia Law School (2017), Scholarship archive. https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?article=3030&context=faculty_scholarship. Last accessed 27 Dec 2019

  7. H. Chung, S. Lee, Intelligent virtual assistant knows your life (2018), arXiv:1803.00466 [cs.CY]

  8. S. Sekine, H. Wakahara, Natural language processing device, method, and program. EP2653981A1 European Patent Office (2011)

  9. N. Bostrom, E. Yudkowsky, The ethics of artificial intelligence, in Cambridge Handbook of Artificial Intelligence, ed. by W. Ramsey, K. Frankish (Cambridge University Press, 2011)

  10. F. Raso, H. Hilligoss et al., Artificial intelligence & human rights: opportunities & risks (2018), https://cyber.harvard.edu/publication/2018/artificial-intelligence-human-rights. Last accessed 7 Dec 2019

  11. C. Chhetri, V.G. Motti, Eliciting privacy concerns for smart home devices from a user centred perspective, in iConference 2019, ed. by N.G. Taylor et al. (LNCS 11420, 2019), pp. 91–101

  12. P. Cohen, A. Cheyer et al., On the future of personal assistants, in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (ACM, 2016), pp. 1032–1037

  13. H. Schnädelbach, D. Kirk (eds.), People, Personal Data and the Built Environment (Springer, 2019)

  14. E. Stradella, P. Salvini et al., Robot companions as case-scenario for assessing the “subjectivity” of autonomous agents: some philosophical and legal remarks, in CEUR Workshop Proceedings (2012), http://ceur-ws.org/Vol-885/paper4.pdf. Last accessed 27 Dec 2019

  15. E. Schaerer, R. Kelley et al., Robots as animals: a framework for liability and responsibility in human-robot interactions, in 18th IEEE International Symposium on Robot and Human Interactive Communication Toyama (Japan, 2009)

  16. Y. Goldberg, A primer on neural network models for natural language processing. J. Artif. Intell. Res. 57, 345–420 (2016)

  17. A. Turing, Computing Machinery and Intelligence. Mind LIX(236), 433–460 (1950)

  18. J. Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (W.H. Freeman and Company, 1976), pp. 2, 3, 6, 182, 189

  19. Google Duplex. Google blog. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html. Last accessed 7 Dec 2019

  20. R. De Renesse, Virtual digital assistants to overtake world population by 2021 (2017), https://ovum.informa.com/resources/product-content/virtual-digital-assistants-to-overtake-world-population-by-2021. Last accessed 7 Dec 2019

  21. R. Leenes, F. Lucivero, Laws on robots, laws by robots, laws in robots: regulating robot behaviour by design. Law Innova. Technol. 6(2), 193–220 (2014)

  22. F. Andrade, P. Novais et al., Contracting agents: legal personality and representation. Artif. Intell. Law 15(4), 357–373 (2007)

  23. W. Wallach, From robots to techno Sapiens: ethics, law and public policy in the development of robotics and neurotechnologies. Law Innov. Technol. 3(2), 185–207 (2011)

  24. N. Abdi, K. Ramokapane, J.M. Such, More than smart speakers: security and privacy perceptions of smart home personal assistants, in Proceedings of the 15th Symposium on Usable Privacy and Security (USA, 2019), pp. 451–466

  25. M.M. Losavio, K.P. Chow et al., The internet of things and the smart city: legal challenges with digital forensic, privacy and security. Secur. Privacy 1(23), 1–11 (2018)

  26. P. Cheng, I.E. Bagci et al., Smart speaker privacy control—acoustic tagging for personal voice assistants, in IEEE Workshop on the Internet of Safe Things (SafeThings 2019) (2019)

  27. Y. Liao, J. Vitak et al., Understanding the role of privacy and trust in intelligent personal assistant adoption, in iConference 2019, ed. by N.G. Taylor et al. (LNCS 11420, 2019), pp. 102–113

  28. Z. Ruttkay, C. Pelachaud (eds.), From Brows to Trust: Evaluating Embodied Conversational Agents (Kluwer Academic Publishers, 2004)

  29. N.M. Richards, W.D. Smart, How should the law think about robots?, in Robot Law, ed. by A.M. Froomkin, I. Kerr, R. Calo (Edward Elgar Publishing, 2016)

  30. M. Alovisio, C. Blengino et al., The Law of Service Robots, ed. by C. Artusio, M.A. Senor. Report (NEXA Center for Internet and Society, 2015)

  31. Alexa and Alexa device terms, https://www.amazon.com/gp/help/customer/display.html?nodeId=201566380. Last accessed 7 Dec 2019

  32. Google Assistant terms of service, https://developers.google.com/assistant/console/policies/terms-of-service. Last accessed 7 Dec 2019

  33. Apple software license agreements, https://www.apple.com/legal/sla/. Last accessed 7 Dec 2019

  34. Microsoft cortana and privacy, https://support.microsoft.com/en-us/help/4468233/cortana-and-privacy-microsoft-privacy. Last accessed 7 Dec 2019

  35. S. Tiribelli, Nuovi Media e Bellezza. La narrazione estetica del sé tra potenziamento e falsificazione. In Etica e Bellezza, ed. by S. Achella, F. Miano. SIFM 3, 263–272 (2019)

  36. K. Cox, Why Amazon’s Alexa can’t tell you if it’s connected to the CIA (2019), https://www.consumerreports.org/consumerist/why-amazons-alexa-cant-tell-you-if-its-connected-to-the-cia/. Last accessed 7 Dec 2019

  37. Amazon skills. https://www.amazon.com/Cloudlands-Dev-One-Lie/dp/B07GLN2D2L. Last accessed 7 Dec 2019

  38. Google Assistant API terms of services, https://developers.google.com/assistant/sdk/terms-of-service. Last accessed 7 Dec 2019

  39. Apple discussion, https://discussions.apple.com/thread/250073486. Last accessed 7 Dec 2019

  40. L. Bolognini, C. Bistolfi, L’età del consenso digitale. Privacy e minori on line, riflessioni sugli impatti dell’art. 8 del Regolamento 2016/679(UE). CNAC report (2017)

  41. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation)

  42. D.A. Moses, M.K. Leonard, Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nat. Commun. 10, 3096 (2019). https://doi.org/10.1038/s41467-019-10994-4

  43. Robot refuses to follow orders given by a human, https://www.dailymail.co.uk/video/news/video-1232192/Robot-follows-order-walk-table-trusting-caught.html. Last accessed 7 Dec 2019

  44. A. Bertolini, Robots and liability: justifying a change in perspective, in Rethinking Liability in Science and Technology, ed. by F. Battaglia, N. Mukerji et al. (Pisa University Press, 2014)

  45. J. Danaher, Robots, law and the retribution gap. Ethics Inf. Technol. 18, 299–309 (2016)

  46. R. Withers, The EU is trying to decide whether to grant robots personhood (2018), https://slate.com/technology/2018/04/the-eu-is-trying-to-decide-whether-to-grant-robots-personhood.html. Last accessed 7 Dec 2019

  47. J.J. Thomson, Killing, letting die, and the trolley problem. Monist 59, 204–217 (1976)

  48. S. Wachter, B. Mittelstadt, L. Floridi, Transparent, explainable, and accountable AI for robotics. Sci. Robot. 2(6), eaan6080 (2017)

  49. G.M. Riva, Net Neutrality matters: privacy antibodies for information monopolies and mass profiling. Publicum 5, 2 (2019), https://doi.org/10.12957/publicum.2019.47199. https://www.e-publicacoes.uerj.br/index.php/publicum/article/view/47199

  50. A. Ravà, Il diritto come norma tecnica (Cagliari, Dessì, 1911)

  51. PYMNTS, IBM gives Alexa another rival with Watson Assistant (2018), https://www.pymnts.com/news/artificial-intelligence/2018/ibm-alexa-watson-assistant-ai-smart-speaker/. Last accessed 27 Dec 2019

  52. N. Garoupa, Economic theory of criminal behavior, in Encyclopedia of Criminology and Criminal Justice, ed. by G. Bruinsma, D. Weisburd, Chapt. 327 (2014)

  53. R.V. Clarke, Situational crime prevention, in Building a Safer Society: Strategic Approaches to Crime Prevention, ed. by M. Tonry, D. Farrington (The University of Chicago Press, Chicago, 1995). ISBN 0-226-80824-6


Acknowledgements

This investigation was carried out thanks to the Fulbright-Schuman grant scheme for Visiting Scholars’ Research Projects. The author would like to thank Dr. Marguerite Barry, who bears no responsibility for the content, for her valuable suggestions.

Author information

Correspondence to Gianluigi M. Riva.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Riva, G.M. (2020). Fantastic Interfaces and Where to Regulate Them: Three Provocative Privacy Reflections on Truth, Deception and What Lies Between. In: Przegalinska, A., Grippa, F., Gloor, P. (eds) Digital Transformation of Collaboration. COINs 2019. Springer Proceedings in Complexity. Springer, Cham. https://doi.org/10.1007/978-3-030-48993-9_15
