What Is New with the Internet of Things in Privacy and Data Protection? Four Legal Challenges on Sharing and Control in IoT

Part of the Law, Governance and Technology Series book series (LGTS, volume 36)

Abstract

The Internet of Things (IoT) creates an intelligent, invisible network fabric that can be sensed, controlled and programmed, in ways that enable artefacts to communicate, directly or indirectly, with each other and the internet. This network is rapidly evolving into the networked connection of people, processes, data and things (i.e., the web of "everything"). While the latter promises to improve our lives, by anticipating our preferences, optimizing our choices and taking care of many daily habits, the evolution of IoT is likely to raise new legal and technological challenges. This paper examines four such challenges in the fields of privacy and data protection. Drawing on today's debate on the architecture, standards, and design of IoT, these challenges concern: (i) the realignment of traditional matters of privacy and data protection brought on by structural data sharing and new levels and layers of connectivity and communication; (ii) collective, rather than individual, data protection; (iii) technological convergence, e.g. robotics and other forms of artificial agency, which may impact some further pillars of the field, such as data controllers; and (iv) the relation between technological standards and legal standards. Since, properly speaking, we still do not have a universal IoT, the current debate represents an opportunity to take these legal challenges seriously, and to envisage what new environment we may wish for.

Keywords

Agency · Artificial agents · Control · Data controllers · Data protection · Data sharing · Group privacy · Internet of Things · Machine-to-machine interaction · Privacy · Robotics · Standards

3.1 Introduction

A global network connecting personal computers has impressively evolved to include not only mobile devices, such as phones, but everyday objects. The ongoing development of an "Internet of Things" (IoT) concerns the number of interlinked devices used by an increasing number of people, as well as the quality and features of their usage. Potentially every object, of whatever size and nature, may be equipped with sensors or connected to a computing system, becoming a smart device. The goal is to collect data from the surrounding environment, and to process, analyse and communicate it through the internet. "Things" become able to "sense," that is, to interpret both people's behaviours and needs and their environment, so as to provide an adequate and effective response (e.g. a suggestion, an alert, etc.), or to act on their behalf. The applications of IoT extend to different sectors. On the one hand, think about smart homes, that is, IoT implementations in the management and control of utilities in houses and buildings, such as heating, lighting, security systems and household appliances. On the other hand, consider IoT applications for smart cities, which refer to the use of networks of sensors and computers to maximise the efficiency of traffic, public transport, street lighting, etc. (Davies 2015). Other developments in IoT include smart cars, industrial robots, wearable devices (smart watches, sleep tracker bracelets, clothes, etc.), and e-Health services to improve care and reduce costs. In addition, cloud computing offers easily accessible and cheap data storing and sharing platforms. Moreover, progress in wireless technologies is expected to bring us soon to 5G (fifth generation) ubiquitous mobile telecommunications technology, which will integrate new radio access networks seamlessly with existing network technologies (Davies 2016).

IoT functionality and success clearly rely on the collection, processing and sharing of data (e.g., personal and environmental), such as localization data, habits, health conditions, preferences, etc. These data may be transmitted to central services and to other devices (computers) in real time, in such a way that individuals do not even realize it (Monteleone 2011). Furthermore, Big Data instruments (e.g. algorithms) allow the analysis and correlation of data, creating patterns of behaviour and trends. As emphasized by the Art. 29 Working Party in its Opinion on the Internet of Things (Art. 29 WP 2014), IoT can thus be grasped as an infrastructure in which billions of sensors embedded in common, everyday devices – "things" as such, or things linked to other objects or individuals – are designed to record, process, store and transfer data. Associated with unique identifiers, each device interacts with other devices or systems using networking capabilities. Since IoT hinges on the principle of the extensive processing of data through sensors designed to communicate unobtrusively and to exchange data in a seamless way, it is closely linked to the notions of "pervasive" and "ubiquitous" computing (Weiser 1993).

Benefits from IoT applications are expected for consumers, businesses and public authorities. People can save money by reducing energy consumption through smart meters; they can save time by using smart parking applications; they can be healthier by getting encouragement and feedback from sport-related wearable devices, or by being remotely monitored at home if sick or elderly. Individuals can also obtain personalized products and services based on their preferences and choices (expressed through the data provided by themselves or collected by the system). Businesses can provide better products and services based on consumers' behaviour, protect their belongings through sophisticated security systems, or become more efficient. IoT can also benefit public entities, by improving services and reducing costs (e-Health care systems, road safety monitoring, etc.). In its 2009 Communication, the European Commission recognized the potential of IoT in terms of economic growth and improvement in well-being for individuals and society as a whole (EC 2009).

Although expectations for IoT are high, some experts think they may be "inflated," since IoT is still far from a real take-off. Studies suggest that IoT is not widely implemented and that, properly speaking, no Internet of Things exists as such (Gartner 2014). In addition to technological issues concerning, say, protocols and standards, other challenges need to be addressed in order for IoT to be implemented, as stressed by the European Parliament in its Resolution on IoT (EP 2010), including ethical, social, cultural and legal issues. Among the latter, consider matters of intellectual property and consumer law, ownership, due process, environmental law, and more. The EP, in particular, calling on the Commission to engage in proactive consultation with the industry sector, pointed out that privacy and security have to be taken into account at the earliest possible stage in the development and deployment of any IoT technology. Furthermore, IoT applications must be operated in accordance with the rules enshrined in Articles 7 and 8 of the Charter of Fundamental Rights of the EU. As emerges from the European Commission Road Map for completing the Digital Single Market (EC 2015a), initiatives are being taken at the EU level so as to achieve a "better access for consumers and businesses to digital goods and services across Europe" by "creating the right conditions for digital networks and services to flourish." The latter intent included accelerating the reform of the data protection field (which occurred with EU Regulation 2016/679), considered essential to increase trust in digital and online services and therefore to boost the Digital Single Market (EC 2015b).

Some sources report that users feel their Internet data are protected to some extent, yet worry about their de-contextualization. Others show that most European internet users are concerned about the amount of data they are asked for and about how their personal data are further used (Eurobarometer 359 and 431). Privacy and data protection are in any case significant issues in the development of IoT. Data minimization, encryption and anonymization, introduced in the design of technical devices as early as possible, may reduce risks and concerns, while helping developers and end-users comply with the relevant rules (a minimal code sketch at the end of this introduction illustrates two such measures). In the aforementioned 2014 Opinion on IoT, the Art. 29 WP identifies eight main data protection issues that may arise from IoT. They include:
  1. Lack of transparency;
  2. Existence and involvement of various actors (often unknown to users);
  3. Loss of control over data processing through IoT devices;
  4. Difficulty in getting users' consent, and quality of consent undermined;
  5. Lack of granularity of the services provided by the IoT devices, with the consequence that users may be obliged to refuse the whole service if they do not accept a particular data processing aspect of the device;
  6. The possibility to process more data than required for the original purposes;
  7. The use of data collected from different sources and devices for different purposes than the original;
  8. Security risks in the transmission of personal data (from personal devices to central services or to other devices).

The Opinion provides some practical recommendations in order to ensure not only that IoT complies with data protection rules, but also that IoT technical developments and benefits are not prevented by data protection compliance measures (Salgado 2014). Drawing on this basis, the aim of this paper is to cast light on what the issues stressed by the Art. 29 WP may have in common, and to identify and discuss other, less-explored challenges of IoT in the field. This set of issues does not only regard today's legal framework of data protection, i.e. one mostly providing for transparency in the ways in which data are collected, processed, and distributed. This set of issues, in the phrasing of Hannah Arendt, also regards the protection of people's "opaqueness," that is, privacy (Arendt 1958). As suggested by today's debate on the architecture, standards, and design of IoT, we thus have to examine a set of problems in both fields of privacy and data protection that concern: (i) the realignment of traditional matters in these sectors of the law, triggered by structural data sharing and new levels and layers of connectivity and communication; (ii) collective, rather than individual, data protection and privacy; (iii) technological convergence, e.g. robotics and other forms of artificial agency, which may affect some legal concepts of the field, such as data controllers; and (iv) the relation between technological standards and legal standards. We will scrutinize these issues separately, in each of the four sections of this paper. Then, the time will be ripe for the conclusions.
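
Before turning to these four challenges, and as a purely illustrative aside, consider how two of the design measures mentioned above, data minimization and pseudonymization, might be embedded in a device's logic. The following sketch is a toy example: the field whitelist, the record layout and the salting scheme are assumptions made for illustration, not requirements drawn from the Opinion.

    import hashlib
    import os

    # Hypothetical whitelist: only the fields strictly needed for the service
    # (minimization/finality); everything else is dropped at the device itself.
    ALLOWED_FIELDS = {"device_id", "timestamp", "temperature"}

    # Per-deployment random salt, so raw identifiers never leave the device.
    SALT = os.urandom(16)

    def minimize(reading: dict) -> dict:
        """Keep only whitelisted fields of a raw sensor reading."""
        return {k: v for k, v in reading.items() if k in ALLOWED_FIELDS}

    def pseudonymize(record: dict) -> dict:
        """Replace the direct identifier with a salted one-way hash."""
        record = dict(record)
        device_id = str(record.pop("device_id")).encode()
        record["pseudonym"] = hashlib.sha256(SALT + device_id).hexdigest()
        return record

    raw = {"device_id": "thermostat-42", "timestamp": 1700000000,
           "temperature": 21.5, "wifi_ssid": "HomeNet"}
    print(pseudonymize(minimize(raw)))  # wifi_ssid never reaches the network

The point is simply that both safeguards operate before any transmission occurs, which is what introducing them "as early as possible" means in design terms.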

3.2 IoT and Structural Data Sharing

As stressed above in the introduction, IoT creates an invisible, smart network fabric that can be sensed, controlled and programmed. The intent is that artefacts can communicate, directly or indirectly, with each other and the internet. This network is rapidly evolving into the networked connection of people, processes, data and things (the so-called web of "everything"). Against this backdrop, it is important to understand how the connectivity and communication of IoT are designed, and how this invisible, smart network fabric is structured at different levels and layers. The technical design of IoT is particularly relevant in this context, since different privacy and data protection concerns may hinge on different levels and layers of the IoT connectivity and communication structure. In a nutshell, the focus should be on how data "flow" in IoT or, conversely, on how the flow of information should be restricted.

Leaving the history of IoT growth aside, it is still an open question what IoT ultimately amounts to. Some complain that its "dynamic global network infrastructure with self-configuring capabilities based on standard and interoperable communication protocol" (CERP-IoT 2009) is "usually confused with other related concepts, such as radio frequency identification (RFID), wireless sensor network (WSN), electronic product code (EPC), machine to machine (M2M), cloud computing, and cyber-physical systems (CPS)" (Ning 2013: 3). Others reckon that IoT is originally equivalent to – or at least based on – machine-to-machine (M2M) communication (Wang 2011). M2M communication basically aims to connect devices by means of any wired or wireless communication channel. The M2M concept has evolved from a remote network of machines, conveying information back to a central hub and rerouting such information into the system, towards a wider grid of networks transmitting data to personal applications through a considerable variety of mechanisms, such as sensors, devices, robots, tools, server computers, and grid systems. From this wider stance, some present M2M communication as an essential technology for IoT (Foschini et al. 2011). However, IoT is meant to provide people, systems, services, processes, smart objects and agents with advanced means of connectivity and communication that go beyond M2M networks. By implementing "various kinds of communication modes, such as thing-to-human, human-to-thing, and thing-to-thing communication" (Ning 2011: 4), mechanisms of network and syntactic interoperability cover a larger variety of protocols, domains and applications. Furthermore, "many of these early M2M solutions… were based on closed purpose-built networks and proprietary or industry-specific standards, rather than on Internet Protocol (IP)-based networks and Internet standards" (Rose et al. 2015: 7).

According to Huansheng Ning (2011: 4–6), IoT presents intrinsic features, namely the integration of the physical and the cyber world, by means of the following components: (1) ubiquitous sensing; (2) a network of networks; and (3) intelligent processing. In addition, we should distinguish between the architecture and the model of IoT. The former "describes IoT from the network topology and IoT logical organizational view, while model describes IoT from different functional aspects" (Ning 2011: 11). This distinction invites us to scrutinize the different levels and layers that structure the set of IoT technologies and their integration of the physical and cyber worlds. Next, Sect. 3.2.1 focuses on the levels of IoT. Then, Sect. 3.2.2 dwells on its different layers. The overall aim of this section is to stress the first novelty of IoT for the current privacy and data protection framework, namely structural data sharing.

3.2.1 Levels of IoT

From the standpoint of the network topology and the logical organization of IoT connectivity and communication, three technical levels of its architecture should be under scrutiny (a toy sketch of these levels follows below):
  1. The level of basic connectivity: this level concerns the mechanisms that aim to establish physical and logical connection between systems (i.e. agents, objects, devices, processes, and so forth);
  2. The level of network interoperability: this level concerns the mechanisms that enable communication and the exchange of messages between all the multiple physically and logically connected systems across a variety of networks;
  3. The level of syntactic interoperability: this level concerns the understanding of the data structure in all the messages among and between the connected systems.

By examining the IoT architecture vis-à-vis its network technology and logical organization, it is clear that IoT is not an isolated and static system of interactions between communicating entities, such as agents, processes, devices, and the like. Rather, it is a more complex and dynamic environment, in which no networked entity has full control over the whole system of interactions and communication, e.g. over data that are remotely stored and shared with other parties. What Adam Thierer remarked in the field of wearable technologies can thus be extended to IoT: "There are also concerns for those in environments where others are using wearable technologies. Such individuals may not be able to control how the wearable technologies used by others might be capturing their actions or data, and it may prove difficult if not impossible for them to grant consent in such contexts" (Thierer 2015: 55).
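
A deliberately naive sketch may make the three levels concrete. In the toy example below, all names and the message schema are invented; levels (1) and (2) are assumed to have delivered some bytes, and level (3) asks whether the receiver understands their structure:

    import json

    # Assumed shared schema: the precondition of syntactic interoperability.
    SCHEMA = {"sensor": str, "unit": str, "value": float}

    def understands(payload: bytes) -> bool:
        """Level-3 check: can the receiver parse the message structure?"""
        try:
            msg = json.loads(payload)
        except ValueError:
            return False  # bytes arrived (levels 1 and 2), but no shared syntax
        return all(isinstance(msg.get(k), t) for k, t in SCHEMA.items())

    print(understands(b'{"sensor": "thermo", "unit": "C", "value": 21.5}'))  # True
    print(understands(b"\x01\x02TEMP21.5"))  # False: proprietary framing

The failure case is instructive: the bytes of a closed, proprietary format arrive perfectly well, yet no syntactic interoperability, and hence no IoT-wide communication, is achieved.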

Along these lines, we also find the arguments of the Art. 29 Working Party in the aforementioned 2014 Opinion on recent developments of IoT (doc. WP 223). The warning of the EU data protection authorities revolved around the risks brought on by the architecture of IoT, and how data are structurally shared and stored. In their phrasing, "once the data is remotely stored, it may be shared with other parties, sometimes without the individual concerned being aware of it. In these cases, the further transmission of his/her data is thus imposed on the user who cannot prevent it without disabling most of the functionalities of the device. As a result of this chain of actions, the IoT can put device manufacturers and their commercial partners in a position to build or have access to very detailed user profiles" (WP 223). This scenario triggers new trouble for a traditional pillar of the data protection framework, namely consent: "Classical mechanisms used to obtain individuals' consent may be difficult to apply in the IoT, resulting in a 'low-quality' consent based in a lack of information or in the factual impossibility to provide fine-tuned consent in line with the preferences expressed by individuals" (op. cit.).

The ways in which the structural data sharing of IoT may affect classical mechanisms and traditional notions of data protection, such as informed consent, can be further illustrated by the functions performed at the architectural levels of IoT, that is, its layers. We will explore these layers in the next section.

3.2.2 Layers of IoT

IoT layers describe the functions that are performed at the architectural levels. IoT layered models can be represented in accordance with different functional aspects. There are several kinds of IoT modelling, which include the six distinct types of applications stressed by the McKinsey report (Chui et al. 2010). In addition, different layer models, such as the IBM eight-layer model, have been put forward. In this paper, we will only consider the three-layer model (Ning 2011). The reason for this level of analysis is that the three-layer model appears as the functional counterpart of the three architectural levels described in the previous section. According to this stance, the structural data sharing of IoT is smoothed by the functional aspect of each technology, so that three IoT layers follow as a result (a toy pipeline sketch follows the list):
  1. The sensor-actuator layer, which comprises sensors and actuators to perform thing identification, resource/service discovery, and execution control. It senses things to extract information and realize semantic resource discovery, and performs actions to exert control. Sensing techniques of this layer include RFID, cameras, Wi-Fi, Bluetooth, the global positioning system (GPS), radar, and so on.
  2. The network layer, which includes the interfaces, gateways, communication channels, and information management of the network. Communication networks include, of course, the Internet and mobile networks, as well as access networks such as wireless sensor networks (WSNs). The hybrid topologies of the networks aim to attain reliable data transmission through data coding, extraction, fusion, restructuring, mining, and aggregation algorithms that provide for real-time network monitoring and configuration.
  3. The application layer, which supports embedded interfaces to offer diverse functionalities, e.g. information aggregation and distribution, by providing users with certain services. The layer includes different application processing systems, whose goal is to process, analyse, and execute different functionalities, and to make decisions. This layer deals with the specific requirements of each application, and should provide a user-friendly interface for each of them.
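
A crude end-to-end sketch may again help fix ideas; every name and value below is invented for illustration and is drawn from no actual IoT standard:

    # Layer 1: sense the environment and identify the "thing".
    def sensor_actuator_layer() -> dict:
        return {"thing": "window-sensor-7", "open": True, "lux": 20.0}

    # Layer 2: package and transmit across networks; here, just transport metadata.
    def network_layer(reading: dict) -> dict:
        return {"payload": reading, "route": "wsn -> gateway -> cloud"}

    # Layer 3: process the data and decide on a user-facing action.
    def application_layer(message: dict) -> str:
        p = message["payload"]
        if p["open"] and p["lux"] < 50:
            return f"Alert: {p['thing']} is open after dark"
        return "No action"

    print(application_layer(network_layer(sensor_actuator_layer())))

Even in this toy form, the pipeline suggests why privacy concerns arise at every layer: the reading is potentially personal at layer one, travels through third-party infrastructure at layer two, and grounds a decision at layer three.
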
The design issues of the threefold layering of IoT, together with its connectivity and interoperability, i.e. the IoT architecture, of course raise matters of values and choices (Pagallo 2011). Ning presents them as the "social attributes," or "social affecting factors," that impact such layers and levels of IoT (Ning 2011: 22–23). Two types of impact are particularly relevant in this context: they either involve the physical communicating entities of IoT, or affect its environmental and social conditions. The collective factors affecting the environmental and social conditions of IoT include the national or supranational IoT management layer, where local or global IoT technologies have to be regulated through standards, protocols, laws, industry planning and guidance functions that support local or global development, coordination and supervision of IoT technologies. In addition, we should scrutinize the geographical terrain and the local customs and culture, for environmental and cultural factors can affect all the layers of IoT and, more particularly, the application layer. We return to this set of factors, which may impact the environmental and social conditions of IoT, below in Sect. 3.5.

On the other hand, the social attributes or factors affecting the physical communicating entities include the space-time dimensions in which those entities interact; the behaviour and tendencies that affect the entities themselves; and, moreover, the "influence group." In the first case, IoT has to do with dynamic parameters; in the second case, the focus is on the interaction between IoT entities and how this interaction affects their status or attributes, e.g. the regularity of a course of action understood as a matter of predictability and of propensity to be controlled. In the case of the influence group, the status or attributes of any individual entity are intertwined with the functioning of a group of interacting parties. The ever-increasing pervasiveness of ICTs in our lives goes here hand-in-hand with the shift from "the primacy of entities" to "the primacy of interactions" stressed by the Onlife Manifesto (ed. Floridi 2015). Whatever social attribute or factor we choose, which may affect agents, objects, devices, or processes of IoT, it is crucial to keep in mind the levels of basic connectivity and of network and syntactic interoperability among such agents, objects, or devices, illustrated in the previous section. On the basis of the IoT levels of logical connectivity and interoperability, a more complex and dynamic environment of physical communicating entities follows as a result: they have to (be able to) connect, communicate, and understand each other.

Despite the panoply of variables and factors that we should take into account, it seems fair to affirm that privacy and data protection issues may arise at all the levels and layers of IoT, since data are increasingly and structurally shared among its physical communicating entities: smart objects and agents, wearable technologies in intelligent environments, future 5G ubiquitous mobile apps, and the like. Basic notions of the law, such as control over information and methods to obtain consent, are thus under stress. The first legal challenge of IoT is radical because the "classical mechanisms" traditionally employed in the field fall short in coping with the lack of data control that may occur in the new environment, vis-à-vis the use of smart things, ubiquitous computing, or electroencephalography (EEG) filters that aim to perceive the physiological and mental state of humans by exploring new signal processing techniques. Current methods for, say, obtaining consent "may be difficult to apply in IoT" (WP 223); hence, we should not wonder why new mechanisms and models are necessary. Structural data sharing will be a test for the set of provisions set up by the new EU Regulation 2016/679, which shall apply from 25 May 2018. Attention is drawn here to Articles 4(11), 6(1)(a), and 7 of the new regulatory framework, and to how this set of rules on informed consent will cope with what the Onlife Manifesto calls the shift from the "primacy of entities" to the "primacy of interactions" (ed. Floridi 2015). What are the principles of data sharing in the new Regulation?
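
To see what a "fine-tuned" consent mechanism might amount to in practice, consider the following hypothetical sketch, which records consent per device and per purpose rather than once for a whole service. It is one possible approximation of the "specific" and "unambiguous" consent of Article 4(11), not a mechanism mandated by the Regulation, and all names are invented:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRecord:
        """One entry per (device, purpose) pair, with a timestamp so that
        consent can later be demonstrated (cf. Article 7(1)) or withdrawn."""
        device: str
        purpose: str
        granted: bool
        timestamp: float = field(default_factory=time.time)

    class ConsentLedger:
        def __init__(self) -> None:
            self._records = {}  # maps (device, purpose) -> ConsentRecord

        def record(self, device: str, purpose: str, granted: bool) -> None:
            self._records[(device, purpose)] = ConsentRecord(device, purpose, granted)

        def allowed(self, device: str, purpose: str) -> bool:
            rec = self._records.get((device, purpose))
            return bool(rec and rec.granted)  # default-deny without explicit consent

    ledger = ConsentLedger()
    ledger.record("fitness-band", "health-feedback", True)
    ledger.record("fitness-band", "third-party-advertising", False)
    print(ledger.allowed("fitness-band", "third-party-advertising"))  # False

The default-deny rule is the operative point: absent a granular record, the device does not share, which is precisely the guarantee that the "low-quality" consent criticized by WP 223 fails to provide.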

3.3 IoT and Groups

The second legal novelty of IoT concerns "big data," and how it may affect current frameworks of privacy and data protection. Data can not only be "shared," as mentioned in the previous section; data may also "belong to" a group. IoT and big data are indeed correlated, for the risks and challenges of big data affect IoT as well. This means that the collection and processing of data increasingly treat types, rather than tokens, and hence groups rather than individuals. Consider such techniques as data mining or profiling, and their models for assembling groups in accordance with certain educational, occupational or professional capabilities, social practices (e.g. a religion), or social characteristics (e.g. an ethnicity). Likewise, reflect on the prediction of people's behaviour, in order to include or exclude some of them from a particular service, product, or credit. Rather than a unique data subject whose informational self-determination is specifically under attack, individuals are more often targeted as members of a group, while they may not even know that they belong to that group, which is assembled on the basis of a set of ontological and epistemological predicates that cluster people into multiple categories. In the phrasing of Luciano Floridi, it is more about the new protection of "sardines," i.e. individuals as members of a group, than of "Moby Dicks." And while "the individual sardine may believe that the encircling net is trying to catch it… it is not… it is trying to catch the whole shoal" (Floridi 2014: 3). The traditional type of protection against individual harm in the field of data protection should thus be supplemented with an analysis of the risks and threats of the processing and use of group data, which may provoke new kinds of harm to most of us, namely the "sardines." The analysis of this section is hence divided into three parts. The focus is on the notion of group, the distinction between group privacy and collective data protection, and what is going on with the new EU Regulation.

3.3.1 The Meaning of a Group

So far, data protection has concerned individuals rather than groups. In EU law, for example, Article 2(a) of Directive 95/46/EC refers to the data and information of "an identified or identifiable natural person ('data subject')." Similarly, Article 4(1) of the new Regulation defines the "data subject" as "an identifiable natural person" who "can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person." Notwithstanding this equation of data subjects with natural persons, it is noteworthy that the law already protects certain types of data that individuals have in common with other data subjects. Consider Article 8 of Directive 95/46/EC, which refers to "the processing of personal data revealing racial or ethnic origin." Likewise, contemplate Article 9(1) of the new Regulation, according to which we should add to the previous protection of Art. 8 on "data concerning health or sex life" the processing of genetic data, i.e. what the proposal of the EU Commission defined as "all data, of whatever type, concerning the characteristics of an individual which are inherited or acquired during early prenatal development" (Art. 4(10) of the proposal from 2012).

Although the focus of the new Regulation remains on how to protect personal data, such data are shared with other data subjects and, furthermore, individuals can be targeted as members of a specific (racial, ethnic, genetic, etc.) group. In such cases, the distinction that matters for data protection was stressed by the Art. 29 Working Party in Opinion 3/2012 on developments in biometric technologies (doc. WP 193): "in this case, it is not important to identify or verify the individual but to assign him/her automatically to a certain category." This is the risk that materialized throughout the 2010–2011 Ivorian civil war (Taylor 2014). What, then, if the target of a privacy infringement is the group, or the category, as such? Moreover, how should we grasp the notion of group?

We may admit that the law should grant such a group its own rights to preserve and foster its identity; still, we have to avoid a twofold misunderstanding about the idea of legally protecting a group sharing data. First, it should be noted that such an "identity" is not related to the protection of any alleged natural group, even in the case of genetic groups, or of groups sharing a language, a religion, etc. Rather, what is relevant is the set of ontological and epistemological predicates that cluster such a group, e.g. a category defined by some predisposition towards certain types of illnesses, behaviours, etc. Second, the protection of such group identity entails two different ways of conceiving its supra-individual right, namely either as a corporate or as a collective right. In the first case, group rights are attached to an entity, such as an organization, corporation, or state, as an individual and autonomous entity having its own rights. Moreover, the artificial legal entity may hold such rights even against its own members, e.g. a state against its citizens, a university against a professor, etc. In the case of collective rights, individuals share some interests or beliefs that form them into a right-holding group: in contrast to corporate rights, such a group is the form through which the individuals exercise their own rights, rather than those of the group qua group. Therefore, what kind of right should the law grant a group?

3.3.2 Group Privacy and Collective Data Protection

As mentioned above in the introduction, we conceive the relation between privacy and data protection in accordance with the distinction between the protection of people’s “opaqueness” (privacy) and the “transparency” with which personal data is collected and processed (data protection). Although these rights often overlap, this is not necessarily the case. As the legal safeguards enshrined in Articles 7 and 8 of the EU Charter of Fundamental Rights illustrate, there are cases of privacy that involve no data processing at all, e.g. cases of “unwanted fame.” Conversely, we may consider the protection of personal data qua data, that is, regardless of any alleged prejudice to an individual’s informational privacy. In the ruling of the EU Court of Justice in Google v. AEPD (C-131/12), for example, the claim is that a data subject has a right “without it being necessary in order to find such a right that the inclusion of the information in question… causes prejudice to the data subject” (§ 99 of the decision).

To make things more complex, consider the differentiation between privacy and data protection vis-à-vis some elements of comparative law. In the U.S., for instance, large civic membership organizations can legitimately claim a right to associational privacy even against the members of the group. In Boy Scouts of America v. Dale [530 U.S. 640 (2000)], the US Supreme Court ruled that the intimate association of the Scouts and "its" privacy, i.e. the associational privacy of the Boy Scouts as a corporate right, should prevail over "their" privacy, namely the right of the excluded group leader claiming an individual right. The Court conceded, in other words, that the privacy of the group, as a single and unitary holder, can be conceived analogously to an individual's privacy. This stance is at odds with the European perspective. Consider a case such as Big Brother Watch and others v. UK (application 58170/13 before the ECtHR). Here, the applicants, including associations, claim to be victims of a violation of their rights under Article 8 of the European Convention on Human Rights ("ECHR"), insofar as the UK intelligence services would have been collecting and processing data in a way that is neither "proportionate" nor "necessary" in a democratic society.

The jurisprudence of the ECtHR has so far not admitted a right of complaint by groups and other "artificial legal persons" pursuant to Article 8 ECHR. The reason is that the latter right to privacy has an individual, rather than a collective, character. However, whether or not Big Brother Watch wins its case against the UK government, it is difficult to imagine the ECtHR overturning two pillars of its case law on the kind of protection set up by the ECHR legal framework. First, in order to legitimately claim a violation of their rights, applicants, including associations, have to demonstrate that some sort of damage is involved in the case. Second, such damage never entails the protection of organizations against the members of the group, but rather the protection of the group against "its" state. What is at stake here does not concern the protection of corporate rights for large civic membership associations, as in the US case. Rather, in the field of ECHR protection, in cases such as Big Brother Watch v. UK, the issue has to do with a procedural right to a judicial remedy against governments and states in the sphere of private life, i.e. on the basis of personal damage suffered by some individuals, such as the applicants and members of the group.

This is the procedural approach followed by the EU Court of Justice in its own readings of Articles 7 and 8 of the EU Charter of Fundamental Rights. In the ruling issued on 8 April 2014 (C-293/12 and C-594/12), for example, the EU CoJ accepted the claims by certain Austrian and Irish organizations to be victims of a violation of their rights under the provisions of the Data Retention Directive 2006/24/EC. The reason why this right has to be conceived as the form through which individuals exercise their own rights, that is, as a collective right rather than a right of the group qua group, seems sound. Reflect on the number of cases in which the processing of personal data is legitimate regardless of individual consent, e.g. the current Article 7(d) of EU Directive 95/46/EC on "processing that is necessary in order to protect the vital interests of the data subject." A new generation of group rights, conceived as corporate (rather than collective) rights over data that are structurally shared among the members of the group, would multiply, without reasonable grounds, the cases in which the consent of the individuals is rendered unnecessary or superfluous. As a result, how should we conceive a new collective (rather than corporate) right to data protection in IoT?

3.3.3 A Look into the Future

We have insisted on how current IoT and Big Data trends will increasingly raise cases that affect groups, rather than individuals, so that the current rights of the personal data protection framework should be properly complemented with a new generation of collective rights (Pagallo 2014). This approach has been significantly endorsed by the new EU Regulation on data protection. Pursuant to Article 80(1), "the data subject shall have the right to mandate a not-for-profit body, organisation or association which has been properly constituted… to lodge the complaint on his or her behalf," so as to exercise the right to an effective judicial remedy against a supervisory authority, or against a controller or processor, or the right to lodge a complaint with a supervisory authority and to receive compensation. Furthermore, in accordance with Article 80(2), "Member States may provide that any body, organisation or association referred to in paragraph 1 of this Article, independently of a data subject's mandate, has the right to lodge, in that Member State, a complaint," which may concern either the right to an effective judicial remedy against a supervisory authority, or against a controller or processor, with the supervisory authority which is competent pursuant to Article 77 of the Regulation. Hence, the overall idea of this set of rules is not to replace today's personal data protection with a US-like group privacy regime, but rather to complement the former with a new collective right to lodge complaints. Since the data subject can be targeted and her privacy infringed due to her membership in a given (racial, ethnic, genetic, etc.) data group, it makes sense to grant such a group, or "any body, organisation or association which aims to protect data subjects' rights and interests," a procedural right to a judicial remedy against data controllers, processors or supervisory authorities.

Admittedly, whether the collective rights of the EU Regulation for a new data protection framework will be effective, or good enough, to tackle the challenges of current IoT and Big Data trends is an open question. The difficulty hinges on the type of harm, threat, or risk that the processing and use of group data raise in terms of physical threat or injury, unlawful discrimination, loss of confidentiality, identity theft, financial loss, etc. In order to evaluate the probability of events, their consequences and costs, the aforementioned EU Regulation has set up a number of data protection impact assessments that should determine the risks and threats of the processing and use of certain kinds of data. Consider Article 35(1) of the Regulation, according to which data controllers have the responsibility of performing a data protection impact assessment "where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons." In particular, specific risks concern "a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing" (Art. 35(3)(a)); "processing on a large scale of special categories of data," such as sensitive data or criminal records (Art. 35(3)(b)); and "a systematic monitoring of a publicly accessible area on a large scale" (Art. 35(3)(c)). The twofold aim of this set of rules is clear: on the one hand, given the pace of technological innovation, data protection should be pre-emptive, rather than remedial, so as to ensure that privacy safeguards are at work even before a single bit of information has been collected. On the other hand, the traditional protection of personal, i.e. individual, data should be complemented with a new generation of collective and procedural rights, so as to properly tackle the second legal challenge of IoT.
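
The structure of Article 35(3) lends itself to a simple illustration. In the sketch below, invented boolean flags describe a planned processing operation, and the three triggers operate as a disjunction; an actual impact assessment is, of course, a qualitative legal exercise rather than a checklist:

    # Mapping of illustrative boolean flags to the Art. 35(3) GDPR triggers.
    ART_35_3_TRIGGERS = {
        "systematic_extensive_profiling": "Art. 35(3)(a)",
        "large_scale_special_categories": "Art. 35(3)(b)",
        "large_scale_public_monitoring": "Art. 35(3)(c)",
    }

    def dpia_required(processing: dict) -> list:
        """Return the Art. 35(3) grounds making a DPIA mandatory, if any."""
        return [ref for flag, ref in ART_35_3_TRIGGERS.items() if processing.get(flag)]

    plan = {
        "large_scale_special_categories": True,  # e.g. health data from wearables
        "large_scale_public_monitoring": False,
    }
    print(dpia_required(plan) or "No 35(3) trigger; assess the general 35(1) risk")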

Still, the new responsibilities of data controllers set up by Article 35 raise a further issue: who controls what in IoT?

3.4 Agents' Control

Another challenge triggered by the new scenarios of IoT regards matters of control over structural data sharing. This challenge concerns, first of all, both a pillar of every legal framework, namely the notion of "agent," and a basic notion of every data protection framework, namely the concept of "data controller." This aspect of the analysis is strictly related to the matters of security and safety mentioned in a previous part of this paper, with the analysis of the social factors affecting the environmental and social conditions of IoT in Sect. 3.2. In this section, we aim to emphasize something different, namely matters of technological convergence that will likely affect the data protection framework. To the notion of "artificial legal person," such as states, organizations, and corporations, introduced in the previous section, we should add that of "artificial agent."

Contemplate the field of robotics and how several applications should be deemed agents, rather than simple tools of human interaction (Pagallo 2013a, b). It suffices here to emphasize the criteria pointed out by Colin Allen, Gary Varner, and Jason Zinser (2000), and further developed by Luciano Floridi and Jeff Sanders (2004). Drawing on this research, three features of robotic behaviour help us define the meaning of agency, and illustrate why scholars more frequently liken robots to animals (e.g. Latour 2005; McFarland 2008; Davis 2011; etc.) than to products and things.

First, robots are interactive, as they perceive their environment and respond to stimuli by changing the values of their own properties or inner states.

Second, robots are autonomous, because they modify their inner states or properties without external stimuli, thereby exerting control over their actions without any direct intervention of humans.

Third, robots are adaptable, for they can improve the rules through which their own properties or inner states change.

On this basis, it is crystal clear why some scholars and companies alike, e.g. Cisco, prefer to talk about an "internet of everything" (IoE), rather than IoT. We will not only be increasingly dealing with a network fabric that can be sensed, controlled and programmed, in ways that enable artefacts to communicate, directly or indirectly, with each other and the internet (Pagallo 2013a). Moreover, in this new environment, autonomous artificial agents – such as the software agents that run today's business in e-commerce – will add a further layer of complexity to the previous legal challenges of IoT, since we should be ready to interact with "a new kid in town" (Pagallo 2016). In order to understand why this may be the case, the paper proceeds with the analysis of a peculiar class of artificial agents, i.e. consumer robots, and of how they may affect the duties of current data controllers.

3.4.1 Technological Convergence: On Robots and Data Controllers in IoE

Domestic robots will know a lot about our private lives. Think about smart Roombas equipped with cameras in order to properly clean our flats, or personal artificial assistants connected to the internet (of things) so as to help us manage our business. The amount of personal information collected and processed by a new generation of these robots will likely depend on the ways in which individuals treat their artificial agents, and on what is required for object recognition, navigation, and task completion by robots interacting with humans "out there," in the real world. Not only will sensors, cameras, GPS, facial recognition apps, Wi-Fi, microphones and more be assembled in a single piece of high-tech. In addition, through a prolonged epigenetic developmental process, several domestic robots will gain knowledge or skills from their own interaction with the living beings inhabiting the surrounding environment, so that more complex cognitive structures will emerge in the state-transition system of the artificial agent. New expectations of privacy and data protection will follow from this technological convergence between artificial intelligence (AI) and robotics in the IoE. Unsurprisingly, over the past years, scholars have increasingly drawn attention to how this scenario may affect such fields of the law (EU Robotics 2013; Pagallo 2013a; Leenes and Lucivero 2014; Pagallo 2016; etc.).

Some of these novel legal issues were discussed in the previous sections, examining robots as an instance of today's structural data sharing and ubiquitous computing. In addition, we referred in Sect. 3.2 to how the IoT integration of the physical and cyber worlds fits hand in glove with the convergence of AI and robotics. In both cases, the aim is an "intelligent processing" of data through logical connection and syntactic interoperability (understanding). As a result, how should legal systems govern this technological convergence? Moreover, how should they keep it under control?

According to the guidelines that an EU-sponsored project, namely "RoboLaw," presented in September 2014, the principle of privacy by design can play a key role in making and keeping this sector data protection-compliant (RoboLaw 2014: 19). Pursuant to Article 35 of the new EU Regulation, we should test the enforcement of the principle through a data protection impact assessment. First, we should check the sensor-actuator layer of IoT: much as occurs with other IoT components, such as devices, systems, or services, robots should be designed in a privacy-friendly way, so that the amount of data to be collected and processed is reduced to a minimum and complies with the finality principle. Second, as to the network layer of IoT, some legal safeguards, such as data security through encryption and data access control, should be embedded into the software and interface of the robot. The principle may be extended to the considerable variety of mechanisms through which data flow in IoT. Third, regarding the application layer of IoT, "requirements such as informed consent can be implemented in system design, for example through interaction with users displays and input devices" (RoboLaw 2014). Leaving aside specific security measures for particular classes of service robots, what the EU project suggests is "the adoption of updated security measures [that] should not be considered only as a user's choice, but also as a specific legal duty. It is clear that the illicit treatment of the data is unlikely to be considered a responsibility of the manufacturer of the robot, but rather a liability of its user, who is the 'holder' of the personal data" (op. cit., 190).
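
As a minimal sketch of the second safeguard, i.e. encryption at the network layer, assuming the third-party Python package cryptography and an invented payload; key provisioning, access control, and the consent interface of the application layer are deliberately left out:

    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    # Hypothetical symmetric key, provisioned to the robot at setup time;
    # in a real deployment, key storage and rotation are the hard part.
    key = Fernet.generate_key()
    channel = Fernet(key)

    payload = b'{"robot": "vacuum-3", "room_map": "..."}'
    token = channel.encrypt(payload)  # what actually travels over the network
    print(channel.decrypt(token) == payload)  # True: only key holders can read it

Symmetric encryption of the payload is, of course, only one piece of the safeguard; who may hold the key is exactly the data access control question that the text raises next.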

It is, however, debatable whether the end-user, or "human master," of the domestic robot should be deemed the data controller and, hence, liable for any illicit treatment of personal data. As occurs today with issues of internet connectivity, or of sensors and mobile computing applications, several cases indicate that the illicit treatment of personal data may depend on the designers and manufacturers of robots, internet providers, application developers, and so forth, rather than on the end-user of a domestic robot. As Curtis Karnow warned in his Liability for Distributed Artificial Intelligence (1996), new forms of agency can even break down the classic cause-and-effect analysis. Noteworthily, among the special zones for robotics empirical testing and development approved by the Japanese Cabinet Office since November 2003, the special zone of Kyoto has been devoted to the analysis of privacy and data protection issues since 2008. Defining who controls what, and who should be considered the data controller in the "internet of everything," can be tricky (Pagallo 2013a). In the wording of the Art. 29 Working Party in Opinion 1/2010 (doc. WP 169), "the concept of controller is a functional concept, intended to allocate responsibility where the factual influence is, and thus based on a factual, rather than a formal analysis," which "may sometimes require an in-depth and lengthy investigation" (op. cit., 9). This factual investigation will likely become even more in-depth and lengthy in IoE: after all, the behaviour of robots and of other artificial agents should be grasped in the light of a wider context, where humans interact in a seamless way through sensors, smart systems, and even autonomous gadgets.

Control over the autonomous behaviour and decisions of the robots, systems, services, processes, smart objects and gadgets of IoE – where the aim is to determine who the data controller is – should be distinguished from control over the standards that govern the behaviour of both the artificial agents and the smart things of IoE. In the first case, as previously stressed, the issue will revolve around Article 35 of the Regulation, with a new generation of data protection impact assessments vis-à-vis the pace of technological research and development. In the second case, the focus is on how legal standards are defined at a national or supranational level, in order to set common solutions for intellectual property and consumer law, ownership, data protection, environmental law, and more. Considering the crucial differences that persist between, say, the US and EU regulatory frameworks, even the RoboLaw Guidelines concede that these "significant differences… could make it difficult for manufacturers catering for the international market to design in specific data protection rules" (op. cit., 19).

At the end of the day, which norms and rules should designers and manufacturers of domestic robots embed into their products? In more general terms, what standards should we adopt in the realm of IoT, or IoE?

3.5 Standards

As stressed before in this paper, there is still no such thing as, properly speaking, the "internet of things," or of "everything." We lack the proper technical standards that may enable multiple systems and different networks to communicate with each other, making full interoperability feasible. This lack of standards, and of protocols that may outline such standards, represents however an opportunity, for we have time to discuss what kind of architecture and legal safeguards should preside over our future interaction. Besides current discussions on protocols and technical standards, we thus have to widen our perspective and consider the role of legal standards. This is the final challenge of IoT, which this paper aims to discuss after the issues of architecture and data sharing (Sect. 3.2), group privacy and new collective rights (Sect. 3.3), and artificial agents and data controllers (Sect. 3.4).

Legal standards can be conceived as a means that allows agents to communicate and interact. As such, legal standards should be differentiated from epistemic standards, i.e. ways to understand the informational reality of IoT, and from social standards, which enable users both to trust IoT applications (Durante 2011) and to evaluate the quality of IoT services, i.e. whether or not these services meet social needs. As a result, the development of IoT does not only concern technological standards, but all the types of standards previously mentioned. The interplay between these standards can be grasped as a set of competing claims that may contend not only against each other, but also vis-à-vis regulatory systems, such as the forces of the market and the rules of the law. Every regulatory system aims to govern social interaction by its own means, and such claims may either conflict with, or reinforce, each other. A regulatory system can even render the claim of another regulatory system superfluous. Whatever scenario we consider, however, such competition does not take place in a normative vacuum, since it is structured by the presence of values and principles, social norms, and the different ways of grasping the informational reality of IoT. Considering that standards are "connected to power: they often serve to empower some and disempower others" (Busch 2011), what, then, should our position be? What control should we have over the standards of IoT?

From a legal point of view, the alternative is this: either legal standards are shared in order to adopt a supranational framework for IoT that guarantees uniform levels of protection, or local standards will reaffirm competitive advantages and national or supra-national sovereignty, e.g. the EU's supremacy, by ensuring different levels of legal protection. Article 50 of the new Regulation on "international cooperation" will be a key reference point to ascertain whether, and to what extent, "the effective enforcement of legislation for the protection of personal data" will be facilitated by the development of international cooperation mechanisms and mutual assistance. Furthermore, law-makers should assess the role that social standards play in this context, since people evaluate IoT services against the background of different social needs, i.e. the "geographical terrain and culture," and because individuals trust IoT applications differently around the world (Durante 2010). Although new apps and services can create unprecedented social desires, or reformulate previous needs, standards are and should be local at times, in order to resist a process of homogenization and the flattening of rich differences. These local differences represent one of the main factors that will affect the environmental and social conditions of IoT.

The aforementioned EU Regulation has followed the method of the "Brussels effect," i.e. setting the strictest international standards across the board, which often wield unilateral influence (Bradford 2012). This set of provisions will confront further forms of regulation and standards on which the development of IoT will hinge. In addition to the economic forces of the market, the constraints of social norms, and the extent to which technological standards are set up and shared, the development of IoT depends on whether all the other standards are, or should be, oriented towards the unification or the fragmentation of IoT. According to how such technological, legal, epistemic, and social standards interact, or compete, with each other, they will shape a new environment that applies to individuals, artificial agents, and things (technological standards), empowering some and disempowering others (legal standards), shaping the social features of IoT in terms of needs, benefits, user acceptance and trust (social standards), up to the making of the informational dimension of IoT as a comprehensible reality (epistemic standards). Whether the interaction between technological standards and competing regulatory systems will end up with legal standards for either the unification or the fragmentation of IoT is another open issue we shall take into account. Nobody knows how the technological standards of the future will evolve (Brian Arthur 2009), nor how they will interact with the evolution of epistemic standards and social norms. As Justice Alito emphasized in his concurring opinion in United States v. Jones from 23 January 2012 (565 U.S. __), "dramatic technological change may lead to periods in which popular expectations are in flux and may ultimately produce significant changes in popular attitudes." The same is true in Europe.

3.6 Conclusions

This paper examined four legal challenges brought on by IoT in the fields of privacy and data protection. By casting light on matters of architecture, IoT layers, and design, the focus was on how people's opaqueness (privacy) and the transparency of data processing (data protection) may be affected by this invisible network fabric. The first challenge concerned personal data that are structurally shared in the new environment, affecting the "classical mechanisms" employed to obtain, say, consent. The second challenge had to do with Big Data, and with how their collection and processing increasingly treat types, rather than tokens, and thus groups rather than individuals: a new generation of collective rights followed as a result. The third challenge revolved around the increasing autonomy of artificial agents, rather than IoT devices as simple means, or things, of human interaction. As a matter of fact, who controls what in IoE may turn out to be "a snarled tangle of cause and effect as impossible to sequester as the winds of the air, or the currents of the ocean" (Karnow 1996). The final challenge regarded the dramatic technological change that has so far produced "significant changes in popular attitudes" (Justice Alito). The legal standards that should govern IoT and, moreover, how these standards may interact with further standards and with competing regulatory systems, are open issues that will end up with either the unification or the fragmentation of IoT.

As stressed time and again in this paper, the legal challenges of IoT concerning data sharing and control, up to the risk of fragmentation of the new environment, bring however a positive message. Since, properly speaking, we still do not have a universal IoT, the current debate on privacy and data protection (and in particular the novel EU legal framework) represents an opportunity to take the challenges of IoT seriously, and to envisage what new environment we may wish for.

Yet, in light of the strictest international standards across the board, such as the new provisions of the EU Regulation, there is a final problem. It seems fair to affirm that the law's aim to govern the process of technological innovation should neither hinder its progress, nor require over-frequent revision to keep up with such a process. The legal challenges of IoT will thus be a good test for the new Regulation, because we are going to test the strength of such provisions as Articles 4(11), 6(1)(a), and 7 on informed consent, Article 35 on a new generation of data protection impact assessments vis-à-vis the pace of technological research and development, Article 50 on "international cooperation," and Article 80 on new rights to an effective judicial remedy for not-for-profit bodies, organisations or associations "properly constituted." These provisions will cast light on the technological resilience and political wisdom of some of the new rules of the game. The debate is open.

References

  1. Allen, Colin, Varner, Gary, and Jason Zinser (2000) Prolegomena to Any Future Artificial Moral Agent, Journal of Experimental and Theoretical Artificial Intelligence, 12: 251–261;
  2. Arendt, Hannah (1958) The Human Condition. Chicago: University of Chicago Press;
  3. Art. 29 WP (2014) Opinion 8/2014 on the Recent Developments on the Internet of Things, WP 223;
  4. Brian Arthur, William (2009) The Nature of Technology. New York: Free Press;
  5. Bradford, Anu (2012) The Brussels Effect, Northwestern University Law Review, 107(1): 1–68;
  6. Busch, Lawrence (2011) Standards: Recipes for Reality. Cambridge (Mass.): MIT Press;
  7. Chui, Michael, Loeffler, Markus and Roger Roberts (2010) The Internet of Things, McKinsey Quarterly, March;
  8. Cluster of European Research Project on the Internet of Things (CERP-IoT 2009) Internet of Things Research Strategic Roadmap – 15 September 2009. European Commission DG INFSO-D4 Unit, Brussels, available at: https://www.internet-of-things-research.eu/pdf/IoT_Strategic_Research_Agenda_2009.pdf;
  9. Davis, Jim (2011) The (common) Laws of Man over (civilian) Vehicles Unmanned, Journal of Law, Information and Science, 21(2), doi: 10.5778/JLIS.2011.21.Davis.1;
  10. Davies, Ron (2016) 5G Network Technology. Putting Europe at the Leading Edge, EPRS Briefing, January 2016;
  11. Davies, Ron (2015) The Internet of Things. Opportunities and Challenges, EPRS Briefing, May 2015;
  12. Durante, Massimo (2010) What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents, Knowledge, Technology & Policy, 23(3–4): 347–366;
  13. Durante, Massimo (2011) The Online Construction of Personal Identity through Trust and Privacy, Information, 2(4): 594–620;
  14. EC (2009) European Commission, Communication "Internet of Things – An action plan for Europe," COM/2009/0278 final, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52009DC0278;
  15. EC (2015a) European Commission, Roadmap for Completing the Digital Single Market, https://ec.europa.eu/priorities/publications/roadmap-completing-digital-single-market_en;
  16. EC (2015b) Press release of 15 December 2015, http://europa.eu/rapid/press-release_IP-15-6321_en.htm;
  17. EP (2010) European Parliament, Resolution of 15 June 2010 on the Internet of Things, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52009DC0278;
  18. EU Robotics (2013) Robotics 2020 Strategic Research Agenda for Robotics in Europe, draft 0v42, 11 October;
  19. Floridi, Luciano (ed.) (2015) The Onlife Manifesto. Heidelberg: Springer;
  20. Floridi, Luciano (2014) Open Data, Data Protection, and Group Privacy, Philosophy and Technology, 27: 1–3;
  21. Floridi, Luciano and Jeff Sanders (2004) On the Morality of Artificial Agents, Minds and Machines, 14(3): 349–379;
  22. Foschini, Luca, Taleb, Tarik, Corradi, Antonio, and Dario Bottazzi (2011) M2M-based metropolitan platform for IMS-enabled road traffic management in IoT, IEEE Communications Magazine, 49(11): 50–57;
  23. Gartner (2014) Hype Cycle for Emerging Technologies, available at: http://www.gartner.com/document/2809728;
  24. Karnow, Curtis E. A. (1996) Liability for Distributed Artificial Intelligence, Berkeley Technology Law Journal, 11: 147–183;
  25. Latour, Bruno (2005) Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press;
  26. Leenes, Ronald and Federica Lucivero (2014) Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design, Law, Innovation and Technology, 6(2): 193–220;
  27. McFarland, David (2008) Guilty Robots, Happy Dogs: The Question of Alien Minds. New York: Oxford University Press;
  28. Monteleone, Shara (2011) Ambient Intelligence: Legal Challenges and Possible Directions to Privacy Protection, in C. Akrivopoulou (ed.) Personal Data Privacy and Protection in a Surveillance Era: Technologies and Practices, pp. 201–222. Hershey: IGI Global;
  29. Ning, Huansheng (2011) Unit and Ubiquitous Internet of Things. New York: CRC Press;
  30. Pagallo, Ugo (2011) Designing Data Protection Safeguards Ethically, Information, 2(2): 247–265;
  31. Pagallo, Ugo (2013a) The Laws of Robots: Crimes, Contracts, and Torts. Dordrecht: Springer;
  32. Pagallo, Ugo (2013b) Robots in the Cloud with Privacy: A New Threat to Data Protection?, Computer Law & Security Review, 29(5): 501–508;
  33. Pagallo, Ugo (2014) Il diritto nell'età dell'informazione [The Law in the Information Age]. Torino: Giappichelli;
  34. Pagallo, Ugo (2016) The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design, in S. Gutwirth, R. Leenes, and P. de Hert (eds.) Data Protection on the Move, pp. 387–410. Dordrecht: Springer;
  35. RoboLaw (2014) Guidelines on Regulating Robotics. EU Project on Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics, 22 September;
  36. Rose, Karen, Eldridge, Scott, and Lyman Chapin (2015) The Internet of Things: An Overview. Understanding the Issues and Challenges of a More Connected World, The Internet Society, available at: https://www.internetsociety.org/sites/default/files/ISOC-IoT-Overview-20151014_0.pdf;
  37. Salgado, Monica (2014) Internet of Things Revised, Privacy and Data Protection, 15(1): 12–14;
  38. Taylor, Linnet (2014) No Place to Hide? The Ethics and Analytics of Tracking Mobility Using African Mobile Phone Data, online version available at http://www.academia.edu/4785050/No_place_to_hide_The_ethics_and_analytics_of_tracking_mobility_using_African_mobile_phone_data (in press);
  39. Thierer, Adam (2015) The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation, Richmond Journal of Law & Technology, 21(2), article 6, online version available at http://jolt.richmond.edu/v21i2/article6.pdf;
  40. Wang, Hu (2011) M2M Communications, presented at the IET International Conference on Communication Technology and Application (ICCTA 2011);
  41. Weiser, Mark (1993) Ubiquitous Computing, Computer, 26(10): 71–72.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ugo Pagallo, Department of Law, University of Turin, Turin, Italy
  • Massimo Durante, Department of Law, University of Turin, Turin, Italy
  • Shara Monteleone, European Parliamentary Research Service, Brussels, Belgium
