
AI & SOCIETY

Volume 32, Issue 3, pp 309–320

Hermeneutic of performing data

Karamjit S. Gill
Editorial

When we think about performance, we may think about performing art, an act of presenting a play, concert, or an act of performing a dramatic role, song, or piece of music. Yet in other contexts, performance may refer to the carrying out of a task or function, the working of a machine or product, the return on an investment, exaggerated human behaviour, or the use of language (Oxford English Living Dictionary). Three of our AI&Society editors, Victoria Vesna, Sha Xin Wei and Satinder Gill, and the artist Idris Khan set the scene for this exploration. For the artist Victoria Vesna (victoriavesna.com), performance is about exploring the nature of the interface between the physical, cultural, and our experiential worlds, and investigating how communication technologies affect collective behaviour and perceptions of identity shift in relation to scientific innovation. Reflecting on her vision of interactive art, she says that “In the end, all variations are about raising awareness of the interconnectivity of everything and everyone, and the collaborative interaction between the artist and the audience—through various expressions that can range in a number of manifestations. Technology is or should be utilised to amplify the experience and/or the range of influence”. Vesna further surmises: “How does one create an experience that immerses the audience in a way that changes their perception of the subject matter? This is the core challenge of interactive art and made most obvious when computers are involved”. For the scientist, Sha Xin Wei (http://asunow.asu.edu), the technologies of performance relate to gesture and performance, sensors and active fabrics, temporal patterns, computer-mediated interaction, geometric visualisation and writing systems. It is about creating new kinds of responsive environments and improvisation in all senses, for exploring gesture, distributed agency and materiality in kinetic and body-based media and in responsive environments. For the Polanyian, Satinder Gill (http://cms.mus.cam.ac.uk/directory/satinder-gill), performance is about sensing our relationships with our environments, handling ambiguities, negotiating difference, empathising and collectively making skilled judgement in modern society. It involves exploring rhythm as a perspective for working with physical and large data sets in relation to one another, how professionals can identify shared rhythms, and how these rhythms can be manipulated to better understand possible interventions to enhance shared rhythms that support social cohesion. For the artist, Idris Khan (https://www.victoria-miro.com/artists/14-idris-khan/), it is about manipulating photographs, be it a picture or an inscription on steel or board, or a scripture, sometimes using computers, to explore the deeper meaning buried in lines of writing, which he distils until they reveal some new truth. For the Cambridge Interdisciplinary Performance Network (CIPN), performance reflects a movement away from thinking in terms of immutable objects and singular subjects and focuses attention on collective contexts, exploring the potential of the idea of performance as an umbrella approach to culture: a ‘kind of thinking in its own right’. According to the Horizon project, the concept of Performing Data has emerged from multi-disciplinary engagements between artists, social scientists and technologists.
Through performance, data are revealed to people in various material and embodied ways, sometimes slowly, sometimes as if live, sometimes in tangible forms, and sometimes by requiring them to enact being sensors.

The idea of performing data has been stimulated by a recent conference, Diagrammatic: Beyond Inscription?, held at Cambridge University (CRASSH 2016). The conference posited that diagrams inhabit a liminal space between representation and prescription, words and images, ideas and things, seeking resemblance to the empirical yet aspiring to generalisation. On the other hand, the diagram, as a thinking tool, holds the promise of transforming abstract issues into graspable images and translating the unseen into intelligible and actionable form. Diagrams thus operate and perform as abstractive and constitutive components of empirical realities. Questions raised at the conference revolved around the visual and logical indeterminacy of diagrams, diagrammatic reasoning beyond the realm of diagrams as visual/textual objects, and the role of the diagram at the pivot of modern transformations and aporias between abstraction and form. Questions also arose as to how diagrams transform chaotic and arbitrary space into organised and often imaginative and creative spaces; how diagrams foster relations, organise conversations and bear on the tacit dimension of these relations and conversations; and how visualisation of diagrammatic data enables visualisation of these relations. Although the culture of the diagram allows us to explore a range of meaning, it is not always sensitive to context. If images of knowledge (e.g. hourglass figures) are taken for granted, they may mislead. It was pointed out how Ernst Haeckel claimed illustration to be an explicit representation of nature, as essential as thought, and how this freedom of diagrammatic representation gave rise to reduced expectations of accuracy. It was hypothesised that the analogy of Freud’s photographic illustration of mind leads us to visualise the diagram of mind as a spatial diagram of ideas. In the diagram of ideas, we can further hypothesise that the rule, as an abstraction of interaction, leads to an algorithmic shift from the physical anatomy of the brain to the ‘abstract functional working of the mind’. The body as a diagram, an abstraction of the physical, is mirrored in the brain as a symbolic representation. The structure of the psychic-thought process is mirrored as an apparatus of mind (a drawing of the anatomy of mind). We note here how Freudian analogical reasoning makes us see the ‘relationship between the body/architecture and the mind/interior’ as a spatialisation of ideas rather than an object.

The Zoonotic Cycle diagram, when used to visualise and understand plague, acts as an inter-relational and counterbalancing interaction between humans and animals. In this counterbalancing cycle, pathogens circulate in a normalised way without any damage or retribution. This interwoven cybernetic paradigm of time transforms the quantitative exchange into a qualitative exchange of relationships between humans and animals. In this paradigm, the representation of plague as an objective phenomenon transforms all interactions into symbolic interactions, and the Zoonotic Cycle diagram (a set of rules) takes on a life of its own—a counterbalancing act. In this relational dance of diagrammatic data, can we say that all reasoning is relational, whether it is mathematical, geometrical or grammatical?

The architectural diagram, in representing the internal and external relations of the object, acts as an icon of reason, a thought process, a plan, a programme for the future, a mental formula for moving from one thought to another, a vehicle of abstraction, a representation of a chaotic scene, a production of the thought process. In Plato’s republic of spatial relations, the architectural diagram performs the act of design, a concept of utopia in symbolic form—representing the boundaries of ideal society—precise and imprecise at the same moment. Moving forward to the Enlightenment, the architectural diagram performs the act of constructing the good place, whilst at the same time acting as a spatial machine for social good in the case of hospitals and playgrounds, and as a machine for punishment in the case of a prison. Here we see the architectural diagram as a new order of power, the symbolic power of institutions, a political technology. In its form as a political diagram, it can perform as an instrument of social order, creating spatial communities—well behaved and badly behaved at the same time, a new model of inter-social reality, a new truth, a performing art as the storyboard of a social film. The diagram acts as a notion of abstraction, a socio-cultural eye, a new way of perception, a modern cognitive form, a dream language. It also acts as an object, a figure in transition, as well as a metaphorical figure, defining cultural and contextual boundaries. The diagram performs when it transforms itself into a social agent transformative of kinship relationships.

The discussion on the performance of diagrammatic data and the language of the diagram raised a number of questions about the representation of knowledge and its interpretation. For example, how should we interpret an image in embryology disconnected from the others, temporally and spatially? How can a diagram represent a chaotic and complex epidemic situation? Why do we ritualise time in still pictures? How does an architectural picture perform as an explanatory tool, and in what ways does it perform the dance of the moment by abstracting the function of the building? Palladio’s architecture not only encompassed the beauty of his work, but also harmony with the culture of his time, integrating extraordinary aesthetic quality with expressive characteristics of social aspirations. His buildings served to communicate, visually, their place in the social order of their culture. Palladio’s architecture of social meanings makes us think of the way a diagram may think through the representation of thought, where the diagram becomes an abstraction in the form of a mental model. Norbert Wiener’s notion of feedback transforms the architectural diagram into a digital diagram, a malleable and thoughtless conduit of information. Here the diagram performs as an algorithm, a computational device of digital production and scaling, as if Google had overtaken architectural construction. In the midst of the fascination with digital technology, we are cautioned to remember that architecture is, simply, social, embodying social/spatial intelligence that conforms to the world. This we cannot get from machine intelligence. Moreover, it is not clear how the machine would deal with the architectural paradox: when an architect draws a diagram of a building, the diagram becomes a building, a static object, an exact language, an exact dream; but when the diagram as a model performs as a process, it performs as a dynamic process in which the diagram acts as an algorithm of ideas.

Our interest in performing data also arises from the Performing Knowledge Conference (2016) held earlier at Cambridge University. The conference included two performances: a concert by the Collegium musicians led by Margaret Faultless, and a piano performance by Tom Begin. Beyond the Collegium’s stimulating performance, the audience experienced the coherence of collaboration, relational interaction and tacit communication between and in-between the musicians and the conductor. The piano performance was a feat of relational interaction and conversation between the player and the piano, between mind and body, hand and eye, perception and emotion. Both performances demonstrated the richness of the tacit dimension of performing knowledge as a seamless flow of data between and among the musicians; the audience was itself part of the performance. This article reflects upon the creative perspective of performance, and lets the creative artists, scientists and practitioners tell their own versions of performing data in their own words.

Thorp (2014) provides an insight into the life of performing data—data as a medium for performance—when he says that “Data live utilitarian lives. From the moment they are conceived, as measurements of some thing or system or person, they are conscripted to the cause of being useful. They are fed into algorithms, clustered and merged, mapped and reduced. They are graphed and charted, plotted and visualised. A rare datum might find itself turned into sound, or, more seldom, manifested as a physical object. Always, though, the measure of the life of data is in its utility. Data that are collected but not used are condemned to a quiet life in a database. They dwell in obscure tables, are quickly discarded, or worse (cue violin)—labelled as ‘exhaust’”. Here, data become the script, or the score, a cultural artefact; in turn, technologies that we typically think of as tools become instruments in the form of algorithms, and in some cases performers. “Algorithms can be used to build a ‘chain’ of like-sounding titles from the database, for example, Jacques Villon’s 1908 etching ‘Young Girl, Back Turned’ leads us to Picasso’s ‘Girl with a Mandolin (Fanny Tellier)’, from 1910. John Candelario’s photograph of the ‘Spanish Girl’ calls out Michael Almereyda’s film ‘Another Girl Another Planet’. Perhaps the most exciting part about performance as a medium for data is that it allows for a fluid interpretation at the time of the performance itself. Actors can turn a dry algorithmic output into a wry dialogue of one-upmanship, allowing the artworks themselves to become pieces in an imagined language game. Here we see data breaking out of its accepted formal restrictions, a new lens into the event and experience, vastly different from what we would expect in a so-called ‘data representation’”. As data exert more and more influence on our lived experience, it is important that artists find ways to work with them outside of decades-old visual means like charts and graphs. Performance provides rich terrain for engagement with data, and perhaps allows for a new paradigm beyond conventional representation. Urist (2015), in ‘From Pain to Pixels’, says that a growing number of artists are using data from self-tracking apps in their pieces, creating conceptual works using information collected by mobile apps, GPS trackers, scientists, and more. Whilst some data artists aim to translate large amounts of information into some kind of aesthetic form, others believe that working with these data is not just a matter of reducing human beings to numbers, but also of achieving greater awareness of complex matters in a modern world. Current tools make self-tracking more efficient. Urist (ibid.) cites the Italian Mannerist painter Jacopo Pontormo, who kept a record of his daily life from January 1554 to October 1556, in which he detailed the amount of food he ate, the weather, symptoms of illness, friends he visited, even his bowel movements. He further notes that in the 1970s the Japanese conceptualist On Kawara produced his self-observation series, I Got Up, I Went, and I Met (shown at the Guggenheim), in which he painstakingly recorded the rhythms of his day. Kawara stamped postcards with the time he awoke, traced his daily trips onto photocopied maps, and listed the names of people he encountered for nearly 12 years.
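To make Thorp’s idea of an algorithmic ‘chain’ concrete, here is a minimal, hypothetical sketch, not Thorp’s actual code: titles are linked greedily by word overlap, so each work calls out its most similar unvisited neighbour. The similarity measure and function names are our own illustrative assumptions.

```python
import re

# Hypothetical sketch: chain artworks by like-sounding titles using
# simple word-overlap (Jaccard) similarity between titles.

def words(title: str) -> set[str]:
    """Lower-cased words of a title, punctuation stripped."""
    return set(re.findall(r"[a-z]+", title.lower()))

def jaccard(a: str, b: str) -> float:
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def title_chain(titles: list[str], start: str) -> list[str]:
    """Greedily link each title to its most similar unvisited successor."""
    chain, remaining = [start], set(titles) - {start}
    while remaining:
        nxt = max(remaining, key=lambda t: jaccard(chain[-1], t))
        if jaccard(chain[-1], nxt) == 0.0:  # no shared words left to follow
            break
        chain.append(nxt)
        remaining.discard(nxt)
    return chain

collection = [
    "Young Girl, Back Turned",
    "Girl with a Mandolin (Fanny Tellier)",
    "Spanish Girl",
    "Another Girl Another Planet",
]
print(" -> ".join(title_chain(collection, "Young Girl, Back Turned")))
```

Run on the four titles above, the greedy walk follows the shared word ‘girl’ through the whole collection, which is exactly the ‘chain of like-sounding titles’ effect described in the quote.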

Manaugh and Twilley (2013), commenting on the data saturation of society, say that art confronts the uncertainty of human existence, and many data artists are responding to an increasingly data-saturated culture. After all, almost every human interaction with digital technology now generates a data point—each credit-card swipe, text, and Uber ride traces a person’s movements throughout the day. The smartphone, as a true personal computer, defines the innovation of the era, on par with the mechanical clock or the automobile in past centuries. It is interesting to note that in the midst of the explosion of self-tracking, whether it is counting the number of calories or using a mood app to glean patterns in one’s mental state, what apps tell us is that ‘like a fingerprint’, no two people have the same data set. For example, a couple sharing a bed follow independent sleep cycles, friends who spend the day together count different steps, and their phones connect to different IP addresses. But what is more remarkable, they note, is the idea that within all of these numbers lies a better way of understanding ourselves. The information does not just provide a broad document of a life lived in the early twenty-first century: it can reveal something deeper and even more essential. Manaugh and Twilley (ibid.) cite the data artist Laurie Frick, who believes that while numbers are abstract and unapproachable, human beings respond intuitively and emotionally to patterns. For example, in a series called Moodjam, Frick took thousands of Italian laminate countertop samples from a recycling centre and created a series of canvases and billboard-sized murals based on her temperament. For weeks, she manually tracked her feelings, using the online diary ‘Moodjam’, illustrating how users can express their emotions in colour patterns. In another work, she used apps like ‘ManicTime’ on her laptop and ‘Moment’ on her iPhone to track each click and touch of her screen for almost a month. Frick is adamant that her work is about more than simply visualising information, that it serves as a metaphor for human experience, and thus belongs firmly in the art world.

But by blurring the boundaries, conceptual artists are helping scientists see their research more creatively. For example, Daniel Kohn (http://www.kohnworkshop.com), a Brooklyn-based painter, spent roughly a year at the Albert Einstein School of Medicine teaching geneticists to represent their digital data in more intuitive ways. And while algorithms have seeped into daily life, informing everything from consumer ‘music choices to dating options’, they are also edging into conceptual art. Recently, the website Artsy (artsy.net) held what it called the world’s first ‘Algorithm Auction’, “celebrating the art of code”. Works included ‘Turtle Geometry’, an 11-in. stack of programming on dot-matrix printer paper from 1969 made by Hal Abelson, a professor of electrical engineering and computer science at MIT. In fact, many data artists straddle art and science as Leonardo da Vinci did. Udell (2007) says that data in the hands of storytellers such as Hans Rosling (TED Talks) become compelling narrative, making the stories come to life, thus turning data into a performing art. In Hans Rosling’s hands, he says, data sing, and global trends in health and economics come to vivid life. In this performance, social storytelling discourse, even when grounded in data, becomes memorable when performed well. As Udell (ibid.) says, ‘data analysis as performance art goes beyond the snapshots produced by analytical tools. It lives in the interstitial spaces between the snapshots, traces a narrative arc, shows as it tells’.

The key idea emerging from the Horizon project on performing data (http://horizon.ac.uk) is that of performing data rather than visualising data in conventional ways such as through graphs and statistics. Through performance the data are revealed to people in various material and embodied ways—sometimes slowly, sometimes as if live, sometimes in tangible forms, and sometimes by requiring them to enact being sensors. Film makers incorporate biodata into promotional films, artists capture the biodata of spectators, and art galleries present environmental data in material and performative ways, stimulating an emotional engagement with the data. These explorations use software platforms such as the ‘Performing Data Toolkit’ to capture, record and mediate scientific data within artistic contexts and practices. Monika Fleischmann and Wolfgang Strauss (http://artlinecatalogue.eu) note that explorations in virtual and mixed realities, interactive installations, participatory environments and public performances engage the designers and audience as data performers. The motif of the Data Performer relates not only to the visualisation and reification of immaterial data, but also to the actions and performance of the viewer. Data Performers are involved in space–time environments which they call enterable spaces of thought, developing an aesthetic of the interactive space of knowledge and thought. In these explorations, the viewer becomes a participant in an interactive plot. Here interactivity is seen as the perception of a world in motion—as the movement of thought.

Instead of intellectual and technical automatisation of the process of converting information into alleged knowledge—as computer science does—media art combines the automatism of the machine with the act of uncovering its structures. Data performance, data mapping and visualisation are used in order to give a new structure to already existing knowledge and, thus, to rediscover it (Home of the Brain, Semantic Map, Energy-Passages, Media Flow). Here, knowledge is acquired not only through reading or listening, but also through the use of the body. Data from sensory interfaces are used to study bodily perception. Interaction can be described as a process of constituting knowledge through performative acts. By means of sensory interfaces, Fleischmann and Strauss examine, above all, touch and touchlessness, grasping and the comprehension of spatial perception, and the sense of balance. On the one hand, they put the body at the focus of our interest and address the problem of the bodily knowledge of an acting subject. On the other hand, with interfaces for recording, storage and intermediation, they support the activities of the researching subject. An almost bodily immersion in data flows brings productive moments of interference and pause. In this way, the participating viewer experiences the feeling of presence. They note that the transformation of the viewer from a passive consumer into an active participant in the staging relates to the double requirement expressed by Matussek (2012) in Performing Memory: ‘Staging means not only to put something on stage, but also to put someone in a scene’. In the case of staging in media art, questions arise, such as: What do players and viewers see and hear? When do we play, and when do we become the object of the play? With artworks and their tools, artists attempt to reflect on these questions.

Similar questions may come to mind when artists perform as data scientists and play with our bodies and brains, scanning them to visualise data patterns, integrating data patterns from ‘various sources (including medical and insurance records, wearable sensors, genetic data and even social media use)’ to ‘draw a comprehensive picture of the patient as an individual, and then offer a tailored healthcare package’. Marr’s (2016) perspective on this healthcare scenario gives us an insight into data science performance, in which sensors in smartphones enable doctors to share information across disciplines. In this scenario, we already notice the use of smartphones and apps in tracking and monitoring our lifestyles, and the creation of apps for monitoring chronic ailments like diabetes, Parkinson’s and heart disease. The argument is that not only can these technological devices impartially record and transmit actual patient data without any emotional or ego input, but the data should also be more accurate than ever before. It is posited that this performance of big data allows for the fascinating intersection of huge quantities of patient data with personal, individualised care, and thus brings nearer the dream of ‘algorithms with machine learning capabilities’ providing diagnoses as effective as, or more effective than, ‘human diagnosticians in spotting cancers in test results’, as well as facilitating ‘follow-up, long-term care, and preventing relapses and readmissions’. We see in this technology scenario a tendency to envision data performance as if healthcare and medicine were no different from enterprise data management and analysis, data warehousing, resource management or automated production; therein lies the paradox of what computers can do and what computers should not do to people.

Citing the benefits of global real-time data for the accelerated detection of disease outbreaks, such as the 2014 Ebola virus outbreak in West Africa, Vayena et al. (2015) identify some of the key ethical challenges associated with digital disease detection (DDD). These challenges, they note, arise from the distinctive nature of DDD and the broader context in which it operates, and from the multifaceted character of big data, linked to the methods by which data are generated, the purposes for which they are collected and stored, the kind of information that is inferred by their analysis, how that information is translated into practice, and how ethical oversight and governance are facilitated. It is suggested that ethical governance here needs to constitute new standards that relate to data not only from diverse cultural communities but also from a ‘diverse range of sources, e.g., self-tracking, citizen scientists, social networks, volunteers, or other participatory contexts’. This view of governance ‘raises difficult questions of cultural relativity, such as whether standards of privacy can take different forms in relation to different cultures or whether some minimal core of uniform standards is also justified’. However, this requires measures to ensure that the way data are collected and processed respects the rights and interests of people from these diverse regions and communities. The technical challenge is then how to develop a robust scientific methodology that involves the validation of algorithms, an understanding of confounding, filtering systems for noisy data, managing biases, the selection of appropriate data streams, and so on. We may thus assert that the requirement of ethical governance demands a robustness of data performance that is not just scientific but also ethical.

Back in 2012, Vinod Khosla argued that the ever-increasing influx of patient data would help identify patterns and physiological interactions in ways that were not possible before. For example, machine-learning software would identify abnormalities and predict episodes, thereby assisting the discovery of most heart disease before a heart attack or stroke and addressing it at a fraction of the cost of the care that would be needed following such a trauma. It was asserted that healthcare would become more about data-driven deduction and less about trial and error. In this scenario, next-generation medicine would utilise more complex models of physiology, and more sensor data than a human medical doctor could comprehend, to suggest personalised diagnoses, and data science would be key to this. Technology would compensate for human deficiencies and amplify our strengths, making doctors more receptive to medical opportunities and better at their jobs—quicker, more accurate, and more fact-based. Diagnosis and treatment planning, he further asserted, would be done by a computer, used in concert with empathetic support from medical personnel selected more for their caring personalities than for their diagnostic abilities. Although it was recognised that ‘medicine is not just about inputting symptoms and receiving a diagnosis; it is about building relationships between providers and patients’, this was nevertheless a techno-centric vision, in which the routine diagnostic work would be undertaken by “Dr. Algorithm”, and most of the care work of providing personal, compassionate care would be undertaken by caring professionals such as nurses and social workers. This techno-centric vision saw the coming of automation in medicine and health as if it were like the use of autopilot in commercial flight. It was thus a matter of building robust back-end sensor technology and diagnostics through sophisticated machine learning and artificial intelligence operating on data in greater volumes than humans can handle.
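As a deliberately toy illustration of the kind of episode prediction this vision assumes—not Khosla’s or any real clinical system; the features, the label rule and the data are all invented for the sketch—one might train a simple classifier on sensor-derived features:

```python
# Hedged, illustrative sketch of data-driven risk prediction: a logistic
# regression trained on synthetic patient-sensor features to flag
# elevated risk of an "episode". Nothing here is clinically meaningful.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features: resting heart rate, systolic blood pressure,
# and daily activity level (arbitrary units).
n = 1000
X = rng.normal(loc=[70.0, 120.0, 5.0], scale=[10.0, 15.0, 2.0], size=(n, 3))

# Synthetic label: risk rises with heart rate and blood pressure and
# falls with activity (a toy rule standing in for real outcomes).
risk = 0.04 * (X[:, 0] - 70) + 0.03 * (X[:, 1] - 120) - 0.2 * X[:, 2]
y = (risk + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The gap between such a sketch and clinical practice is, of course, precisely the editorial’s point: the calculation is easy to produce, while the judgement about what it should mean for a patient is not.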

As we pass through 2017, we note that the language of ‘what computers can do’ in healthcare is being replaced by ‘what AI can do for healthcare’. Chan (2016) argues that artificial intelligence, when paired with robotics, ushers in robotic assistants for home, office and even hospital use. He says that concerns and fears that advanced robots in healthcare will replace human workers and take over hospitals soon may be unfounded, especially at this time, in spite of the fact that some hospitals are already using robots in operating rooms for complex procedures. Citing the example of the da Vinci surgical system, which provides a magnified vision of the area being operated on and allows the surgeon to perform small but precise movements on a patient’s body, Chan says that advanced technology still leaves surgeons in full control. He notes that although algorithms may be used ‘to compute dosage based on a patient’s records, diagnose an illness based on a series of questions and analyse blood tests more accurately and in the quickest time possible’, it does not follow that ‘computers will ever supplant the doctor’s diagnosis’. He further notes that ‘the human aspect in medicine is irreplaceable because doctors can provide a form of psychological or emotional relief which a computer would definitely lack’, in spite of AI being taught how to behave like a human. Moreover, although AI algorithms may have the capacity to learn, they ‘may not be able to take calculated risks like human doctors and just be heavily dependent on what the logical and most accurate decision should be’. Here we notice that data scientists and even proponents of AI and robotics can envision the limits of data performance in developing techno-centric scenarios of medicine and healthcare.

For data scientists, our brain is constantly required to adapt in a rapidly changing data-driven environment. Seen as predictive analytics, our brain is just a complicated learning machine whose main goal is data compression and interpretation. In the realm of data science, this data processing, occurring automatically in our brains billions of times each second, is seen as an elementary step in many data analysis applications. It is suggested that as the data contain numerous features, it is impossible to know deductively which features are “linked” the most to specific problems and should thus be incorporated in the analysis. Data science algorithms can be used to scan the data for meaningful patterns in all the different directions and to extract the feature combinations along which the separation into meaningful clusters is most prominent (Gaber 2015). Egger and Carpi (2008) note that in many areas of science, data scientists use graphs and visual data for scientific communication, and use more specialised graphs for specific kinds of data. Evolutionary biologists, for example, use evolutionary trees or cladograms to show how species are related to each other, what characteristics they share, and how they evolve over time. Geologists use a type of graph called a stereonet that represents the inside of a hemisphere to depict the orientation of rock layers in three-dimensional space. Many fields now use three-dimensional graphs to represent three variables, though they may not actually represent three-dimensional space. Regardless of the exact type of graph, the creation of clear, understandable visualisations of data is of fundamental importance in all branches of science. In recognition of the critical contribution of visuals to science, the National Science Foundation and the American Association for the Advancement of Science sponsor an annual Science and Engineering Visualisation Challenge, in which submissions are judged on their visual impact, effective communication, and originality. Likewise, reading and interpreting graphs is a key skill at all levels, from the introductory student to the research scientist. Graphs are a key component of scientific research papers, where new data are routinely presented. Presenting the data from which conclusions are drawn allows other scientists the opportunity to analyse the data for themselves, a process whose purpose is to keep scientific experiments and analysis as objective as possible. Although tables are necessary to record the data, graphs allow readers to visualise complex data sets in a simple, concise manner.
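A minimal sketch of that pattern-scanning idea, under our own assumption (not Gaber’s specific method) that principal component analysis stands in for the search over directions and k-means for the cluster check; the data are synthetic:

```python
# Sketch: scan synthetic high-dimensional data for the feature
# combinations along which it separates into clusters. PCA extracts the
# directions of greatest variation; KMeans tests for cluster structure.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Two synthetic groups that differ only along a *combination* of the
# ten features, not along any single feature in isolation.
group_a = rng.normal(0.0, 1.0, size=(100, 10))
group_b = rng.normal(0.0, 1.0, size=(100, 10)) + np.linspace(0, 2, 10)
X = np.vstack([group_a, group_b])

# Project onto the two strongest directions, then cluster the projection.
projected = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(projected)
print("cluster sizes:", np.bincount(labels))
```

No single feature tells the two groups apart here; the separation only becomes prominent along the combined direction the decomposition finds, which is the point of scanning ‘in all the different directions’.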

Hanson (2014), on data visualisation as performance, says that data visualisation is the act of taking a pool of data and forming a visual representation of it in schematic form. Hanson (ibid.) cites the interdisciplinary artists Orkan Telhan and Mahir Yavuz, who created United Colours of Dissent (UCoD), a data-driven performance designed for live public interaction in urban environments. Inhabitants of a public space respond to a series of questions using their mobile phones, and interact with each other in real time using a media facade or similar display infrastructure. The performance intends to capture the linguistic and socio-cultural profile of different communities by creating dynamic visualisations and infographics. Anyone in the audience with a smartphone can join the performance through a simple URL in their browser, and answer questions every 30 s using a simple slider. The answers are collected in a pool and, based on those answers, the results are visualised on the media facade. UCoD is based on the concept of immediate data visualisation, meaning that the data are visualised as they are collected and shared immediately with the audience to give them feedback throughout the performance. Telhan and Yavuz use real-time technology to do just that: broadcast the results of their questions onto the media facade and let everyone see each other’s answers at the same time. By combining UCoD with the city website, the very visualisation of data becomes accessible to different audiences, providing an exciting opportunity to explore ways of integrating real-time culture-driven content into online platforms. Ferri (2014), commenting on turning data visualisation into art, notes that artists are using available technology to create masterpieces out of everything from disease and weather to Wi-Fi and the music of internet chatter. She says that artistry today is interactive and deeply affected by the information age. As artists work through a digital lens, they use the parameters of scientific data and information design as a source of inspiration, transforming data from physical sources like heart rates and Wi-Fi channels into something creative, even interactive. For example, the American artist David Bowen has created “Cloud Piano”, whereby a piano uses custom software to track cloud shapes and movements, and channels them through corresponding keys on the piano. The result is an accurate data capture of weather patterns, illuminating their strangeness through sound. The Immaterials project, Ferri (ibid.) notes, is an effort to visualise and understand things that are “immaterial”, like Wi-Fi, for instance. In their “light project”, the Immaterials team created a wall of light meant to mimic Wi-Fi’s “immaterial terrain”. The result is a freestanding wall of blue light whose saturation is directly related to the strength of the Wi-Fi signal. Though the members of the group do not consider themselves artists, it is obvious from the video that the resulting work is not only fascinating, but beautiful.
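The ‘immediate data visualisation’ loop UCoD describes—collect an answer, immediately refresh the shared display—can be sketched in a few lines. Everything below, from the question text to the text-histogram standing in for the media facade, is invented for illustration and is not the artists’ actual system:

```python
# Hypothetical sketch of an immediate-visualisation loop: audience
# slider answers (0-100) arrive one by one, are pooled per question,
# and a summary "display" is re-rendered after every submission.
from collections import defaultdict

pool = defaultdict(list)  # question text -> all slider answers so far

def render(question: str) -> None:
    """Redraw the shared display for one question (a text bar here)."""
    answers = pool[question]
    mean = sum(answers) / len(answers)
    bar = "#" * int(mean / 5)  # 0-20 chars standing in for the facade
    print(f"{question}: n={len(answers):3d} mean={mean:5.1f} |{bar}")

def submit(question: str, answer: int) -> None:
    """Collect one audience answer and immediately refresh the display."""
    pool[question].append(answer)
    render(question)

for value in (72, 40, 55, 90, 63):
    submit("How at home do you feel in this city?", value)
```

The design point is simply that collection and display share one step: the audience sees the pool change as it contributes to it, which is what distinguishes the performance from an after-the-fact infographic.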

Inspired by the Brooklyn Bridge, the artist Di Mainstone (http://dimainstone.com/project/human-harp/) wondered what New York City would sound like if it were set to music. Through the Creators Project she created “Human Harp”, a wearable harp device meant to record the movement of city dwellers as they walk over a major bridge, which Mainstone says is itself like a giant instrument. In this project, a subject wears a vest-like device with magnetic “buttons” that connect wires to the bridge, mimicking the strings of a harp. Music is made by the blending of the wearer’s movements and the natural bending and straining of the bridge. The sound is recorded and communicated back to a microcontroller, then to a laptop. Mainstone says she hopes one day the project will be interactive, seamlessly incorporated into every city bridge, so that as people walk across, their movements become music. Using online public forums like chat rooms and bulletin boards, Ben Rubin and Mark Hansen created “Listening Post” (https://www.artfund.org/news/2011/09/22/celebrating-contemporary-mark-hansen-and-ben-rubins-listening-post), an art installation of a grid of screens that reads (and sometimes sings) sound bites to create a series of statements, usually starting with “I am”. Created in 2001, the work explored the ways in which we communicate online, which have changed and expanded exponentially in the last decade. Sid Lee, Tool and Intel (https://www.psfk.com/2014/09/sid-lee-nyc-intel-heart-bot-installation.html) collaborated on “Heart Bot”, an installation that tracks your heart rate (through a hand sensor) and then draws each data capture on to a wall, creating a piece of spontaneous and interactive art. The drawing machine was patterned after a famous spray-painting robot called Hektor, created in 2002. The final drawing, created by capturing and illustrating dozens of individuals’ heart rates, showed each person’s “unique physiological response” to their environment.
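How might a heart-rate reading drive a drawing machine? A hypothetical mapping—not Heart Bot’s actual code; the ranges and parameter names are our own assumptions—could translate beats per minute into stroke behaviour:

```python
# Hypothetical sketch: map one heart-rate sample (beats per minute) to
# pen parameters, so calm pulses yield long smooth arcs and agitated
# pulses yield short, jittery, sharply turning strokes.
import math

def stroke_params(bpm: float) -> dict:
    """Map a heart-rate reading to illustrative drawing parameters."""
    rest, peak = 50.0, 160.0
    t = min(max((bpm - rest) / (peak - rest), 0.0), 1.0)  # normalise to 0..1
    return {
        "stroke_length": 40.0 * (1.0 - t) + 5.0,  # calm -> long arcs
        "jitter": 8.0 * t,                        # excited -> shaky line
        "turn_angle": math.radians(10 + 80 * t),  # excited -> sharp turns
    }

for bpm in (55, 75, 110, 150):
    print(bpm, stroke_params(bpm))
```

Whatever the installation’s real mapping was, the principle is the same one the paragraph describes: each person’s physiological signal parameterises the mark-making, so no two contributions draw alike.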

Syuko and Ruairi (2016), in Fabricating Performance, provide another perspective on performing data, exploring the overlap of architectural design and dance choreography to uncover reciprocal exchanges regarding the body, geometry and methods of spatial notation. Instead of simply recording a performance, the notation is fed back to the performer and used as inspiration for further iterative performances. This feedback data can be used to adapt flexible digital systems to change and interrelate with dancers’ intentions for movement creation. ‘Performance-driven design’ and ‘data-driven fabrication’ are combined, resulting in a spatial design and construction system that incorporates interactivity between human and robotic performers. In dance choreography, the motion dynamics of the participants supply data that drives the fabrication which, in turn, is fed back to the inhabitants in an iterative process. Within the repetition and transfer, rhythms are created in which inhabitants can perform and occupy building areas of density and flight. They demonstrate how movement is used to create a designed performance and a designed space of performance synchronously, and how interactive drawing notation might become a more dynamic communication and construction tool. They further demonstrate how visual artists choreograph dancers’ movements by screening them as a visualisation of biometric data, such that the dancers’ movements become a constant and continuous creative process, never a repeated response to a data input. It is interesting to note how visual artists use the technology of projection to turn space into an instrument to be played in a dynamic act—the quality of body movement is reshaped each time. As dancers and projections simultaneously occupy the arena, encounters between the real dancers’ movements and the virtually reflected bodies are merged, while the dancers’ movements are governed by moving projection targets. We observe how data in the form of notation or projection performs as an enabler in the translation of movement into assemblies of discrete gestures, in which space becomes an aggregation of moments of communication.

Taking a more traditional approach to data visualisation through sculpture, the artist Luke Jerram (https://www.lukejerram.com), in Glass Microbiology, blows up some of the world’s deadliest viruses into larger-than-life glass sculptures. Their size correlates with the impact of the disease on each community, and their colourlessness is meant to take away any “colour” or prejudice associated with the disease. Though these sculptures undoubtedly have much cultural impact (and even an eerie beauty), they are fully accurate depictions and have even been used in medical textbooks and by the British Medical Journal. Like a combination of Heart Bot and Cloud Piano, the Stanza: Sensity project (https://hfolarin.wordpress.com/2015/02/18/stanza-nottingham-sensity-project/) tracks the emotional state of an entire city. Stanza uses environmental sensor technologies that track noise, humidity, light, sound and other factors to create a “sonicity”, an interpretation of the emotional state of the city based mostly on sound. While these projects vary in their approach, one commonality is a respect for, and fascination with, the natural world. As fluid as sound and light, the definition of ‘art’ is ever evolving. Digital tools are a way to illustrate the complex systems that have been in place since the beginning of time and the new ones that have come to inform our everyday existence.

European policy-makers (JIIP 2016) see performing data in terms of digitalisation and data platforms that impact societies: for example, the way data platforms that develop cars, control our food chains and even control our total industry base can become a serious risk to our economic base. While accepting the inevitability of data saturation, they argue for a mind shift from a reactive to a pro-active approach, and from ‘letting things come’ to ‘express where we want to come’. The argument then moves on to experimentation with disruptive technologies to “reflect the potential that the omnipresent, fast-developing ICT provides for parallel innovations”. This, they say, ‘calls for unrestricted and uncensored dialogue between people everywhere’. By placing human reason at the core of the argument, they seek ‘a uniform global approach to the prevention and management of global crises and disasters based on an algorithmic description of the sequence of events in the chain of cause and effect’. However, the policy debate still circles around ethical considerations in artificial intelligence, focusing on managing the risks of autonomous systems, ethical dimensions, regulation, and black-box decision-making environments.

Davies (2017) gives us an insight into the impact and implications of the shifting power of data when he says that the majority of us are entirely oblivious to what all these data say about us, either individually or collectively. As personal data become a huge driver of the digital economy, the data corporations are becoming ‘more and more skillful at tracking our habits and subtly manipulating our behaviors’. In providing personal data to digital corporations in exchange for services, we are not only sacrificing our privacy rights, but in the process also allowing ‘our feelings, identities and affiliations to be tracked and analysed with unprecedented speed’. Moreover, the anonymity and secrecy in which personal data are manipulated leave little opportunity to anchor this new capacity of the digital driver in the public interest or public debate. Whilst until recently statistics provided a quantitative tool for calculating, measuring and comparing alternative options for public scrutiny and debate, what is most politically significant about the recent shift from a logic of statistics to one of data, says Davies, is how comfortably it sits with the rise of a new digital elite, ‘who seek out patterns from vast data banks, but rarely make any public pronouncements, let alone publish any evidence’. It will be tragic if the new digital elite is not aware of, let alone rising to, the danger of ignoring the social implications of the secrecy of data and the consequences of its default analysis.

The social, cultural and economic costs of this secrecy of data point to a larger problem: the power shift from the individual, community and society to global companies like Facebook and Google. This power shift is illustrated by companies such as Cambridge Analytica, which use cutting-edge data analytics techniques and draw on various data sources to develop psychological profiles and target millions of consumers with tailored messaging (e.g. the targeting of American voters during the 2016 presidential elections). This ability to develop and refine psychological insights across large populations, he says, is one of the most innovative and controversial features of the new data analysis. We wonder whether, as techniques of ‘sentiment analysis’, which detect the mood of large numbers of people by tracking indicators such as word usage on social media, become incorporated into political campaigns, the emotional allure of the political feelings of the general public could ever become amenable to scientific scrutiny. Davies (ibid.) argues that although statistics, as a quantitative tool in the hands of bureaucracy and the market, have fallen into disrepute, they at least provide a counter-technique to the secrecy of analytics. He notes that whereas statistics can be used to correct faulty claims about the economy, society or population, in an age of data analytics there are few mechanisms to prevent people from giving way to their instinctive reactions or emotional prejudices. On the contrary, he says, companies such as Cambridge Analytica treat those feelings as things to be tracked. Although the new data analytics apparatus of number-crunching is well suited to detecting trends, sensing the mood and spotting things as they bubble up, it is less suited to making the kinds of unambiguous, objective, potentially consensus-forming claims about society. Whereas the secrecy of analytics runs counter to bringing data, their analysis and results into the public domain, statistics help anchor social findings and political narrative in a shared reality. Davies further warns that in this new technical climate of analytics, “it will fall to the new digital elite to identify the facts, projections and truth amid the rushing stream of data that results. The question to be taken more seriously, now that numbers are being constantly generated behind our backs and beyond our knowledge, is where the crisis of statistics leaves representative democracy”.
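At its crudest, the word-usage form of ‘sentiment analysis’ mentioned above reduces to counting words against small mood lexicons. A deliberately minimal sketch follows; real systems use far larger lexicons or trained models, and these word lists and posts are invented for illustration:

```python
# Minimal word-usage sentiment sketch: score each post by counting
# hits against tiny positive/negative lexicons, then sum for the
# aggregate "mood" of the stream. Illustrative only.
import re

POSITIVE = {"hope", "proud", "great", "love", "win"}
NEGATIVE = {"fear", "angry", "crisis", "hate", "lose"}

def sentiment(post: str) -> int:
    """Positive-minus-negative word count for one post."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

posts = [
    "so proud of this community, great things ahead",
    "angry about the crisis, politicians never listen",
    "hope we win this time",
]
scores = [sentiment(p) for p in posts]
print("per-post:", scores, "aggregate mood:", sum(scores))
```

Even this toy version shows why Davies’s worry bites: the technique yields a running mood indicator well suited to ‘sensing things as they bubble up’, yet nothing in it produces the kind of transparent, contestable claim about society that a published statistic does.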

But, he asks, how did we arrive at a situation where the experts who produce and use statistical tools have come to be painted as arrogant and oblivious to the emotional and local dimensions of politics? He provides an insight into this dilemma when he says that in recent years a new way of quantifying and visualising populations has emerged that potentially pushes statistics to the margins, ushering in a different era altogether. Statistics, collected and compiled by technical experts, are giving way to data that accumulate by default, as a consequence of sweeping digitisation. Traditionally, statisticians have known which questions they wanted to ask regarding which population, and then set out to answer them. By contrast, data are now automatically produced whenever we swipe a loyalty card, comment on Facebook or search for something on Google. We note that it is not only corporations such as Google and Facebook that are sold on analytics; fund managers such as Fink of BlackRock (Thomas 2017) have also thrown in their lot with the machine, relying more and more on algorithms and models to pick stocks. This means relying more and more on ‘robo-advisers, big data and artificial intelligence’, and on ‘systematic investing styles that favour algorithms, science and data-reliant models’.

As our cities, cars, homes and household objects become digitally connected, the amount of data we leave in our trail will grow even greater. In this new world, data are captured first and research questions come later. In the long term, he says, the implications of this will probably be as profound as the invention of statistics was in the late seventeenth century. The rise of “big data” provides far greater opportunities for quantitative analysis than any amount of polling or statistical modelling. But it is not just the quantity of data that is different; they represent an entirely different type of knowledge, accompanied by a new mode of expertise, and thereby new forms of truth. Davies further argues for the need not only to seek alternative ways of adapting data collection to better reflect lived experience, but also to counter the technical-elite-led politics of facts and the populist politics of feeling. In other words, we need to make a choice between those still committed to public knowledge and public argument and those believers in analytics who profit from the ongoing disintegration of those things. In a world of data analytics where secrecy surrounding methods and sources of data is regarded as competitive advantage, it is doubtful that the ‘big data elite’ would easily give up their hold on data in favour of the public interest and social benefit. In the face of this dominance of data accumulation, it is encouraging to note that public bodies such as the Open Data Institute, co-founded by Tim Berners-Lee, have launched campaigns to make data publicly available. Davies says that in spite of pessimism about credible leverage over the data analytics corporations, there may still be hope that privacy and human rights law could represent a potential obstacle to the extension of data analytics.

We now see that, having harnessed data to gain global dominance in digital technology, technology companies such as Google and Facebook are moving into other markets such as healthcare, mobility, hotels and media. Not only do they wield enormous corporate power; their exclusive mastery of technology raises questions of what kinds of regulatory approaches are viable in this new global environment, how the nation-state can effectively regulate these new global entities, and in what ways society can make them respond to their social and ethical responsibilities, beyond the rhetoric of corporate responsibility. The manipulation of data is becoming a tool for the control, governance, regulation and shaping of various societal activities.

Flyverbom et al. (2016) set out that, as citizen data are tracked and filtered by governments and companies at will, this growing focus on data collection and its use on a massive scale raises issues of power, transparency, privacy and autonomy, and thereby ethical concerns and political implications at a societal level. These concerns are intertwined with the increasing rhetoric of ‘rational choice’ theory, shifting the burden of privacy concerns from corporations to the individual, as if citizens have a choice when their personal data are ‘translated into algorithms as rational tools’ in the name of service provision, security, knowledge production and the public good. There also remains a concern about ‘how the public interest should manifest in algorithmic construction and operation’ and ‘how to situate algorithmically driven media platforms within the traditional institutional frameworks’.

While the regulators play ‘catch-up’ in creating data ethics frameworks for governance, the question remains how society can expect socially responsible behaviour from digital companies regarding their collection, use and re-circulation of data, without their hiding behind existing laws, and how to ensure greater clarity around their social responsibility agenda. Such a discussion on the creation of an ethical framework needs ‘to be infused with a more robust notion of the public interest than can currently be found in the realm of digital intermediary governance’.

As machine learning algorithms manipulate data to support and control institutional and organisational structures, they move beyond their role as computational artefacts, raising concerns about the limits of our ‘entrenched assumptions about agency, transparency, and normativity’. Moreover, as Introna (2016) observes, algorithms and their actions are seen as problematic because they are inscrutable, automatic, and subsumed in the flow of daily practices. Although these concerns are voiced in terms of designing algorithmic transparency and openness, others have argued for more democratic or value-centred design of such actors. Introna argues for Foucault’s notion of ‘governmentality’ as a conceptual framework for examining ‘how practice becomes problematised, how calculative practices are enacted as technologies of governance, how such calculative practices produce domains of knowledge and expertise, and finally, how such domains of knowledge become internalised in order to enact self-governing subjects. In other words, it allows us to show the mutually constitutive nature of problems, domains of knowledge, and subjectivities enacted through governing practices’.

Drawing on Chantal Mouffe’s theory of agonistic pluralism, Crawford (2016) draws our attention to the working of algorithms within ‘highly contested online spaces of public discourse, such as YouTube and Facebook, where incompatible perspectives coexist. Yet algorithms are designed to produce clear “winners” from information contests, often with little visibility or accountability for how those contests are designed’. She asks us to widen our perspective beyond the isolated idea of a non-negotiable algorithmic “black box” to the idea of agonistic pluralism as both a design ideal for engineers and a provocation to understand algorithms in a broader social context: rather than focusing on the calculations in isolation, we need to account for the spaces of contestation where they operate. Ananny (2016) proposes a possible approach for moving beyond the isolated ‘black box’ algorithm to the idea of ‘networked information algorithms’ (NIAs): assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. He argues for ‘an empirically grounded, pragmatic ethics of algorithms’ that draws on ‘algorithmic actions based on perceived similarity and probability’. He observes that ‘algorithmic ethics resemble actuarial ethics: a prediction’s legitimacy is based not only on the probable correctness of a current calculation but on the risk of applying that calculation in the future’. He further quotes Jasanoff (2010: 15): if ‘risk is a product of human imaginations disciplined and conditioned by an awareness of the past’, then ‘predictive algorithms are a key element of disciplining and conditioning ethical imagination—of envisioning what might or ought to be done’.

Artists see the performance of data not just in terms of its transformation into information, but also in terms of interactivity between the artist and the audience. This interactivity itself becomes a tool for the artist’s and the scientist’s continued evolution and for the amalgamation of their partnership. In the end, performance is about raising awareness of the interconnectivity of everything and everyone. Technology is, or should be, utilised to amplify the experience and/or the range of influence. As wearable sensors proliferate, we have access to rich information regarding human movement that gives us insights into our daily activities like never before. In a sensor-rich environment, it is desirable to build systems that are aware of human interactions by studying contextual information. Experiential scientists, craftspeople, medical practitioners and engineers transform raw data into information, then, using their skills and experience, transform information into knowledge, and, through the application of their contextual knowledge and wisdom, make judgements about the accuracy, relevance and acceptability of data coming from many sources. In this transformation process, there is always scope for human intervention at various levels of the data-to-action cycle, and that intervention, reflecting the many overlapping contexts, bears witness to situated judgement, in contrast to an intervention based upon machine-learning algorithmic calculations. In other words, the performance of data, in the hands of expert practitioners, is here seen in terms of an evolving judgement-making process culminating in action. This transformational process from data to action, encompassing feedback loops and human intervention, provides a human-centred perspective of judgement that is contrary to the computational model of ‘judgement to calculation’, in which data are used to compute judgement. We should, however, recognise that the computational model of judgement, turning judgement into an algorithm, is still the dominant focus of data-driven AI. It may be tempting to argue that nothing has fundamentally changed in the data–action cycle except for the availability of an abundance of data (big data) and the exponential processing speed of computers. The fallacy of this argument revolves around the idea that, given only an abundance of data and exponential processing speed, we can construct machine learning algorithms that outstrip human cognition, to the extent that machines become far better than humans at processing a wide variety and large number of data sets, working in ways different from ours to reach analytical judgements. However, this calculation-centred view of judgement fails to recognise that human judgement is about the process of finding coherence among often conflicting and yet creative possibilities that cannot be reduced to calculation. Moreover, human judgement resides in and reflects the dynamic and evolving nature of professional and social practices, enriching human experience, knowledge, skill and cognition. From this human-centred perspective, the performance of data lies in the performance of the practice of the ‘data–action cycle’, in other words the performance of the inter-relations between data, information, knowledge, wisdom and action. This view of performing data perceives and experiences the world around us, and seeks to understand the nature of the interface between the physical, cultural and our experiential worlds.
The nature and practice of the interface here is fundamentally relational between, in-between, and across the knowledges, experiences and practices of contextual domains, and not transactional in the sense of ‘cause and effect’ calculation. This view shifts our attention from a purely technological fascination with machine learning to the evolving interaction of human systems and technology, thereby providing a symbiotic horizon of performing data. In the midst of the fascination with digital technology, we are cautioned to remember that the performance of data in the hands of creative artists and scientists embodies a social, cultural and spatial intelligence that conforms to the living. This we cannot get from machine intelligence. Moreover, it is not clear how a machine would deal with the architectural paradox: when an architect draws a diagram of a building, the diagram becomes a building, a static object, an exact language, an exact dream; but the diagram as a model performs as a process, a dynamic process in which the diagram acts as an algorithm of ideas.
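As referenced above, the following minimal Python sketch illustrates the human-centred data-to-action cycle. Everything in it is an invented assumption (the sensor readings, the alert threshold, and the names machine_proposal and clinician); it exists only to show the structure: the machine’s calculation is a proposal, and a practitioner’s situated judgement stands between data and action.

```python
# Illustrative sketch, not an implementation from the literature:
# a data-to-action cycle with a human intervention point between
# the algorithmic calculation and the final action.
from typing import Callable, List

def data_to_information(raw: List[float]) -> float:
    """Transform raw sensor readings into information (here, a mean)."""
    return sum(raw) / len(raw)

def machine_proposal(info: float) -> str:
    """The calculation: an assumed threshold turns information into a proposal."""
    return "alert clinician" if info > 100.0 else "no action"

def clinician(info: float, proposal: str) -> str:
    """Situated judgement, drawing on context the calculation cannot see
    (the patient has just climbed a flight of stairs, say)."""
    if proposal == "alert clinician" and info < 120.0:
        return "observe and re-measure"  # judgement tempers calculation
    return proposal

def data_to_action(raw: List[float],
                   practitioner: Callable[[float, str], str]) -> str:
    """Data -> information -> proposal -> human judgement -> action."""
    info = data_to_information(raw)
    return practitioner(info, machine_proposal(info))

# A heart-rate window averaging 104 bpm: the machine proposes an alert,
# but the practitioner's contextual judgement returns a different action.
print(data_to_action([98.0, 104.0, 110.0], clinician))
```

The point is structural: remove the practitioner parameter and the cycle collapses into the ‘judgement to calculation’ model criticised above.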

Our authors in this volume have also raised issues of data performance when discussing the rise of the intelligent machine and monster research robots. When we think of robots, we also think of the implications of the data that make the robot perform. What we fear in the robot is not just the performance of a soulless and mechanical monster, but also a reflection of our own fear that we ourselves may be becoming something less than human. In this process, the human being is seen to be disappearing: first ontologically, by removing the self from its origin-connection with the innermost; and secondly physically, by handing our functions over to the machine. In this situation, human beings no longer act as the authors of their own actions, and in this ‘handing over’ they reduce techne to a smart machine, a functional device. In the process, the human being turns into a device itself, losing some essential part of its humanity by becoming governed by rational programming. However, in the creation of monster robots we should not ignore the role and responsibility of the computer program designer and builder, who makes specific assumptions about the application domain, including: ‘What will a program’s variables represent? How will data relationships be captured? What strategies will support control algorithms? What is the relationship between the “perfect” and the “good enough” solution?’
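These design questions are not abstract; each is answered, often silently, in the first lines of any program. The fragment below is a hypothetical illustration (the patient record, the rule and the thresholds are all invented) of how a designer’s assumptions about representation, relationships and the ‘good enough’ are fixed in code before any algorithm runs.

```python
# Hypothetical sketch: the designer's assumptions, made visible.
# All names and thresholds here are invented for illustration.

# What will the variables represent?
# 'Wellbeing' is reduced to the two numbers a sensor can deliver.
patient = {"heart_rate": 88, "steps_today": 4200}

# How will data relationships be captured?
# A fixed rule stands in for a clinician's contextual judgement.
def is_healthy(p: dict) -> bool:
    return p["heart_rate"] < 100 and p["steps_today"] > 3000

# What is the relationship between the "perfect" and the "good enough"?
# A binary verdict is accepted where a practitioner would see degrees.
print(is_healthy(patient))  # True: the model's verdict, not the patient's state
```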

AI&Society warmly welcomes reflective contributions to the debate on the hermeneutics of performing data, in pursuit of a harmonious interactivity of art, science, technology and society.

References

  1. Ananny M (2016) Toward an ethics of algorithms: convening, observation, probability, and timeliness. In: Ziewitz M (ed) Governing algorithms, special issue of Science, Technology, and Human Values (STHV), vol 41, issue 1. SAGE Publications, London
  2. Chan L (2016) Will robots in healthcare make doctors obsolete? Tech Times. http://www.techtimes.com/articles/131870/20160209/will-robots-in-healthcare-make-doctors-obsolete.htm. Accessed 27 Apr 2017
  3. CRASSH (2016) Diagrammatic: beyond inscription? Conference, 2–3 December 2016, Cambridge University. http://www.crassh.cam.ac.uk/events/26782. Accessed 2 Dec 2016
  4. Crawford K (2016) Can an algorithm be agonistic? Ten scenes from life in calculated publics. In: Ziewitz M (ed) Governing algorithms, special issue of Science, Technology, and Human Values (STHV), vol 41, issue 1. SAGE Publications, London
  5. Davies W (2017) How statistics lost their power – and why we should fear what comes next. The Guardian. https://www.theguardian.com/politics/2017/jan/19/crisis-of-statistics-big-data-democracy. Accessed 28 Apr 2017
  6. Egger AE, Carpi A (2008) Using graphs and visual data in science. Visionlearning, vol POS-1 (4). http://www.visionlearning.com/en/library/Process-of-Science/49/Using-Graphs-and-Visual-Data-in-Science/156. Accessed 28 Apr 2017
  7. Ferri J (2014) Turning data visualization into art: 7 artists use data as a muse. https://iq.intel.com/turning-data-visualization-art-7-artists-using-data-museans. Accessed 23 Apr 2017
  8. Flyverbom M et al (2016) Sub-theme 63: digital transformations: technology, organization and governance in the algorithmic age. https://www.egosnet.org/jart/prj3/egos/main.jart?rel=de&reserve-mode=active&content-id=1454064926471&subtheme_id=1407070330863. Accessed 23 Apr 2017
  9. Gaber S (2015) From neuroscience to data science. http://itblog.emc.com/2015/03/13/from-neuroscience-to-data-science/. Accessed 28 Apr 2017
  10. Hanson J (2014) Artists use PubNub to power live, data visualization performance. https://www.pubnub.com/blog/2014-06-10-artists-use-pubnub-to-power-live-data-driven-performance/. Accessed 28 Apr 2017
  11. Introna L (2016) Algorithms, governance, and governmentality: on governing academic writing. In: Ziewitz M (ed) Governing algorithms, special issue of Science, Technology, and Human Values (STHV), vol 41, issue 1. SAGE Publications, London
  12. Jasanoff S (2010) Beyond calculation: a democratic response to risk. In: Lakoff A (ed) Disaster and the politics of intervention. Columbia University Press, New York, pp 14–41
  13. JIIP (2016) Annual symposium: the effect of digitisation on society, 14–16 November 2016, Brussels. http://www.knowledge4innovation.eu/8th-eis-programme. Accessed 28 Apr 2017
  14. Khosla V (2012) Technology will replace 80% of what doctors do. Fortune.com. http://fortune.com/2012/12/04/technology-will-replace-80-of-what-doctors-do/. Accessed 27 Apr 2017
  15. Manaugh G, Twilley N (2013) Making art out of earthquakes. The Atlantic. http://www.theatlantic.com/technology/archive/2013/03/making-art-out-of-earthquakes/274345/. Accessed 28 Apr 2017
  16. Marr B (2016) Big data: a game changer in healthcare. Forbes.com. https://www.forbes.com/sites/bernardmarr/2016/05/24/big-data-a-game-changer-in-healthcare/#5b6f7e2d525b. Accessed 28 Apr 2017
  17. Matussek P (2012) Memory theatre in the digital age. Perform Res 17(3):8–15
  18. Performing Knowledge Conference (2016) Emmanuel College, Cambridge. https://www.evensi.uk/performing-knowledge-conference-emmanuel-college/169796020. Accessed 26 Apr 2016
  19. Syuko K, Ruairi G (2016) Fabricating performance: the interaction of dance and construction. The Interactive Architecture Lab, UCL, London. http://www.interactivearchitecture.org/fabricating-performance-a-dance-of-circular-feedback-processes-in-constructing-spatial-notion.html. Accessed 22 Apr 2017
  20. Thomas L (2017) At BlackRock, machines are rising over managers to pick stocks. New York Times. https://www.nytimes.com/2017/03/28/business/dealbook/blackrock-actively-managed-funds-computer-models.html. Accessed 28 Apr 2017
  21. Thorp J (2014) On data and performance. http://blog.blprnt.com/blog/blprnt/on-data-and-performance. Accessed 28 Apr 2017
  22. Udell J (2007) Data analysis as performance art, strategies for internet citizens. https://blog.jonudell.net/2007/07/09/data-analysis-as-performance-art/. Accessed 28 Apr 2017
  23. Vayena E et al (2015) Ethical challenges of big data in public health. PLoS Comput Biol 11(2):e1003904. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4321985/. Accessed 29 Apr 2017

Copyright information

© Springer-Verlag London 2017

Authors and Affiliations

  1. Professor Emeritus, University of Brighton, Brighton, UK
