
Seeing the World Through Machinic Eyes: Reflections on Computer Vision in the Arts

  • Marijke Goeting
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11130)

Abstract

Today, computer vision is broadly implemented and operates in the background of many systems. For users of these technologies, there is often no visual feedback, making it hard to understand the mechanisms that drive them. Even when computer vision is used to generate visual representations like Google Earth, it remains difficult to perceive the particular processes and principles that went into their creation. This text examines computer vision as a medium and a system of representation by analyzing the work of design studio Onformative, designer Bernhard Hopfengärtner and artist Clement Valla. By using technical failures and employing computer vision in unforeseen ways, these artists and designers expose the differences between computer vision and human perception. Since computer vision is increasingly used to facilitate (visual) communication, artistic reflections like these help us understand the nature of computer vision and how it shapes our perception of the world.

Keywords

Art · Design · Perception · Computer vision · Google Earth · Media · Representation · Digital image

References

  1. Virilio, P.: The Vision Machine, p. 59. Indiana University Press, Indianapolis (1994)
  2. Shubber, K.: Artificial artists: when computers become creative. Wired, 7 Aug 2013. http://www.wired.co.uk/article/can-computers-be-creative; Naughton, J.: Can Google’s AlphaGo really feel it in its algorithms? The Guardian, 31 Jan 2016. http://www.theguardian.com/commentisfree/2016/jan/31/google-alphago-deepmind-artificial-intelligence-intuititive; Titcomb, J.: The best of Siri: 11 funny responses from the iPhone’s virtual assistant. The Telegraph, 1 Jul 2015. http://www.telegraph.co.uk/technology/apple/11709991/The-best-of-Siri-11-funny-responses-from-the-iPhones-virtual-assistant.html
  3. Virilio: The Vision Machine, p. 60
  4. McLuhan, M.: Understanding Media: The Extensions of Man. Gingko Press, Berkeley (1964), pp. 5, 19, 34
  5. Valla, C.: The Universal Texture. Rhizome, 31 Jul 2012. http://rhizome.org/editorial/2012/jul/31/universal-texture/
  6. Postcards from Google Earth. http://www.postcards-from-google-earth.com/info/. Accessed 1 Mar 2016
  7. In 2012, Google switched from using its geo-modeling community (many volunteers who manually created detailed 3D models of buildings) to automatic image rendering and computer vision techniques to create a 3D representation of entire metropolitan areas. The Never-Ending Quest for the Perfect Map. https://googleblog.blogspot.com/2012/06/never-ending-quest-for-perfect-map.html. Accessed 15 Mar 2016
  8. Google Earth’s Incredible 3D Imagery, Explained (YouTube). https://bit.ly/2pnyZsG. Accessed 26 Jun 2018
  9. See also Goeting, M.: Digital fluidity: the performative and reconfigurable nature of the digital image in contemporary art and design. Int. J. New Media Technol. Arts 11(4), 27–46 (2016)
  10. Hansen, M.: Seeing with the body: the digital image in postphotography. Diacritics 31(4), 54–84 (2001)
  11. Crary, J.: Techniques of the Observer: On Vision and Modernity in the Nineteenth Century, p. 2. MIT Press, Cambridge (1990)
  12. Bolter, J.D., Grusin, R.: Remediation: Understanding New Media. MIT Press, Cambridge (2000), pp. 5–6, 22–23
  13. Bolter, Grusin: Remediation, pp. 38–41
  14. Bolter, Grusin: Remediation, p. 38
  15. In this regard, Google Earth departs from the tradition of cartography. Cartography’s aim to map places always involves a form of reduction: relevant geographical information is collected and transformed into a schematic representation of an area. Traditionally, what gets onto the map are distinctive elements, not uniformity, because including sameness hinders the functionality of a map. The aim of Google Earth, however, is to “build the most photorealistic version of our planet,” as one of Google’s developers makes clear, and this involves including as much (overlapping) information as possible. See Adams, C.: Imagery update: Explore your favorite places in Google Earth. Medium. https://medium.com/google-earth/imagery-update-explore-your-favorite-places-in-google-earth-5da3b28e4807. Accessed 19 Sep 2018. Consequently, Google Earth can no longer be considered a map. Instead, it functions as an all-encompassing virtual image space that builds on the photographic paradigm and owes more to the history of immersive (virtual) spaces than to cartography
  16. Coumans, A.: De stem van de grafisch ontwerper: drie vormen van dialogische verbeelding in het publieke domein [The voice of the graphic designer: three forms of dialogical imagination in the public domain]. Esthetica. http://estheticatijdschrift.nl/wp-content/uploads/sites/175/2014/09/5-Esthetica-Destemvandegrafischontwerper-Drievormenvandialogischeverbeeldinginhetpubliekedomein-2010-12-20.pdf. Accessed 29 May 2018
  17. For this reason, the distinction between “real” and “virtual” had lost its meaning for Flusser. He preferred to talk about the distinction between gestures of “abstraction” and “concretion”
  18. Flusser, V.: Writings, p. 128. University of Minnesota Press, Minneapolis (2002)
  19. Flusser: Writings, pp. 129–130
  20. The term “transapparatische Bilder” [trans-apparatus images] was originally coined by Flusser. Flusser, V.: Medienkultur. Fischer Taschenbuch Verlag, Frankfurt am Main (2005), pp. 75, 77
  21. According to Dutch glitch artist Rosa Menkman, “Glitch, an unexpected occurrence, unintended result, or break or disruption in a system, cannot be singularly codified, which is precisely its conceptual strength and dynamical contribution to media theory.” Menkman, R.: The Glitch Moment(um), p. 26. Institute of Network Cultures, Amsterdam (2011). “It [the glitch] is the moment at which this flow [of technology] is interrupted that a counter-experience becomes possible. [...] the possibility for an alternative message unfolds. Through the distorted lens of the glitch, a viewer can perceive images of machinic inputs and outputs. The interface no longer behaves the way it is programmed to; the uncanny encounter with a glitch produces a new mode that confounds an otherwise predictable masquerade of human-computer relations.” Skyers, E.I.: Vanishing Acts, pp. 48–49. Link Editions, Brescia (2015)
  22. The Semacode was originally designed to encode Internet URLs, but it is also used by postal services to automate the distribution of parcels and letters, and by railways and concert venues to sell tickets online
  23. Hello, world! http://hello.w0r1d.net/description.html. Accessed 16 May 2018
  24. Hansen: Seeing with the Body, pp. 61–62
  25. Virilio: The Vision Machine, p. 75. However, one could question whether the computational algorithms employed in computer vision are really that different from how we as humans perceive and analyze the world. After all, many of these technologies are modeled after us: like us, they work with pattern recognition, though not yet as advanced as ours. Yet although many scientists now agree that the human brain works in ways similar to computers and algorithms, there is a risk in using this analogy: it may cause us to overlook the differences. As historian Yuval Noah Harari explains, during the Industrial Revolution scientists described the human body and mind as a steam engine because that was the dominant technology of the time. While this sounded logical in the nineteenth century, it seems naïve today, and the same most likely applies to the human-computer analogy: it explains only a very small part of the human, and even less about what it means to be human. See Harari, Y.N.: Homo Deus: A Brief History of Tomorrow. Random House, New York (2016)
  26. The digital photograph taken of the wheat field is actually a list of data describing values of color, contrast, size, etc. Only after software converts this data can we speak of an image. See also Goeting: Digital Fluidity
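The conversion this note describes can be illustrated with a minimal, self-contained sketch (the names and the tiny 2×2 “photograph” are invented for illustration): a flat list of RGB values only becomes an image once software imposes a width and height on it.

```python
# A "digital photograph" as raw data: a flat list of RGB triples.
# On its own this is just a sequence of numbers, not yet an image.
raw_data = [
    (255, 0, 0), (0, 255, 0),
    (0, 0, 255), (255, 255, 255),
]

def to_image(data, width, height):
    """Impose a 2D grid on flat pixel data -- the conversion step
    software must perform before anything can be displayed."""
    assert len(data) == width * height, "data must fill the grid exactly"
    return [data[row * width:(row + 1) * width] for row in range(height)]

image = to_image(raw_data, width=2, height=2)
# Only now can we speak of a pixel at a position, e.g. top-right:
print(image[0][1])  # (0, 255, 0)
```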
  27. The phrase “hello, world” was first used by Canadian computer scientist Brian Kernighan in 1978 in the instruction manual The C Programming Language, as a small test program demonstrating a basic understanding of the programming language. It has since been used by many others as a symbolic first step in mastering a programming language
  28. Dambeck, H.: Code im Kornfeld: Grüße an die Welt über Google Earth [Code in the cornfield: greetings to the world via Google Earth]. Spiegel Online, 9 May 2006. http://www.spiegel.de/netzwelt/web/code-im-kornfeld-gruesse-an-die-welt-ueber-google-earth-a-415135.html
  29. It is unclear whether Bernhard Hopfengärtner succeeded in getting his design recorded by the satellites Google uses, and consequently onto Google Earth, since the satellites scan the globe at intervals of approximately one year, by which time the pattern might have faded. Moreover, the satellite images are updated at the same interval, so if it did make it onto Google Earth, it was only visible for a short period of time, at least for those who managed to find it among the vast imagery of Google’s virtual globe
  30. Bauhaus-Universität Weimar. http://www.uni-weimar.de/projekte/iwantmymkg/en/hello-world. Accessed 18 May 2018
  32. 32.
    Colomina, Wigley: Are We Human? 76–77Google Scholar
  33. 33.
    McLuhan: Understanding Media. 12, 20Google Scholar
  34. 34.
    Colomina, Wigley: Are We Human? 9Google Scholar
  35. 35.
    Munster, A.: An Aesthesia of Networks Conjunctive Experience in Art and Technology, pp. 45–55. MIT Press, Cambridge (2013)Google Scholar
  36. 36.
    Munster: An Aesthesia of Networks. 53Google Scholar
  37. 37.
    Munster: An Aesthesia of Networks. 51–61Google Scholar
  38. 38.
    Colomina, Wigley: Are We Human? 23 See also Stiegler, B.: Time and Technics 1: The Fault of Epimetheus. Stanford University Press, Stanford (1998)Google Scholar
  39. 39.
    Colomina, Wigley. Are We Human? 25 The following text by Bernhard Hopfengärtner can be connected to this: “The interaction with our environment is producing our mental representation of it. Designing objects of interaction, be it physical objects, services or cultural technologies, is also a method of generating new or alternative ways of thinking. As interaction design can be used as an approach to explore this field, it also allows the incorporation social [sic] or philosophical observations and ideas into a tangible form.” Royal College of Art. https://www.rca.ac.uk/students/bernhard-hopfengartner/. Accessed 22 May 2018
  40. 40.
    Their “algorithmic robot,” as Kiefer and Laub call it, browses Google Earth day in, day out, continuously scanning the virtual globe by moving along the latitude and longitude of the earth. After it has circled the globe, it starts again, zooming in closer every time, which exponentially increases the number of images that need to be analyzed. After running the bot non-stop for several weeks, it only traveled a small part of the globe. Kiefer quoted in Solon, O.: Google Faces searches for Faces in Google Maps, and finds Forever Alone Guy. Wired. 23 May 2013. http://www.wired.co.uk/news/archive/2013-05/23/google-faces
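The exponential growth this note mentions is easy to make concrete. A minimal sketch follows; the 2^z × 2^z tile grid is the common web-map convention, and the row-by-row traversal order is an assumption for illustration, not Onformative's actual implementation.

```python
def tiles_at_zoom(zoom):
    """Number of tiles covering the globe at a given zoom level: each
    step doubles resolution along both axes, so the count quadruples."""
    return (2 ** zoom) ** 2

def traverse(zoom):
    """Sweep every tile at one zoom level, row by row, the way a bot
    might scan the virtual globe before zooming in for the next pass."""
    side = 2 ** zoom
    for row in range(side):
        for col in range(side):
            yield (row, col)

for z in range(4):
    print(z, tiles_at_zoom(z))  # prints: 0 1, 1 4, 2 16, 3 64
```

Even at modest zoom levels the tile count explodes, which is why weeks of non-stop scanning covered only a fraction of the globe.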
  41. Google Faces. http://onformative.com/work/google-faces. Accessed 7 Mar 2016
  42. Onformative quoted in Garber, M.: These Artists are Mapping the Earth with Facial Recognition Software. The Atlantic, 21 May 2013. http://www.theatlantic.com/technology/archive/2013/05/these-artists-are-mapping-the-earth-with-facial-recognition-software/276101/
  43. Kiefer quoted in Solon, O.: Google Faces searches for faces in Google Maps
  44. Virilio: The Vision Machine, pp. 72–73
  45. Berkeley Computer Vision Group. https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/. Accessed 2 Jun 2018
  46. Rhodes, M.: Finding Hidden Faces in Google Earth’s Landscapes. Fast Company, 6 Oct 2013. http://www.fastcodesign.com/1672781/finding-hidden-faces-in-google-earths-landscapes
  47. Likewise, we can delegate our wish to achieve chance or randomness in the production of artworks to computers, believing the computer to be the better instrument for achieving this goal. While computational algorithms can produce long sequences of apparently random results, they are in fact based on deterministic logic. Because these algorithms can never be regarded as a “true” source of randomness (as tossing a coin or rolling dice are), they are called pseudorandom number generators. Here again, it is our desire to perceive randomness, together with our inability to discern patterns and predict the outcome of a computational algorithm, that creates the impression or illusion of randomness
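The determinism this note describes is easy to demonstrate with Python's standard-library generator: two generators seeded with the same value reproduce the same “random” sequence exactly.

```python
import random

# Two pseudorandom generators with identical seeds...
a = random.Random(42)
b = random.Random(42)

# ...produce identical "random" sequences: the output is fully
# determined by the seed, however unpredictable it may look.
seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]

print(seq_a == seq_b)  # True
```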
  48. Moreover, as Stuart Hall makes clear, we humans are able to form concepts of rather obscure and abstract things which we cannot in any simple way see, have never seen, and possibly cannot or will not ever see. Examples are concepts like war, death, friendship or love. Hall, S.: Representation: Cultural Representations and Signifying Practices, p. 17. Open University, London (1997). For instance, how would computer vision be able to detect an image as a representation or expression of love or revenge?

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Radboud University, Nijmegen, The Netherlands
  2. ArtEZ Institute of the Arts, Arnhem, The Netherlands
