
Geospatial Information Visualization and Extended Reality Displays

  • Arzu Çöltekin
  • Amy L. Griffin
  • Aidan Slingsby
  • Anthony C. Robinson
  • Sidonie Christophe
  • Victoria Rautenbach
  • Min Chen
  • Christopher Pettit
  • Alexander Klippel
Open Access Chapter

Abstract

In this chapter, we review and summarize the current state of the art in geovisualization and extended reality (i.e., virtual, augmented and mixed reality), covering a wide range of approaches to these subjects in domains that are related to geographic information science. We introduce the relationship between geovisualization, extended reality and Digital Earth, provide some fundamental definitions of related terms, and discuss the introduced topics from a human-centric perspective. We describe related research areas including geovisual analytics and movement visualization, both of which have attracted wide interest from multidisciplinary communities in recent years. The last few sections describe the current progress in the use of immersive technologies and introduce the spectrum of terminology on virtual, augmented and mixed reality, as well as proposed research concepts in geographic information science and beyond. We finish with an overview of “dashboards”, which are used in visual analytics as well as in various immersive technologies. We believe the chapter covers important aspects of visualizing and interacting with current and future Digital Earth applications.

Keywords

Visualization · Geovisualization · User-centric design · Cognition · Perception · Visual analytics · Maps · Temporal visualization · Immersive technologies · Virtual reality · Augmented reality · Mixed reality · Extended reality

7.1 Introduction

A future, fully functional Digital Earth is essentially what we understand as a (geo)virtual reality environment today: A multisensory simulation of the Earth as-is and how it could be, so we can explore it holistically, with its past, present, and future made available to us in any simulated form we wish (Gore 1998; Grossner et al. 2008). The concept of Digital Earth can be associated with the emergence of the (recently popularized) concept of a ‘digital twin’, conceptualized as a digital replica of a physical entity. Although several researchers have expressed skepticism about the appropriateness and precision of the term ‘digital twin’ in recent publications (Batty 2018; Tomko and Winter 2019), it appears that the broad usage of the term refers to a reasonably rigorous attempt to digitally replicate real-world objects and phenomena with the highest fidelity possible. Such efforts currently exist for objects at microscales, such as wind turbines, engines, and bridges; but they are also envisioned for humans and other living beings. A digital twin for an entire city is more ambitious and requires information on the interoperability and connectivity of every object. A true ‘all-containing’ Digital Earth is still unrealized and is more challenging to construct. However, as Al Gore (1998) noted in his original proposal for a Digital Earth, making sense of the information a Digital Earth contains is even more difficult than its construction. A key capability that supports sensemaking is the ability to visualize geospatial information. There are countless ways to visualize geospatial information. For thousands of years, humankind has used maps to understand the environment and find our way home. Today, there are many visual methods for depicting real, simulated, or fictional geospatial ‘worlds’.

This chapter provides an overview of key aspects of visualizing geospatial information, including the basic definitions and organization of visualization-related knowledge in the context of a future Digital Earth. As understanding related human factors is necessary for any successful implementation of a visualization within the Digital Earth framework, we include a section on cognition, perception, and user-centered approaches to (geo)visualization. Because we also typically pose and answer analytical questions when we visualize information, we provide an overview of visual analytics, paying special attention to visualizing and analyzing temporal phenomena, including movement, because a Digital Earth would clearly be incomplete if it comprised only static snapshots of phenomena. After this examination of broader visualization-related concepts, because we conceptualize Digital Earth as a virtual environment, we pay special attention to how augmented (AR), mixed (MR), and virtual reality (VR) environments can be used to enable a Digital Earth in the section titled “Immersive Technologies—From Augmented to Virtual Reality”. The Digital Earth framework is relevant to many application areas, and one of the foremost uses of the framework is in the domain of urban science. This is unsurprising given that 55 percent of the world’s population now lives in urban areas, a proportion expected to increase to two-thirds by 2050 (United Nations Population Division 2018). Urban environments are complex, and their management requires many decisions whose effects can cause changes in other parts of the urban environment, making it important for decision makers to consider these potential consequences. One way of providing decision makers with an overview of urban environments is through dashboards. Therefore, we feature “dashboards” and discuss the current efforts to understand how they fit within the construct of Digital Earth.
We finish the chapter with a few concluding remarks and future directions.

7.2 Visualizing Geospatial Information: An Overview

Cartography is the process by which geospatial information has been typically visualized (especially in the pre-computer era), and the science and art of cartography remain relevant in the digital era. Cartographic visualizations are (traditionally) designed to facilitate communication between the mapmaker and map users. As a new approach to making sense of geospatial information in the digital era, specifically in the development of digital tools that help map readers interact with this information, the concept of geovisualization emerged (MacEachren 1994; Çöltekin et al. 2017, 2018) and widened our understanding of how maps could help make sense of a Digital Earth when used in an exploratory manner in addition to their role in communication. Thus, geovisualization is conceived as a process rather than a product, although the term is also commonly used to refer to any visual display that features geospatial information (maps, images, 3D models, etc.). In the geovisualization process, the emphasis is on information exploration and sensemaking, where scientists and other experts design and use “visual geospatial displays to explore data, and through that exploration to generate hypotheses, develop problem solutions and construct knowledge” (Kraak 2003a, p. 390) about a geographic location or geographic phenomenon. How these displays (and associated analytical tools) could be designed and used became a focus of scientific research within the International Cartographic Association’s (ICA) Commission on Visualization and Virtual Environments, whose leaders described the term geovisualization as the “theory, methods and tools for visual exploration, analysis, synthesis, and presentation of geospatial data” (MacEachren and Kraak 2001, p. 3). Designing tools to support visualizing the geospatial information contained in a Digital Earth requires thinking about the data, representation of those data, and how users interact with those representations. 
Importantly, it requires the design of visual displays of geospatial information that can combine heterogeneous data from any source at a range of spatiotemporal scales (Nöllenburg 2007). To facilitate the ability to think spatially and infer spatiotemporal knowledge from a visualization, the visualization must also be usable, support users’ tasks and needs, and enable users to interact with the data (Fuhrmann et al. 2005). Visualizations of geospatial data connect people, maps, and processes, “leading to enlightenment, thought, decision making and information satisfaction” (Dykes et al. 2005a, p. 4). Below, we describe three key areas of knowledge that support the design of visualizations with the goal of helping users make sense of the information that a Digital Earth contains. The data that are available for incorporation in a Digital Earth are increasingly heterogeneous and more massive than before. These complex, large datasets include both spatial and aspatial data, all of which must be combined, ‘hybridized’ (i.e., synthesized in meaningful ways), and represented within a visualization environment. Users expect to be able to visualize complex spatiotemporal phenomena to analyze and understand spatiotemporal dynamics and systems. To support them in this, considering user interaction and interfaces is necessary to develop and incorporate intuitive and innovative ways to explore visual displays. This is especially relevant to virtual and augmented reality, to facilitate exploration of data and experiencing spaces ‘without hassle’.

Data

A key goal of geovisualization is “to support and advance the individual and collective knowledge of locations, distribution and interactions in space and time” (Dykes et al. 2005b, p. 702). This remains a challenge due to increases in the diversity and quantity of data, users, and available visualization techniques and technologies (Griffin and Fabrikant 2012). The age of the data deluge (Bell et al. 2009) resulted in the generation of large quantities of spatial data (vector databases, maps, imagery, 3D models, numeric models, point clouds, etc.), as well as aspatial data (texts, stories, web data, photographs, etc.) that can be spatialized (Skupin and Buttenfield 1997). The ‘covisualization’ of those data together, such as in multiple coordinated views (or linked views, see Roberts 2007), is difficult due to their heterogeneity. This heterogeneity can be in the data’s source, scale, content, precision, dimension, and/or temporality. The visual integration of such heterogeneous data requires the careful design of graphical representations to preserve the legibility of the data (Hoarau and Christophe 2017).

7.2.1 Representation

Bertin’s seminal work (1967/1983) provides a conceptual framework, the visual variables, that allows us to consider the graphical representation of geospatial information at a fundamental level (although it is important to note that Bertin’s propositions were not evidence-based; they were based instead on intuition and qualitative reasoning). Originally, Bertin proposed seven variables: position, size, shape, color value, color hue, orientation, and texture. Later work extended Bertin’s framework to include dynamic variables such as movement, duration, frequency, order, rate of change, and synchronization (Carpendale 2003; DiBiase et al. 1992; MacEachren 1995) and variables for 3D displays such as perspective height (Slocum et al. 2008), camera position, and camera orientation (Rautenbach et al. 2015). Visual variables remain relevant as a core concept of visualization research and have generated renewed interest in digital-era research questions, including in fields beyond geovisualization (e.g., Mellado et al. 2017). Notably, the information visualization community has also embraced Bertin’s visual variables (e.g., Spence 2007). Visual complexity is a major challenge in designing representations of geospatial data, and innovative measures and analysis methods have been proposed to address this problem (Fairbairn 2006; Li and Huang 2002; MacEachren 1982; Schnur et al. 2010, 2018; Touya et al. 2016). Digital Earth’s ‘big data’ challenges these efforts, stretching the capacity of existing tools to handle and process such datasets as well as the capacity of visualization users to read, understand, and analyze them (Li et al. 2016). One application area that is particularly afflicted by visual complexity is research on the urban and social dynamics that drive spatiotemporal dynamics in cities (Brasebin et al. 2018; Ruas et al. 2011).
Developing approaches to represent spatiotemporal phenomena has been a long-standing challenge and many options have been investigated over the years (Andrienko and Andrienko 2006). Despite some progress, many questions remain (see the “Visualizing Movement” section). Some potential solutions such as using abstraction and schematization when visualizing urban datasets in Digital Earth can be found in the fields of data and information visualization (Hurter et al. 2018).
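To make the idea of visual variables concrete, the following sketch (our own illustrative code, not drawn from any of the works cited above; the pixel and angle ranges are arbitrary assumptions) maps a normalized data value onto three of Bertin’s retinal variables. Visualization toolkits implement such mappings as “scales”.

```python
# Hypothetical encoding functions for three of Bertin's visual variables.
# Parameter ranges (4-40 px, 0-255 gray, 0-90 degrees) are illustrative only.

def encode_size(v, min_px=4, max_px=40):
    """Size: map a value in [0, 1] to a symbol diameter in pixels."""
    return min_px + v * (max_px - min_px)

def encode_value(v):
    """Color value (lightness): darker means higher value, as 0-255 gray."""
    return round(255 * (1 - v))

def encode_orientation(v, max_deg=90):
    """Orientation: rotate a line symbol between 0 and max_deg degrees."""
    return v * max_deg

# One data point, redundantly encoded with three variables:
v = 0.5
symbol = {
    "size_px": encode_size(v),        # 22.0
    "gray": encode_value(v),
    "angle_deg": encode_orientation(v),
}
```

Redundant encoding of one attribute with several variables, as in the dictionary above, is one common design tactic; which variables suit which data types (nominal, ordinal, quantitative) is exactly the kind of question Bertin’s framework addresses.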

Another key aspect of visual representation design for geospatial data in Digital Earth applications involves how to deal with uncertainty. Uncertainty, such as that related to data of past or present states of a location or models of potential future states, remains difficult to represent in visual displays, and this is a major challenge for geovisualization designers. Which visual variables might aid in representing uncertainty? This question has been explored and tested to some degree (e.g., MacEachren et al. 2012; Slocum et al. 2003; Viard et al. 2011), although the majority of research has focused on developing new visualization methods rather than testing their efficacy (Kinkeldey et al. 2014). There are still no commonly accepted strategies for visualizing uncertainty that are widely applied. MacEachren (2015) suggests that this is because data uncertainty is only one source of uncertainty that affects reasoning and decision making and argues that taking a visual analytics approach (see the “Geovisual Analytics” section) might be more productive than a communication approach. Hullman (2016) notes the difficulty of evaluating the role of uncertainty in decision making as a major barrier to developing empirically validated techniques to represent uncertainty.

7.2.2 User Interaction and Interfaces

Since geovisualization environments are expected to provide tools and interaction modalities that support data exploration, user interaction and interface design are important topics for geovisualization. The visual display is an interface for the information, so users need effective ways to interact with geovisualization environments. Interaction modalities in geovisualization environments are ideally optimized or customizable for the amount of data, display modes, complexity of spaces or phenomena, and diversity of users (e.g., Hoarau and Christophe 2017). Interaction tools and modalities are a core interest in human-computer interaction (e.g., Çöltekin et al. 2017) and, in connection with visualization, they are often investigated with concepts explored in the information visualization domain (Hurter 2015; van Wijk and Nuij 2003), among others. Interaction and how it is designed are especially relevant for virtual and augmented reality approaches to visualization (see the “Immersive Technologies—From Augmented to Virtual Reality” section). Some form of interaction is required for most modern 2D displays, and it has a very important role in supporting exploration tasks, but seamless interaction is a necessity in a virtual or augmented world. Without it, the immersiveness of the visualization—a critical aspect of both VR and AR—is negatively affected. One approach that is notably at the intersection of representation design and user interaction design is a set of methods that are (interactively) nonuniform or space-variant. An example is displays in which the resolution or level of detail varies across the display in real time according to a predefined criterion. The best known among these nonuniform display types are the focus + context and fisheye displays (dating back to the early 1990s, e.g., see Robertson and Mackinlay 1993). 
Both the focus + context and fisheye displays combine an overview at the periphery with detail at the center, varying the level of detail and/or scale across a single display. A variation on the focus + context display has been named “context-adaptive lenses” (Pindat et al. 2012). Conceptually related to these approaches, in gaze-contingent displays (GCDs), the level of detail (and other selected visual variables) is adapted across the display space based on where the user is looking. This approach draws on perceptual models of the visual field, mimicking the human visual system. GCDs were proposed as early as the 1970s (see, e.g., Just and Carpenter 1976) and have continued to attract research interest over time as the technology developed (e.g., Bektas et al. 2015; Duchowski and Çöltekin 2007; Duchowski and McCormick 1995). For more discussion of “interactive lenses” in visualization, see the recent review by Tominski et al. (2017). Various other space-variant visualization approaches have been proposed in which, rather than varying the scale or level of detail, the levels of realism or generalization are varied across the display to support focus + context interactions with the data. These approaches aim to smoothly navigate between data and its representation at one scale (e.g., Hoarau and Christophe 2017), between different levels of generalization across scales (e.g., Dumont et al. 2018), or between different rendering styles (Boér et al. 2013; Semmo and Döllner 2014; Semmo et al. 2012). Mixed levels of realism have been proposed for regular maps used for data exploration purposes (Jenny et al. 2012) as well as for VR. In VR, egocentric-view-VR representations with selective photorealism (a mix of abstract and photorealistic representations) have been tested in the context of route learning, memory, and aging and have been shown to benefit users (Lokka et al. 2018; Lokka and Çöltekin 2019).
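The gaze-contingent idea discussed above can be sketched as a simple eccentricity-based level-of-detail function. This is our own minimal illustration under assumed parameters (a 100 px full-detail “foveal” radius and a linear falloff), not the model of any particular system cited here; real GCDs use perceptually derived acuity curves.

```python
import math

def lod(px, py, gaze_x, gaze_y, full_detail_radius=100.0, falloff=300.0):
    """Return a level-of-detail weight in [0, 1] for a pixel at (px, py),
    given the current gaze position. Detail is highest at the gaze point
    and decreases toward the periphery, loosely mimicking visual acuity."""
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= full_detail_radius:
        return 1.0  # foveal region: render at full detail
    # linear falloff toward the periphery, clamped at zero
    return max(0.0, 1.0 - (d - full_detail_radius) / falloff)
```

A renderer would query such a function per tile or per object each time the gaze position updates, loading high-resolution data only where `lod` is near 1.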

Decisions on how to combine data to design representations and user interactions should be informed by our understanding of how visualization users process visual information and combine it with their existing knowledge about the location or phenomenon to make sense of what they see. Thus, building effective visualizations of geospatial information for a Digital Earth requires an understanding of its users, their capabilities and their constraints, which we describe in the next section.

7.3 Understanding Users: Cognition, Perception, and User-Centered Design Approaches for Visualization

A primary way that humans make sense of the world—the real world, an “augmented world” with additional information overlaid, or a virtual world (such as a simulation)—is by making sense of what we see. Because vision is so important to human sense-making, visualizations are major facilitators of that process and provide important support for cognition. When effectively designed, visualizations enable us to externalize some of the cognitive burden to something we can (re)utilize through our visual perception (Hegarty 2011; Scaife and Rogers 1996). However, our ability to see something—in the sense of understanding it—is bounded by our perceptual and cognitive limits. Thus, any visualizations we design to help work with and understand geospatial information must be developed with the end user in mind, taking a user-centered design (UCD) approach (Gabbard et al. 1999; Huang et al. 2012; Jerald 2015; Lloyd and Dykes 2011; Robinson et al. 2005). A UCD approach is useful for understanding perceptual and cognitive limits and for adapting the displays to these limits. It also helps to evaluate the strengths of new methods of interacting with visualizations (Roth et al. 2017). For example, a user-centered approach has been used to demonstrate that an embodied data axis aids in making sense of multivariate data (Cordeil et al. 2017). Similarly, UCD was useful in determining which simulated city environments lead to the greatest sense of immersion to support participatory design processes for smart cities (Dupont et al. 2016), assuming that immersion has a positive effect in this context.

7.3.1 Making Visualizations Work for Digital Earth Users

7.3.1.1 Managing Information

As briefly noted earlier, a key benefit—and a key challenge—for visualization in the Digital Earth era is related to the amount of data that is at our fingertips (Çöltekin and Keith 2011). With so much available data, how can we make sense of it all? What we need is the right information in the right place at the right time for the decisions we are trying to make or the activities we are trying to support. Thus, understanding the context in which information and visualizations of information are going to be used (Griffin et al. 2017)—what data, by whom, for what purpose, on what device—is fundamental to designing appropriate and effective visualizations. For example, ubiquitous sensor networks and continuous imaging of the Earth’s surface allow us to collect real-time or near real-time spatial information on fires and resources available to fight fires, and firefighters would benefit from improved situation awareness (Weichelt et al. 2018). However, which information should we show them, and how should it be shown? Are there environmental factors that affect what information they can perceive and understand from an AR system that visualizes important fire-related attributes (locations of active burns, wind speed and direction) and firefighting parameters (locations of teammates and equipment, locations of members of the public at risk)? How much information is too much to process and use effectively at a potentially chaotic scene?

A great strength of visualization is its ability to abstract: to remove detail and to reveal the essence. In that vein, realism as a display principle has been called “naive realism” because realistic displays sometimes impair user performance but users still prefer them (e.g., Lokka et al. 2018; Smallman and John 2005). The questions of how much abstraction is needed (Boér et al. 2013; Çöltekin et al. 2015) and what level of realism should be employed (Brasebin et al. 2018; Ruas et al. 2011) do not have clear-cut answers. In some cases, we need to follow the “Goldilocks principle” because too much or too little realism is suboptimal. As Lokka and Çöltekin (2019) demonstrated, if there is too much realism, we may miss important details because we cannot hold all the details in our memory, whereas if there is too little, we may find it difficult to learn environments because there are too few ‘anchors’ to which human memory can link new knowledge of the environment. These issues of how to abstract data and how to visualize it effectively for end users are growing in importance in the era of big data and Digital Earth.

7.3.1.2 Individual and Group Differences

Nearly two decades ago, Slocum et al. (2001) identified individual and group differences as a research priority among the many “cognitive and usability issues in geovisualization” (as the paper was also titled). There was evidence prior to their 2001 paper, and there has been additional evidence since, that humans process information in a range of ways. Such differences are often based on expertise or experience (e.g., Griffin 2004; Çöltekin et al. 2010; Ooms et al. 2015) or spatial abilities (e.g., Liben and Downs 1993; Hegarty and Waller 2005), and are sometimes based on age (Liben and Downs 1993; Lokka et al. 2018), gender (Newcombe et al. 1983), culture (Perkins 2008), confidence and attitudes (e.g., Biland and Çöltekin 2017), or anxiety (Thoresen et al. 2016), among other factors. For brevity, we do not expand on the root causes of these differences, as this would require a careful treatment of the “nature vs. nurture” debate. We know that many of the shortcomings people experience can be remedied to different degrees based on interventions and/or training. For example, spatial abilities, as measured in standardized tests, can be enhanced by training (Uttal et al. 2013), and expertise/experience and education affect the ways that people process information (usually in improved ways, but these forms of knowledge can also introduce biases). Many of the above factors could be considered cognitive factors and might be correlated in several ways. A key principle arising from the awareness that individuals process information differently and that their capacities to do so can vary (whatever the reason) is that the “designer is not the user” (Richter et al. 2015, p. 4). A student of geovisualization (we include experts in this definition) is a self-selected individual who is likely interested in visual information.
With the addition of education to this interest, it is very likely that a design that a geovisualization expert finds easy-to-use (or “user friendly”, a term that is used liberally by many in the technology sector) will not be easy-to-use or user friendly for an inexperienced user or a younger/older user.

7.3.1.3 Accessibility

Related to the individual and group differences as described above, another key consideration is populations with special needs. As in any information display, visualization and interaction in a geovisualization software environment should ideally be designed with accessibility in mind. For example, visually impaired people can benefit from multimedia augmentation on maps and other types of visuospatial displays (Brock et al. 2015; Albouys-Perrois et al. 2018). Another accessibility issue linked to (partial) visual impairment that is widely studied in geovisualization is color vision impairment. This is because color is (very) often used to encode important information and color deficiency is relatively common, with up to eight percent of the world’s population experiencing some degree of impairment (e.g., Brychtová and Çöltekin 2017a). Because it is one of the more dominant visual variables (Garlandini and Fabrikant 2009), cartography and geovisualization research has contributed to color research for many decades (Brewer 1994; Brychtová and Çöltekin 2015; Christophe 2011; Harrower and Brewer 2003). Two of the most popular color-related applications in use by software designers were developed by cartography/geovisualization researchers: ColorBrewer (Harrower and Brewer 2003) for designing/selecting color palettes and ColorOracle (Jenny and Kelso 2007) for simulating color blindness. Color is a complex and multifaceted phenomenon even for those who are not affected by color vision impairment. For example, there are perceptual thresholds for color discrimination that affect everyone (e.g., Brychtová and Çöltekin 2015, 2017b), and how colors are used and organized contributes to the complexity of maps (e.g., Çöltekin et al. 2016a, b). Color-related research in geographic information science also includes examination of the efficacy of color palettes to represent geophysical phenomena (Spekat and Kreienkamp 2007; Thyng et al. 2016) or natural color maps (Patterson and Kelso 2004). We include color in the above discussion because it is one of the strongest visual variables. However, color is not the only visual variable of interest to geovisualization researchers. Many other visual variables have been examined and assessed in user studies. For example, the effects of size (Garlandini and Fabrikant 2009), position, line thickness, directionality, color coding (Monmonier 2018; Brügger et al. 2017), shading, and texture (Biland and Çöltekin 2017; Çöltekin and Biland 2018) on map reading efficiency have been examined.
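One computational handle on the color discrimination thresholds mentioned above is the CIE76 color difference (Delta E) in CIELAB space. The sketch below is our own illustration, not code from the studies cited; empirical thresholds vary with viewing context and map symbol size, so the threshold value used here is an assumption for illustration only.

```python
import math

def _srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIELAB (D65 white point)."""
    def lin(c):  # sRGB transfer function -> linear light
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):  # CIELAB nonlinearity
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(rgb1, rgb2):
    """CIE76 Delta E: Euclidean distance between two colors in CIELAB."""
    return math.dist(_srgb_to_lab(rgb1), _srgb_to_lab(rgb2))

def distinguishable(palette, threshold=10.0):
    """True if every pair of palette colors differs by at least `threshold`.
    The threshold is an illustrative assumption, not an empirical constant."""
    return all(delta_e(c1, c2) >= threshold
               for i, c1 in enumerate(palette)
               for c2 in palette[i + 1:])
```

A map designer could run such a check over a candidate class scheme before publication; tools like ColorBrewer effectively bake this kind of pairwise-discriminability reasoning into their curated palettes.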

It is not possible to provide an in-depth review of all the user studies in the geovisualization domain within the scope of this chapter. However, it is worth noting that if a design maximizes accessibility, the users benefit and the (consequently) improved usability of visuospatial displays enables other professionally diverse groups to access and create their own visualizations: for example, city planners, meteorologists (e.g., Helbig et al. 2014) and ecoinformatics experts (e.g., Pettit et al. 2010), all of which are support systems of a ‘full’ future Digital Earth.

7.4 Geovisual Analytics

The science of analytical reasoning with spatial information using interactive visual interfaces is referred to as geovisual analytics (Andrienko et al. 2007; Robinson 2017). This area of GIScience emerged alongside the development of visual analytics, which grew out of the computer science and information visualization communities (Thomas and Cook 2005). A key distinction of geovisual analytics from its predecessor field of geovisualization is its focus on support for analytical reasoning and the application of computational methods to discover interesting patterns from massive spatial datasets. A primary aim of geovisualization is to support data exploration. Geovisual analytics aims to go beyond data exploration to support complex reasoning processes and pursues this aim by coupling computational methods with interactive visualization techniques. In addition to the development of new technical approaches and analytical methods, the science of geovisual analytics also includes research aimed at understanding how people reason with, synthesize, and interact with geographic information to inform the design of future systems. Progress in this field has been demonstrated on each of these fronts, and future work is needed to address the new opportunities and challenges presented by the big data era and to meet the vision proposed for Digital Earth.

7.4.1 Progress in Geovisual Analytics

Early progress in geovisual analytics included work to define the key research challenges for the field. Andrienko et al. (2007) called for decision making support using space-time data, computational pattern analysis, and interactive visualizations. This work embodied a shift from the simpler goal of supporting data exploration in geovisualization toward new approaches in geovisual analytics that could influence or direct decision making in complex problem domains. Whereas the goal in geovisualization may have been to prompt the development of new hypotheses, the goal in geovisual analytics has become to prompt decisions and actions. To accomplish this goal, GIScience researchers began to leverage knowledge from intelligence analysis and related domains in which reasoning with uncertain information is required to make decisions (Heuer 1999; Pirolli and Card 2005). Simultaneously, there were efforts to modify and create new computational methods to identify patterns in large, complex data sources. These methods were coupled to visual interfaces to support interactive engagement with users. For example, Chen et al. (2008) combined the SaTScan space-time cluster detection method with an interactive map interface to help epidemiologists understand the sensitivity of the SaTScan approach to model parameter changes and make better decisions about when to act on clusters that have been detected. Geovisual analytics have been applied in a wide range of domain contexts, usually targeting data sources and problem areas that are difficult to approach without leveraging a combination of computational, visual, and interactive techniques. Domains of interest have included social media analytics (Chae et al. 2012; Kisilevich et al. 2010), crisis management (MacEachren et al. 2011; Tomaszewski and MacEachren 2012), and movement data analysis (Andrienko et al. 2011; Demšar and Virrantaus 2010). 
The following section on “Visualizing Movement” includes a deeper treatment of the approaches to (and challenges of) using visual analytics for dynamic phenomena.

A concurrent thread of geovisual analytics research has focused on the design and evaluation of geovisual analytics tools. In addition to the development of new computational and visual techniques, progress must also be made in understanding how geovisual analytics systems aid (or hinder) the analytical reasoning process in real-world decision making contexts (Çöltekin et al. 2015). Approaches to evaluating geovisual analytics include perceptual studies (Çöltekin et al. 2010), usability research (Kveladze et al. 2015), and in-depth case study evaluations of expert use (Lloyd and Dykes 2011). Additionally, new geovisual analytics approaches have been developed to support such evaluations (Andrienko et al. 2012; Demšar and Çöltekin 2017), as methods such as eye tracking can produce very large space-time datasets that require combined computational and interactive visual analysis to interpret.

7.4.2 Big Data, Digital Earth, and Geovisual Analytics

The next frontier for geovisual analytics is to address the challenges posed by the rise of big spatial data. Big data are often characterized by a set of so-called V’s, corresponding to the challenges associated with volume, velocity, variety, and veracity, among others (Gandomi and Haider 2015; Laney 2001). Broadly, geovisual analytics approaches to handling big spatial data need to address problems associated with analysis, representation, and interaction (Robinson et al. 2017), similar to the challenges faced by geovisualization designers. New computational methods are needed to support real-time analysis of big spatial data sources. Representations must be developed to render the components and characteristics of big spatial data through visual interfaces (Çöltekin et al. 2017). We also need to know more about how to design interactive tools that make sense to end users to manipulate and learn from big spatial data (Griffin et al. 2017; Roth et al. 2017).

The core elements behind the vision for Digital Earth assume that big spatial data will exist for every corner of our planet, in ways that support interconnected problem solving (Goodchild et al. 2012). Even if this vision is achieved (challenging as that may seem), supporting the analytical goals of Digital Earth will require the development of new geovisual analytics tools and techniques. Major issues facing humanity today regarding sustainable global development and mitigating the impacts of climate change necessarily involve the fusion of many different spatiotemporal data sources, the integration of predictive models and pattern recognition techniques, and the translation of as much complexity as is possible into visual, interactive interfaces to support sensemaking and communication.

7.5 Visualizing Movement

One of the most complex design issues in visualization is how to deal with dynamic phenomena. Movement is an inherent part of most natural and human processes, including weather, geomorphological processes, human and animal mobility, transport, and trade. We may also be interested in the movement of more abstract phenomena such as ideas or language. Although movement is a complex spatiotemporal phenomenon, it is often depicted on static maps, emphasizing geographical aspects of movement. In the context of visualization, “Digital Earth” implies use of a globe metaphor, where movement data is displayed on a globe that can be spun and zoomed (see Fig. 7.1). In this section, we review map-based representations of movement that can be used within a 3D globe-based immersive environment. Visual representations that do not emphasize geographical location (e.g., origin-destination matrices and various timeline-based representations) are less amenable to being used within a global immersive environment, though they may have a supporting role as multiple coordinated views.
Fig. 7.1

Approaches to visualizing flows in a 3D immersive environment that were investigated by Yang et al. (2019).

Figure is modified based on Yang et al. (2019) with permission from the original authors

Note that most techniques for visualizing movement on the Earth’s surface were developed as 2D representations. However, many of these representations can be placed on the surface of a 3D globe, and we can identify where the 3D environment may offer benefits and disadvantages. Notably, one disadvantage is that 3D environments often result in occlusion, and this occlusion is only partially addressed through interaction (Borkin et al. 2011; Dall’Acqua et al. 2013). Below, we begin with visual depictions of individual journeys and progressively review aggregated movement data representations, which are more scalable and can synthesize and reveal general movement patterns (which individual trajectories cannot).

7.5.1 Trajectory Maps: The Individual Journey

Individual journeys can be expressed as trajectories that represent the geometrical paths (routes) of objects through time as a set of timestamped positions. For example, if we were interested in migrating birds, GPS loggers attached to individual birds could produce trajectories (see Fig. 7.2 for an example). These may help us understand the route taken, stop-overs, timing, and interactions between individuals. The detail with which the geometrical path is captured depends on the temporal resolution of the sampled locations. Trajectories can also be reconstructed by stringing together locations from other sensors, for example, from multiple cameras with automatic license plate recognition or from a set of georeferenced tweets from a single user. One aspect of trajectories that is often overlooked is how they are segmented, that is, where they start and stop over the course of the journey. For tracked animals, algorithms that segment trajectories based on position or on time intervals during which there is little movement are common (e.g., Buchin et al. 2011). In the bird tracking example (Fig. 7.2), the nest location was used to segment trajectories into foraging trips.
Fig. 7.2

A subset (shown in green) of tracked bird trajectories, filtered on the spatial region on the map indicated by the red circle linked to the mouse pointer. These trajectories are identified in green on the timeline below (time vs altitude), indicating when the journeys occurred, with five of the journeys shown at the top left (time vs distance, with hourly isochrones).

Figure is modified, based on Slingsby and van Loon (2016) with permission from the original authors
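The stop-based segmentation described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch (not taken from any cited work): fixes are assumed to be (timestamp-in-seconds, x, y) tuples, and a “stop” is a period of at least `min_stop` seconds during which the object stays within `min_move` distance units of where the candidate stop began. All names and thresholds are hypothetical.

```python
from math import hypot

def segment_trajectory(fixes, min_move=5.0, min_stop=600):
    """Split a timestamped trajectory into trips at 'stops': periods of at
    least min_stop seconds during which the object moves less than
    min_move distance units from where the candidate stop began."""
    trips, current = [], []
    anchor = None        # (t, x, y) where the current candidate stop began
    stopped = False      # are we inside an already-detected stop?
    for t, x, y in fixes:
        if anchor is not None and hypot(x - anchor[1], y - anchor[2]) < min_move:
            if not stopped and t - anchor[0] >= min_stop:
                stopped = True
                if current:
                    trips.append(current)  # close the trip at stop onset
                current = []
        else:
            # the object has moved: restart the candidate stop here
            anchor = (t, x, y)
            stopped = False
        current.append((t, x, y))
    if current:
        trips.append(current)
    return trips
```

In the nest example above, one would instead test proximity to a fixed anchor (the nest) rather than to the most recent resting position.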

Trajectory maps depict individual movement by showing the geometrical traces of individual journeys on a map. Where there are few trajectories, trajectory maps can clearly illustrate specific journeys and facilitate visual comparison within an individual’s journeys or between journeys undertaken by different individuals. An excellent book by Cheshire and Uberti (2017) uses a whole range of static visualization methods, including trajectory maps, to illustrate the movements of various types of animals. As well as presenting movement traces, trajectory maps can be a useful precursor to more substantial and directed analyses (Borkin et al. 2011; Dall’Acqua et al. 2013).

Map-based representations emphasize the geometry of the path, and it can be difficult to use maps to determine temporal aspects of the trajectory, including direction and speed. One option is to use animation, which only displays the parts of trajectories that are within a moving temporal window. Although animation may be effective when presented as part of an existing narrative, it can be difficult to detect trends as it is hard to remember what came before (Robertson et al. 2008). Various user studies have investigated animation and its efficiency and effectiveness for spatiotemporal tasks, with mixed results. The current understanding is that animations can introduce too much cognitive load if the task requires comparisons; thus, animations must be used cautiously (Robertson et al. 2008; Russo et al. 2013; Tversky et al. 2002). So-called small multiples (a series of snapshots, see Tufte 1983) can be better than animations for some tasks. Another option that is similar to small multiples, in the sense that all of the presented information is visible at all times or easily available on demand, is the use of multiple coordinated views (briefly introduced above). With multiple coordinated views, a temporal representation of the movement is interactively linked to the map. When the mouse is “brushed” over parts of the trajectory on the map, corresponding parts on the timeline are identified and vice versa (as shown in Fig. 7.2). Brushing along the timeline has a similar effect as animation but is more flexible. Although trajectory maps are good for representing relevant individual instances of journeys, they do not scale well to situations where there are more than a few trajectories. Overplotting, with multiple crossing lines, often obscures patterns. Making trajectories semitransparent can help to some degree, as it emphasizes common sections of routes by de-emphasizing those that are less commonly used.
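The moving temporal window that underlies both animation and timeline brushing is, computationally, just a filter on timestamps. A minimal sketch, assuming fixes are (t, x, y) tuples (names are illustrative):

```python
def temporal_window(trajectory, t_start, t_end):
    """Return only the fixes whose timestamps fall in [t_start, t_end).
    Sliding this window over time yields animation frames; setting it
    from a timeline selection yields brushing."""
    return [(t, x, y) for t, x, y in trajectory if t_start <= t < t_end]
```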
Modifying the color hue—and/or other visual variables or symbols—can help identify individuals or categories of journeys (which might include the purpose of the journey or mode of transport). Hue typically does not facilitate distinguishing more than approximately ten individuals or categories, but labels and tooltips can provide such context. Sequential color schemes can indicate continuous numerical data along trajectories such as speed or height above the ground. Arrows or tapered lines can help show the direction of movement. To simplify displays, one can also attempt to simplify the underlying data rather than tweak the display design. Common approaches include filtering trajectories by various criteria, considering only origin-destination pairs, or spatiotemporal aggregation (we elaborate on these approaches below). Trajectory maps can also be shown in a 3D environment. Space-time cubes (Hägerstrand 1970) are a form of 3D trajectory map (Andrienko et al. 2003; Kapler and Wright 2004; Kraak 2003b) where the x- and y-axes represent geographical coordinates and the z-axis represents the progression of time (see Fig. 7.3 for an example). As with trajectory maps, space-time cubes can indicate spatiotemporal aspects of small numbers of journeys. However, when more trajectories are added, the occluding effects can be even more severe than in 2D. Interactive rotation and zooming of the cube, highlighting trajectories, and interactive filtering can address the problematic effects of such occlusion but do not scale well to many trajectories.
Fig. 7.3

A Space-Time Cube, showing a journey in which a person visits a pool, home, work, a restaurant and home.

Figure based on Kraak (2008) with permission from the original author

In 3D representations, the z-axis can also be used for nontemporal data, which may create a conflict. Where trajectories define movement in 3D space, the z-axis can be used to represent a third spatial dimension, that is, it can be used to depict the height above the ground. There are also many opportunities to depict other characteristics of trajectories along the z-axis, as illustrated by the “trajectory wall” (Tominski et al. 2012) shown in Fig. 7.4.
Fig. 7.4

“Trajectory wall” in which multiple (and sometimes time-varying) attributes are displayed vertically along a trajectory, based on Tominski et al. (2012), with permission from the original authors

Because the above approaches do not scale well when there are many trajectories, we must consider simplifying the data and display, such as by filtering the data. Notably, filtering serves two purposes. The first addresses the fact that trajectory maps do not scale well in situations in which there are more than a few trajectories. The second is to identify multiple trajectories or groups of trajectories for comparison. Tobler (1987) suggested subsetting and thresholding to reduce the number of trajectories on a single map. This involves filtering on the basis of characteristics of trajectories, such as using geographical (see Fig. 7.5 below) and temporal windows (see Fig. 7.7) through which trajectories can pass, or filtering on a trajectory’s length, importance, or category. These are now routinely facilitated using interactive methods that support visual exploratory data analysis. Identifying multiple trajectories or groups of trajectories for comparison includes choosing representative trajectories for a set of people, different times of the day, or different days of the week. This identification of trajectories may be achieved manually as part of an exploratory analysis or geovisualization approach and can be assisted by statistical and data mining techniques in a geovisual analytics approach. For example, “K-means” clustering can be used to group trajectories into “clusters” (based on a chosen metric of trajectory similarity) whose representative trajectories can then be compared (Andrienko and Andrienko 2011). Visualization techniques that facilitate such comparisons include switching between displayed trajectories or groups of trajectories, interactive brushing, superpositioning (where trajectories are displayed on the same map), and juxtaposition, where maps of groups of trajectories are displayed side-by-side as small multiples (Tufte 1983).
Fig. 7.5

Hurter et al.’s (2018) interactions in a 3D immersive environment to explore and filter a huge set of trajectories.

Figure based on Hurter et al. (2018), with permission from the original authors
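The k-means grouping of trajectories mentioned above can be sketched as follows. This is an illustrative Python sketch, not the method of any cited work: each trajectory is resampled to a fixed number of points by arc length, so that Euclidean distance between the flattened point vectors can serve as a (crude) similarity metric. All function names and parameters are assumptions.

```python
import random
from math import hypot

def resample(traj, n=16):
    """Resample a polyline (list of (x, y)) to n points spaced evenly by
    arc length, so trajectories of different lengths and sampling rates
    become comparable fixed-size vectors (returned flattened)."""
    s = [0.0]  # cumulative arc length at each vertex
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        s.append(s[-1] + hypot(x1 - x0, y1 - y0))
    total, out, j = s[-1], [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(s) - 2 and s[j + 1] < target:
            j += 1
        seg = s[j + 1] - s[j]
        f = 0.0 if seg == 0 else (target - s[j]) / seg
        (x0, y0), (x1, y1) = traj[j], traj[j + 1]
        out.extend([x0 + f * (x1 - x0), y0 + f * (y1 - y0)])
    return out

def kmeans_trajectories(trajs, k=2, iters=20, seed=0):
    """Group trajectories with k-means on their resampled-point vectors;
    a simple, illustrative similarity metric (many alternatives exist)."""
    vecs = [resample(t) for t in trajs]
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vecs, k)]
    dist2 = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(v, centers[j])) for v in vecs]
        for j in range(k):
            members = [v for v, lab in zip(vecs, labels) if lab == j]
            if members:  # recompute each center as the mean of its members
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return labels
```

A representative trajectory per cluster could then be taken as the member closest to its cluster center.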

In summary, trajectory maps are good for showing detailed examples of journeys but do not scale well to more than a few trajectories. Characteristics of these individual trajectories can be explored through multiple coordinated views with brushing. Trajectories are often displayed in maps in 2D, but 3D space-time cubes are also common. Overplotting many trajectories with semitransparent lines can help indicate parts of routes that are commonly taken, and a selected trajectory can be highlighted using a visual variable if there is a reason to emphasize a particular trajectory. In addition, trajectories can be filtered, grouped, and visually compared. For higher-level pattern identification, it is helpful to perform some aggregation, as discussed in the next section.

7.5.2 Flow Maps: Aggregated Flows Between Places

Flow maps depict movement between locations or regions. Unlike trajectory maps, they typically do not represent the route or path taken. This is suitable for cases in which there are origin-destination pairs; for example, county-county migrations (Fig. 7.6) and public bike hire journeys taken between pairs of bike docking stations (Fig. 7.7).
Fig. 7.6

20,000 county-county US migration vectors (3% random sample) between 2012 and 2016, rendered with transparency and anti-aliasing to show ‘occlusion density’.

Figure based on Wood et al. (2011), redrawn by Jo Wood using data from https://vega.github.io/vega-lite/data/us-10m.json and https://gicentre.github.io/data/usCountyMigration2012_16.csv

Fig. 7.7

As in Fig. 7.6, but clutter is reduced by filtering county-county flows to and from Ohio (orange and purple, respectively), where line thickness is proportional to volume and curved lines allow directions to be distinguished and reduce occlusion.

Produced by Jo Wood using data from https://vega.github.io/vega-lite/data/us-10m.json and https://gicentre.github.io/data/usCountyMigration2012_16.csv

Tobler’s (1987) early flow maps connected locations with straight lines. However, curved lines help reduce the undesirable occluding effects of line crossings. Jenny et al. (2018) provide a comprehensive set of guidelines for designing flow maps. Wood et al. (2011) also used curved lines to distinguish and visually separate flow in either direction, using asymmetry in the curve to indicate direction (Fig. 7.7). Yang et al. (2019) provide specific, evidence-based guidance for displaying flow maps on (3D) digital globes; they recommend taking advantage of the z-axis to design flows with 3D curvature, which helps reduce clutter and makes the maps more readable.
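One simple way to realize direction-encoding curves of the kind Wood et al. (2011) describe is to bow each flow line to a consistent side of its direction of travel, so that the A-to-B and B-to-A curves separate. The quadratic Bézier construction below is an illustrative sketch under that assumption, not Wood et al.'s exact construction; names and the curvature parameter are hypothetical.

```python
def curved_flow(origin, dest, curvature=0.2, n=20):
    """Sample n+1 points along a quadratic Bezier curve from origin to
    dest whose control point is offset to the left of the direction of
    travel, so flows in opposite directions bow to opposite sides."""
    (x0, y0), (x1, y1) = origin, dest
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    # left-hand perpendicular of the travel direction (+90 degree rotation)
    px, py = -(y1 - y0), (x1 - x0)
    cx, cy = mx + curvature * px, my + curvature * py
    pts = []
    for i in range(n + 1):
        t = i / n
        bx = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        by = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        pts.append((bx, by))
    return pts
```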

A characteristic of flow data is that it is usually aggregated, with the number of flows between origin-destination pairs reported. This is facilitated by the fact that there are often a finite number of spatial units (origins and destinations), as is the case for bike docking stations or country-country migration data. This makes them more scalable but, as shown in Fig. 7.6 (Wood et al. 2011), flow maps can have clutter and occlusion issues similar to those observed in trajectory maps. These can be partially addressed by filtering as in trajectory maps, but because flows are usually already aggregated, filtering by geographical area is likely to reduce such clutter more effectively to make patterns visible and interpretable (Andrienko and Andrienko 2011) [see the geographical filtering in the green trajectories shown in Fig. 7.2]. There are other ways to reduce clutter and provide more interpretable visual representations of movements, for example, by employing spatial aggregation or applying edge bundling.

7.5.2.1 Spatial Aggregation of Flows

Spatial aggregation reduces the geographical precision of movement but benefits visualization. In Fig. 7.6, although the US county-county migration data is already aggregated by county pair, further aggregating to state-state migration would produce a more interpretable graphic. However, this additional aggregation comes at the expense of being able to resolve differences within states. In this example, we suggested aggregating the input data by pairs of existing defined regions (counties and states), but the data can also be aggregated into pairs of data-driven irregular tessellations (e.g., Voronoi polygons, Fig. 7.8) or regular tessellations (e.g., grid cells). Flows can also be generated from full trajectory data (see the above section) by aggregating the start and end points to spatial units, provided the trajectories have meaningful start and end points. When performing spatial aggregation, it is typical to disaggregate by temporal unit (e.g., year) and/or by categorical attribute (e.g., gender). This enables comparison of temporal and other attributes, for example, using small multiples as described in the previous section (e.g., Fig. 7.7 could be arranged in small multiples by the hour of the day).
Fig. 7.8

Aggregating flows into data-driven Voronoi polygons. Left: Car journey trajectory data, using transparency to reduce clutter and occlusion. Middle and right: Aggregated flows into data-driven Voronoi polygons of different scales.

Figure based on Andrienko and Andrienko (2011) with permission from the original authors
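Aggregating flows to a regular tessellation, as described above, amounts to snapping each origin and destination to a grid cell and counting the resulting cell pairs. A minimal sketch (the cell size and all names are illustrative):

```python
from collections import Counter

def aggregate_flows(od_pairs, cell=10.0):
    """Aggregate origin-destination point pairs into counts between grid
    cells (a regular tessellation); a cell index is floor(coord / cell)."""
    counts = Counter()
    for (ox, oy), (dx, dy) in od_pairs:
        o_cell = (int(ox // cell), int(oy // cell))
        d_cell = (int(dx // cell), int(dy // cell))
        counts[(o_cell, d_cell)] += 1
    return counts
```

Disaggregation by year or category, as mentioned above, would simply add those attributes to the counting key.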

7.5.2.2 Edge Bundling of Flows

Edge bundling is a class of techniques designed to lay out flows in interpretable ways by ‘bundling’ parts of different flows that run in similar directions (see the example in Fig. 7.9). Bundling techniques are used to reduce occlusion and convey the underlying movement structure (Holten and van Wijk 2009; Fig. 7.10). Jenny et al. (2017) provide an algorithm to facilitate this. For cases with a specific origin or destination of interest, Buchin et al. (2011) suggest an algorithm that aggregates flows into a tree-like representation that clarifies the flow structure (Fig. 7.11).
Fig. 7.9

Examples of origin-destination maps that are subsetted on a single origin and where an aggregated tree layout simplifies the visual complexity of flows to multiple destinations (Buchin et al. 2011).

Figure based on Buchin et al. (2011), with permission from the original authors

Fig. 7.10

US migration graph (9780 aggregated origin-destination pairs), in which (a) simply uses straight lines and the others are bundled using various algorithms (Holten and Van Wijk 2009).

Figure based on Holten and Van Wijk (2009) with permission from the original authors

Fig. 7.11

Internal migration in Ireland. Left: a flow map, where line thickness indicates flow. Middle: spatially-arranged small-multiples of destination maps. Right: OD maps with the same grid-based layouts at both levels of the hierarchy.

Based on Kelly et al. (2013) with permission from the original authors

7.5.3 Origin-Destination (OD) Maps

OD maps (Wood et al. 2011) are another important tool. They aggregate flows into a relatively small number of spatial units, based on existing units (e.g., states) or on units that result from a Voronoi- or grid-based tessellation. OD maps are effectively small multiples of destination maps. Cases with irregular spatial units should be organized in a grid layout that preserves as much of the geographical ‘truth’ as possible. Each grid position typically indicates the origin (e.g., of migrants or another phenomenon), and the small map at that position shows the destinations from that origin (Fig. 7.9). Flow maps aid in visually understanding the structure of movement between places (Jenny et al. 2017). Below, we disregard the connection between the origin and destination and simply consider the density of movement.

7.5.4 In-Flow, Out-Flow and Density of Moving Objects

This section concerns movement for which we do not have the connection between origin and destination. This includes situations in which we only have data on the outflow (but do not know where the flow goes), inflow (but do not know where the flow originates from), or the density of moving objects. This can be expressed as a single value describing the movement for each spatial unit, for example, the out-migration flow from each county. As described above, the spatial units used may be derived from existing units (e.g., states) or Voronoi/grid-based tessellations. These values can be displayed as choropleth maps, in which regions are represented as tessellating polygons on a map and a suitable color scale is used to indicate in- or out-movement or the density of moving objects.
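Deriving the single per-unit values described above from aggregated flow records is a simple reduction. A hypothetical sketch (the (origin, destination, count) record format and all names are assumptions):

```python
from collections import defaultdict

def in_out_flow(flows):
    """Collapse aggregated (origin, destination, count) flow records to one
    out-flow and one in-flow total per spatial unit: the single values a
    choropleth map of movement would encode."""
    out_f, in_f = defaultdict(int), defaultdict(int)
    for origin, dest, count in flows:
        out_f[origin] += count  # flow leaving this unit
        in_f[dest] += count     # flow arriving at this unit
    return dict(out_f), dict(in_f)
```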

When performing spatial aggregation, the data in each spatial unit can be disaggregated by temporal unit or by category. Figure 7.12 provides a visual representation of this, where the density of delivery vehicles is aggregated to 1-km grid squares and the vehicles in each grid square are disaggregated into densities for five vehicle types, the days of the week, and 24 h of the day. Many environmental datasets that describe the movement of water or air masses do not have a meaningful concept of individual journeys. These datasets usually summarize movement as vectors depicting the flow magnitude and direction within grid cells. Visual representations of these movements usually take the form of regular arrays of arrows on maps (Fig. 7.13). Here, vectors represent a summary of ‘movement’ within grid cells. These can be explored using some of the methods described above, including filtering, temporal animation, and small multiples. Doing so may result in multiple vectors per grid cell, which provides an opportunity to symbolize multiple variables as glyphs (Slingsby 2018), for example, for climatic data (Wickham et al. 2012) or a rose diagram at origin or destination locations. In spatial tessellations, the problem of overlapping places is not as common. However, the on-screen size of spatial units must be large enough for the symbolization to be interpreted.
Fig. 7.12

Density of moving vehicles in London, by grid square, day of week, hour of day, and vehicle type, using a logarithmic colour scale.

Figure based on Slingsby et al. (2010) with permission from the original authors

Fig. 7.13

A wind field map, in which arrows indicate wind direction (arrow orientation towards the thin end) and strength (arrow length) for grid squares. It indicates aggregated movement per grid cell.

Based on https://github.com/gicentre/litvis/blob/master/examples/windVectors.md with the original author’s permission and data from http://www.remss.com/measurements/ccmp/
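A per-cell vector summary of the kind shown in the wind field map above can be computed by averaging the u- and v-components of samples falling in each grid cell; arrow orientation and length then follow from the mean vector. An illustrative sketch (the (x, y, u, v) sample format and names are assumptions):

```python
from collections import defaultdict
from math import atan2, degrees, hypot

def cell_vectors(samples, cell=1.0):
    """Summarize point samples of movement (x, y, u, v) as one mean vector
    per grid cell, returning (mean_u, mean_v, magnitude, direction_deg);
    the kind of summary a wind-field map encodes as arrow length and
    orientation."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for x, y, u, v in samples:
        key = (int(x // cell), int(y // cell))
        s = sums[key]
        s[0] += u
        s[1] += v
        s[2] += 1
    out = {}
    for key, (su, sv, n) in sums.items():
        mu, mv = su / n, sv / n
        out[key] = (mu, mv, hypot(mu, mv), degrees(atan2(mv, mu)))
    return out
```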

In summary, movement data exists in different forms and can often be transformed. This section provided an overview of map-based representations for three different levels of precision for movement data. The reviewed approaches can be used with digital globes, or a future Digital Earth with virtual dashboards through which one can integrate analytical operations within an AR or VR system. Hurter et al. (2018) show how interactions in a 3D immersive environment (see the “Immersive Technologies—From Augmented to Virtual Reality” section) can enable the exploration of large numbers of individual 3D trajectories. Next, we review the current state of the art in immersive technologies.

7.6 Immersive Technologies—From Augmented to Virtual Reality

In the virtual and augmented reality (VR and AR) domains, there is almost “visible” excitement, both in academia (off and on for over 30 years) and in the private sector (more recently). A 2016 Goldman Sachs analysis predicted that VR and AR would be an 80 billion dollar industry by 2025 (reported on CNBC: https://www.cnbc.com/2016/01/14/virtual-reality-could-become-an-80b-industry-goldman.html). Arguably, geospatial sciences will not be the same once immersive technologies such as augmented (AR), mixed (MR), and virtual reality (VR) have been incorporated into all areas of everyday life. In this chapter, we use the shorthand xR to refer to all immersive technologies and use the individual acronyms (AR/MR/VR) to refer to specific technologies. A closely related term that has recently been gaining momentum is immersive analytics, described as a blend of visual analytics, augmented reality, and human-computer interaction (Marriott et al. 2018), which draws on knowledge and experience from several fields described in this chapter to develop visualizations of geospatial information that support thinking. We do not elaborate on immersive analytics; see, e.g., Billinghurst et al. (2018) and Yang et al. (2019). Current technologies for xR hold promise for the future, despite being strongly “gadget”-dependent and somewhat cumbersome and ‘involved’ to set up (i.e., they require some technical skill and dedication). Thus, it remains to be seen whether these immersive experiences will become commonplace. We describe and elaborate on these display technologies below. We begin by outlining several concepts that are important for xR technology use.

7.6.1 Essential Concepts for Immersive Technologies

Concepts characterizing immersive technologies and their definitions are sometimes subject to debate. This is mainly because their development involves multiple disciplines. Because there have been parallel developments in different communities, similar concepts might be named using different terms. The technology also evolves quickly, and a newer or improved version of a concept, approach, method, or tool typically gets a new name: to distinguish it from older versions, because technology actors want to “brand” their innovative approach, or because a scientific paradigm shift calls for a new name even though the idea builds on an older concept. As in many other interdisciplinary and fast-evolving scientific disciplines, there is considerable discussion and occasional confusion about terminology. This process of “maturing” terminology is not unique to immersive technologies. One of the first taxonomies that provided an overview of all xR technologies, and perhaps the most influential one, was proposed by Milgram and Kishino (1994), who used the concept of a continuum from reality to virtuality (see Fig. 7.14).
Fig. 7.14

Shown are examples from projects in ChoroPhronesis that demonstrate the reality-virtuality continuum proposed by Milgram and Kishino (1994).

Figure designed by Mark Simpson

Their original definitions are more nuanced than this continuum and are challenging to apply in a fast-developing technology field. Nonetheless, it is useful to revisit some of their main distinctions for a conceptual organization of the terms in xR.

A confusing, yet central, term is immersion (see the “Virtual Reality” subsection below). Currently, the commonsense understanding of immersion is different from its rather narrow focus in the technical VR literature. For example, Slater (2009) distinguishes immersion from presence, with the former indicating a physical characteristic of the medium itself for the different senses involved. Presence is reserved for the psychological state produced in response to an immersive experience. As a simple illustration, Fig. 7.15 shows three experimental setups that were used in a recent study on how different levels of immersion influence the feeling of being present in a remote meeting (Oprean et al. 2018).
Fig. 7.15

Different levels of immersion, with immersiveness increasing from top to bottom. Increased immersion is supported by a combination of an increased field of view and the use of an egocentrically fixed rather than an allocentrically fixed reference frame.

Based on Oprean et al. (2017) with the original author’s permission

In this study, Oprean et al. (2018) compared a standard desktop setting (the lowest level of immersion) with a three-monitor setup (medium level of immersion) and an immersive headset (the Oculus Rift, DK2). One can “order” these technologies along a spectrum of immersiveness (as in Fig. 7.16), which helps in designing experiments to test whether or not feeling physically immersed affects aspects of thinking or collaboration (e.g., the subjective feeling of team membership). Another key concept for immersive technologies, and a research topic in itself, is interaction (also discussed in the “Visualizing Geospatial Information” section). Interaction is important for any form of immersive technology because the classical “keyboard and mouse” approach does not work well (or at all) when the user is standing and/or moving. Interaction, along with immersion, is one of the four “I” terms proposed as the defining elements of VR; the other two are information intensity and intelligence of objects, as proposed by MacEachren et al. (1999a) in the 1990s. We elaborate on the four “I”s and other relevant considerations in the Virtual Reality section because they are discussed most often in the context of VR, although they are also relevant for other forms of xR. In addition to Milgram and Kishino’s (1994) continuum, there are many other ways to organize and compare immersive technologies. For example, a recent take on levels of realism and immersion is shown in Fig. 7.16. This example extends the immersiveness spectrum by considering where visualization designs are located on an additional continuum: abstraction-realism.
Fig. 7.16

Extending the immersiveness spectrum by also considering where specific visualization designs are located on an additional continuum: abstraction-realism.

Figure by Çöltekin et al. (2016a, b), CC-BY-3.0

7.6.2 Augmented Reality

In Milgram and Kishino’s (1994) model, the first step from reality toward virtuality is augmented reality (AR). Augmented reality allows the user to view virtual objects superimposed onto a real-world view (Azuma 1997). Technological advancements have allowed augmented reality to evolve from bulky head-mounted displays (HMDs) in the 1960s to smartphone applications today (some examples are featured below) and specialized (though still experimental) glasses such as Google Glass or Epson Moverio (Arth and Schmalstieg 2011). Although technology has truly advanced since the early (bulky and rather impractical) HMDs, there are still challenges in the adoption of augmented reality for dedicated geospatial applications in everyday life. These challenges are often technical, such as latency and the inaccuracy of sensors when using smartphones, and result in inaccurate registration of features and depth ambiguity (Arth and Schmalstieg 2011; Chi et al. 2013; Gotow et al. 2010). There are also design issues that should be considered and, ideally, user-evaluated when developing and designing a “geospatial” AR application (Arth and Schmalstieg 2011; Cooper 2011; Kounavis et al. 2012; Kourouthanassis et al. 2015; Kurkovsky et al. 2012; Olsson 2012; Tsai et al. 2016; Vert et al. 2014).

Akçayır and Akçayır (2017) and Wu et al. (2013) reviewed the current state of AR in education. They concluded that AR provides a unique learning environment because it combines digital and physical objects, an insight relevant to students and scientists who are learning about geographical systems. An example of AR in education and research is the “augmented reality sandbox” (https://arsandbox.ucdavis.edu), which has been widely used, for example, in an urban/landscape design experiment (Afrooz et al. 2018). A similar application is the “tangible landscape” (https://tangible-landscape.github.io) (Petrasova et al. 2015). Both of these applications superimpose an elevation color map, topographic contour lines, and simulated water on a physical sand model that can be physically (re)shaped by the user. A tourism-related science and education example is the “SwissARena”, which superimposes a 3D model on top of topographic maps of Switzerland (Wüest and Nebiker 2018), enabling smartphone and tablet users to visit museums and other public spaces through an augmented experience. Motivated by a fundamental (rather than an applied) question, Carrera and Bermejo Asensio (2017) tested whether the use of AR improves participants’ (spatial) orientation skills when interpreting landscapes. They found a significant improvement in participants’ orientation skills when using a 3D AR application. However, some pedagogical questions (e.g., how should AR be used to complement the learning objectives; what is the gap between teaching and learning?) and other usability gaps (e.g., initial difficulty of use, unsuitability for large classes, cognitive overload, expensive technology, and inadequate teacher familiarity with the technology) identified by Akçayır and Akçayır (2017) and Wu et al. (2013) regarding the use of AR in teaching need to be addressed.
Given that early research suggests that AR might aid in developing spatial skills, its potential in education (especially in science education) appears to be reasonably high. Furthermore, there appear to be several benefits of using AR in research. For example, it has been suggested that AR is an excellent tool for collaborative work among researchers (Jacquinod et al. 2016). At the time of this writing, there are no common examples of these types of applications in use, but there have been various experimental implementations of AR in research and scientific visualization (e.g., Devaux et al. 2018). Thus, most of the present excitement about AR seems to be based on belief and intuition, which can be correct but may also mislead.
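The superimposition at the heart of the sandbox installations described above can be sketched in a few lines: a depth camera supplies an elevation grid, which is mapped to a hypsometric color ramp and a contour mask that a projector then casts back onto the sand. The sketch below is illustrative only; the color ramp, the simulated “dune” surface, and all function names are our own assumptions, not the published implementations:

```python
import numpy as np

def color_ramp(elevation):
    """Map elevation to a simple hypsometric tint (blue -> green -> brown).
    The ramp itself is an illustrative choice, not the published one."""
    t = (elevation - elevation.min()) / (np.ptp(elevation) + 1e-9)
    r = np.clip(2 * t - 0.5, 0, 1)
    g = np.clip(1.5 - np.abs(2 * t - 1), 0, 1)
    b = np.clip(1 - 2 * t, 0, 1)
    return np.stack([r, g, b], axis=-1)

def contour_mask(elevation, interval=10.0, tol=0.5):
    """Flag cells whose elevation lies within `tol` of a contour level."""
    return np.abs(elevation % interval - interval / 2) > interval / 2 - tol

# Simulated depth-camera frame: a smooth "sand dune" in a 64x64 grid.
y, x = np.mgrid[0:64, 0:64]
elev = 50 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 400)

rgb = color_ramp(elev)         # per-pixel hypsometric color
contours = contour_mask(elev)  # boolean mask of contour-line pixels
rgb[contours] = 0.0            # draw contour lines in black
# `rgb` would then be projected back onto the physical sand surface.
```

In an actual installation, this loop runs continuously, so reshaping the sand immediately changes the projected colors and contours.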

7.6.3 Mixed Reality

As conceptualized in the Milgram and Kishino (1994) model (Fig. 7.15), the term Mixed Reality (MR)—sometimes referred to as Hybrid Reality—applies to everything between the real world and a virtual world. Therefore, the term includes AR, and the issues described for AR above also apply to MR. MR also includes augmented virtuality (AV), which refers to virtual environments that are designed so that physical objects still play a role. Of the two subcategories of MR (AR and AV), AR is more developed at this point in time. Nonetheless, AV is relevant in a number of VR scenarios; for example, when we want haptic feedback, we give users suits or gloves. It is also relevant when we want to interact with the virtual world using any kind of hardware. Using hardware to drive interaction is the current state of the art; that is, although there are an increasing number of gesture tracking methods that map functions onto the body’s natural movements, several of the controls are physical objects, such as remote controls, often referred to as “wands”, or small trackable objects attached to the viewers called “lights”. Any hybrid environment combining physical and virtual objects can be considered a form of MR. We do not expound further on MR in this chapter; the information presented in the AR section above and most of the information in the VR section below is also relevant to MR.

7.7 Virtual Reality

How should we define virtual reality? There is no consensus on the “minimum requirements” of VR, though it is understood that an ideal VR system provides humans with experiences that are indistinguishable from experiences that could be real. Ideally, a VR system would stimulate all senses: a virtual apple you eat should look, smell, and taste real, and when you bite into it, the touch and sounds should be just right. Current VR technologies are not there yet. The sense of vision (and the associated visualizations) has been investigated a great deal and audio research has made convincing progress, but we have a long way to go in simulating smells, tastes, and touch. There are no hard and fast rules for the “minimum requirements” a display must meet to qualify as VR, but there have been various attempts to systematically characterize and distinguish VR from other types of displays (see Fig. 7.16). Among these, Sherman and Craig (2003) list four criteria: a virtual world (graphics), immersion, interactivity, and sensory feedback. They distinguish interaction from sensory feedback in the sense that interaction occurs when there is an intentional user request, whereas sensory feedback is embedded at the system level and is fed to the user based on tracking of the user’s body. In the cartographic literature, a similar categorization was proposed even earlier by MacEachren et al. (1999b), who describe the Four ‘I’s, adding intelligence of objects to Heim’s (1998) original three ‘I’s: immersion, interactivity, and information intensity. The Four ‘I’s and Sherman and Craig’s criteria clearly overlap in immersion and interactivity, and links can be drawn between a “virtual world” and “information intensity” and between “sensory feedback” and “intelligence of objects”. Notably, some authors distinguish between virtual reality and virtual environments: the term virtual reality does not strictly refer to mimicking reality but rather to an experience that feels real to the user.
Nonetheless, because the word reality can invoke such an impression, the term virtual environment emerged. The term also acknowledges that one can show fictional (or planned) environments in such a visualization environment; thus, “environment” more effectively encapsulates the range of things one can do in it. Below, we give a brief history of VR in domains directly related to Digital Earth and elaborate on what was once described as a “virtual geographic environment” (VGE).

7.7.1 Virtual Geographic Environments

An extension of earlier conceptualizations of ‘virtual geography’ (e.g., Batty 1997; MacEachren et al. 1999a), the term VGE was formally proposed at the beginning of the 21st century (attributed to Lin and Gong 2001), around the same time as the seminal book by Fisher and Unwin (2001). Since its beginnings, the VGE concept and its accompanying tools have evolved significantly. A modern description of a VGE is a digital geographic environment “generated by computers and related technologies that users can use to experience and recognize complex geographic systems and further conduct comprehensive geographic analyses, through equipped functions, including multichannel human-computer interactions (HCIs), distributed geographic modeling and simulations, and network geo-collaborations” (Chen and Lin 2018, p. 329). Over the past two decades, VGEs have attracted considerable attention in the geographic information science research community (e.g., Goodchild 2009; Huang et al. 2018; Jia et al. 2015; Konecny 2011; Liang et al. 2015; Mekni 2010; Priestnall et al. 2012; Rink et al. 2018; Shen et al. 2018; Torrens 2015; Zhang et al. 2018; Zheng et al. 2017). Much like the “digital twin” idea, and well aligned with the Digital Earth concept, VGEs often aim to mirror real-world geographic environments in virtual ones. Such a mirrored virtual geographic environment also goes beyond reality, as it ideally enables its user to visually perceive invisible or difficult-to-see phenomena in the real world and explore them inside the virtual world (e.g., looking at forests at different scales, examining historical time periods, seeing under the ocean’s surface). Because it can incorporate advanced analytic capabilities, a VGE can be superior to the real world for analysts. In an ideal VGE, one can view, explore, experience, and analyze complex geographic phenomena.
VGEs are not ‘just’ 3D GIS environments, although there are strong similarities between VGEs and immersive analytics approaches. A VGE can embed all the tools of a GIS, but a key point is that VGEs are meant to provide realistic experiences, as well as simulated ones that are difficult to distinguish from real-world experiences. A VGE would not be ideal if only analytics are needed; 2D plans combined with plots may better serve the analyst’s goals. The combination of a traditional GIS and the power of immersive visualization environments offers novel ways to combine human cognitive abilities with what machines have to offer (Chen and Lin 2018; Lin et al. 2013a).

7.7.2 Foundational Structures of VGEs

Lin et al. (2013b) designed a conceptual framework that includes four VGE subenvironments: data, modeling and simulation, interaction, and collaborative spaces. They posit that a geographic database and a geographic model are core necessities for VGEs to support visualization, simulation, and collaboration. Below, we briefly elaborate on the four VGE subenvironments (Lin et al. 2013b).

7.7.2.1 Data Space

The “data space” is conceptualized as the first step in the pipeline of creating a VGE: it is where data are organized, manipulated, and visualized to prepare the digital infrastructure necessary for a VGE. One can also design this environment so that users can “walk” in their data and examine it for patterns and anomalies (as in immersive analytics). The data are ideally comprehensive (i.e., “information intensity” is desirable): semantic, locational, geometric, and attribute information, as well as features’ spatiotemporal/qualitative relationships and their evolution processes, are considered and organized to form virtual geographic scenarios with a range of visualization possibilities (e.g., standard VR displays, holograms, or other xR modes), thus supporting the construction of VGEs (Lü et al. 2019).
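As an illustration of the kind of organization the data space calls for, the sketch below bundles semantic, locational, geometric, attribute, temporal, and relational information into a single feature record. It is a minimal, hypothetical structure; the class name and all fields are our own assumptions for illustration, not a published VGE schema:

```python
from dataclasses import dataclass, field

@dataclass
class VGEFeature:
    """Hypothetical record bundling the information types a VGE data
    space organizes; not a published schema."""
    feature_id: str
    semantics: str                                  # e.g., land-cover class
    location: tuple                                 # (lon, lat, elevation)
    geometry: list = field(default_factory=list)    # boundary vertices
    attributes: dict = field(default_factory=dict)  # thematic attributes
    valid_time: tuple = (None, None)                # (start, end) of validity
    relations: dict = field(default_factory=dict)   # e.g., {"adjacent_to": [...]}

# A feature describing a forest patch, including its temporal validity.
forest = VGEFeature(
    feature_id="f-001",
    semantics="deciduous forest",
    location=(8.55, 47.37, 420.0),
    attributes={"canopy_cover": 0.8},
    valid_time=("1990", "2020"),
)
```

Keeping semantics, geometry, attributes, time, and relations together in one record is what lets a scene be rendered in different xR modes from the same underlying data.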

7.7.2.2 Modeling and Simulation Space

Models and simulations, as abstractions and expressions of geographical phenomena and processes, are important means of modern geographic research (Lin et al. 2015). With the rapid development of networks, cloud/edge computing, and other modern technologies, modeling and simulation capabilities allow for a large range of exploration and experimentation (e.g., Wen et al. 2013, 2017; Yue et al. 2016). VGEs can integrate such technologies, and Chen et al. (2015) and Chen and Lin (2018) propose that doing so would provide new modes of geographic problem solving and exploration, and could help users understand the Digital Earth.

7.7.2.3 Interaction Space

In general, interaction is what shifts users from being passive ‘consumers’ of information to active producers of new information (see the “Geovisualization” section earlier in this chapter). In VGEs, interaction requires a different way of thinking than for desktop setups because the aspiration is to create experiences comparable to those in the real world (i.e., mouse-and-keyboard interactions do not work well in VGEs). Thus, there have been considerable efforts to track users’ hands, heads, limbs, and eyes to model natural interaction. Interaction tools play an important role in information transmission between a VGE and its users (Batty et al. 2017; Voinov et al. 2018).

7.7.2.4 Collaboration Space

In addition to the interaction between a human and a machine, it is important to consider interactions between humans, ideally as they occur in the real world (or even improving upon real-world collaboration). At present, there is an increasing demand for collaborative work, especially when solving complex problems. Complex geographic problem solving may require participants from different domains, and collaboration-support tools such as VGEs might help them communicate with each other. There are many examples of collaborative research based on VGEs (e.g., Chen et al. 2012; Li et al. 2015; Xu et al. 2011; Zhu et al. 2015).

If the four subenvironments are well designed, VGEs could become effective scientific tools and advance geographic research: simulations in a VGE could be systematically and comprehensively explored to deepen scientists’ understanding of complex systems such as human-environment interactions. Virtual scenarios corresponding to real-world scenarios with unified spatiotemporal frameworks can be employed to support the integration of human and environmental resources. With Digital Earth infrastructure and modern technological developments, geographical problems at multiple scales can be solved and related virtual scenarios can be developed for data mining and visual analysis (e.g., Lin et al. 2013b; Fig. 7.17). Importantly, VGEs can support collaborative exploration beyond reality: working with virtual scenarios, users can communicate and conduct collaborative research free from the constraints of physical space (and, in some cases, time).
Fig. 7.17

A VGE example built for air pollution analysis (Lin et al. 2013a).

Figure by Lin et al. (2013a). CC BY-NC-SA 3.0

This chapter has so far focused on theoretical constructs and examples of geographical visualization that can be used to represent and provide insights into our Earth system. However, it is also important to consider how such visualizations can be presented to policy- and decision-makers to plan for a more sustainable future. The next section outlines a number of platforms, known as dashboards, for engaging such end users with packaged geographical information.

7.8 Dashboards

A true Digital Earth describes Earth and all its systems, including ecosystems, climate/atmospheric systems, water systems, and social systems. Our planet faces a number of great challenges, including climate change, food security, an aging population, and rapid urbanization. As policy-makers, planners, and communities grapple with how to address these critical problems, they benefit from digital tools that monitor the performance of our management of these systems using specific indicators. With the rise of big data and open data, a number of dashboards are being developed to help address these challenges, enabled by geographical visualization technologies and solutions (Geertman et al. 2017). Dashboards can be defined as “graphic user interfaces which comprise a combination of information and geographical visualization methods for creating metrics, benchmarks, and indicators to assist in monitoring and decision-making” (Pettit and Leao 2017).

One can think of dashboards as installations that provide key indicators of the performance of a particular Earth system, powered by the construct of Digital Earth. In 2016, the United Nations launched the 17 Sustainable Development Goals (SDGs) to guide policy and funding priorities until 2030. Each of these goals includes a number of indicators that can be quantified and reported within a Digital Earth dashboard, as illustrated, for example, by the SDG Index and Dashboards (https://dashboards.sdgindex.org/#/) (Sachs et al. 2018).
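How a dashboard turns raw indicator values into benchmark-based ratings can be sketched with a simple traffic-light rule, similar in spirit to the color ratings used by the SDG Index and Dashboards. The indicator names, values, and thresholds below are purely illustrative assumptions, not actual SDG metadata:

```python
def rate_indicator(value, green, red, higher_is_better=True):
    """Traffic-light rating of an indicator value against two
    benchmark thresholds (SDG-dashboard style; thresholds are assumed)."""
    if not higher_is_better:
        value, green, red = -value, -green, -red
    if value >= green:
        return "green"    # on track / target achieved
    if value <= red:
        return "red"      # major challenge remains
    return "yellow"       # somewhere in between

# Hypothetical city indicators; values and thresholds are illustrative.
indicators = {
    "population with transit access (%)": (85, 80, 50, True),
    "annual mean PM2.5 (ug/m3)": (18, 10, 25, False),
}
dashboard = {name: rate_indicator(value, green, red, hib)
             for name, (value, green, red, hib) in indicators.items()}
# -> transit access rates "green"; PM2.5 rates "yellow"
```

Negating the value and thresholds when lower is better lets one comparison rule serve both indicator directions, which keeps the rating logic uniform across heterogeneous metrics.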

For illustrative purposes, we focus on one of these goals, SDG 11—Sustainable Cities and Communities, as there are a number of city dashboard initiatives that aim to provide citizens and visitors with access to a rich tapestry of open data feeds. Data in these feeds are typically aggregated and presented to the user online and can include, for example, data on traffic congestion, public transport performance, air quality, weather, social media streams, and news feeds. Users can interact with the data and perform visual analyses via different/multiple views, which might include graphs, charts, and maps. Examples include the London Dashboard (Gray et al. 2016) and the Sydney Dashboard (Pettit et al. 2017a), illustrated in Fig. 7.18.
Fig. 7.18

City of Sydney Dashboard.

Figure provided by Chris Pettit

There are also advanced dashboard platforms that support data-driven policy and decisions through analytics. For cities, there has been an increase in the number of city analytics dashboard platforms such as the Australian Urban Research Infrastructure Network (AURIN) workbench (Pettit et al. 2017b). The AURIN workbench provides users with access to over 3,500 datasets through an online portal. This portal provides data and includes more than 100 spatial-statistical tools (Sinnott et al. 2015). The AURIN workbench (Fig. 7.19) enables users to visualize census data and a number of other spatial datasets, including the results of statistical analyses through multiple coordinated (i.e., linked) views. Thus, it enables geovisual analytics as the user can brush between maps, graphs, charts, and scatterplots to explore the various dimensions of a city (Widjaja et al. 2014). In an era of smart cities, big data, and city analytics, an increasing number of geographical visualization platforms include both data and simulations to benchmark the performance of urban systems.
Fig. 7.19

The AURIN Workbench provides a rich geovisual analytics experience.

Figure provided by Chris Pettit
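The coordinated (linked) views described above rest on a simple pattern: every view observes one shared selection, so brushing in any view updates all the others. The sketch below illustrates this pattern in miniature; the class and view names are our own assumptions, not AURIN’s implementation:

```python
class SelectionModel:
    """Shared selection state: brushing in one view notifies all linked views."""
    def __init__(self):
        self.selected = set()
        self._views = []

    def register(self, view):
        self._views.append(view)

    def brush(self, ids):
        """Called by whichever view the user brushes in."""
        self.selected = set(ids)
        for view in self._views:
            view.refresh(self.selected)

class View:
    """Stand-in for a map, chart, or scatterplot panel."""
    def __init__(self, name, selection):
        self.name = name
        self.highlighted = set()
        selection.register(self)

    def refresh(self, selected):
        # A real view would re-render; here we just record the subset.
        self.highlighted = set(selected)

# Brushing two suburbs in any one view highlights them in every view.
sel = SelectionModel()
map_view = View("map", sel)
chart_view = View("chart", sel)
sel.brush({"suburb-12", "suburb-40"})
```

Because the selection lives in one shared model rather than in any single view, the same brushed subset can drive a map, a chart, and a scatterplot simultaneously.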

Dashboard views of the performance of Earth systems such as urban systems have a number of pros and cons. Dashboards can potentially provide the best available data on the performance of an urban system or natural asset so that decisions can account for multiple dimensions, including sustainability, resilience, productivity, and livability. Dashboards are also a window into the democratization of data and provide greater transparency and accountability in policy- and decision-making. However, there are a number of challenges in developing and applying dashboards; without good-quality indicators and benchmarks, the utility of such digital presentations of performance can be questionable. Traditionally, dashboards have provided a unidirectional flow of information to their users. However, with the emergence of digital twins, there may be an opportunity for a true bidirectional flow of data between dashboards, their users, and Earth systems.

7.9 Conclusions

Our understanding of the vision of Digital Earth is that it is a fully functional virtual reality system. To achieve such a system, we need to master every aspect of the relevant technology and design, and keep the users in mind. Visualization is an interdisciplinary topic with relevance in many areas of life in the digital era, especially given that there is much more data to analyze and understand than ever before. Because the Earth is being observed, measured, probed, listened to, and recorded using dozens of different sensors, including people (Goodchild 2007), the data we need to build a Digital Earth are now available (at least for parts of the Earth). Now, the challenge is to organize these data at a global scale following cartographic principles so that we can make sense of them. Herein lies the strength of visualization. By visualizing the data in multiple ways, we can create, recreate, and predict experiences, observe patterns, and detect anomalies. Recreating a chat with an old neighbor in our childhood living room 30 years later (e.g., instead of looking at a photo album) is no longer a crazy thought; we might soon be recording enough data to do such things. The possibilities are endless. However, as inspiring as this may be, one must understand how to “do it right”; that is, we have much to learn before we know what exactly we should show, when, and to whom. In this chapter, we provided an overview of the current state of the art of topics related to visualization in the context of Digital Earth. We hope this chapter has provided some insights into our current broad understanding of this challenge.

References

  1. Afrooz A, Ballal H, Pettit C (2018) Implementing Augmented Reality Sandbox in Geode Sign: A Future for Geode Sign. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 4: 5–12.  https://doi.org/10.5194/isprs-annals-iv-4-5-2018.CrossRefGoogle Scholar
  2. Akçayır M, Akçayır G (2017) Advantages and Challenges Associated with Augmented Reality for Education: A Systematic Review of the Literature. Educational Research Review 20: 1–11.  https://doi.org/10.1016/j.edurev.2016.11.002.CrossRefGoogle Scholar
  3. Albouys-Perrois J, Laviole J, Briant C et al (2018) Towards a Multisensory Augmented Reality Map for Blind and Low Vision People: A Participatory Design Approach. In CHI ’18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper 629. New York: ACM.  https://doi.org/10.1145/3173574.3174203.
  4. Andrienko G, Andrienko N, Burch M et al (2012) Visual Analytics Methodology for Eye Movement Studies. IEEE Transactions on Visualization and Computer Graphics 18: 2889–2898.  https://doi.org/10.1109/tvcg.2012.276.CrossRefGoogle Scholar
  5. Andrienko G, Andrienko N, Hurter C et al (2011) From Movement Tracks through Events to Places: Extracting and Characterizing Significant Places from Mobility Data. In 2011 IEEE Conference on Visual Analytics Science and Technology (VAST), 161–170. Providence, RI: IEEE.  https://doi.org/10.1109/vast.2011.6102454.
  6. Andrienko G, Andrienko N, Jankowski P et al (2007) Geovisual Analytics for Spatial Decision Support: Setting the Research Agenda. International Journal of Geographical Information Science 21: 839–857.  https://doi.org/10.1080/13658810701349011.CrossRefGoogle Scholar
  7. Andrienko N, Andrienko G (2006) Exploratory Analysis of Spatial and Temporal Data: A Systematic Approach. Berlin Heidelberg: Springer-Verlag.Google Scholar
  8. Andrienko N, Andrienko G (2011) Spatial Generalization and Aggregation of Massive Movement Data. IEEE Transactions on Visualization and Computer Graphics 17: 205–219.  https://doi.org/10.1109/tvcg.2010.44.CrossRefGoogle Scholar
  9. Andrienko N, Andrienko G, Gatalsky P (2003) Visual Data Exploration Using the Space-Time Cube. In Proceedings of the 21st International Cartographic Conference, 1981–1983. Durban, South Africa: ICA.Google Scholar
  10. Arth C, Schmalstieg D (2011) Challenges of Large-Scale Augmented Reality on Smartphones. In ISMAR 2011 Workshop, 1–4. Basel, Switzerland.Google Scholar
  11. Azuma RT (1997) A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments 6: 355–385.  https://doi.org/10.1162/pres.1997.6.4.355.CrossRefGoogle Scholar
  12. Batty M (1997) Virtual Geography. Futures 29: 337–352.  https://doi.org/10.1016/s0016-3287(97)00018-9.CrossRefGoogle Scholar
  13. Batty M (2018) Digital Twins. Environment and Planning B 45: 817–820.  https://doi.org/10.1177/2399808318796416.Google Scholar
  14. Batty M, Lin H, Chen M (2017) Virtual Realities, Analogies and Technologies in Geography. In Handbook on Geographies of Technology, edited by Barney Warf, 96–110. London, UK: Edward Elgar Publishing.Google Scholar
  15. Bektas K, Çöltekin A, Krüger J et al (2015) A Testbed Combining Visual Perception Models for Geographic Gaze Contingent Displays. In Eurographics Conference on Visualization, edited by Jessie Kennedy and Enrico Puppo, 1–6. Cagliari, Sardinia, Italy.  https://doi.org/10.2312/eurovisshort.20151127.
  16. Bell G, Hey T, Szalay A (2009) Beyond the Data Deluge. Science 323: 1297–1298.  https://doi.org/10.1126/science.1170411.CrossRefGoogle Scholar
  17. Bertin J. (1967/1983) Semiology of Graphics: Diagrams, Networks, Maps. Translated by William Berg. Madison, WI.: University of Wisconsin Press.Google Scholar
  18. Biland J, Çöltekin A (2017) An Empirical Assessment of the Impact of the Light Direction on the Relief Inversion Effect in Shaded Relief Maps: NNW Is Better than NW. Cartography and Geographic Information Science 44 (4): 358–372.  https://doi.org/10.1080/15230406.2016.1185647.CrossRefGoogle Scholar
  19. Billinghurst M, Cordeil M, Bezerianos A et al (2018) Collaborative Immersive Analytics. In Immersive Analytics. Lecture Notes in Computer Science, edited by Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, 11190:221–257. Cham, Switzerland: Springer International Publishing.CrossRefGoogle Scholar
  20. Boér A, Çöltekin A, Clarke KC (2013) An Evaluation of Web-Based Geovisualizations for Different Levels of Abstraction and Realism – What Do Users Predict? In Proceedings of the 26th International Cartographic Conference, 209–220. Dresden, Germany: ICA.Google Scholar
  21. Borkin MA, Gajos KZ, Peters A et al (2011) Evaluation of Artery Visualizations for Heart Disease Diagnosis. IEEE Transactions on Visualization and Computer Graphics 17: 2479–2488.  https://doi.org/10.1109/tvcg.2011.192.CrossRefGoogle Scholar
  22. Brasebin M, Perret J, Mustière S et al (2018) 3D Urban Data to Assess Local Urban Regulation Influence. Computers, Environment and Urban Systems 68: 37–52.  https://doi.org/10.1016/j.compenvurbsys.2017.10.002.CrossRefGoogle Scholar
  23. Brewer CA (1994) Color Use Guidelines for Mapping and Visualization. In Visualization in Modern Cartography, edited by Alan M. MacEachren and D. R. Fraser Taylor, 123–147. Oxon, UK: Elsevier Science.  https://doi.org/10.1016/b978-0-08-042415-6.50014-4.CrossRefGoogle Scholar
  24. Brock AM, Truillet P, Oriola B et al (2015) Interactivity Improves Usability of Geographic Maps for Visually Impaired People. Human–Computer Interaction 30: 156–194.  https://doi.org/10.1080/07370024.2014.924412.CrossRefGoogle Scholar
  25. Brügger A, Fabrikant SI, Çöltekin A (2017) An Empirical Evaluation of Three Elevation Change Symbolization Methods along Routes in Bicycle Maps. Cartography and Geographic Information Science 44 (5): 436–451.  https://doi.org/10.1080/15230406.2016.1193766.CrossRefGoogle Scholar
  26. Brychtová A, Çöltekin A (2015) Discriminating Classes of Sequential and Qualitative Colour Schemes. International Journal of Cartography 1: 62–78.  https://doi.org/10.1080/23729333.2015.1055643.CrossRefGoogle Scholar
  27. Brychtová A, Çöltekin A (2017a) Calculating Colour Distance on Choropleth Maps with Sequential Colours – A Case Study with ColorBrewer 2.0. Kartographische Nachrichten, no. 2: 53–60.Google Scholar
  28. Brychtová A, Çöltekin A (2017b) The Effect of Spatial Distance on the Discriminability of Colors in Maps. Cartography and Geographic Information Science 44 (3): 229–245.  https://doi.org/10.1080/15230406.2016.1140074.CrossRefGoogle Scholar
  29. Buchin M, Driemel A, van Kreveld M et al (2011) Segmenting Trajectories: A Framework and Algorithms Using Spatiotemporal Criteria. Journal of Spatial Information Science 3: 33–63.  https://doi.org/10.5311/josis.2011.3.66.
  30. Carpendale MST (2003) Considering Visual Variables as a Basis for Information Visualisation. Calgary: University of Calgary.  https://doi.org/10.11575/prism/30495.
  31. Carrera CC, Bermejo Asensio LA (2017) Landscape Interpretation with Augmented Reality and Maps to Improve Spatial Orientation Skill. Journal of Geography in Higher Education 41 (1): 119–133.  https://doi.org/10.1080/03098265.2016.1260530.CrossRefGoogle Scholar
  32. Chae J, Thom D, Bosch H et al (2012) Spatiotemporal Social Media Analytics for Abnormal Event Detection and Examination Using Seasonal-Trend Decomposition. In 2012 IEEE Conference on Visual Analytics Science and Technology (VAST), 143–152. Seattle, WA: IEEE.  https://doi.org/10.1109/vast.2012.6400557.
  33. Chen J, Roth RE, Naito AT et al (2008) Geovisual Analytics to Enhance Spatial Scan Statistic Interpretation: An Analysis of U.S. Cervical Cancer Mortality. International Journal of Health Geographics 7: article 57.  https://doi.org/10.1186/1476-072x-7-57.CrossRefGoogle Scholar
  34. Chen M, Lin H (2018) Virtual Geographic Environments (VGEs): Originating from or beyond Virtual Reality (VR)? International Journal of Digital Earth 11: 329–333.  https://doi.org/10.1080/17538947.2017.1419452.CrossRefGoogle Scholar
  35. Chen Min, Lin H, Kolditz O et al (2015) Developing Dynamic Virtual Geographic Environments (VGEs) for Geographic Research. Environmental Earth Sciences 74: 6975–6980.  https://doi.org/10.1007/s12665-015-4761-4.CrossRefGoogle Scholar
  36. Chen M, Lin H, Wen YN et al (2012) Sino-VirtualMoon: A 3D Web Platform Using Chang’E-1 Data for Collaborative Research. Planetary and Space Science 65: 130–136.  https://doi.org/10.1016/j.pss.2012.01.005.CrossRefGoogle Scholar
  37. Cheshire J, Umberti O (2017) Where the Animals Go: Tracking Wildlife with Technology in 50 Maps and Graphics. New York: W. W. Norton & Company.Google Scholar
  38. Chi HL, Kang SC, Wang XY (2013) Research Trends and Opportunities of Augmented Reality Applications in Architecture, Engineering, and Construction. Automation in Construction 33: 116–122.  https://doi.org/10.1016/j.autcon.2012.12.017.CrossRefGoogle Scholar
  39. Christophe S (2011) Creative Colours Specification Based on Knowledge (COLorLEGend System). The Cartographic Journal 48: 138–145.  https://doi.org/10.1179/1743277411y.0000000012.CrossRefGoogle Scholar
  40. Çöltekin A, Biland J (2018) Comparing the Terrain Reversal Effect in Satellite Images and in Shaded Relief Maps: An Examination of the Effects of Color and Texture on 3D Shape Perception from Shading. International Journal of Digital Earth in press: 1–18.  https://doi.org/10.1080/17538947.2018.1447030.CrossRefGoogle Scholar
  41. Çöltekin A, Alžběta B, Griffin AL et al (2016a) Perceptual complexity of soil-landscape maps: a user evaluation of color organization in legend designs using eye tracking. International Journal of Digital Earth 10(6):560–581.  https://doi.org/10.1080/17538947.2016.1234007.CrossRefGoogle Scholar
  42. Çöltekin A, Lokka IE, Zahner M (2016b) On the Usability and Usefulness of 3D (Geo)Visualizations – A Focus on Virtual Reality Environments. In International Archives of Photogrammetry, Remote Sensing, and Spatial Information Science, XLI-B2:387–392. Prague, Czechia.  https://doi.org/10.5194/isprsarchives-xli-b2-387-2016.
  43. Çöltekin A, Bleisch S, Andrienko G et al (2017) Persistent Challenges in Geovisualization – a Community Perspective. The International Journal of Cartography 3: 115–139.  https://doi.org/10.1080/23729333.2017.1302910.CrossRefGoogle Scholar
  44. Çöltekin A, Keith CC (2011) Virtual Globes or Virtual Geographical Reality. Geospatial Today January 2011: 26–28.Google Scholar
  45. Çöltekin A, Fabrikant SI, Lacayo M (2010) Exploring the Efficiency of Users’ Visual Analytics Strategies Based on Sequence Analysis of Eye Movement Recordings. International Journal of Geographical Information Science 24: 1559–1575.  https://doi.org/10.1080/13658816.2010.511718.CrossRefGoogle Scholar
  46. Çöltekin A, Janetzko H, Fabrikant SI (2018) Geovisualization. In The Geographic Information Science & Technology Body of Knowledge (2nd Quarter 2018 Edition), edited by John P. Wilson. UCGIS.  https://doi.org/10.22224/gistbok/2018.2.6.
  47. Çöltekin A, Pettit C, Wu B (2015) Geovisual Analytics: Human Factors. International Journal of Digital Earth 8: 595–598.  https://doi.org/10.1080/17538947.2015.1047173.CrossRefGoogle Scholar
  48. Cooper D (2011) User and Design Perspectives of Mobile Augmented Reality. MA Thesis, Muncie, Indiana: Ball State University.Google Scholar
  49. Cordeil M, Cunningham A, Dwyer T et al (2017) ImAxes. In UIST ’17 Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, 71–83. Québec City, QC, Canada: ACM.  https://doi.org/10.1145/3126594.3126613.
  50. Dall’Acqua L, Çöltekin A, Noetzli J (2013) A Comparative User Evaluation of Six Alternative Permafrost Visualizations for Reading and Interpreting Temperature Information. In GeoViz Hamburg 2013. Hamburg, Germany.  https://doi.org/10.5167/uzh-87817.
  51. Demšar U, Çöltekin A (2017) Quantifying Gaze and Mouse Interactions on Spatial Visual Interfaces with a New Movement Analytics Methodology. PLOS ONE 12: e0181818.  https://doi.org/10.1371/journal.pone.0181818.CrossRefGoogle Scholar
  52. Demšar U, Virrantaus K (2010) Space-Time Density of Trajectories: Exploring Spatio-Temporal Patterns in Movement Data. International Journal of Geographical Information Science 24: 1527–1542.  https://doi.org/10.1080/13658816.2010.511223.CrossRefGoogle Scholar
  53. Devaux A, Hoarau C, Br’edif M et al (2018) 3D Urban Geovisualization: In Situ Augmented and Mixed Reality Experiments. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences IV–4: 41–48.  https://doi.org/10.5194/isprs-annals-iv-4-41-2018.CrossRefGoogle Scholar
54. DiBiase D, MacEachren AM, Krygier JB et al (1992) Animation and the Role of Map Design in Scientific Visualization. Cartography and Geographic Information Science 19: 201–214. https://doi.org/10.1559/152304092783721295
55. Duchowski AT, Çöltekin A (2007) Foveated Gaze-Contingent Displays for Peripheral LOD Management, 3D Visualization, and Stereo Imaging. ACM Transactions on Multimedia Computing, Communications, and Applications 3: 1–18. https://doi.org/10.1145/1314303.1314309
56. Duchowski AT, McCormick BH (1995) Preattentive Considerations for Gaze-Contingent Image Processing. In Proceedings of SPIE, 2411: 128–139. San Jose: SPIE. https://doi.org/10.1117/12.207556
57. Dumont M, Touya G, Duchêne C (2018) Alternative Transitions between Existing Representations in Multi-Scale Maps. In Proceedings of the 28th International Cartographic Conference, 1–6. Washington DC: ICA. https://doi.org/10.5194/ica-proc-1-33-2018
58. Dupont L, Pallot M, Morel L (2016) Exploring the Appropriateness of Different Immersive Environments in the Context of an Innovation Process for Smart Cities. In 22nd ICE/IEEE International Technology Management Conference. Trondheim, Norway.
59. Dykes J, MacEachren AM, Kraak MJ (2005a) Exploring Geovisualization. In Exploring Geovisualization, edited by Jason Dykes, Alan M. MacEachren, and Menno-Jan Kraak, 1–19. Amsterdam: Pergamon Press.
60. Dykes J, MacEachren AM, Kraak MJ (2005b) Advancing Geovisualization. In Exploring Geovisualization, edited by Jason Dykes, Alan M. MacEachren, and Menno-Jan Kraak, 691–703. Amsterdam: Pergamon Press. https://doi.org/10.1016/b978-008044531-1/50454-1
61. Fairbairn D (2006) Measuring Map Complexity. The Cartographic Journal 43: 224–238. https://doi.org/10.1179/000870406x169883
62. Fisher P, Unwin D (2001) Virtual Reality in Geography. London: CRC Press.
63. Fuhrmann S, Paula AR, Edsall RM et al (2005) Making Useful and Useable Geovisualization: Design and Evaluation Issues. In Exploring Geovisualization, edited by Jason Dykes, Alan M. MacEachren, and Menno-Jan Kraak, 553–566. Pergamon Press. https://doi.org/10.1016/b978-008044531-1/50446-2
64. Gabbard JL, Hix D, Swan EJ (1999) User-Centered Design and Evaluation of Virtual Environments. IEEE Computer Graphics and Applications 19: 51–59. https://doi.org/10.1109/38.799740
65. Gandomi A, Haider M (2015) Beyond the Hype: Big Data Concepts, Methods, and Analytics. International Journal of Information Management 35: 137–144. https://doi.org/10.1016/j.ijinfomgt.2014.10.007
66. Garlandini S, Fabrikant SI (2009) Evaluating the Effectiveness and Efficiency of Visual Variables for Geographic Information Visualization. In COSIT 2009, Lecture Notes in Computer Science, 5756: 195–211. Berlin Heidelberg: Springer-Verlag. https://doi.org/10.1007/978-3-642-03832-7_12
67. Geertman S, Allan A, Pettit C et al (2017) Introduction to Planning Support Science for Smarter Urban Futures. In Planning Support Science for Smarter Urban Futures, edited by Stan Geertman, Andrew Allan, Chris Pettit, and John Stillwell, 1–19. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-57819-4
68. Goodchild MF (2007) Citizens as Sensors: The World of Volunteered Geography. GeoJournal 69: 211–221. https://doi.org/10.1007/s10708-007-9111-y
69. Goodchild MF (2009) Virtual Geographic Environments as Collective Constructions. In Virtual Geographic Environments, edited by Hui Lin and Michael Batty, 15–24. Beijing: Science Press.
70. Goodchild MF, Guo HD, Annoni A et al (2012) Next-Generation Digital Earth. Proceedings of the National Academy of Sciences 109: 11088–11094. https://doi.org/10.1073/pnas.1202383109
71. Gore A (1998) The Digital Earth: Understanding Our Planet in the 21st Century. Australian Surveyor 43 (2): 89–91. https://doi.org/10.1080/00050326.1998.10441850
72. Gotow JB, Zienkiewicz K, White J et al (2010) Addressing Challenges with Augmented Reality Applications on Smartphones. In Mobile Wireless Middleware, Operating Systems, and Applications: Third International Conference, Mobilware 2010, Chicago, IL, USA, June 30–July 2, 2010, Revised Selected Papers, 129–143. https://doi.org/10.1007/978-3-642-17758-3_10
73. Gray S, O’Brien O, Hügel S (2016) Collecting and Visualizing Real-Time Urban Data through City Dashboards. Built Environment 42: 498–509. https://doi.org/10.2148/benv.42.3.498
74. Griffin AL (2004) Understanding How Scientists Use Data-Display Devices for Interactive Visual Computing with Geographical Models. PhD Thesis, University Park, PA: Department of Geography, The Pennsylvania State University.
75. Griffin AL, Fabrikant SI (2012) More Maps, More Users, More Devices Means More Cartographic Challenges. The Cartographic Journal 49: 298–301. https://doi.org/10.1179/0008704112z.00000000049
76. Griffin AL, White T, Fish C et al (2017) Designing across Map Use Contexts: A Research Agenda. International Journal of Cartography 3: 1–25. https://doi.org/10.1080/23729333.2017.1315988
77. Grossner KE, Goodchild MF, Clarke KC (2008) Defining a Digital Earth System. Transactions in GIS 12: 145–160. https://doi.org/10.1111/j.1467-9671.2008.01090.x
78. Hägerstrand T (1970) What about People in Regional Science? Papers in Regional Science 24: 7–24. https://doi.org/10.1111/j.1435-5597.1970.tb01464.x
79. Harrower M, Brewer CA (2003) ColorBrewer.org: An Online Tool for Selecting Colour Schemes for Maps. The Cartographic Journal 40: 27–37.
80. Hegarty M (2011) The Cognitive Science of Visual-Spatial Displays: Implications for Design. Topics in Cognitive Science 3: 446–474. https://doi.org/10.1111/j.1756-8765.2011.01150.x
81. Hegarty M, Waller DA (2005) Individual Differences in Spatial Abilities. In The Cambridge Handbook of Visuospatial Thinking, edited by Priti Shah and Akira Miyake, 121–169. Cambridge, UK: Cambridge University Press.
82. Heim M (1998) Virtual Realism. Oxford: Oxford University Press.
83. Helbig C, Bauer HS, Rink K et al (2014) Concept and Workflow for 3D Visualization of Atmospheric Data in a Virtual Reality Environment for Analytical Approaches. Environmental Earth Sciences 72: 3767–3780. https://doi.org/10.1007/s12665-014-3136-6
84. Heuer RJ (1999) Psychology of Intelligence Analysis. Langley, VA: Central Intelligence Agency.
85. Hoarau C, Christophe S (2017) Cartographic Continuum Rendering Based on Color and Texture Interpolation to Enhance Photo-Realism Perception. ISPRS Journal of Photogrammetry and Remote Sensing 127: 27–38. https://doi.org/10.1016/j.isprsjprs.2016.09.012
86. Holten D, van Wijk JJ (2009) Force-Directed Edge Bundling for Graph Visualization. Computer Graphics Forum 28 (3): 983–990. https://doi.org/10.1111/j.1467-8659.2009.01450.x
87. Huang HS, Schmidt M, Gartner G (2012) Spatial Knowledge Acquisition with Mobile Maps, Augmented Reality and Voice in the Context of GPS-Based Pedestrian Navigation: Results from a Field Test. Cartography and Geographic Information Science 39: 107–116. https://doi.org/10.1559/15230406392107
88. Huang L, Gong JH, Li WH et al (2018) Social Force Model-Based Group Behavior Simulation in Virtual Geographic Environments. ISPRS International Journal of Geo-Information 7: article 79. https://doi.org/10.3390/ijgi7020079
89. Hullman J (2016) Why Evaluating Uncertainty Visualization Is Error Prone. In BELIV ’16, 143–151. Baltimore, MD. https://doi.org/10.1145/2993901.2993919
90. Hurter C (2015) Image-Based Visualisation: Interactive Multidimensional Data Exploration. London: Morgan & Claypool Publishers. https://doi.org/10.2200/s00688ed1v01y201512vis006
91. Hurter C, Puechmorel S, Nicol F et al (2018) Functional Decomposition for Bundled Simplification of Trail Sets. IEEE Transactions on Visualization and Computer Graphics 24: 500–510. https://doi.org/10.1109/tvcg.2017.2744338
92. Jacquinod F, Pedrinis F, Edert J et al (2016) Automated Production of Interactive 3D Temporal Geovisualizations so as to Enhance Flood Risk Awareness. In UDMV ’16 Proceedings of the Eurographics Workshop on Urban Data Modelling and Visualisation, 71–77. Liège, Belgium: The Eurographics Association.
93. Jenny B, Kelso NV (2007) Color Design for the Color Vision Impaired. Cartographic Perspectives 58: 61–67. https://doi.org/10.14714/cp58.270
94. Jenny B, Stephen DM, Muehlenhaus I et al (2017) Force-Directed Layout of Origin-Destination Flow Maps. International Journal of Geographical Information Science 31 (8): 1521–1540. https://doi.org/10.1080/13658816.2017.1307378
95. Jenny B, Stephen DM, Muehlenhaus I et al (2018) Design Principles for Origin-Destination Flow Maps. Cartography and Geographic Information Science 45: 62–75. https://doi.org/10.1080/15230406.2016.1262280
96. Jenny H, Jenny B, Cron J (2012) Exploring Transition Textures for Pseudo-Natural Maps. In GI_Forum 2012: Geovisualization, Society and Learning, edited by T. Jekel, A. Car, Josef Strobl, and G. Grieseneber, 130–139. Berlin: Wichmann.
97. Jerald J (2015) The VR Book. New York: ACM & Morgan & Claypool Publishers. https://doi.org/10.1145/2792790
98. Jia FL, You X, Tian JP et al (2015) Formal Language for the Virtual Geographic Environment. Environmental Earth Sciences 74: 6981–7002. https://doi.org/10.1007/s12665-015-4756-1
99. Just MA, Carpenter PA (1976) The Role of Eye-Fixation Research in Cognitive Psychology. Behavior Research Methods & Instrumentation 8: 139–143. https://doi.org/10.3758/bf03201761
100. Kapler T, Wright W (2004) GeoTime Information Visualization. In IEEE Symposium on Information Visualization, 25–32. Austin, TX: IEEE. https://doi.org/10.1109/infvis.2004.27
101. Kelly M, Slingsby A, Dykes J et al (2013) Historical Internal Migration in Ireland. In Proceedings of GISRUK 2013. Liverpool, UK.
102. Kinkeldey C, MacEachren AM, Schiewe J (2014) How to Assess Visual Communication of Uncertainty? A Systematic Review of Geospatial Uncertainty Visualisation User Studies. The Cartographic Journal 51: 372–386. https://doi.org/10.1179/1743277414y.0000000099
103. Kisilevich S, Krstajic M, Keim D et al (2010) Event-Based Analysis of People’s Activities and Behavior Using Flickr and Panoramio Geotagged Photo Collections. In 14th International Conference on Information Visualisation, IV 2010, 289–296. London, UK: IEEE. https://doi.org/10.1109/iv.2010.94
104. Konecny M (2011) Review: Cartography: Challenges and Potential in the Virtual Geographic Environments Era. Annals of GIS 17: 135–146. https://doi.org/10.1080/19475683.2011.602027
105. Kounavis CD, Kasimati AE, Zamani ED (2012) Enhancing the Tourism Experience through Mobile Augmented Reality: Challenges and Prospects. International Journal of Engineering Business Management 4: article 10. https://doi.org/10.5772/51644
106. Kourouthanassis PE, Boletsis C, Lekakos G (2015) Demystifying the Design of Mobile Augmented Reality Applications. Multimedia Tools and Applications 74: 1045–1066. https://doi.org/10.1007/s11042-013-1710-7
107. Kraak MJ (2003a) Geovisualization Illustrated. ISPRS Journal of Photogrammetry and Remote Sensing 57: 390–399. https://doi.org/10.1016/s0924-2716(02)00167-3
108. Kraak MJ (2003b) The Space–Time Cube Revisited from a Geovisualization Perspective. In Proceedings of the 21st International Cartographic Conference, 1988–1996. Durban, South Africa: ICA.
109. Kraak MJ (2008) Geovisualization and Time – New Opportunities for the Space–Time Cube. In Geographic Visualization: Concepts, Tools and Applications, edited by Martin Dodge, Mary McDerby, and Martin Turner, 293–306. Chichester, West Sussex, UK: John Wiley & Sons. https://doi.org/10.1002/9780470987643.ch15
110. Kurkovsky S, Koshy R, Novak V et al (2012) Current Issues in Handheld Augmented Reality. In 2012 International Conference on Communications and Information Technology (ICCIT), 68–72. Hammamet, Tunisia: IEEE. https://doi.org/10.1109/iccitechnol.2012.6285844
111. Kveladze I, Kraak MJ, van Elzakker CPJM (2015) The Space-Time Cube as Part of a GeoVisual Analytics Environment to Support the Understanding of Movement Data. International Journal of Geographical Information Science 29: 2001–2016. https://doi.org/10.1080/13658816.2015.1058386
112. Laney D (2001) 3D Data Management: Controlling Data Volume, Velocity, and Variety. META Group. https://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-Volume-Velocity-and-Variety.pdf
113. Li SN, Dragicevic S, Castro FA et al (2016) Geospatial Big Data Handling Theory and Methods: A Review and Research Challenges. ISPRS Journal of Photogrammetry and Remote Sensing 115: 119–133. https://doi.org/10.1016/j.isprsjprs.2015.10.012
114. Li Y, Gong JH, Song YQ et al (2015) Design and Key Techniques of a Collaborative Virtual Flood Experiment That Integrates Cellular Automata and Dynamic Observations. Environmental Earth Sciences 74: 7059–7067. https://doi.org/10.1007/s12665-015-4716-9
115. Li ZL, Huang PZ (2002) Quantitative Measures for Spatial Information of Maps. International Journal of Geographical Information Science 16: 699–709. https://doi.org/10.1080/13658810210149416
116. Liang JM, Gong JH, Li Y (2015) Realistic Rendering for Physically Based Shallow Water Simulation in Virtual Geographic Environments (VGEs). Annals of GIS 21: 301–312. https://doi.org/10.1080/19475683.2015.1050064
117. Liben LS, Downs RM (1993) Understanding Person-Space-Map Relations: Cartographic and Developmental Perspectives. Developmental Psychology 29: 739–752. https://doi.org/10.1037/0012-1649.29.4.739
118. Lin H, Batty M, Jorgensen SE et al (2015) Virtual Environments Begin to Embrace Process-Based Geographic Analysis. Transactions in GIS 19: 493–498. https://doi.org/10.1111/tgis.12167
119. Lin H, Chen M, Lu GN (2013a) Virtual Geographic Environment: A Workspace for Computer-Aided Geographic Experiments. Annals of the Association of American Geographers 103: 465–482. https://doi.org/10.1080/00045608.2012.689234
120. Lin H, Chen M, Lu GN et al (2013b) Virtual Geographic Environments (VGEs): A New Generation of Geographic Analysis Tool. Earth-Science Reviews 126: 74–84. https://doi.org/10.1016/j.earscirev.2013.08.001
121. Lin H, Gong JH (2001) Exploring Virtual Geographic Environments. Annals of GIS 7: 1–7. https://doi.org/10.1080/10824000109480550
122. Lloyd D, Dykes J (2011) Human-Centered Approaches in Geovisualization Design: Investigating Multiple Methods Through a Long-Term Case Study. IEEE Transactions on Visualization and Computer Graphics 17: 2498–2507. https://doi.org/10.1109/tvcg.2011.209
123. Lokka IE, Çöltekin A (2019) Toward Optimizing the Design of Virtual Environments for Route Learning: Empirically Assessing the Effects of Changing Levels of Realism on Memory. International Journal of Digital Earth 12: 137–155. https://doi.org/10.1080/17538947.2017.1349842
124. Lokka IE, Çöltekin A, Wiener J et al (2018) Virtual Environments as Memory Training Devices in Navigational Tasks for Older Adults. Scientific Reports 8: 10809. https://doi.org/10.1038/s41598-018-29029-x
125. Lü GN, Batty M, Strobl J et al (2019) Reflections and Speculations on the Progress in Geographic Information Systems (GIS): A Geographic Perspective. International Journal of Geographical Information Science 33 (2): 346–367. https://doi.org/10.1080/13658816.2018.1533136
126. MacEachren AM (1982) Map Complexity: Comparison and Measurement. The American Cartographer 9: 31–46. https://doi.org/10.1559/152304082783948286
127. MacEachren AM (1994) Visualization in Modern Cartography: Setting the Agenda. In Visualization in Modern Cartography, edited by Alan M. MacEachren and D. R. Fraser Taylor, 1–12. Oxford, UK: Pergamon Press.
128. MacEachren AM (1995) How Maps Work: Representation, Visualization, and Design. New York: Guilford Press.
129. MacEachren AM (2015) Visual Analytics and Uncertainty: It’s Not About the Data. In EuroVis Workshop on Visual Analytics (EuroVA), 55–60. Cagliari, Sardinia, Italy. https://doi.org/10.2312/eurova.20151104
130. MacEachren AM, Edsall R, Haug D et al (1999a) Exploring the Potential of Virtual Environments for Geographic Visualization. Paper presented at the Annual Meeting of the Association of American Geographers, Honolulu, HI.
131. MacEachren AM, Jaiswal A, Robinson AC et al (2011) SensePlace2: GeoTwitter Analytics Support for Situational Awareness. In 2011 IEEE Conference on Visual Analytics Science and Technology (VAST), 181–190. Providence, RI: IEEE. https://doi.org/10.1109/vast.2011.6102456
132. MacEachren AM, Kraak MJ (2001) Research Challenges in Geovisualization. Cartography and Geographic Information Science 28: 3–12. https://doi.org/10.1559/152304001782173970
133. MacEachren AM, Kraak MJ, Verbree E (1999b) Cartographic Issues in the Design and Application of Geospatial Virtual Environments. In Proceedings of the 19th International Cartographic Conference. Ottawa, Canada: ICA.
134. MacEachren AM, Roth RE, O’Brien J et al (2012) Visual Semiotics & Uncertainty Visualization: An Empirical Study. IEEE Transactions on Visualization and Computer Graphics 18: 2496–2505. https://doi.org/10.1109/tvcg.2012.279
135. Marriott K, Schreiber F, Dwyer T et al (2018) Immersive Analytics. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-030-01388-2
136. Mekni M (2010) Hierarchical Path Planning for Situated Agents in Informed Virtual Geographic Environments. In SIMUTools 2010. Torremolinos, Malaga, Spain. https://doi.org/10.4108/icst.simutools2010.8811
137. Mellado N, Vanderhaeghe D, Hoarau C et al (2017) Constrained Palette-Space Exploration. ACM Transactions on Graphics 36: 1–14. https://doi.org/10.1145/3072959.3073650
138. Milgram P, Kishino F (1994) A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information and Systems 77: 1321–1329.
139. Monmonier M (2018) How to Lie with Maps (Third Edition). Chicago, IL: University of Chicago Press.
140. Newcombe N, Bandura MM, Taylor DG (1983) Sex Differences in Spatial Ability and Spatial Activities. Sex Roles 9: 377–386. https://doi.org/10.1007/bf00289672
141. Nöllenburg M (2007) Geographic Visualization. In Human-Centered Visualization Environments: GI-Dagstuhl Research Seminar, Dagstuhl Castle, Germany, March 5–8, 2006, Revised Lectures, Lecture Notes in Computer Science 4417: 257–294. https://doi.org/10.1007/978-3-540-71949-6_6
142. Olsson T (2012) User Expectations and Experiences of Mobile Augmented Reality Services. PhD Thesis, Tampere, Finland: Tampere University of Technology. https://tutcris.tut.fi/portal/files/5450806/olsson.pdf
143. Ooms K, Maeyer PD, Fack V (2015) Listen to the Map User: Cognition, Memory, and Expertise. The Cartographic Journal 52: 3–19. https://doi.org/10.1179/1743277413y.0000000068
144. Oprean D, Wallgrün JO, Duarte JMP et al (2018) Developing and Evaluating VR Field Trips. In Proceedings of Workshops and Posters of the 13th International Conference on Spatial Information Theory (COSIT 2017), edited by Eliseo Clementini, Eliseo Fogliaroni, and Andrea Ballatore, 105–110. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-63946-8_22
145. Oprean D, Simpson M, Klippel A (2017) Collaborating Remotely: An Evaluation of Immersive Capabilities on Spatial Experiences and Team Membership. International Journal of Digital Earth 11 (4): 420–436. https://doi.org/10.1080/17538947.2017.1381191
146. Patterson T, Kelso NV (2004) Hal Shelton Revisited: Designing and Producing Natural-Color Maps with Satellite Land Cover Data. Cartographic Perspectives 47: 28–55. https://doi.org/10.14714/cp47.470
147. Perkins C (2008) Cultures of Map Use. The Cartographic Journal 45: 150–158. https://doi.org/10.1179/174327708x305076
148. Petrasova A, Harmon B, Petras V et al (2015) Tangible Modeling with Open Source GIS. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-25775-4
149. Pettit C, Lieske SN, Jamal M (2017a) CityDash: Visualising a Changing City Using Open Data. In Planning Support Science for Smarter Urban Futures, edited by Stan Geertman, Andrew Allan, Chris Pettit, and John Stillwell, 337–353. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-57819-4_19
150. Pettit CJ, Leao SZ (2017) Dashboard. In Encyclopedia of Big Data, edited by Laurie A. Schintler and Connie L. McNeely, 1–6. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-32001-4
151. Pettit CJ, Russel ABM, Michael A et al (2010) Realising an eScience Platform to Support Climate Change Adaptation in Victoria. In 2010 IEEE Sixth International Conference on e-Science, 73–80. IEEE. https://doi.org/10.1109/escience.2010.32
152. Pettit C, Tice A, Randolph B (2017b) Using an Online Spatial Analytics Workbench for Understanding Housing Affordability in Sydney. In Seeing Cities Through Big Data: Research, Methods and Applications in Urban Informatics, edited by Piyushimita Thakuriah, Nebiyou Tilahun, and Moira Zellner, 233–255. Cham: Springer. https://doi.org/10.1007/978-3-319-40902-3_14
153. Pindat C, Pietriga E, Chapuis O et al (2012) JellyLens: Content-Aware Adaptive Lenses. In UIST ’12 Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, 261–270. Cambridge, MA: ACM. https://doi.org/10.1145/2380116.2380150
154. Pirolli P, Card S (2005) The Sensemaking Process and Leverage Points for Analyst Technology as Identified through Cognitive Task Analysis. In Proceedings of International Conference on Intelligence Analysis. McLean, VA.
155. Priestnall G, Jarvis C, Burton A et al (2012) Virtual Geographic Environments. In Teaching Geographic Information Science and Technology in Higher Education, edited by David J. Unwin, Kenneth E. Foote, Nicholas J. Tate, and David DiBiase, 257–288. Chichester, West Sussex, UK: John Wiley & Sons. https://doi.org/10.1002/9781119950592.ch18
156. Rautenbach V, Çöltekin A, Coetzee S (2015) Exploring the Impact of Visual Complexity Levels in 3D City Models on the Accuracy of Individuals’ Orientation and Cognitive Maps. In ISPRS Geospatial Week 2015, Workshop II-3/W5, edited by Sidonie Christophe and Arzu Çöltekin, 499–506. La Grande Motte, France. https://doi.org/10.5194/isprsannals-ii-3-w5-499-2015
157. Richter KF, Tomko M, Çöltekin A (2015) Are We There Yet? Spatial Cognitive Engineering for Situated Human-Computer Interaction. In CESIP 2015: Cognitive Engineering for Spatial Information Processes: From User Interfaces to Model-Driven Design. Workshop at COSIT 2015, 1–7. Santa Fe, NM.
158. Rink K, Chen C, Bilke L et al (2018) Virtual Geographic Environments for Water Pollution Control. International Journal of Digital Earth 11: 397–407. https://doi.org/10.1080/17538947.2016.1265016
159. Roberts JC (2007) State of the Art: Coordinated & Multiple Views in Exploratory Visualization. In Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2007), 61–71. Zürich, Switzerland: IEEE. https://doi.org/10.1109/cmv.2007.20
160. Robertson G, Fernandez R, Fisher D et al (2008) Effectiveness of Animation in Trend Visualization. IEEE Transactions on Visualization and Computer Graphics 14: 1325–1332. https://doi.org/10.1109/tvcg.2008.125
161. Robertson GG, Mackinlay JD (1993) The Document Lens. In UIST ’93 Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology, 101–108. Atlanta, GA: ACM. https://doi.org/10.1145/168642.168652
162. Robinson A (2017) Geovisual Analytics. In The Geographic Information Science & Technology Body of Knowledge (3rd Quarter 2017 Edition), edited by John P. Wilson. UCGIS. https://doi.org/10.22224/gistbok/2017.3.6
163. Robinson AC, Chen J, Lengerich EJ et al (2005) Combining Usability Techniques to Design Geovisualization Tools for Epidemiology. Cartography and Geographic Information Science 32: 243–255. https://doi.org/10.1559/152304005775194700
164. Robinson AC, Demšar U, Moore AB et al (2017) Geospatial Big Data and Cartography: Research Challenges and Opportunities for Making Maps That Matter. International Journal of Cartography 3: 32–60. https://doi.org/10.1080/23729333.2016.1278151
165. Roth RE, Çöltekin A, Delazari L et al (2017) User Studies in Cartography: Opportunities for Empirical Research on Interactive Maps and Visualizations. The International Journal of Cartography 3: 61–89. https://doi.org/10.1080/23729333.2017.1288534
166. Ruas A, Perret J, Curie F et al (2011) Conception of a GIS-Platform to Simulate Urban Densification Based on the Analysis of Topographic Data. In Advances in Cartography and GIScience, Volume 2, edited by Anne Ruas, 413–430. Berlin Heidelberg: Springer-Verlag. https://doi.org/10.1007/978-3-642-19214-2
167. Russo P, Pettit C, Çöltekin A et al (2013) Understanding Soil Acidification Process Using Animation and Text: An Empirical User Evaluation With Eye Tracking. In Proceedings of the 26th International Cartographic Conference, 431–448. Berlin Heidelberg: Springer-Verlag. https://doi.org/10.1007/978-3-642-32618-9_31
168. Sachs J, Schmidt-Traub G, Kroll C et al (2018) SDG Index and Dashboards Report 2018. New York: Bertelsmann Stiftung and Sustainable Development Solutions Network (SDSN).
169. Scaife M, Rogers Y (1996) External Cognition: How Do Graphical Representations Work? International Journal of Human-Computer Studies 45: 185–213. https://doi.org/10.1006/ijhc.1996.0048
170. Schnur S, Bektaş K, Çöltekin A (2018) Measured and Perceived Visual Complexity: A Comparative Study among Three Online Map Providers. Cartography and Geographic Information Science 45: 238–254. https://doi.org/10.1080/15230406.2017.1323676
171. Schnur S, Bektaş K, Salahi M et al (2010) A Comparison of Measured and Perceived Visual Complexity for Dynamic Web Maps. In Proceedings of GIScience 2010, edited by Ross Purves and Robert Weibel, 1–4. Zürich, Switzerland. http://www.giscience2010.org/pdfs/paper_181.pdf
172. Semmo A, Döllner J (2014) An Interaction Framework for Level-of-Abstraction Visualization of 3D Geovirtual Environments. In MapInteract ’14, 43–49. Dallas/Fort Worth, Texas: ACM. https://doi.org/10.1145/2677068.2677072
173. Semmo A, Trapp M, Kyprianidis JE et al (2012) Interactive Visualization of Generalized Virtual 3D City Models Using Level-of-Abstraction Transitions. Computer Graphics Forum 31: 885–894. https://doi.org/10.1111/j.1467-8659.2012.03081.x
174. Shen S, Gong JH, Liang JM et al (2018) A Heterogeneous Distributed Virtual Geographic Environment—Potential Application in Spatiotemporal Behavior Experiments. ISPRS International Journal of Geo-Information 7: 54. https://doi.org/10.3390/ijgi7020054
175. Sherman WR, Craig AB (2003) Understanding Virtual Reality: Interface, Application, and Design. San Francisco: Morgan Kaufmann Publishers, Inc.
176. Sinnott RO, Bayliss C, Bromage A et al (2015) The Australia Urban Research Gateway. Concurrency and Computation: Practice and Experience 27: 358–375. https://doi.org/10.1002/cpe.3282
177. Skupin A, Buttenfield BP (1997) Spatial Metaphors for Visualizing Information Spaces. In Proceedings of AUTO-CARTO 13, 116–125. Seattle, WA: ACSM/ASPRS.
178. Slater M (2009) Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments. Philosophical Transactions of the Royal Society B: Biological Sciences 364: 3549–3557. https://doi.org/10.1098/rstb.2009.0138
179. Slingsby A (2018) Tilemaps for Summarising Multivariate Geographical Variation. Paper presented at VIS 2018. Berlin, Germany: IEEE.
180. Slingsby A, van Loon E (2016) Exploratory Visual Analysis for Animal Movement Ecology. Computer Graphics Forum 35 (3): 471–480. https://doi.org/10.1111/cgf.12923
181. Slingsby A, Wood J, Dykes J (2010) Treemap Cartography for Showing Spatial and Temporal Traffic Patterns. Journal of Maps 6: 135–146. https://doi.org/10.4113/jom.2010.1071
182. Slocum TA, Blok C, Jiang B et al (2001) Cognitive and Usability Issues in Geovisualization. Cartography and Geographic Information Science 28: 61–75. https://doi.org/10.1559/152304001782173998
183. Slocum TA, Cliburn DC, Feddema JJ et al (2003) Evaluating the Usability of a Tool for Visualizing the Uncertainty of the Future Global Water Balance. Cartography and Geographic Information Science 30 (4): 299–317. https://doi.org/10.1559/152304003322606210
184. Slocum TA, McMaster RB, Kessler FC et al (2008) Thematic Cartography and Geovisualization (3rd Edition). Upper Saddle River, NJ: Prentice Hall.
185. Smallman HS, John MS (2005) Naive Realism: Misplaced Faith in Realistic Displays. Ergonomics in Design: The Quarterly of Human Factors Applications 13: 6–13. https://doi.org/10.1177/106480460501300303
186. Spekat A, Kreienkamp F (2007) Somewhere over the Rainbow – Advantages and Pitfalls of Colourful Visualizations in Geosciences. Advances in Science and Research 1: 15–21. https://doi.org/10.5194/asr-1-15-2007
187. Spence R (2007) Information Visualization: Design for Interaction (2nd Edition). Harlow, Essex, UK: Pearson Education Limited.
188. Thomas JJ, Cook KA (2005) Illuminating the Path: The Research and Development Agenda for Visual Analytics. New York: IEEE.
189. Thoresen JC, Francelet R, Çöltekin A et al (2016) Not All Anxious Individuals Get Lost: Trait Anxiety and Mental Rotation Ability Interact to Explain Performance in Map-Based Route Learning in Men. Neurobiology of Learning and Memory 132: 1–8. https://doi.org/10.1016/j.nlm.2016.04.008
190. Thyng K, Greene C, Hetland R et al (2016) True Colors of Oceanography: Guidelines for Effective and Accurate Colormap Selection. Oceanography 29: 9–13. https://doi.org/10.5670/oceanog.2016.66
191. Tobler WR (1987) Experiments In Migration Mapping By Computer. Cartography and Geographic Information Science 14 (2): 155–163. https://doi.org/10.1559/152304087783875273
192. Tomaszewski B, MacEachren AM (2012) Geovisual Analytics to Support Crisis Management: Information Foraging for Geo-Historical Context. Information Visualization 11: 339–359. https://doi.org/10.1177/1473871612456122
193. Tominski C, Gladisch S, Kister U et al (2017) Interactive Lenses for Visualization: An Extended Survey. Computer Graphics Forum 36: 173–200. https://doi.org/10.1111/cgf.12871
194. Tominski C, Schumann H, Andrienko G et al (2012) Stacking-Based Visualization of Trajectory Attribute Data. IEEE Transactions on Visualization and Computer Graphics 18: 2565–2574. https://doi.org/10.1109/tvcg.2012.265
  195. Tomko M, Winter S (2019) Beyond Digital Twins-A Commentary. Environment and Planning B in press.  https://doi.org/10.1177/2399808318816992.Google Scholar
  196. Torrens PM (2015) Slipstreaming Human Geosimulation in Virtual Geographic Environments. Annals of GIS 21: 325–344.  https://doi.org/10.1080/19475683.2015.1009489.CrossRefGoogle Scholar
  197. Touya G, Hoarau C, Christophe S (2016) Clutter and Map Legibility in Automated Cartography: A Research Agenda. Cartographica: The International Journal for Geographic Information and Geovisualization 51: 198–207.  https://doi.org/10.3138/cart.51.4.3132.CrossRefGoogle Scholar
  198. Tsai TH, Chang HT, Yu MC et al (2016) Design of a Mobile Augmented Reality Application: An Example of Demonstrated Usability. In UAHCI 2016, 198–205. Cham, Switzerland: Springer International Publishing.  https://doi.org/10.1007/978-3-319-40244-4_19.Google Scholar
  199. Tufte ER (1983) The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.Google Scholar
  200. Tversky B, Morrison JB and Betrancourt M (2002) Animation: Can It Facilitate? International Journal of Human-Computer Studies 57: 247–262.  https://doi.org/10.1006/ijhc.2002.1017.CrossRefGoogle Scholar
  201. United Nations Population Division (2018) World Urbanization Prospects: The 2018 Revision. New York: United Nations.Google Scholar
  202. Uttal DH, Meadow NG, Tipton E et al (2013) The Malleability of Spatial Skills: A Meta-Analysis of Training Studies. Psychological Bulletin 139: 352–402.  https://doi.org/10.1037/a0028446.CrossRefGoogle Scholar
  203. van Wijk JJ, Nuij WAA (2003) Smooth and Efficient Zooming and Panning. In INFOVIS’03 Proceedings of the Ninth Annual IEEE Conference on Information Visualization, 15–22. Seattle, WA.Google Scholar
  204. Vert S, Dragulescu B, Vasiu R (2014) LOD4AR: Exploring Linked Open Data with a Mobile Augmented Reality Web Application. In 13th International Semantic Web Conference (ISWC 2014), 185–188. Trentino, Italy.Google Scholar
  205. Viard T, Caumon G, Lévy B (2011) Adjacent versus Coincident Representations of Geospatial Uncertainty: Which Promote Better Decisions? Computers & Geosciences 37 (4): 511–520.  https://doi.org/10.1016/j.cageo.2010.08.004.CrossRefGoogle Scholar
  206. Voinov A, Çöltekin A, Chen M et al (2018) Virtual Geographic Environments in Socio-Environmental Modeling: A Fancy Distraction or a Key to Communication? International Journal of Digital Earth 11: 408–419.  https://doi.org/10.1080/17538947.2017.1365961.CrossRefGoogle Scholar
  207. Weichelt B, Yoder A, Bendixsen C et al (2018) Augmented Reality Farm MAPPER Development: Lessons Learned from an App Designed to Improve Rural Emergency Response. Journal of Agromedicine 23: 284–296.  https://doi.org/10.1080/1059924x.2018.1470051.CrossRefGoogle Scholar
  208. Wen YN, Chen M, Lu GN et al (2013) Prototyping an Open Environment for Sharing Geographical Analysis Models on Cloud Computing Platform. International Journal of Digital Earth 6: 356–382.  https://doi.org/10.1080/17538947.2012.716861.CrossRefGoogle Scholar
  209. Wen YN, Chen M, Yue SS et al (2017) A Model-Service Deployment Strategy for Collaboratively Sharing Geo-Analysis Models in an Open Web Environment. International Journal of Digital Earth 10: 405–425.  https://doi.org/10.1080/17538947.2015.1131340.CrossRefGoogle Scholar
  210. Wickham H, Hofmann H, Wickham C et al (2012) Glyph-Maps for Visually Exploring Temporal Patterns in Climate Data and Models. Environmetrics 23: 382–393.  https://doi.org/10.1002/env.2152.CrossRefGoogle Scholar
  211. Widjaja I, Russo P, Pettit C et al (2014) Modeling Coordinated Multiple Views of Heterogeneous Data Cubes for Urban Visual Analytics. International Journal of Digital Earth 8: 558–578.  https://doi.org/10.1080/17538947.2014.942713.CrossRefGoogle Scholar
  212. Wood J, Slingsby A, Dykes J (2011) Visualizing the Dynamics of London’s Bicycle-Hire Scheme. Cartographica: The International Journal for Geographic Information and Geovisualization 46: 239–251.  https://doi.org/10.3138/carto.46.4.239.CrossRefGoogle Scholar
  213. Wu HK, Lee WY, Chang HY et al (2013) Current Status, Opportunities and Challenges of Augmented Reality in Education. Computers & Education 62: 41–49.  https://doi.org/10.1016/j.compedu.2012.10.024.CrossRefGoogle Scholar
  214. Wüest R, Nebiker S (2018) Geospatial Augmented Reality for the Interactive Exploitation of Large-Scale Walkable Orthoimage Maps in Museums. Proceedings of the ICA 1: 1–6.  https://doi.org/10.5194/ica-proc-1-124-2018.CrossRefGoogle Scholar
  215. Xu BL, Lin H, Chiu LS et al (2011) Collaborative Virtual Geographic Environments: A Case Study of Air Pollution Simulation. Information Sciences 181: 2231–2246.  https://doi.org/10.1016/j.ins.2011.01.017.CrossRefGoogle Scholar
  216. Yang YL, Dwyer T, Jenny B et al (2019) Origin-Destination Flow Maps in Immersive Environments. IEEE Transactions on Visualization and Computer Graphics 25: 693–703.  https://doi.org/10.1109/tvcg.2018.2865192.CrossRefGoogle Scholar
  217. Yue SS, Chen M, Wen YN et al (2016) Service-Oriented Model-Encapsulation Strategy for Sharing and Integrating Heterogeneous Geo-Analysis Models in an Open Web Environment. ISPRS Journal of Photogrammetry and Remote Sensing 114: 258–273.  https://doi.org/10.1016/j.isprsjprs.2015.11.002.CrossRefGoogle Scholar
  218. Zhang F, Hu MY, Che WT et al (2018) Framework for Virtual Cognitive Experiment in Virtual Geographic Environments. ISPRS International Journal of Geo-Information 7: 36.  https://doi.org/10.3390/ijgi7010036.CrossRefGoogle Scholar
  219. Zheng PB, Tao H, Yue SS et al (2017) A Representation Method for Complex Road Networks in Virtual Geographic Environments. ISPRS International Journal of Geo-Information 6: 372.  https://doi.org/10.3390/ijgi6110372.CrossRefGoogle Scholar
  220. Zhu J, Zhang H, Yang XF et al (2015) A Collaborative Virtual Geographic Environment for Emergency Dam-Break Simulation and Risk Analysis. Journal of Spatial Science 61: 133–155.  https://doi.org/10.1080/14498596.2015.1051148.CrossRefGoogle Scholar

Copyright information

© The Author(s) 2020

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Arzu Çöltekin (1) (Email author)
  • Amy L. Griffin (2)
  • Aidan Slingsby (3)
  • Anthony C. Robinson (4)
  • Sidonie Christophe (5)
  • Victoria Rautenbach (6)
  • Min Chen (7)
  • Christopher Pettit (8)
  • Alexander Klippel (9)
  1. Institute for Interactive Technologies, FHNW University of Applied Sciences and Arts Northwestern Switzerland, Brugg-Windisch, Switzerland
  2. School of Science, RMIT University, Melbourne, Australia
  3. Department of Computer Science, City University of London, London, UK
  4. Department of Geography, GeoVISTA Center, The Pennsylvania State University, University Park, USA
  5. University of Paris-Est, LASTIG GEOVIS, IGN, ENSG, Saint-Mandé, France
  6. Department of Geography, Geoinformatics and Meteorology, University of Pretoria, Pretoria, South Africa
  7. Key Laboratory of Virtual Geographic Environment, Ministry of Education of the People’s Republic of China, Nanjing Normal University, Nanjing, China
  8. Cities Analytics Lab, UNSW, Sydney, Australia
  9. Department of Geography, The Pennsylvania State University, University Park, USA