Geospatial Information Visualization and Extended Reality Displays
In this chapter, we review and summarize the current state of the art in geovisualization and extended reality (i.e., virtual, augmented and mixed reality), covering a wide range of approaches to these subjects in domains that are related to geographic information science. We introduce the relationship between geovisualization, extended reality and Digital Earth, provide some fundamental definitions of related terms, and discuss the introduced topics from a human-centric perspective. We describe related research areas including geovisual analytics and movement visualization, both of which have attracted wide interest from multidisciplinary communities in recent years. The last few sections describe the current progress in the use of immersive technologies and introduce the spectrum of terminology on virtual, augmented and mixed reality, as well as proposed research concepts in geographic information science and beyond. We finish with an overview of “dashboards”, which are used in visual analytics as well as in various immersive technologies. We believe the chapter covers important aspects of visualizing and interacting with current and future Digital Earth applications.
Keywords: Visualization, Geovisualization, User-centric design, Cognition, Perception, Visual analytics, Maps, Temporal visualization, Immersive technologies, Virtual reality, Augmented reality, Mixed reality, Extended reality
A future, fully functional Digital Earth is essentially what we understand as a (geo)virtual reality environment today: A multisensory simulation of the Earth as-is and how it could be, so we can explore it holistically, with its past, present, and future made available to us in any simulated form we wish (Gore 1998; Grossner et al. 2008). The concept of Digital Earth can be associated with the emergence of the (recently popularized) concept of a ‘digital twin’, conceptualized as a digital replica of a physical entity. Although several researchers have expressed skepticism about the appropriateness and precision of the term ‘digital twin’ in recent publications (Batty 2018; Tomko and Winter 2019), it appears that the broad usage of the term refers to a reasonably rigorous attempt to digitally replicate real-world objects and phenomena with the highest fidelity possible. Such efforts currently exist for objects at smaller scales, such as wind turbines, engines, and bridges; but they are also envisioned for humans and other living beings. A digital twin for an entire city is more ambitious and requires information on the interoperability and connectivity of every object. A true ‘all containing’ Digital Earth is still unrealized and is more challenging to construct. However, as Al Gore (1998) noted in his original 1998 proposal for a Digital Earth, making sense of the information a Digital Earth contains is even more difficult than its construction. A key capability that supports sensemaking is the ability to visualize geospatial information. There are countless ways to visualize geospatial information. For thousands of years, humankind has used maps to understand the environment and find our way home. Today, there are many visual methods for depicting real, simulated, or fictional geospatial ‘worlds’.
This chapter provides an overview of key aspects of visualizing geospatial information, including the basic definitions and organization of visualization-related knowledge in the context of a future Digital Earth. As understanding related human factors is necessary for any successful implementation of a visualization within the Digital Earth framework, we include a section on cognition, perception, and user-centered approaches to (geo)visualization. Because we also typically pose and answer analytical questions when we visualize information, we provide an overview of visual analytics, paying special attention to visualizing and analyzing temporal phenomena including movement, because a Digital Earth would clearly be incomplete if it comprised only static snapshots of phenomena. After this examination of broader visualization-related concepts, because we conceptualize Digital Earth as a virtual environment, we pay special attention to how augmented (AR), mixed (MR), and virtual reality (VR) environments can be used to enable a Digital Earth in the section titled “Immersive Technologies—From Augmented to Virtual Reality”. The Digital Earth framework is relevant to many application areas, and one of the foremost uses of the framework is in the domain of urban science. This is unsurprising given that 55 percent of the world’s population now lives in urban areas, with the proportion expected to increase to two-thirds by 2050 (United Nations Population Division 2018). Urban environments are complex, and their management requires many decisions whose effects can cause changes in other parts of the urban environment, making it important for decision makers to consider these potential consequences. One way of providing decision makers with an overview of urban environments is through dashboards. Therefore, we feature “dashboards” and discuss the current efforts to understand how they fit within the construct of Digital Earth.
We finish the chapter with a few concluding remarks and future directions.
7.2 Visualizing Geospatial Information: An Overview
Cartography is the process by which geospatial information has typically been visualized (especially in the pre-computer era), and the science and art of cartography remain relevant in the digital era. Cartographic visualizations are (traditionally) designed to facilitate communication between the mapmaker and map users. The concept of geovisualization (MacEachren 1994; Çöltekin et al. 2017, 2018) emerged as a new approach to making sense of geospatial information in the digital era, specifically through the development of digital tools that help map readers interact with this information, and it widened our understanding of how maps could help make sense of a Digital Earth when used in an exploratory manner in addition to their role in communication. Thus, geovisualization is conceived as a process rather than a product, although the term is also commonly used to refer to any visual display that features geospatial information (maps, images, 3D models, etc.). In the geovisualization process, the emphasis is on information exploration and sensemaking, where scientists and other experts design and use “visual geospatial displays to explore data, and through that exploration to generate hypotheses, develop problem solutions and construct knowledge” (Kraak 2003a, p. 390) about a geographic location or geographic phenomenon. How these displays (and associated analytical tools) could be designed and used became a focus of scientific research within the International Cartographic Association’s (ICA) Commission on Visualization and Virtual Environments, whose leaders described the term geovisualization as the “theory, methods and tools for visual exploration, analysis, synthesis, and presentation of geospatial data” (MacEachren and Kraak 2001, p. 3). Designing tools to support visualizing the geospatial information contained in a Digital Earth requires thinking about the data, the representation of those data, and how users interact with those representations.
Importantly, it requires the design of visual displays of geospatial information that can combine heterogeneous data from any source at a range of spatiotemporal scales (Nöllenburg 2007). To facilitate the ability to think spatially and infer spatiotemporal knowledge from a visualization, the visualization must also be usable, support users’ tasks and needs, and enable users to interact with the data (Fuhrmann et al. 2005). Visualizations of geospatial data connect people, maps, and processes, “leading to enlightenment, thought, decision making and information satisfaction” (Dykes et al. 2005a, p. 4). Below, we describe three key areas of knowledge that support the design of visualizations with the goal of helping users make sense of the information that a Digital Earth contains. The data that are available for incorporation in a Digital Earth are increasingly heterogeneous and more massive than before. These complex, large datasets include both spatial and aspatial data, all of which must be combined, ‘hybridized’ (i.e., synthesized in meaningful ways), and represented within a visualization environment. Users expect to be able to visualize complex spatiotemporal phenomena to analyze and understand spatiotemporal dynamics and systems. To support them, user interaction and interfaces must be considered so that intuitive and innovative ways to explore visual displays can be developed and incorporated. This is especially relevant to virtual and augmented reality, where it facilitates exploring data and experiencing spaces ‘without hassle’.
7.2.1 Data
A key goal of geovisualization is “to support and advance the individual and collective knowledge of locations, distribution and interactions in space and time” (Dykes et al. 2005b, p. 702). This remains a challenge due to increases in the diversity and quantity of data, users, and available visualization techniques and technologies (Griffin and Fabrikant 2012). The age of the data deluge (Bell et al. 2009) resulted in the generation of large quantities of spatial data (vector databases, maps, imagery, 3D models, numeric models, point clouds, etc.), as well as aspatial data (texts, stories, web data, photographs, etc.) that can be spatialized (Skupin and Buttenfield 1997). The ‘covisualization’ of those data, such as in multiple coordinated views (or linked views, see Roberts 2007), is difficult due to their heterogeneity. This heterogeneity can be in the data’s source, scale, content, precision, dimension, and/or temporality. The visual integration of such heterogeneous data requires the careful design of graphical representations to preserve the legibility of the data (Hoarau and Christophe 2017).
Bertin’s seminal work (1967/1983) provides a conceptual framework, the visual variables, that allows us to consider the graphical representation of geospatial information at a fundamental level (although it is important to note that Bertin’s propositions were not evidence-based; they rested on intuition and qualitative reasoning). Originally, Bertin proposed seven variables: position, size, shape, color value, color hue, orientation, and texture. Later work extended Bertin’s framework to include dynamic variables such as movement, duration, frequency, order, rate of change, and synchronization (Carpendale 2003; DiBiase et al. 1992; MacEachren 1995) and variables for 3D displays such as perspective height (Slocum et al. 2008), camera position, and camera orientation (Rautenbach et al. 2015). Visual variables remain relevant as a core concept of visualization research and have generated renewed interest in digital-era research questions, including in fields beyond geovisualization (e.g., Mellado et al. 2017). Notably, the information visualization community has also embraced Bertin’s visual variables (e.g., Spence 2007). Visual complexity is a major challenge in designing representations of geospatial data, and innovative measures and analysis methods have been proposed to address this problem (Fairbairn 2006; Li and Huang 2002; MacEachren 1982; Schnur et al. 2010, 2018; Touya et al. 2016). Digital Earth’s ‘big data’ challenges these efforts, stretching the capacity of existing tools to handle and process such datasets as well as the capacity of visualization users to read, understand, and analyze them (Li et al. 2016). One application area that is particularly afflicted by visual complexity is research on the urban and social dynamics that drive spatiotemporal change in cities (Brasebin et al. 2018; Ruas et al. 2011).
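To make the idea of visual variables concrete, the sketch below (a hypothetical illustration, not an implementation from the literature) encodes a data record by assigning each attribute to one of Bertin’s variables: position, size, and color value. All scale domains, ranges, and field names are arbitrary example choices.

```python
# Illustrative sketch: mapping data attributes onto three of Bertin's
# visual variables (position, size, color value). Domains and ranges
# below are arbitrary example values, not recommendations.

def linear_scale(value, domain, rng):
    """Map a data value from its domain onto a visual-variable range."""
    d0, d1 = domain
    r0, r1 = rng
    t = (value - d0) / (d1 - d0)
    return r0 + t * (r1 - r0)

def encode_point(record):
    """Encode one data record as a visual mark specification."""
    return {
        # position: the two spatial coordinates (planar map position)
        "x": record["lon"],
        "y": record["lat"],
        # size: population scaled to a symbol radius (e.g., in pixels)
        "size": linear_scale(record["population"], (0, 1_000_000), (2, 40)),
        # color value: rate scaled to lightness (1.0 = light, 0.0 = dark)
        "value": linear_scale(record["rate"], (0.0, 1.0), (1.0, 0.0)),
    }

mark = encode_point({"lon": 8.55, "lat": 47.37, "population": 500_000, "rate": 0.25})
```

Each attribute occupies its own channel, so a reader can decode location, magnitude, and rate from a single symbol; this separation of channels is precisely what makes Bertin’s framework useful for reasoning about graphic design.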
Developing approaches to represent spatiotemporal phenomena has been a long-standing challenge and many options have been investigated over the years (Andrienko and Andrienko 2006). Despite some progress, many questions remain (see the “Visualizing Movement” section). Some potential solutions such as using abstraction and schematization when visualizing urban datasets in Digital Earth can be found in the fields of data and information visualization (Hurter et al. 2018).
Another key aspect of visual representation design for geospatial data in Digital Earth applications involves how to deal with uncertainty. Uncertainty, such as that related to data of past or present states of a location or models of potential future states, remains difficult to represent in visual displays, and this is a major challenge for geovisualization designers. Which visual variables might aid in representing uncertainty? This question has been explored and tested to some degree (e.g., MacEachren et al. 2012; Slocum et al. 2003; Viard et al. 2011), although the majority of research has focused on developing new visualization methods rather than testing their efficacy (Kinkeldey et al. 2014). There are still no commonly accepted strategies for visualizing uncertainty that are widely applied. MacEachren (2015) suggests that this is because data uncertainty is only one source of uncertainty that affects reasoning and decision making and argues that taking a visual analytics approach (see the “Geovisual Analytics” section) might be more productive than a communication approach. Hullman (2016) notes the difficulty of evaluating the role of uncertainty in decision making as a major barrier to developing empirically validated techniques to represent uncertainty.
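One frequently explored strategy from the work cited above pairs the data value with a second visual variable that carries its uncertainty. The sketch below is a hypothetical example of this idea, using transparency as the uncertainty channel; the specific mapping is an assumption for illustration, not a validated design.

```python
# Illustrative sketch: encoding a value and its uncertainty with two
# separate visual variables. Here uncertainty fades the symbol out
# (alpha); the exact mapping is a hypothetical example.

def encode_with_uncertainty(value, uncertainty):
    """Return a mark spec where uncertainty reduces opacity.

    value       -- data value in [0, 1], mapped to color value (lightness)
    uncertainty -- in [0, 1]; 0 = fully certain, 1 = fully uncertain
    """
    if not (0.0 <= uncertainty <= 1.0):
        raise ValueError("uncertainty must be in [0, 1]")
    return {
        "lightness": 1.0 - value,    # darker symbols = higher values
        "alpha": 1.0 - uncertainty,  # uncertain estimates fade out
    }

certain = encode_with_uncertainty(0.8, 0.1)
shaky = encode_with_uncertainty(0.8, 0.9)
# Same data value, but the uncertain estimate is rendered far more faintly.
```

A design like this keeps the value readable while signaling which symbols deserve less trust, although, as the evaluation literature cited above notes, whether such encodings actually improve decisions remains an open empirical question.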
7.2.2 User Interaction and Interfaces
Since geovisualization environments are expected to provide tools and interaction modalities that support data exploration, user interaction and interface design are important topics for geovisualization. The visual display is an interface for the information, so users need effective ways to interact with geovisualization environments. Interaction modalities in geovisualization environments are ideally optimized or customizable for the amount of data, display modes, complexity of spaces or phenomena, and diversity of users (e.g., Hoarau and Christophe 2017). Interaction tools and modalities are a core interest in human-computer interaction (e.g., Çöltekin et al. 2017) and, in connection with visualization, they are often investigated with concepts explored in the information visualization domain (Hurter 2015; van Wijk and Nuij 2003), among others. Interaction and how it is designed are especially relevant for virtual and augmented reality approaches to visualization (see the “Immersive Technologies—From Augmented to Virtual Reality” section). Some form of interaction is required for most modern 2D displays, and it has a very important role in supporting exploration tasks, but seamless interaction is a necessity in a virtual or augmented world. Without it, the immersiveness of the visualization—a critical aspect of both VR and AR—is negatively affected. One approach that is notably at the intersection of representation design and user interaction design is a set of methods that are (interactively) nonuniform or space-variant. An example is displays in which the resolution or level of detail varies across the display in real time according to a predefined criterion. The best known among these nonuniform display types are the focus + context and fisheye displays (dating back to the early 1990s, e.g., see Robertson and Mackinlay 1993). 
Both the focus + context and fisheye displays combine an overview at the periphery with detail at the center, varying the level of detail and/or scale across a single display. A variation on the focus + context display has been named “context-adaptive lenses” (Pindat et al. 2012). Conceptually related to these approaches, in gaze-contingent displays (GCDs), the level of detail (and other selected visual variables) is adapted across the display space based on where the user is looking. This approach draws on perceptual models of the visual field, mimicking the human visual system. GCDs were proposed as early as the 1970s (see, e.g., Just and Carpenter 1976) and have continued to attract research interest over time as the technology developed (e.g., Bektas et al. 2015; Duchowski and Çöltekin 2007; Duchowski and McCormick 1995). For more discussion of “interactive lenses” in visualization, see the recent review by Tominski et al. (2017). Various other space-variant visualization approaches have been proposed in which, rather than varying the scale or level of detail, the levels of realism or generalization are varied across the display to support focus + context interactions with the data. These approaches aim to smoothly navigate between data and its representation at one scale (e.g., Hoarau and Christophe 2017), between different levels of generalization across scales (e.g., Dumont et al. 2018), or between different rendering styles (Boér et al. 2013; Semmo and Döllner 2014; Semmo et al. 2012). Mixed levels of realism have been proposed for regular maps used for data exploration purposes (Jenny et al. 2012) as well as for VR. In VR, egocentric-view-VR representations with selective photorealism (a mix of abstract and photorealistic representations) have been tested in the context of route learning, memory, and aging and have been shown to benefit users (Lokka et al. 2018; Lokka and Çöltekin 2019).
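The core of a fisheye display can be expressed compactly. The sketch below implements the classic radial distortion g(x) = (d + 1)x / (dx + 1), which magnifies space near a focus point and compresses it toward the lens boundary; the distortion factor and lens radius are example parameters.

```python
import math

# Minimal sketch of a fisheye (focus + context) lens: points near the
# focus are magnified, points far away are compressed, and the lens
# boundary is preserved so the surrounding context stays in place.

def fisheye(point, focus, max_dist, d=3.0):
    """Radially displace a 2D `point` around `focus` within `max_dist`."""
    dx, dy = point[0] - focus[0], point[1] - focus[1]
    r = math.hypot(dx, dy)
    if r == 0 or r >= max_dist:
        return point  # the focus itself and points outside the lens are unchanged
    x = r / max_dist                  # normalized distance in (0, 1)
    g = (d + 1) * x / (d * x + 1)     # magnified near 0, compressed near 1
    scale = g * max_dist / r
    return (focus[0] + dx * scale, focus[1] + dy * scale)

# A point close to the focus is pushed outward (locally magnified) ...
near = fisheye((0.1, 0.0), (0.0, 0.0), max_dist=1.0)
# ... while points at the lens boundary keep their positions.
edge = fisheye((1.0, 0.0), (0.0, 0.0), max_dist=1.0)
```

Applying this transform to every vertex of a map yields the familiar fisheye view: detail at the center, compressed overview at the periphery, and no content lost.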
Decisions on how to combine data to design representations and user interactions should be informed by our understanding of how visualization users process visual information and combine it with their existing knowledge about the location or phenomenon to make sense of what they see. Thus, building effective visualizations of geospatial information for a Digital Earth requires an understanding of its users, their capabilities and their constraints, which we describe in the next section.
7.3 Understanding Users: Cognition, Perception, and User-Centered Design Approaches for Visualization
A primary way that humans make sense of the world—the real world, an “augmented world” with additional information overlaid, or a virtual world (such as a simulation)—is by making sense of what we see. Because vision is so important to human sense-making, visualizations are major facilitators of that process and provide important support for cognition. When effectively designed, visualizations enable us to externalize some of the cognitive burden to something we can (re)utilize through our visual perception (Hegarty 2011; Scaife and Rogers 1996). However, our ability to see something—in the sense of understanding it—is bounded by our perceptual and cognitive limits. Thus, any visualizations we design to help work with and understand geospatial information must be developed with the end user in mind, taking a user-centered design (UCD) approach (Gabbard et al. 1999; Huang et al. 2012; Jerald 2015; Lloyd and Dykes 2011; Robinson et al. 2005). A UCD approach is useful for understanding perceptual and cognitive limits and for adapting the displays to these limits. It also helps to evaluate the strengths of new methods of interacting with visualizations (Roth et al. 2017). For example, a user-centered approach has been used to demonstrate that an embodied data axis aids in making sense of multivariate data (Cordeil et al. 2017). Similarly, UCD was useful in determining which simulated city environments lead to the greatest sense of immersion to support participatory design processes for smart cities (Dupont et al. 2016), assuming that immersion has a positive effect in this context.
7.3.1 Making Visualizations Work for Digital Earth Users
7.3.1.1 Managing Information
As briefly noted earlier, a key benefit—and a key challenge—for visualization in the Digital Earth era is related to the amount of data that is at our fingertips (Çöltekin and Keith 2011). With so much available data, how can we make sense of it all? What we need is the right information in the right place at the right time for the decisions we are trying to make or the activities we are trying to support. Thus, understanding the context in which information and visualizations of information are going to be used (Griffin et al. 2017)—what data, by whom, for what purpose, on what device—is fundamental to designing appropriate and effective visualizations. For example, ubiquitous sensor networks and continuous imaging of the Earth’s surface allow for us to collect real-time or near real-time spatial information on fires and resources available to fight fires, and firefighters would benefit from improved situation awareness (Weichelt et al. 2018). However, which information should we show them, and how should it be shown? Are there environmental factors that affect what information they can perceive and understand from an AR system that visualizes important fire-related attributes (locations of active burns, wind speed and direction) and firefighting parameters (locations of teammates and equipment, locations of members of the public at risk)? How much information is too much to process and use effectively at a potentially chaotic scene?
A great strength of visualization is its ability to abstract: to remove detail and to reveal the essence. In that vein, realism as a display principle has been called “naive realism” because realistic displays sometimes impair user performance yet users still prefer them (e.g., Lokka et al. 2018; Smallman and John 2005). The questions of how much abstraction is needed (Boér et al. 2013; Çöltekin et al. 2015) and what level of realism should be employed (Brasebin et al. 2018; Ruas et al. 2011) do not have clear-cut answers. In some cases, we need to follow the “Goldilocks principle” because too much or too little realism is suboptimal. As Lokka and Çöltekin (2019) demonstrated, if there is too much realism, we may miss important details because we cannot hold all the details in our memory, whereas if there is too little, we may find it difficult to learn environments because there are too few ‘anchors’ to which human memory can link new knowledge of the environment. These issues of how to abstract data and how to visualize it effectively for end users are growing in the era of big data and Digital Earth.
7.3.1.2 Individual and Group Differences
Nearly two decades ago, Slocum et al. (2001) identified individual and group differences as a research priority among the many “cognitive and usability issues in geovisualization” (as the paper was also titled). Evidence both before and since their 2001 paper shows that humans process information in a range of ways. Such differences are often based on expertise or experience (e.g., Griffin 2004; Çöltekin et al. 2010; Ooms et al. 2015) or spatial abilities (e.g., Liben and Downs 1993; Hegarty and Waller 2005), and are sometimes based on age (Liben and Downs 1993; Lokka et al. 2018), gender (Newcombe et al. 1983), culture (Perkins 2008), confidence and attitudes (e.g., Biland and Çöltekin 2017), or anxiety (Thoresen et al. 2016), among other factors. For brevity, we do not expand on the root causes of these differences, as this would require a careful treatment of the “nature vs. nurture” debate. We know that many of the shortcomings people experience can be remedied to different degrees through interventions and/or training. For example, spatial abilities, as measured in standardized tests, can be enhanced by training (Uttal et al. 2013), and expertise/experience and education affect the ways that people process information (usually in improved ways, although these forms of knowledge can also introduce biases). Many of the above factors could be considered cognitive factors and may be correlated in several ways. A key principle arising from the awareness that individuals process information differently and that their capacities to do so can vary (whatever the reason) is that the “designer is not the user” (Richter et al. 2015, p. 4). A student of geovisualization (we include experts in this definition) is a self-selected individual who is likely already interested in visual information. When education is added to this interest, it is very likely that a design that a geovisualization expert finds easy to use (or “user friendly”, a term used liberally by many in the technology sector) will not be easy to use or user friendly for an inexperienced user or a younger/older user.
Related to the individual and group differences as described above, another key consideration is populations with special needs. As in any information display, visualization and interaction in a geovisualization software environment should ideally be designed with accessibility in mind. For example, visually impaired people can benefit from multimedia augmentation on maps and other types of visuospatial displays (Brock et al. 2015; Albouys-Perrois et al. 2018). Another accessibility issue linked to (partial) visual impairment that is widely studied in geovisualization is color vision impairment. This is because color is (very) often used to encode important information and color deficiency is relatively common, with up to eight percent of the world’s population experiencing some degree of impairment (e.g., Brychtová and Çöltekin 2017a). Because it is one of the more dominant visual variables (Garlandini and Fabrikant 2009), cartography and geovisualization research has contributed to color research for many decades (Brewer 1994; Brychtová and Çöltekin 2015; Christophe 2011; Harrower and Brewer 2003). Two of the most popular color-related applications in use by software designers were developed by cartography/geovisualization researchers: ColorBrewer (Harrower and Brewer 2003) for designing/selecting color palettes and ColorOracle (Jenny and Kelso 2007) for simulating color blindness. Color is a complex and multifaceted phenomenon even for those who are not affected by color vision impairment. For example, there are perceptual thresholds for color discrimination that affect everyone (e.g., Brychtová and Çöltekin 2015, 2017b), and how colors are used and organized contributes to the complexity of maps (e.g., Çöltekin et al. 2016a, b). Color-related research in geographic information science also includes examination of the efficacy of color palettes to represent geophysical phenomena (Spekat and Kreienkamp 2007; Thyng et al. 2016) or natural color maps (Patterson and Kelso 2004). We include color in the above discussion because it is one of the strongest visual variables. However, color is not the only visual variable of interest to geovisualization researchers. Many other visual variables have been examined and assessed in user studies. For example, the effects of size (Garlandini and Fabrikant 2009), position, line thickness, directionality, color coding (Monmonier 2018; Brügger et al. 2017), shading, and texture (Biland and Çöltekin 2017; Çöltekin and Biland 2018) on map reading efficiency have been examined.
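The perceptual-threshold work mentioned above can be made concrete: two map colors can be checked for discriminability by measuring their distance in the CIELAB color space, which is designed to be approximately perceptually uniform. The sketch below uses the standard sRGB (D65) to CIELAB conversion and the simple CIE76 distance; the threshold of 10 ΔE units is an illustrative value, not an empirically recommended one.

```python
import math

# Sketch: sRGB -> CIELAB conversion and the CIE76 color difference,
# used here to check whether two class colors are likely discriminable.

def _srgb_to_linear(c):
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _f(t):
    delta = 6.0 / 29.0
    return t ** (1.0 / 3.0) if t > delta ** 3 else t / (3 * delta ** 2) + 4.0 / 29.0

def rgb_to_lab(rgb):
    """Convert an 8-bit sRGB triple to CIELAB (D65 white point)."""
    r, g, b = (_srgb_to_linear(c) for c in rgb)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    fx, fy, fz = _f(x / 0.95047), _f(y / 1.0), _f(z / 1.08883)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e(rgb1, rgb2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return math.dist(rgb_to_lab(rgb1), rgb_to_lab(rgb2))

# Two similar grays: a small ΔE warns that map readers may not be able
# to tell the corresponding classes apart.
difference = delta_e((120, 120, 120), (130, 130, 130))
distinct = difference > 10.0  # example threshold in ΔE units
```

A palette-design tool could run such a check over every pair of class colors and flag pairs that fall below an empirically derived threshold.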
It is not possible to provide an in-depth review of all the user studies in the geovisualization domain within the scope of this chapter. However, it is worth noting that when a design maximizes accessibility, all users benefit: the consequently improved usability of visuospatial displays enables professionally diverse groups to access and create their own visualizations, for example, city planners, meteorologists (e.g., Helbig et al. 2014) and ecoinformatics experts (e.g., Pettit et al. 2010), all of whom would help support a ‘full’ future Digital Earth.
7.4 Geovisual Analytics
The science of analytical reasoning with spatial information using interactive visual interfaces is referred to as geovisual analytics (Andrienko et al. 2007; Robinson 2017). This area of GIScience emerged alongside the development of visual analytics, which grew out of the computer science and information visualization communities (Thomas and Cook 2005). A key distinction of geovisual analytics from its predecessor field of geovisualization is its focus on support for analytical reasoning and the application of computational methods to discover interesting patterns in massive spatial datasets. A primary aim of geovisualization is to support data exploration. Geovisual analytics aims to go beyond data exploration to support complex reasoning processes, and it pursues this aim by coupling computational methods with interactive visualization techniques. In addition to the development of new technical approaches and analytical methods, the science of geovisual analytics also includes research aimed at understanding how people reason with, synthesize, and interact with geographic information to inform the design of future systems. Progress in this field has been demonstrated on each of these fronts, and future work is needed to address the new opportunities and challenges presented by the big data era and to meet the vision proposed for Digital Earth.
7.4.1 Progress in Geovisual Analytics
Early progress in geovisual analytics included work to define the key research challenges for the field. Andrienko et al. (2007) called for decision making support using space-time data, computational pattern analysis, and interactive visualizations. This work embodied a shift from the simpler goal of supporting data exploration in geovisualization toward new approaches in geovisual analytics that could influence or direct decision making in complex problem domains. Whereas the goal in geovisualization may have been to prompt the development of new hypotheses, the goal in geovisual analytics has become to prompt decisions and actions. To accomplish this goal, GIScience researchers began to leverage knowledge from intelligence analysis and related domains in which reasoning with uncertain information is required to make decisions (Heuer 1999; Pirolli and Card 2005). Simultaneously, there were efforts to modify and create new computational methods to identify patterns in large, complex data sources. These methods were coupled to visual interfaces to support interactive engagement with users. For example, Chen et al. (2008) combined the SaTScan space-time cluster detection method with an interactive map interface to help epidemiologists understand the sensitivity of the SaTScan approach to model parameter changes and make better decisions about when to act on clusters that have been detected. Geovisual analytics has been applied in a wide range of domain contexts, usually targeting data sources and problem areas that are difficult to approach without leveraging a combination of computational, visual, and interactive techniques. Domains of interest have included social media analytics (Chae et al. 2012; Kisilevich et al. 2010), crisis management (MacEachren et al. 2011; Tomaszewski and MacEachren 2012), and movement data analysis (Andrienko et al. 2011; Demšar and Virrantaus 2010).
The following section on “Visualizing Movement” includes a deeper treatment of the approaches to (and challenges of) using visual analytics for dynamic phenomena.
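The coupling of computational pattern detection with an interactive interface described above can be sketched with a toy example. The function below is a naive space-time neighbor count that flags candidate clusters; it is a stand-in for illustration, not the SaTScan method cited above. Its parameters (radius, time window, threshold) are exactly the kind of knobs an interactive geovisual analytics interface would expose so an analyst can probe the sensitivity of detected clusters.

```python
from math import hypot

# Toy sketch of a computational step behind a geovisual analytics tool:
# flag events that have many space-time neighbors. (Not SaTScan; a
# simplified stand-in whose parameters an interface could expose.)

def flag_space_time_clusters(events, radius, time_window, min_neighbors):
    """events: list of (x, y, t) tuples. Returns indices of dense events."""
    flagged = []
    for i, (xi, yi, ti) in enumerate(events):
        neighbors = sum(
            1
            for j, (xj, yj, tj) in enumerate(events)
            if j != i
            and hypot(xi - xj, yi - yj) <= radius
            and abs(ti - tj) <= time_window
        )
        if neighbors >= min_neighbors:
            flagged.append(i)
    return flagged

events = [(0, 0, 0), (0.1, 0, 1), (0, 0.1, 1), (5, 5, 0), (0.1, 0.1, 2)]
dense = flag_space_time_clusters(events, radius=0.5, time_window=2, min_neighbors=2)
# The four nearby, near-simultaneous events are flagged; the outlier is not.
```

In a geovisual analytics system, the flagged indices would drive symbol highlighting on a linked map and timeline, and re-running the detection as the analyst drags a slider is what turns a batch algorithm into an instrument for reasoning.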
A concurrent thread of geovisual analytics research has focused on the design and evaluation of geovisual analytics tools. In addition to the development of new computational and visual techniques, progress must also be made in understanding how geovisual analytics systems aid (or hinder) the analytical reasoning process in real-world decision making contexts (Çöltekin et al. 2015). Approaches to evaluating geovisual analytics include perceptual studies (Çöltekin et al. 2010), usability research (Kveladze et al. 2015), and in-depth case study evaluations of expert use (Lloyd and Dykes 2011). Additionally, new geovisual analytics approaches have been developed to support such evaluations (Andrienko et al. 2012; Demšar and Çöltekin 2017), as methods such as eye tracking can generate very large space-time datasets that require combined computational and interactive visual analysis to interpret.
7.4.2 Big Data, Digital Earth, and Geovisual Analytics
The next frontier for geovisual analytics is to address the challenges posed by the rise of big spatial data. Big data are often characterized by a set of so-called V’s, corresponding to the challenges associated with volume, velocity, variety, and veracity, among others (Gandomi and Haider 2015; Laney 2001). Broadly, geovisual analytics approaches to handling big spatial data need to address problems associated with analysis, representation, and interaction (Robinson et al. 2017), similar to the challenges faced by geovisualization designers. New computational methods are needed to support real-time analysis of big spatial data sources. Representations must be developed to render the components and characteristics of big spatial data through visual interfaces (Çöltekin et al. 2017). We also need to learn more about how to design interactive tools that make sense to end users as they manipulate and learn from big spatial data (Griffin et al. 2017; Roth et al. 2017).
The core elements behind the vision for Digital Earth assume that big spatial data will exist for every corner of our planet, in ways that support interconnected problem solving (Goodchild et al. 2012). Even if this vision is achieved (challenging as that may seem), supporting the analytical goals of Digital Earth will require the development of new geovisual analytics tools and techniques. Major issues facing humanity today regarding sustainable global development and mitigating the impacts of climate change necessarily involve the fusion of many different spatiotemporal data sources, the integration of predictive models and pattern recognition techniques, and the translation of as much complexity as is possible into visual, interactive interfaces to support sensemaking and communication.
7.5 Visualizing Movement
Note that most techniques for visualizing movement on the Earth’s surface were developed as 2D representations. However, many of these representations can be placed on the surface of a 3D globe, and we can identify where the 3D environment may offer benefits and disadvantages. Notably, one disadvantage is that 3D environments often result in occlusion, which interaction can only partially address (Borkin et al. 2011; Dall’Acqua et al. 2013). Below, we begin with visual depictions of individual journeys and progressively review aggregated movement data representations, which are more scalable and can synthesize and reveal general movement patterns that individual trajectories cannot.
7.5.1 Trajectory Maps: The Individual Journey
Trajectory maps depict individual movement by showing the geometrical traces of individual journeys on a map. Where there are few trajectories, trajectory maps can clearly illustrate specific journeys and facilitate visual comparison within an individual’s journeys or between journeys undertaken by different individuals. An excellent book by Cheshire and Uberti (2017) uses a whole range of static visualization methods, including trajectory maps, to illustrate the movements of various types of animals. As well as presenting movement traces, trajectory maps can be a useful precursor to more substantial and directed analyses (Borkin et al. 2011; Dall’Acqua et al. 2013).
In summary, trajectory maps are good for showing detailed examples of journeys but do not scale well to more than a few trajectories. Characteristics of these individual trajectories can be explored through multiple coordinated views with brushing. Trajectories are often displayed in maps in 2D, but 3D space-time cubes are also common. Overplotting many trajectories with semitransparent lines can help indicate parts of routes that are commonly taken, and a selected trajectory can be highlighted using a visual variable if there is a reason to emphasize a particular trajectory. In addition, trajectories can be filtered, grouped, and visually compared. For higher-level pattern identification, it is helpful to perform some aggregation, as discussed in the next section.
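The effect of overplotting semitransparent trajectories can also be computed directly: counting how many journeys cross each map cell yields the same “commonly taken routes” signal that accumulated ink conveys visually. Below is a minimal Python sketch of this idea; the function name, grid-cell approach, and sample coordinates are illustrative assumptions, not an established algorithm from the literature.

```python
from collections import Counter

def segment_usage(trajectories, cell=1.0):
    """Count how many trajectories pass through each grid cell.

    This mimics what semitransparent overplotting reveals visually:
    cells crossed by many journeys accumulate higher counts (darker ink).
    `trajectories` is a list of point lists [(x, y), ...]; coordinates
    are assumed to be in projected (planar) units.
    """
    usage = Counter()
    for traj in trajectories:
        # Each trajectory contributes at most once per cell, like a
        # single semitransparent polyline laid over the map.
        cells = {(int(x // cell), int(y // cell)) for x, y in traj}
        usage.update(cells)
    return usage

# Hypothetical sample data: three trips share a corridor, one is elsewhere.
trips = [
    [(0.2, 0.1), (1.4, 0.9), (2.6, 1.2)],
    [(0.5, 0.4), (1.1, 0.7), (2.2, 1.8)],
    [(0.9, 0.2), (1.8, 0.6)],
    [(5.1, 5.3), (6.4, 5.9)],
]
counts = segment_usage(trips, cell=1.0)
```

Cells on the shared corridor accumulate higher counts than the outlier’s cells, which is precisely what the darker overplotted ink would show.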
7.5.2 Flow Maps: Aggregated Flows Between Places
Tobler’s (1987) early flow maps connected locations with straight lines. However, curved lines help reduce the undesirable occluding effects of line crossings. Jenny et al. (2018) provide a comprehensive set of guidelines for designing flow maps. Wood et al. (2011) also used curved lines to distinguish and visually separate flow in either direction, using asymmetry in the curve to indicate direction (Fig. 7.7). Yang et al. (2019) provide evidence-based guidance specifically for designing flow maps on (3D) digital globes; they recommend taking advantage of the z-axis and designing flows with 3D curvature to reduce clutter and make the maps more readable.
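The curve-asymmetry idea can be sketched as a quadratic Bézier whose control point is always offset to the same side of the direction of travel, so that opposing flows bow apart and no longer overlap. This is a hedged illustration of the principle described by Wood et al. (2011), not their implementation; the `curvature` parameter and sampling density are assumptions.

```python
def curved_flow(origin, dest, curvature=0.2, n=20):
    """Sample points along an asymmetrically curved flow line.

    The control point of a quadratic Bezier is offset perpendicular to
    the origin->destination line, always to the same side relative to
    the direction of travel. Opposing flows (A->B vs B->A) therefore
    bow to opposite sides, which lets a reader tell direction apart.
    """
    (x0, y0), (x1, y1) = origin, dest
    dx, dy = x1 - x0, y1 - y0
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    # Rotate the travel direction 90 degrees to get the offset side.
    cx, cy = mx - dy * curvature, my + dx * curvature
    pts = []
    for i in range(n + 1):
        t = i / n
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        pts.append((x, y))
    return pts
```

Feeding the sampled points to any polyline renderer draws the curve; swapping origin and destination produces a mirror-image curve on the opposite side.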
A characteristic of flow data is that it is usually aggregated, with the number of flows between origin-destination pairs reported. This is facilitated by the fact that there are often a finite number of spatial units (origins and destinations), as is the case for bike docking stations or country-country migration data. This aggregation makes flow maps more scalable than trajectory maps, but, as shown in Fig. 7.6 (Wood et al. 2011), they can still have clutter and occlusion issues similar to those observed in trajectory maps. These can be partially addressed by filtering as in trajectory maps, but because flows are usually already aggregated, filtering by geographical area is likely to reduce such clutter more effectively to make patterns visible and interpretable (Andrienko and Andrienko 2011) [see the geographical filtering in the green trajectories shown in Fig. 7.2]. There are other ways to reduce clutter and provide more interpretable visual representations of movements, for example, by employing spatial aggregation or applying edge bundling.
7.5.2.1 Spatial Aggregation of Flows
7.5.2.2 Edge Bundling of Flows
7.5.3 Origin-Destination (OD) Maps
OD maps (Wood et al. 2011) are another important tool. They aggregate flows into a relatively small number of spatial units based on existing units (e.g., states) or units that result from a Voronoi- or grid-based tessellation. OD maps are effectively small multiple destination maps: irregular spatial units are organized in a grid layout that preserves as much of the geographical ‘truth’ as possible. Each large cell typically corresponds to an origin (e.g., of migrants or another phenomenon), and the miniature map within it shows the destinations from that origin (Fig. 7.9). Such maps aid in visually understanding the structure of movement between places (Jenny et al. 2017). Below, we disregard the connection between the origin and destination and simply consider the density of movement.
7.5.4 In-Flow, Out-Flow and Density of Moving Objects
This section concerns movement for which we do not have the connection between origin and destination. This includes situations in which we only have data on the outflow (but do not know where the flow goes), inflow (but do not know where the flow originates from), or the density of moving objects. This can be expressed as a single value describing the movement for each spatial unit, for example, the out-migration flow from each county. As described above, the spatial units used may be derived from existing units (e.g., states) or Voronoi/grid-based tessellations. These values can be displayed as choropleth maps, in which regions are represented as tessellating polygons on a map and a suitable color scale is used to indicate in- or out-movement or the density of moving objects.
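Dropping the origin-destination connection amounts to a simple reduction: for each spatial unit, sum the flows leaving it and the flows arriving at it. The sketch below illustrates this; the unit identifiers and record format are assumptions for illustration.

```python
def unit_flows(od_pairs):
    """Reduce origin-destination records to per-unit in-flow, out-flow,
    and net values, discarding the O-D connection.

    `od_pairs` is a list of (origin_unit, dest_unit, count) records.
    The resulting single value per unit (e.g., net flow) is what a
    choropleth map of movement would symbolize with a color scale.
    """
    totals = {}
    for origin, dest, count in od_pairs:
        for unit in (origin, dest):
            totals.setdefault(unit, {"in": 0, "out": 0})
        totals[origin]["out"] += count
        totals[dest]["in"] += count
    for v in totals.values():
        v["net"] = v["in"] - v["out"]
    return totals
```

For example, out-migration per county is just the "out" value for each county unit, mapped with a sequential color scheme; net flow calls for a diverging scheme centered on zero.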
In summary, movement data exists in different forms and can often be transformed. This section provided an overview of map-based representations for three different levels of precision for movement data. The reviewed approaches can be used with digital globes, or a future Digital Earth with virtual dashboards through which one can integrate analytical operations within an AR or VR system. Hurter et al. (2018) show how interactions in a 3D immersive environment (see the “Immersive Technologies—From Augmented to Virtual Reality” section) can enable the exploration of large numbers of individual 3D trajectories. Next, we review the current state of the art in immersive technologies.
7.6 Immersive Technologies—From Augmented to Virtual Reality
In the virtual and augmented reality (VR and AR) domains, there is almost “visible” excitement, both in academia (off and on for over 30 years) and in the private sector (more recently). A 2016 Goldman Sachs analysis predicted that VR and AR would be an 80 billion dollar industry by 2025 (reported on CNBC: https://www.cnbc.com/2016/01/14/virtual-reality-could-become-an-80b-industry-goldman.html). Arguably, geospatial sciences will not be the same once immersive technologies such as augmented (AR), mixed (MR), and virtual reality (VR) have been incorporated into all areas of everyday life. In this chapter, we use the shorthand xR to refer to all immersive technologies and use the individual acronyms (AR/MR/VR) to refer to specific technologies. A closely related term that has recently been gaining momentum is immersive analytics, described as a blend of visual analytics, augmented reality, and human-computer interaction (Marriott et al. 2018), which draws on knowledge and experience from several fields described in this chapter to develop visualizations of geospatial information that support thinking. We do not elaborate on immersive analytics; see, e.g., Billinghurst et al. (2018) and Yang et al. (2019). Current technologies for xR hold promise for the future, despite being strongly “gadget”-dependent and somewhat cumbersome and ‘involved’ to set up (i.e., they require some technical skill and dedication). Thus, it remains to be seen whether these immersive experiences will become commonplace. We describe and elaborate on these display technologies below. We begin by outlining several concepts that are important for xR technology use.
7.6.1 Essential Concepts for Immersive Technologies
The original definitions behind Milgram and Kishino’s (1994) reality–virtuality continuum are more nuanced than the continuum itself and are challenging to apply in a fast-developing technology field. Nonetheless, it is useful to revisit some of their main distinctions for a conceptual organization of the terms in xR.
7.6.2 Augmented Reality
In Milgram and Kishino’s (1994) model, the first step from reality toward virtuality is augmented reality (AR). Augmented reality allows the user to view virtual objects superimposed onto a real-world view (Azuma 1997). Technological advancements have allowed augmented reality to evolve from bulky head-mounted displays (HMDs) in the 1960s to smartphone applications today (some examples are featured below), and through specialized (though still experimental) glasses such as Google Glass or Epson Moverio (Arth and Schmalstieg 2011). Although technology has truly advanced since the early (bulky and rather impractical) HMDs, there are still challenges in the adoption of augmented reality for dedicated geospatial applications in everyday life. These challenges are often technical, such as latency and the inaccuracy of sensors when using smartphones, and result in inaccuracies in the registration of features and depth ambiguity (Arth and Schmalstieg 2011; Chi et al. 2013; Gotow et al. 2010). There are also design issues that should be considered and, ideally, user-evaluated when developing and designing a “geospatial” AR application (Arth and Schmalstieg 2011; Cooper 2011; Kounavis et al. 2012; Kourouthanassis et al. 2015; Kurkovsky et al. 2012; Olsson 2012; Tsai et al. 2016; Vert et al. 2014).
Akçayır and Akçayır (2017) and Wu et al. (2013) reviewed the current state of AR in education. They concluded that AR provides a unique learning environment because it combines digital and physical objects, an insight relevant to students and scientists who are learning about geographical systems. An example of AR in education and research is the “augmented reality sandbox” (https://arsandbox.ucdavis.edu) that has been widely used, for example, in an urban/landscape design experiment (Afrooz et al. 2018). A similar application is the “tangible landscape” (https://tangible-landscape.github.io) (Petrasova et al. 2015). Both of these applications superimpose an elevation color map, topographic contour lines, and simulated water on a physical sand model that can be physically (re)shaped by the user. A tourism-related science and education example is the “SwissARena”, which superimposes a 3D model on top of topographic maps of Switzerland (Wüest and Nebiker 2018), enabling smartphone and tablet users to visit museums and other public spaces through an augmented experience. Motivated by a fundamental (rather than an applied) question, Carrera and Bermejo Asensio (2017) tested whether the use of AR improves participants’ (spatial) orientation skills when interpreting landscapes. They found a significant improvement in participants’ orientation skills when using a 3D AR application. However, open questions identified by Akçayır and Akçayır (2017) and Wu et al. (2013) regarding the use of AR in teaching still need to be addressed: pedagogical questions (e.g., how should AR complement the learning objectives, and what is the gap between teaching and learning?) as well as usability gaps (e.g., initial difficulty of use, unsuitability for large classes, cognitive overload, expensive technology, and inadequate teacher ability to use the technology).
Given that early research suggests that AR might aid in developing spatial skills, its potential in education (especially in science education) appears to be reasonably high. Furthermore, there appear to be several benefits of using AR in research. For example, it has been suggested that AR is an excellent tool for collaborative work among researchers (Jacquinod et al. 2016). At the time of this writing, there are no common examples of these types of applications in use, but there have been various experimental implementations of AR in research and scientific visualization (e.g., Devaux et al. 2018). Thus, most of the present excitement about AR seems to be based on belief and intuition, which can be correct but may also mislead.
7.6.3 Mixed Reality
As conceptualized in the Milgram and Kishino (1994) model (Fig. 7.15), the term Mixed Reality (MR), sometimes referred to as Hybrid Reality, applies to everything in between the real world and a virtual world. Therefore, the term includes AR, and the issues described above about AR also apply to MR. MR also includes augmented virtuality (AV). AV refers to virtual environments that are designed so that physical objects still play a role. Of the two subcategories of MR (AR and AV), AR is more developed at this point in time. Nonetheless, AV is relevant in a number of VR scenarios. For example, when we want haptic feedback, we give users suits or gloves. It is also relevant when we want to interact with the virtual world using any kind of hardware. Using hardware to drive interaction is the current state of the art; that is, although there are an increasing number of gesture tracking methods that map functions onto the body’s natural movements, several of the controls are physical objects, such as remote controls, often referred to as “wands”, or small trackable objects attached to the viewers called “lights”. Any hybrid environment combining physical and virtual objects can be considered a form of MR. We do not expound on MR further in this chapter; the information presented in the AR section above and most of the information in the VR section below are also relevant to MR.
7.7 Virtual Reality
How should we define virtual reality? There is no consensus on the “minimum requirements” of VR, though it is understood that an ideal VR system provides humans with experiences that are indistinguishable from experiences that could be real. Ideally, a VR system should stimulate all senses. That is, a virtual apple you eat should look, smell, and taste real, and when you bite, the touch and sounds should be just right. Current VR technologies are not there yet. The sense of vision (and the associated visualizations) has been investigated a great deal and audio research has made convincing progress, but we have a long way to go in terms of simulating smells, tastes, and touch. There are no hard and fast rules for “minimum requirements” for a display to qualify as VR, but there have been various attempts to systematically characterize and distinguish VR from other types of displays (see Fig. 7.16). Among these, Sherman and Craig (2003) list four criteria: a virtual world (graphics), immersion, interactivity, and sensory feedback. They distinguish interaction and sensory feedback in the sense that interaction occurs when there is an intentional user request, whereas sensory feedback is embedded at the system level and is fed to the user based on tracking the user’s body. In the cartographic literature, a similar categorization was proposed even earlier by MacEachren et al. (1999b), in which they describe the Four ‘I’s, adding intelligence of objects to Heim’s (1998) original three ‘I’s: immersion, interactivity, and information intensity. The Four ‘I’s and Sherman and Craig’s criteria have clear overlaps in immersion and interactivity, and links between a “virtual world” and “information intensity” and between “sensory feedback” and “intelligence of objects” can be drawn. Notably, some authors make a distinction between virtual reality and virtual environments: the term virtual reality does not exactly refer to mimicking reality but to an experience that feels real to the user.
Nonetheless, because the word reality can invoke such an impression, the term virtual environment emerged. The term arose because one can also show fictional (or planned) environments in such systems; thus, “environment” more effectively encapsulates the range of things one can do in a visualization environment. Below, we give a brief history of VR in domains that are directly related to Digital Earth and elaborate on what was once described as a “virtual geographic environment” (VGE).
7.7.1 Virtual Geographic Environments
An extension of earlier conceptualizations of ‘virtual geography’ (e.g., Batty 1997; MacEachren et al. 1999a), the term VGE was formally proposed at the beginning of the 21st century (attributed to Lin and Gong 2001) around the same time as the seminal book by Fisher and Unwin (2001). Since its beginnings, the VGE concept and accompanying tools have significantly evolved. A modern description of a VGE is a digital geographic environment “generated by computers and related technologies that users can use to experience and recognize complex geographic systems and further conduct comprehensive geographic analyses, through equipped functions, including multichannel human-computer interactions (HCIs), distributed geographic modeling and simulations, and network geo-collaborations” (Chen and Lin 2018, p. 329). VGEs have attracted considerable attention in the geographic information science research community over the last few decades (e.g., Goodchild 2009; Huang et al. 2018; Jia et al. 2015; Konecny 2011; Liang et al. 2015; Mekni 2010; Priestnall et al. 2012; Rink et al. 2018; Shen et al. 2018; Torrens 2015; Zhang et al. 2018; Zheng et al. 2017). Much like the “digital twin” idea, and well-aligned with the Digital Earth concept, VGEs often aim to mirror real-world geographic environments in virtual ones. Such a mirrored virtual geographic environment also goes beyond reality, as it ideally enables its user to visually perceive invisible or difficult-to-see phenomena in the real world, and explore them inside the virtual world (e.g., looking at forests at different scales, examining historical time periods, seeing under the ocean’s surface). As it can incorporate advanced analytic capabilities, a VGE can be superior to the real world for analysts. In an ideal VGE, one can view, explore, experience and analyze complex geographic phenomena.
VGEs are not ‘just’ 3D GIS environments, but there are strong similarities between VGEs and immersive analytics approaches. A VGE can embed all the tools of a GIS, but a key point is that VGEs are meant to provide realistic experiences, as well as simulated ones that are difficult to distinguish from real-world experiences. A VGE would not be ideal if only analytics are needed, as 2D plans combined with plots may better serve the analyst’s goals. The combination of a traditional GIS and the power of immersive visualization environments offers novel ways to combine human cognitive abilities with what machines have to offer (Chen and Lin 2018; Lin et al. 2013a).
7.7.2 Foundational Structures of VGEs
Lin et al. (2013b) designed a conceptual framework that includes four VGE subenvironments: data, modeling and simulation, interaction, and collaborative spaces. They posit that a geographic database and a geographic model are core necessities for VGEs to support visualization, simulation, and collaboration. Below, we briefly elaborate on the four VGE subenvironments (Lin et al. 2013b).
7.7.2.1 Data Space
The “data space” is conceptualized as the first step in the pipeline of creating a VGE. This is where data are organized, manipulated, and visualized to prepare the digital infrastructure necessary for a VGE. One can also design this environment so that users can “walk” in their data and examine it for patterns and anomalies (as in immersive analytics). The data is ideally comprehensive (i.e., “information intensity” is desirable), such that semantic information, location information, geometric information, attribute information, feature spatiotemporal/qualitative relationships and their evolution processes are considered and organized to form virtual geographic scenarios with a range of visualization possibilities (e.g., standard VR displays, holograms, or other xR modes) and thus support the construction of VGEs (Lü et al. 2019).
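As a concrete (and deliberately simplified) illustration of how such a data space might organize one feature, the sketch below groups the information categories named above into a single record. The field names and types are our assumptions, not a published VGE schema.

```python
from dataclasses import dataclass, field

@dataclass
class GeoFeature:
    """A minimal sketch of one entry in a VGE 'data space', loosely
    following the information categories listed by Lü et al. (2019):
    semantics, location, geometry, attributes, and relationships to
    other features. Field names are illustrative only."""
    feature_id: str
    semantic_class: str        # e.g., "river", "building"
    location: tuple            # (lon, lat, elevation)
    geometry: list             # coordinate list describing the shape
    attributes: dict = field(default_factory=dict)
    relations: dict = field(default_factory=dict)  # e.g., {"flows_into": "lake_3"}
```

A collection of such records, indexed by identifier and by spatial location, would form the virtual geographic scenario that downstream visualization modes (standard VR displays, holograms, or other xR modes) can render.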
7.7.2.2 Modeling and Simulation Space
Models and simulations, as the abstraction and expression of geographical phenomena and processes, are important means for modern geographic research (Lin et al. 2015). With the rapid development of networks, cloud/edge computing, and other modern technologies, modeling and simulation capabilities allow for a large range of exploration and experimentation types (e.g., Wen et al. 2013, 2017; Yue et al. 2016). VGEs can also integrate such technologies. Chen et al. (2015) and Chen and Lin (2018) propose that doing so would provide new modes for geographic problem solving and exploration, and potentially help users understand the Digital Earth.
7.7.2.3 Interaction Space
In general, interaction is what shifts a user from being a passive ‘consumer’ of information to being an active producer of new information (see the “Geovisualization” section earlier in this chapter). In VGEs, interaction requires a different way of thinking than for desktop setups because the aspiration is to create experiences that are comparable to those in the real world (i.e., mouse-and-keyboard type interactions do not work well in VGEs). Thus, there have been considerable efforts to track a user’s hands, head, limbs, and eyes to model natural interaction. Interaction tools play an important role in information transmission between the VGE and its users (Batty et al. 2017; Voinov et al. 2018).
7.7.2.4 Collaboration Space
This chapter so far has focused on theoretical constructs and examples of geographical visualization that can be used to represent and provide insights into our Earth system. However, it is also important to consider how such visualizations can be presented to policy and decision-makers to plan for a more sustainable future. The next section outlines a number of platforms for engaging such end users with packaged geographical information, known as dashboards.
A true Digital Earth describes Earth and all its systems, including ecosystems, climate/atmospheric systems, water systems, and social systems. Our planet faces a number of great challenges including climate change, food security, an aging population, and rapid urbanization. As policy-makers, planners, and communities grapple with how to address these critical problems, they benefit from digital tools to monitor the performance of our management of these systems using specific indicators. With the rise of big data and open data, a number of dashboards are being developed to address these challenges, enabled by geographical visualization technologies and solutions (Geertman et al. 2017). Dashboards can be defined as “graphic user interfaces which comprise a combination of information and geographical visualization methods for creating metrics, benchmarks, and indicators to assist in monitoring and decision-making” (Pettit and Leao 2017).
One can think of dashboards as installations that can provide key indicators of the performance of a particular Earth system, powered through the construct of Digital Earth. In 2016, the United Nations launched 17 Sustainable Development Goals to guide policy and funding priorities until 2030. Each of these goals includes a number of indicators that can be quantified and reported within a Digital Earth dashboard, as illustrated, for example, by the SDG Index and Dashboards (https://dashboards.sdgindex.org/#/) (Sachs et al. 2018).
Dashboard views of the performance of Earth systems such as urban systems have a number of pros and cons. Dashboards can potentially provide the best available data on the performance of an urban system or natural asset so that decisions can account for multiple dimensions, including sustainability, resilience, productivity, and livability. Dashboards are also a window into the democratization of data and provide greater transparency and accountability in policy- and decision-making. However, there are a number of challenges in developing and applying dashboards; without good quality indicators and benchmarks, the utility of such digital presentations of performance can be questionable. Traditionally, dashboards have provided a unidirectional flow of information to their users. However, with the emergence of digital twins, there may be an opportunity for a true bidirectional flow of data between dashboards, their users and Earth systems.
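The indicator-and-benchmark logic behind such dashboards can be illustrated with a traffic-light rating, similar in spirit to the color coding used by the SDG Index and Dashboards. The thresholds and function name below are assumptions for illustration, not official SDG cutoffs.

```python
def rate_indicator(value, green, yellow, higher_is_better=True):
    """Classify an indicator value against two benchmark thresholds,
    returning a traffic-light status for a dashboard tile.

    For indicators where lower is better (e.g., poverty rate), the
    comparison is flipped by negating all three numbers.
    """
    if not higher_is_better:
        value, green, yellow = -value, -green, -yellow
    if value >= green:
        return "green"
    if value >= yellow:
        return "yellow"
    return "red"
```

A dashboard backend would apply such a rule per indicator and per spatial unit, and the map view would then symbolize each unit by its worst (or aggregate) status; the benchmark values themselves are exactly the “good quality indicators and benchmarks” the paragraph above identifies as the crux of dashboard utility.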
Our understanding of the vision of Digital Earth is that it is a fully functional virtual reality system. To achieve such a system, we need to master every aspect of relevant technology and design and keep the users in mind. Visualization is an interdisciplinary topic with relevance in many areas of life in the digital era, especially given that there is much more data to analyze and understand than ever before. Because the Earth is being observed, measured, probed, listened to, and recorded using dozens of different sensors, including people (Goodchild 2007), the data we need to build a Digital Earth is now available (at least for parts of the Earth). Now, the challenge is to organize these data at a global scale following cartographic principles so that we can make sense of it. Herein lies the strength of visualization. By visualizing the data in multiple ways, we can create, recreate, and predict experiences, observe patterns, and detect anomalies. Recreating a chat with an old neighbor in our childhood living room 30 years later (e.g., instead of looking at a photo album) is no longer a crazy thought; we might be recording enough data to be able to do such things soon. The possibilities are endless. However, as inspiring as this may be, one must understand how to “do it right”; that is, we have much to learn before we will know what exactly we should show, when and to whom. In this chapter, we provided an overview of the current state of the art of topics related to visualization in the context of Digital Earth. We hope this chapter provided some insights into our current broad understanding of this challenge.
- Albouys-Perrois J, Laviole J, Briant C et al (2018) Towards a Multisensory Augmented Reality Map for Blind and Low Vision People: A Participatory Design Approach. In CHI ’18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper 629. New York: ACM. https://doi.org/10.1145/3173574.3174203.
- Andrienko G, Andrienko N, Hurter C et al (2011) From Movement Tracks through Events to Places: Extracting and Characterizing Significant Places from Mobility Data. In 2011 IEEE Conference on Visual Analytics Science and Technology (VAST), 161–170. Providence, RI: IEEE. https://doi.org/10.1109/vast.2011.6102454.
- Andrienko N, Andrienko G (2006) Exploratory Analysis of Spatial and Temporal Data: A Systematic Approach. Berlin Heidelberg: Springer-Verlag.
- Andrienko N, Andrienko G, Gatalsky P (2003) Visual Data Exploration Using the Space-Time Cube. In Proceedings of the 21st International Cartographic Conference, 1981–1983. Durban, South Africa: ICA.
- Arth C, Schmalstieg D (2011) Challenges of Large-Scale Augmented Reality on Smartphones. In ISMAR 2011 Workshop, 1–4. Basel, Switzerland.
- Batty M, Lin H, Chen M (2017) Virtual Realities, Analogies and Technologies in Geography. In Handbook on Geographies of Technology, edited by Barney Warf, 96–110. London, UK: Edward Elgar Publishing.
- Bektas K, Çöltekin A, Krüger J et al (2015) A Testbed Combining Visual Perception Models for Geographic Gaze Contingent Displays. In Eurographics Conference on Visualization, edited by Jessie Kennedy and Enrico Puppo, 1–6. Cagliari, Sardinia, Italy. https://doi.org/10.2312/eurovisshort.20151127.
- Bertin J (1967/1983) Semiology of Graphics: Diagrams, Networks, Maps. Translated by William Berg. Madison, WI: University of Wisconsin Press.
- Billinghurst M, Cordeil M, Bezerianos A et al (2018) Collaborative Immersive Analytics. In Immersive Analytics. Lecture Notes in Computer Science, edited by Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, 11190:221–257. Cham, Switzerland: Springer International Publishing.
- Boér A, Çöltekin A, Clarke KC (2013) An Evaluation of Web-Based Geovisualizations for Different Levels of Abstraction and Realism – What Do Users Predict? In Proceedings of the 26th International Cartographic Conference, 209–220. Dresden, Germany: ICA.
- Brasebin M, Perret J, Mustière S et al (2018) 3D Urban Data to Assess Local Urban Regulation Influence. Computers, Environment and Urban Systems 68: 37–52. https://doi.org/10.1016/j.compenvurbsys.2017.10.002.
- Brewer CA (1994) Color Use Guidelines for Mapping and Visualization. In Visualization in Modern Cartography, edited by Alan M. MacEachren and D. R. Fraser Taylor, 123–147. Oxon, UK: Elsevier Science. https://doi.org/10.1016/b978-0-08-042415-6.50014-4.
- Brychtová A, Çöltekin A (2017a) Calculating Colour Distance on Choropleth Maps with Sequential Colours – A Case Study with ColorBrewer 2.0. Kartographische Nachrichten, no. 2: 53–60.
- Buchin M, Driemel A, van Kreveld M et al (2011) Segmenting Trajectories: A Framework and Algorithms Using Spatiotemporal Criteria. Journal of Spatial Information Science 3: 33–63. https://doi.org/10.5311/josis.2011.3.66.
- Carpendale MST (2003) Considering Visual Variables as a Basis for Information Visualisation. Calgary: University of Calgary. https://doi.org/10.11575/prism/30495.
- Chae J, Thom D, Bosch H et al (2012) Spatiotemporal Social Media Analytics for Abnormal Event Detection and Examination Using Seasonal-Trend Decomposition. In 2012 IEEE Conference on Visual Analytics Science and Technology (VAST), 143–152. Seattle, WA: IEEE. https://doi.org/10.1109/vast.2012.6400557.
- Cheshire J, Uberti O (2017) Where the Animals Go: Tracking Wildlife with Technology in 50 Maps and Graphics. New York: W. W. Norton & Company.
- Çöltekin A, Biland J (2018) Comparing the Terrain Reversal Effect in Satellite Images and in Shaded Relief Maps: An Examination of the Effects of Color and Texture on 3D Shape Perception from Shading. International Journal of Digital Earth, in press: 1–18. https://doi.org/10.1080/17538947.2018.1447030.
- Çöltekin A, Lokka IE, Zahner M (2016b) On the Usability and Usefulness of 3D (Geo)Visualizations – A Focus on Virtual Reality Environments. In International Archives of Photogrammetry, Remote Sensing, and Spatial Information Science, XLI-B2:387–392. Prague, Czechia. https://doi.org/10.5194/isprsarchives-xli-b2-387-2016.
- Çöltekin A, Clarke KC (2011) Virtual Globes or Virtual Geographical Reality. Geospatial Today, January 2011: 26–28.
- Çöltekin A, Fabrikant SI, Lacayo M (2010) Exploring the Efficiency of Users’ Visual Analytics Strategies Based on Sequence Analysis of Eye Movement Recordings. International Journal of Geographical Information Science 24: 1559–1575. https://doi.org/10.1080/13658816.2010.511718.
- Çöltekin A, Janetzko H, Fabrikant SI (2018) Geovisualization. In The Geographic Information Science & Technology Body of Knowledge (2nd Quarter 2018 Edition), edited by John P. Wilson. UCGIS. https://doi.org/10.22224/gistbok/2018.2.6.
- Cooper D (2011) User and Design Perspectives of Mobile Augmented Reality. MA Thesis, Muncie, Indiana: Ball State University.
- Cordeil M, Cunningham A, Dwyer T et al (2017) ImAxes. In UIST ’17 Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, 71–83. Québec City, QC, Canada: ACM. https://doi.org/10.1145/3126594.3126613.
- Dall’Acqua L, Çöltekin A, Noetzli J (2013) A Comparative User Evaluation of Six Alternative Permafrost Visualizations for Reading and Interpreting Temperature Information. In GeoViz Hamburg 2013. Hamburg, Germany. https://doi.org/10.5167/uzh-87817.
- Duchowski AT, McCormick BH (1995) Preattentive Considerations for Gaze-Contingent Image Processing. In Proceedings of SPIE, 2411:128–139. San Jose: SPIE. https://doi.org/10.1117/12.207556.
- Dupont L, Pallot M, Morel L (2016) Exploring the Appropriateness of Different Immersive Environments in the Context of an Innovation Process for Smart Cities. In 22nd ICE/IEEE International Technology Management Conference. Trondheim, Norway.
- Dykes J, MacEachren AM, Kraak MJ (2005a) Exploring Geovisualization. In Exploring Geovisualization, edited by Jason Dykes, Alan M. MacEachren, and Menno-Jan Kraak, 1–19. Amsterdam: Pergamon Press.
- Fuhrmann S, Paula AR, Edsall RM et al (2005) Making Useful and Useable Geovisualization: Design and Evaluation Issues. In Exploring Geovisualization, edited by Jason Dykes, Alan M. MacEachren, and Menno-Jan Kraak, 553–566. Pergamon Press. https://doi.org/10.1016/b978-008044531-1/50446-2.
- Garlandini S, Fabrikant SI (2009) Evaluating the Effectiveness and Efficiency of Visual Variables for Geographic Information Visualization. In COSIT 2009, Lecture Notes in Computer Science, 5756:195–211. Berlin Heidelberg: Springer-Verlag. https://doi.org/10.1007/978-3-642-03832-7_12.
- Geertman S, Allan A, Pettit C et al (2017) Introduction to Planning Support Science for Smarter Urban Futures. In Planning Support Science for Smarter Urban Futures, edited by Stan Geertman, Andrew Allan, Chris Pettit, and John Stillwell, 1–19. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-57819-4.
- Goodchild MF (2009) Virtual Geographic Environments as Collective Constructions. In Virtual Geographic Environments, edited by Hui Lin and Michael Batty, 15–24. Beijing: Science Press.
- Griffin AL (2004) Understanding How Scientists Use Data-Display Devices for Interactive Visual Computing with Geographical Models. PhD Thesis, University Park, PA: Department of Geography, The Pennsylvania State University.
- Griffin AL, Fabrikant SI (2012) More Maps, More Users, More Devices Means More Cartographic Challenges. The Cartographic Journal 49: 298–301. https://doi.org/10.1179/0008704112z.00000000049.
- Hägerstrand T (1970) What about People in Regional Science? Papers in Regional Science 24: 7–24. https://doi.org/10.1111/j.1435-5597.1970.tb01464.x.
- Hegarty M, Waller DA (2005) Individual Differences in Spatial Abilities. In The Cambridge Handbook of Visuospatial Thinking, edited by Priti Shah and Akira Miyake, 121–169. Cambridge, UK: Cambridge University Press.
- Heim M (1998) Virtual Realism. Oxford: Oxford University Press.
- Heuer RJ (1999) Psychology of Intelligence Analysis. Langley, VA: Central Intelligence Agency.
- Huang HS, Schmidt M, Gartner G (2012) Spatial Knowledge Acquisition with Mobile Maps, Augmented Reality and Voice in the Context of GPS-Based Pedestrian Navigation: Results from a Field Test. Cartography and Geographic Information Science 39: 107–116. https://doi.org/10.1559/15230406392107.
- Hullman J (2016) Why Evaluating Uncertainty Visualization Is Error Prone. In BELIV’2016, 143–151. Baltimore, MD. https://doi.org/10.1145/2993901.2993919.
- Jacquinod F, Pedrinis F, Edert J et al (2016) Automated Production of Interactive 3D Temporal Geovisualizations so as to Enhance Flood Risk Awareness. In UDMV ’16 Proceedings of the Eurographics Workshop on Urban Data Modelling and Visualisation, 71–77. Liège, Belgium: The Eurographics Association.
- Jenny B, Kelso NV (2007) Color Design for the Color Vision Impaired. Cartographic Perspectives 58: 61–67. https://doi.org/10.14714/cp58.270.
- Jenny H, Jenny B, Cron J (2012) Exploring Transition Textures for Pseudo-Natural Maps. In GI_Forum 2012: Geovisualization, Society and Learning, edited by T. Jekel, A. Car, Josef Strobl, and G. Grieseneber, 130–139. Berlin: Wichmann.
- Jerald J (2015) The VR Book. New York: ACM & Morgan & Claypool Publishers. https://doi.org/10.1145/2792790.
- Kapler T, Wright W (2004) Geo Time Information Visualization. In IEEE Symposium on Information Visualization, 25–32. Austin, TX: IEEE. https://doi.org/10.1109/infvis.2004.27.
- Kelly M, Slingsby A, Dykes J et al (2013) Historical Internal Migration in Ireland. In Proceedings of GISRUK 2013. Liverpool, UK.
- Kisilevich S, Krstajic M, Keim D et al (2010) Event-Based Analysis of People’s Activities and Behavior Using Flickr and Panoramio Geotagged Photo Collections. In 14th International Conference on Information Visualisation, IV 2010, 289–296. London, UK: IEEE. https://doi.org/10.1109/iv.2010.94.
- Kraak MJ (2003b) The Space–Time Cube Revisited from a Geovisualization Perspective. In Proceedings of the 21st International Cartographic Conference, 1988–1996. Durban, South Africa: ICA.
- Kraak MJ (2008) Geovisualization and Time – New Opportunities for the Space–Time Cube. In Geographic Visualization: Concepts, Tools and Applications, edited by Martin Dodge, Martin Turner, and Mary McDerby, 293–306. Chichester, West Sussex, UK: John Wiley & Sons. https://doi.org/10.1002/9780470987643.ch15.
- Kurkovsky S, Koshy R, Novak V et al (2012) Current Issues in Handheld Augmented Reality. In 2012 International Conference on Communications and Information Technology (ICCIT), 68–72. Hammamet, Tunisia: IEEE. https://doi.org/10.1109/iccitechnol.2012.6285844.
- Kveladze I, Kraak MJ, van Elzakker CPJM (2015) The Space-Time Cube as Part of a GeoVisual Analytics Environment to Support the Understanding of Movement Data. International Journal of Geographical Information Science 29: 2001–2016. https://doi.org/10.1080/13658816.2015.1058386.
- Laney D (2001) 3D Data Management: Controlling Data Volume, Velocity, and Variety. META Group. https://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-Volume-Velocity-and-Variety.pdf.
- Lokka IE, Çöltekin A, Wiener J et al (2018) Virtual Environments as Memory Training Devices in Navigational Tasks for Older Adults. Scientific Reports 8: 10809. https://doi.org/10.1038/s41598-018-29029-x.
- MacEachren AM (1994) Visualization in Modern Cartography: Setting the Agenda. In Visualization in Modern Cartography, edited by Alan M. MacEachren and D. R. Fraser Taylor, 1–12. Oxford, UK: Pergamon Press.
- MacEachren AM (1995) How Maps Work: Representation, Visualization, and Design. New York: Guilford Press.
- MacEachren AM (2015) Visual Analytics and Uncertainty: It’s Not About the Data. In EuroVis Workshop on Visual Analytics (EuroVA), 55–60. Cagliari, Sardinia, Italy. https://doi.org/10.2312/eurova.20151104.
- MacEachren AM, Edsall R, Haug D et al (1999a) Exploring the Potential of Virtual Environments for Geographic Visualization. Paper presented at the Annual Meeting of the Association of American Geographers, Honolulu, HI.
- MacEachren AM, Jaiswal A, Robinson AC et al (2011) SensePlace2: Geo Twitter Analytics Support for Situational Awareness. In 2011 IEEE Conference on Visual Analytics, Science and Technology (VAST), 181–190. Providence, RI: IEEE. https://doi.org/10.1109/vast.2011.6102456.
- MacEachren AM, Kraak MJ, Verbree E (1999b) Cartographic Issues in the Design and Application of Geospatial Virtual Environments. In Proceedings of the 19th International Cartographic Conference. Ottawa, Canada: ICA.
- Mekni M (2010) Hierarchical Path Planning for Situated Agents in Informed Virtual Geographic Environments. In SIMUTools 2010. Torremolinos, Malaga, Spain. https://doi.org/10.4108/icst.simutools2010.8811.
- Milgram P, Kishino F (1994) A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information and Systems 77: 1321–1329.
- Monmonier M (2018) How to Lie with Maps (Third Edition). Chicago, IL: University of Chicago Press.
- Nöllenburg M (2007) Human-Centered Visualization Environments: GI-Dagstuhl Research Seminar, Dagstuhl Castle, Germany, March 5–8, 2006, Revised Lectures. Lecture Notes in Computer Science 4417: 257–294. https://doi.org/10.1007/978-3-540-71949-6_6.
- Olsson T (2012) User Expectations, Experiences of Mobile Augmented Reality Services. PhD Thesis, Tampere, Finland: Tampere University of Technology. https://tutcris.tut.fi/portal/files/5450806/olsson.pdf.
- Oprean D, Wallgrün JO, Duarte JMP et al (2018) Developing and Evaluating VR Field Trips. In Proceedings of Workshops and Posters of the 13th International Conference on Spatial Information Theory (COSIT 2017), edited by Eliseo Clementini, Eliseo Fogliaroni, and Andrea Ballatore, 105–110. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-63946-8_22.
- Patterson T, Kelso NV (2004) Hal Shelton Revisited: Designing and Producing Natural-Color Maps with Satellite Land Cover Data. Cartographic Perspectives 47: 28–55. https://doi.org/10.14714/cp47.470.
- Pettit C, Lieske SN, Jamal M (2017a) CityDash: Visualising a Changing City Using Open Data. In Planning Support Science for Smarter Urban Futures, edited by Stan Geertman, Andrew Allan, Chris Pettit, and John Stillwell, 337–353. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-57819-4_19.
- Pettit CJ, Russel ABM, Michael A et al (2010) Realising an eScience Platform to Support Climate Change Adaptation in Victoria. In 2010 IEEE Sixth International Conference on E-Science, 73–80. IEEE. https://doi.org/10.1109/escience.2010.32.
- Pettit C, Tice A, Randolph B (2017b) Using an Online Spatial Analytics Workbench for Understanding Housing Affordability in Sydney. In Seeing Cities Through Big Data, Research, Methods and Applications in Urban Informatics, edited by Piyushimita Thakuriah, Nebiyou Tilahun, and Moira Zellner, 233–255. Cham: Springer. https://doi.org/10.1007/978-3-319-40902-3_14.
- Pindat C, Pietriga E, Chapuis O et al (2012) JellyLens: Content-Aware Adaptive Lenses. In UIST ’12 Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, 261–270. Cambridge, MA: ACM. https://doi.org/10.1145/2380116.2380150.
- Pirolli P, Card S (2005) The Sensemaking Process and Leverage Points for Analyst Technology as Identified through Cognitive Task Analysis. In Proceedings of International Conference on Intelligence Analysis. McLean, VA.
- Priestnall G, Jarvis C, Burton A et al (2012) Virtual Geographic Environments. In Teaching Geographic Information Science and Technology in Higher Education, edited by David J. Unwin, Kenneth E. Foote, Nicholas J. Tate, and David DiBiase, 257–288. Chichester, West Sussex, UK: John Wiley & Sons. https://doi.org/10.1002/9781119950592.ch18.
- Rautenbach V, Çöltekin A, Coetzee S (2015) Exploring the Impact of Visual Complexity Levels in 3D City Models on the Accuracy of Individuals’ Orientation and Cognitive Maps. In ISPRS Geospatial Week 2015, Workshop II-3/W5, edited by Sidonie Christophe and Arzu Çöltekin, 499–506. La Grande Motte, France. https://doi.org/10.5194/isprsannals-ii-3-w5-499-2015.
- Richter KF, Tomko M, Çöltekin A (2015) Are We There Yet? Spatial Cognitive Engineering for Situated Human-Computer Interaction. In CESIP 2015: Cognitive Engineering for Spatial Information Processes: From User Interfaces to Model-Driven Design. Workshop at COSIT 2015, 1–7. Santa Fe, NM.
- Roberts JC (2007) State of the Art: Coordinated & Multiple Views in Exploratory Visualization. In Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2007), 61–71. Zürich, Switzerland: IEEE. https://doi.org/10.1109/cmv.2007.20.
- Robertson GG, Mackinlay JD (1993) The Document Lens. In UIST ’93 Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology, 101–108. Atlanta, GA: ACM. https://doi.org/10.1145/168642.168652.
- Robinson A (2017) Geovisual Analytics. In The Geographic Information Science & Technology Body of Knowledge (3rd Quarter 2017 Edition), edited by John P. Wilson. UCGIS. https://doi.org/10.22224/gistbok/2017.3.6.
- Ruas A, Perret J, Curie F et al (2011) Conception of a GIS-Platform to Simulate Urban Densification Based on the Analysis of Topographic Data. In Advances in Cartography and GIScience, Volume 2, edited by Anne Ruas, 413–430. Berlin Heidelberg: Springer-Verlag. https://doi.org/10.1007/978-3-642-19214-2.
- Russo P, Pettit C, Çöltekin A et al (2013) Understanding Soil Acidification Process Using Animation and Text: An Empirical User Evaluation With Eye Tracking. In Proceedings of the 26th International Cartographic Conference, 431–448. Berlin Heidelberg: Springer-Verlag. https://doi.org/10.1007/978-3-642-32618-9_31.
- Sachs J, Schmidt-Traub G, Kroll C et al (2018) SDG Index and Dashboards Report 2018. New York: Bertelsmann Stiftung and Sustainable Development Solutions Network (SDSN).
- Schnur S, Bektaş K, Salahi M et al (2010) A Comparison of Measured and Perceived Visual Complexity for Dynamic Web Maps. In Proceedings of GIScience 2010, edited by Ross Purves and Robert Weibel, 1–4. Zürich, Switzerland. http://www.giscience2010.org/pdfs/paper_181.pdf.
- Semmo A, Döllner J (2014) An Interaction Framework for Level-of-Abstraction Visualization of 3D Geovirtual Environments. In MapInteract’14, 43–49. Dallas/Fort Worth, Texas: ACM. https://doi.org/10.1145/2677068.2677072.
- Slingsby A (2018) Tilemaps for Summarising Multivariate Geographical Variation. Paper presented at IEEE VIS 2018. Berlin, Germany: IEEE.
- Slocum TA, McMaster RB, Kessler FC et al (2008) Thematic Cartography and Geovisualization (3rd Edition). Upper Saddle River, NJ: Prentice Hall.
- Spence R (2007) Information Visualization: Design for Interaction (2nd Edition). Harlow, Essex, UK: Pearson Education Limited.
- Thomas JJ, Cook KA (2005) Illuminating the Path: The Research and Development Agenda for Visual Analytics. New York: IEEE.
- Thoresen JC, Francelet R, Çöltekin A et al (2016) Not All Anxious Individuals Get Lost: Trait Anxiety and Mental Rotation Ability Interact to Explain Performance in Map-Based Route Learning in Men. Neurobiology of Learning and Memory 132: 1–8. https://doi.org/10.1016/j.nlm.2016.04.008.
- Tufte ER (1983) The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
- United Nations Population Division (2018) World Urbanization Prospects: The 2018 Revision. New York: United Nations.
- van Wijk JJ, Nuij WAA (2003) Smooth and Efficient Zooming and Panning. In INFOVIS’03 Proceedings of the Ninth Annual IEEE Conference on Information Visualization, 15–22. Seattle, WA.Google Scholar
- Vert S, Dragulescu B, Vasiu R (2014) LOD4AR: Exploring Linked Open Data with a Mobile Augmented Reality Web Application. In 13th International Semantic Web Conference (ISWC 2014), 185–188. Trentino, Italy.
- Yue SS, Chen M, Wen YN et al (2016) Service-Oriented Model-Encapsulation Strategy for Sharing and Integrating Heterogeneous Geo-Analysis Models in an Open Web Environment. ISPRS Journal of Photogrammetry and Remote Sensing 114: 258–273. https://doi.org/10.1016/j.isprsjprs.2015.11.002.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.