1 Introduction

From virtual globes [8] (e.g., Google Maps) to the Global Positioning System (GPS), spatial computing has transformed society via pervasive services (e.g., Uber and other location-based services), ubiquitous systems (e.g., geographic information systems, spatial database management systems), and pioneering scientific methods (e.g., spatial statistics). These tools are just the tip of the iceberg. In the coming decade, spatial computing researchers will be working to develop a compelling array of new geo-related capabilities. For example, where GPS route finding today is based on shortest travel time or travel distance, companies are now experimenting with eco-routing, finding routes that reduce fuel consumption [43]. United Parcel Service (UPS), for instance, uses a smart routing service that avoids left turns, reducing the fuel its vehicles waste idling while waiting to turn left [89]. Such savings can be multiplied many times over when eco-routing services become available to consumers and other fleet owners (e.g., public transportation) [59]. New geo-related capabilities will also change how we use the Internet. Currently, users access information based on keywords and references, but a large portion of information has an inherent spatial component. Storing and referencing data by location may allow for more intuitive searching and knowledge discovery. It would then be possible to draw correlations and find new information based on relative locations, rather than keywords [54]. The incorporation of location information for Internet users, documents, and servers will allow a flourishing of services designed around enhanced usability, security, and trust. Moreover, data collection has been facilitated by breakthroughs in sensor technology. Using the rich geospatial data collected from these sensors, we can now analyze, model, and visualize Earth as a complex entity. For example, Fig. 1 illustrates variations in gravity across the Earth's surface, a phenomenon that was previously impossible to visualize. Clearly, spatial computing is crucial for understanding Earth as a complex system and its physics, biology, and sociology.

Fig. 1

The figure illustrates differences in gravity across the surface of the Earth. Areas of strongest gravity are in yellow and weakest in blue. Distortions are magnified 10,000 times for visualization [1]

The expected economic benefits of these and other spatial computing technologies are significant. According to a recent McKinsey report, location-based services will provide a significant portion of the estimated 150,000 new deep-analytical jobs and 1.5 million data-savvy manager and analyst positions that will be needed for the upcoming push by companies into big-data analysis [59]. In addition, a potential consumer surplus of “$600 billion annually” is possible through the use of personal location data [59].

While such opportunities are undoubtedly exciting, they also raise a host of new challenges for spatial computing that will need to be addressed with creativity, dedication, and financial resolve. This paper presents a perspective on spatial computing based on the discussions at the 2012 Computing Community Consortium (CCC) visioning workshop. A staggering number of ideas came out of these discussions. We initially synthesized them for a broader audience in [84]. This version refines our synthesis further. Although we explain several aspects of recent advances and applications of spatial computing, readers interested in a more detailed exploration of spatial computing are encouraged to consult textbooks [16, 23, 24, 80, 83], monographs [78, 81] and encyclopedias [51, 85]. The rest of this paper is organized as follows:

Section 2 reviews the recent changes in spatial computing, Section 3 presents research opportunities and challenges for spatial computing, and Section 4 reviews geo-privacy policy issues. Finally, Section 5 presents final considerations for spatial computing. In addition, Appendix A presents emerging applications for different sections of the community, Appendix B lists several spatial computer science questions that are worth brainstorming, and Appendix C gives examples of platform trends for spatial computing.

2 The changing world of spatial computing

Traditionally, map creation was a cumbersome job that was not only costly but also time consuming. Moreover, not only creating maps but also using those maps and Geographic Information System (GIS) technologies required sophisticated training that could be afforded only by government agencies (e.g., the Department of Defense) or big companies (e.g., oil exploration companies). Such organizations depended on highly specialized software such as ArcGIS and Oracle Spatial databases for editing or analyzing geographic information, and their expectations did not extend much beyond the distribution of paper maps and their electronic counterparts.

With the recent changes and technological advances in spatial computing, the roles of people have changed dramatically, as outlined in Table 1. Today, “everyone” is a mapmaker and every phenomenon is observable, “everyone” uses location-based services, and every platform is location-aware.

Table 1 Cultural shift in spatial computing

However, the extreme success and widespread use of spatial computing have raised two issues: (a) people’s expectations have started to overwhelm the advances [63] and (b) people have become more concerned about their privacy. We describe these challenges in more detail below.

2.1 Everyone is a mapmaker and every phenomenon is observable

The fact that users with cell phones and access to the Internet now number in the billions is a new reality of the 21st century. Increasingly, the sources of geo-data are smartphone users who are untrained in GIS technology [36] (e.g., the Mercator projection, the World Geodetic System, etc.) as well as hobbyists acting as volunteered geographic information (VGI) providers. Data quality is often uncertain since the sources are generally untrained in making and verifying specific measurements and may unwittingly contribute erroneous information. Figure 2 is a well-known example of erroneous distance information computed by drawing circular ranges on a planar map, an easy mistake to make without the help of a GIS supporting spherical measurements.

Fig. 2

The figure shows the mistake a 2003 Economist article made in underestimating the range of North Korean missiles. In (a) the Earth is assumed to be flat, which causes the underestimation, whereas (b) shows the correct ranges [29]. (Best in color)
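To see how large the planar pitfall of Fig. 2 can be, consider a minimal Python sketch (illustrative coordinates; the standard haversine formula) that contrasts a naive planar computation with the great-circle distance:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def naive_planar_km(lat1, lon1, lat2, lon2):
    """The Fig. 2 mistake: treating degrees as planar coordinates."""
    km_per_degree = 111.32  # true only along meridians
    return km_per_degree * math.hypot(lat2 - lat1, lon2 - lon1)

# Pyongyang to Anchorage, a route that crosses the antimeridian:
print(haversine_km(39.03, 125.75, 61.22, -149.90))     # ~6,000 km (correct)
print(naive_planar_km(39.03, 125.75, 61.22, -149.90))  # ~30,000 km (nonsense)
```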

In addition, more phenomena are becoming observable in the sense that sensors are getting richer for 3D mapping (e.g., LiDAR, ground-penetrating radar) and broader spectra are being captured at finer resolutions. This makes it possible to observe more phenomena at higher levels of precision. For example, Fig. 3 shows a visualization of ground water levels over time, measured by ground water sensors and remote sensing imagery. However, richer and more precise sensor data present new challenges: data volume, velocity, and variety are increasing beyond the capacity of current spatial computing technologies.

Fig. 3

The figure shows the levels of underground water in 2013 compared with the past. Red indicates significant deficiency [NASA]

2.2 Everyone uses location-based services

The proliferation of web-based technologies, cell-phones, consumer GPS-devices, and location-based social media has facilitated the widespread use of location-based services [81]. Internet services such as Google Earth and OpenStreetMap have brought GIS to the masses (e.g., Google Earth has received over a billion downloads [14]). Services such as Enhanced-911 (E-911) and navigation applications are consumed by billions of individuals. Facebook check-in and other location-based social media are also used by over a billion people around the world.

2.3 Every platform is becoming location aware

Spatial computing and cell-phones continue to influence each other due to the increasing need of individuals to know their spatial context, use navigation applications, etc. Recently, smartphone sales have eclipsed those of personal computers [21]. As a result, computing platforms are being increasingly shaped by cell-phones, and thus by spatial computing. This new reality will require reimagining the various layers of the computing stack. Support for geospatial notions within the general computing eco-system has been rich at the application level (e.g., hundreds of projections are supported by ArcGIS). However, more support will be needed at lower layers (e.g., operating system, runtime system) for next-generation spatial computing. Support for geospatial notions will be needed in compilers and computer network security. The possibility exists that GPS circuits will be needed on-chip and that geodetic and Internet infrastructure will be linked.

2.4 Expectations are rising and so are the risks

Recently, spatial computing has become one of the main components of people’s social lives. People use spatial computing every day, through location-based services, route suggestion and navigation, and virtual globes. The wide variety of spatial computing applications and their convenience have earned people’s trust as well as raised their expectations. However, spatial computing has also raised serious concerns over geo-privacy. These concerns must be addressed to avoid skepticism, reduce the public’s discomfort with location-aware services, and make economic entities less liable over geo-privacy issues. Sustainable geo-privacy policy must emerge from civil society. The needs of policy stakeholders must be balanced to ensure public safety as well as economic prosperity. Conversation starters centering on special cases such as emergencies are needed to initiate the extremely challenging but necessary geo-privacy policy discussion.

These profound changes will define the frontiers of future spatial computing research. Today, the field abounds with exciting opportunities at every turn but the stakes have been raised. If spatial computing is to achieve its full transformative potential, it will need to both expand and deepen its research horizons even further. The rest of this document summarizes the promising technologies followed by the research opportunities and challenges that lie ahead.

3 Research opportunities and challenges

Spatial computing’s success to date has created significant new research opportunities in four broad areas: science, systems, services, and crosscutting issues, as detailed in Table 2. First, overcoming the challenges of everyone being a mapmaker and every phenomenon being observable will require spatial computing science to move from fusing data from a few trusted sources to synergizing data across numerous volunteers. Second, facilitated use of location-based services will be needed to make these services available to everyone, as opposed to only the GIS-trained few. Third, surmounting the challenge of equipping every platform to be location-aware will move spatial computing from a few platforms (e.g., PCs) to all platforms (e.g., sensors, clouds). Other opportunities, driven by rising expectations, cut across a number of interdisciplinary fields, such as navigating the human body. The profound changes outlined above have opened exciting new frontiers in spatial computing:

Table 2 Spatial computing opportunities

3.1 Spatial computing sciences: from fusion to synergetics

Historically, spatial computing science dealt with geographic data from highly trained GIS professionals in authoritative organizations with data quality assurance processes. Today, an ever-increasing volume of geographic data, namely volunteered geographic information (VGI), is coming from average citizens via check-ins, tweets, geo-tags, geo-reports from Ushahidi [72], and donated GPS tracks. Due to the nature of VGI, several issues must be addressed, such as the quality of the collected data and the prevention of spatial data fraud. Such data require the transformation of traditional data fusion ideas into a broader paradigm of data synergetics, raising many new issues. For example, we need to be able to manipulate qualitative spatio-temporal data in order to reason about and integrate the qualitative spatial and temporal information that may be gleaned from VGI (e.g., geo-tags, geo-reports, etc.). Spatio-temporal prediction may assist in inferring the location described in a tweet from its content. Additionally, since contending narratives in VGI may lead to alternative maps of a common area from different perspectives, handling multiple competing spatial descriptions from the past and future is essential. Furthermore, spatial and spatio-temporal computing standards are needed to more effectively utilize VGI, for example by associating geo-tags with known geographical locations via history-aware gazetteers.

3.1.1 Qualitative volunteered data and next generation sensor measurement

Qualitative volunteered data and next-generation sensors provide tremendous potential in spatial computing [33]. Much volunteered geographic data today is qualitative, i.e., non-metric, linguistic, topological, contextual, descriptive, cultural, crowd-sourced. Integrating qualitative spatial and temporal information from geo-tags, tweets regarding places, and other VGI into existing data collections will allow us to automate the organization and manipulation of a range of data currently unavailable for use with traditional data [8, 68]. It will make it possible to reason about the relevant and salient features of large, complex data sets. For example, it will allow us to develop and evaluate potential scenarios for humanitarian crises or to perform a post mortem analysis of a natural disaster (e.g., Haiti earthquake [70]). New challenges emerge such as: How does one manage hybrid quantitative and qualitative spatio-temporal data? How should one interpret statements such as “he crossed the street”, “crossed the room”, or “crossed the ocean”? How do we merge existing work on spatial relationships with natural language? How do we develop computationally efficient methods of spatial reasoning with hybrid quantitative/qualitative, discrete/continuous descriptions? How do we deal with the mismatch between qualitative spatio-temporal data and its relationship to the continuous nature of space and time?
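As one concrete ingredient for such qualitative reasoning, the sketch below (a minimal illustration, not a proposal from the workshop) classifies the qualitative relation between two time intervals using Allen's interval algebra, a standard formalism for statements such as "the crossing happened while the light was red":

```python
from typing import Tuple

Interval = Tuple[float, float]  # (start, end), start < end

def allen_relation(x: Interval, y: Interval) -> str:
    """Classify the qualitative (Allen) relation between two time intervals."""
    xs, xe = x
    ys, ye = y
    if xe < ys:  return "before"
    if ye < xs:  return "after"
    if xe == ys: return "meets"
    if ye == xs: return "met-by"
    if xs == ys and xe == ye: return "equals"
    if xs == ys: return "starts" if xe < ye else "started-by"
    if xe == ye: return "finishes" if xs > ys else "finished-by"
    if ys < xs and xe < ye: return "during"
    if xs < ys and ye < xe: return "contains"
    return "overlaps" if xs < ys else "overlapped-by"

# "He crossed the street while the light was red": the crossing interval
# lies strictly inside the red-light interval.
print(allen_relation((2.0, 5.0), (0.0, 8.0)))  # during
```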

Next-generation sensors are becoming richer for 3D mapping (e.g., LiDAR (Light Detection And Ranging) and ground-penetrating radar) and our ability to capture broader spectra at finer resolutions is improving. Next-generation sensors exist on many platforms, such as UAVs (unmanned aerial vehicles) and cellphones that number in the billions. However, spatial heterogeneity is a key challenge. Retrofitting every sensor (e.g., every traffic camera) with specialized equipment such as heated enclosures depending on its spatial location (e.g., Minnesota during the winter) may not be economically feasible. Thus, new ways of determining which parts of the spectrum are most robust to fog, rain, and hail must be investigated. Furthermore, questions such as “What energy sources (e.g., solar, vibration, heat) are most efficient across various geographies, sensors, and climates of interest?” must be addressed.

3.1.2 Spatio-temporal prediction

Geospatial information can also be helpful when making spatio-temporal predictions about a broad range of phenomena, such as the next location of a car, the risk of forthcoming famine or cholera, a criminal’s probable residence [27], or the future path of a hurricane. For example, in Fig. 4, disease cases from the 1854 London cholera outbreak were used with spatial scan statistics to identify the outbreak hotspot around the Broad Street water pump, the likely source of the outbreak. Models may also predict the location of probable tumor growth in a human body or the spread of cracks in silicon wafers, aircraft wings, and highway bridges. Such predictions would challenge the best machine learning and reasoning algorithms, including those leveraging geospatial time series data. We see rich problems in this realm. Many current statistical techniques assume independence between observations and stationarity of phenomena. However, spatio-temporal data often violate these common assumptions. Novel techniques accounting for spatial autocorrelation (the degree of dependency among observations in a geographic space), domain-specific models, and non-stationarity may enable more accurate predictions.

Fig. 4

The figure portrays a significant hotspot analysis using the locations of deaths and water pump sites of the infamous 1854 London cholera epidemic [86]. (Best in color)

For spatio-temporal prediction to be used effectively, several questions should be addressed: How can traditional machine learning techniques be adapted to address the challenges specific to spatio-temporal data (autocorrelation, spatial uncertainty [64, 75], heterogeneity, etc.)? How can we handle data imperfections due to losses? When modifying traditional machine learning techniques for spatio-temporal data, how can we achieve computational efficiency? How can we preserve privacy when mining spatio-temporal data?
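To make the autocorrelation issue concrete, the following minimal sketch computes Moran's I, the classic statistic for spatial autocorrelation; the sites, values, and rook-adjacency weights are made up for the example:

```python
import numpy as np

def morans_i(values: np.ndarray, weights: np.ndarray) -> float:
    """Moran's I, the classic measure of spatial autocorrelation.

    values  -- attribute measured at n locations, shape (n,)
    weights -- spatial weight matrix, shape (n, n); w[i, j] > 0 if sites i
               and j are neighbors, with a zero diagonal
    """
    n = len(values)
    z = values - values.mean()
    num = (weights * np.outer(z, z)).sum()
    den = (z ** 2).sum()
    return (n / weights.sum()) * num / den

# Toy example: four sites on a line, rook adjacency; low values cluster
# next to low, high next to high.
vals = np.array([1.0, 2.0, 8.0, 9.0])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i(vals, w))  # 0.4: positive, so similar values cluster in space
```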

3.1.3 Synthesizing multiple viewpoints of past, present and future

Given the wide variety of data sources, it is not easy to synergize data: to fuse various types of spatial data, synthesize new information from the available data, and conflate or combine related sources of spatial data. Automating map comparisons to identify differences across competing perspectives will enable data analytics on multi-source spatial data. For example, comparing and visualizing the various geo-political claims on the South China Sea requires extensive analysis of past and present claims by a number of legal entities. On the surface this synergetics problem may appear to be traditional data integration, but the problem has more structure in the context of spatio-temporal data, which may allow a larger degree of automation and computational efficiency. The domain semantics offer constraints such as a common, finite, and continuous embedding space (e.g., the surface of the Earth), thus allowing for interpolation and autocorrelation. Equally important is the challenge of how to semantically annotate data and define metadata in a way that ensures its meaning will be reconstructible by future generations [82].

In order to support all of these tasks, it will first be necessary to develop representations that capture both the data and any associated metadata about multiple views of past, present, and future. How can we maintain accuracy and provenance without compromising the semantics of the data? Which techniques are needed to integrate data from various sources without losing their metadata and semantics? How can we produce new sources that can be accurately described? The integration and analysis techniques must also deal with the various modalities and resolutions of the data sources.

3.1.4 Spatial and spatio-temporal computing standards

Spatial data can be used more effectively if events, objects, and names can be easily associated with known geographical locations. These locations can be countries, states, cities, or well-known named places. In this context, there are two main challenges: how to associate an event with a known location using some kind of text and location matching algorithm, and, once a match is made in two different systems, how to identify whether they both map to the same location. The first problem is well known, and several commercial solutions exist to solve it. The second problem is relatively new and requires support from standards bodies. For example, a document might have a reference to Bombay, and a geo-extraction tool can identify that the document is referring to the business capital of India. Once this association is made, the tool might tag the document with the text “Bombay, India” (before 1996). Another tool looking at the same document might tag it with the text “Mumbai, India” (since 1996). When this sort of information is exchanged, further processing is required to reconcile the fact that both documents refer to the same location.
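A minimal sketch of such a history-aware gazetteer follows; the records, identifiers, and dates are hypothetical and serve only to show how two differently tagged documents can be reconciled:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class NameRecord:
    name: str
    place_id: str                 # stable identifier, independent of the name
    valid_from: Optional[date]    # None = since the beginning of the record
    valid_to: Optional[date]      # None = still current

# Hypothetical history-aware gazetteer entries for one city.
GAZETTEER = [
    NameRecord("Bombay", "IN-MH-001", None, date(1995, 12, 31)),
    NameRecord("Mumbai", "IN-MH-001", date(1996, 1, 1), None),
]

def resolve(name: str, when: date) -> Optional[str]:
    """Map a (name, date) pair to a canonical place identifier."""
    for rec in GAZETTEER:
        if rec.name != name:
            continue
        if rec.valid_from and when < rec.valid_from:
            continue
        if rec.valid_to and when > rec.valid_to:
            continue
        return rec.place_id
    return None

# Two documents tagged with different names still reconcile to one place.
assert resolve("Bombay", date(1990, 6, 1)) == resolve("Mumbai", date(2005, 6, 1))
```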

Which sub-areas of spatial computing are ripe for standardization, i.e., where is consensus emerging on a set of common concepts, representations, data-types, operations, algebras, etc.? Which spatial computing sub-areas have the greatest standardization needs from a societal perspective (e.g., emergency responders)? How may consensus be reached in areas of greatest societal need?

3.2 Spatial computing systems: from sensors to clouds

Earlier, spatial computing was used only by a highly trained group of people who needed, and could afford, specialized hardware and software platforms (e.g., ESRI ArcGIS, Oracle Spatial). Today, from enterprise-level computers to wearable devices, spatial computing is used on almost all computing platforms, owing to advances in computing technologies and the abundance of GPS-enabled devices. Since spatial computing is used more diversely across platforms, new spatial computing platforms are needed. These platforms should not only support spatial computing at lower layers of the computing stack, to allow uninterrupted operation on devices with limited hardware capabilities (e.g., IoT, embedded, and wearable devices), but should also allow interoperability across different hardware platforms. New augmented reality technologies with better accuracy and scalability are needed for wearable displays (e.g., Google Glass [95]) and mobile devices. Spatial computing systems need to be adapted to allow real-time, centimeter-scale remote sensing [16, 32, 66, 97] via UAVs and satellite imagery for applications in emergency response, precision agriculture [4, 9], and food, energy, and water management. Moreover, spatial big data will provide new opportunities for cloud computing and will address the limitations of traditional spatial computing, which is inadequate for the current volume, variety, and velocity of spatial big data.

3.2.1 Spatial computing infrastructure

Internet infrastructure consists of the hardware and software systems essential to Internet operation. Location is fast becoming an essential part of Internet services, with HTML 5 providing native support for locating browsers. “Check-in” and other location-based services are becoming increasingly popular in social networks such as Facebook and FourSquare. Geo-location services (e.g., Quova, IP2Location) are increasingly popular for jurisdiction regulation compliance, geo-fencing for digital rights management, fraud detection, etc. Current localization techniques on the Internet rely on distance-bounding protocols using networks of transmitters, receivers, computers, cameras, power meters, etc. Spatial computing infrastructure can be expanded throughout the computing stack (e.g., OS, network, logical, physical) to enable routers, servers, even TVs, to locate themselves in the world and provide location-based services (e.g., evacuation [53] targeting to TVs based on location).

Next-generation infrastructure will enable higher resolution applications, scalability and reliability, and new representation and analysis on more complex domains. Which spatial primitives must be implemented in silicon chips for secure authentication of location (similar to encryption-on-chip)? Can we utilize graphical processing units (GPU) for spatial computations? How can upper-layer software (e.g., OS, GIS applications [60, 85]) take advantage of GPU support without specialized coding? Could we integrate the National Geodetic Survey (ground-based location broadcasts for GPS) with the Internet to more accurately use distance-bounding protocols for location estimation? What is the appropriate allocation of spatial data types [73] and operations across hardware, assembly language, OS kernel, run-time systems, network stack, database management systems, geographic information systems and application programs?
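As a small example of the geo-fencing primitive mentioned above, the following sketch implements the classic even-odd ray-casting point-in-polygon test; it treats coordinates as planar, a reasonable approximation only for small fences, and the fence coordinates are made up:

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (lon, lat); planar approximation for small fences

def inside_fence(p: Point, fence: List[Point]) -> bool:
    """Even-odd ray-casting point-in-polygon test for a simple geo-fence."""
    x, y = p
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Does a horizontal ray from p toward +x cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

campus = [(-93.24, 44.97), (-93.22, 44.97), (-93.22, 44.98), (-93.24, 44.98)]
print(inside_fence((-93.23, 44.975), campus))  # True
print(inside_fence((-93.20, 44.975), campus))  # False
```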

3.2.2 Augmented reality

Augmented reality supplies information about, and alters the view of, the real world using computer graphics that are spatially aligned with the physical space in real time. Augmented reality can give historic information about a place as well as numeric data about the world around the user. It is already used in a variety of settings, such as heads-up displays in airplanes, and has become popular in smartphone applications. Augmented reality will play a crucial role in assisted medicine (e.g., clinical and surgical as well as diagnostic and therapeutic), training and simulation (e.g., medicine, military, engineering, teaching, etc.), in-situ architecture, engineering, and construction, civil/urban planning, assembly and maintenance (Fig. 5), in-situ contextualized learning, and intelligence amplification.

Fig. 5

Experimental augmented reality assistance for an aircraft engine assembly task. A head-tracked, optical see-through, head-worn display overlays graphics on the user’s view of components to be assembled. The view through the head-worn display shows dynamic 3D arrows and labels that provide assistance in spatially aligning components [48]. (Best in color)

The new spatial computing research challenges in this space stem from the need for new algorithms, as well as cooperation between users and the cloud, for full 3D position-and-orientation pose estimation of people and devices and for registration of physical and virtual things.

In order to leverage the benefits of augmented reality, several spatial computing challenges need to be addressed. These include new algorithms and user-cloud coordination techniques to align the physical world with virtual content. The questions arising from these challenges include: What are natural interfaces leveraging all human senses (e.g., vision, hearing, touch, etc.) and controls (e.g., thumbs, fingers, hands, legs, eyes, head, and torso) to interact with augmented reality across different tasks? How can we capture human bodies with their full degrees of freedom and represent them in virtual space? Can we provide automated, accurate, and scalable retrieval/recognition for AR, presentation/visualization of augmented information, and user interfaces that are efficient, effective, and usable? What are the most natural wearable AR displays (e.g., watches, eyewear, cell-phones) for different tasks (e.g., driving, walking, shopping [15])? How do we visualize and convey uncertainty about location, value, recency, and quality of spatio-temporal information? How can ubiquitous interactive room-scale scanning and tracking systems change the way in which we interact with computers and each other? How do we visualize alternative perspectives about a contested place from different stakeholders?
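Registration ultimately reduces to projecting anchored 3D points into the camera image given the device pose; a minimal pinhole-camera sketch follows, with assumed intrinsics and an identity pose for illustration:

```python
import numpy as np

def project(point_world: np.ndarray,
            R: np.ndarray, t: np.ndarray,
            fx: float, fy: float, cx: float, cy: float):
    """Project a 3D world point into pixel coordinates with a pinhole camera.

    R, t           -- device pose (world -> camera rotation and translation)
    fx, fy, cx, cy -- camera intrinsics (assumed/calibrated values)
    Returns None if the point is behind the camera.
    """
    p_cam = R @ point_world + t
    if p_cam[2] <= 0:
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Identity pose; a virtual label anchored 2 m ahead, slightly right and up.
R = np.eye(3)
t = np.zeros(3)
print(project(np.array([0.2, -0.1, 2.0]), R, t, fx=800, fy=800, cx=640, cy=360))
# -> (720.0, 320.0): the pixel where the overlay must be drawn to register
```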

3.2.3 Collection, fusion, and curation of sensing data

Due to rapid improvements and cost reductions in sensor technology, the amount of sensor data available is exploding, and much of this sensor data has a spatial component. Historically, datasets consisted of values along a single dimension (e.g., space or time). As we begin to collect detailed data along both dimensions, we need new techniques to collate and process this data (Fig. 6). Currently, we are able to conduct economical, time-persistent monitoring of a location by placing a sensor at that location. We also have the ability to do economical, space-persistent monitoring by using a sensor to scan a location or space periodically. However, inexpensive, space-time-persistent monitoring of a large area (e.g., a country) over long durations (e.g., a year, a decade) remains an open problem despite recent advances such as wide-area motion imagery (WAMI). WAMI is an important technology that can provide pervasive infrastructure for real-time localization for tasks such as emergency response and health management, and real-time situation awareness for societal-scale applications such as water and energy distribution.

Fig. 6

This schematic plots the precision of current applications that use spatial positioning as a function of the required time interval. The most demanding applications at the shortest time intervals include GPS seismology and tsunami warning systems. At the longest time intervals, the most demanding applications include sea level change and geodynamics [61]. (Best in color)

How do we create the infrastructure for the continuous and timely collection, fusion, and curation of all of this spatio-temporal data? How do we develop participatory sensing system architectures to support multi-spectral and multi-modal data collection through both physical and virtual means? Can we increase spatio-temporal resolution to achieve real-time decimeter-scale localization? How do we exploit existing sensor networks for capturing and processing events?

3.2.4 Computational issues for spatial big data

Increasingly, location-aware datasets are of a volume, variety, and velocity that exceed the capability of spatial computing technologies. Examples of spatial big data include GPS tracks collected using mobile devices, vehicle engine measurements, and temporally detailed road maps. Spatial big data poses an enormous set of challenges in terms of analytics, data processing, capacity, and validation. Specifically, new analytics and systems algorithms are needed that deal with partial data (as the data is distributed across data centers), and the ability to compute global models from partial (local) models is essential. Also needed are novel ways of validating global models computed from local models, as well as processing streaming data before the data is refreshed (e.g., traffic, GPS).

Spatial Big Data (SBD) requires a next-generation computational infrastructure that minimizes data movement, performs in-situ analysis (before data hits secondary storage), and summarizes the most frequently used or intermediate results; such an infrastructure would create a plethora of new technologies with transformative potential. Can SBD be used to remove traditional issues with spatial computing, such as the common problem of users specifying neighborhood relationships (e.g., the adjacency matrix in spatial statistics), by developing SBD-driven estimation procedures? How might we take advantage of SBD to enable spatial models to better capture geographic heterogeneity, e.g., via spatial ensembles of localized models? Lastly, how can we modify traditional big data tools to compute spatial algorithms, which tend to be iterative and interdependent (a problem for the MapReduce framework due to the expensive Reduce step)?
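To illustrate why some spatial aggregations shard cleanly while iterative spatial algorithms do not, here is a minimal grid-binning sketch in the spirit of a map/reduce split; the cell size and coordinates are illustrative:

```python
from collections import Counter
from typing import Iterable, Tuple

def cell_key(lat: float, lon: float, cell_deg: float = 0.01) -> Tuple[int, int]:
    """'Map' step: assign a GPS point to a fixed grid cell (~1 km at this size)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def density_grid(points: Iterable[Tuple[float, float]],
                 cell_deg: float = 0.01) -> Counter:
    """'Reduce' step: count points per cell. Each cell is independent, so the
    reduction shards cleanly across machines -- unlike iterative spatial
    algorithms whose results for one cell depend on neighboring cells."""
    counts = Counter()
    for lat, lon in points:
        counts[cell_key(lat, lon, cell_deg)] += 1
    return counts

track = [(44.9740, -93.2277), (44.9741, -93.2279), (44.9832, -93.2310)]
print(density_grid(track).most_common(1))  # densest cell and its count
```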

3.3 Spatial computing services: spatial cognition first

Traditionally, spatial computing was a skill mastered by only a small number of people, who used sophisticated GIS tools inaccessible to ordinary users. Recent advances in GPS-enabled mobile devices and location-aware applications have allowed ordinary people to use location-based services. Therefore, there is a need to understand people’s requests [62] and to develop new design approaches that make spatial tools “user-friendly” for ordinary people [39]. In addition, spatial cognitive assistance may allow a more natural way to describe routes that are not named (e.g., alleys behind buildings) and may allow people to use landmarks (e.g., “turn when you pass the church”) for routing purposes. Spatial computing may also benefit from understanding crowd movement behavior rather than only individuals’ movement behavior. New opportunities arise from the social media context as well: tweets, messages, posts, etc. may reveal interesting information about a location, as well as early signs of an emergency (e.g., hurricane, accident, tsunami). Finally, people’s spatial skills should be improved so that they can derive more benefit from spatial computing.

3.3.1 Spatial cognitive assistance

Spatial cognition is the study of knowledge and beliefs held by the general public (in contrast to people trained in GIS technology) about location, size, distance, direction, and other spatial properties of places and events in the world [62]. As the community of spatial computing technology users grows (to billions), it is crucial that user interfaces employ spatial cognitive language understood by the general public. For example, navigation maps on cell-phones use egocentric map orientation (e.g., the top of the map points east if the user is heading east, instead of the north-up orientation used by professionals). Second, spatial skills (e.g., localizing, orienting, and reading maps) differ across individuals. Third, the spatial information of interest depends on the task at hand. The importance of matching the spatial tool with the spatial abilities of the user has been well documented, with the appropriate feature set varying greatly with the spatial domain [92]. For example, an automated method to provide routing information based not on street names and addresses but on major landmarks aligns much better with traditional human spatial cognition. Spatial systems are now being specialized for a myriad of users including drivers, bicycle riders (both on-street and trail), wheelchair users, public transit riders, etc.
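Egocentric ("heading-up") orientation is, at its core, a rotation of map-frame offsets by the user's compass heading; a minimal sketch with toy coordinates follows:

```python
import math
from typing import Tuple

def to_heading_up(east: float, north: float, heading_deg: float) -> Tuple[float, float]:
    """Rotate a map-frame offset (meters east/north of the user) into an
    egocentric display frame where 'up' is the user's heading.

    heading_deg -- compass heading, degrees clockwise from north
    """
    h = math.radians(heading_deg)
    right = east * math.cos(h) - north * math.sin(h)
    ahead = east * math.sin(h) + north * math.cos(h)
    return right, ahead

# User heading due east (90 deg): a landmark 100 m to the east appears
# straight ahead on the display.
print(to_heading_up(100.0, 0.0, 90.0))  # ~(0.0, 100.0)
```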

While providing greater capabilities, these systems have in some ways taken over spatial knowledge once held by users. Spatial cognitive assistance can greatly improve human task performance but also has long-term risks such as de-skilling of the human, promoting a deficit of spatial awareness, and vulnerability to infrastructure failure. Thus, the challenge of spatial cognitive assistance lies in (1) determining which cognitive skills are important to preserve, and which may be allowed to atrophy, (2) identifying the trade-offs between task performance and skill retention (and robustness to disaster), (3) designing spatial cognitive assistance to improve users’ knowledge and skills (not just immediate task performance), and (4) developing means of evaluating the effectiveness of spatial cognitive assistance systems. Investigating what it will take to avoid these problems will be an important undertaking in realizing the potential gains of improving the knowledge and skills of technology users in the population, increasing task engagement while reducing distraction and improving safety, and making populations more robust in the face of disaster or infrastructure failure.

3.3.2 Spatial computing for human-human interaction/collaboration

Human-centered spatial computing is a fundamental and overarching set of principles governing the design, implementation, and use of spatial technologies that goes far beyond the design of effective user interfaces. It promises new interactive environments for improving quality of life [6] for all humans (e.g., enabling human-to-human interaction via spatial technology). Already, spatial computing has enabled new types of interaction with location-based social media, organizing activities such as Smart Mobs (spontaneous groupings of people for a single purpose such as coordinating location movement) and Participative Planning (e.g., collaborative design of a landscape, bridge, etc.). It points towards the augmentation of human cognition through the careful design of technologies to improve natural spatial abilities and discourage atrophy of key critical talents and skills. Research in this area could lead to dramatic advances in multiple fields, including more effective management of and response to emergency situations, the minimization of the technology gap between diverse segments of the population, the efficient and ethical use of crowdsourcing and social sensors for spatial data, and making energy consumption transparent in order to empower users to conserve resources with less effort, potentially saving billions of dollars every year.

Key research directions include understanding spatial human interaction in small (e.g., proximal interactions) and large (crowd-sourcing, flash-crowds) settings. Additional questions that merit investigation are: How are geo-social groups formed? How are geo-social groups spatio-temporally organized? What are the spatio-temporal signatures of group behaviors of interest (e.g., compliant, non-compliant)? What factors influence spatio-temporal cognition? What are the dynamics of spatial cognition in a group? What are the shared perceptions of space and time?

3.3.3 Context-aware spatial computing

Context refers to the set of circumstances or facts that surround a particular event or situation (e.g., who is tweeting or speaking, where they are, physical features in the situation, etc.). The spatio-temporal context of a person or device includes their location, places, trajectory, as well as related locations, places, and trajectories. Today, spatial computing systems often use the current location of a user to customize answers. For example, a search by a traveler for a gas station or ATM often lists the nearby instances. However, the context of the route and destination may enhance the place recommendation so that gas stations or ATMs that the traveler has already passed are not recommended.

Interesting future research directions in spatial cognition that account for context include investigating how average users interpret Tobler’s first law of geography, i.e., the notion that “Everything is related to everything else, but near things are more related than distant things” [90], as a basis for map visualization (spatialization) of other information (e.g., news topics). Do people assume that distances between items in visualizing a map are proxies for similarities between items? In general, do maps and geographic context affect the spatial cognition, abilities, and skills of people and local populations? If spatial cognition varies across different geo-contexts (e.g., places, countries, regions), how should spatial computing systems accommodate the geographic heterogeneity? How may one predict the favorite places for a person in a new city based on his/her home-city trajectories in a privacy-protected manner? Next-generation spatial computing will aim to identify the fundamental axes/dimensions of context-aware computing (space, time, and purpose), as well as include common variables, taxonomies, and frameworks to fuse these axes. Future technologies [40] will strive towards building systems, products, hardware, methods, and services that can ally/differentiate computation along these axes. Finally, there is an important exception to Tobler’s first law, known as teleconnections, which will also demand attention. Teleconnections (e.g., El Niño/La Niña events) play a crucial role in climate science and must also be accounted for in next-generation spatial computing systems.
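Inverse-distance weighting (IDW) is perhaps the simplest computational embodiment of Tobler's first law; the sketch below, with toy sample values, estimates a value at a query point by letting nearby observations dominate:

```python
from typing import List, Tuple

def idw(query: Tuple[float, float],
        samples: List[Tuple[float, float, float]],
        power: float = 2.0) -> float:
    """Inverse-distance-weighted estimate at `query` from (x, y, value) samples.

    Tobler's first law in miniature: every sample contributes, but nearby
    samples dominate because weights decay as 1 / distance**power.
    """
    qx, qy = query
    num = den = 0.0
    for x, y, v in samples:
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 == 0.0:
            return v  # query coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

readings = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (10.0, 10.0, 100.0)]
print(idw((0.5, 0.0), readings))  # ~15: the two near samples dominate
```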

3.3.4 Improving spatial abilities and skills in students

Spatial abilities and skills can be described as the ability to use and work with spatial data. Such abilities include navigation and learning spatial layouts, as well as mental rotation, transformation, scaling, and deformation of physical objects across space-time (e.g., spatial reasoning). Spatial skills strongly predict who will go into and succeed in science, technology, engineering, and math (STEM) fields [90]. While spatial skills are a particularly important component of scientific literacy, they are often overlooked. As the National Science Board [91] recently observed, “a talent highly valuable for developing STEM excellence - spatial ability - is not measured and hence missed” (p. 9). There is a growing need for people who have STEM skills and can handle STEM-intensive jobs, and spatial training may help recruit people into STEM education. People’s spatial skills can also be improved through education programs at different levels (e.g., K-12, undergraduate, graduate). Significant challenges lie in how to improve the knowledge and skills of technology users in the general population. How do we increase spatial task engagement and reduce distraction, while improving safety? Which spatial skills are weakened by the use of spatial computing (e.g., map localization)? How can people’s spatial thinking and STEM learning be improved? How do we effectively structure educational opportunities to serve students talented in spatial ability? How may STEM talent be further developed by using advances in spatial computing? How may spatial computing be designed to further strengthen spatial abilities of interest to STEM disciplines?

3.4 Cross-cutting issues and interfaces

Emerging spatial computing sciences, systems, and services provide unprecedented opportunities for research and application developments that can revolutionize our ways of life, and in the meantime lead to new spatial-social questions about privacy. An example of the potential may be seen in the ubiquity of GPS-enabled devices (e.g., cell-phones) and location-based services. As localization infrastructure and map data sets reach indoors, there is an expectation that the support that existed for an outdoor context will also be available indoors [52]. An example of the risks is the issue of geo-privacy. While location information (GPS in phones and cars) can provide great value to users and industry, streams of such data also introduce privacy concerns of stalking and geo-slavery [25, 35]. Because they worry that location information may compromise their privacy, many people still do not use mobile commerce [49, 55]. Moreover, some attempts from the computer science field have caused more harm than good. Spatial computing research is needed to address many questions, such as whether people reasonably expect that their movements will be recorded and aggregated [71].

3.4.1 Ubiquitous computing

Ubiquitous computing is computing everywhere, anytime. It is computing indoors as well as outdoors, bio-spatial as well as geo-spatial, spatially aware but also spatially contextualized. We believe that, in the coming decade, spatial computing will need to enable location-based services indoors, where people spend 90 % of their lives. As localization infrastructure and map data sets reach indoors, there is an expectation that the support that existed for an outdoor context will also be available indoors. For example, visitors to an office building may expect the GPS service on their phone to lead them to a particular room in the building. How do notions such as nodes, edges, shortest paths, average speed, etc., translate to an indoor context? In other words, localization infrastructure and map data sets are being challenged to keep up with us wherever we go. How should scalability be addressed, where architectures are faced with handling massive amounts of spatial data in real time? How may spatiotemporal data collected at various resolutions be served (commensurate with the application requirements)? How do we verify the quality of spatiotemporal data, enabling error propagation that flows with the served data?

Although spatial databases have traditionally been used to manage geographic data, the human body is another important low-dimensional physical space that is extensively measured, queried, and analyzed in the field of medicine. The 21st century promises a spatio-temporal framework for monitoring health status over the long term (automated analysis of dental X-rays, mammograms, etc.) or predicting when an anomalous decay or growth will change in size. A spatial framework may play an important role in improving health-care quality [18] by providing new avenues of analysis and discovery on the progression of disease and the treatment of pathologies (e.g., cancer). Answering long-term questions based on spatial medical data sets gathered over time poses numerous conceptual and computational challenges, such as developing a reference frame analogous to latitude/longitude for the human body, implementing location determination methods to know where we are in the body, developing routing techniques [80, 83, 85] in a continuous space where no roads are defined to reduce the invasiveness of certain procedures, defining and capturing change across two images for understanding trends, and scaling to potential petabyte- and exabyte-sized data sets. Developing a reference frame for the human body entails defining a coordinate system to facilitate looking across snapshots. Rigid structures in the body such as bone landmarks provide important clues to the current spatial location in relation to soft tissues. This has been used in stereotactic surgery to locate small targets in the body for some action such as ablation, biopsy, or injection [20, 56]. Although the reference frame might be useful in defining a coordinate system, location determination is needed to pinpoint specific coordinates in the body; an analogy is using GPS to determine one’s location on the Earth. If we know our location in the body, it becomes possible to answer routing questions, but routing over the body’s spatial network over time is a difficult task given that the space is continuous. An example of this problem is finding the shortest path to a brain tumor that minimizes tissue damage. What are the corresponding definitions of path weights and shortest paths for routing in the human body?
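To make the body-routing idea concrete, one simple approximation is to discretize the continuous space into voxels and run Dijkstra's algorithm with costs that encode modeled tissue damage; the sketch below uses a 2D slice and entirely hypothetical cost values:

```python
import heapq
from typing import Dict, List, Tuple

Voxel = Tuple[int, int]  # 2D slice for brevity; a real volume would be 3D

# Hypothetical per-voxel "damage" costs on one image slice:
# higher = more critical tissue. Values are illustrative only.
DAMAGE = [
    [1, 1, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
    [9, 9, 1, 1],
]

def least_damage_path(start: Voxel, goal: Voxel) -> float:
    """Dijkstra over the voxel grid: path weight = summed damage of voxels entered."""
    rows, cols = len(DAMAGE), len(DAMAGE[0])
    dist: Dict[Voxel, float] = {start: 0.0}
    heap: List[Tuple[float, Voxel]] = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + DAMAGE[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Entry point at the top-left, target "tumor" at the bottom-right.
print(least_damage_path((0, 0), (3, 3)))  # 6.0: the path skirts high-damage voxels
```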

3.4.2 Persistent sensing and monitoring

Advances in sensing and monitoring will enable the next frontier in human and environmental health. For example, tele-health is a critical emerging market that is expected to become a significant portion of the $2.5 trillion health-care market. Supporting emerging applications of sensor-based environmental monitoring with relevance to human security and sustainability will be of critical importance. The possibilities are endless and include micro-robots within the human body for real-time and active health monitoring; detecting, extracting, modeling, and tracking anomalies and abnormalities (new phenomena); large-scale monitoring and modeling of the surrounding environment to study its effect on public health [12, 57, 75, 77, 88]; and empowering the interactions between the physical and virtual worlds, e.g., through augmentation, personalization, context awareness, immersion, and integration. The research challenges stem from modeling user intent and behavior, presenting outcomes of user inquiries using new 3D interfaces that provide understandable context and enable early error detection, on-demand disparate data integration that evolves with emergent behavior, and real-time data analysis, modeling, and tracking of crowd movements (Fig. 7), human [96] and environmental events and phenomena. The late 20th century focused mainly on historical records or very short-term forecasts of a few days. The 21st century requires future projections for the medium term, extrapolating sensor data via geographical models such as with climate data [76]. For example, in Fig. 8 the output of a traffic density monitoring system is compared with NO₂ air pollution levels to determine the effect of vehicles on air quality [3]. Such examples give rise to questions such as: How do we conceptualize the spatio-temporal world measured by sensors? How do we explain sensor-observed spatio-temporal phenomena through the application of appropriate methods of analysis and models of physical and human processes? How do we use spatio-temporal concepts to think about sensor-observed spatio-temporal phenomena? What are scalable and numerically robust algorithms for spatial statistical [38] modeling? What are algorithm design paradigms [46] for spatio-temporal problems that are NP-hard, or that violate the dynamic programming assumption of stationary ranking of candidates?

Fig. 7

Simulation and an actual picture of pedestrians at the Shibuya Crossing, Tokyo [44]. (Best in color)

Fig. 8

Air samples from 150 sites (a) across the five neighborhoods of New York in 2009. Pollutants such as NO₂ (b) can cause serious health problems [3]

3.4.3 Trustworthy localization and transportation systems

Spatial computing is expected to produce tools, procedures, and an infrastructure for the rapid development, evaluation, and deployment of intelligent transportation systems. With the potential to save 2.9 billion gallons of wasted fuel, six million crashes per year, 4.2 billion hours of travel delay, and $80 billion in urban congestion costs, next-generation trustworthy intelligent transportation systems (e.g., Waze [93]) have tremendous transformative potential for society [59]. Figure 9 illustrates this with an example, showing hotspots of congested route segments that may help drivers avoid congestion as well as help officials plan road network modifications. In order to realize increased safety, optimized travel, reduced accidents and fuel consumption, and increased mobility of objects, several challenges must be overcome, including: understanding the privacy issues that users have in sharing their spatio-temporal trajectories and creating a trusted environment for the release of location data; online auditing that enables users to verify the usage of their location, activity, and context data (who is using the user’s data, for what purpose, and at what time); establishing quality-based user contracts that mandate systems to offer quality guarantees with error correction mechanisms; and enabling collaborative use of spatial computing systems by communities of location-based social network users.

Fig. 9

GPS data highlighting road segments with traffic congestion. Road network modifications may help reduce congestion [30]. (Best in color)

A significant research challenge toward the realization of trustworthy transportation systems is to develop privacy-preserving protocols for efficiently aggregating spatio-temporal trajectory data with the goal of providing information about motion flows without revealing individual trajectories. Another major research direction toward enabling trust in transportation systems is the verification of the integrity and completeness of the results of geospatial queries to defend not only against inadvertent data loss and corruption (e.g., caused by faulty hardware and software errors) but also against malicious attacks (e.g., aimed at causing traffic congestion). Relevant research should evaluate recent advances in applied cryptography and secure data management, such as authenticated data structures (e.g., [42]), differential privacy (e.g., [26]), and oblivious storage (e.g., [41]) in the context of spatial computing needs, e.g., location authentication and geo-fencing of entities. How can we ensure location authentication and authenticity despite GPS-spoofing and other location manipulation technology? Even if location authentication is secure, is it robust and precise enough to guarantee usability for consumers? What type of location authentication is possible without requiring all-new Internet infrastructure?
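As one concrete instance of the differential privacy direction cited above, the sketch below releases per-region visit counts under the Laplace mechanism. The assumption that each individual contributes at most one visit (giving the counts sensitivity 1) and the choice epsilon = 0.5 are purely illustrative:

```python
import random
from collections import Counter
from typing import Dict, List

def private_region_counts(visits: List[str], epsilon: float = 0.5) -> Dict[str, float]:
    """Per-region visit counts released under the Laplace mechanism.

    Assuming each individual contributes at most one visit, the count query
    has sensitivity 1, so adding Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy. epsilon = 0.5 is an illustrative choice.
    """
    counts = Counter(visits)
    # The difference of two i.i.d. exponentials with rate epsilon is a
    # Laplace(0, 1/epsilon) random variable.
    return {
        region: c + (random.expovariate(epsilon) - random.expovariate(epsilon))
        for region, c in counts.items()
    }

# One coarse region label per commuter; individual trajectories are never released.
print(private_region_counts(["downtown", "downtown", "airport", "campus", "downtown"]))
```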

3.4.4 Understanding geo-privacy concerns

Spatial computing has been advanced by state-of-the-art technologies in GPS devices and wireless communications (whereas in the eighteenth century even the longitude problem was one of the hardest problems of the day [87]) [28, 34, 74]. On the end-user side, the widespread use of smart-phones, handheld devices, and tablets has added new dimensions to spatial and temporal computing. Every click on a smart-phone bears information about the individual’s behavior. Every screen touch and every step we take with a smart-phone in our pocket indicates where we’ve been and where we’re heading, what we’ve been doing and what we plan to do, where we live and where we work, the places we visit and the movies we watch, our likes and our dislikes, what we do on our own and what we do jointly with friends [94]. The future calls for data management systems that pay attention to knowledge discovery and behavior mining of individuals given their spatio-temporal footprints. At the same time, however, addressing user geo-privacy concerns will have to remain a priority. Individuals and groups are keenly interested in the ability to seclude geospatial information about themselves and thereby reveal their geospatial information selectively. Already, many location-based services are held back in the marketplace due to perceived threats to user privacy [59]. Optimists predict that a new generation of location-based services can be built that fully respects individual user privacy. Others fear that the geo-privacy problem is a dead end and that the only feasible solution is to “secure” users’ personally identifying information (PII), including their location, in cages that are accessible by (and only by) trusted parties.

4 Geo-privacy policy

United States policy makers have the opportunity to take the lead in the global race to establish a new geo-privacy paradigm. Achieving a consensus across American society and public safety interests will increase the likelihood that the United States will establish industry clusters in the geo-privacy realm without spooking consumers. If we don’t clarify soon what can and can’t be done with users’ geo-data, we will lack the legislation and directives needed to protect U.S. jobs as well as the country’s competitive advantage on a global scale, as many European countries have already begun work in this area [37].

4.1 Geo-privacy groups, interests and risks

Given the competing interests and risks among stakeholders, it is extremely challenging to develop geo-privacy policies acceptable to all groups. There is a need for deep conversations spanning these groups to identify common ground. We suggest a few possible approaches to begin this conversation. As summarized in Table 3, sustainable geo-privacy policy emerges from the balance of civil society, economic prosperity, public safety, and policy makers.

Table 3 Groups, interests and risks to consider for geo-privacy policy

Geo-privacy policy affects civil society, economic prosperity, public safety, and policy makers. Consumers may reap the rewards of location-based services and other spatial computing related technologies while being provided with certain basic protections. Companies are concerned with reducing liability amid policy uncertainty. Geo-privacy policy is critical due to the increased consumer concern about intrusion into their daily lives and the mounting pressure on Internet giants such as Facebook, Google [31], and Microsoft to adjust to the new mobile world. For example, the New York Times (NYT) reported: “Making money will now depend on how deftly tech companies can track their users from their desktop computers to the phones in their palms and ultimately to the stores, cinemas and pizzerias where they spend their money. It will also depend on how consumers - and government regulators - will react to having every move monitored.” Public safety will benefit from improved geo-targeting and geo-precision during emergencies, increasing public trust and compliance. On the other hand, public safety officials risk false alarms due to a lack of geo-precision, leading to mistrust, lower compliance, and potential loss of lives among both the public and first responders. Policy makers have a tremendous opportunity to spur the economy by unleashing the m-commerce market through geo-innovations, but they risk public trust. Technology has many possibilities, but geo-privacy policy is indispensable in unleashing its full potential.

4.2 Geo-privacy policy conversation starters

The U.S. needs to have a public discussion of geo-privacy issues. Starting and maintaining such a discussion is challenging, but essential to timely policy formulation. Table 4 lists several geo-privacy policy conversation starters.

Table 4 Geo-privacy policy conversation starters

We believe the conversation needs to begin where there is likely to be easy agreement among stakeholders, such as natural disasters and emergency response. Policy must facilitate response to emergency scenarios, as was done in the past for Enhanced 911 (E-911) [47]. The second conversation starter extends this idea to differential geo-privacy, where the chance of learning new information about an individual is minimized while the accuracy of queries is maximized. An example is geo-targeting during emergencies such as hurricanes or earthquakes, where affected populations are warned without the need to store their locations (e.g., the Commercial Mobile Alert System (CMAS)). The third conversation starter advocates sending applications to data on personal devices (e.g., cellphones, vehicle-embedded personal computers) instead of vice-versa, which has tremendous promise in facilitating fuel-saving eco-routing services, as people may otherwise hesitate to send their GPS trace information to a third party. Geo-privacy risks are minimized assuming such applications are tested and certified to avoid data leaks. The fourth conversation starter calls for maintaining transparent transactions, where information such as location traces and the volume of transactions is made available to an individual by the entities that collect such information. Additionally, the purposes for which such information is collected should be specified up front (i.e., before or at collection), and the subsequent use of location traces should be only for the previously agreed upon purposes. The fifth conversation starter concerns the creation of responsible entities for storing location traces (e.g., akin to the credit bureaus or the census) for publishing geo-statistics while protecting confidentiality. For example, geo-statistical data such as hourly population counts of different areas could be aggregated to support urban planning, traffic management, etc. In Fig. 10, for example, anonymous location data from a cellular phone network was used to determine the work and life trends of people in Morrison, NJ, which may help urban informatics planning tasks. The idea behind GPS data collection is not to widely distribute any of the GPS tracks and instead to “secure” the user’s personally identifying information, including location, in caches that are accessible by (and only by) trusted parties or applications that are sent to the data. Geo-privacy in spatial computing is a unique discipline, as it requires experts from both a data mining and a security perspective.

Fig. 10

Laborshed of Morrison, NJ. Anonymous location data from cellular phone networks illustrates how people live and work [13]. (Best in color)

4.3 Cross cutting benefits of geo-privacy policy

Policy makers have already had a major impact on spatial computing through policies that enabled Enhanced 911 (E-911) [47] for linking with appropriate public resources, GPS for use by the general public, and CMAS. Great opportunities lie ahead in leveraging users’ locations and expected routes in proactive services and assistance, ad impressions, and healthcare. Many of these benefits are described in the 2011 McKinsey Global Institute report, which estimates savings of “about $600 billion annually by 2020” in terms of fuel and time saved [59] by helping vehicles avoid congestion and reduce idling at red lights or left turns. With proper geo-privacy policies in place, spatial computing may more effectively help vehicles avoid congestion via next-generation routing services. Eco-routing may leverage various forms of spatial big data to compare routes by fuel consumption or greenhouse gas emissions rather than total distance or travel time. Policy makers have an opportunity to improve consumer confidence in the use of eco-routing by paving the way for the construction of a new generation of location-based services that fully respect individual user privacy.

5 Final considerations

In the coming years, spatial computing will create a huge number of opportunities for scientists. However, its societal impact needs to be taken into account. It is vital that U.S. policymakers clarify users’ geo-privacy rights; without that, it will be difficult for spatial computing to achieve its full transformative potential. We must also acknowledge the unique and daunting computational challenges that working with spatio-temporal data poses.

Successfully harnessing the potential of these datasets will require significant U.S. investment and funding of spatial computing research. Currently most spatial computing projects are too small to achieve the critical mass needed for major steps forward. Federal agencies need to strongly consider funding larger and bolder efforts involving a dozen or more faculty groups across multiple universities. Bolder ideas need to be pursued perhaps by leveraging existing mechanisms such as: NSF/CISE Expeditions in Computing, NSF Science and Technology Centers (STC), NSF Engineering Research Centers (ERC), U.S.-DoD Multi-disciplinary University Research Initiative (MURI), NIH Program Project Grants (P01), U.S.-DoT University Transportation Centers (UTC), U.S.-DoE Advanced Scientific Computing Research (ASCR) Centers, and U.S.-DHS Centers of Excellence.

Furthermore, spatial computing scientists need more institutional support on their home campuses. Beyond one-time large grants, it will be necessary to institutionalize spatial computing research programs to leverage enduring opportunities, as acknowledged by the large number of research universities establishing GIS centers (akin to the computer centers of the 1960s) on campus to serve a broad range of research endeavors including climate change, public health, etc. Given its cross-cutting reach, NSF/CISE can establish computer science leadership in this emerging area of critical national importance by creating a dedicated, enduring research program for spatial computing parallel to CNS, IIS, and CCF.

A number of agencies have research initiatives in spatial computing (e.g., the National Cancer Institute’s Spatial Uncertainty: Data, Modeling, and Communication program, and the National Geospatial-Intelligence Agency’s Academic Research Program (NARP)). However, spatial computing and the agencies themselves could benefit from multi-agency coordination to reduce competing projects and facilitate interdisciplinary and inter-agency research. Spatial computing has already shown its success through various economic benefits, and these benefits can be multiplied by further spatial computing research.