Cartography

Liqiu Meng
Living reference work entry

Abstract

The chapter addresses the main development trends and research challenges in cartography in a society characterized by mobility, cloud computing, and Volunteered Geographic Information (VGI). Starting from an analysis of the relations between the real world and its manifold digital worlds and between official and commercial geodata suppliers, the author traces the parallel trends of going global and going local with geosensor networks and location-based services. She stresses the growing significance of VGI and open source platforms as well as their influence on cartographic visualization. Representative research and development results on geodata enrichment, spatiotemporal modeling, and map design are presented. Emerging research demands in the fields of map mash-ups, incremental map generation, and geovisual analytics are identified.

Keywords

Spatial objects, Volunteered Geographic Information, geovisual analytics, commercial service providers, cartographic visualization

1 The Real World and Its Digital Models

The cartographic communication process in the digital era begins with the conversion of the physical world into a digital model at a predefined spatiotemporal and semantic resolution. A systematic and timely digitization of the earth’s surface is essentially enabled by front-end sensing technologies. Since the launch of the first earth-observing satellite, ERTS-1, in 1972 in the USA, our worldview has changed not only physically but also psychologically. The things we share with each other on the earth appear more valuable than those that separate us (Siegmund 2011). Alongside the dramatic development of satellite remote-sensing technologies, aerophotogrammetry, close-range photogrammetry, and terrestrial surveying have been continuously refined and extended as well. Never before has the earth’s surface been so quickly, intensively, and precisely scanned and sliced into multiple layers and facets.

Each version of the digital world, whether acquired by optoelectronic sensors, interferometric radar sensors, laser scanners, or other surveying technologies, represents a newly discovered earth with basic information about its geometric, radiometric, and dynamic properties. If different images of the same region from multiple sensors exist, a more precise digital version may be derived by means of sensor fusion. Similarly, a more current digital world can be captured if the changes between its versions from different time periods can be automatically detected.

Satellite remote-sensing missions are usually designed and operated by national and international agencies and firms. Their rapid growth in number and cross-border cooperation has led to the emergence of the so-called geosensor network (GSN). A GSN consists of a number of sensors of similar or different types which have self-positioning functions and can communicate with one another in a wired or wireless manner. In 2001, the Open Geospatial Consortium (OGC) initiated the Sensor Web Enablement with the aim of finding, accessing, and controlling all sensors or sensor clusters via the Internet. The interplay of these networked geosensors makes it possible to capture, analyze, and disseminate information from large regions, including crowded urban spaces and terrains inaccessible to humans (cf. Born et al. 2008; Heunecke 2008; Schulze et al. 2010).

A GSN constituted by globally networked satellite missions for telecommunication, earth observation, and exploration, along with high-precision global positioning systems and international earth rotation and reference systems, is considered high-end. It delivers well-structured and seamless digital worlds. However, the information from a high-end GSN is often aggregated over census units, administrative units, postal districts, catchment areas, etc. This may limit the precision of planning results or decisions. With the growing instrumentation of our daily life during the recent decade, the trend of going global is now being paralleled by a trend of going local, represented by the low-end GSN. Being composed of inexpensive mobile devices, a low-end GSN allows specialized and individualized geodata acquisition. In this context, each traveling person is an active sensor (Goodchild 2007). With his/her own sense organs and pocket devices such as cameras, compasses, and GPS receivers, he/she may observe a selected hot spot in the finest possible detail and disseminate the data on the Internet.

Unlike electromechanical surveying instruments that deliver objective, parameterized, and formatted geodata, humans perceive their surroundings according to personal interests and provide emotionally loaded experience reports for themselves and for others (Krisp and Meng 2013). They describe not only “what,” “when,” and “how much” but also impressions of illumination, weather conditions, smell, taste, noise, gesture, mood, and other perceptible phenomena that are neither measurable nor projectable. By connecting all these personal stories with one another and with the sites where they originate, we may gain an improved insight into the world (Clark 2011). Data from individual persons are geometrically imprecise but semantically valuable and can be flexibly applied in time-critical decision processes.

Owing to the mass participation in geosensor networks, the unique physical world has been unfolded into numerous digital models. Geodata streams, or big data, are dynamically poured into cartographic visualization and analytical processes.

2 Geodata Supply and Integration in the Cartographic Process

As mentioned in the previous section, different stakeholders are involved in the digitization of our physical world. Depending on their motivation and purpose, these stakeholders may be grouped into official, commercial, and volunteered geodata suppliers.

Official geodata suppliers are typically represented on the one hand by surveying and mapping agencies responsible for transdisciplinary framework geodata and on the other hand by diverse public agencies responsible for geo-referenced thematic data from various disciplines. They are obliged to acquire, update, and distribute geodata in a systematic way and to develop basic services for public and private target groups. In doing so, official geodata suppliers have to uphold predefined quality standards and geodata infrastructures. Their data and services, with or without charge, are regarded as reliable, authoritative, copyrighted, application neutral, and interoperable.

Commercial geodata suppliers such as TeleAtlas, NavTeq, and con terra conduct the systematic acquisition and enrichment of geodata for special purposes such as navigation, software sales, consultancy, and training. They provide geodata in their own data formats in line with competition regulations. Driven by marketing policy, they may purchase existing official geodata and enrich them geometrically and semantically with their own datasets. The quality and interoperability of their geodata depend largely on the demands of target users and therefore cannot be judged in general as good or bad.

Volunteered geodata suppliers are usually individual persons or like-minded social groups. They capture and disseminate the so-called Volunteered Geographic Information (VGI) about what happens “here” and “now.” Their activities are context aware and take place spontaneously and sporadically. VGI can take the form of geotags that can be freely shared by other Internet users. Without having to follow uniform quality criteria, VGI is either formatted according to an open standard or remains as unstructured fragments describing the geometries and/or meanings of the underlying geo-objects. This kind of crowdsourcing is free of charge and growing exponentially on a daily basis. The undertaking of VGI acquisition and dissemination may seem purposeless and ethically unaccountable, but it represents a just-in-time complement to official and commercial geodata policy. Time-critical applications such as evacuation and rescue work have essentially benefited from the radical openness, flexibility, and location-based intelligence of VGI (Meier 2010). With the proliferation of cost-efficient geodata, open exchange formats have emerged and become popular in recent years. A number of widespread formats have been reported (Udell 2005; Ferrate 2008; Turner and DuVander 2010):
  • GML (Geography Markup Language) as an application of XML (Extensible Markup Language) for spatial objects

  • GPX (GPS Exchange Format) from TopoGrafix

  • KML (Keyhole Markup Language) from Google Earth

  • Shapefile from ESRI

  • GeoRSS as an extension of RSS (Really Simple Syndication) for geo-referenced contents

  • GeoJSON as an extension of JSON (JavaScript Object Notation) for simple and complex geometries of spatial objects
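
To make the last format concrete, the following minimal Python sketch (using only the standard json module; the coordinates and attribute values are invented for illustration) serializes a single point of interest as a GeoJSON feature. GML, KML, and GeoRSS encode comparable content in XML syntax.

    import json

    # A hypothetical point of interest encoded as a GeoJSON Feature.
    # GeoJSON stores coordinates as [longitude, latitude] (WGS84).
    feature = {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [11.5755, 48.1374]},
        "properties": {"name": "Marienplatz", "category": "square"},
    }

    print(json.dumps(feature, indent=2))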

The growing significance of commercial geodata and VGI is challenging the early monopoly of official geodata suppliers and forcing them to work closely with enterprises and individual Internet users on the efficient exchange, integration, and collective maintenance of geodata through constructive competition. In order to formalize unstructured datasets and make them queryable, supporting approaches are needed, such as intelligent search engines, data mining, and cloud computing methods that allow access to information services across the borders of individual computers or computing centers.

Research demands also exist for the queryable accommodation of temporal information about individual geo-objects. One of the first investigations was reported by Fan (2010). In his work, a spatiotemporal data model based on CityGML was developed for the integration of semantic and geometric changes of 3D buildings. Changes beyond a certain significance level are recorded as events. The data items in this model can be doubly indexed, i.e., according to the objects involved in an event on the one hand and according to the events that happen to an object on the other. Such an index enables both object-based and event-based queries. While an object-based query allows the user to retrieve the geometries and attributes of individual objects as well as their spatiotemporal relationships, an event-based query delivers answers to five fundamental questions characterizing an event: “what,” “where,” “when,” “who,” and “how.”
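
The double index can be pictured with a small sketch. The following Python fragment only illustrates the indexing idea, not Fan’s (2010) actual CityGML-based schema; all names and fields are invented:

    from collections import defaultdict

    # Sketch of a double-indexed spatiotemporal store: each event knows the
    # objects it involves, and each object knows the events it undergoes.
    events_by_object = defaultdict(list)   # object id -> events
    objects_by_event = {}                  # event id  -> object ids

    def record_event(event_id, when, what, object_ids):
        event = {"id": event_id, "when": when, "what": what}
        for obj in object_ids:
            events_by_object[obj].append(event)
        objects_by_event[event_id] = list(object_ids)

    record_event("e1", "2009-06-01", "facade renovation", ["building_42"])

    # Object-based query: what has happened to building_42?
    print(events_by_object["building_42"])
    # Event-based query: which objects were affected by event e1?
    print(objects_by_event["e1"])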

With regard to the structured datasets, a number of efficient integration approaches have been developed and implemented for three purposes:
  • One-sided or double-sided enrichment of datasets of the same type from the same geographic region; an official street dataset, for example, may be complemented with routing information from a commercial navigation database, and commercial navigation databases in turn may be complemented by local and specific VGI. The enriched dataset usually shows improved precision, completeness, and currency. Figure 1 demonstrates an example of the enrichment of street data from the official DLM-De with navigation attributes from TeleAtlas (Zhang 2009); a toy sketch of the attribute-transfer step follows the figure captions below.

  • Vertical or horizontal conflation of various topologically related datasets from the same geographic region; the integration of settlements and a hydrological network with a terrain model, for instance, constitutes a vertical conflation, while stitching bikeways and pedestrian paths together with motorways is treated as a horizontal conflation. In Fig. 2, some conflation results of pedestrian paths from the official DLM-De with motorways from NavTeq are illustrated (Zhang 2009). These results can be further conflated with a public transport network as shown in Fig. 3. The output dataset can then serve as a test bed for the implementation of multimodal navigation algorithms as reported in Liu (2011) (Fig. 4).

  • Connection of database schemas based on the ontological similarity between datasets of the same type but from different geographic regions; for example, the digital landscape models of different countries are defined in different schemas and languages. A conceptual comparison between two schemas is possible if ontologically similar object types can be identified (cf. Kieler 2008).

Fig. 1

Streets from the official DLM-De (top), after enrichment with the navigation attributes from the commercial TeleAtlas (bottom) (Zhang 2009)

Fig. 2

Conflation of pedestrian paths from the official DLM-De (gray) with motorways from the commercial NavTeq (brown) (Section of Munich) (Zhang 2009)

Fig. 3

Further conflation of results from Fig. 2 (black) with the public transport network of Munich (Tram: red, Commuter railway: green, Subway: blue) (Liu 2011)

Fig. 4

Two routing suggestions from an automatic multimodal navigation service based on the conflation results from Fig. 3 (Motorway: black, Pedestrian path: brown, Tram: red, Commuter railway: green) (Liu 2011)
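
The enrichment shown in Fig. 1 can be pictured as a two-step procedure: matching segments across datasets, then transferring attributes. The matching itself is the hard research problem treated by Zhang (2009); the toy Python sketch below only illustrates the attribute-transfer step and assumes the matching result is already given (all identifiers and attribute names are invented):

    # Toy attribute transfer: official geometry is kept as the reference,
    # routing attributes from matched commercial segments are attached to it.
    official = {"s1": {"geometry": "...", "road_class": "residential"}}
    commercial = {"c9": {"speed_limit": 30, "one_way": True}}
    matches = {"s1": "c9"}  # result of a (not shown) geometric matching step

    for off_id, com_id in matches.items():
        official[off_id].update(commercial[com_id])

    print(official["s1"])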

Geodata integration is an inseparable part of the iterative cartographic process ranging from geospatial data modeling to usability testing, as shown in Fig. 5. Visualization cannot begin until the dataset is available and suitably prepared. A successful map design requires, on the one hand, an insight into the data model, the data contents, and their quality aspects and, on the other hand, an insight into user requirements and user characteristics. Whenever something goes wrong with map use (e.g., a walker loses his way, a military weapon hits a wrong target), cartographers immediately become the focus of public criticism. Whether cartographers have really caused the mischief or are just its scapegoats, map mistakes, regardless of their type and origin, are always condemned as the fault of cartographers. For this reason, the cartographer must participate in the task of modeling the real world, accompany the subsequent working steps, and treat map use as an inseparable step of the cartographic process. Although different specialists attend to the different steps with different intensity and focus, the cartographer is the only one who has to work with all participants, maintain an overview of error sources, and keep the accumulated errors beneath a given threshold.

Fig. 5

The iterative cartographic process

3 The State of the Art of Maps

A map design procedure starts from a preprocessed primary digital model and ends with maps as secondary models of reality. In ancient times, maps were strongly associated with privilege and power. Their unbeatable visualization capability made maps the most coveted treasures of explorers, conquerors, and missionaries. “He who wins a map wins the land” was not an overstatement. The omnipotence of early maps was due to the following properties (Meng 2008):
  • A map can visualize complex spatial relationships of a large geographic space in a compact and portable form.

  • A map can bring the underground landscape into view.

  • A map can seamlessly illustrate a living environment that may contain some unexplored parts or imprecise surveying results.

  • A map can deliberately serve as a metaphor to spread space-related ideological ideas.

After each technical evolution, however, maps became less omnipotent and more easily accessible as a result of improved productivity. The zeitgeist of cartographic technologies is well reflected in the state of the art of computer technologies. Since the invention of the first binary programmable computer by Konrad Zuse in his parents’ home in Germany in the 1930s (http://www.computerhope.com/issues/ch000984.htm), the history of computing has gone through multiple paradigm shifts. Essential milestones include computers with RAM in the 1950s, minicomputers in the 1960s, desktop PCs and workstations in the 1970s, laptops in the 1980s, mobile computers (notebooks, tablet PCs, PDAs, smartphones, etc.) in the 1990s, and finally Internet-enabled mobile computers (MIDs – Mobile Internet Devices) in the twenty-first century. Digital cartography began with the introduction of computers equipped with graphics cards in the 1970s and has evolved alongside computer science ever since. In 2005, Google Earth brought about a revolution and an unprecedented pervasiveness of maps. Once omnipotent, today’s maps have become omnipresent. What has not changed is the cognitive effort involved in map design. In designing usable maps or cartographic information systems, cartographers have to take into account diverse perspectives from marketing strategy, technical infrastructure, and usage context. They need to find a number of trade-offs (cf. Meng 2005):
  • Between the maximally allowed information amount on the output medium and the fitness for perception and purpose

  • Between functionality and expressiveness

  • Between the interactivity that allows users to do what they want and the self-adaptation that efficiently supports users’ actions without their interference

  • Between information-hiding techniques constrained by limited display space and information-revealing techniques that should reduce users’ searching effort

  • Between individualized design and cost-efficient design

  • Between vendor identity and interface standardization

Due to their intuitive operability and their capacity for visualizing complex relations, maps remain the most popular and user-friendly form of geo-services in the information society. State-of-the-art maps may be characterized by a number of dichotomies.

3.1 2D Maps vs. 3D Maps

The majority of maps are displayed or printed on a 2D surface on which the terrain height is either omitted (e.g., on a city plan) or depicted by contour lines (e.g., on a topographic map). 2D maps are favorable for visualizing detailed ground plans and giving an unbiased overview of geo-objects and their relationships. Easily reproducible symbols in plain geometric forms are preferred for recording the semantic meanings of classified geo-objects. The large visual discrepancy between the map and reality, however, does not allow an immediate understanding. Users must rely on a legend to interpret the individual symbols.

With the development of high-performance computers and 3D computer graphics, it has become much easier to design 3D maps that come closer to natural human experience in 3D geospace. Creating the sensation of 3D on a 2D display surface is a matter of applying depth cues that can be immediately recognized by users. Various methods are available to create depth cues of 3D scenes. A visible and tangible 3D geo-object can typically be displayed on a plane according to the central perspective projection, which has inherent monocular depth cues such as occlusion and convergence. Depth cues can be enhanced when 2D map symbols are suitably shadowed or shaded. Likewise, a sequence of 2D presentations can be logically chained and rendered at a speed of more than 24 frames a second, thus creating a motion parallax that invokes a 3D sensation on the retina. Methods that can completely reproduce the depth cues of a 3D scene include stereoscopic rendering, such as anaglyphs based on binocular parallax, and autostereoscopic rendering, such as holography based on the wave field information of the 3D scene (Buchroithner 2009).

Although a 3D map does not allow a straightforward estimation of distances and orientations, its depth cues convey a sense of naturalism and make it easy for users to interpret the presented geo-objects and their relationships without having to consult a legend. Among 3D maps, a further distinction is made between photorealistic and non-photorealistic maps.

Photorealism strives for optical resemblance to reality according to the motto “making things truer than the truth.” Photorealistic maps offer an intuitive organization of spatial objects from the real world, thus facilitating natural perception and understanding. They have been widely used in virtual reality systems and serve as a communication language and working tool in a large number of fields such as architectural construction, archeological reconstruction, urban planning, mobile telecommunication, disaster simulation, and mission rehearsal. As soon as invisible objects and intangible concepts are concerned, however, photorealistic maps become powerless due to their limitations in two respects (cf. Jahnke et al. 2009):
  • They usually depict unclassified geometric details at a low abstraction level. The user faces a high cognitive workload of sifting through the crowded graphics in order to notice some relevant information.

  • Users’ attention may be entirely locked on the appearance, so that less effort is made to look for further information behind it. In addition, users may be confused by details that are not up to date.

Non-photorealistic maps form a subset of non-photorealistic 3D computer graphics (Strothotte and Schlechtweg 2002). As shown in Fig. 6, by means of graphic variables and their combinations, non-geometric information can be embedded in the nodes, edges, and faces that constitute a 3D object. For example, users of a non-photorealistic city model can obtain not only geometric information such as the location, size, and shape of each individual building but also non-geometric information such as function, building material, construction year, ground humidity, thermal properties of walls, etc. In addition, virtual navigators can be inserted into the presentation, and explanatory information such as street names can be placed in accordance with the viewing direction. A toy sketch of such an attribute-to-variable mapping follows Fig. 6.

Fig. 6

Non-photorealistic alternatives of a photorealistic church (Jahnke et al. 2009)
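
How non-geometric information might be bound to graphic variables can be sketched as a simple mapping rule. The following Python fragment is an invented illustration, not code from Jahnke et al. (2009); the attribute names and styling rules are assumptions:

    # Invented mapping of non-geometric attributes to graphic variables
    # of a non-photorealistic facade rendering.
    def facade_style(attributes):
        style = {}
        # Quantitative attribute -> color (e.g., thermal loss of the wall)
        style["fill"] = "red" if attributes.get("heat_loss", 0) > 0.7 else "lightgray"
        # Qualitative attribute -> stroke pattern
        style["outline"] = "dashed" if attributes.get("planned") else "solid"
        return style

    print(facade_style({"heat_loss": 0.9, "planned": False}))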

By combining the abstraction power of 2D maps with the naturalistic expressiveness of photorealistic maps, non-photorealistic maps possess a number of useful properties. Compared with 2D maps, non-photorealistic maps look more similar to reality and therefore allow an easier recognition of individual objects. Compared with photorealistic maps, they have a reduced geometric complexity and can thus accommodate more semantic information. Through artistic treatments such as visually highlighting certain depth cues or certain objects, non-photorealistic maps can efficiently guide users’ attention and stimulate their curiosity to query non-geometric information. In a first user test of the non-photorealistic approach, Plesa and Cartwright (2008) confirmed that non-photorealistic city models are generally well accepted and understood by users owing to their clarity, legibility, and aesthetic appeal, as shown in Fig. 7. The example in Fig. 8 from the work of Kumke (2011) illustrates how a non-photorealistic map can scientifically and aesthetically visualize a building façade enriched with thermal information from an infrared image.

Fig. 7

The non-photorealistic prototype developed by Plesa and Cartwright (2008) (left) and an enlargement (right)

Fig. 8

Two alternative visualizations of building façade with thermal information (Kumke 2011)

3.2 Map Products vs. Map Services

The mapmaking process in the pre-computer era began with field observation and ended with a map original for reproduction. Driven by cost-benefit concerns, mapmakers tried their best to accommodate the maximum allowed information within the map original and reproduce it in large numbers. A mass-produced map usually provides an allocentric view as a result of compromise among various constraints and is intended to support as many user types and user tasks as possible for a relatively long time period. In fact, the allocentric view represents a one-size-fits-all solution from the perspective of the mapmaker or the developer of mapping systems and undergoes strict quality assurance.

The liberalization of data acquisition techniques has now made it possible to digitize the real world down to its finest object details and to share and maintain the digital records in distributed data warehouses. Mass production is being replaced by on-demand production, a step toward the individualization of mapmaking. An individualized map narrows its applications down to a special user group (e.g., school children) or even a single user. It may therefore provide an egocentric view that fits the user profile better than an allocentric map would. The ego center is reflected in various aspects such as the user’s preference, profession, cultural background, current task, and information need, and it may vary dynamically from one moment to the next (see Fig. 9). The corresponding map is therefore rather short-lived and must be immediately usable for individual users and their requirements. The results of egocentric mapmaking processes tend to be map services rather than map products. Map services communicate not only narrative information about what, where, and how much but also higher knowledge about how and why and what is to be done, in a self-explaining way. By providing ready-to-work services instead of ready-to-get information, an egocentric map takes over an essential part of the user’s mental effort in map understanding. While allocentric map products follow the principle “what you see is what you may want” (the information need is stimulated by the map), egocentric map services work in the reverse order, that is, “what you want is what you will see” (the map is stimulated by the information need); a toy sketch of this filtering principle follows Fig. 9.

Fig. 9

(a) An allocentric visualization of the arena of Munich “Oktoberfest.” (b) The same area with egocentric visual emphasis (cf. Meng 2005)
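
The contrast between the two principles can be expressed in a few lines. The Python fragment below is a hypothetical illustration of an egocentric filter: all features are kept, but those matching the current user profile are visually emphasized (the feature list and profile fields are invented):

    # Invented feature list and user profile for an egocentric filter.
    features = [
        {"name": "beer tent", "category": "gastronomy"},
        {"name": "first-aid post", "category": "emergency"},
        {"name": "ride", "category": "attraction"},
    ]

    def egocentric_view(features, profile):
        # Keep everything, but emphasize what matches the current interest;
        # an allocentric map would weight all features equally.
        return [{**f, "emphasis": "high" if f["category"] in profile["interests"] else "low"}
                for f in features]

    print(egocentric_view(features, {"interests": {"gastronomy"}}))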

3.3 Stationary Maps vs. Mobile Maps

Printed maps and large-format atlases are usually used in an indoor, stationary environment. Hung on walls or spread over tables, they serve the purposes of planning, communication, and exploration for one user or a user group at a time. The light weight of printed maps also gives them an inherent mobility. Trimmed or folded into pocket form, a printed map can easily be carried around in a mobile environment to support highly personal and real-time user tasks such as positioning and navigation. However, a view-only printed map cannot react to changing parameters in a mobile context, such as the actual interest of the user, weather conditions, illumination conditions, etc. Moreover, it is not at all cost-efficient to produce and print individual pocket maps for a single user.

Electronic maps displayed on screens or projected onto canvas have all the properties of stationary printed maps. In addition, Internet connections and interactive functions can be made available so that users may freely retrieve the most relevant information for their tasks and interoperate with other users in the same room or with networked users in geographically separated locations. Bound to a high-performance computer and a large database, early electronic maps were not mobile at all. Thanks to the development of the wireless Internet and the miniaturization of computing devices, cartographers can now design mobile maps, thus winning back the mobility of printed pocket maps and, more profoundly, uniting two open-ended systems – the real and the virtual world.

Displayed on personal devices used by a single person at a time, mobile maps are inherently egocentric services and represent one of the most widespread types of location-based services. Although the same mobile map can also be shared by networked users, it does not primarily act as a collaborative thinking instrument, as a stationary Internet map would, but rather as a common memory backing up the group’s mobility. Users in a mobile environment often consult a map in a hasty mood. Their limited cognitive capacity, constrained by time pressure, can accommodate far less information than the maximally allowed visual load of the map. In addition, mobile users may be constantly disturbed by external visual, auditive, and haptic stimuli from the real world. On the other hand, these stimuli may carry additional context information which can be incrementally integrated into mobile maps or directly used to support users’ mobile activities.

The inherently strong interaction between the virtual and the real world requires a smooth switch from one to the other, and hence the immediate comprehensibility of the mobile map. In other words, mobile map services should non-intrusively support the user and efficiently guide the user’s attention to his/her instantly wanted information. A mobile environment badly needs so-called subdued computing (Ibach and Horbank 2004). Interaction with the map should make the user feel as if effortlessly fetching information from an ambient environment into which the map has discreetly melted.

Apart from technical issues such as network accessibility, positioning quality, and transmission speed, the designer of mobile map services has the essential task of matching, in real time, a “meager” map with the “sharp” user requirements filtered through a very narrow space-time slot.

3.4 Graphic User Interface vs. Map Symbols

The visual stimuli of a cartographic information system fall mainly into two categories: (1) the graphic user interface with small icons arranged in a menu bar or toolkits superimposed on the display surface and (2) map symbols that represent geo-objects at different spatiotemporal scales. The first category contains the accessible system operations that have an impact on graphic symbols as well as on the underlying database (see Fig. 10); different user groups may have different access rights. The second category is connected with qualitative or quantitative meanings which can be retrieved or manipulated with the help of interactive operations from the first category.

Fig. 10

Icons of different graphic user interfaces

Icons usually convey information about the actionable properties of system operations such as insert, delete, and search. Since they are not directly bound to individual geo-objects, they do not have fixed locations on the display. Often they are grouped into different folders based on their functional similarity; icons for viewing, selecting, and editing, for example, belong to three different folders. The limited display space can hardly allow all icons to be simultaneously visible. Often the “Swiss-knife” principle is adopted, according to which the icons are hierarchically ordered in their folders and only those which are frequently used or currently relevant to the user’s application are made visible. The hidden icons or folders can be revealed on demand. Sometimes the system developer may have to embed multiple actionable properties in one single icon and reveal them with the help of a modality function. These multimodal icons work in a similar way to a keyboard for text input, where both the lower case and the upper case of a letter are embedded in the same key and specified by a Shift key.
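
The “Swiss-knife” principle can be expressed as a simple visibility rule. The following Python sketch is an invented illustration: an icon is shown if it is relevant to the current task or used frequently enough, and hidden otherwise:

    # Invented icon inventory: an icon is visible if it is relevant to the
    # current task or used frequently enough; the rest stay hidden on demand.
    icons = [
        {"name": "zoom", "uses": 120, "relevant": True},
        {"name": "measure", "uses": 3, "relevant": False},
        {"name": "edit", "uses": 40, "relevant": True},
    ]

    def visible_icons(icons, min_uses=10):
        return [i["name"] for i in icons if i["relevant"] or i["uses"] >= min_uses]

    print(visible_icons(icons))  # -> ['zoom', 'edit']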

Icon design is a cognitively challenging task. Usually, an icon can metaphorically suggest the form of a tool, the object on which the tool can be used, or the states before and after the operation. Although standardization is much desired by users for the purpose of minimizing the average learning effort and enhancing the interoperability among different systems, showing the identity of the system vendor is still a prevailing driving force in the practice of interface design. In other words, the designer expects the user to recognize the system vendor first upon seeing the graphic interface and then to begin exploring what the individual icons afford. As a result, users have to perform cognitively burdensome tasks. Without sufficient visual cues or guides, icons are often difficult to interpret and remember. On the one hand, the excessive use of audiovisual variables may overstrain the user’s sensory organs and disturb his/her task. On the other hand, visually obscure icons can be particularly error prone if they deviate too much from the user’s habits. In order to keep pace with cutting-edge technology, software vendors have been constantly upgrading their systems by inserting the newest functionality that can be realized on the high-end platform. These systems are potentially more powerful in terms of supporting the user to do more things. However, the upgrading is in vain if many users remain unaware of a substantial part of the accessible operations. In fact, the design of a user interface reaches its perfection “not when you have nothing more to add, but when you have nothing more to take away” (Exupéry in Gloor 1997).

Map symbols convey qualitative and quantitative properties of individual or classified geo-objects, such as the maximum load and speed limit of a motorway or the name and population of a city. They are placed at the locations where the geo-objects occur or are aggregated.

Upon seeing a symbol, an individual user will more or less unconsciously associate it with one or many primary meanings. If he continues to think about or attend to the symbol, he may discover new meanings related to the primary ones. The cognition process continues until no further meanings can be revealed. Different users may interpret the same symbol in different ways. By means of a map legend, the primary meanings of each symbol are explicitly explained. In this way, all map users are anchored to the same reference point, and misinterpretations are kept within a predefined frame. Using graphic variables and their combinations, multiple meanings can be embedded in a symbol. Redundant variables can be applied to ensure efficient interpretation. For example, the variable “shadow” of various polygon symbols in Fig. 11a and the illustrative strokes within the circle symbols in Fig. 11b contribute much to the attractiveness of the map, although they do not carry extra information.

Fig. 11

(a) Variable “shadow” in a section of the city plan by ©Euro-Cities AG, (b) illustrative circle symbols in a wallmap at Moscow State University of Geodesy and Cartography (photo, 2007)

In an interactive map, however, a map legend alone is no longer adequate because the perception task has been extended to include both seeing and doing. More advanced design methods are necessary to enable an easy detection not only of what kinds of meanings the individual symbols bear but also of what kinds of operations are applicable to them and how they work. Generally speaking, an interactive map contains both nonsensitive symbols, i.e., view-only symbols, and sensitive symbols, which can additionally respond to user actions such as mouse click, mouse-over, etc. A salient graphic variable is required to make the distinction between nonsensitive and sensitive symbols apparent to the user. As shown in Fig. 12, sensitive symbols can be highlighted using additional colors or depth cues. The highlighted or uplifted symbols invite the user to click on them.

Fig. 12

(a) Sensitive symbols are highlighted with color in St. Petersburg architecture, www.tcaup.umich.edu/stpetersburg/mapindex.html; (b) sensitive symbols rise above the surface, www.kartografie.nl/webcartography/webbook/ch07 (cf. Meng 2009)

Ideally, at least one graphic variable should be reserved as a cue for each meaning or each actionable operation so as to avoid misinterpretation. A sensitive symbol, for example, should bear apparent cues indicating the meanings of the underlying geographic concept and, at the same time, provide subtle cues for the user to trigger the applicable operations. Technically, it is possible to add to a symbol not only “sensitivity” but also “thickness.” The current design practice, however, is far from this ideal. In many cases, the designer does not bother to add an extra cue to make sensitive symbols stand out, or he/she adds only one single cue to indicate all sorts of differences. In some systems, various mouse buttons are assigned to various operations, while other systems make no such distinctions. At the same time, the user is overwhelmed by interactive tools with which he/she is expected to see, see through, and see beyond the map symbols. Moreover, what exactly will happen after a press or mouse click on a symbol is often unpredictable. This makes the user feel uncertain. Consequently, many useful operations remain undiscovered.

4 The Development Trends in Cartography

4.1 Parallel Mainstreams

Professional mapmakers working at official mapping agencies have the core task of deriving and updating topographic map series from digital landscape models of different resolutions for their regular customers. The printed paper maps are flanked by a far larger number of screen maps and web maps for different display formats and for stationary and mobile usage contexts. The product palette ranges from 2D standard maps, up-to-date image maps, and 3D map-like presentations to multimedia atlas information systems. The interactivity of a digital map is a basic service taken for granted. Cartographic semiotics and design rules have been continuously adapted and extended to the changing usage environment. With more and more cartographic knowledge being converted into mapmaking recipes that are accessible as cloud solutions everywhere and at any time for Internet users, cartography has become ubiquitous and professional cartographers almost invisible.

In the meantime, commercial cartography flourishes as a value-adding engine of geoinformation. For commercial geodata suppliers, services without integrated cartographic tools are neither visible nor competitive on the market. The progress of commercial cartography is characterized by improvements in the user friendliness and openness of system operations on the one hand and the professionalization of map design on the other. One of the well-known examples is Google Maps, a service started by Google in 2005. Figure 13 illustrates the successive elaboration of Google Maps with respect to color use, text placement, and symbolization. Another example is the development strategy of ESRI, a leading company in GIS. Through its active participation in cartographic research activities, ESRI has substantially benefited from front-end research findings in cartographic science and has contributed to cartographic practice with case studies. The ArcGIS product family with its cartographic component ArcMap has been globally accepted as a development environment in official cartography.

Fig. 13

Design elaborations of a city plan section from London 2009 to 2011 (Jones 2011)

Volunteered cartography marks the youngest mainstream. Empowered by the open information society and mapmaking recipes, more and more former map users are now self-reliantly generating their own naïve maps. Instead of beholding the “golden rules” of map design, volunteered mapmakers are more interested in creating maps that are good enough to support their one-time or first-time use (White et al. 2010; Stengel and Pomplun 2011), relying on a growing number of cartographic tools and program libraries that are freely accessible to clients.

4.2 Map Mash-Ups

The aforementioned cartographic mainstreams benefit from one another. Their mutual complementation leads to so-called mash-ups. A mash-up is a hybrid web service that arises through the seamless combination of existing web contents and offerings (http://www.enzyklo.de/Begriff/Mashup). In this sense, map mash-ups are understood as mixed web maps with geodata and design components from official mapping agencies, commercial service providers, and volunteered Internet users. Map mash-ups are particularly useful for time-critical application fields where the demand for current, complete, and inexpensive map services cannot be satisfied by relying only on the capacity of official mapping agencies and commercial service providers. Map mash-ups are usually built upon open or free-of-charge basic services and are made accessible to both public and private users. Some examples of fundamental map mash-up services are:
  • OpenStreetMap (http://www.openstreetmap.org) – an open service for the collective maintenance of open source geodata and the generation of general and thematic maps such as street maps with navigation functions

  • Google MapMaker (http://www.google.com/mapmaker) – a service from Google for the volunteered content replenishment and updating of Google Maps

  • Wikimapia (http://Wikimapia.org) – a free service for content replenishment and annotations on various maps and geo-referenced images

  • Waze (http://world.waze.com) – a GPS-supported navigation program and traffic information system for smartphones whose participating drivers may complete, correct, update, and annotate street maps with traffic information such as driving speeds, accident reports, etc.

  • Map Share (http://www.tomtom.com/page/mapshare) – a service from TomTom which allows the user to update the map displayed on a TomTom device with information such as road closures, detours, and street changes and at the same time receive corrections from other users

As soon as users gain some experience of generating simple maps, they will attempt not only to upload their contents to existing systems but also to play an active role in designing and analyzing the systems. The following advanced mash-up services have become widespread on the Internet (cf. Turner and DuVander 2010):
  • Google My Maps – a map editor from Google Maps for personalized map design, text placement, and sharing the results with other users.

  • GeoCommons – an open source toolkit for thematic map generation and analysis. With the GeoCommons API (Application Programming Interface), each Internet user may design personalized maps and find and download existing maps.

  • Map Warper – an open source georectifier with which users may align their own images or maps with a geometrically correct base map and download the rectified results for their own applications.

With the further extension, elaboration, and modularization of mash-up services, the influence of Internet users on map graphics will continue to increase. Innovative methods for the transformation, distribution, and overlay of datasets maintained in different formats are necessary in order to generate efficient and reliable map mash-ups in a web-friendly environment.
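
One recurring technical ingredient of such overlays is coordinate alignment with the tile scheme of the base map. The Python sketch below implements the widely documented “slippy map” tile formula used by OpenStreetMap-style services to find the tile that covers a given WGS84 coordinate (the sample coordinate is illustrative):

    import math

    def deg2tile(lat, lon, zoom):
        # Standard "slippy map" formula: WGS84 degrees -> tile x/y at a zoom level.
        lat_rad = math.radians(lat)
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.log(math.tan(lat_rad) + 1 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    # Which base-map tile covers Munich's center at zoom level 12?
    print(deg2tile(48.137, 11.575, 12))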

4.3 Incremental Map Generation

In planning a conventional map production line, the time points for data input, preprocessing, design and compilation, proofing, output, and printing are usually explicitly given. The time lag between data input and output may vary from days to years depending on the scope of the cartographic project. Since the input data are frozen at a certain time point, map users do not expect map products that are newer than the input date. It is a well-known fact that maps possess high scientific value and a sense of longevity, thanks to data abstraction, generalization, and symbolization. In spite of the known time lag, they can still successfully support a wide range of general applications without having to be updated. However, this kind of sequential production pipeline now faces the challenge of continuously incoming geodata streams and growing user requirements for real-time map services.

Technically, it is possible to simulate a dynamic spatiotemporal course with a series of chronologically ordered static maps or 3D virtual environments. Nevertheless, a time-stamped map series is no more helpful than a single, easily comprehensible real-time map for decision processes that essentially rely on “here-and-now” information. A real-time map in the true sense is able to grow or shrink simultaneously with its underlying geodata streams and always shows the current state of the spatial objects and their relationships.

If the geodata streams are homogeneous and of small scope, a real-time or near real-time map may be generated by frequently refreshing the data and repeatedly running the design process, e.g., at an hourly rate. Geospatial problems in reality, however, rarely have such a simple nature. Their data streams are not only large in size but also heterogeneous in terms of sources, dimensions, coding formats, transmission channels, and intensities. The fast agglomeration of large amounts of data requires a map production system to work incrementally with all of its connections and components, ranging from data stream management, change detection, query languages, integration methods, and design toolkits to the user interface.

An incremental computing system is able to respond to changes in the data and accommodate them at run time without having to interrupt and restart the process. So far, the incremental approach to map generation has only been tested with homogeneous and structured data streams of small scope. Brüntrup et al. (2005) reported a program for the incremental derivation of reliable street geometry and travel times from GPS traces sequentially left along the streets. Xie et al. (2007) described a planning strategy that can be incrementally elaborated with new test data until an optimal plan satisfying the evaluation criteria has been reached. The reported case studies share the common goal of iteratively improving a process with new incoming data until no further improvement is possible. The incremental generation of real-time maps from distributed open source data for time-critical applications such as crowd management, emergency services, and disaster management has not yet been explored. With constantly accumulating data streams, maps that should visualize the transient current state of spatial scenes and be interactively operated by diverse user groups need to take on a more intelligent rather than a more complicated look. Some of the research issues tackle the questions of how data structures can be partitioned and software components modularized so as to detect changes without redundancy and to optimize the incremental updating, processing, and visualization of the most current geodata.
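
The core idea, responding to changes without restarting the pipeline, can be pictured as a dirty-tile mechanism. The following Python fragment is a deliberately simplified illustration; all names are invented, and a real system would additionally handle stream management and change detection:

    # Invented dirty-tile mechanism: changes mark tiles while the system
    # keeps running; a refresh re-renders only the marked tiles.
    dirty_tiles = set()

    def on_feature_changed(feature_id, affected_tiles):
        dirty_tiles.update(affected_tiles)  # change detection marks tiles

    def render_tile(tile):
        print(f"re-rendering tile {tile}")

    def refresh():
        for tile in sorted(dirty_tiles):    # untouched tiles are reused
            render_tile(tile)
        dirty_tiles.clear()

    on_feature_changed("road_7", {(12, 2179, 1421)})
    refresh()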

4.4 Geovisual Analytics

In an era characterized by mobility, cloud computing, and VGI, the immanent role of maps in visualizing spatiotemporally and thematically confined hotspots is being (re-)stressed. The multiple functions of maps, which go beyond merely offering location-based services, should not be ignored or underestimated. Particularly worth mentioning are the irreplaceable position and value of maps in geovisual analytics – a young interdisciplinary research field founded in 2007 (Andrienko et al. 2007).

It is obviously difficult to detect reliable clues in dynamically unfolding data streams. Diverse spatial patterns can appear or disappear, separate from one another or merge, stay, or move (Kremer et al. 2011). A straightforward approach to pattern recognition is therefore impossible, and powerful computing methods are indispensable. The scope and dynamic properties of many space-related problems cannot be adequately described even when large and complex data streams are available, quite apart from the further difficulties caused by errors, gaps, noise, and uncertainty in the data. In spite of their increasing power and performance, existing computing systems are not yet able to self-reliantly interpret spatial data streams and reason about them meaningfully. Often the system developers are forced to sample the data streams in order to fit the analytical capability of the machines. In the end, they can only provide oversimplified prototypical solutions which may be far from reality.

Promising decision strategies require the active participation of users in controlling the computing models. Human intelligence, which embraces context awareness, domain knowledge, judgment skills, and association abilities, may more or less compensate for the weakness of computers in dealing with illogical, inconsistent, or hard-to-formalize facts. An effective and efficient human-machine collaboration requires, on the one hand, intuitively operable analytical toolkits and, on the other hand, the intuitive perception of the complex geodata space through seeing, hearing, and touching. Interactive maps in combination with further graphic presentation forms may serve as an ideal support for both purposes.

Maps in a geovisual-analytical system may, for instance, visualize the same dataset in different styles, at different levels of detail, in different layers, or overlapped. Maps can also visualize or spatialize the intermediate or final results of analyses and queries. More important is the capability of interactive maps to accompany the user’s overall cognitive process during data exploration and knowledge construction. Individual users may add annotations on the map whenever they like and treat the accumulated labels as a collective memory and a basis for discussion with other users. On this note, the role of interactive maps can be aptly expressed by an extended saying: “a picture tells a thousand words, and an interface tells a thousand pictures.”

Visually perceivable patterns can be inspected by machines. The patterns detected by machines, in turn, can be visually inspected by human eyes. This iterative reciprocity between visual perception and computer analysis may efficiently reduce the search space, in both depth and width, for the creation, verification, or rejection of hypotheses, thus making a complex problem manageable. The complementary enhancement of computing power and the human visual system may also support meaningful knowledge construction. Exposed to all the verified spatiotemporal patterns, users may identify the most relevant ones for their tasks. Unusual patterns or anomalies, such as a one-time event or a small-sized distribution that might be overlooked or treated as noise by analytical programs, may be highlighted by means of alternative visualization forms. This visual emphasis may invoke more attention and stimulate the user’s sense of reasoning.
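
A minimal sketch of this reciprocity, under strongly simplifying assumptions: a crude statistical filter flags candidate anomalies, which are then emphasized for visual inspection rather than discarded as noise (the observation values and the threshold rule are invented):

    # Invented observations per map cell; a value far above the mean is
    # flagged and handed over to visual inspection instead of being dropped.
    values = [3, 4, 3, 5, 4, 21, 3]

    def flag_anomalies(values, factor=3.0):
        mean = sum(values) / len(values)
        return [i for i, v in enumerate(values) if v > factor * mean]

    for i in flag_anomalies(values):
        print(f"cell {i}: unusual value {values[i]} -> highlight on the map")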

The research demands in geovisual analytics are mainly reflected in the following aspects (cf. Andrienko et al. 2010):
  • Development of geo-collaborative graphic user interfaces in order to make analytical functions accessible to a larger user community and to enable communication between stakeholders from different geographic regions, cultures, and disciplines

  • Scalability of geovisual-analytical methods in order to preserve the effectiveness and efficiency, independent of hardware-software configurations

  • Interoperability of geovisual-analytical methods, allowing the user to retrieve data and software on servers, add his/her own innovations, and share his/her results with other users

5 Concluding Remarks

Cartographers have gained a great deal of freedom in the course of the extensive modernization of cartographic processes. They are, however, by no means less occupied. All events that take place every day in the real or virtual world are cartographically relevant. With the increasing availability of open source platforms, cartography has become ubiquitous and professional cartographers invisible. Regardless of location and time, every Internet user can access cloud solutions, download relevant information in a map or through a map, and at the same time add his/her own contents and create his/her own maps. This phenomenon of “maps for everybody and everybody makes maps” has posed a number of new research questions concerning map mash-ups, incremental map generation, and geovisual analytics. A competitive and complementary cooperation of official mapping agencies, commercial mapping enterprises, and volunteered mapmakers is indispensable in seeking answers to these questions.

References

  1. Andrienko G, Andrienko N, Jankowski P, Keim D, Kraak MJ, MacEachren A, Wrobel S (2007) Geovisual analytics for spatial decision support: setting the research agenda. IJGIS 21(8):839–857
  2. Andrienko G, Andrienko N, Demsar U, Dransch D, Dykes J, Fabrikant S-I, Jern M, Kraak MJ, Schumann H, Tominski C (2010) Space, time and visual analytics. IJGIS 24(10):1577–1600
  3. Born A, Walter K, Niemeyer F, Bill R (2008) Geosensornetzwerke als Komponente von Frühwarnsystemen. In: Internationalisierung der Geoinformationswirtschaft: 4. GeoForum MV 2008, Rostock-Warnemünde
  4. Brüntrup R, Edelkamp S, Jabbar S, Scholz B (2005) Incremental map generation with GPS traces. In: Proceedings of the 8th IEEE conference on intelligent transportation systems, Vienna, pp 574–579
  5. Buchroithner M (ed) (2009) True-3D in cartography – ICA SPIE Europe symposium, Dresden, 24–28 Aug 2009. Institute for Cartography, TU Dresden
  6. Clark K (2011) Stories everywhere. In: O’Reilly “Where 2.0” conference, Santa Clara, 19–21 Apr 2011
  7. Fan H (2010) Integration of time-dependent features within 3D city model. PhD thesis, TU Munich
  8. Ferrate A (2008) 3 top data formats for map mashups: KML, GeoRSS and GeoJSON. ProgrammableWeb API News, 27 Aug 2008. http://www.programmableweb.com/
  9. Gloor P (1997) Elements of hypermedia design – techniques for navigation & visualization in cyberspace. Birkhäuser, Boston
  10. Goodchild MF (2007) Citizens as sensors: the world of volunteered geography. GeoJournal 69(4):211–221
  11. Heunecke O (2008) Geosensornetze im Umfeld der Ingenieurvermessung. BDVI-FORUM 2/2008, pp 357–364
  12. Ibach P, Horbank M (2004) Highly available location-based services in mobile environments. In: International service availability symposium 2004, Munich
  13. Jahnke M, Meng L, Kyprianidis JE, Döllner J (2009) Non-photorealistic visualizations on mobile devices and usability concerns. In: Lin H, Batty M (eds) Virtual geographic environments. Science Press, Beijing, pp 168–181
  14. Jones J (2011) Evolving the look of Google Maps, redux. http://google-latlong.blogspot.com/2011/07/evolving-look-of-google-maps-redux.html, 19 July 2011
  15. Kieler B (2008) Derivation of semantic relationships between different ontologies with the help of geometry. In: Workshop “semantic web meets geospatial applications”, held in conjunction with AGILE 2008, Toronto
  16. Kremer H, Kranen P, Jansen T, Seidl T, Bifet A, Holmes G, Pfahringer B (2011) An effective evaluation measure for clustering on evolving data streams. In: Proceedings of the 17th ACM SIGKDD conference on knowledge discovery and data mining, San Diego
  17. Krisp JM, Meng L (2013) Introduction: going local – evolution of location-based services. In: Krisp JM (ed) Progress in location-based services. Lecture notes in geoinformation and cartography. Springer, Heidelberg/New York, pp xxi–xxiv
  18. Kumke H (2011) Kartographische Anreicherung von Gebäudefassaden mit thermalen Bilddaten. PhD thesis, TU Munich
  19. Liu L (2011) Data model and algorithms for multimodal route planning with transportation networks. PhD thesis, TU Munich
  20. Meier P (2010) Crowdsourcing the impossible: Ushahidi-Haiti. In: O’Reilly “Where 2.0” conference, San Jose, 30 Mar–1 Apr 2010
  21. Meng L (2005) Egocentric design of map-based mobile services. Cartogr J 42(1):5–13
  22. Meng L (2008) Kartographie im Umfeld moderner Informations- und Medientechnologien. KN Kartographische Nachrichten 1/2008, 3-1
  23. Meng L (2009) Affordance and reflex level of geovisualization. In: Lin H, Batty M (eds) Virtual geographic environments. Science Press, Beijing, pp 139–154
  24. Plesa MA, Cartwright W (2008) Evaluating the effectiveness of non-realistic 3D maps for navigation with mobile devices. In: Meng L, Zipf A, Winter S (eds) Map-based mobile services – design, interaction and usability. Springer, Berlin, pp 80–104
  25. Strothotte T, Schlechtweg S (2002) Non-photorealistic computer graphics: modeling, rendering and animation. Morgan Kaufmann, San Francisco, 640p
  26. Schulze MJ, Brenner C, Sester M (2010) Cooperative information augmentation in a geosensor network. In: Proceedings of the joint international conference on theory, data handling and modelling in geospatial information science, Hong Kong, vol 38, no 2, pp 444–449
  27. Siegmund A (2011) Satellitenbilder im Unterricht – eine Ländervergleichsstudie zur Ableitung fernerkundungsdidaktischer Grundsätze. Doctoral thesis, Pädagogische Hochschule Heidelberg
  28. Stengel S, Pomplun S (2011) Die freie Weltkarte OpenStreetMap – Potenziale und Risiken. KN Kartographische Nachrichten 3/2011, pp 115–120
  29. Turner A, DuVander A (2010) Data and formats for your maps. In: O’Reilly “Where 2.0” conference, San Jose, 30 Mar–1 Apr 2010
  30. Udell J (2005) Annotating the planet with Google Maps: open, XML-based design makes it a service factory for the geospatial web. InfoWorld, 4 Mar 2005
  31. White I, Coast S, Trainor T, ter Haar P, Eisnor D-A (2010) Base map 2.0. In: O’Reilly “Where 2.0” conference, San Jose, 30 Mar–1 Apr 2010
  32. Xie D, Morales M, Pearce R, Thomas S, Amato NM (2007) IMG – a new framework for building roadmaps. https://parasol.tamu.edu/groups/amatogroup/research/IMG/
  33. Zhang M (2009) Methods and implementations of road-network matching. PhD thesis, TU Munich

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Institute for Photogrammetry and Cartography, Technische Universität München, Munich, Germany
