Abstract
Geographic information volunteered by the public is usually also of public interest. However, simply publishing the data does not make it accessible and usable for the public. The raw data might need to be abstracted and interpreted, as well as visually presented, to be understandable to non-experts. To address this, we propose interactive visual reporting solutions that leverage natural language and visualizations for geo-related data. We present these reports as interactive documents, but also in other media such as virtual reality environments. First, we studied the interplay of textual and visual content in such reports. To ease the creation of content, we developed solutions for authoring interactive documents with a close linking of textual content and visually presented data. Moreover, we propose automatic report generation approaches that specifically support the exploration of geo-related data starting from an explanatory summary.
Keywords
- Geographic visualization
- Data-driven storytelling
- Interactive documents
- Authoring interfaces
- Text generation
1 Introduction
Data-driven storytelling embeds data into a narration and usually combines a textual representation with visualizations (Segel and Heer 2010). While magazine-style stories might be most common, there exist other presentation genres like animations and slide shows, comics, or annotated charts (Segel and Heer 2010). The power of data-driven storytelling lies in making data accessible to a wider audience by guiding readers through analysis insights while inviting them to engage through simple interactions. Textual and visual data descriptions complement each other and, together, form an integrated representation that is both easy to follow and rich in information. For instance, a story on flight data might link a globe or map showing flight trajectories with explanations of busy routes and airports but could also provide insights into specific examples like the longest flights.
As an expressive and interactive medium, data-driven stories fit volunteered geographic information. For data that is volunteered by the public, it is a natural choice to also make that data available and understandable for a broad audience. Figure 7.1 illustrates this concept as closing a circle. Public groups or individuals volunteer geographic information that is then made available on open data platforms (I). While experts have already analyzed such data in various application scenarios (II), we contribute methods for bringing the derived insights back to the general public and decision-makers. From the open data itself (III.a) and the insights of experts (III.b), we support authoring and automatically generating reports that can be understood by this broad group of users (IV). Computer support and automation are necessary to efficiently create summaries of varying data and to allow personalized reporting. With these reporting solutions, we intend to support non-expert users and foster a dialog between the public, decision-makers, and experts (V). This cycle aligns with the endeavors of others to make volunteered geographic data directly usable for the public, for instance, within project IDEAL-VGI (Chap. 2).
The challenges of this research lie in the identification and selection of relevant insights, as well as in their reporting as integrated visual and textual representations. The produced reports should furthermore invite readers to explore the data. We have approached these challenges by first studying the interplay of text and visualization in existing examples of geographic data-driven stories (Sect. 7.2). Then, we investigated solutions that help authors create reports with close links between the two representations (Sect. 7.3). Finally, designing automatically generated reports allowed us to provide, alongside a data-driven story, dedicated support for exploration (Sect. 7.4). While geodata plays a role in all presented research, we also consider the visualization of additional, non-geographic data.
This chapter describes the results of the project vgiReports and summarizes as well as connects an excerpt of project-related publications (Latif et al. 2021a,b, 2022a,b) and a preliminary work (Latif and Beck 2019a). Two of these works (Latif et al. 2021a,b) report results from collaborations with other projects of the priority program.
2 The Interplay of Text and Visualization
The way textual and visual descriptions are combined is crucial in data-driven stories. If integrated well, it can avoid a split attention effect between the two media (Ayres and Sweller 2005) and might even help identify misaligned information (Zheng and Ma 2022). Interactive linking of text and visualization can increase user engagement (Zhi et al. 2019) and guide user attention, specifically supporting less experienced users in correctly mapping between text and data (Barral et al. 2021).
Journalistic outlets provide many high-quality, manually crafted examples of data-driven stories. For instance, The New York Times published more than a hundred carefully designed visual stories and interactive graphics in 2021 (The New York Times 2021). As some of these stories cover geographic aspects, we can leverage them to study how geographic data is successfully reported to a wide audience. Previous research has already studied the structure and sequence of existing stories (Hullman et al. 2013), patterns of visual narrative flow (McKenna et al. 2017), and narrative order in time-oriented stories (Lan et al. 2021). Text in such stories can have different roles, ranging from introductory texts to detailed annotations of the visualization (Segel and Heer 2010). We have focused on a fine-grained analysis of such categories and the explicit and implicit interplay of text and visualization in stories, with a particular focus on geographic aspects. In a first study, we analyzed 22 full stories from a variety of news media (Latif et al. 2021b). A second study looked at a set of 110 paragraph-chart pairs stemming from 77 articles of different news media (Latif et al. 2022b). Using a qualitative methodology in both studies, we investigated the text on sentence and word level and classified the cases into different categories. Specifically, we have addressed the following research questions.
What Are the Reported Analysis Insights, and How Is the Related Data Visually Communicated? (Latif et al. 2021b)
We observed two categories of textual narrative: data-driven text and contextual embedding text. The former directly relates to the data and describes analysis insights. These insights link to the analysis tasks, namely, identify, summarize, and compare. In stories with a geographic focus, location and time are generally key concepts. In particular, locations associated with extreme values as well as those showing very dissimilar behavior (outliers) are identified and explained in the narrative. Likewise, clusters of locations are discussed together to highlight their similarities. Other reported insights include geographic and temporal variations of variable values across a geographic region or time span. The narrative either uses measures of central tendency like mean, median, or mode to summarize them or describes these variations in plain words. Lastly, the narrative compares locations or other data items using part-to-whole contrasts, correlation, or statistical ranking. Apart from these insights, as contextual embedding, a comparable proportion of the narrative blends in the background of the story and data, necessary domain knowledge, and quotes from external sources or people to make these stories self-sufficient units of information. Moreover, authors of the stories interpret analysis insights, relate to other data and information sources, and attach judgment. It is important to note that data-driven text and contextual embedding text are often intermingled and cannot always be unambiguously separated. As the textual narrative explains the analysis insights, visualizations act as a complement to show the relevant data. Visualizations can serve a specific purpose, for instance, to provide an overview of the data, to support comparisons, or to highlight details.
The use of simple visualizations like maps, visually enriched tables, bar charts, and line plots is more common compared to slightly more advanced ones like distribution plots and scatter plots.
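These insight types can be illustrated with a small data-fact extractor. The following is a minimal sketch that assumes a simple mapping from location names to values; the function name, the two-standard-deviation outlier threshold, and the phrasing are our illustrative choices, not taken from the studied stories:

```python
import statistics

def data_facts(values: dict[str, float]) -> list[str]:
    """Derive simple identify/summarize/compare facts from a
    mapping of location name -> value (hypothetical schema)."""
    facts = []
    # identify: extreme values
    hi = max(values, key=values.get)
    lo = min(values, key=values.get)
    facts.append(f"{hi} has the highest value ({values[hi]:g}).")
    facts.append(f"{lo} has the lowest value ({values[lo]:g}).")
    # summarize: central tendency
    mean = statistics.mean(values.values())
    median = statistics.median(values.values())
    facts.append(f"The mean is {mean:g}, the median {median:g}.")
    # identify: outliers beyond two standard deviations
    sd = statistics.pstdev(values.values())
    for k, v in values.items():
        if sd and abs(v - mean) > 2 * sd:
            facts.append(f"{k} deviates strongly from the other locations.")
    # compare: part-to-whole contrast for the top location
    share = values[hi] / sum(values.values())
    facts.append(f"{hi} accounts for {share:.0%} of the total.")
    return facts
```

Such facts would then be woven into a narrative and linked to the visualization, as the analyzed stories do manually.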
How Do Textual Narration and Visualization Interplay? (Latif et al. 2021b)
We discovered different kinds of linking strategies that turn visualizations and an associated textual narrative into a single engaging story. First, the visualizations are almost always placed close to the text that describes them. Likewise, the sequence of visualizations in a story is important: Overview visualizations often appear first and are followed by detailed visualizations. Second, to strengthen the linking further, textual elements like captions, annotations, or tooltips are employed inside or next to visualizations. These textual elements often explain the key insight of a visualization or help users better interpret it. We observed that the use of descriptive annotations even enabled authors to include comparatively complex and non-standard visualizations in their stories. Third, visualizations as a whole or parts of them are explicitly referenced from the textual narrative. Authors sometimes also use the same colors in text and visualization to show connections between the two media.
What Implicit References Exist Between Text and Visualization, and How Do They Relate to the Data? (Latif et al. 2022b)
Implicit references can be defined as connections between a textual narrative and a visualization where both refer to the same data items. For instance, mentions of country names alongside a world map constitute implicit references. However, such connections are not limited to single entities or values but also include group references (referring to many data points, e.g., EU) and interval references (referring to numerical ranges). Furthermore, individual references can be grouped together to form higher-order references. We found that these implicit references can correspond to analysis tasks such as identification, summarization, and comparison. Almost half of the implicit references directly matched a chart feature (e.g., axis label, legend, annotation, caption). The other half, however, contained linguistic variations (e.g., inferences, synonyms, abbreviations, stems, or lemmas) or numerical variations (e.g., rounded numbers, approximations, computed measures) and are harder to map to the visualized data.
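A simple matcher illustrates how implicit references with such variations might be detected. This sketch assumes a toy abbreviation lookup and crude plural stemming; a real system would need proper lemmatization and entity linking:

```python
import re

# Hypothetical abbreviation lookup; real systems would use a gazetteer.
ABBREVIATIONS = {"US": "United States", "EU": "European Union"}

def normalize(token: str) -> str:
    token = ABBREVIATIONS.get(token, token)
    return token.lower().rstrip("s")  # crude stemming of plurals

def find_references(sentence: str, chart_labels: list[str]) -> list[tuple[str, str]]:
    """Return (mention, matched chart label) pairs; numeric mentions
    match a numeric label if both agree after rounding."""
    matches = []
    tokens = re.findall(r"[A-Za-z]+|\d+(?:\.\d+)?", sentence)
    textual = {normalize(l): l for l in chart_labels
               if not l.replace(".", "").isdigit()}
    numeric = [l for l in chart_labels if l.replace(".", "").isdigit()]
    for tok in tokens:
        if tok[0].isdigit():
            for l in numeric:
                if round(float(l)) == round(float(tok)):
                    matches.append((tok, l))
        elif normalize(tok) in textual:
            matches.append((tok, textual[normalize(tok)]))
    return matches
```

For example, `find_references("The EU imported 42 units", ["European Union", "41.7"])` matches both the abbreviated group reference and the rounded number.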
3 Authoring Interactive Reports
Creating a data-driven story requires effort: Aside from writing the text, the data needs to be analyzed and visually presented. Web technologies provide a good basis for making the content available. However, whereas many content management systems allow placing textual and visual content side by side, they do not support a closer integration of the two representations and, through this, the creation of interactive documents. Filling this gap, various authoring tools and supporting approaches have already been suggested for data-driven storytelling (Tong et al. 2018, Section 3). For instance, Chen et al. (2020) developed a framework to synthesize stories from insights identified using a visual analytics system. It allows an author to arrange insights in different simplified visualizations, annotating and connecting them to tell a story.
Whereas most of these approaches support efficiently generating stories of different kinds, they do not directly address creating explicit and interactive links between text and visualization. In contrast, VizFlow (Sultanum et al. 2021) focuses on such links for authoring; while its links are limited to manually created links to image-based features of the visualization, it investigates in more detail how to leverage such links for document layout. Ellipsis (Satyanarayan and Heer 2014) allows authoring staged slide show stories with annotations that can be bound to data values and adapt with them. Elastic Documents (Badam et al. 2019) does not allow creating links directly but extracts related text and tables and connects them using new visualizations. Related are, furthermore, general approaches for annotating charts with, among other visual marks, textual content (Ren et al. 2017).
Focusing on an easy and efficient creation of valuable links between text and visualizations, we have developed Kori (Latif et al. 2022b). The system, as demonstrated in Fig. 7.2, supports both the manual creation of links and automatic link suggestions. The computed links are based on processing the text and consider the hierarchical structure of references discussed in Sect. 7.2. While an author is composing an interactive story, the system offers unobtrusive suggestions, which can then be inspected and accepted or discarded. For the reader, the links finally act as interactive references and, when triggered, direct the user's attention to the respective portion of the visualization. Not only do they reduce the split attention effect, but they can also serve as starting points from which to explore the data further. For the manual creation and adaptation of links, the system offers an interface that requires only a few interactions to define the references. There are two modes of manual construction: First, authors can directly select visual marks in the chart using a direct manipulation mode (e.g., rectangular brush selection). Alternatively, authors can apply a series of filters to select visualized data points. Through these means, authors can effortlessly create references and focus on composing their story.
In a study, we asked 11 participants with diverse backgrounds and experience to create references linking text and visualizations in three examples. In the first and second tasks, they reproduced given links of various kinds with the tool. The third task was more open-ended; the participants were only given a set of visualizations and also had to textually summarize some findings as a short story while linking the text to the visualizations. The results indicated that participants had no difficulties using the interface and were able to construct meaningful references in all three tasks. Among 64 automatic suggestions that occurred in the sessions, 48 were correct and 16 incorrect. Participants also used the manual construction mode and rated it comparably to the automatic suggestion feature, both with a median of 4 on a scale from 1 (worst) to 5 (best). The feedback on the automatic suggestions was mostly positive and confirmed that many recommendations were helpful and did not disturb the users' workflow. "Smarter" reference detection methods, however, could still improve the experience.
4 Explorative Reporting
Data-driven stories and visual reports of data might be presented as interactive documents but often remain rather static. Users can interactively navigate through the story and retrieve some details on demand, but the documents mostly lack support for starting an explorative analysis that goes beyond the original story. Moreover, stories do not adapt to personal interests or current data. The explanation for these restrictions is simple: the stories are manually written as static texts. However, by partly automating the generation of the natural-language content, we could provide extended options for explorative data analysis and personalization.
Various techniques exist for natural-language generation (Gatt and Krahmer 2018). Whereas the use of advanced generation methods for automatic reporting of data is not yet common in journalistic and industrial practice, some research prototypes have already investigated its potential for more adaptive reporting. For instance, such generation techniques have been used to provide guidance in the data exploration process by reporting automatically derived data facts (Srinivasan et al. 2019). But whole stories can be generated as well. Unlike approaches that target fully automatic generation (Shi et al. 2021), we are interested in reports that remain human-authored but adapt automatically to different data. Used in interactive documents, the generated text blends with visualizations into a data-driven story. In earlier works, we have explored such representations, for instance, to generate profiles of scientific authors (Latif and Beck 2019b) or, in software engineering use cases, to summarize program executions (Beck et al. 2017) and code quality (Mumtaz et al. 2019). Increasingly, the generated reports allow greater flexibility regarding the interactive exploration of the data, which complements explanatory texts and guided data analysis. The general idea can be described as exploranation, mixing exploration with explanation (Ynnerman et al. 2018).
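A minimal sketch of template-based generation in this spirit is shown below; the function name, the stability threshold, and the phrasing are illustrative assumptions, not taken from any of the cited systems:

```python
def describe_trend(region: str, old: float, new: float) -> str:
    """Turn two data values into a human-authored sentence template
    that adapts automatically to the data (hypothetical example)."""
    change = (new - old) / old
    if abs(change) < 0.02:  # assumed threshold for "stable"
        verb = "remained nearly stable"
    elif change > 0:
        verb = f"increased by {change:.0%}"
    else:
        verb = f"decreased by {-change:.0%}"
    return f"In {region}, the value {verb} compared to the previous period."
```

The template is written once by an author; the data slots and conditional phrasing make it adapt to new data, which is what enables exploration and personalization.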
We have since investigated how to apply such approaches to geographic data in the context of different media and usage modalities. These cover novel aspects such as comparative descriptions of selected entities, novel forms of presentation such as adaptive audio guides, and novel blends of interaction forms and presentations such as chatbots. This set of diverse examples comprises early prototypes that demonstrate promising directions of visual reporting; we have not yet evaluated them in detail or connected them into a more comprehensive framework.
4.1 Maps with Data-Driven Explanations
Maps that show statistical information are widely used in data-driven storytelling. Choropleth maps visualize the variation of one variable for a set of regions (e.g., countries). However, it is often desirable to describe the relationship between two variables, which requires the simultaneous visualization of two values per region. For instance, per capita spending on education could be compared to per capita spending on defense to understand the different geopolitical roles of countries. An established way of visualizing such bivariate data is to employ graduated symbols overlaid on a choropleth map (Elmer 2012). However, by construction, these bivariate map visualizations are more complex to interpret, and it becomes harder to spot visual patterns. Additional textual explanations might counterbalance this and could hint at interaction effects of the variables that would otherwise go unnoticed. Encoding more variables per region in a more complex visual glyph is feasible but would make communication to a wider audience even more challenging.
To report at least bivariate geographic data visually and textually in a more accessible way, we developed Interactive Map Reports (Latif and Beck 2019a). The system employs well-established statistical methods to detect notable relationships, geographic patterns, and outliers in given bivariate data. These insights are automatically transformed into a natural-language narrative that is then presented alongside a bivariate map visualization, as shown in Fig. 7.3. The given example relates, for the states of the USA, the number of fatalities caused by storms to the number of storms to examine whether the quantity of storms is directly related to the death toll. The textual narrative serves as a guide and explains findings. Small graphics in the text help establish links between the two representations. Users can explore the map visualization as they read through the narrative by activating interactive links (printed in boldface). Likewise, while exploring the map, users can either get additional details on a selected geographic region or a comparative text for two selected regions.
The system is capable of generating interactive reports for different bivariate geographic datasets. Through a small set of parameters that the user provides about the variables, the geographic region and granularity, and general terminology, it adapts the generated narrative and visualization.
4.2 Interactive Audio Guides in Virtual Reality
Virtual reality is emerging as an engaging medium for interactive data visualization and has only recently begun to be explored for data-driven storytelling (Isenberg et al. 2018). The idea of exploranation is also applicable to virtual reality visualizations. However, longer textual narratives, as previously used in documents, are not suitable because reading would counteract immersion. As an alternative, audio can be used, as in virtual reality applications such as games, movies, or virtual museums. Prerecorded audio narrative might be played at various stages of the story (e.g., in a game) or activated by a user interaction (e.g., in a virtual museum). The prerecording, however, limits flexibility, and such approaches cannot adapt to changes in the data resulting from interactions.
To support exploranation, our approach Talking Realities (Latif et al. 2022a) combines a data-driven audio narrative with an immersive virtual-reality visualization. The audio narrative is based on the automatic identification of interesting analysis insights. Using speech synthesis services, it is rendered on the fly from generated text and, therefore, adapts to data selections and user interactions. To provide a smooth exploration experience, the narrative should be synchronized with visual animations. To cater to the needs of a larger target user group, Talking Realities advocates three modes with varying levels of guidance. Fully guided tours walk users through a predefined sequence of findings with the least freedom to explore. Free exploration, in contrast, lets users investigate the data visualization without any intervention. In between lies guided exploration, which hints at perspectives worth exploring. We have tested the approach with different immersive visualizations, ranging from multivariate statistical data to astronomical data. Figure 7.4 shows an example of intercontinental air traffic data projected onto a globe.
Fig. 7.4 Scenes and audio explanations (here, transcribed) from our prototype implementing the Talking Realities approach for air traffic data. (Top) A description of the aggregated intercontinental flights for one day. (Bottom) Scenes reporting the longest flight from an airport and the most flights to any other airport
4.3 A Chatbot Interface Providing Visual and Textual Answers
Using natural language can make interactions with a machine effortless. Chatbots that reply to textual messages are an example of this. Instead of going through context menus and choosing the relevant option, chatbots let us verbalize our requests as we would to another human being. However, research on chatbot interfaces for data analysis and visualization is still in its infancy. Although some systems are already powerful, the use of chatbots can still lead to false expectations, misunderstood questions, and unexpected replies (Tory and Setlur 2019). We believe that a chatbot interface could be a good starting point for making first contact with the data. In response to a user query, an exploranative representation of the data should then enable users to verify and validate presented facts and to explore related ones.
For a specialized use case of exploring relationships among historical public figures, we have developed VisKonnect (Latif et al. 2021a) together with project WorldKG (Chap. 1). The approach offers a chatbot interface for asking questions about these historical figures. Given a question, it uses a rule-based approach to determine the intent of the question and extract meaningful entities (e.g., people, places). Based on this information, it formulates a SPARQL query to retrieve the relevant data from an event knowledge graph (Gottschalk and Demidova 2019). This data is then visualized in multiple linked visualizations, highlighting the timelines of individual and shared events as well as where these events took place. These visualizations are augmented with a textual explanation that aims at answering the user's question (either through simple text templates or the GPT-3 language model). Additionally, related events are listed and serve as interactive links for exploring them in the associated visualization. Figure 7.5 demonstrates a query about two well-known scientists and the response generated by VisKonnect.
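A much simplified sketch of such a pipeline is shown below. The toy gazetteer, the predicate names (`ex:participant`, `ex:place`), and the query shape are illustrative assumptions, not the actual VisKonnect implementation:

```python
# Toy gazetteer standing in for proper named-entity recognition.
KNOWN_FIGURES = ("Albert Einstein", "Marie Curie")

def parse_question(question: str) -> list[str]:
    """Rule-based entity extraction: find known names in the question."""
    return [name for name in KNOWN_FIGURES if name in question]

def build_query(people: list[str]) -> str:
    """Formulate a SPARQL query over a hypothetical event knowledge
    graph; predicate names are assumed for illustration."""
    values = " ".join(f'"{p}"' for p in people)
    return (
        "SELECT ?event ?place WHERE {\n"
        f"  VALUES ?name {{ {values} }}\n"
        "  ?person rdfs:label ?name .\n"
        "  ?event ex:participant ?person ;\n"
        "         ex:place ?place .\n"
        "}"
    )
```

The query result would then feed the linked timeline and map visualizations, while the extracted entities parameterize the textual answer.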
5 Conclusion and Future Work
Within the presented research, we have empirically investigated in depth how geographic data and related information can be jointly described and linked in textual and visual representations. For creating data-driven stories as integrated reports, we provide authoring support for such links, which better connect the two representations. While links can be manually added in a flexible and easy-to-use way, our solution also automatically recommends specific links by analyzing the data-driven text. We demonstrated the flexibility and broad applicability of our approach in different reporting solutions: automatically generated descriptions of statistical maps, audio guides for immersive virtual reality visualizations, and a natural-language interface to a knowledge graph that responds with textual and visual data representations.
With these solutions, we not only guide users through the insights of a data analysis but at the same time invite them to explore the data in depth. Following our overarching goal of looping the volunteered data back to the public, we are now specifically interested in transferring these empirical results, methods-oriented general solutions, and early research prototypes to specific application examples and inviting a broader audience to use them. Ongoing work already targets this, for instance, by investigating a visual reporting solution for personalized, comparative summarizations of hotel reviews.
Our research generally emphasizes that citizen participation in research is not one-directional. Reflecting results back and providing options to explore the data support an even higher level of participation and should be considered in all citizen science projects and data volunteering platforms. Our ideas can be combined with analysis solutions for volunteered geographic information, and we invite researchers developing such solutions to also investigate this perspective. Still, empirical studies are needed to explore how visual reporting solutions affect the engagement of volunteers and influence decision processes.
References
Ayres P, Sweller J (2005) The split-attention principle in multimedia learning. In: Mayer RE (ed) The Cambridge handbook of multimedia learning. Cambridge University Press, Cambridge, pp 135–146
Badam SK, Liu Z, Elmqvist N (2019) Elastic documents: coupling text and tables through contextual visualizations for enhanced document reading. IEEE Trans Vis Comput Graph 25(1):661–671. https://doi.org/10.1109/tvcg.2018.2865119
Barral O, Lallé S, Iranpour A, Conati C (2021) Effect of adaptive guidance and visualization literacy on gaze attentive behaviors and sequential patterns on magazine-style narrative visualizations. ACM Trans Interact Intell Syst 11(3-4):1–46. https://doi.org/10.1145/3447992
Beck F, Siddiqui HA, Bergel A, Weiskopf D (2017) Method execution reports: generating text and visualization to describe program behavior. In: Proceedings of the 2017 IEEE Working Conference on Software Visualization. IEEE, pp 1–10. https://doi.org/10.1109/vissoft.2017.11
Chen S, Li J, Andrienko G, Andrienko N, Wang Y, Nguyen PH, Turkay C (2020) Supporting story synthesis: bridging the gap between visual analytics and storytelling. IEEE Trans Vis Comput Graph 26(7):2499–2516. https://doi.org/10.1109/tvcg.2018.2889054
Elmer ME (2012) Symbol considerations for bivariate thematic mapping. PhD thesis, University of Wisconsin–Madison
Gatt A, Krahmer E (2018) Survey of the state of the art in natural language generation: core tasks, applications and evaluation. J Artif Intell Res 61:65–170. https://doi.org/10.1613/jair.5477
Gottschalk S, Demidova E (2019) EventKG—the hub of event knowledge on the web—and biographical timeline generation. Semant Web 10(6):1039–1070
Hullman J, Drucker S, Henry Riche N, Lee B, Fisher D, Adar E (2013) A deeper understanding of sequence in narrative visualization. IEEE Trans Vis Comput Graph 19(12):2406–2415. https://doi.org/10.1109/tvcg.2013.119
Isenberg P, Lee B, Qu H, Cordeil M (2018) Immersive visual data stories. In: Immersive analytics. Springer, New York, pp 165–184. https://doi.org/10.1007/978-3-030-01388-2_6
Lan X, Xu X, Cao N (2021) Understanding narrative linearity for telling expressive time-oriented stories. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, pp 1–13. https://doi.org/10.1145/3411764.3445344
Latif S, Beck F (2019a) Interactive map reports summarizing bivariate geographic data. Vis Inf 3(1):27–37. https://doi.org/10.1016/j.visinf.2019.03.004
Latif S, Beck F (2019b) VIS author profiles: interactive descriptions of publication records combining text and visualization. IEEE Trans Vis Comput Graph 25(1):152–161. https://doi.org/10.1109/tvcg.2018.2865022
Latif S, Agarwal S, Gottschalk S, Chrosch C, Feit F, Jahn J, Braun T, Tchenko YC, Demidova E, Beck F (2021a) Visually connecting historical figures through event knowledge graphs. In: Proceedings of the 2021 IEEE Visualization Conference. IEEE, pp 156–160. https://doi.org/10.1109/vis49827.2021.9623313
Latif S, Chen S, Beck F (2021b) A deeper understanding of visualization-text interplay in geographic data-driven stories. Comput Graph Forum 40(3):311–322. https://doi.org/10.1111/cgf.14309
Latif S, Tarner H, Beck F (2022a) Talking realities: audio guides in virtual reality visualizations. IEEE Comput Graph Appl 42(1):73–83. https://doi.org/10.1109/mcg.2021.3058129
Latif S, Zhou Z, Kim Y, Beck F, Kim NW (2022b) Kori: interactive synthesis of text and charts in data documents. IEEE Trans Vis Comput Graph 28(1):184–194. https://doi.org/10.1109/tvcg.2021.3114802
McKenna S, Henry Riche N, Lee B, Boy J, Meyer M (2017) Visual narrative flow: exploring factors shaping data visualization story reading experiences. Comput Graph Forum 36(3):377–387. https://doi.org/10.1111/cgf.13195
Mumtaz H, Latif S, Beck F, Weiskopf D (2019) Exploranative code quality documents. IEEE Trans Vis Comput Graph 26(1):1129–1139. https://doi.org/10.1109/tvcg.2019.2934669
Ren D, Brehmer M, Lee B, Hollerer T, Choe EK (2017) ChartAccent: annotation for data-driven storytelling. In: Proceedings of the 2017 IEEE Pacific Visualization Symposium. IEEE, pp 230–239. https://doi.org/10.1109/pacificvis.2017.8031599
Satyanarayan A, Heer J (2014) Authoring narrative visualizations with Ellipsis. Comput Graph Forum 33(3):361–370. https://doi.org/10.1111/cgf.12392
Segel E, Heer J (2010) Narrative visualization: telling stories with data. IEEE Trans Vis Comput Graph 16(6):1139–1148. https://doi.org/10.1109/tvcg.2010.179
Shi D, Xu X, Sun F, Shi Y, Cao N (2021) Calliope: automatic visual data story generation from a spreadsheet. IEEE Trans Vis Comput Graph 27(2):453–463. https://doi.org/10.1109/tvcg.2020.3030403
Srinivasan A, Drucker SM, Endert A, Stasko J (2019) Augmenting visualizations with interactive data facts to facilitate interpretation and communication. IEEE Trans Vis Comput Graph 25(1):672–681. https://doi.org/10.1109/TVCG.2018.2865145
Sultanum N, Chevalier F, Bylinskii Z, Liu Z (2021) Leveraging text-chart links to support authoring of data-driven articles with VizFlow. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3411764.3445354
The New York Times (2021) 2021: The year in visual stories and graphics. https://www.nytimes.com/interactive/2021/12/29/us/2021-year-in-graphics.html
Tong C, Roberts R, Borgo R, Walton S, Laramee R, Wegba K, Lu A, Wang Y, Qu H, Luo Q, Ma X (2018) Storytelling and visualization: an extended survey. Information 9(3):65. https://doi.org/10.3390/info9030065
Tory M, Setlur V (2019) Do what I mean, not what I say! Design considerations for supporting intent and context in analytical conversation. In: Proceedings of the 2019 IEEE Conference on Visual Analytics Science and Technology. IEEE, pp 93–103. https://doi.org/10.1109/vast47406.2019.8986918
Ynnerman A, Löwgren J, Tibell LAE (2018) Exploranation: a new science communication paradigm. IEEE Comput Graph Appl 38(3):13–20. https://doi.org/10.1109/MCG.2018.032421649
Zheng C, Ma X (2022) Evaluating the effect of enhanced text-visualization integration on combating misinformation in data story. In: Proceedings of the 15th IEEE Pacific Visualization Symposium. IEEE, pp 141–150. https://doi.org/10.1109/pacificvis53943.2022.00023
Zhi Q, Ottley A, Metoyer R (2019) Linking and layout: exploring the integration of text and visualization in storytelling. Comput Graph Forum 38(3):675–685. https://doi.org/10.1111/cgf.13719
Acknowledgements
The work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 424960846.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Beck, F., Latif, S. (2024). Visually Reporting Geographic Data Insights as Integrated Visual and Textual Representations. In: Burghardt, D., Demidova, E., Keim, D.A. (eds) Volunteered Geographic Information. Springer, Cham. https://doi.org/10.1007/978-3-031-35374-1_7