Personal and Ubiquitous Computing, Volume 22, Issue 2, pp 409–431

SAViL: cross-display visual links for sensemaking in display ecologies

  • Haeyong Chung
  • Chris North
Original Article


The main challenge of visual analysis across multiple displays is that the user must maintain awareness of, and synthesize, information scattered across separate displays, some of which may lie outside the user's immediate field of view. To address this challenge, we present Spatially Aware Visual Links (SAViL), a cross-display visual link technique that (1) guides the user's attention to relevant information and (2) visually connects related information across displays. In essence, SAViL visually represents direct connections among different types of visual objects on separate displays, helping users build semantic layers of documents spread over multiple displays. To test the efficacy of this approach, we evaluated the impact of visual linking on the sensemaking process for text data using multiple heterogeneous displays. The results indicate that cross-display links enable users to effectively forage for, organize, and synthesize relevant information scattered across displays, integrating the separate displays into a single cohesive visual workspace that supports their sensemaking tasks.


Keywords: Sensemaking · Multi-display environment · Display ecology · Visual text analytics · Visual links



Acknowledgements

We thank Kris Cook (PNNL) for her insightful comments.

Funding information

This work was partially supported by grants from the U.S. Department of Defense, NSF grant IIS-1218346, and a New Faculty Research Award from the University of Alabama in Huntsville.



Copyright information

© Springer-Verlag London Ltd. 2018

Authors and Affiliations

  1. Department of Computer Science, University of Alabama in Huntsville, Huntsville, USA
  2. Department of Computer Science, Virginia Tech, Blacksburg, USA
