Encyclopedia of Computer Graphics and Games

Living Edition
Editors: Newton Lee

Cognitive Processing of Information Visualization

  • Chen Guo
  • Shuang Wei
  • Yingjie Chen
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-08234-9_95-1

Definition

Information visualization is the use of computer-supported, interactive, visual representations of abstract data to amplify cognition (Card et al. 1999).

Introduction

Information visualizations turn raw data into information and enable researchers to gain insight from the data. Understanding how viewers interpret different types of visual information contributes to the creation of effective and intuitive visualizations. This entry introduces the cognitive processing of visualizations from the angles of pre-attentive processing, visual working memory, cognitive load, and sensemaking, with the aim of giving readers a working understanding of visualization through visual perception and cognition theories and methods. Two case studies illustrate how cognitive theories inform and impact research in this area.

State of the Art Work

Vision’s Constructive Power

Gardner (1983) advocated a theory of multiple intelligences whose premise is that each individual's intelligence is composed of multiple, relatively independent intelligences, each with its own operating system within the brain. People with higher visual and spatial intelligence respond better to visual cues by storing, manipulating, and recreating visual information. The constructive power of human vision has drawn the attention of researchers for many years. This section reviews the work of Hoffman, Gombrich, and Arnheim in the domain of visual intelligence and discusses how their ideas inform, impact, and relate to the creation and viewer interpretation of information visualizations.

Hoffman and Vision’s Constructive Power

Hoffman (2000) attempted to explain the complex mental construction process shared by all sighted individuals. In Visual Intelligence (Hoffman 2000), he introduced sets of universal and specialized rules that govern our perception of line, color, form, depth, and motion. Hoffman's rule of generic views and ten rules of visual intelligence explain how individuals interpret and construct visual objects. These rules indicate how our mind organizes data and turns it into knowledge, and they can be widely applied to visualization design. People could form countless interpretations of what they see, but humans prefer to perceive things quickly and efficiently. The rule of generic views implies that designers should construct stable views and constant images. If an object has more salient part boundaries, humans will see it as a figure, because clearer evidence and stronger boundaries are more efficient for our perception to process. Havre et al. (2000) were inspired by the perceptual processes of identifying curves and silhouettes, recognizing parts, and grouping them together into objects. They created a novel visualization tool called ThemeRiver that employs the river metaphor to depict thematic variations over time. Temporal thematic changes can be easily recognized because the system uses smooth, continuous curves to bound each theme and distinct colors to differentiate themes.

Moreover, Hoffman (2000) found that if two visual structures have a non-accidental relation, a designer should group them and assign them to a common origin. He also stated that if three or more curves intersect at a common point in an image, they should be interpreted as intersecting at a common point in space. These rules can guide the design of movement-data visualizations by grouping and stacking trajectories based on temporal proximity and visual similarity. Crnovrsanin et al. (2009) plotted traces as distance to an event of interest, such as an explosion (y-axis), versus time (x-axis). By applying proximity, audiences can grasp the entire event at a glance and identify patterns such as spatial concentration, coincidence, trends, and divergence. To make a powerful design and a compelling product, visualization researchers need to integrate these rules and construct what human beings desire to see with little effort.
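As an illustration of this proximity-plot idea, the following minimal Python sketch plots each entity's distance to a reference event against time. The random-walk trajectories and the event location are hypothetical stand-ins for illustration only and do not reproduce Crnovrsanin et al.'s system.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical movement traces: each entity follows a random walk.
rng = np.random.default_rng(0)
times = np.linspace(0, 60, 120)              # seconds
event_xy = np.array([0.0, 0.0])              # location of the reference event

fig, ax = plt.subplots()
for entity in range(5):
    steps = rng.normal(scale=2.0, size=(len(times), 2))
    xy = np.cumsum(steps, axis=0) + rng.uniform(-30, 30, size=2)
    dist = np.linalg.norm(xy - event_xy, axis=1)   # distance to the event over time
    ax.plot(times, dist, label=f"entity {entity}") # one smooth curve per entity

ax.set_xlabel("time (s)")
ax.set_ylabel("distance to event")
ax.legend()
plt.show()
```

Entities whose curves dip together toward zero at the same moment would visually group as converging on the event, which is the kind of pattern the proximity layout makes easy to spot.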

Gombrich and Constructivist Perception

Gombrich (1977) proposed in Art and Illusion that visual perception always functions as a projection of prior experience and imagination, the so-called constructivist view. As a constructivist, Gombrich pointed out that artists manipulate inherited pictorial schemata to observe the world directly and, in turn, correct the schemata based on their interaction experience. Gombrich (1977) also pointed out that the ability to recognize objects is the result of perceptual tuning and selective attention. He differentiated looking from seeing and stated that viewers experience a four-step process of image engagement while looking at images. The four steps are attention (focus), interest (cognitive awareness of the focus), involvement (meaning attached to the awareness), and attitude (feeling resulting from the meaning) (Gombrich 1977). The viewer is first attracted to parts of the image, becomes cognitively aware of that focus, and then attaches meaning to it. Thereafter, the viewer normally generates some feeling or attitude toward the image; at this point, looking changes into seeing with a statement about the image. The attitude, in turn, affects the way the viewer perceives the image. This engagement process clearly shows how viewers interact with pictures and therefore informs visualization design.

Shneiderman (1996) was inspired by Gombrich's schemata and proposed the well-known visual information-seeking mantra: overview first, zoom and filter, then details-on-demand. The mantra has served as a golden rule in visual analytics because it takes human perceptual abilities into consideration in design. Audiences can scan, recognize, and recall images rapidly; they detect changes in size, color, shape, movement, or texture; and it is intuitive for them to perform tasks like dragging one object to another. Almost all successful visualization designs support overview, zoom and filter, and details-on-demand. Guo et al. (2014) used dodecagons to visualize GPS data. Their system provides an effective overview that shows common patterns and also allows analysts to filter and examine individual patterns in detail through various interactions.

Arnheim and Gestalt Principles

Arnheim (1969) described picture perception as the composition of circles, lines, squares, and other graphic forms into shapes and patterns. The innate laws of this structure are called Gestalt theory. In Art and Visual Perception: A Psychology of the Creative Eye (Arnheim 1974), Arnheim detailed picture-making in terms of balance, shape, form, growth, space, light, color, movement, dynamics, and expression. Gestalt laws, such as figure/ground, simplicity, completeness, and good continuation, were named as fundamental to human perception and visual design.

Visual balance is an innate concept as well as a key principle with which designers convey messages (Bornstein and Krinsky 1985). Learning visual balance enables designers to create visualizations that "look visually right" (Carpenter and Graham 1971) with respect to color and layout. Arnheim (1974) illustrated the balance concept with a structural net that determines balance. He described nine hotspots in every visual work; visual elements placed on the main axes or at the center should be in visual balance. Weight and direction lead to visual balance. More specifically, the characteristics of visual objects, such as location, size, color, shape, and subject matter, influence visual balance. In spatiotemporal visualization, it is not an easy task to arrange map elements – legends, scales, borders, areas, place names, and glyphs – into an aesthetically pleasing design. Dent (1999) employed the structural net as a guide for thematic map creation, effectively using all available space while retaining a harmonious balance among visual elements.

Arnheim proposed that visual thinking occurs primarily through abstract imagery (Arnheim 1969). He stressed the importance of reasoning with shapes and identified the nature of abstraction in visual representation. Designers often use visual abstraction to clean up a display and impress observers. When using visual abstraction in information visualization, researchers should keep in mind that the meaning of the raw data should be preserved. Numerous visual abstraction approaches have emerged in the visualization field. Agrawala and Stolte (2001) presented route maps to depict a path from one location to another. Through generalization techniques involving distortion and abstraction, route maps present trajectories in a clear, concise, and convenient form. Lamping et al. (1995) created a novel, hyperbolic geometry approach to visualize large hierarchies. Interaction techniques, such as manipulating focus, pointer clicks, and interactive dragging, emphasize the regions viewers tend to focus on at the expense of distorting less important information. Humphrey and Adams (2010) employed the General Visualization Abstraction (GVA) algorithm to provide a novel technique for information abstraction, such as selection and grouping. This approach facilitates abstraction by assigning an importance value to each information item and identifying the most relevant ones. GVA can be applied to geographic map-based interfaces to support incident management and decision-making. Visual abstraction helps transfer the meaning of the original data into a slightly different but clearer form. It also reduces clutter in the graphical representation of large data sets by replacing the data with new visual elements corresponding to higher levels of abstraction (Novotny 2004).

Preattentive Processing

Human brains can rapidly and automatically direct attention to the information with the highest salience and suppress irrelevant information based on simple computations of an image (Healey and Enns 2012). This is often called pre-attentive processing and provides the informational basis of attentional selection (Logan 1992). Detection precedes conscious attention; selection cannot occur until the ensemble coding and feature hierarchy of the pre-attentive process are complete.

Many research efforts have tried to address a central question: which properties of visualizations rapidly attract people? Selective attention usually binds features, such as color, shape, location, and texture, into a perceptual object representation (Wheeler and Treisman 2002). Ware (2012) identified visual properties that are pre-attentively processed in visualization. These are also referred to as pre-attentive attributes: they are perceived in less than 10 ms without conscious effort, and even large, multi-element displays require only 200–250 ms. The attributes are grouped into four categories: color (hue and intensity), form (line orientation, line length, line width, line collinearity, size, curvature, spatial grouping, blur, added marks, and numerosity), motion (flicker and direction of motion), and spatial position (2D position, stereoscopic depth, and convex/concave shape based on shading) (Ware 2012). The resulting pop-out effect can be applied to information visualization design.
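The pop-out effect of a single pre-attentive attribute can be demonstrated with a minimal sketch such as the following, which renders one target in a distinct hue among uniformly colored distractors; the layout, colors, and element count are arbitrary choices for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 60
x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, n)

# All distractors share one hue; a single target differs in hue only.
colors = ["lightgray"] * n
colors[rng.integers(n)] = "red"   # the hue difference is detected pre-attentively

plt.scatter(x, y, c=colors, s=80)
plt.axis("off")
plt.show()
```

Viewers locate the red mark almost instantly regardless of how many gray distractors are present, whereas a target defined by a conjunction of features (e.g., a red square among red circles and gray squares) would require a slower, serial search.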

Five notable theories explain how pre-attentive processing occurs. Feature integration theory models low-level human vision as a set of feature maps that can detect features either in parallel or in serial (Treisman 1991), together with a master map of locations that is required to combine feature activity at a common spatial location (Treisman and Gelade 1980). Texton theory focuses on the statistical analysis of texture patterns. Texture patterns consist of three categories of textons: elongated blobs (e.g., rectangles, ellipses, line segments) with specific properties, such as hue, orientation, and width; terminators (ends of line segments); and crossings of line segments (Julesz 1981, 1984). Julesz stated that only a difference in textons or in their density can be detected pre-attentively (Julesz 1981). Instead of supporting the dichotomy of serial and parallel search modes, Duncan and Humphreys (1989) explored two factors that may influence search time in conjunction searches: the number of information items required to identify the target and how easily a target can be distinguished from its distractors. They assumed that search ability depends on the type of task and the display conditions. Search time is related to two kinds of similarity: T-N similarity and N-N similarity (Duncan 1989). T-N similarity refers to the similarity between targets and nontargets; it has a positive relationship with search time and a negative relationship with search efficiency. N-N similarity represents the similarity among the nontargets themselves; it has a negative relationship with search time and a positive relationship with search efficiency. Guided search theory was proposed by Wolfe (1994), who constructed an activation map for visual search based on bottom-up and top-down visual information (Wolfe 1994). Users' attention is drawn to the highest hills in the activation map, which represent the largest combination of bottom-up and top-down influences (Healey and Enns 2012). More recently, Boolean map theory was presented (Huang and Pashler 2007). It divides visual search into two stages, selection and access, and divides the scene into selected and excluded elements; this division is referred to as a Boolean map. Viewers can generate Boolean maps in two ways: by specifying a single value of a feature or by applying union and intersection to two existing maps (Huang and Pashler 2007).

Visual Working Memory

Baddeley (1992) stated that the working memory model consists of a phonological loop that maintains verbal–linguistic information, a visuospatial sketchpad that maintains visual and spatial information, a central executive to control and coordinate the operation of the systems, and an episodic buffer to communicate with long-term memory. Working memory decides which activities to perform, inhibits distracting information, and stores information while accomplishing a complex task (Miyake and Shah 1999). Luck and Vogel (2013) defined visual working memory as the active maintenance of visual information to serve the needs of ongoing tasks. The last 15 years have seen a surge in research on visual working memory that aims to understand its structure, capacity, and the individual variability present in its cognitive functions. There are three essential theoretical issues related to visual working memory: discrete-slot versus shared-resource, visual representation, and visual context.

Visual working memory research has largely focused on identifying the limited capacity of the working memory system and exploring the nature of stored memory representations. The field has recently debated whether the capacity of visual working memory is constrained by a small set of "discrete fixed-precision representations" (the discrete-slot model) or by a pool of divisible resources allocated in parallel (the shared-resource model) (Luck and Vogel 2013; Huang 2010; Zhang and Luck 2008). Visual working memory allows people to temporarily maintain visual information in their minds for a few seconds after its disappearance (Luck and Vogel 1997). Some researchers proposed that, when people are faced with a large number of items, working memory stores a fixed number of high-precision representations and no information is retained about the remaining objects (Luck and Vogel 1997; Pashler 1988; Zhang and Luck 2008). Luck and Vogel (1997) also stated that it is possible to store both the color and orientation of four items in visual working memory. Other studies found that both the number of visual objects and the visual information load impose capacity limits on visual working memory of approximately four or five objects (Alvarez and Cavanagh 2004; Luck and Vogel 1997; Pashler 1988) and about six spatial locations represented allocentrically in a spatial configuration (Bor et al. 2001). Thus, the temporary storage of visual information relates to integrated objects rather than individual features. This view is also consistent with the selective-attention metaphor that visuospatial attention is like the beam of a flashlight: people are unable to split their attention across several locations and instead attend to the most important events while filtering out distractions.

However, other researchers claimed that the visual working memory is able to store imprecise representations of all items, including low-resolution representations of the remaining objects (Bays et al. 2009, 2011; Bays and Husain 2008; Frick 1988; Wilken and Ma 2004). They thought of visual working memory as many low-resolution digital photographs and challenged the concept of working memory by examining the distribution of recall errors across the visual scene. Based on a Bayesian decision model, more visual objects are held in visual working memory and fewer resources are allocated to each object. Thus, in contrast to the discrete slots model, the continuous resource model emphasizes that the storage capacity of the visual working memory is not limited to the number of visual objects. Recent empirical evidence on recurrent neural networks suggests that a discrete item limit is more favorable (Luck and Vogel 2013). Although there is still much ongoing debate regarding the models for resource allocation, there is general agreement that visual working memory has an important object/resolution trade-off: as more items are stored in visual working memory, less fidelity per visual item can be maintained (Brady et al. 2011).

Additionally, what we see depends on where our attention is focused, and our attention is guided by what we store in our memory. Attention and visual working memory are closely connected: the central executive of working memory manages the direction of attention and supervises information integration. Moreover, both attention and visual working memory have a limited capacity for the visual features that can be detected or maintained. Pre-attentive processing plays a critical role in determining which visual properties our eyes are drawn to and therefore helps people deal with visual and spatial information in working memory.

Cognitive Load

Cognitive load refers to "the total amount of mental activity imposed on working memory at an instance in time" (Cooper 1998). Cooper (1998) also stated that "the most important factor that contributes to cognitive load is the number of elements that need to be attended to." Sweller and his colleagues (Chandler and Sweller 1991, 1992; Sweller et al. 1998) identified three sources of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is determined by the basic characteristics of the information (Sweller 1993). Extraneous cognitive load is imposed by the way the designer organizes and displays the information (Chandler and Sweller 1991, 1992); designers therefore strive to reduce it and help viewers grasp the underlying information more effectively and efficiently. Finally, germane cognitive load is the remaining free capacity that learners devote to building new, more complex schemas (Sweller et al. 1998).

Miller (1956) advanced our understanding of working memory with the idea of information chunks that can be strung together. He and his followers held that working memory can hold approximately seven, plus or minus two, chunks at any given time (van Merriënboer and Sweller 2005; Miller 1956). For visual working memory, the capacity is limited to approximately four or five visual elements (Alvarez and Cavanagh 2004; Luck and Vogel 1997; Pashler 1988) and about six spatial locations (Bor et al. 2001). Generally speaking, visual elements are schemas, which can be understood as models that organize our knowledge, and different people mentally store different numbers of visual objects. Because intrinsic cognitive load arises from the nature of the material, it is almost impossible to change it through design. However, extraneous cognitive load can be controlled with design techniques: its level depends on how designers present information to users. Germane cognitive load is focused more on individual self-regulation and concerns schema automation (Paas et al. 2004). Some research posits that this third source, germane cognitive load, has a positive impact on learning, whereas intrinsic and extraneous cognitive load are considered negative in impact (Sweller et al. 1998). One metric for evaluating a visualization is to compare the sum of the cognitive loads it imposes with the user's working memory capacity. If the combined load is less than the working memory capacity, the visualization imposes lower cognitive load and offers good usability, and it is more likely to be successful. On the contrary, if the sum exceeds the user's working memory capacity, the visualization causes cognitive overload and poor usability, and it is far less likely to be successful.
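As a rough illustration of this metric, the sketch below compares the sum of the three load estimates against an assumed working memory capacity. The numeric values are hypothetical placeholders, since cognitive load is not measured on an absolute scale; the point is only the comparison itself.

```python
def within_capacity(intrinsic: float, extraneous: float, germane: float,
                    capacity: float) -> bool:
    """Return True if the combined load stays within working memory capacity."""
    return (intrinsic + extraneous + germane) <= capacity

# Hypothetical, unitless estimates for two interface designs.
print(within_capacity(intrinsic=3, extraneous=1, germane=1, capacity=7))  # True: likely usable
print(within_capacity(intrinsic=3, extraneous=4, germane=1, capacity=7))  # False: overloaded
```

In practice the only term a visualization designer can directly reduce is the extraneous load, which is what motivates the design strategies discussed next.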

Visual working memory has a limited amount of processing power and capacity. Users become overwhelmed and abandon a visualization task when the amount of information exceeds their visual working memory capacity. The designer must therefore use design strategies that keep the cognitive load imposed by the user interface to a minimum, so that more visual working memory resources remain available for analysis activities. Cognitive load also varies across users; consider, for example, the involvement required for interaction. For an experienced user, adding interaction may help them gain insight from the data, whereas the same interactivity might increase cognitive load and make the visualization more difficult for a novice user. By reducing extraneous cognitive load with visual analytics techniques, designers can minimize the total cognitive load imposed by the visualization interface and thereby increase the portion of working memory available for attending to the information.

Sensemaking

Sensemaking is the process through which people make trade-offs and construct new knowledge of the world (Weick 1995). These processes include the encoding, retention, and retrieval of information in a temporary workspace, or working memory. Sensemaking is constrained by the structure and capacity of working memory, and the operation of the knowledge base is constrained by its own nature and architecture (Gilhooly and Logie 2004). Visual analytics is defined as the science of analytical reasoning facilitated by interactive visual interfaces (Thomas and Cook 2005). The field aims to support sensemaking processes and to help users gain insight through interactive, visual exploration of a data set. However, due to the limited capacity of visual working memory, it is very difficult for people to discover and keep track of all patterns while looking at visualization graphs. Inconsistencies between mental models and external representations further increase cognitive load and thereby hinder sensemaking outcomes.

There are three phases, or loops, in the sensemaking process: information foraging, information schematization and problem-solving, and decision-making and action (Ntuen et al. 2010). Information foraging is a cost–benefit assessment aimed at maximizing the rate of gaining valuable information while minimizing the resources consumed. Based on the metaphor of an animal foraging for food, information foraging theory helps visualization researchers discover effective ways to represent massive amounts of data and provide effective mechanisms for navigation; challenges include formalizing contributions, such as identifying trends or outliers of interest, positing explanatory hypotheses, and providing retrieval mechanisms. Information schematization is an information fusion or thinking process that uses new information to explain surprises and update prospective, predictive states of a situation. In decision-making and action, a stimulus is placed into a framework to understand, explain, attribute, extrapolate, and predict, a process that leads to situational understanding. Sensemaking deals with seeking, collating, and interpreting information to support decision-making (Ntuen et al. 2010), with the information content held in active working memory. Sensemaking can be applied to information visualization with the intent of reducing complexity and simplifying the volume of collected data in order to create understanding. Visualization can serve to amplify or weaken cognitive ability.

The following sections present two spatiotemporal visual analytics systems to illustrate how cognitive theories can be applied to the design and development of visualization systems.

Case Study 1: VAST Challenge 2015 CrowdAnalyzer

CrowdAnalyzer is a system developed for the IEEE Visual Analytics Science and Technology (VAST) Challenge 2015 (Wei et al. 2015). The challenge scenario features DinoFun World, a typical amusement park sitting on hundreds of hectares and hosting thousands of visitors every day. A dataset containing 3 days of visitor movement-tracking information is provided so that researchers can identify patterns and outliers in attendance and park activities during those 3 days.

CrowdAnalyzer is designed to visualize visitors' movements, analyze the park's facility status, and identify visitor groups and movement patterns. The system has four parts: facility manager, group hierarchical inspector, enter-leave time group viewer, and map (Fig. 1). Because of the huge amount of data, it is impossible to present everything at once. CrowdAnalyzer therefore lets the viewer get an overview first and then decide which area of the map to investigate further by zooming in and filtering the details of the data.
Fig. 1 Facility manager in CrowdAnalyzer

The "Facility Manager" tab presents the status of the park's entertainment facilities: the number of people waiting and the estimated waiting time. Multiple line charts present the data. The positive y-axis represents the number of people, the negative y-axis the waiting time, and the x-axis the timeline. The colored points on the chart show the number of people waiting to take a ride at the corresponding time points. According to Arnheim (1969), people interpret the composition of different graphic forms into shapes and patterns, so the user automatically categorizes dots of the same color into a group. Following the Gestalt principle of good continuation, the same-colored dots are perceived as a line (Arnheim 1974). The line depicts how the number of waiting people and the waiting time change over time, and distinct colors differentiate the data of different days. In Fig. 1, viewers can see that both the number of people waiting and the waiting time on Friday are much lower than on the weekend.
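A minimal sketch of this encoding is given below: the positive y-axis carries the number of people waiting, the negative y-axis the waiting time, and one color per day. The values are randomly generated placeholders rather than the actual DinoFun World data, and the styling is not CrowdAnalyzer's.

```python
import numpy as np
import matplotlib.pyplot as plt

hours = np.arange(8, 23)                       # park operating hours
rng = np.random.default_rng(2)
days = {"Friday": "tab:blue", "Saturday": "tab:orange", "Sunday": "tab:green"}

fig, ax = plt.subplots()
for day, color in days.items():
    waiting_people = rng.integers(10, 200, size=len(hours))
    waiting_minutes = waiting_people * rng.uniform(0.2, 0.5)
    # People waiting plotted upward, waiting time plotted downward, same hue per day.
    ax.plot(hours, waiting_people, color=color, marker="o", label=f"{day}: people waiting")
    ax.plot(hours, -waiting_minutes, color=color, marker="s", linestyle="--",
            label=f"{day}: waiting time (min)")

ax.axhline(0, color="black", linewidth=0.8)    # separates the two half-axes
ax.set_xlabel("hour of day")
ax.set_ylabel("people (up) / minutes (down)")
ax.legend(fontsize=8)
plt.show()
```

Plotting the two related measures on mirrored half-axes keeps them aligned in time while the shared hue lets good continuation group each day's dots into a single perceived curve.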

To reduce the demand on users' visual working memory, every visualization in CrowdAnalyzer has a legend in the corner, and windows are arranged side by side to facilitate exploration and comparison. Hundreds of thousands of tourists visit the park over the 3 days, so to reduce users' workload from both the perception and memory perspectives, it is important to categorize visitors into typical groups. The researchers applied statistical analysis (K-means and EM algorithms) to data criteria (such as visiting date, entering location, and group population) to cluster individual tourists into hundreds of small groups. These groups are then visualized using parallel coordinates with eight vertical axes (group ID, walking steps, number of entrances, group population, number of thrill rides, number of kiddie rides, number of everyone rides, and number of shows). Each line connecting the axes represents a typical visitor group. Users can filter on the parallel coordinates to find patterns and outliers. For example, in Fig. 2, the groups that did not take any rides but entered the park many times are picked out; their moving trajectories and key time points are presented on the map. There are two sets of concentric arcs on the map, and each arc represents one group; arcs with the same radius represent the same group, so there are six groups of visitors. Since the parallel coordinates show that each group contains only one person, there are six people in total. The start point of an arc represents the visitor's entering time and the end point the leaving time, so we can tell that the six people entered and left the park at the same time. The yellow color on an arc represents the duration the visitor stayed at that place on the map. In conclusion, the trajectories and arcs on the map show that the six people arrived at and left the park at the same time, shared exactly the same path, took a rest at the entrance before entering the park, and then rested again at the other entrance. We can therefore reasonably conclude that these people are not visitors but park staff responsible for float performances: they performed on a fixed route, repeated the performance five times each day, and took a rest after each performance. By filtering out abnormal data on the parallel coordinates and presenting detailed information on the map, users are able to observe and understand the characteristics of different types of visitors.
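The clustering-plus-parallel-coordinates step could be sketched as follows. The feature columns, cluster count, and synthetic data are assumptions for illustration only; this does not reproduce the CrowdAnalyzer pipeline or the VAST 2015 dataset.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(3)
# Hypothetical per-visitor features mirroring the parallel-coordinate axes in the text.
features = pd.DataFrame({
    "walking_steps": rng.integers(2000, 30000, 500),
    "num_entrances": rng.integers(1, 6, 500),
    "thrill_rides":  rng.integers(0, 10, 500),
    "kiddie_rides":  rng.integers(0, 10, 500),
    "shows":         rng.integers(0, 5, 500),
})

# Standardize so no single feature dominates the distance metric, then cluster.
scaled = StandardScaler().fit_transform(features)
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scaled)

# One polyline per visitor across the feature axes, colored by cluster.
plot_df = pd.DataFrame(scaled, columns=features.columns)
plot_df["group"] = groups
parallel_coordinates(plot_df, class_column="group", colormap="tab10", alpha=0.3)
plt.show()
```

Filtering then amounts to selecting polylines by axis ranges (e.g., zero rides but several entrances) and inspecting only those visitors on the map view.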
Fig. 2 Enter-leave time group view and parallel coordinates in CrowdAnalyzer

Case Study 2: VAST Challenge 2016 MetaCurve

The second system, MetaCurve, aims to help analysts interpret and analyze periodic time-series data, using the IEEE VAST Challenge 2016 Mini-Challenge 2 dataset. The dataset records an office building's temperature and HVAC (heating, ventilation, and air conditioning) system status over 2 weeks, together with the employees' movements in the building. The first section of MetaCurve presents the movement patterns and outliers of employees. Summarized pie charts (Fig. 3) are designed to attract users' attention and invite them to click a pie chart to explore further. In Fig. 3, fourteen arcs make up each pie chart. Color differentiates the arcs, and the black stroke between arcs strengthens their boundaries. The radius of each arc represents the number of outliers in employees' movements. Although the arcs for the weekend are absent because employees took breaks, viewers still have the visual ability to fill in the missing parts, recognize the pie chart, and understand the information it is designed to express. The seven pie charts, showing the movement anomalies of seven types of employees, reveal that the movements of employees from the administration, engineering, facilities, and IT departments are more flexible than those of employees from the executive, HR, and security departments.
Fig. 3 Pie charts in MetaCurve

Clicking on a pie chart opens a detailed visualization combining coordinates, color, shading, and shapes (Fig. 4). Humans are sensitive to differences in color, shape, and shading, so these visual factors are adopted to take advantage of human visual perception. The x-axis of the chart in Fig. 4 is the timeline, and the y-axis corresponds to the building floors and zones: from the bottom (first floor, zone 1) to the top (third floor, zone 7), each proximity zone is presented. Because the employees' movement data are periodic, their proximity records are stacked together in the chart. If an employee appeared in a zone fewer than four times in the 2 weeks, a light shade is marked at that time; if the employee appeared only once, a dot is marked on the shade. The orange line marked on a zone shows the location of the employee's office. Figure 4 clearly depicts the movement differences between a security staff member and an administrator.
Fig. 4 Timeline chart showing the movement differences between a security staff member and an administrator

The building temperature and HVAC data are also periodic and are overlaid onto the timeline in MetaCurve. For continuous data, such as the CO2 concentration in Fig. 5, statistical measurements are used to compute the median, variation, and interquartile range; the range between the first and third quartiles is plotted in color and treated as the normal data range. Data points that fall outside this range are considered major outliers and marked with dots, whose color corresponds to the date.
Fig. 5 Statistical measurements of CO2 concentration in MetaCurve

By using colorful pie charts, MetaCurve first attracts users' attention and then provides detailed spatiotemporal information by combining coordinates, color, curves, and shapes.

Conclusion

Information visualization is a multidisciplinary field that still lacks sufficient theoretical foundations to inform design and evaluation. This entry provided an overview of the literature on the cognitive theories behind information visualization, introducing the underlying cognitive knowledge and the basic perception theories central to the development of ideas in visualization. The two case studies show how basic cognition theories inform and impact the design and development of information visualizations. Investigating the cognitive processing of information visualization gives researchers a useful way to describe, validate, and understand design work, and the value of theories in cognition and perception can further strengthen and legitimize the information visualization discipline.

References

  1. Agrawala, M., Stolte, C.: Rendering effective route maps: improving usability through generalization. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 241–249. ACM, New York (2001)
  2. Alvarez, G.A., Cavanagh, P.: The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychol. Sci. 15, 106–111 (2004)
  3. Arnheim, R.: Visual Thinking. University of California Press, Berkeley (1969)
  4. Arnheim, R.: Art and Visual Perception. University of California Press, Los Angeles (1974)
  5. Baddeley, A.: Working memory. Science 255, 556–559 (1992)
  6. Bays, P.M., Husain, M.: Dynamic shifts of limited working memory resources in human vision. Science 321, 851–854 (2008). https://doi.org/10.1126/science.1158023
  7. Bays, P.M., Catalao, R.F.G., Husain, M.: The precision of visual working memory is set by allocation of a shared resource. J. Vis. 9, 7 (2009). https://doi.org/10.1167/9.10.7
  8. Bays, P.M., Wu, E.Y., Husain, M.: Storage and binding of object features in visual working memory. Neuropsychologia 49, 1622–1631 (2011). https://doi.org/10.1016/j.neuropsychologia.2010.12.023
  9. Bor, D., Duncan, J., Owen, A.M.: The role of spatial configuration in tests of working memory explored with functional neuroimaging. Scand. J. Psychol. 42, 217–224 (2001)
  10. Brady, T.F., Konkle, T., Alvarez, G.A.: A review of visual memory capacity: beyond individual items and toward structured representations. J. Vis. 11, 4 (2011). https://doi.org/10.1167/11.5.4
  11. Card, S.K., Mackinlay, J.D., Shneiderman, B. (eds.): Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers Inc., San Francisco (1999)
  12. Carpenter, P., Graham, W.: Art and Ideas: An Approach to Art Appreciation. Mills and Boon, London (1971)
  13. Chandler, P., Sweller, J.: Cognitive load theory and the format of instruction. Cogn. Instr. 8, 293–332 (1991)
  14. Chandler, P., Sweller, J.: The split-attention effect as a factor in the design of instruction. Br. J. Educ. Psychol. 62, 233–246 (1992)
  15. Cooper, G.: Research into cognitive load theory and instructional design at UNSW (1998). http://dwb4.unl.edu/Diss/Copper/UNSW.htm
  16. Crnovrsanin, T., Muelder, C., Correa, C., Ma, K.L.: Proximity-based visualization of movement trace data. In: 2009 IEEE Symposium on Visual Analytics Science and Technology, pp. 11–18 (2009)
  17. Dent, B.D.: Cartography: Thematic Map Design. WCB/McGraw-Hill, Boston (1999)
  18. Duncan, J.: Boundary conditions on parallel processing in human vision. Perception 18, 457–469 (1989)
  19. Duncan, J., Humphreys, G.W.: Visual search and stimulus similarity. Psychol. Rev. 96, 433–458 (1989)
  20. Frick, R.W.: Issues of representation and limited capacity in the visuospatial sketchpad. Br. J. Psychol. 79(Pt 3), 289–308 (1988)
  21. Gardner, H.: Frames of Mind: The Theory of Multiple Intelligences. Basic Books, New York (1983)
  22. Gilhooly, K., Logie, R.H.: Working Memory and Thinking: Current Issues in Thinking and Reasoning. Psychology Press, Hove (2004)
  23. Gombrich, E.: Art and Illusion: A Study in the Psychology of Pictorial Representation. Phaidon Press, London/New York (1977)
  24. Guo, C., Xu, S., Yu, J., Zhang, H., Wang, Q., Xia, J., Zhang, J., Chen, Y.V., Qian, Z.C., Wang, C., Ebert, D.: Dodeca-rings map: interactively finding patterns and events in large geo-temporal data. In: 2014 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 353–354 (2014)
  25. Havre, S., Hetzler, B., Nowell, L.: ThemeRiver: visualizing theme changes over time. In: Proceedings of the IEEE Symposium on Information Visualization 2000 (InfoVis 2000), pp. 115–123 (2000)
  26. Healey, C.G., Enns, J.T.: Attention and visual memory in visualization and computer graphics. IEEE Trans. Vis. Comput. Graph. 18, 1170–1188 (2012). https://doi.org/10.1109/TVCG.2011.127
  27. Hoffman, D.D.: Visual Intelligence: How We Create What We See. W. W. Norton, New York (2000)
  28. Huang, L.: Visual working memory is better characterized as a distributed resource rather than discrete slots. J. Vis. 10, 8 (2010). https://doi.org/10.1167/10.14.8
  29. Huang, L., Pashler, H.: A Boolean map theory of visual attention. Psychol. Rev. 114, 599–631 (2007). https://doi.org/10.1037/0033-295X.114.3.599
  30. Humphrey, C.M., Adams, J.A.: General visualization abstraction algorithm for directable interfaces: component performance and learning effects. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 40, 1156–1167 (2010). https://doi.org/10.1109/TSMCA.2010.2052604
  31. Julesz, B.: A theory of preattentive texture discrimination based on first-order statistics of textons. Biol. Cybern. 41, 131–138 (1981)
  32. Julesz, B.: A brief outline of the texton theory of human vision. Trends Neurosci. 7, 41–45 (1984). https://doi.org/10.1016/S0166-2236(84)80275-1
  33. Lamping, J., Rao, R., Pirolli, P.: A Focus+Context technique based on hyperbolic geometry for visualizing large hierarchies. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 401–408. ACM Press/Addison-Wesley, New York (1995)
  34. Logan, G.D.: Attention and preattention in theories of automaticity. Am. J. Psychol. 105, 317–339 (1992)
  35. Luck, S.J., Vogel, E.K.: The capacity of visual working memory for features and conjunctions. Nature 390, 279–281 (1997). https://doi.org/10.1038/36846
  36. Luck, S.J., Vogel, E.K.: Visual working memory capacity: from psychophysics and neurobiology to individual differences. Trends Cogn. Sci. 17, 391–400 (2013). https://doi.org/10.1016/j.tics.2013.06.006
  37. Miller, G.A.: The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol. Rev. 63, 81–97 (1956). https://doi.org/10.1037/h0043158
  38. Miyake, A., Shah, P.: Models of Working Memory: Mechanisms of Active Maintenance and Executive Control. Cambridge University Press, Cambridge (1999)
  39. Novotny, M.: Visually effective information visualization of large data. In: Proceedings of the 8th Central European Seminar on Computer Graphics (CESCG 2004), pp. 41–48. CRC Press, Boca Raton (2004)
  40. Ntuen, C.A., Park, E.H., Gwang-Myung, K.: Designing an information visualization tool for sensemaking. Int. J. Hum. Comput. Interact. 26, 189–205 (2010). https://doi.org/10.1080/10447310903498825
  41. Paas, F., Renkl, A., Sweller, J.: Cognitive load theory: instructional implications of the interaction between information structures and cognitive architecture. Instr. Sci. 32, 1–8 (2004). https://doi.org/10.1023/B:TRUC.0000021806.17516.d0
  42. Pashler, H.: Familiarity and visual change detection. Percept. Psychophys. 44, 369–378 (1988). https://doi.org/10.3758/BF03210419
  43. Bornstein, M.H., Krinsky, S.J.: Perception of symmetry in infancy: the salience of vertical symmetry and the perception of pattern wholes. J. Exp. Child Psychol. 39, 1–19 (1985). https://doi.org/10.1016/0022-0965(85)90026-8
  44. Shneiderman, B.: The eyes have it: a task by data type taxonomy for information visualizations. In: Proceedings of the 1996 IEEE Symposium on Visual Languages, pp. 336–343 (1996)
  45. Sweller, J.: Some cognitive processes and their consequences for the organisation and presentation of information. Aust. J. Psychol. 45, 1–8 (1993). https://doi.org/10.1080/00049539308259112
  46. Sweller, J., van Merrienboer, J.J.G., Paas, F.G.W.C.: Cognitive architecture and instructional design. Educ. Psychol. Rev. 10, 251–296 (1998). https://doi.org/10.1023/A:1022193728205
  47. Thomas, J.J., Cook, K.A.: Illuminating the Path: The Research and Development Agenda for Visual Analytics. IEEE Computer Society/Pacific Northwest National Laboratory (PNNL), Los Alamitos/Richland (2005)
  48. Treisman, A.: Search, similarity, and integration of features between and within dimensions. J. Exp. Psychol. Hum. Percept. Perform. 17, 652–676 (1991)
  49. Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cogn. Psychol. 12, 97–136 (1980). https://doi.org/10.1016/0010-0285(80)90005-5
  50. van Merriënboer, J.J.G., Sweller, J.: Cognitive load theory and complex learning: recent developments and future directions. Educ. Psychol. Rev. 17, 147–177 (2005). https://doi.org/10.1007/s10648-005-3951-0
  51. Ware, C.: Information Visualization: Perception for Design, 3rd edn. Morgan Kaufmann, Burlington (2012)
  52. Wei, S., Hu, K., Cheng, L., Tang, H., Du, W., Guo, C., Pan, C., Li, M., Yu, B., Li, X., Chen, Y.V., Qian, Z.C., Zhu, Y.M.: Crowd Analyzer: a collaborative visual analytic system. In: 2015 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 177–178 (2015)
  53. Weick, K.E.: Sensemaking in Organizations. SAGE, Thousand Oaks (1995)
  54. Wheeler, M.E., Treisman, A.M.: Binding in short-term visual memory. J. Exp. Psychol. Gen. 131, 48–64 (2002)
  55. Wilken, P., Ma, W.J.: A detection theory account of change detection. J. Vis. 4, 1120–1135 (2004). https://doi.org/10.1167/4.12.11
  56. Wolfe, J.M.: Guided Search 2.0: a revised model of visual search. Psychon. Bull. Rev. 1, 202–238 (1994). https://doi.org/10.3758/BF03200774
  57. Zhang, W., Luck, S.J.: Discrete fixed-resolution representations in visual working memory. Nature 453, 233–235 (2008). https://doi.org/10.1038/nature06860

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Media Arts & Design, James Madison University, Harrisonburg, USA
  2. Department of Computer Graphics Technology, Purdue University, West Lafayette, USA