Cognitive Processing of Information Visualization
Information visualization is the use of computer-supported, interactive, visual representations of abstract data to amplify cognition (Card et al. 1999).
Information visualizations turn raw data into information and enable researchers to gain insight from the data. Understanding how viewers interpret different types of visual information contributes to the creation of effective and intuitive visualizations. This paper introduces the cognitive processing of visualizations from the angles of pre-attentive processing, visual working memory, cognitive load, and sensemaking. The hope is that this will give readers a working understanding of visualization through theories and methods of visual perception and cognition. We present two case studies that illustrate how cognitive theories inform and impact our research.
State of the Art Work
Vision’s Constructive Power
Gardner (1983) advocated a theory of multiple intelligences that has drawn the attention of researchers due to its premise that each individual's intelligence is composed of multiple intelligences. The intelligences are independent and have their own operating systems within the brain. People with higher visual and spatial intelligences respond better to visual cues by storing, manipulating, and recreating visual information. The constructive power of human vision has drawn the attention of researchers for many years. This section reviews the work of Hoffman, Gombrich, and Arnheim in the domain of visual intelligence and discusses how their ideas inform, impact, and relate to the creation and viewer interpretation of information visualizations.
Hoffman and Vision’s Constructive Power
Hoffman (2000) attempted to explain the complex mental construction process shared by all sighted individuals. In Visual Intelligence (Hoffman 2000), he introduced sets of universal and specialized rules that govern our perception of line, color, form, depth, and motion. Hoffman's rule of generic views and 10 rules of visual intelligence clearly explain how individuals interpret and construct visual objects. These rules indicate how our mind organizes data and turns data into knowledge, and they can be widely applied to visualization design. People can entertain countless interpretations of what they see, but humans prefer to perceive things quickly and efficiently. The rule of generic views implies that designers should construct a stable view and a constant image. If an object has more salient part boundaries, humans will see it as a figure because it is more efficient for our perception to process clearer evidence and a stronger boundary. Havre et al. (2000) were inspired by the perceptual processes of identifying curves and silhouettes, recognizing parts, and grouping them together into objects. They created a novel visualization tool called ThemeRiver that employs the river metaphor to depict thematic variations over time. Temporal thematic changes can be easily recognized because the system uses smooth and continuous curves to bound each theme and distinct colors to differentiate themes.
Moreover, Hoffman (2000) found that if two visual structures have a non-accidental relation, a designer should group them and assign them to a common origin. He also stated that if three or more curves intersect at a common point in an image, they should be interpreted as intersecting at a common point in space. These rules can guide the design of movement data by grouping and stacking trajectories based on time proximity and visual similarity. Crnovrsanin et al. (2009) plotted traces as distance to the explosion (y-axis) vs. time (x-axis). By applying proximity, audiences can grasp the entire event at a glance and identify different patterns, such as spatial concentration, coincidence, trends, and divergence. In order to make a powerful design and a compelling product, visualization researchers need to integrate these rules and construct what human beings desire to see with little effort.
Gombrich and Constructivist Perception
Gombrich (1977) proposed in Art and Illusion that visual perception always functions as a projection of prior experience and imagination, the so-called constructivist view. As a constructivist, Gombrich pointed out that artists manipulate inherited pictorial schemata to directly observe the world, and in turn correct the schemata based on their interaction experience. Gombrich (1977) also pointed out that the ability to recognize objects is the result of perceptual tuning and selective attention. He differentiated looking from seeing, and stated that viewers experience a four-step process of image engagement while looking at images. The four steps are attention (focus), interest (cognitive awareness of the focus), involvement (meaning attached to the awareness), and attitude (feeling resulting from the meaning) (Gombrich 1977). The first step is to be attracted to parts of the image. Then a viewer attaches some meaning to those parts. Thereafter, viewers will normally generate some feeling or attitude toward the image. At this point, looking changes into seeing, accompanied by a statement about the image. The attitudes, in turn, affect the way viewers perceive the image. The engagement process clearly presents how viewers interact with pictures, and therefore enlightens visualization design.
Shneiderman (1996) was inspired by Gombrich's schemata and proposed a famous visual information-seeking mantra: overview first, zoom and filter, then details-on-demand. The mantra has served as a golden rule in visual analytics because it takes human perceptual abilities into consideration. It is very easy for audiences to scan, recognize, and recall images rapidly, and to detect changes in size, color, shape, movement, or texture. It is intuitive for audiences to perform tasks like dragging one object to another. Almost all successful visualization designs support overview, zoom and filter, and details-on-demand. Guo et al. (2014) used dodecagons to visualize GPS data. The system provides an effective overview to show common patterns and allows analysts to filter and examine individual patterns in detail through various interactions.
Arnheim and Gestalt Principles
Arnheim (1969) defined picture perception as the composition of circles, lines, squares, and other graphic forms into shapes and patterns. The innate laws governing this structure are described by Gestalt theory. In Art and Visual Perception: A Psychology of the Creative Eye (Arnheim 1974), Arnheim detailed picture-making based on balance, shape, form, growth, space, light, color, movement, dynamics, and expression. Gestalt laws, such as figure/ground, simplicity, completeness, good continuation, and the like, were named as fundamental to human perception and visual design.
Visual balance is an innate concept as well as a key principle with which designers convey messages (Bornstein and Krinsky 1985). Learning visual balance enables designers to create visualizations that “look visually right” (Carpenter and Graham 1971) with respect to color and layout. Arnheim (1974) illustrated the balance concept with a structural net that determined balance. He described nine hotspots in every visual work and held that visual elements on the main axes or at the centers should be in visual balance. Weight and direction lead to visual balance. More specifically, the characteristics of visual objects, such as location, size, color, shape, and subject matter, influence visual balance. In spatiotemporal visualization, it is not an easy task to arrange map elements – legends, scales, borders, areas, place names, and glyphs – into an aesthetically pleasing design. Dent (1999) employed the structural net as a guide for thematic map creation. The research effectively used all spaces and retained a harmonious balance among visual elements.
Arnheim proposed that visual thinking occurs primarily through abstract imagery (Arnheim 1969). Arnheim stressed the importance of reasoning with shapes and identified the nature of abstraction in visual representation. Designers always use visual abstraction to clean up the display and impress observers. When using visual abstraction in information visualization, researchers should keep in mind that the meaning of the raw data sets should be preserved true to their original form. Numerous visual abstraction approaches have emerged in the visualization field. Agrawala and Stolte (2001) presented route maps to depict a path from one location to another. Through specific generalization techniques involving distortion and abstraction, route maps were able to present a trajectory in a clear, concise, and convenient form. Lamping et al. (1995) created a novel, hyperbolic-geometry approach to visualizing large hierarchies. Interaction techniques, such as manipulating focus, using pointer clicks, and interactive dragging, emphasized the important regions that viewers tend to focus on at the expense of distorting less important information. Humphrey and Adams (2010) employed the General Visualization Abstraction (GVA) algorithm to provide a novel technique for information abstraction, such as selection and grouping. This approach facilitated abstraction by identifying the most relevant information items after assigning an importance value to each item. GVA could be applied to geographic map-based interfaces to support incident management and decision-making. Visual abstraction helps to transfer the meaning of the original data into a slightly different but clearer form. Additionally, visual abstraction declutters graphical representations of large data sets by replacing the data with new visual elements corresponding to higher levels of abstraction (Novotny 2004).
Pre-attentive Processing
Human brains can rapidly and automatically direct attention to the information with the highest salience, and suppress irrelevant information, based on simple computations over an image (Healey and Enns 2012). This is often called pre-attentive processing, and it provides the informational basis of attentional selection (Logan 1992). Detection precedes conscious attention: selection cannot occur until the ensemble coding and feature hierarchy of the pre-attentive process are complete.
Many research efforts have tried to address a central question: which properties of visualizations rapidly attract people? Selective attention usually binds features, such as color, shape, location, and texture, into a perceptual object representation (Wheeler and Treisman 2002). Ware (2012) identified visual properties that are pre-attentively processed in visualization. These are also referred to as pre-attentive attributes; they can be perceived in less than 10 ms without conscious effort and require 200–250 ms for large, multi-element displays. These attributes are grouped into four categories: color (hue and intensity), form (line orientation, line length, line width, line collinearity, size, curvature, spatial grouping, blur, added marks, and numerosity), motion (flicker and direction of motion), and spatial position (2D position, stereoscopic depth, and convex/concave shape based on shading) (Ware 2012). The resulting pop-out effect can be applied to information visualization design.
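The four attribute categories above lend themselves to a small lookup structure; the following is an illustrative sketch (the dictionary and helper function are our own, not part of Ware's text):

```python
# Illustrative grouping of the pre-attentive attributes listed by Ware (2012).
# Category and attribute names follow the text above; the helper is hypothetical.
PREATTENTIVE_ATTRIBUTES = {
    "color": {"hue", "intensity"},
    "form": {"line orientation", "line length", "line width", "line collinearity",
             "size", "curvature", "spatial grouping", "blur", "added marks",
             "numerosity"},
    "motion": {"flicker", "direction of motion"},
    "spatial position": {"2D position", "stereoscopic depth",
                         "convex/concave shape"},
}

def preattentive_category(attribute):
    """Return the category of a pre-attentive attribute, or None if the
    attribute is not processed pre-attentively according to the taxonomy."""
    for category, attrs in PREATTENTIVE_ATTRIBUTES.items():
        if attribute in attrs:
            return category
    return None
```

A designer could use such a table to audit an encoding: properties that map to a category pop out, while those that return `None` (e.g., typeface) require serial, attentive inspection.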
Five notable theories explain how pre-attentive processing occurs. Feature integration theory proposed a model in which low-level human vision is composed of a set of feature maps that can detect features either in parallel or in serial (Treisman 1991), plus a master map of locations that is required to combine feature activity at a common spatial location (Treisman and Gelade 1980). Texton theory focused on the statistical analysis of texture patterns. Texture patterns consist of three categories of textons: elongated blobs (e.g., rectangles, ellipses, line segments) with specific properties, such as hue, orientation, and width; terminators (ends of line segments); and crossing line segments (Julesz 1981, 1984). Julesz (1981) stated that only a difference in textons or in their density can be detected pre-attentively. Instead of supporting the dichotomy of serial and parallel search modes, Duncan and Humphreys (1989) explored two factors that may influence search time in conjunction searches: the number of information items required to identify the target and how easily a target can be distinguished from its distractors. Duncan and Humphreys (1989) assumed that search ability depends on the type of task and the display conditions. Search time is related to two kinds of similarity (Duncan and Humphreys 1989). T-N similarity, the similarity between targets and nontargets, has a positive relationship with search time and a negative relationship with search efficiency. N-N similarity, the similarity among the nontargets themselves, has a negative relationship with search time and a positive relationship with search efficiency. Guided search theory was proposed by Wolfe (1994), who constructed an activation map for visual search based on bottom-up and top-down visual information (Wolfe 1994).
Users' attention is drawn to the highest hills in the activation map, which represent the largest combination of bottom-up and top-down influences (Healey and Enns 2012). More recently, Boolean map theory was presented (Huang and Pashler 2007). It divides visual search into two stages, selection and access, and divides the scene into selected elements and excluded elements; the selected set is referred to as a Boolean map. Viewers can generate Boolean maps in two ways: by specifying a single value of a feature or by applying union and intersection to two existing maps (Huang and Pashler 2007).
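The two map-generation routes described above can be sketched in a few lines. This is a toy illustration of Boolean map operations, not an implementation from Huang and Pashler's paper; the scene representation and function names are assumptions:

```python
# A minimal sketch of Boolean map selection (after Huang and Pashler 2007).
# A scene is a list of items, each a dict of feature values; a Boolean map
# is represented as the set of indices of currently selected items.
def select_by_feature(scene, feature, value):
    """Route 1: select all items sharing a single feature value."""
    return {i for i, item in enumerate(scene) if item.get(feature) == value}

def union(map_a, map_b):
    """Route 2a: combine two existing Boolean maps by union."""
    return map_a | map_b

def intersection(map_a, map_b):
    """Route 2b: combine two existing Boolean maps by intersection."""
    return map_a & map_b

scene = [
    {"color": "red", "shape": "circle"},
    {"color": "red", "shape": "square"},
    {"color": "blue", "shape": "circle"},
]
reds = select_by_feature(scene, "color", "red")        # items 0 and 1
circles = select_by_feature(scene, "shape", "circle")  # items 0 and 2
red_circles = intersection(reds, circles)              # item 0 only
```

The theory's key claim is reflected here: a conjunction (red AND circle) cannot be selected in one step; it requires composing two single-feature maps.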
Visual Working Memory
Baddeley (1992) stated that the working memory model consists of a phonological loop that maintains verbal–linguistic information, a visuospatial sketchpad that maintains visual and spatial information, a central executive to control and coordinate the operation of the systems, and an episodic buffer to communicate with long-term memory. Working memory decides which activities to perform, inhibits distracting information, and stores information while accomplishing a complex task (Miyake and Shah 1999). Luck and Vogel (2013) defined visual working memory as the active maintenance of visual information to serve the needs of ongoing tasks. The last 15 years have seen a surge in research on visual working memory that aims to understand its structure, capacity, and the individual variability present in its cognitive functions. There are three essential theoretical issues related to visual working memory: discrete-slot versus shared-resource, visual representation, and visual context.
Visual working memory research has largely focused on identifying the limited capacity of the working memory system and exploring the nature of stored memory representations. The field has recently debated whether the capacity of visual working memory is constrained by a small set of “discrete fixed-precision representations,” the discrete-slot model, or by a pool of divisible resources allocated in parallel, the shared-resource model (Luck and Vogel 2013; Huang 2010; Zhang and Luck 2008). Visual working memory allows people to temporarily maintain visual information in their minds for a few seconds after its disappearance (Luck and Vogel 1997). Some researchers proposed that working memory stores a fixed number of high-precision representations when people are faced with a large number of items, with no information retained about the remaining objects (Luck and Vogel 1997; Pashler 1988; Zhang and Luck 2008). Luck and Vogel (1997) also stated that it is possible to store both the color and orientation of four items in visual working memory. Some researchers found that both the number of visual objects and the visual information load impose capacity limits on visual working memory of approximately four or five objects (Alvarez and Cavanagh 2004; Luck and Vogel 1997; Pashler 1988) and six spatial locations represented allocentrically in a spatial configuration (Bor et al. 2001). Thus, the temporary storage of visual information concerns integrated objects rather than individual features. This statement is also consistent with the selective attention metaphor that visuospatial attention is like the beam of a flashlight: people are unable to split their attention across several locations and instead attend to the most important events while filtering out distractions.
However, other researchers claimed that visual working memory can store imprecise representations of all items, including low-resolution representations of the remaining objects (Bays et al. 2009, 2011; Bays and Husain 2008; Frick 1988; Wilken and Ma 2004). They likened visual working memory to a collection of low-resolution digital photographs and challenged the discrete-slot concept by examining the distribution of recall errors across the visual scene. Based on a Bayesian decision model, more visual objects can be held in visual working memory, with fewer resources allocated to each object. Thus, in contrast to the discrete-slot model, the continuous-resource model emphasizes that the storage capacity of visual working memory is not limited by the number of visual objects. Recent empirical evidence from recurrent neural network models suggests that a discrete item limit is more favorable (Luck and Vogel 2013). Although there is still much ongoing debate regarding the models for resource allocation, there is general agreement that visual working memory has an important object/resolution trade-off: as more items are stored in visual working memory, less fidelity per visual item can be maintained (Brady et al. 2011).
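The object/resolution trade-off admits a simple arithmetic illustration under the shared-resource view. The function below is a hypothetical sketch with abstract resource units, not a model taken from the cited studies:

```python
def precision_per_item(total_resource, n_items):
    """Shared-resource view of visual working memory: a fixed resource
    budget is divided among stored items, so fidelity per item falls as
    the number of items grows. Units are abstract and illustrative."""
    if n_items <= 0:
        raise ValueError("at least one item must be stored")
    return total_resource / n_items
```

With a budget of 1.0, storing two items leaves each twice the precision that storing four would; the discrete-slot model instead predicts full precision up to the slot limit and nothing beyond it.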
Additionally, what we see depends on where our attention is focused, and our attention is guided by what we store in our memory. Attention and visual working memory are closely connected: the central executive of working memory manages the direction of attention and supervises information integration. Moreover, both attention and visual working memory have a limited capacity for the visual features that can be detected or maintained. Pre-attentive processing plays a critical role in determining which visual properties our eyes are drawn to, and therefore helps people deal with visual and spatial information in working memory.
Cognitive Load
Cognitive load refers to “the total amount of mental activity imposed on working memory at an instance in time” (Cooper 1998). Cooper (1998) also stated that “the most important factor that contributes to cognitive load is the number of elements that need to be attended to.” Sweller and his colleagues (Chandler and Sweller 1991, 1992; Sweller et al. 1998) identified three sources of cognitive load: intrinsic, extraneous, and germane. The intrinsic cognitive load is determined by the basic characteristics of the information (Sweller 1993). The extraneous cognitive load is imposed by the way designers organize and display information (Chandler and Sweller 1991, 1992). Designers are always striving to reduce this load and help viewers grasp the underlying information more effectively and efficiently. Finally, the germane cognitive load is the remaining free capacity that can be devoted to building new, complex schemas (Sweller et al. 1998).
Miller (1956) developed our understanding of working memory in terms of information chunks that can be strung together. He and his followers held that working memory has a capacity of roughly seven plus or minus two chunks at any given time (van Merriënboer and Sweller 2005; Miller 1956). In terms of visual working memory, the capacity is limited to approximately four or five visual elements (Alvarez and Cavanagh 2004; Luck and Vogel 1997; Pashler 1988) and six spatial locations where conscious thought transpires (Bor et al. 2001). Generally speaking, visual elements are schemas that can be understood as models that organize our knowledge, and different people mentally store different numbers of visual objects. Due to the nature of the material, it is almost impossible to change the intrinsic cognitive load through design. However, designers can control the extraneous cognitive load: its level may be modified based on how designers present information to users. The germane cognitive load is focused more on individual self-regulation and concerns schema automation (Paas et al. 2004). Some research posited that germane cognitive load may have a positive impact on working memory, whereas the intrinsic and extraneous cognitive loads are considered negative in impact (Sweller et al. 1998). One metric for the evaluation of visualization is to compare the sum of the three cognitive loads with the working memory capacity. If the additive effect produced by the three sources is less than the working memory capacity, the visualization system imposes lower cognitive load and offers good usability, and is more likely to be successful. On the contrary, if the sum exceeds the user's working memory capacity, the visualization system causes cognitive overload and poor usability, and is far less likely to be successful.
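The evaluation metric described above reduces to comparing summed loads against capacity. A minimal sketch, assuming abstract load units (the function and its parameters are illustrative, not a standardized measure from cognitive load theory):

```python
def within_capacity(intrinsic, extraneous, germane, capacity):
    """Heuristic from the text: the additive effect of the three sources
    of cognitive load must stay within working memory capacity for a
    visualization to avoid overload. All quantities are abstract units."""
    return intrinsic + extraneous + germane <= capacity

# A design whose total load fits the budget avoids overload;
# raising extraneous load (e.g., cluttered layout) can push it over.
ok = within_capacity(intrinsic=3, extraneous=2, germane=1, capacity=7)
overloaded = not within_capacity(intrinsic=3, extraneous=4, germane=1, capacity=7)
```

The practical lever is the extraneous term: intrinsic load is fixed by the material, so design improvements act by shrinking `extraneous` until the inequality holds.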
Visual working memory has a limited amount of processing power and capacity. Users become overwhelmed and abandon a visualization task when the amount of information exceeds their visual working memory capacity. The designer must use various design strategies to keep the cognitive load imposed by a user interface to a minimum so that more visual working memory resources are available for the analysis itself. Cognitive load also varies by user; consider the effort required for interaction. For an experienced user, adding interaction may help them gain insight from the data, while the same interactivity might increase cognitive load and make the visualization more difficult for a novice user. By reducing extraneous cognitive load with visual analytics techniques, we can minimize the total cognitive load imposed by the visualization interface, which increases the portion of working memory available for attending to information.
Sensemaking
Sensemaking is the process through which people make trade-offs and construct new knowledge of the world (Weick 1995). These processes include the encoding, retention, and retrieval of information in a temporary workspace, or working memory. Sensemaking is constrained by the structure and capacity of working memory; the operation of the knowledge base is constrained by its own nature and architecture (Gilhooly and Logie 2004). Visual analytics is defined as the science of analytical reasoning facilitated by interactive visual interfaces (Thomas and Cook 2005). The field aims to support sensemaking processes and the gaining of insight through interactive, visual exploration of a data set. However, due to the limited capacity of visual working memory, it is very difficult for people to discover and keep track of all patterns while looking at visualization graphs. Also, inconsistencies between mental models and external representations increase cognitive load and thereby further hinder sensemaking outcomes.
There are three phases and loops in the sensemaking process: information foraging, information schematization and problem-solving, and decision-making and action (Ntuen et al. 2010). Information foraging is a cost-benefit assessment that maximizes the rate of gaining valuable information while minimizing the resources consumed. Based on the metaphor of an animal foraging for food, information foraging theory helps visualization researchers discover effective ways to represent massive amounts of data and provide effective mechanisms for navigation. Challenges include formalizing contributions, such as identifying trends or outliers of interest, positing explanatory hypotheses, and providing retrieval mechanisms. Information schematization is an information fusion tool or thinking process that uses new information to explain surprise and update prospective, predictive states of a situation. Decision-making places a stimulus into a framework to understand, explain, attribute, extrapolate, and predict, a process that leads to situational understanding. Sensemaking deals with seeking, collating, and interpreting information to support decision-making (Ntuen et al. 2010), with the information content held in active working memory. Sensemaking can be applied to information visualization with the intent of reducing complexity and simplifying the volume of collected data in order to create understanding. Visualization can thus serve to amplify or weaken cognitive ability.
The following section presents two spatial-temporal visual analytics systems to illustrate how cognitive theories can be applied to the design and development of visualization systems.
Case Study 1: VAST Challenge 2015 CrowdAnalyzer
CrowdAnalyzer is a system developed for the IEEE Visual Analytics Science and Technology (VAST) Challenge 2015 (Wei et al. 2015). The challenge scenario describes DinoFun World, a typical amusement park sitting on hundreds of hectares and hosting thousands of visitors every day. A dataset containing 3 days of visitor movement tracking information was provided so that researchers could identify patterns and outliers in attendance and park activities during those 3 days.
The “Facility Manager” tab presents the status of the park's entertainment facilities: the number of people waiting and the estimated waiting time. Multiple line charts are used to present the data. The positive y-axis represents the number of people, the negative y-axis the waiting time, and the x-axis the timeline. The colored points on the coordinate plane show the number of people waiting to take a ride at the corresponding time points. According to Arnheim (1969), people interpret the composition of different graphic forms as shapes and patterns, so users automatically categorize dots of the same color into a group. By the Gestalt principle of good continuation, same-colored dots are read as a line (Arnheim 1974). The line depicts how the number of waiting people and the waiting time change through time, and the distinct colors differentiate the data on different days. In Fig. 1, viewers can see that both the number of waiting people and the waiting time on Friday are much lower than on the weekend.
Case Study 2: VAST Challenge 2016 MetaCurve
By using colorful pie charts, MetaCurve first attracts users' attention and then provides detailed spatial-temporal information by combining coordinates, color, curves, and shapes.
Conclusion
Information visualization is a multidisciplinary field that still lacks sufficient theoretical foundations to inform design and evaluation. This paper provided an overview of the literature on the cognitive theories of information visualization. It introduced the underlying cognitive knowledge and the basic perception theories central to the development of ideas in visualization. The two case studies show how basic cognitive theories inform and impact the design and development of information visualizations. Investigating the cognitive processing of information visualization gives researchers a useful way to describe, validate, and understand design work. The value of theories in cognition and perception can further strengthen and secure the information visualization discipline.
References
- Agrawala, M., Stolte, C.: Rendering effective route maps: improving usability through generalization. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 241–249. ACM, New York (2001)
- Arnheim, R.: Visual Thinking. University of California Press, Berkeley (1969)
- Arnheim, R.: Art and Visual Perception. University of California Press, Los Angeles (1974)
- Bays, P.M., Wu, E.Y., Husain, M.: Storage and binding of object features in visual working memory. Neuropsychologia 49, 1622–1631 (2011). https://doi.org/10.1016/j.neuropsychologia.2010.12.023
- Card, S.K., Mackinlay, J.D., Shneiderman, B. (eds.): Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers Inc., San Francisco (1999)
- Carpenter, P., Graham, W.: Art and Ideas: An Approach to Art Appreciation. Mills and Boon, London (1971)
- Cooper, G.: Research into cognitive load theory and instructional design at UNSW (1998). http://dwb4.unl.edu/Diss/Copper/UNSW.htm
- Crnovrsanin, T., Muelder, C., Correa, C., Ma, K.L.: Proximity-based visualization of movement trace data. In: 2009 IEEE Symposium on Visual Analytics Science and Technology, pp. 11–18 (2009)
- Dent, B.D.: Cartography: Thematic Map Design. WCB/McGraw-Hill, Boston (1999)
- Frick, R.W.: Issues of representation and limited capacity in the visuospatial sketchpad. Br. J. Psychol. 79(Pt 3), 289–308 (1988)
- Gardner, H.: Frames of Mind: The Theory of Multiple Intelligences. Basic Books, New York (1983)
- Gilhooly, K., Logie, R.H.: Working Memory and Thinking: Current Issues in Thinking and Reasoning. Psychology Press, Hove (2004)
- Gombrich, E.: Art and Illusion: A Study in the Psychology of Pictorial Representation. Phaidon Press, London/New York (1977)
- Guo, C., Xu, S., Yu, J., Zhang, H., Wang, Q., Xia, J., Zhang, J., Chen, Y.V., Qian, Z.C., Wang, C., Ebert, D.: Dodeca-rings map: interactively finding patterns and events in large geo-temporal data. In: 2014 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 353–354 (2014)
- Havre, S., Hetzler, B., Nowell, L.: ThemeRiver: visualizing theme changes over time. In: IEEE Symposium on Information Visualization 2000 (InfoVis 2000), pp. 115–123 (2000)
- Hoffman, D.D.: Visual Intelligence: How We Create What We See. W. W. Norton, New York (2000)
- Lamping, J., Rao, R., Pirolli, P.: A focus+context technique based on hyperbolic geometry for visualizing large hierarchies. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 401–408. ACM Press/Addison-Wesley, New York (1995)
- Novotny, M.: Visually effective information visualization of large data. In: Proceedings of the 8th Central European Seminar on Computer Graphics (CESCG 2004), pp. 41–48. CRC Press, Boca Raton (2004)
- Paas, F., Renkl, A., Sweller, J.: Cognitive load theory: instructional implications of the interaction between information structures and cognitive architecture. Instr. Sci. 32, 1–8 (2004). https://doi.org/10.1023/B:TRUC.0000021806.17516.d0
- Shneiderman, B.: The eyes have it: a task by data type taxonomy for information visualizations. In: IEEE Symposium on Visual Languages 1996, pp. 336–343 (1996)
- Thomas, J.J., Cook, K.A.: Illuminating the Path: The Research and Development Agenda for Visual Analytics. IEEE Computer Society/Pacific Northwest National Laboratory (PNNL), Los Alamitos/Richland (2005)
- Ware, C.: Information Visualization: Perception for Design, 3rd edn. Morgan Kaufmann, Burlington (2012)
- Wei, S., Hu, K., Cheng, L., Tang, H., Du, W., Guo, C., Pan, C., Li, M., Yu, B., Li, X., Chen, Y.V., Qian, Z.C., Zhu, Y.M.: CrowdAnalyzer: a collaborative visual analytic system. In: 2015 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 177–178 (2015)
- Weick, K.E.: Sensemaking in Organizations. SAGE, Thousand Oaks (1995)