# Methods for comparing scanpaths and saliency maps: strengths and weaknesses

## Abstract

In this article, we are interested in the computational modeling of visual attention. We report methods commonly used to assess the performance of these kinds of models. We survey the strengths and weaknesses of common assessment methods based on *diachronic* eye-tracking data. We then illustrate the use of some methods to benchmark computational models of visual attention.

### Keywords

Saliency · Visual attention · Eye movement

## Introduction

Eye tracking is a well-known technique for analyzing visual perception and attention shift and assessing user interfaces. However, until now, data analysis from eye-tracking studies has focused on *synchronic* indicators such as fixation (duration, number, etc.) or saccade (amplitude, velocity, etc.), rather than *diachronic* indicators (scanpaths or saliency maps). *Synchronic* means that an event occurs at a specific point in time, while *diachronic* means that this event is taken into account over time. We focus in this article on diachronic measures and review different ways of analyzing sequences of fixations represented as scanpaths or saliency maps.

Visual scanpaths depend on bottom-up and top-down factors such as the task users are asked to perform (Simola, Salojärvi, & Kojo, 2008), the nature of the stimuli (Yarbus, 1967), and the intrinsic variability of subjects (Viviani, 1990). Being able to measure the difference (or similarity) between two visual behaviors is fundamental both for differentiating the impact of different factors and for understanding what governs our cognitive processes. It also plays a key role in assessing the performance of computational models on overt visual attention, by, for example, evaluating how well saliency-based models predict where observers look.

In this study, we survey common methods for evaluating the difference/similarity between scanpaths and between saliency maps. In the first section, we describe state-of-the-art methods commonly used to compare visual scanpaths. We then consider the comparison methods that involve either two saliency maps or one saliency map plus a set of visual fixations. We first define how human saliency maps are computed and list some of their most important properties. The strengths and weaknesses of each method are emphasized. In the fourth section, we address inter observer variability, which reflects the natural dispersion of fixations existing between observers watching the same stimuli. It is important to understand the mechanisms underlying this phenomenon, since this dispersion can be used as an upper bound for a prediction. To illustrate the latter point and the use of common similarity metrics, we compare ground truth and model-predicted saliency maps. Finally, we broaden the scope of this article by raising a fundamental question: Do all visual fixations have the same meaning and role, and is it possible to classify fixations as being bottom-up, cognitive, top-down, or semantic?

## Methods for comparing scanpaths

Different metrics are available for comparing two scanpaths, using either distance-based methods (string edit technique or Mannan distance) or vector-based methods. Distance-based methods compare scanpaths only from their spatial characteristics, while vector-based approaches perform the comparison across different dimensions (frequency, time, etc.). These metrics are more or less complex and relevant depending on the situation to be analyzed. However, there is no consensus in the community on the use of a given metric. In this section, we present three metrics: the string edit metric, Mannan’s metric, and a vector-based metric.

### String edit metric

The string edit metric codes each scanpath as a character string by assigning a letter to the area of interest (AOI) in which each fixation falls, and then computes an editing cost (the *Levenshtein distance*) that gives a measure of the similarity of the strings (Levenshtein, 1966). This technique was originally developed to measure the edit distance between two words: the distance is the number of deletions, insertions, or substitutions necessary to make the two words identical (a computation also called the *alignment procedure*). The metric takes as input two strings (coding AOIs) and computes the minimum number of edits needed to transform one string into the other. A cost is associated with each transformation and each character. The goal is to find a series of transformations that minimizes the cost of aligning one sequence with the other. The total cost is the edit distance between the two strings. When the cost is minimal, the similarity between the two strings is maximal (i.e., when the two strings are identical, the distance is equal to 0). Conversely, the distance increases with the cost and, therefore, with the dissimilarity between the two strings. Figure 1 illustrates the method. The Levenshtein distance is the most common way to compare scanpaths (Josephson & Holmes, 2002; Privitera & Stark, 2000) and has been widely used for assessing the usability of Web pages (Baccino, 2004).

Several variants of the edit distance exist: the *longest common subsequence* (LCS) is the score obtained by allowing only addition and deletion, not substitution; the *Damerau–Levenshtein distance* allows addition, deletion, substitution, and the transposition of two adjacent characters; and the *Hamming distance* allows only substitution (and hence applies only to strings of the same length).

Since the string edit is based on a comparison of the sequence of fixations occurring in predefined AOIs, the question is how to define these AOIs. There are two ways: automatically gridded AOIs or content-based AOIs. The former is built by putting a grid of equally sized areas across the visual material, but for the latter, the meaningful regions of the stimulus need to be subjectively chosen. Whatever AOIs are constructed, the string edit method takes only the quantized spatial positions of the visual fixations into account. Hence, some small differences in scanpaths may change the string, while others produce the same string.

The string edit method is limited when certain AOIs have not been fixated, so there is a good deal of missing data.
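The alignment procedure described above can be sketched with a standard dynamic-programming implementation. This is a minimal illustration, not code from the cited studies; the function name `levenshtein` and the unit costs are our own choices:

```python
def levenshtein(s1, s2, cost_ins=1, cost_del=1, cost_sub=1):
    """Minimum edit cost to transform AOI string s1 into AOI string s2."""
    m, n = len(s1), len(s2)
    # d[i][j] = minimum cost of aligning s1[:i] with s2[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * cost_del
    for j in range(1, n + 1):
        d[0][j] = j * cost_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s1[i - 1] == s2[j - 1] else cost_sub
            d[i][j] = min(d[i - 1][j] + cost_del,      # delete from s1
                          d[i][j - 1] + cost_ins,      # insert into s1
                          d[i - 1][j - 1] + sub)       # substitute (or match)
    return d[m][n]
```

With two identical AOI strings the cost is 0, and each additional mismatch increases the distance, in line with the behavior described above.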

### Mannan’s metric

Mannan's similarity index (*Is*; Mannan, Ruddock, & Wooding, 1995) is based on the average linear distance between two scanpaths (*D*), weighted by the distance between randomized scanpaths of the same size (*Dr*). These randomly generated scanpaths are used to weight the sequence of real fixations, taking into account the fact that real scanpaths may convey a random component. The distance *D* is given by

\( {D^2} = \frac{{{n_1}\sum\nolimits_{j = 1}^{{n_2}} {d_{2j}^2} }}{{2{n_1}{n_2}\left( {{a^2} + {b^2}} \right)}} + \frac{{{n_2}\sum\nolimits_{i = 1}^{{n_1}} {d_{1i}^2} }}{{2{n_1}{n_2}\left( {{a^2} + {b^2}} \right)}} \)

where *n*_{1} and *n*_{2} are the number of fixations in the two traces; *d*_{1i} is the distance between the *i*th fixation in the first trace and its nearest neighbor in the second trace; *d*_{2j} is the distance between the *j*th fixation in the second trace and its nearest neighbor in the first one; *a* and *b* are the side lengths of the image; and *Dr* is the same distance computed between two sets of random locations.

The similarity index is then \( Is = 100\left( {1 - \frac{D}{{Dr}}} \right) \). Values of *Is* range from 0 (random scanpath) to 100 (identity). The main drawbacks of this technique are the following:

- The Mannan distance does not take the *temporal order* of the fixation sequence into account. This means that two sequences of fixations having a reversed order but an identical spatial configuration give the same Mannan distance.
- A difficult problem occurs when the two scanpaths have very different sizes (very different numbers of fixations). The Mannan distance may indicate great similarity even though the shapes of the scanpaths are clearly different; the measure is not tolerant of high variability between scanpaths.
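The computation of *D* above can be sketched as follows. This is a minimal illustration under stated assumptions: the index form *Is* = 100(1 − *D*/*Dr*) and the estimation of *Dr* by averaging over 100 random scanpaths are choices made for the example, and the function names are our own:

```python
import math
import random

def mean_sq_distance(scan1, scan2, a, b):
    """D^2 term: mean squared nearest-neighbour distance between two scanpaths.
    scan1, scan2: lists of (x, y) fixation coordinates; a, b: image side lengths."""
    n1, n2 = len(scan1), len(scan2)
    def nn_sq(p, trace):
        return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in trace)
    sum_d1 = sum(nn_sq(p, scan2) for p in scan1)   # d_1i: trace 1 vs trace 2
    sum_d2 = sum(nn_sq(p, scan1) for p in scan2)   # d_2j: trace 2 vs trace 1
    return (n1 * sum_d2 + n2 * sum_d1) / (2 * n1 * n2 * (a ** 2 + b ** 2))

def mannan_index(scan1, scan2, a, b, n_rand=100, seed=0):
    """Similarity index Is = 100 * (1 - D / Dr), Dr estimated from random scanpaths."""
    rng = random.Random(seed)
    d = math.sqrt(mean_sq_distance(scan1, scan2, a, b))
    dr2 = 0.0
    for _ in range(n_rand):
        r1 = [(rng.uniform(0, a), rng.uniform(0, b)) for _ in scan1]
        r2 = [(rng.uniform(0, a), rng.uniform(0, b)) for _ in scan2]
        dr2 += mean_sq_distance(r1, r2, a, b)
    dr = math.sqrt(dr2 / n_rand)
    return 100.0 * (1.0 - d / dr)
```

Two identical scanpaths give *D* = 0 and hence *Is* = 100, the maximum similarity.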

### Vector-based metric

An interesting method was recently proposed by Jarodzka, Holmqvist, and Nyström (2010). Each scanpath is viewed as a sequence of geometric vectors that correspond to the subsequent saccades of the scanpath. The vector representation retains the length and the direction of each saccade. A saccade is defined by a starting position (fixation *n*) and an ending position (fixation *n* + 1). A scanpath with *n* fixations is thus represented by a set of *n* − 1 vectors, and several properties can therefore be preserved, such as the shape of the scanpath, the scanpath length, and the position and duration of fixations. The sequences that have to be compared are aligned according to their shapes (although this alignment can be performed on other dimensions: length, durations, angle, etc.).

Each vector of one scanpath corresponds to one or more vectors of another scanpath, such that the path in the matrix of similarity between the vectors going from (*1, 1*) (similarity between the first vectors) to (*n*, *m*) (similarity between the last vectors) is the shortest one. Once the scanpaths are aligned, various measures of similarity between vectors (or sequences of vectors) can be used, such as average difference in amplitude, average distance between fixations, and average difference in duration.

This measure compares only two scanpaths. Sometimes, the overall aim is to compare whole groups of subjects with each other.

The method presumes that only fixations and saccades occur. Other eye movements, such as smooth pursuit, are not handled. Smooth pursuit movements are important when a video is watched. However, the problem may be solved if it is possible to represent smooth pursuit as a series of short vectors that are not clustered into one long vector.

This alignment procedure is independent of the stimulus content. However, the chosen dimensions may be weighted by some semantic values carefully selected by the researcher.
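The vector representation underlying this method is simple to construct. The helpers below (`saccade_vectors`, `amplitude_and_direction`) are illustrative names of our own, not from Jarodzka et al.:

```python
import math

def saccade_vectors(fixations):
    """Represent a scanpath of n fixations as n-1 saccade vectors (dx, dy)."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(fixations, fixations[1:])]

def amplitude_and_direction(vec):
    """Length and direction (radians) of one saccade vector."""
    dx, dy = vec
    return math.hypot(dx, dy), math.atan2(dy, dx)
```

Similarity measures (difference in amplitude, direction, etc.) can then be computed between the aligned vectors of two scanpaths.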

## Methods for comparing saliency maps

Comparing two scanpaths requires taking a number of factors, such as the temporal dimension or the alignment procedure, into account. To overcome these problems, another kind of method can be used. In this section, we focus on approaches involving two bidimensional maps. We first briefly describe how the visual fixations are used to compute a continuous saliency map. Second, we describe three common methods used to evaluate the degree of similarity between two saliency maps: a correlation-based measure, the Kullback–Leibler divergence, and receiver operating characteristic (ROC) analysis.

### From a discrete fixation map to a continuous saliency map

The discrete fixation map *f*^{i} for the *i*th observer is classically defined as

\( {f^i}\left( {\bf{x}} \right) = \sum\limits_{k = 1}^M {\delta \left( {{\bf{x}} - {{\bf{x}}_f}(k)} \right)} \)

where **x** is a vector representing the spatial coordinates and **x**_{f}(*k*) is the spatial coordinates of the *k*th visual fixation. The value *M* is the number of visual fixations for the *i*th observer. \( \delta \left( \cdot \right) \) is the Kronecker symbol (*δ*(*t*) = 1 if *t* = 0; otherwise, *δ*(*t*) = 0).

For *N* observers, the final fixation map *f* is given by

\( f\left( {\bf{x}} \right) = \frac{1}{N}\sum\limits_{i = 1}^N {{f^i}\left( {\bf{x}} \right)} \)

The continuous saliency map *S* is then deduced by convolving the fixation map *f* with an isotropic bidimensional Gaussian function:

\( S\left( {\bf{x}} \right) = f\left( {\bf{x}} \right) * {G_\sigma }\left( {\bf{x}} \right) \)

where *σ* is the standard deviation of the Gaussian function. It is commonly accepted that *σ* should be set to 1° of visual angle. One degree of visual angle represents an estimate of the size of the fovea. The corresponding standard deviation in pixels depends on the experimental setup (size of the screen and viewing distance). It is also implicitly assumed that a fixation can be approximated by a Gaussian distribution (Bruce & Tsotsos, 2006; Velichkovsky, Pomplun, Rieser, & Ritter, 1996). An example of fixation and saliency maps is given by Fig. 3. A heat map, which is a simple colored representation of the continuous saliency map, is also shown. Red areas pertain to salient areas, whereas blue areas are for nonsalient areas. Note that the fixation map illustrated by Fig. 3 is not exactly the one defined by the previous formula.

Fixation duration is not taken into account in the computation of the continuous saliency map. Itti (2005) showed that there was no significant correlation between model-predicted saliency and duration of fixation. Fixation duration is often considered to reflect the depth and the speed of visual processing in the brain. The longer the fixation duration, the deeper the visual processing (Henderson, 2007; Just & Carpenter, 1976). Total fixation time (the cumulative duration of fixations within a region) can be used to gauge the amount of total cognitive processing engaged with the fixated information (Rayner, 1998). There are a number of factors that influence the duration of fixations. Among these factors, the visual quality of the displayed stimuli plays an important role, as suggested by Mannan et al.’s (1995) experiment. They presented filtered and unfiltered photos to observers and reported a significant increase in the fixation duration for the filtered photos. Another factor is related to the number of objects present in the scene. Irwin and Zelinsky (2002) observed that the duration of fixations increases with the number of objects in the scene.

### Correlation-based measures

The linear correlation coefficient *r* between two maps *H* and *P* is defined as

\( r\left( {H,P} \right) = \frac{{cov\left( {H,P} \right)}}{{{\sigma _H}{\sigma _P}}} \)

where *cov*(*H*, *P*) is the covariance between *H* and *P*, and *σ*_{H} and *σ*_{P} represent the standard deviations of maps *H* and *P*, respectively.

The linear correlation coefficient has a value between −1 and 1. A value of 0 indicates that there is no linear correlation between the two maps. Values close to 0 indicate a poor correlation between the two sets. A value of 1 indicates a perfect correlation. The sign of *r* is helpful in determining whether data share the same structure. A value of −1 also indicates a perfect correlation, but the data vary together in opposite directions.

This indicator is very simple to compute and is invariant to linear transformation. Several studies have used this metric to assess the performance of computational models of visual attention (Jost, Ouerhani, von Wartburg, Mauri, & Haugli, 2005; Le Meur, Le Callet, Barba, & Thoreau, 2006; Rajashekar, van der Linde, Bovik, & Cormack, 2008). Correlations are usually reported with degrees of freedom (the total population minus 2) in parentheses and the significance level. For instance, the two continuous saliency maps illustrated by Fig. 4 are strongly correlated, *r*(393214) = .826, *p* < .001.

Note that the Spearman's rank correlation can also be used to measure the similarity between two sets of data (Toet, 2011).
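The correlation coefficient above can be computed over the pixels of the two maps in a few lines; this is a minimal sketch with a function name of our own:

```python
import numpy as np

def correlation_coefficient(H, P):
    """Linear correlation r = cov(H, P) / (sigma_H * sigma_P), over all pixels."""
    h = H.ravel() - H.mean()
    p = P.ravel() - P.mean()
    return float((h @ p) / np.sqrt((h @ h) * (p @ p)))
```

Identical maps give *r* = 1; a map and its negation give *r* = −1, the two maps varying together in opposite directions.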

### The Kullback–Leibler divergence

Consider two discrete random variables *R* and *P* with probability density functions *r*_{k} and *p*_{k}. The KL-divergence between *R* and *P* is given by the relative entropy of *P* with respect to *R*:

\( KL\left( {R,P} \right) = \sum\limits_k {{r_k}\log \frac{{{r_k}}}{{{p_k}}}} \)

The KL-divergence is defined only if *r*_{k} and *p*_{k} both sum to 1 and if *r*_{k} > 0 for any *k* such that *p*_{k} > 0.

The KL-divergence is not a distance, since it is not symmetric and does not satisfy the triangle inequality. The KL-divergence is nonlinear. It varies in the range of zero to infinity. A zero value indicates that the two probability density functions are strictly equal. The fact that the KL-divergence does not have a well-defined upper bound is a strong drawback.

To compare two saliency maps (*H* and *P*), we first transform these maps into two bidimensional probability density functions by dividing the value at each location of the map by the sum of all pixel values. The probability that an observer focuses on position **x** of a map *SM* is thus given by

\( p\left( {\bf{x}} \right) = \frac{{SM\left( {\bf{x}} \right)}}{{\sum\nolimits_{{\bf{x'}}} {SM\left( {{\bf{x'}}} \right)} }} \)

If we consider the example of Fig. 4, we compute the KL-divergence by first considering the saliency map (b) as the reference and, second, the saliency map (d) as the reference. We obtain *KL* = 3.33 and *KL* = 7.06, respectively. Since the KL-divergence is not a distance, the results are not the same. They indicate that the overall dissimilarity is highest when the continuous saliency map (d) is taken as the reference.
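The map-based KL-divergence can be sketched as follows. The small `eps` regularization, added to avoid taking the logarithm of zero, is a choice made for this example, not part of the definition:

```python
import numpy as np

def kl_divergence(H, P, eps=1e-12):
    """KL-divergence of map P with respect to reference map H.
    Both maps are first normalized into probability distributions."""
    r = H.ravel() / H.sum()
    p = P.ravel() / P.sum()
    # eps guards against log(0); it slightly biases the result but keeps it finite
    return float(np.sum(r * np.log((r + eps) / (p + eps))))
```

As noted above, the measure is not symmetric: swapping the reference map generally changes the score.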

### Receiver operating characteristic analysis

The ROC analysis (Green & Swets, 1966) is probably the most popular and most widely used method in the community for assessing the degree of similarity of two saliency maps. ROC analysis classically involves two sets of data: The first is from the ground truth (also called the *actual values*), and the second is the prediction (also called the *outcomes*).

Here, we perform ROC analysis between two maps. It is also common to encounter a second method in the literature that involves fixation points and a saliency map. This method is described in the Hybrid Methods section.

- Thresholding the ground truth: The continuous saliency map is thresholded with a constant threshold in order to keep a given percentage of image pixels. For instance, we can keep the top 2 %, 5 %, 10 %, or 20 % salient pixels of the map, as illustrated by Fig. 5. This threshold is called \( T_G^x \) (*G* for the ground truth and *x* indicating the percentage of the image considered as being fixated).
- Thresholding the prediction: The threshold is systematically moved between the minimum and the maximum values of the map. A high threshold value keeps only the most salient areas of the map, whereas a smaller one corresponds to an overdetection. This threshold is called \( T_P^i \) (*P* for the prediction and *i* indicating the *i*th threshold).

For each pair of thresholds, four numbers featuring the quality of the classification are computed. They represent the true positives (TPs), the false positives (FPs), the false negatives (FNs), and the true negatives (TNs). The true positive number is the number of fixated pixels in the ground truth that are also labeled as fixated in the prediction.

An ROC curve, which plots the TP rate (TPR) as a function of the FP rate (FPR), is usually used to display the classification results over the set of thresholds. The TPR, also called *sensitivity* or *recall*, is defined as TPR = TP/(TP + FN), whereas the FPR is given by FPR = FP/(FP + TN). The ROC area, or the *area under curve* (AUC), provides a measure indicating the overall performance of the classification. A value of 1 indicates a perfect classification. The chance level is .5. The ROC curve of Fig. 6 is given in Fig. 7. There are different methods to compute the AUC. The simplest ones are based on the left and right Riemann sums. The left Riemann sum is illustrated by Fig. 7. A more accurate approximation can be obtained with the trapezoid rule: rather than summing the areas of rectangles, the AUC is given by summing the areas of trapezoids. In our example, the AUC value is 0.83.
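The two-map ROC analysis with a trapezoid AUC can be sketched as follows. The 20 % ground-truth threshold and the 100-step threshold sweep are example choices, and the function name is ours:

```python
import numpy as np

def auc_between_maps(ground_truth, prediction, top_percent=20, n_thresholds=100):
    """ROC area between two maps: the ground truth is binarized by keeping its
    top `top_percent`% pixels; the prediction threshold is swept from max to min."""
    gt = ground_truth.ravel()
    pred = prediction.ravel()
    actual = gt >= np.percentile(gt, 100 - top_percent)   # 'fixated' class
    tpr, fpr = [], []
    for t in np.linspace(pred.max(), pred.min(), n_thresholds):
        outcome = pred >= t
        tp = np.sum(outcome & actual)
        fp = np.sum(outcome & ~actual)
        fn = np.sum(~outcome & actual)
        tn = np.sum(~outcome & ~actual)
        tpr.append(tp / (tp + fn))      # sensitivity
        fpr.append(fp / (fp + tn))      # false positive rate
    auc = 0.0
    for i in range(1, n_thresholds):    # trapezoid rule over the ROC points
        auc += 0.5 * (tpr[i] + tpr[i - 1]) * (fpr[i] - fpr[i - 1])
    return float(auc)
```

A prediction identical to the ground truth yields an AUC close to 1, whereas an unrelated prediction stays near the .5 chance level.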

## Hybrid methods

So far, we have focused on similarity metrics involving two scanpaths or two saliency maps. In this section, we describe methods based on a saliency map and a set of fixation points. We call this kind of method *hybrid* since it mixes two types of information. Four approaches are presented: ROC analysis, normalized scanpath saliency, percentile, and the Kullback–Leibler divergence.

### Receiver operating characteristic analysis

In a similar fashion to the method in the Receiver Operating Characteristic Analysis section, a set of control (nonfixated) points is first drawn; the control points and the fixation points are then used to plot an ROC curve. For each threshold, the observer’s fixations and the control ones are laid down on the thresholded map. The TPR (fixations that fall on fixated areas) and the FPR are determined. From this ROC curve, the AUC is computed. The confidence interval is computed by using a nonparametric bootstrap technique (Efron & Tibshirani, 1993). Many samples having the same size as the original set of human fixations are generated by sampling with replacement. These samples are called *bootstrap* samples. In general, 1,000 bootstrap samples are created. Each bootstrap sample is used as a set of control fixations. The ROC area between the continuous saliency map and the points of human fixation plus the control points is computed. The bootstrap distribution of each ROC analysis is computed, and a bootstrap percentile confidence interval is determined by percentiles of the bootstrap distribution, leaving off \( \frac{\alpha }{2} \times 100\% \) of each tail of the distribution, where *α* is the significance level.

Sometimes, the quality of the classification relies on the equal error rate (EER). The EER is the location on an ROC curve where the FPR and the TPR are equal (i.e., the error at which false alarms equal the miss rate *FPR* = 1 − *TPR*). As with the AUC, the EER is used to compare the accuracy of the prediction. In general, the system with the lowest EER is the most accurate.

### Normalized scanpath saliency

The normalized scanpath saliency (NSS; Peters, Iyer, Itti, & Koch, 2005) is a metric that involves a saliency map and a set of fixations. The idea is to measure the saliency values at fixation locations along a subject’s scanpath.

The saliency map *SM* is first standardized:

\( {Z_{SM}}\left( {\bf{x}} \right) = \frac{{SM\left( {\bf{x}} \right) - {\mu _{SM}}}}{{{\sigma _{SM}}}} \)

where *Z*_{SM} is the standardized saliency map, with *μ*_{SM} and *σ*_{SM} the mean and standard deviation of the map. *Z*_{SM}(**x**_{i}) represents the distance between the saliency value at **x**_{i} and the average saliency, expressed in units of the standard deviation. This value is negative when the saliency value at the fixation location is below the mean, and positive when it is above. To take account of the fact that we do not focus accurately on a particular point, the NSS value for a given fixation location **x**_{f}(*k*) is computed as a kernel-weighted average of *Z*_{SM} over a small neighborhood *π* centered on that location, where *K* is a kernel with a bandwidth *h*. The final NSS score is the average of *NSS*(**x**_{f}(*k*)) over all *M* fixations of an observer:

\( NSS = \frac{1}{M}\sum\limits_{k = 1}^M {NSS\left( {{{\bf{x}}_f}(k)} \right)} \)
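A minimal NSS computation can be sketched as follows; for brevity this sketch samples the standardized map at the exact fixation pixels and omits the neighborhood smoothing described above:

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized scanpath saliency: mean standardized saliency value
    sampled at the fixation locations (no neighborhood smoothing)."""
    z = (saliency - saliency.mean()) / saliency.std()
    return float(np.mean([z[row, col] for row, col in fixations]))
```

Fixations landing on above-average saliency yield a positive score; fixations on below-average saliency yield a negative one.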

### Percentile

Another hybrid metric is the *percentile* (Peters & Itti, 2008). A percentile value *P*(**x**_{f}(*k*)) is computed for each fixation location **x**_{f}(*k*). This score is the ratio between the number of locations in the saliency map with values smaller than the saliency value at point **x**_{f}(*k*) and the total number of locations. The percentile value is defined as follows:

\( P\left( {{{\bf{x}}_f}(k)} \right) = 100 \times \frac{{\left| {\left\{ {{\bf{x}} \in X:SM\left( {\bf{x}} \right) < SM\left( {{{\bf{x}}_f}(k)} \right)} \right\}} \right|}}{{\left| X \right|}} \)

where *X* is the set of locations of the saliency map *SM* and **x**_{f}(*k*) is the location of the *k*th fixation. \( \left| \cdot \right| \) indicates set size.

The final score is the average of *P* (*x*_{f} (*k*)) for all fixations of an observer. By definition, the percentile metric has a well-defined upper bound (100 %) indicating the highest similarity between fixation points and saliency map. The chance level is 50 %.
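The per-fixation percentile score translates directly into code; this is a minimal sketch with a function name of our own:

```python
import numpy as np

def percentile_score(saliency, fixation):
    """P(x_f(k)): percentage of map locations whose saliency is strictly
    smaller than the saliency value at the fixation location."""
    row, col = fixation
    return 100.0 * float(np.mean(saliency < saliency[row, col]))
```

A fixation on the map's maximum scores near 100 %, while one on the minimum scores 0 %; averaging over an observer's fixations gives the final score.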

### The Kullback–Leibler divergence

The KL-divergence, defined in The Kullback–Leibler Divergence section, is used here to compute the dissimilarity between the histogram of saliency sampled at eye fixations and that sampled at random locations. Itti and Baldi (2009) were the first to use this method. The set of control points (or nonfixated points) is drawn from a uniform spatial distribution. However, human fixations are not randomly distributed, since they are governed by various factors, such as the central bias explained earlier. To be more agnostic to this kind of mechanism, Zhang, Tong, Marks, Shan, and Cottrell (2008) measured the KL-divergence between the saliency distribution of fixated points of a test image and the saliency distribution at the same pixel locations but of a randomly chosen image from the test set. To evaluate the variability of the score, the evaluation was repeated 100 times with 100 different sets of control points.

Contrary to the previous KL-divergence method in The Kullback–Leibler Divergence section, a good prediction here has a high KL-divergence score. Indeed, since the reference distribution represents chance, the saliency computed at human-fixated locations should be higher than that computed at random locations.

## Measuring a realistic upper bound

Most of the methods mentioned above have a well-defined theoretical upper bound. When the performance of a computational model is assessed, it is then reasonable to seek to approach this upper bound. For instance, according to the ROC analysis, an AUC equal to or close to one would indicate a very good performance. In our context, this goal is almost impossible to reach. Indeed, there is a natural dispersion of fixations among different subjects looking at the same image. This dispersion (also called *inter observer congruency* [IOC]) is contingent upon a number of factors. First, Tatler et al. (2005) showed that the consistency between visual fixations of different subjects is high just after the stimulus onset but progressively decreases over time. Among the reasons that might explain this variability, the most probable one concerns the time course of bottom-up and top-down mechanisms. Just after the stimulus onset, our attention is mostly steered by low-level visual features, whereas top-down mechanisms become more influential after several seconds of viewing. The second factor concerns the visual content itself. In the case where there is nothing that stands out from the background, the IOC would be small. On the contrary, a visual scene composed of salient areas would presumably attract our visual attention, leading to high congruency. The presence of particular features, such as human faces, human beings, or animals, tends to increase the consistency between observers. A number of studies have shown that we are able to identify and recognize human faces and animals very quickly in a natural scene (Delorme, Richard, & Fabre-Thorpe, 2010; Rousselet, Macé, & Fabre-Thorpe, 2003). Whereas human faces and animals have the ability to attract our attention, decreasing the dispersion between observers, this ability is modulated by novelty or even emotion. Althoff and Cohen’s (1999) study is a good example of this point. 
They investigated the effect of memory or prior experience on eye movements. They found that visual scanpaths made when famous faces were viewed were more variable than those made when nonfamous faces were viewed. A third factor that could account for the variance between people might be related to cultural differences. Nisbett (2003) compared the visual scan pattern of American and Asian populations. He found that Asian people tend to look more at the background and spend less time on focal objects than do American people. However, a recent study casts doubt on the influence of cultural differences on oculomotor behavior (Rayner, Castelhano, & Yang, 2009).

The IOC can be measured by using a one-against-all approach, also called *leave one out* (Torralba et al., 2006). It consists in computing the degree of similarity between the fixations of one observer and those of the other subjects. The final value is obtained by averaging the degree of similarity over all subjects.

A saliency map is first computed from the fixations of all of the subjects except one (the *i*th observer). This map is thresholded so that the most fixated areas are set to 1 and the other areas are set to 0. To assess the degree of similarity between the *i*th observer and the other subjects, the hit rate (as described in the Receiver Operating Characteristic Analysis section), that is, the percentage of fixations that fall into the fixated regions, is computed. Iterating over all subjects and averaging the scores gives the IOC. Figure 12 illustrates this method.

Assessing the IOC is fundamental to evaluating the performance of the saliency algorithm, although this is overlooked most of the time. An absolute score between a set of fixations and a predicted map is interesting but is not sufficient to draw any conclusions. A low score of prediction does not systematically indicate that the saliency model performs poorly. Such a statement would be true if the dispersion between the observers is low, but false otherwise. Therefore, it is much more relevant to compare the performance of computational models to the IOC (Judd, Ehinger, Durand, & Torralba, 2009; Torralba et al., 2006) or to express the performance directly by normalizing the similarity score by the IOC (Zhao & Koch, 2011). The normalized score would be close to 1 for a good prediction.
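The leave-one-out procedure can be sketched as follows. This is a deliberately simplified illustration: it omits the Gaussian smoothing of the other observers' fixation map and treats any pixel fixated at least once by another observer as "fixated", rather than thresholding the most fixated areas; the function names are ours:

```python
import numpy as np

def hit_rate(fixations, fixated_map):
    """Percentage of fixations landing inside the binary fixated regions."""
    hits = sum(bool(fixated_map[r, c]) for r, c in fixations)
    return 100.0 * hits / len(fixations)

def interobserver_congruency(observers, height, width):
    """Leave-one-out IOC sketch: for each observer, build the map of pixels
    fixated by the *other* observers and compute that observer's hit rate;
    the IOC is the average over all observers."""
    scores = []
    for i, fixs in enumerate(observers):
        others = np.zeros((height, width), dtype=bool)
        for j, other in enumerate(observers):
            if j != i:
                for r, c in other:
                    others[r, c] = True
        scores.append(hit_rate(fixs, others))
    return float(np.mean(scores))
```

Observers who all fixate the same locations yield an IOC of 100 %, and the score drops as soon as one observer looks elsewhere, matching the intuition that the IOC reflects between-observer dispersion.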

## Example: performance of state-of-the-art computational models

In this section, we examine the performance of the most prominent saliency models that have been proposed in the literature. The quality of the predicted saliency maps is given here by two metrics: the hit rate (see the Receiver Operating Characteristic Analysis section) and the NSS (see the Normalized Scanpath Saliency section). These metrics are hybrid metrics, since they involve a set of visual fixations and a map. We believe that these metrics are the best way to assess the relevance of a predicted saliency map. As compared with saliency map-based methods, hybrid methods are nonparametric: human saliency maps are obtained by convolving a fixation map with a 2-D Gaussian function, which is parameterized by its mean and its standard deviation. Note that instead of using the hit rate, we could have used an ROC analysis.

To perform the analysis, we use two eye-tracking data sets that are available on the Internet. They are described in the Eye-Tracking Data Sets section. We present and comment on each model’s performance in the Benchmark section.

### Eye-tracking data sets

Eye-tracking data sets freely available on the Web

Data set name | Number of subjects | Task | Number of pictures | Stimulus presentation time
---|---|---|---|---
Le Meur | 30 | Free viewing | 40 | 14 s
Kootstra | 31 | Free viewing | 100 | –
Bruce | 20 | Free viewing | 120 | 4 s
MIT | 15 | Free viewing | 1003 | 3 s
DOVES | 29 | Free viewing + task | 101 (grayscale) | 5 s
FIFA | 27 | Free viewing + task | 180 | 2 s
Ehinger | 14 | Task | 912 | –

These data sets can be used to evaluate the performance of computational models. There is only an implicit consensus on how to set up an eye-tracking test. There is no document that accurately describes what must be done and what must be avoided in the experimental setting. For instance, should observers perform a task when viewing stimuli or not? Do the methods used to identify fixations and saccades from the raw eye data give similar results? We have to be aware that these data sets have been collected in different environments and with different apparatus and settings. To evaluate the performance of saliency models, it is highly recommended that more than one data set be used in order to strengthen the findings.

### Benchmark

We compare the performance of four state-of-the-art models: Itti’s model (Itti, Koch, & Niebur, 1998), Le Meur’s model (Le Meur et al., 2006), Bruce’s model (Bruce & Tsotsos, 2009), and Judd’s model (Judd et al., 2009). (For a brief review of saliency models, see Le Meur & Le Callet, 2009.)

Average NSS scores (±) for four state-of-the-art saliency models on the Le Meur and Bruce data sets

Model / Data set | Le Meur | Bruce
---|---|---
Itti | 0.60 ± 0.10 | 0.99 ± 0.05
Le Meur | 0.77 ± 0.13 | 0.87 ± 0.03
Bruce | 0.60 ± 0.09 | 0.72 ± 0.04
Judd | 0.82 ± 0.11 | 0.87 ± 0.05

Under the ROC metric, Judd’s model has the highest performance, as is illustrated by Fig. 14. This result was expected, since this model uses specific detectors (face detection, for instance) that improve the ability to detect salient areas. In addition, this model uses a function that favors the center of the picture in order to take the central bias into account. However, the results are more mixed under the NSS metric shown in Table 2. On average, across both databases, Judd’s model is still the highest performing, but on Bruce’s data set, Itti’s model performs best, with a value of 0.99, whereas Judd’s model performs at 0.87. The model ranking is therefore dependent on the metric used. It is therefore fundamental to use more than one metric when assessing the performance of computational models of visual attention.

## Limitation: Do visual fixations have the same meaning?

Current computational models of visual attention focus on identifying fixated locations of salient areas. From an input picture, a model computes a topographic map indicating the most visually interesting parts. This prediction is then compared with ground truth fixations. The evaluation methodology seems to be appropriate. Unfortunately, an important point is overlooked. By doing this kind of comparison, most researchers have implicitly assumed that fixations, whatever their durations, saccade amplitudes, and start-times, are all similar. In this section, we emphasize the fact that different populations of fixations may exist.

Fixations differ in both their durations and their saccade amplitudes during real-world scene viewing. Antes (1974) was among the first researchers to report these variations: he observed that fixation duration increases while saccade size decreases over the course of scene inspection, an early observation confirmed by a number of studies (Over, Hooge, Vlaskamp, & Erkelens, 2007; Tatler & Vincent, 2008). The variation in fixation duration is contingent upon factors such as the quality of the stimulus and the number of objects in the scene (as explained in the From a Discrete Fixation Map to a Continuous Saliency Map section). However, this variance might also be explained by functional differences among fixations.

To investigate this point, Velichkovsky and colleagues (Unema, Pannasch, Joos, & Velichkovsky, 2005; Velichkovsky, 2002) jointly analyzed fixation duration and subsequent saccade amplitude. They found a nonlinear distribution indicating that (1) short fixations are associated with long saccades and, conversely, (2) longer fixations are associated with shorter saccades (Fig. 6 in Unema et al., 2005). This dichotomy permits us to disentangle focal and ambient fixations, following the terminology introduced by Trevarthen (1968). Ambient processing is characterized by short fixations associated with long saccades; this mode might be used to extract contextual information in order to identify the whole scene. Focal processing is characterized by long fixations with short saccades; this mode may be related to recognition and conscious understanding processes.

Pannasch, Schulz, and Velichkovsky (2011) proposed classifying fixations on the basis of the amplitude of the preceding saccade: if the preceding saccade amplitude is greater than a threshold, the fixation is assumed to belong to the ambient visual-processing mode; otherwise, it belongs to the focal mode. The authors chose a threshold of 5° of visual angle, a choice justified by the size of the parafoveal region within which visual acuity remains good. Recently, Follet, Le Meur, and Baccino (2011) proposed an automatic solution for classifying visual fixations into focal and ambient groups. From this classification, they computed two saliency maps, one composed of focal fixations and the other of ambient fixations. By comparing these maps with model-predicted saliency maps, they found that focal fixations are more bottom-up and more centered than ambient ones.
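The threshold rule of Pannasch et al. (2011) can be sketched in a few lines. We assume here that fixation coordinates are already expressed in degrees of visual angle; the function name and data layout are ours, not those of the cited authors.

```python
import math

def classify_fixations(fixations, threshold_deg=5.0):
    """Label each fixation 'ambient' or 'focal' from the amplitude of the
    saccade that precedes it, following the rule of Pannasch et al. (2011).

    `fixations` is a list of (x, y) gaze positions in degrees of visual
    angle. The first fixation has no preceding saccade and is labeled None.
    """
    labels = [None]  # no preceding saccade for the first fixation
    for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]):
        amplitude = math.hypot(x1 - x0, y1 - y0)
        # Long incoming saccade (> 5 deg by default) -> ambient; else focal
        labels.append("ambient" if amplitude > threshold_deg else "focal")
    return labels
```

Once labeled, the two populations can be accumulated into separate fixation maps and convolved into focal and ambient saliency maps, as done by Follet et al. (2011).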

## Conclusion

This article provides an extensive overview of the different ways of analyzing *diachronic variables* from eye-tracking data, which are generally underused by researchers. These diachronic indicators, scanpaths and saliency maps, represent the sequence of fixations over time. Eye-tracking software usually provides them for illustrative purposes but offers no real means of comparing them. This article aims to fill that gap by describing different methods for comparing diachronic variables and by computing relevant indices that can be used in experimental and applied settings. Diachronic variables give a more complete description of the time course of visual attention than do synchronic variables and may inform us about the underlying cognitive processes. The ultimate step would be to accurately relate the visual behavior recorded with eye trackers to the concurrent thoughts of the user.

Although we have examined many analysis methods, some variables are still ignored (fixation duration, pupil diameter, etc.), and it remains very challenging to incorporate them into diachronic data. Significant progress has recently been made by combining eye movements with other techniques such as fMRI or EEG. For example, the development of eye-fixation-related potentials (EFRPs), which associate eye movements with brain-wave components (Baccino, 2011; Baccino & Manunta, 2005), is a first step in that direction. Other avenues should also be explored, such as electrodermal response (EDR) or electrocardiography (ECG). We are confident that researchers in this area will find new ways to go further toward a more complete understanding of human behavior.

## Requirements

Free software computing some of these diachronic indicators is available at http://www.irisa.fr/temics/staff/lemeur/.

## Footnotes

- 1. Note that a high dispersion corresponds to a lack of congruency.

### References

- Althoff, R. R., & Cohen, N. J. (1999). Eye-movement-based memory effect: A reprocessing effect in face perception. *Journal of Experimental Psychology: Learning, Memory, and Cognition, 25,* 997–1010.
- Antes, J. R. (1974). The time course of picture viewing. *Journal of Experimental Psychology, 103,* 62–70.
- Baccino, T. (2004). *La Lecture électronique [Digital Reading]*. Grenoble: Presses Universitaires de Grenoble, Coll. Sciences et Technologies de la Connaissance.
- Baccino, T. (2011). Eye movements and concurrent ERPs: EFRP investigations in reading. In S. Liversedge, I. D. Gilchrist, & S. Everling (Eds.), *Handbook on eye movements* (pp. 857–870). Oxford: Oxford University Press.
- Baccino, T., & Manunta, Y. (2005). Eye-fixation-related potentials: Insight into parafoveal processing. *Journal of Psychophysiology, 19,* 204–215.
- Bindemann, M. (2010). Scene and screen center bias early eye movements in scene viewing. *Vision Research, 50,* 2577–2587.
- Bruce, N. D. B., & Tsotsos, J. K. (2006). Saliency based on information maximisation. *Advances in Neural Information Processing Systems, 18,* 155–162.
- Bruce, N. D. B., & Tsotsos, J. K. (2009). Saliency, attention, and visual search: An information theoretic approach. *Journal of Vision, 9*(3), 1–24.
- Chanceaux, M., Guérin-Dugué, A., Lemaire, B., & Baccino, T. (2009). Towards a model of information seeking by integrating visual, semantic and memory maps. In B. Caputo & M. Vincze (Eds.), *ICVW 2008* (pp. 65–78). Heidelberg: Springer.
- Delorme, A., Richard, G., & Fabre-Thorpe, M. (2010). Key visual features for rapid categorization of animals in natural scenes. *Frontiers in Psychology, 1,* 21.
- Efron, B., & Tibshirani, R. (1993). *An introduction to the bootstrap*. New York: Chapman and Hall.
- Einhäuser, W., & König, P. (2003). Does luminance-contrast contribute to a saliency map for overt visual attention? *European Journal of Neuroscience, 17,* 1089–1097.
- Follet, B., Le Meur, O., & Baccino, T. (2011). New insights into ambient and focal visual fixations using an automatic classification algorithm. *i-Perception, 2,* 592–610.
- Green, D., & Swets, J. (1966). *Signal detection theory and psychophysics*. New York: Wiley.
- Henderson, J. M. (2007). Regarding scenes. *Current Directions in Psychological Science, 16,* 219–222.
- Irwin, D. E., & Zelinsky, G. J. (2002). Eye movements and scene perception: Memory for things observed. *Perception & Psychophysics, 64,* 882–895.
- Itti, L. (2005). Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes. *Visual Cognition, 12,* 1093–1123.
- Itti, L., & Baldi, P. (2009). Bayesian surprise attracts human attention. *Vision Research, 49,* 1295–1306.
- Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. *IEEE Transactions on Pattern Analysis and Machine Intelligence, 20,* 1254–1259.
- Jarodzka, H., Holmqvist, K., & Nyström, M. (2010). A vector-based, multidimensional scanpath similarity measure. In C. Morimoto & H. Istance (Eds.), *Proceedings of the 2010 Symposium on Eye Tracking Research and Applications* (pp. 211–218). New York: ACM.
- Josephson, S., & Holmes, M. E. (2002). Attention to repeated images on the World-Wide Web: Another look at scanpath theory. *Behavior Research Methods, Instruments, & Computers, 34,* 539–548.
- Jost, T., Ouerhani, N., von Wartburg, R., Müri, R., & Hügli, H. (2005). Assessing the contribution of color in visual attention. *Computer Vision and Image Understanding, 100,* 107–123.
- Judd, T., Ehinger, K., Durand, F., & Torralba, A. (2009). *Learning to predict where humans look.* Paper presented at the IEEE International Conference on Computer Vision (ICCV), Kyoto, Japan.
- Just, M. A., & Carpenter, P. A. (1976). Eye fixations and cognitive processes. *Cognitive Psychology, 8,* 441–480.
- Le Meur, O., Baccino, T., & Roumy, A. (2011). Prediction of the inter-observer visual congruency (IOVC) and application to image ranking. In *Proceedings of ACM Multimedia* (pp. 373–382). Scottsdale, AZ.
- Le Meur, O., & Le Callet, P. (2009). What we see is most likely to be what matters: Visual attention and applications. In *Proceedings of the International Conference on Image Processing* (pp. 3085–3088). Cairo, Egypt.
- Le Meur, O., Le Callet, P., Barba, D., & Thoreau, D. (2006). A coherent computational approach to model bottom-up visual attention. *IEEE Transactions on Pattern Analysis and Machine Intelligence, 28,* 802–817.
- Lemaire, B., Guérin-Dugué, A., Baccino, T., Chanceaux, M., & Pasqualotti, L. (2011). A cognitive computational model of eye movements investigating visual strategies on textual material. In *Proceedings of the Annual Conference of the Cognitive Science Society* (pp. 1146–1151). Boston, MA.
- Levenshtein, V. I. (1966). Binary codes capable of correcting deletions, insertions and reversals. *Soviet Physics – Doklady, 6,* 707–710.
- Mannan, S. K., Ruddock, K. H., & Wooding, D. S. (1995). Automatic control of saccadic eye movements made in visual inspection of briefly presented 2-D images. *Spatial Vision, 9,* 363–386.
- Mannan, S. K., Ruddock, K. H., & Wooding, D. S. (1996). The relationship between the locations of spatial features and those of fixations made during visual examination of briefly presented images. *Spatial Vision, 10,* 165–188.
- Mannan, S. K., Ruddock, K. H., & Wooding, D. S. (1997). Fixation sequences made during visual examination of briefly presented 2D images. *Spatial Vision, 11,* 157–178.
- Nisbett, R. (2003). *The geography of thought: How Asians and Westerners think differently … and why*. New York: Free Press.
- Over, E. A. B., Hooge, I. T. C., Vlaskamp, B. N. S., & Erkelens, C. J. (2007). Coarse-to-fine eye movement strategy in visual search. *Vision Research, 47,* 2272–2280.
- Pannasch, S., Schulz, J., & Velichkovsky, B. M. (2011). On the control of visual fixation durations in free viewing of complex images. *Attention, Perception, & Psychophysics, 73,* 1120–1132.
- Peters, R. J., & Itti, L. (2008). Applying computational tools to predict gaze direction in interactive visual environments. *ACM Transactions on Applied Perception, 5,* 1–21.
- Peters, R. J., Iyer, A., Itti, L., & Koch, C. (2005). Components of bottom-up gaze allocation in natural images. *Vision Research, 45,* 2397–2416.
- Privitera, C. M., & Stark, L. W. (2000). Algorithms for defining visual regions-of-interest: Comparison with eye fixations. *IEEE Transactions on Pattern Analysis and Machine Intelligence, 22,* 970–982.
- Rajashekar, U., van der Linde, I., Bovik, A. C., & Cormack, L. K. (2008). GAFFE: A gaze-attentive fixation finding engine. *IEEE Transactions on Image Processing, 17,* 564–573.
- Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. *Psychological Bulletin, 124,* 372–422.
- Rayner, K., Castelhano, M. S., & Yang, J. (2009). Eye movements when looking at unusual/weird scenes: Are there cultural differences? *Journal of Experimental Psychology: Learning, Memory, and Cognition, 35,* 254–259.
- Renninger, L. W., Verghese, P., & Coughlan, J. (2007). Where to look next? Eye movements reduce local uncertainty. *Journal of Vision, 7*(3, Art. 6), 1–17.
- Rousselet, G. A., Macé, J. M., & Fabre-Thorpe, M. (2003). Is it an animal? Is it a human face? Fast processing in upright and inverted natural scenes. *Journal of Vision, 3*(6), 440–456.
- Simola, J., Salojärvi, J., & Kojo, I. (2008). Using hidden Markov model to uncover processing states from eye movements in information search tasks. *Cognitive Systems Research, 9,* 237–251.
- Tatler, B. W. (2007). The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. *Journal of Vision, 7*(14, Art. 4), 1–17. doi:10.1167/7.14.4
- Tatler, B. W., Baddeley, R. J., & Gilchrist, I. D. (2005). Visual correlates of fixation selection: Effects of scale and time. *Vision Research, 45,* 643–659.
- Tatler, B. W., & Vincent, B. T. (2008). Systematic tendencies in scene viewing. *Journal of Eye Movement Research, 2*(2), 1–18.
- Toet, A. (2011). Computational versus psychophysical bottom-up image saliency: A comparative evaluation study. *IEEE Transactions on Pattern Analysis and Machine Intelligence, 33,* 2131–2146.
- Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. *Psychological Review, 113,* 766–786.
- Trevarthen, C. B. (1968). Two mechanisms of vision in primates. *Psychologische Forschung, 31,* 299–337.
- Unema, P. J. A., Pannasch, S., Joos, M., & Velichkovsky, B. M. (2005). Time course of information processing during scene perception: The relationship between saccade amplitude and fixation duration. *Visual Cognition, 12,* 473–494.
- Velichkovsky, B. M. (2002). Heterarchy of cognition: The depths and the highs of a framework for memory research. *Memory, 10,* 405–419.
- Velichkovsky, B. M., Pomplun, M., Rieser, J., & Ritter, H. J. (1996). Attention and communication: Eye-movement-based research paradigms. In *Visual attention and cognition*. Amsterdam: Elsevier.
- Viviani, P. (1990). Eye movements in visual search: Cognitive, perceptual and motor control aspects. *Reviews of Oculomotor Research, 4,* 353–393.
- Wagner, R. A., & Fischer, M. J. (1974). The string-to-string correction problem. *Journal of the ACM, 21,* 168–173.
- Yarbus, A. (1967). *Eye movements and vision*. New York: Plenum.
- Zhang, L., Tong, M. H., Marks, T. K., Shan, H., & Cottrell, G. W. (2008). SUN: A Bayesian framework for saliency using natural statistics. *Journal of Vision, 8*(7, Art. 32), 1–20.
- Zhao, Q., & Koch, C. (2011). Learning a saliency map using fixated locations in natural scenes. *Journal of Vision, 11*(3, Art. 9), 1–15.