Color Science and Engineering for the Display of Remote Sensing Images

Part of the Augmented Vision and Reality book series (Augment Vis Real, volume 3)


In this chapter we discuss the color science issues that arise in the display and interpretation of artificially-colored remote-sensing images, and present some solutions to these challenges. The focus is on visualizing images that naturally have more than three components of information, so that displaying them as a color image necessarily entails a reduction of information. A good understanding of display hardware and human color vision is useful in constructing and interpreting hyperspectral visualizations. After detailing key challenges, we review and propose solutions to make visualizations more effective.


Keywords: Color display · Basis functions · White balance · Adaptation

1 Introduction

Visualizing hyperspectral images is challenging because there is simply more information in a hyperspectral image than we can visually process at once. The best solution depends on the application. For example, if identifying and differentiating certain plants is of interest, then classifying pixels into those plants and using distinct false colors to represent them may be the most useful approach. Such labeling may be layered on top of a full visualization of the image data. Full visualizations can be useful for orienting the viewer, analyzing image content, and placing results from classification or un-mixing algorithms in context.

In this chapter we discuss from a color science perspective some of the problems and issues that arise in creating and interpreting hyperspectral visualizations, and describe solutions that may be useful for a variety of visualization approaches. We hope this chapter will be useful both for those who must make sense of hyperspectral imagery, and for those who design image processing tools. First, in Sect. 2 we discuss key challenges: information loss, visual interpretation, color saturation, pre-attentive imagery, metrics, and then use principal components analysis as a case study. Then in Sect. 3 we discuss some solutions that can help address these challenges, including white balance, optimized basis functions, and adapting basis functions. In Sect. 4 we conclude and consider some of the open questions in this area.

2 Challenges

Consider a hyperspectral image H with d components, that is, each pixel is a d-dimensional vector. To visualize H, each pixel can be mapped to a three component vector that can be displayed as an RGB value on a monitor. Thus, hyperspectral visualization can be viewed as a d → 3 dimensionality reduction problem, and any standard dimensionality reduction solution can be used.

However, a good visualization will present information in a way that is easy for a human to quickly and correctly interpret. In the following sections we explain some of the key issues that make achieving this ideal difficult. To dig further into the imaging and color science issues presented in this section, we refer the reader to the following references: for those new to color science, Stone’s book [1] provides a friendly introduction; formulas and further details for most of the topics discussed here can be found on Wikipedia; most of these formulas and more details about practical digital processing for imaging can be found in the recent book by Trussell and Vrhel [2]. For those whose research involves color engineering, we recommend the compilation of surveys published as Digital Color Imaging [3]. Additional references for specific issues are provided where they arise in the text.

2.1 Information Loss

All (static) hyperspectral visualizations are lossy. First, one maps from d dimensions down to three displayed dimensions. Second, most displays are only 8 bits, and this often entails quantization error. Consider for example principal components analysis, which maps the pixel H[i][j] to a displayable three dimensional RGB pixel as follows:
$$ \begin{aligned} R[i][j] &= p_1^TH[i][j] \\ G[i][j] &= p_2^TH[i][j] \\ B[i][j] &= p_3^TH[i][j], \end{aligned} $$
where p1, p2, and p3 are the first three principal components of the image H. Even if the original image H is 8-bit, the projections above will generally have a much higher bit depth, and must be quantized to be displayed.

A third cause of information loss is clipping pixel values to the display range, which makes it impossible to differentiate the clipped pixels from one another. It is common to scale hyperspectral visualizations in order to increase contrast, but this pushes some pixel values outside the display range (e.g. [0, 1]), and they must be clipped. It is tempting to increase contrast with a nonlinear scaling, such as a sigmoid function, but for finite-depth displays (e.g. 8-bit displays), this just moves the information loss to middle-range pixels, because the nonlinearly-scaled values must still be quantized to display values.
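A small sketch illustrates both losses; the gain and offset of the linear stretch here are hypothetical values chosen for illustration, not taken from the chapter:

```python
import numpy as np

# Hypothetical linear contrast stretch v -> 1.6*v - 0.3 applied to
# normalized pixel values already in the display range [0, 1].
v = np.linspace(0.0, 1.0, 1000)          # original (normalized) pixel values
stretched = 1.6 * v - 0.3                # contrast-stretched values
clipped = np.clip(stretched, 0.0, 1.0)   # clip to the displayable range

# Every pixel pushed outside [0, 1] collapses onto 0 or 1 and can no
# longer be differentiated from the other clipped pixels.
frac_lost = np.mean((stretched < 0.0) | (stretched > 1.0))

# Quantizing to 8 bits discards further precision in the surviving pixels.
display = np.round(clipped * 255.0) / 255.0
```

With this particular stretch, over a third of the value range is lost to clipping before quantization even begins.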

2.2 Metrics

The right metric for a visualization depends on the application. Once a metric has been decided upon, a visualization strategy should be formed that directly optimizes for the given metric. A general metric is how well differences in the spectra correlate to perceived differences in an image. For example, Jacobson and Gupta looked at the correlation between the chrominance (a*b* distance in CIELAB space) of different pixels and the angle between the corresponding original spectra [4]. Cui et al. define a similar correlation-based metric that considers Euclidean distances of pixels in spectral space and CIELAB color space [5], as well as separation of features and aspects of interactive visualization as metrics. Visualization methods that employ PCA have used metrics such as component maximum energy/minimum correlation (MEMC) index and entropy to evaluate properties of different methods [6].
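A correlation metric of this kind can be sketched as follows; the synthetic spectra and CIELAB values stand in for real image data, and the variable names are ours rather than from any of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: N spectra (d bands) and the CIELAB colors at
# which the corresponding pixels are displayed.
N, d = 200, 30
spectra = rng.random((N, d))
lab = rng.random((N, 3)) * [100.0, 255.0, 255.0] - [0.0, 128.0, 128.0]

# Sample random pixel pairs; compare the spectral angle of each pair
# to the a*b* (chrominance) distance of the displayed colors.
i, j = rng.integers(0, N, 500), rng.integers(0, N, 500)
cos = np.sum(spectra[i] * spectra[j], axis=1) / (
    np.linalg.norm(spectra[i], axis=1) * np.linalg.norm(spectra[j], axis=1))
angle = np.arccos(np.clip(cos, -1.0, 1.0))                 # spectral angle
ab_dist = np.linalg.norm(lab[i, 1:] - lab[j, 1:], axis=1)  # chrominance distance

# The metric: how well perceived chrominance differences track spectral angles.
score = np.corrcoef(angle, ab_dist)[0, 1]
```

For random colors the score is near zero; a visualization designed for this metric would drive it toward one.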

It is important to recognize that measuring the quality of a visualization by its energy or related “information measure” such as entropy can reward noise.

Ultimately, the right metric depends on the application, and should be a subjective evaluation of usefulness. However, since one cannot construct visualizations that directly optimize for subjective evaluations, we must use our knowledge of human vision and attempt to correlate subjective evaluations with optimizable objective criteria.

2.3 Visual Interpretation

How do people interpret an image? For a simple false-color visualization with only a few false colors, distinct false colors such as red, green, blue, orange, and purple can be used. Given only a few such false colors, humans will interpret the false colors as representing unrelated categories. However, sociocultural training and the colors of the natural world cause people to instinctively interpret some false colors in standard ways, particularly that blue means water, brown means ground, green means dense vegetation, and yellow means lighter vegetation (e.g. grass). Visualizations that agree with these intuitive mappings will be faster and easier to understand than false-color maps that require the viewer to consult a legend for each color.

If more than a few false colors are displayed, then people do not see a set of unrelated colors, but instead interpret similar colors as being similar. For example, if you show a human a remote sensing image with 50 different false colors including 15 shades of green, the natural inclination is to interpret the green areas as being related. Exactly how many false colors can be used before humans start to interpret the colors as image colors depends on contextual clues (if the spatial cues suggest it is an image, people are more likely to interpret it as an image) as well as on the person. Color naming research indicates that most people perceive roughly 11 distinct color categories, which suggests that up to 11 false colors might be used as unrelated colors; however, we hypothesize that when arranged in an image, it may take only seven colors before people intuitively interpret them as image colors rather than as distinct category labels. Thus, when using a multiplicity of false colors, mapping similar concepts to similar colors will match a human's intuitive assumptions about what the colors mean. This effect is of course dependent on context; for example, in a natural image we do not assume that a red fruit and a red ball are related.

Given a multiplicity of false colors, people will want to judge the similarity of the underlying spectra or generating material by how similar the colors appear to them. It is an ongoing quest in color science to accurately quantify how similar two colors appear. Measuring the similarity between colors using RGB descriptions is dangerous. RGB is a color description that is useful for monitors, which almost all operate by adding amounts of red (R), green (G), and blue (B) light to create an image. So an RGB color can be directly interpreted by a monitor as how much of each light to emit. However, monitors differ, and thus a specific RGB value will look different on two different monitors. For this reason, RGB is called a device-dependent color description, and how much a difference in RGB matters depends on the display hardware.

But the problem with using RGB to measure color differences actually has more to do with the human observer. If a color distance metric is good, then any difference of 10 units between two colors should appear equally different to a viewer. That is simply not the case with RGB values, even if they are standardized (such as sRGB or Adobe RGB).

A solution to measuring color differences that color engineers consider to work tolerably in most practical situations is to describe the colors in the CIELAB colorspace, and then measure color difference as Euclidean distance in the CIELAB space (or a variant thereof; see for example \({\Updelta}E_{94}\)). The CIELAB colorspace is a device-independent color description based on measuring the actual spectra of light representing the color. Nothing in color science is simple, however, and there are a number of caveats about measuring color differences with CIELAB. One caveat is that the appearance of a color depends upon the surrounding colors (see Fig. 1 for an example), and CIELAB does not take surrounding colors into account. (More complicated color appearance models exist that do take surrounding colors into account [7].) A second caveat is that one must convert a monitor's RGB colors to CIELAB colors. Because RGB is a device-dependent colorspace and CIELAB is a device-independent colorspace, the correct way to map a monitor's RGB colors to CIELAB colors requires measuring the spectra of colors displayed by the monitor in order to fit a model of how RGB colors map to CIELAB colors for that monitor. However, most monitors have a built-in standardized setting corresponding to the device-independent sRGB space, and so in practice one can assume the monitor is sRGB and use standard sRGB-to-CIELAB calculations.
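The standard sRGB-to-CIELAB calculation can be sketched as follows (assuming the D65 white point implied by sRGB; function names are ours):

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert display sRGB values in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer curve to get linear RGB.
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear sRGB to CIE XYZ (D65).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    # Normalize by the D65 white point, then apply the CIELAB nonlinearity.
    xyz = xyz / np.array([0.9505, 1.0, 1.089])
    eps = (6.0 / 29.0) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz),
                 xyz / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)
```

Pure white maps to L* = 100 with a* = b* = 0, and pure black to L* = 0, so the full lightness axis spans a difference of 100.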
Fig. 1

Example of simultaneous contrast: the three small gray squares are exactly the same color, but the gray square on the blue background will appear pinkish compared to the center gray square, and the gray square on the yellow background will appear dark compared to the center gray square. If this figure were a visualization, a viewer would likely incorrectly judge the gray pixels to represent different spectra in the three different places

Another concern in quantifying how humans perceive differences is that humans interpret hue differences differently than lightness or saturation differences. In the context of an image, if adjacent pixels have the same hue they are more likely to be considered part of the same object than if they have different hues. Thus it is important to consider how hue will be rendered when designing or interpreting a visualization.

2.4 Color Saturation and Neutrals

We are more sensitive to small variations in neutrals than in bright colors; there really are many shades of gray. It is easier to consistently judge differences between neutral colors than between saturated colors. For example, the perceptual difference between a pinkish gray and a bluish gray may be the same as the perceptual difference between a bright red and a cherry red, but it is generally easier to remember, describe, and re-recognize the difference between the grays. Further, if a visualization has many strong colors they will cause simultaneous contrast, which makes the same color look significantly different depending on the surrounding colors. An example is shown in Fig. 1. Simultaneous contrast makes it difficult to recognize the same color in different parts of an image, and to accurately judge differences between pixels.

Small regions of bright saturated colors are termed pre-attentive imagery [8], because such regions draw the viewer's attention irrespective of the viewer's intended focus. Thus, it is advisable to use bright saturated colors sparingly as highlights, labels, or to layer important information onto a background visualization.

2.5 Color Blindness

Roughly 5% of the population is color-blind, with the majority unable to distinguish red from green at the same luminance. Designing and modifying images to maximally inform color-blind users is an area of active interest [9].

2.6 Case Study: Principal Components Analysis for Visualization

Principal components analysis (PCA) is a standard method for visualizing hyperspectral images [10]. First, the d-dimensional image is treated as an unordered collection of d-dimensional pixels, that is, d-dimensional vectors, and orthogonal directions of maximum variance are found one after another. For visualization, usually the first three principal component directions \(p_1, p_2, p_3 \;{\in}\;\mathcal{R}^d\) are chosen, and projecting the image onto these three principal components captures the most image variance possible with only three (linear) dimensions. This projection step maps the d-dimensional pixel H[i][j]  \(\in \mathcal{R}^d\) to a new three-dimensional pixel \(v[i][j] =[p_1 {\,}p_2 \,p_3]^TH[i][j].\) In order to display each projected pixel's three components as RGB values, each component is shifted up to remove negative values so that the smallest value is 0, and then scaled or clipped so that the largest value is 1. The resulting image often looks completely washed out, so the standard practice is to linearly stretch each of the three components so that 2% of pixels clip at the minimum value and 2% at the maximum value.
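The whole procedure can be sketched in a few lines; this is an illustration of the steps described above, not the exact code behind the figures, and the function name and stretch parameter are ours:

```python
import numpy as np

def pca_visualize(H, stretch_pct=2.0):
    """Map a (rows, cols, d) hyperspectral cube to an RGB image via PCA,
    with the standard 2% linear stretch applied per channel."""
    rows, cols, d = H.shape
    X = H.reshape(-1, d).astype(float)
    X = X - X.mean(axis=0)
    # Principal components: eigenvectors of the pixel covariance matrix,
    # sorted by decreasing eigenvalue.
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    P = evecs[:, ::-1][:, :3]                  # top three components
    V = X @ P                                  # one column per display channel
    # Linear stretch so stretch_pct% of pixels clip at each end of each channel.
    lo = np.percentile(V, stretch_pct, axis=0)
    hi = np.percentile(V, 100.0 - stretch_pct, axis=0)
    V = np.clip((V - lo) / (hi - lo), 0.0, 1.0)
    return V.reshape(rows, cols, 3)
```

The output is a displayable image in [0, 1] per channel; note that the sign of each principal component is arbitrary, which is one reason PCA colors carry no consistent meaning across images.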

The top images in Figs. 2 and 3 are example PCA visualizations. The visualizations are useful in that they highlight differences in the image. However, they are not optimized for human vision. Specifically, the colors do not have a natural or consistent meaning; rather one must learn what a color means for each image. Perceived color differences do not have a clear meaning. Saturated and bright colors are abundant in the visualizations, leaving no colors for labeling important aspects or adding a second layer of information (such as classification results).
Fig. 2

Images are projections of a 224-band AVIRIS image of Moffet field [14]. Top Standard PCA visualization. Bottom Cosine-basis function visualization adapted with principal components as described in Sect. 3

Fig. 3

Images are projections of a 224-band AVIRIS image of Jasper ridge [14]. Top Standard PCA visualization. Bottom Cosine-basis function visualization adapted with principal components as described in Sect. 3

There are many variations on PCA for visualization that can improve its performance, but which still suffer from most of the above concerns. Tyo et al. suggest treating the PCA components as YUV colors, which are more orthogonal than RGB colors, and then transforming from YUV to RGB for display [11]. ICA or other criteria can be used instead of PCA to reduce the dimensionality [12]. PCA on the image wavelet coefficients has been investigated to better accentuate spatial relationships of pixels [13]. In the next section, we propose a new method to use principal components (or other such measures of relative importance of the wavelengths) to adapt basis functions optimized for human vision.

3 Some Solutions

The challenges described in the last section can be addressed in many ways; in this section we describe three approaches to these challenges that can be used separately, or combined together or with other visualization strategies. First we discuss basis functions that optimize for the peculiarities of sRGB display hardware and human vision. Then we describe two approaches to adapt any given linear projection to a function of the spectra such as variance or signal-to-noise ratio (SNR). Our last suggestion is white balance, which can be used as a post-processing step for any visualization. MATLAB code to implement these solutions is available from

3.1 Optimized Basis Functions

Principal components form basis functions that adapt to a particular image. This has a strong advantage in highlighting the interesting image information. However, it has the disadvantages described in the previous section. Recently, Jacobson and Gupta proposed fixed basis functions that do not adapt to image information, but do create visualizations with consistent colors and with properties optimized for human vision [14, 15].

Two of these basis functions are shown in Fig. 4. The top example shows the constant luma disk basis functions [15], and as shown in Fig. 5, this basis is optimal in that each component is displayed with the same brightness (luma) value, the same saturation, and the same perceptual change between components as measured in \({\Updelta}E\) (Euclidean distance in CIELAB space). Unfortunately, in order to have those desirable properties and stay within the sRGB gamut, the constant luma disk basis function can only produce colors that are not very bright or saturated, as can be seen in the colorbar beneath the basis functions in Fig. 4: the colorbar shows for each component (i.e., each wavelength) what color would be displayed if the hyperspectral image only had energy at that component.
Fig. 4

Two examples of basis functions designed to give equal perceptual weight to each hyperspectral component. The basis functions can be rendered for any number of components, here they are shown for d = 30 components

Fig. 5

Color properties across wavelength are shown for two of the basis functions proposed by Jacobson et al. [15]. The basis functions can be rendered for any number of components, here they are shown for d = 30 components

The bottom example shows the cosine basis functions [15]. These basis functions use more of the gamut, but do not have as optimal perceptual qualities, as shown in Fig. 5.

Figure 6 (left, bottom) shows an example hyperspectral image visualized with the cosine basis function.
Fig. 6

Top The cosine basis function AVIRIS hyperspectral visualization of Jasper ridge has a slight green color-cast. Bottom The same visualization white-balanced

3.2 Adapting Basis Functions

Here we describe a new method to adapt any given set of three basis functions to take into account up to three measures of the relative importance of the wavelengths. For example, the signal-to-noise ratio (SNR) is one such measure that describes how noisy each wavelength is, and the top three principal components specify three measures of relative importance of the wavelengths with respect to a specific image. The three basis functions to be adapted might be the first three principal components \(p_1, p_2, p_3\), or the basis functions described in the last section, or the color matching basis functions to accurately reproduce what a human would see over the visible wavelengths [16], or some other set of three basis functions.

Denote the three basis functions to be adapted \(r,g,b \in {\mathcal{R}}^d\), and the three measures of importance \(f_1, f_2, f_3 \in {\mathcal{R}}^d.\) For the case of SNR \(f_1 =f_2 =f_3 =SNR.\) As another example, to adapt to the top three principal component vectors, \(f_1 =p_1, f_2 =p_2, f_3=p_3.\) In this section we discuss two ways to adapt the basis functions: a simple weighting of the basis functions, and a linear adaptation of the basis functions, which is a generalization of the SNR-adaptation proposed in [15].

A simple solution is to use the \(f_1, f_2, f_3\) to simply weight the basis functions. Compute the normalized functions \(\tilde{f}_k = f_k/\left(\max_{\lambda} f_k(\lambda)\right)\) for k = 1, 2, 3. Then form new basis functions
$$ \begin{aligned} r^{\prime}[\lambda] &= \tilde{f}_1[\lambda]r[\lambda]\\ g^{\prime}[\lambda] &= \tilde{f}_2[\lambda]g[\lambda] \\ b^{\prime}[\lambda] &= \tilde{f}_3[\lambda]b[\lambda]. \end{aligned} $$
Then form normalized basis functions:
$$ \begin{aligned} \tilde{r} &= \frac{r^{\prime}}{\sum_{\lambda} r^{\prime}[\lambda]} \\ \tilde{g} &= \frac{g^{\prime}}{\sum_{\lambda} g^{\prime}[\lambda]} \\ \tilde{b} &= \frac{b^{\prime}}{\sum_{\lambda} b^{\prime}[\lambda]}.\\ \end{aligned} $$
Then form the linear sRGB image planes by projecting the hyperspectral image H:
$$ \begin{aligned} \hbox{linear } R &= \tilde{r}^TH \\ \hbox{linear } G &= \tilde{g}^TH \\ \hbox{linear } B &= \tilde{b}^TH. \end{aligned} $$
These linear values are then gamma-corrected to form display sRGB values, usually using the standard monitor gamma of 2.2.
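The weighting approach can be sketched as follows; the function names are ours, and the power-law gamma is a simple approximation to the display correction described above:

```python
import numpy as np

def weight_basis(r, g, b, f1, f2, f3):
    """Weight each basis function by its normalized importance measure,
    then normalize each weighted basis to sum to 1."""
    out = []
    for basis, f in ((r, f1), (g, f2), (b, f3)):
        w = basis * (f / f.max())   # weight by normalized importance f~
        out.append(w / w.sum())     # normalize to unit sum
    return out

def project_and_gamma(H, rt, gt, bt, gamma=2.2):
    """Project a (rows, cols, d) cube onto the basis functions to form
    linear sRGB planes, then gamma-correct for display."""
    lin = np.stack([H @ rt, H @ gt, H @ bt], axis=-1)
    lin = np.clip(lin, 0.0, 1.0)    # keep values displayable
    return lin ** (1.0 / gamma)     # simple monitor-gamma approximation
```

With unit-sum basis functions, each display channel is a weighted average of the pixel's components, so data in [0, 1] stays in [0, 1] before gamma correction.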

Weighting will cause wavelengths with high f to become brighter relative to wavelengths with low f. We recommend a slightly different solution that adapts the basis functions by transferring the visualization weight of wavelengths with low f to wavelengths with high f. This re-apportions the visualization basis function, so that wavelengths with high f use up more of the visualization basis function than wavelengths with low f. For example, if f is SNR over wavelengths, and if the cosine basis function is used, then wavelengths with higher f will be both brighter and have a greater hue difference with respect to neighboring wavelengths.

For each fk (k = 1, 2, 3), construct the adapting matrix Ak row by row: for the row corresponding to wavelength λ, start at the leftmost column of Ak that does not yet sum to 1, and add to it until either that column sums to 1 or the row sum equals fk(λ). If fk(λ) is not yet exhausted when the column reaches 1, move to the next column and continue until the row sum equals fk(λ). (This construction makes every column of Ak sum to 1, which requires the entries of fk to sum to d; rescale fk first if necessary.)

Then the adapted basis functions are:
$$ r^{\prime} = A_1r\quad g^{\prime} = A_2g\quad b^{\prime} = A_3b. $$


As an example, consider a d = 5 component hyperspectral image H. The adapting matrix for the basis function r is formed as follows:
$$ f_1 = \left[\begin{array}{l} .2 \\ 3 \\ 1 \\ .5 \\ .3 \end{array}\right], \quad \hbox{then} \quad A_1 =\left[\begin{array}{lllll} .2 & 0 & 0 & 0 & 0 \\ .8 & 1 & 1 & .2 & 0 \\ 0 & 0 & 0 & .8 & .2 \\ 0 & 0 &0 & 0 & .5 \\ 0 & 0 & 0 & 0 & .3 \\ \end{array}\right]. $$
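This greedy row-filling construction can be sketched in Python (a sketch of the procedure described above, not the authors' MATLAB code; the function name is ours):

```python
import numpy as np

def adapting_matrix(f):
    """Build the d x d adapting matrix A from importance vector f.
    Rows are filled greedily left to right so that row k sums to f[k]
    and every column sums to 1 (assumes the entries of f sum to d)."""
    d = len(f)
    A = np.zeros((d, d))
    col, col_fill = 0, 0.0              # current column and how full it is
    for row in range(d):
        budget = f[row]                 # this row must sum to f[row]
        while budget > 1e-12 and col < d:
            add = min(budget, 1.0 - col_fill)
            A[row, col] += add
            col_fill += add
            budget -= add
            if col_fill >= 1.0 - 1e-12:  # column full: move to the next
                col, col_fill = col + 1, 0.0
    return A
```

For the f_1 in the example above, this reproduces the matrix A_1 shown there, with each column summing to 1.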

As with the weighting approach, these adapted basis functions are then normalized to each sum to 1; the image H is projected onto them to form linear sRGB values, which are gamma-corrected to form display sRGB values.

Examples of PCA-adapted cosine basis function images are shown as the bottom images in Figs. 2 and 3. Most of the features visible in one image can be seen in the other image, but some features are easier to see in one or the other image. For example, the texture in the lakes in the Moffet field image is more clearly rendered in the PCA-adapted image, but some of the roads are better highlighted in the PCA image.

The PCA-adapted images have a number of advantages. Even though the cosine basis functions in the PCA-adapted images have been adapted independently for the Moffet field and Jasper ridge images, the color representation is similar, which aids interpretation, and the colors appear somewhat natural; for example, in both PCA-adapted images the vegetation displays as green. The natural palette of the cosine basis function is preserved, and in the Jasper ridge PCA-adapted image, one single bright red pixel stands out (easier to see at 200% magnification).

3.3 White Balance

When looking at a scene, we naturally adapt our vision so that the predominant illuminant appears white, and we call this adaptation white balancing. Many digital cameras also automatically white balance photos so that if you take a picture under a very yellow light, the photo does not come out looking yellowish. We suggest applying white balance to visualizations for two reasons. First, a visualization will look more like a natural scene if it is white-balanced. Second, white-balancing tends to create more grays and neutral colors, and the resulting image may appear slightly sharper and seem to have higher contrast. To actually increase contrast in an image, one can scale the pixel RGB values; however, this often leads to clipping values at the extremes, which causes information loss.

A number of methods for white-balancing have been proposed [2, 17]. Typically, one must decide which original RGB value should appear as white or neutral in the white-balanced image. One standard approach is to take the maximum component values in the image as white. A second standard approach is to make the so-called "grayworld assumption," which amounts to setting the average pixel value of the image to an average gray. Both methods are effective in practice, although each is better suited to certain types of images: the average value for images rich in color, and the maximum value for images with one dominant color.

Here we illustrate one method based on the grayworld assumption, as shown in Fig. 6. Note that we believe it is more justified to do white-balancing on the linear sRGB values, as we described, but white-balancing display RGB values will also be effective. Here are the steps we take:
  • Step 1: Let \([\bar{r} \,\bar{g}\,\bar{b}]\) be the average linear sRGB value of the image.

  • Step 2: Denote the linear sRGB value of the ith pixel as \([r_i {\quad}g_i {\quad}b_i]\). Calculate the white-balanced linear sRGB value of the ith pixel to be \([\tilde{r}_i \,\tilde{g}_i \,\tilde{b}_i] = [r_i/\bar{r} \,g_i/\bar{g} \, b_i/\bar{b}]\). Note that at the end of this step, the mean value of the image is \([1\,1 \,1]\).

  • Step 3: Compute \(\bar{y} = 0.2126\bar{r} + 0.7152\bar{g} + 0.0722\bar{b}\), which is the relative luminance of the average linear sRGB value of the image \([\bar{r} \,\bar{g} \, \bar{b}]\) [2].

  • Step 4: Calculate the normalized white-balanced linear sRGB value of the ith pixel to be \([\hat{r}_i \,\hat{g}_i \, \hat{b}_i] = [\tilde{r}_i\bar{y} \,\tilde{g}_i\bar{y} \, \tilde{b}_i\bar{y}]\). Note that at the end of this step, the mean value of the image is \([\bar{y} \,\bar{y} \,\bar{y}]\), so the relative luminance of the image is preserved.

  • Step 5: Clip values that are outside the 0 to 1 range.

  • Step 6: Convert the normalized white-balanced linear sRGB value \([\hat{r}_i \,\hat{g}_i \,\hat{b}_i]\) for the ith pixel into display sRGB values using the standard sRGB formula.
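The six steps above can be sketched as a single function operating on linear sRGB values (the function name is ours; Step 6 uses the standard sRGB transfer curve):

```python
import numpy as np

def grayworld_white_balance(lin_rgb):
    """White-balance a (rows, cols, 3) linear sRGB image using the
    grayworld assumption, following Steps 1-6 described above."""
    mean = lin_rgb.reshape(-1, 3).mean(axis=0)   # Step 1: [rbar gbar bbar]
    balanced = lin_rgb / mean                    # Step 2: image mean -> [1 1 1]
    # Step 3: relative luminance of the original average color.
    y = 0.2126 * mean[0] + 0.7152 * mean[1] + 0.0722 * mean[2]
    balanced = balanced * y                      # Step 4: preserve luminance
    balanced = np.clip(balanced, 0.0, 1.0)       # Step 5: clip to [0, 1]
    # Step 6: linear sRGB -> display sRGB transfer curve.
    return np.where(balanced > 0.0031308,
                    1.055 * balanced ** (1.0 / 2.4) - 0.055,
                    12.92 * balanced)
```

Applied to an image with a color cast (e.g. a green-heavy channel mean), this equalizes the channel means before gamma correction, which is what removes the cast in Fig. 6.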

The disadvantages of white balancing a visualization are that spectra will not be rendered exactly the same in images with different white balance, and the full gamut may not be used.

4 Conclusions and Open Questions

In this chapter we have discussed the color science issues that are relevant to false-color displays of hyperspectral imagery. We hope that the challenges discussed and some of the partial solutions proposed here will spur further thought into how to design visualizations that take into account the nonlinearities of human vision. Although the color science issues raised here are theoretically important, there is a serious lack of experimental evidence documenting what makes a hyperspectral visualization helpful or effective in practice, and this of course will depend on the exact application. This research area needs thorough and careful subjective testing that simulates real tasks as closely as possible, ideally with benchmark images and standardized viewing conditions so that experimental results can be reproduced by future researchers as they compare new ideas to old.


References

  1. Stone, M.: A Field Guide to Digital Color. AK Peters, Massachusetts (2003)
  2. Trussell, J., Vrhel, M.: Fundamentals of Digital Imaging. Cambridge University Press, London (2008)
  3. Sharma, G. (ed.): Digital Color Imaging. CRC Press, USA (2003)
  4. Jacobson, N.P., Gupta, M.R.: Design goals and solutions for the display of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 43(11), 2684–2692 (2005)
  5. Cui, M., Razdan, A., Hu, J., Wonka, P.: Interactive hyperspectral image visualization using convex optimization. IEEE Trans. Geosci. Remote Sens. 47, 1673–1684 (2009)
  6. Tsagaris, V., Anastassopoulos, V., Lampropoulos, G.: Fusion of hyperspectral data using segmented PCT for color representation and classification. IEEE Trans. Geosci. Remote Sens. 43, 2365–2375 (2005)
  7. Fairchild, M.: Color Appearance Models. Addison-Wesley, Reading, Massachusetts (2005)
  8. Healey, C., Booth, K.S., Enns, J.: Visualizing real-time multivariate data using preattentive processing. ACM Trans. Model. Comput. Simul. 5(3), 190–221 (1995)
  9. Jefferson, L., Harvey, R.: Accommodating color blind computer users. In: Proceedings of the International ACM SIGACCESS Conference on Computers and Accessibility, pp. 40–47 (2006)
  10. Ready, P.J., Wintz, P.A.: Information extraction, SNR improvement, and data compression in multispectral imagery. IEEE Trans. Commun. 21(10), 1123–1131 (1973)
  11. Tyo, J.S., Konsolakis, A., Diersen, D.I., Olsen, R.C.: Principal-components-based display strategy for spectral imagery. IEEE Trans. Geosci. Remote Sens. 41(3) (2003)
  12. Du, H., Qi, H., Wang, X., Ramanath, R., Snyder, W.E.: Band selection using independent component analysis for hyperspectral image processing. In: Proceedings of the 32nd Applied Imagery Pattern Recognition Workshop, pp. 93–98, Washington, DC (2003)
  13. Gupta, M.R., Jacobson, N.P.: Wavelet principal components analysis and its application to hyperspectral images. In: Proceedings of the IEEE International Conference on Image Processing (2006)
  14. Green, R.O., et al.: Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sens. Environ. 65, 227–248 (1998)
  15. Jacobson, N.P., Gupta, M.R., Cole, J.B.: Linear fusion of image sets for display. IEEE Trans. Geosci. Remote Sens. 45(10), 3277–3288 (2007)
  16. Wyszecki, G., Stiles, W.S.: Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd edn. Wiley, New York (2000)
  17. Lukac, R.: New framework for automatic white balancing of digital camera images. Signal Process. 88, 582–592 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  Department of Electrical Engineering, University of Washington, Seattle, USA
