Integrated Sensing and Processing for Hyperspectral Imagery

Chapter
Part of the Augmented Vision and Reality book series (Augment Vis Real, volume 3)

Abstract

In this chapter, we present an information sensing system which integrates sensing and processing, resulting in the direct collection of data which is relevant to the application. Broadly, integrated sensing and processing (ISP) considers algorithms that are integrated with the collection of data. That is, traditional sensor development tries to come up with the “best” sensor in terms of SNR, resolution, data rates, integration time, and so forth, while traditional algorithm development might wish to optimize probability of detection, false alarm rate, and class separability. For a typical automatic target recognition (ATR) problem, the goal of ISP is to field algorithms which “tell” the sensor what kind of data to collect next, so that the sensor alters its parameters to collect the “best” information for the algorithm to perform optimally. We illustrate the concept of ISP using a near-infrared (NIR) hyperspectral imaging sensor. This prototype sensor incorporates a digital mirror array (DMA) device in order to realize a Hadamard multiplexed imaging system. Specific Hadamard codes can be sent to the sensor to realize inner products of the underlying scene rather than the scene itself. The developed ISP algorithms utilize these codes to overcome issues traditionally associated with hyperspectral imaging (i.e., data glut and SNR issues) while also performing an object detection task. The underlying integration of sensing and processing results in algorithms which have better overall performance while collecting less data.

Keywords

Hyperspectral imaging · Adaptive imaging · Compressive imaging · Hadamard multiplexing

1 Introduction

This chapter presents the development of algorithms for Integrated Sensing and Processing (ISP) utilizing a hyperspectral imaging sensor. The ISP paradigm seeks to determine the best sensing parameters for achieving the performance objectives of a given algorithm. The exploitation algorithm may also have components which adapt to the imagery being sensed. In this context, ISP is a coupling between adaptive algorithms and adaptive sensing. Considering the problem of object detection/classification in hyperspectral imagery, ISP can increase sensing and algorithm performance in several ways. Firstly, hyperspectral exploitation usually suffers from a data glut problem: a hyperspectral sensor generates a cube of data in which each spatial pixel is represented as a spectral vector, and the first step in most exploitation algorithms is some type of data reduction, or spectral band selection. A question which naturally arises is: why sense particular information which is going to be immediately eliminated by a data reduction algorithm? If one can design a data collection system that integrates the sensor with the data reduction algorithm, then only information which is pertinent to the exploitation task need be sensed. Secondly, traditional hyperspectral imagers can suffer SNR degradation compared with broadband imagers. When one attempts high spatial resolution imaging and the sensing system separates the light into a large number of spectral components, there is a significant loss of photons being sensed by the detector array. Thus, to gather enough light to make a meaningful image, one must increase the detector integration time. If one is sensing a dynamic scene, longer integration times cannot be tolerated, which leads to a significant loss of SNR in the final hyperspectral image. In Sect. 2, we show a solution to this SNR issue using spatial/spectral multiplexing.

In order to investigate algorithms for integrated sensing and processing of imagery, we use a near-infrared (NIR) Hadamard multiplexing imaging sensor. This prototype sensor was developed by PlainSight Systems (PSS) and incorporates a digital mirror array (DMA) device in order to realize a Hadamard multiplexed imaging system. The known signal-to-noise ratio (SNR) advantage of Hadamard spectroscopy [1], extended to imaging systems [2, 3], allows for the collection of a hyperspectral data cube with more efficient light collection than standard “pushbroom” hyperspectral imagers.

The PlainSight NSTIS is a Spatial Light Modulator (SLM)-based multiplexing hyperspectral imaging camera, operable in the spectral range of about 900–1,700 nm, with no macro moving parts. As the SLM device, the system uses a Digital Micro-mirror Array (DMA) commercially available from Texas Instruments for projector display applications. The DMA contains 848 columns and 600 rows of mirrors and measures 10.2 mm × 13.6 mm. In Fig. 1, a DMA is shown with its glass cover removed.
Fig. 1

Digital mirror array acts as an electronic shutter to select and encode spatial/spectral features in the scene

When the scene is imaged onto the DMA device, a standard raster scan can be implemented by turning the first column of mirrors ON and sending this column to a diffraction grating, which results in a spectral representation of the first spatial column of the scene being projected onto the detector array. This process is illustrated in Fig. 2.
Fig. 2

Pushbroom hyperspectral imaging with a DMA device

If multiple slits (columns) in the DMA array are opened as shown in Fig. 3, the detector array will be presented with the superposition of the spectra of many columns. Such a system has the advantage of realizing optimal SNR when the sequence of open slits constitutes a Hadamard pattern [1]. Each individual frame collected at the detector array is not physically meaningful as an image, but when all the patterns of the Hadamard sequence have been recorded, the full hyperspectral data cube is recoverable by digital post-processing [2].
Fig. 3

Multiplexed Hadamard hyperspectral imaging

The PlainSight NSTIS sensor implements the process from Fig. 3 where the detector array is a standard Indigo Phoenix large-area InGaAs camera operating in the Near Infrared wavelengths.

During standard operation of the system, the sensor collects 512 raw frames of data. Each frame is 522 × 256 pixels and represents a superposition of spectra vs. spatial row, as shown in Fig. 3. The 512 frames are collected using the 256 Walsh (0/1-valued) patterns that determine which columns of the DMA are opened or closed. In other words, each column of the DMA is controlled by a bit of the Walsh code: if the bit is 0, the column is closed, whereas if the bit is 1, the column is open. Since the theory of optimal SNR is based upon Hadamard (±1-valued) patterns, one needs to collect two Walsh patterns to realize a single Hadamard pattern. Thus, the 512 collected frames represent the Walsh patterns required to form a full set of 256 Hadamard patterns. Since each column of the DMA hits the diffraction grating at a different location, the spectrum of each column hits the detector array at a different location. We refer to this as a skewness in the spectra, which spreads the information across 522 pixels in the spectral dimension while representing only 266 actual spectral bins. Of course, this spatial/spectral mixing and skewness is invertible once all 256 Hadamard patterns have been collected. The resultant hyperspectral scene is 256 × 256 pixels with 266 spectral bands from 900 to 1,700 nm.
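The two-for-one relationship between Walsh and Hadamard patterns is easy to verify numerically. The following minimal sketch (the variable names are ours, not the sensor API; `scipy.linalg.hadamard` uses Sylvester ordering, which is immaterial here) builds the two complementary 0/1 masks for one ±1 code and differences the resulting frames:

```python
import numpy as np
from scipy.linalg import hadamard

# The DMA can only open (1) or close (0) a mirror column, so a signed
# Hadamard code is realized by differencing two complementary Walsh frames.
N = 256
H = hadamard(N)                    # +/-1 Hadamard matrix (Sylvester ordering)
h = H[:, 1]                        # an arbitrary Hadamard code

w_pos = (h == +1).astype(int)      # Walsh mask: open columns where h = +1
w_neg = (h == -1).astype(int)      # complementary mask: open where h = -1

S = np.random.rand(522, N)         # stand-in for the unknown scene matrix
frame_pos = S @ w_pos              # frame collected with the first Walsh mask
frame_neg = S @ w_neg              # frame collected with the second mask
assert np.allclose(frame_pos - frame_neg, S @ h)   # the Hadamard-coded frame
```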

Given a sensor that accommodates adaptation while imaging, the ISP concepts we discuss can be viewed as within the realm of compressive sensing (as presented by Donoho [4] and Candès et al. [5]), in that we collect far fewer image samples than would normally be required to exploit the entire scene of interest. Neifeld and Shankar [6] have done similar work on concepts for feature-specific imaging, while Mahalanobis and Daniell [7] have looked at exploitation-driven compression algorithms (another form of ISP).

The outline of this chapter is as follows. Section 2 presents an algorithm for variable resolution sensing where high resolution imagery is driven by an ATR metric. Section 3 presents the results of an experiment which demonstrates the developed algorithms implemented in a prototype ISP hyperspectral sensor, while Sect. 4 presents concluding remarks and future work.

2 Variable Resolution Hyperspectral Sensing

2.1 Mathematical Representation

Since the sensor encodes data identically and independently on each spatial column of the scene, we will perform the mathematical analysis on an individual, but arbitrary, spatial column. Thus, for the underlying hyperspectral scene, S(λ, r, c), we consider only a particular column of data, S(λ, r). We wish to establish a correspondence between the sampling of the hyperspectral row, S(λ, r), as a digital image and a particular mirror of the DMA device. As described for Fig. 3, each row of the scene hits the diffraction grating at a different place, and thus the entire spectrum is shifted on the focal plane as a function of the row. This is referred to as spectral “skewness”. Thus, as a particular row enters the system, the underlying scene actually becomes S(λ(r), r), where the spectrum is now a function of row. We now make the substitution ω = λ(r) and ignore this dependency for the moment. So we are concerned with sensing the hyperspectral row image S(ω, r). The sampling of this function brought about by the DMA generates a matrix S of dimension 522 × 256. We are thus interested in sensing this array with Hadamard vectors of length 256. An example scene matrix, S, is given in Fig. 4. Recall that this is a spectral × spatial data matrix, so there is no intrinsic interpretability.
Fig. 4

Example scene matrix S representing the wavelength × spatial row information for a given spatial column. Collected from multiplexing hyperspectral imager

The imaging system senses the data of Fig. 4 with Hadamard multiplexing; thus we measure a collection of transformations of this data rather than the data itself. Looking at the Hadamard basis, we are interested in encoding the spatial component of this data, which is of dimension 256. We take a standard ordering of the Hadamard basis for \( \mathbb{R}^{256} \), shown in the example below for dimension 8.
$$ H_{8} = \begin{bmatrix} +1 & +1 & +1 & +1 & +1 & +1 & +1 & +1 \\ +1 & +1 & +1 & +1 & -1 & -1 & -1 & -1 \\ +1 & +1 & -1 & -1 & +1 & +1 & -1 & -1 \\ +1 & +1 & -1 & -1 & -1 & -1 & +1 & +1 \\ +1 & -1 & +1 & -1 & +1 & -1 & +1 & -1 \\ +1 & -1 & +1 & -1 & -1 & +1 & -1 & +1 \\ +1 & -1 & -1 & +1 & +1 & -1 & -1 & +1 \\ +1 & -1 & -1 & +1 & -1 & +1 & +1 & -1 \end{bmatrix}. $$
It is important to note that Hadamard matrices of this construction are symmetric and orthogonal:
$$ H_{N} = H_{N}^{T} , \qquad H_{N} H_{N}^{T} = N\,I_{N} , $$
(1)
so that \( H_{N}^{-1} = \tfrac{1}{N}H_{N} \).
A single frame sensed by the camera in multiplexed mode results from a particular column of this Hadamard matrix, denoted \( h_{i} \). The ith frame of collected data is the 522 × 1 vector
$$ f_{i} = Sh_{i} . $$
(2)
Therefore, sensing with all 256 Hadamard codes yields the 522 × 256 data matrix.
$$ F = SH_{256} . $$
(3)
This is the data which gets collected during normal operation of the sensor. To recover the underlying scene, S, exactly from the sensed frames, F, we use Eq. 1 to get
$$ S = \frac{1}{256}\,F H_{256} . $$
(4)

This implies that if all 256 Hadamard vectors are sequentially encoded into the mirror array and sensed through the camera, then we can fully recover S from the actual collected data F. Performing this recovery on all spatial columns of the data recovers the full hyperspectral data cube.
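As a numerical sanity check of Eqs. 3 and 4, the following sketch uses `scipy.linalg.hadamard` (a symmetric ±1 Hadamard matrix, in Sylvester rather than the text's ordering, which does not affect the identity):

```python
import numpy as np
from scipy.linalg import hadamard

N = 256
H = hadamard(N)                    # symmetric +/-1 Hadamard matrix
S = np.random.rand(522, N)         # stand-in for the true scene matrix

F = S @ H                          # Eq. (3): all 256 multiplexed frames
S_rec = (F @ H) / N                # Eq. (4): H @ H = N * I gives exact recovery
assert np.allclose(S_rec, S)
```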

The relationship between the underlying spectral parameter and our indexing parameter was previously given by
$$ \omega = \lambda \left( r \right). $$
(5)
Specifically, the sensing instrument being used for this discussion introduces a spectral “skewness” where the underlying spectral representation is shown in Fig. 5.
Fig. 5

Spectral skewness for the actual hyperspectral scene as sensed through the multiplexing Hadamard hyperspectral imager. The diagonal lines show the lines of constant wavelength

Thus, the “unskewed” scene representation, S(λ, r), can be garnered from the recovered data, S(ω, r), by following the lines of constant wavelength from Fig. 5. This is illustrated in Fig. 6.
Fig. 6

Skewness correction in scene representation. Left is skewed data representation while right is the unskewed version following lines of constant wavelength from Fig. 5
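A minimal deskewing sketch follows, assuming a linear skew of exactly one spectral pixel per spatial row (so that 266 aligned bins plus 256 row offsets account for the 522 spectral pixels); in the actual instrument, the lines of constant wavelength of Fig. 5 come from sensor calibration:

```python
import numpy as np

def deskew(S_skewed):
    # Assumed model: for spatial row r the spectrum lands r pixels lower on
    # the 522-pixel axis, so shifting each row back by r pixels leaves the
    # aligned spectral bins. This is our illustrative assumption, not a
    # sensor specification.
    n_spec, n_rows = S_skewed.shape        # 522 spectral pixels x 256 rows
    n_bands = n_spec - n_rows              # 266 aligned spectral bins
    unskewed = np.empty((n_bands, n_rows))
    for r in range(n_rows):
        # follow the line of constant wavelength for row r
        unskewed[:, r] = S_skewed[r:r + n_bands, r]
    return unskewed

deskewed = deskew(np.random.rand(522, 256))    # shape (266, 256)
```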

2.2 Reduced Resolution Imaging

Equation 4 implies that if all 256 Hadamard vectors are sequentially encoded into the mirror array and sensed through the camera, then we can fully recover the scene, S, from the data frames, F, collected by the sensor. Since we are interested in compressed sensing, we wish to know what can be recovered about S if we sense only a few of the Hadamard vectors. Consider, for example, sensing only the first four codes of the Hadamard matrix, in which each sign pattern shown is constant over a block of 64 consecutive rows:
$$ \mathbf{H}_{256,4} = \begin{bmatrix} +1 & +1 & +1 & +1 \\ \vdots & \vdots & \vdots & \vdots \\ +1 & +1 & -1 & -1 \\ \vdots & \vdots & \vdots & \vdots \\ +1 & -1 & +1 & -1 \\ \vdots & \vdots & \vdots & \vdots \\ +1 & -1 & -1 & +1 \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix}. $$
(6)
Then, we have sensed the four vectors
$$ F_{256,4} = SH_{256,4} = \left[ {\begin{array}{*{20}c} {Sh_{1} } & {Sh_{2} } & {Sh_{3} } & {Sh_{4} } \\ \end{array} } \right]. $$
(7)
The dimension of F is 522 × 4. Define the approximate scene by \( \hat{S} \) as follows:
$$ \begin{aligned} \hat{S} & = F_{256,4} H_{256,4}^{T} = \left[ {\begin{array}{*{20}c} {Sh_{1} } & {Sh_{2} } & {Sh_{3} } & {Sh_{4} } \\ \end{array} } \right]H_{256,4}^{T} = S\left[ {\begin{array}{*{20}c} {h_{1} } & {h_{2} } & {h_{3} } & {h_{4} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {h_{1}^{T} } \\ {h_{2}^{T} } \\ {h_{3}^{T} } \\ {h_{4}^{T} } \\ \end{array} } \right] \\ & \quad = S\left[ {h_{1} h_{1}^{T} + h_{2} h_{2}^{T} + h_{3} h_{3}^{T} + h_{4} h_{4}^{T} } \right]. \\ \end{aligned} $$
(8)
However, if we let \( 1_{64} \) denote the 64 × 64 matrix of all ones, then it can be shown that
$$ \left[ {h_{1} h_{1}^{T} + h_{2} h_{2}^{T} + h_{3} h_{3}^{T} + h_{4} h_{4}^{T} } \right] = 4\left[ {\begin{array}{*{20}c} {1_{64} } & 0 & \cdots & 0 \\ 0 & {1_{64} } & {} & \vdots \\ \vdots & {} & {1_{64} } & 0 \\ 0 & \cdots & 0 & {1_{64} } \\ \end{array} } \right]. $$
(9)
Thus,
$$ \hat{S} \propto S\left[ {\begin{array}{*{20}c} {1_{64} } & 0 & \cdots & 0 \\ 0 & {1_{64} } & {} & \vdots \\ \vdots & {} & {1_{64} } & 0 \\ 0 & \cdots & 0 & {1_{64} } \\ \end{array} } \right]. $$
(10)

So the underlying scene is approximated by averaging (i.e., the first 64 columns of S are averaged to form the first 64 columns of the approximation). Again, the dimensions of the matrix S(ω, r) are wavelength, ω, and spatial row, r, implying that the spatial row information in \( \hat{S} \) is the average of 64 spatial rows of the scene: a low-pass filtering in the spatial row dimension. In the wavelength dimension, the situation is somewhat more complicated. It would appear that the data in the wavelength dimension is not smoothed along the wavelength axis, but simply averaged over 64 spatial rows. However, recall that ω = λ(r) indexes the true spectral parameter as a function of the spatial row. Thus, \( \hat{S} \), the coarse scale approximation to S, smooths S in both the wavelength and spatial row dimensions.
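The averaging relationship of Eqs. 7–10 can be verified numerically. The sketch below is illustrative: with scipy's Sylvester ordering, the codes that are constant on 64-row blocks sit at columns 0, 64, 128, and 192 rather than in the first four positions, but they span the same coarse codes as the text's \( \mathbf{H}_{256,4} \):

```python
import numpy as np
from scipy.linalg import hadamard

# Sensing only four coarse codes yields a 64-fold block-averaged
# approximation of the scene (Eq. 10).
N, n_blocks, block = 256, 4, 64
H = hadamard(N)
H4 = H[:, [0, 64, 128, 192]]          # the four block-constant codes

S = np.random.rand(522, N)            # stand-in scene matrix
F4 = S @ H4                           # Eq. (7): only four frames sensed
S_hat = (F4 @ H4.T) / N               # Eq. (8), with the 1/N normalization

# Eq. (10): S_hat equals S averaged over each 64-column block.
S_avg = S.reshape(522, n_blocks, block).mean(axis=2).repeat(block, axis=1)
assert np.allclose(S_hat, S_avg)
```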

The lines of constant wavelength used for the coarse resolution “deskewing” are represented in Figs. 7 and 8.
Fig. 7

Spectral skewness for the actual coarse resolution scene. The diagonal lines show the lines of constant wavelength

Fig. 8

Skewness correction in coarse scale scene representation. Left is skewed data approximation while right is the unskewed version following lines of constant wavelength from Fig. 7

At this point, one can apply a metric to the reduced resolution imagery which defines which spatial areas are to be sensed at a finer resolution. The process can then continue until the highest resolution possible is achieved over the spatial areas desired by the controlling metric.

We now describe this process in more detail. With the coarse-scale approximation of the scene defined by sensing four frames of data,
$$ \hat{S} = F_{256,4} H_{256,4}^{T} , $$
(11)
we are in a position to establish a measurable criterion as to whether any part of the data matrix \( \hat{S} \) needs to be approximated to a finer resolution. We will later describe an Automatic Target Recognition (ATR) criterion in more detail, but for now we simply assume that some criterion decides which regions of the array need further detail. The extra approximation detail is collected in the same manner as previously described for the 256-row case. That is, we consider each of the four dyadic spatial row “blocks” which were averaged in the first approximation. The next level of approximation is made with the reduced-size Hadamard basis set \( H_{64,4} \), applied to the flagged 64-row block (the DMA mirrors outside that block remain closed). Thus, we sense the scene as
$$ F_{64,4} = SH_{64,4} = \left[ {\begin{array}{*{20}c} {Sh_{1} } & {Sh_{2} } & {Sh_{3} } & {Sh_{4} } \\ \end{array} } \right], $$
(12)
where \( \mathbf{H}_{64,4} \) consists of the first four Hadamard codes of length 64, each sign pattern constant over a block of 16 consecutive rows:
$$ \mathbf{H}_{64,4} = \begin{bmatrix} +1 & +1 & +1 & +1 \\ \vdots & \vdots & \vdots & \vdots \\ +1 & +1 & -1 & -1 \\ \vdots & \vdots & \vdots & \vdots \\ +1 & -1 & +1 & -1 \\ \vdots & \vdots & \vdots & \vdots \\ +1 & -1 & -1 & +1 \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix}. $$
(13)
With this formalization,
$$ \hat{S}_{64} = F_{64,4} H_{64,4}^{T} . $$
(14)
For the purpose of illustration, if we assume that the second dyadic block has been flagged for finer resolution approximation, then the next level approximation becomes
$$ \hat{S}_{64} \propto S\left[ {\begin{array}{*{20}c} {1_{64} } & 0 & \cdots & 0 \\ 0 & {\left[ {\begin{array}{*{20}c} {1_{16} } & 0 & \cdots & 0 \\ 0 & {1_{16} } & {} & \vdots \\ \vdots & {} & {1_{16} } & 0 \\ 0 & \cdots & 0 & {1_{16} } \\ \end{array} } \right]} & {} & \vdots \\ \vdots & {} & {1_{64} } & 0 \\ 0 & \cdots & 0 & {1_{64} } \\ \end{array} } \right]. $$
(15)
This procedure continues until the criterion for further resolution processing is not satisfied by any of the remaining dyadic blocks. The final approximation will be of the form
$$ \hat{S}_{\text{final}} \propto S\left[ {{\text{diag}}\left[ {1_{{k_{1} }} ,1_{{k_{2} }} , \ldots ,1_{{k_{M} }} } \right]} \right], $$
(16)
with the parameters \( \{k_{1}, k_{2}, \ldots, k_{M}\} \) defining the local resolution; they are determined iteratively by the controlling criterion. For example, a spectral MACH [8] filter could be inserted at this stage as the controlling criterion for finer-resolution sampling.
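The coarse-to-fine recursion can be summarized in a short skeleton. This is our own sketch, not the fielded implementation: `sense_block` stands in for the short Hadamard code sequence that returns a block-averaged measurement of rows [lo, hi), and `criterion` is the controlling metric (such as the L1 spectral match used in Sect. 3, or a spectral MACH filter response):

```python
import numpy as np

def refine(sense_block, criterion, lo, hi, min_block=1):
    # Sense the dyadic row block [lo, hi) at its coarsest resolution.
    data = sense_block(lo, hi)
    # Stop if the block is at full resolution or fails the metric.
    if (hi - lo) <= min_block or not criterion(data):
        return [(lo, hi, data)]          # leaf: k_i = hi - lo in Eq. (16)
    # Otherwise split into four dyadic sub-blocks, as in 256 -> 64 -> 16.
    leaves, quarter = [], (hi - lo) // 4
    for q in range(4):
        leaves += refine(sense_block, criterion,
                         lo + q * quarter, lo + (q + 1) * quarter, min_block)
    return leaves

# Toy usage: refine wherever the block mean exceeds a threshold.
rows = np.random.rand(256)
leaves = refine(lambda lo, hi: rows[lo:hi].mean(), lambda m: m > 0.5, 0, 256)
```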
For the final multi-resolution scene representation, the “skewness” also occurs at multiple scales. This results in a set of piecewise-linear lines of constant wavelength, depicted in Fig. 9. An example on real imagery is given in Sect. 3.
Fig. 9

Spectral skewness for multi-resolution scene. Left is for coarse scale representation while the right is for the final multi-resolution scene representation

3 Experimental Results

In this section, we describe experiments to demonstrate (i) the improvement in SNR obtained with Hadamard-coded aperture sensing, and (ii) the benefit of ISP from imaging a scene at variable resolution, which dramatically reduces the amount of raw hyperspectral data that must be collected. The sensor was placed in a data collection tower and imagery was collected of a surrogate “tank” target emplaced in the grass below the tower. Figure 10 shows the target emplacement with a regular visible camera of approximately the same spatial resolution as the hyperspectral sensing system. Associated example imagery from the hyperspectral sensor is shown in Fig. 11, where the image is spectral band 210.
Fig. 10

Top: target emplacement shown with visible sensor. Tank target is inside the box. Bottom: close-up of target taken with visible camera standing directly in front of target

Fig. 11

Band 210 from hyperspectral sensor of target area. Tank target is inside the box

3.1 Improving SNR Using Hadamard Multiplexing

The first demonstration addressed SNR in hyperspectral imaging. The SNR gain from Hadamard multiplexing was tested by gathering a hyperspectral data cube in a standard raster scan mode. Several cubes of the same scene were collected in this mode so that “signal” and “noise” cubes could be estimated: the “signal” cube was estimated as the average data cube, and the “noise” cube was taken as the signal subtracted from each collected cube. With these estimates for signal and noise, a signal-to-noise ratio was calculated. For a 16 ms integration time per frame, the SNR in raster scan mode was calculated as 12 dB, while imagery taken with a 1 ms integration time yielded an SNR of 3 dB. Samples of typical imagery collected in raster scan mode are shown in Fig. 12.
Fig. 12

Band 20 from hyperspectral sensor in raster scan mode (left: 16 ms integration time; right: 1 ms integration time)
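The SNR estimate described above can be written compactly. This is a sketch under our assumptions (a power-ratio dB convention and a stack of repeated collects of a static scene); the chapter does not state the exact convention used:

```python
import numpy as np

def estimate_snr_db(cubes):
    cubes = np.asarray(cubes, dtype=float)  # (n_collects, rows, cols, bands)
    signal = cubes.mean(axis=0)             # "signal" cube: average collect
    noise = cubes - signal                  # "noise": deviation per collect
    # Power ratio in dB, averaging noise energy over the collects
    # (our assumed convention).
    return 10.0 * np.log10(np.sum(signal ** 2) /
                           (np.sum(noise ** 2) / len(cubes)))
```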

The same sensing and computations were conducted with the sensor set to Hadamard multiplexing mode. The SNR gain becomes clear: the 16 ms integration time yields an SNR of 17 dB, a 5 dB gain. For the 1 ms integration time, the improvement is more dramatic: Hadamard multiplexing increases the SNR from 3 to 13 dB, a 10 dB improvement. Figure 13 presents typical imagery collected in Hadamard multiplexing mode. The gain in SNR becomes more pronounced as the light level decreases. One notices that the 1 ms raster scan image contains virtually no signal information, while the 1 ms Hadamard multiplexing image is comparable to the 16 ms raster scan image. The SNR gain for Hadamard multiplexed imaging is qualitatively supported by this experiment.
Fig. 13

Band 20 from hyperspectral sensor in Hadamard multiplexing mode (left: 16 ms integration time; right: 1 ms integration time)

3.2 Variable Resolution Hyperspectral Sensing

This experimental setup was then used to test the variable resolution hyperspectral sensing algorithm described in Sect. 2. A training cube was collected and the average target spectral vector was calculated. This vector was then taken as the driving signature for identifying which areas require finer-resolution sensing. At each sensing level, the current approximate data cube is compared against the average target spectrum using the L1 norm; if this norm is smaller than a defined threshold, that resolution cell is flagged as requiring more resolution. The sensing continues in this manner until the highest possible resolution is attained. Figure 14 shows band 210 of the hyperspectral scene used for training. With the training signature calculated, the sequence of collected frames is shown in Fig. 15.
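For concreteness, the flagging test amounts to a one-line comparison; the function and argument names below are illustrative rather than taken from the experiment:

```python
import numpy as np

def needs_more_resolution(cell_spectrum, target_signature, threshold):
    # Flag the cell for finer sensing when its (block-averaged) spectrum is
    # close to the trained target signature in the L1 sense.
    return np.sum(np.abs(cell_spectrum - target_signature)) < threshold
```

Such a test can serve directly as the controlling criterion in the refinement recursion sketched in Sect. 2.2.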
Fig. 14

Band 210 from hyperspectral sensor used for training the variable resolution imaging algorithm

Fig. 15

Band 210 from hyperspectral sensor in variable resolution sensing mode: increased resolution progresses from the top left to the bottom right. Note the full resolution on the target and less resolution elsewhere

One notes that the final variable-resolution image collected by the sensor has full resolution on the target and less resolution elsewhere. Also, the sensing terminated on the parking lot area (top left of image) after the very coarsest resolution was collected. The final variable-resolution data cube results from sensing only 14% of the pixels required for full resolution everywhere. This represents a substantial savings in sensing resources and addresses the data glut problem typically associated with hyperspectral data exploitation. The full resolution and variable resolution images are shown in Fig. 16 for comparison.
Fig. 16

Band 210 from hyperspectral sensor (left: full resolution mode; right: variable resolution mode)

The next example, in Fig. 17, shows the output of the algorithm adapted to generate fine resolution only where a certain spatial recognition criterion has been satisfied. Although any algorithm can be used, our metric is a spatial correlation filter designed to detect the shape of the vehicle in the center of the scene.
Fig. 17

The image is sensed with task-specific compressed sensing algorithm based upon a Hadamard multiplexing sensor. The resulting multi-resolution image is represented at the far right and shows fine resolution on the target and coarse resolution elsewhere

Essentially, the image cube is further resolved by applying additional Hadamard vectors to sense only in regions where there is a potential match with the object of interest, as determined by the response of the filter to low-resolution data. Large portions of the scene in the background and foreground are discontinued early in the sensing process, whereas resolution is progressively added only to the regions that exhibit peaks potentially due to the car. This approach improves the sensing process by greatly reducing the overall volume of data and the time required to collect it. In the end, only the spatial information that is salient for the object recognition algorithm to recognize the car is gathered in detail.

4 Summary

The concept of Integrated Sensing and Processing (ISP) is a unique way to address the issue of the large amounts of data associated with hyperspectral imaging. Much of the data collected by a conventional sensor is not of interest and is discarded during analysis. In this chapter, we discussed a coded aperture hyperspectral imager that allows data to be collected at variable resolution by dynamically controlling the aperture. In an ISP framework, the sensor collects relevant information only in areas where features (or objects) of interest may be present, thereby greatly reducing the amount of raw data that needs to be sensed.

Specifically, we first described the conceptual design of the coded aperture hyperspectral imager developed by Plain Sight Systems [9]. It is noteworthy that the raw data sensed by this instrument is not a hyperspectral image, but a mix of coded spatial and spectral information which must be digitally processed to recover the hyperspectral data cube. We presented the algebraic framework for reconstructing the hyperspectral data cube using the Hadamard transform matrix, and described a method for varying resolution in the reconstructed scene.

The coded aperture imager’s ability to collect less data than a conventional sensor was shown by means of illustrative examples. The essence of the experiments is that raw data can be collected sparsely across the scene, driven by performance metrics such as pattern match criteria, so that only a fraction of the underlying pixels needs to be sensed. Fundamentally, it becomes possible to retain the salient information in the scene while avoiding the need to measure irrelevant information. This has the potential to significantly reduce the requirements for data links and on-board storage in future generations of sensors based on the ISP paradigm.

References

1. Harwit, M., Sloane, N.J.A.: Hadamard Transform Optics. Academic Press, New York (1979)
2. DeVerse, R.A., Hammaker, R.M., Fately, W.G.: Realization of the Hadamard multiplex advantage using a programmable optical mask in a dispersive flat-field near-infrared spectrometer. Appl. Spectrosc. 54(12), 1751–1758 (2000)
3. Wuttig, A., Riesenberg, R.: Sensitive Hadamard transform imaging spectrometer with a simple MEMS. In: Proceedings of SPIE 4881, pp. 167–178 (2003)
4. Donoho, D.L.: Compressed sensing. Department of Statistics Report 2004-25, Stanford University, October 2004
5. Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
6. Neifeld, M.A., Shankar, P.: Feature-specific imaging. Appl. Opt. 42, 3379–3389 (2003)
7. Mahalanobis, A., Daniell, C.: Data compression and correlation filtering: a seamless approach to pattern recognition. In: Javidi, B. (ed.) Smart Imaging Systems. SPIE Press, Bellingham (2001)
8. Mahalanobis, A., Vijaya Kumar, B.V.K., Song, S., Sims, S.R.F., Epperson, J.F.: Unconstrained correlation filters. Appl. Opt. 33(17), 3751–3759 (1994)
9. Fateley, W.G., Coifman, R.R., Geshwind, F., DeVerse, R.A.: System and method for encoded spatio-spectral information processing. US Patent 6,859,275, February 2005

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

1. Lockheed Martin Missiles and Fire Control, Orlando, USA
