# Non-negative matrix factorization based unmixing for principal component transformed hyperspectral data

## Abstract

Non-negative matrix factorization (NMF) has been widely used in mixture analysis for hyperspectral remote sensing. When used for spectral unmixing analysis, however, it has two main shortcomings: (1) since the dimensionality of hyperspectral data is usually very large, NMF tends to suffer from large computational complexity for the popular multiplicative iteration rule; (2) NMF is sensitive to noise (outliers), and thus the corrupted data will make the results of NMF meaningless. Although principal component analysis (PCA) can be used to mitigate these two problems, the transformed data will contain negative numbers, hindering the direct use of the multiplicative iteration rule of NMF. In this paper, we analyze the impact of PCA on NMF, and find that multiplicative NMF can also be applicable to data after principal component transformation. Based on this conclusion, we present a method to perform NMF in the principal component space, named ‘principal component NMF’ (PCNMF). Experimental results show that PCNMF is both accurate and time-saving.

## Keywords

Non-negative matrix factorization (NMF); Principal component analysis (PCA); Endmember; Hyperspectral

## CLC number

TP751.1

## 1 Introduction

Spectral unmixing analysis has received increasing interest in hyperspectral remote sensing, since mixed pixels widely exist in hyperspectral imagery. The linear mixing model (LMM) plays an important role in hyperspectral unmixing analysis (Keshava and Mustard, 2002; Plaza *et al.*, 2004; Bioucas-Dias *et al.*, 2012). It assumes that any pixel in the image can be regarded as a linear combination of several pure spectral signatures (called 'endmembers') weighted by corresponding abundance fractions. Owing to physical constraints, the abundances satisfy the abundance non-negativity constraint (ANC) and the abundance sum-to-one constraint (ASC). Under the LMM assumption, when all the endmembers are available, their abundances can be obtained conveniently using linear unmixing methods (Heinz and Chang, 2001; Parente and Plaza, 2010; Heylen *et al.*, 2011). Therefore, some methods concentrate on endmember selection, for instance, pixel purity index (PPI) (Boardman, 1992), N-FINDR (Winter, 1999; Ji *et al.*, 2015), orthogonal bases algorithm (OBA) (Tao *et al.*, 2007a), iterative error analysis (IEA) (Neville *et al.*, 1999), simplex growing algorithm (SGA) (Chang *et al.*, 2006), successive projection algorithm (SPA) (Zhang *et al.*, 2008), maximum volume by householder transformation (MVHT) (Liu and Zhang, 2012), Gaussian elimination method (GEM) (Geng *et al.*, 2013b), and fast Gram determinant based algorithm (FGDA) (Sun *et al.*, 2014). All these algorithms assume that at least one pure pixel exists for each endmember in the image.

However, the pure-pixel assumption is hard to satisfy for real hyperspectral images. In this case, an alternative approach to unmixing analysis, endmember generation, is required. Methods of this type include minimum volume transform (MVT) (Craig, 1994), minimum volume simplex analysis (MVSA) (Li and Bioucas-Dias, 2008), MINVEST (Hendrix *et al.*, 2012), minimum volume enclosing simplex (MVES) (Chan *et al.*, 2009; Ambikapathi *et al.*, 2011), simplex identification via split augmented Lagrangian (SISAL) (Bioucas-Dias, 2009), iterated constrained endmember (ICE) (Berman *et al.*, 2004), and geometric optimization model (GOM) (Geng *et al.*, 2013a). Take ICE as an example. It formulates an optimization problem that minimizes the reconstruction error regularized by a constraint term, i.e., the sum of variances of the simplex vertices. In each iteration of ICE, the abundance fractions of each pixel are found by solving a quadratic programming problem, which is very time-consuming. In recent years, non-negative matrix factorization (NMF) has been applied to hyperspectral data unmixing (Miao and Qi, 2007; Zymnis *et al.*, 2007; Jia and Qian, 2009; Huck *et al.*, 2010; Liu *et al.*, 2011; Ji *et al.*, 2013; Zhu *et al.*, 2014). The multiplicative update algorithm for NMF was proposed by Lee and Seung (1999); it is computationally simple and needs no manually set parameters. Note that NMF suffers from two main problems when used for hyperspectral unmixing analysis. One is that the multiplicative iteration version of NMF is very time-consuming if applied directly to the original hyperspectral data, since the dimensionality of hyperspectral data is very high (generally more than 100). The other is that NMF is very sensitive to noise (outliers).
To address these two problems, dimensionality reduction can be applied, which not only reduces the data size but also improves the signal-to-noise ratio (SNR) of the data set. However, after dimensionality reduction, the multiplicative learning rule for NMF cannot be used, since dimensionality-reduced data generally contain negatives. In this paper, we explore the impact of principal component analysis (PCA) on NMF and find that the multiplicative updating rule is still applicable when the dimensionality reduction contains only the step of rotation and does not involve translation. We then present a new approach for NMF in the principal component (PC) space, namely PCNMF.

## 2 Background

In this section, we briefly review the theory of LMM and NMF.

### 2.1 Linear mixing model

In the LMM, any pixel vector \(r_i\) in the image can be expressed by a linear combination of several endmember vectors \(e_j\) (\(j = 1, 2, \ldots, p\)) with the physical constraint conditions:

\[{r_i} = \sum\limits_{j = 1}^p {{c_{ij}}{e_j}} = E{c_i}, \quad (1)\]

\[{c_{ij}} \geq 0, \quad \sum\limits_{j = 1}^p {{c_{ij}}} = 1, \quad (2)\]

where \(p\) is the number of endmembers in the image, \({c_i} = {[{c_{i1}},{c_{i2}}, \ldots ,{c_{ip}}]^{\rm{T}}}\), and \(c_{ij}\) is a scalar representing the fractional abundance of endmember vector \(e_j\) in the pixel \(r_i\). \(E = [{e_1},{e_2}, \ldots ,{e_p}]\) is an \(L \times p\) mixing matrix (\(L\) is the number of bands of the original data).

### 2.2 Non-negative matrix factorization

Given an \(L \times M\) non-negative matrix \(R\) (\(M \gg L\) in general) and a positive integer \(p < L\), the task of non-negative matrix factorization is to find two non-negative matrices \(E_{L \times p}\) and \(C_{p \times M}\) such that

\[R \approx EC. \quad (3)\]

A natural way to find \(E\) and \(C\) is to construct the following optimization problem:

\[\min\limits_{E,C} \frac{1}{2}\left\| {R - EC} \right\|_{\rm{F}}^2, \quad {\rm{s}}.{\rm{t}}.\;E \geq 0,\;C \geq 0, \quad (4)\]

where \(\left\| \cdot \right\|_{\rm{F}}\) represents the Frobenius norm. There are many methods that can be used to solve Eq. (4), among which the most popular one is the multiplicative iteration rule. The learning process for multiplicative NMF is

\[E \leftarrow E \ast (R{C^{\rm{T}}})./(EC{C^{\rm{T}}}), \quad (5)\]

\[C \leftarrow C \ast ({E^{\rm{T}}}R)./({E^{\rm{T}}}EC), \quad (6)\]

where \(\ast\) and \(./\) denote element-wise multiplication and division, respectively.

If the initial matrices \(E\) and \(C\) are strictly non-negative, these matrices remain non-negative throughout the iterations. If the ASC needs to be considered, we can simply replace the matrices \(R\) and \(E\) by \(\bar R = \left[ {\begin{array}{*{20}c}R \\ {\delta 1_M^{\rm{T}}} \end{array} } \right],\;\;\bar E = \left[ {\begin{array}{*{20}c}E \\ {\delta 1_p^{\rm{T}}} \end{array} } \right]\), where \(1_M\) is an \(M\)-dimensional column vector and \(1_p\) a \(p\)-dimensional column vector, both with all elements equal to one, and \(\delta\) is a positive parameter controlling the effect of the ASC, which is usually assigned manually. In recent years, NMF has been widely used in unmixing analysis for hyperspectral data (Miao and Qi, 2007; Zymnis *et al.*, 2007; Jia and Qian, 2009; Huck *et al.*, 2010; Liu *et al.*, 2011; Ji *et al.*, 2013; Zhu *et al.*, 2014). Most of these methods perform NMF directly on the original data, so they are generally time-consuming and sensitive to noise (outliers). Although dimensionality reduction can mitigate these problems well, it leads to another intractable situation: the non-negativity property cannot be guaranteed for the dimensionality-reduced data. As a result, the multiplicative updating rule cannot be applied in the dimensionality-reduced space. Interestingly, we find that the multiplicative updating rule is still applicable after a dimensionality reduction that contains only a rotation operation, as elaborated in the following section.
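The multiplicative rules of Eqs. (5) and (6), with the ASC handled by the \(\delta\)-row augmentation described above, can be sketched as follows; the function name, `eps` guard, and random initialization are our own illustrative choices, not part of the original algorithm:

```python
import numpy as np

def nmf_multiplicative(R, p, delta=13.0, maxiter=500, seed=0):
    """Multiplicative NMF (Lee and Seung, 1999) with the ASC enforced by
    augmenting R and E with a row of delta's, as in the text. A minimal
    sketch; eps guards the element-wise division against zeros."""
    rng = np.random.default_rng(seed)
    L, M = R.shape
    eps = 1e-9
    E = rng.uniform(0.1, 1.0, size=(L, p))
    C = rng.uniform(0.1, 1.0, size=(p, M))
    for _ in range(maxiter):
        # Augmented matrices: the delta row pushes columns of C toward sum one.
        Rb = np.vstack([R, delta * np.ones((1, M))])
        Eb = np.vstack([E, delta * np.ones((1, p))])
        # Eq. (6)-style update for C, using the augmented matrices.
        C *= (Eb.T @ Rb) / (Eb.T @ Eb @ C + eps)
        # Eq. (5)-style update for E.
        E *= (R @ C.T) / (E @ C @ C.T + eps)
    return E, C
```

Because every factor in the updates is non-negative, strictly non-negative initial matrices stay non-negative throughout, as the text notes.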

## 3 Impact of PCA on NMF

In this section, taking PCA (Jolliffe, 2002) as an example, we investigate the applicability of NMF in the PC space. As is well known, the standard PCA process contains two main steps, translation and rotation. More specifically, translation moves the center of the data to the origin by subtracting the mean vector, while rotation projects the mean-removed data onto the directions of the eigenvectors derived from its covariance matrix, which is equivalent to rotating the data by the orthogonal eigenvector matrix. Hence, in the following, we discuss the impact of general translation and rotation on NMF.

### 3.1 Impact of rotation

In NMF, \(R \geq 0\), \(E \geq 0\), and \(C \geq 0\). However, the data may contain negatives after rotation by an orthogonal matrix \(V\). Denote \(\hat R = {V^{\rm{T}}}R\) and \(\hat E = {V^{\rm{T}}}E\) as the rotated data and endmember matrix, respectively. First, we consider the multiplicative learning rule for \(C\), which can be rewritten as

\[C \leftarrow C \ast ({\hat E^{\rm{T}}}\hat R)./({\hat E^{\rm{T}}}\hat EC). \quad (7)\]

Substituting \(\hat E = {V^{\rm{T}}}E\) and \(\hat R = {V^{\rm{T}}}R\) into Eq. (7), we have

\[C \leftarrow C \ast ({E^{\rm{T}}}V{V^{\rm{T}}}R)./({E^{\rm{T}}}V{V^{\rm{T}}}EC) = C \ast ({E^{\rm{T}}}R)./({E^{\rm{T}}}EC), \quad (8)\]

since \(V{V^{\rm{T}}} = I\). Hence, the rotation matrix has no impact on the multiplicative update rule for \(C\) in the PC space. The multiplicative updating rule for \(\hat E\) is

\[\hat E \leftarrow \hat E \ast (\hat R{C^{\rm{T}}})./(\hat EC{C^{\rm{T}}}). \quad (9)\]

Substituting \(\hat E = {V^{\rm{T}}}E\) and \(\hat R = {V^{\rm{T}}}R\) into Eq. (9), we have

\[{V^{\rm{T}}}E \leftarrow {V^{\rm{T}}}E \ast ({V^{\rm{T}}}R{C^{\rm{T}}})./({V^{\rm{T}}}EC{C^{\rm{T}}}). \quad (10)\]

The rotation matrix \(V\) cannot be eliminated in the multiplicative updating rule for the endmember matrix, and both \(\hat R\) and \(\hat E\) in Eq. (10) may contain negatives in the PC space. This is the main obstacle preventing us from applying NMF in the PC space.
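The cancellation of \(V\) in the abundance update, and the appearance of negatives after rotation, can be checked numerically; the sizes and the random orthogonal matrix below are hypothetical:

```python
import numpy as np

# Numeric check of Section 3.1: an orthogonal rotation V drops out of the
# C update, while the rotated data may contain negative entries.
rng = np.random.default_rng(0)
L, p, M = 6, 3, 50
R = rng.uniform(0.1, 1.0, size=(L, M))
E = rng.uniform(0.1, 1.0, size=(L, p))
C = rng.uniform(0.1, 1.0, size=(p, M))

# Random orthogonal matrix via QR decomposition.
V, _ = np.linalg.qr(rng.normal(size=(L, L)))
R_hat, E_hat = V.T @ R, V.T @ E

# C update in the original space (Eq. (6)) vs. the rotated space (Eq. (7)).
C_orig = C * (E.T @ R) / (E.T @ E @ C)
C_rot = C * (E_hat.T @ R_hat) / (E_hat.T @ E_hat @ C)
print(np.allclose(C_orig, C_rot))  # prints True: V^T V = I cancels, Eq. (8)
```

The same substitution applied to the endmember update leaves `V.T` in place, which is exactly the obstacle discussed above.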

### 3.2 Impact of translation

From Section 3.1, rotation does not affect the update rule of \(C\), but the update rule of \(E\) may not hold, since \(\hat R\) in Eq. (10) may contain negatives. To make the multiplicative update rule applicable to the rotated data, a simple trick to make all data non-negative is translation. Supposing the data and endmembers contain negatives, we can select a vector \(r_0\) (an \(L\)-dimensional column vector) such that \((R - {r_0}1_M^{\rm{T}}) \geq 0\) and \((E - {r_0}1_p^{\rm{T}}) \geq 0\). Denoting \(\tilde R = R - {r_0}1_M^{\rm{T}}\) and \(\tilde E = E - {r_0}1_p^{\rm{T}}\), the corresponding multiplicative updating rules become

\[\tilde E \leftarrow \tilde E \ast (\tilde R{C^{\rm{T}}})./(\tilde EC{C^{\rm{T}}}), \quad (11)\]

\[C \leftarrow C \ast ({\tilde E^{\rm{T}}}\tilde R)./({\tilde E^{\rm{T}}}\tilde EC). \quad (12)\]

If \(C\) satisfies the ASC, i.e., \(1_M^{\rm{T}} = 1_p^{\rm{T}}C\), the translation does not change the value of the objective function. However, the ASC of \(C\) cannot be guaranteed in the multiplicative learning process (see Eqs. (5), (6), (11), and (12)). Therefore, the translation of the data will change the final solution of NMF. Moreover, the number of vectors \(r_0\) that can achieve the non-negativity of \(R\) and \(E\) is infinite, and different \(r_0\) will lead to different \(E\) and \(C\) even with the same initialization. Therefore, although translation can make \(R \geq 0\) and \(E \geq 0\), it is unnatural and its use in real applications is not suggested.

**E**## 4 Principal component non-negative matrix factorization

In Section 3, we learned that both steps of the PCA process can cause the failure of the multiplicative update rules of NMF in the PC space. For the rotation step, since the effect of the rotation matrix \(V\) can be completely eliminated, as in Eq. (8), the update rule for the abundance matrix remains the same. Yet the rotation matrix \(V\) cannot be eliminated in the update rule for \(E\), as in Eq. (10), so the update rule for \(E\) is no longer universally applicable. Although data translation can solve the problem caused by \(V\), the analysis in Section 3.2 indicates that translation will change the final solution, and different \(r_0\) will lead to different results.

Interestingly, based on our observation, the maximum spectral angle between pixels in real hyperspectral data is mostly small (generally less than 45°). In addition, the rotation operation does not change the spectral angles between data points. Both facts motivate us to apply the orthogonal Procrustes (OP) technique to solve the non-negativity problem of \(R\) and \(E\) in the PC space. That is, forcibly rotating all the data points into the first quadrant of the PC space will make the update rule still work for \(E\).

The OP problem (Green, 1952; Schönemann, 1966) is to transform a given matrix \(B\) to a given matrix \(A\) by an orthogonal transformation matrix \(Q\), which maps \(B\) to \(A\) as closely as possible. Mathematically, this problem can be stated as follows:

\[\min\limits_Q \left\| {QB - A} \right\|_{\rm{F}}, \quad {\rm{s}}.{\rm{t}}.\;{Q^{\rm{T}}}Q = I. \quad (13)\]

The solution of Eq. (13) is given through the singular value decomposition (SVD)

\[A{B^{\rm{T}}} = UD{W^{\rm{T}}}, \quad (14)\]

\[Q = U{W^{\rm{T}}}, \quad (15)\]

where \(U\) and \(W\) are the orthogonal factors of the SVD in Eq. (14). Taking a 2D data set as an example, let \(A = [1, 1]^{\rm{T}}\) and \(B\) be the mean vector of the data set. Then we can calculate \(Q\) by Eq. (15). It can be seen that all data are transformed to the first quadrant, being non-negative. As mentioned before, to achieve the non-negativity requirement for both \(R\) and \(E\), the data should satisfy one premise; that is, the maximum spectral angle of the data set should be no larger than 90°. Since digital number, radiance, and reflectance values are all non-negative, real hyperspectral data all lie in the first quadrant, so the maximum spectral angle of a real data set is no larger than 90°. Moreover, we computed a series of real hyperspectral data sets and found that the maximum spectral angle is usually less than 45° in practice. Therefore, by the OP transformation, we can perform the multiplicative update rules in the PC space without involving the translation process. Clearly, the PC transformation without translation and the OP transformation can be combined into one process, named the 'PC-OP transformation' here. Overall, the PC-OP transformation has the following advantages: (1) it reduces the dimensionality of the data set and thus the computational complexity; (2) it makes all data non-negative, so that the multiplicative update rules of NMF still work in the PC space; and (3) it improves the SNR of the data set by keeping only the first \(p - 1\) PC bands, which improves the final accuracy.
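The 2D OP rotation described above can be sketched as follows; the data cloud, its 0.2 angular spread, and the function name are hypothetical illustrations of the technique, not the paper's implementation:

```python
import numpy as np

def op_rotation(A, B):
    """Orthogonal Procrustes solution (Schonemann, 1966): the Q minimizing
    ||Q B - A||_F with Q^T Q = I, via the SVD A B^T = U D W^T, Q = U W^T."""
    U, _, Wt = np.linalg.svd(A @ B.T)
    return U @ Wt

# 2D illustration: A = [1, 1]^T is the first-quadrant diagonal, B is the mean
# vector of a data cloud with small angular spread, so the whole cloud lands
# in the first quadrant after rotation.
rng = np.random.default_rng(0)
X = np.array([[3.0], [-2.0]]) + 0.2 * rng.normal(size=(2, 100))  # has negatives
A = np.ones((2, 1))
B = X.mean(axis=1, keepdims=True)
Q = op_rotation(A, B)
X_rot = Q @ X   # rotated cloud, centered on the diagonal
```

Note that `Q` maps the mean vector exactly onto the diagonal direction; the rest of the cloud follows because rotation preserves the angles between points, which is the premise stated above.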

- 1. Initialization: Due to the non-convexity of the objective function (4), random initialization of **E** and **C** can lead to the local-minima problem, which makes the solution meaningless. Taking a 2D situation as an example, the scatter points in Fig. 2 are a highly mixed data cloud. Points *A*, *B*, and *C* are the three real endmembers, and obviously Δ*ABC* is the most compact triangle that encloses all these points. Because of the non-convexity of NMF, any triangle in the plane that encloses the data set is a local solution of NMF, such as Δ*A*<sub>1</sub>*B*<sub>1</sub>*C*<sub>1</sub> and Δ*A*<sub>2</sub>*B*<sub>2</sub>*C*<sub>2</sub>. However, these vertices are far from the true endmembers *A*, *B*, and *C*, so they have no physical meaning. To avoid unreasonable local minima, Tao *et al.* (2007b) used the N-FINDR outputs as the endmember initialization. We employ the same strategy: the results of FGDA (Sun *et al.*, 2014), which are independent of the dimensionality of the data, are used as the initialization for **E**. For **C**, it is estimated by solving the non-negative least-squares constraint problem (lsqnonneg) on the augmented data and endmember matrices \(\bar R\) and \(\bar E_0\).
- 2. Stopping conditions: There are two widely used criteria, maximum iteration number and minimum error tolerance. Here, we employ the maximum iteration number.
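Combining the PC rotation (without translation), the OP rotation, and the multiplicative updates, the whole PCNMF procedure can be condensed into the following illustrative sketch. It departs from the paper in two labeled ways: random initialization is used in place of the FGDA endmembers, and a clipping step is our own guard against small residual negatives:

```python
import numpy as np

def pcnmf(R, p, delta=13.0, maxiter=1000, seed=0):
    """Illustrative PCNMF sketch: (1) rotate the un-translated data onto the
    top p-1 eigenvectors of its covariance, (2) apply an orthogonal
    Procrustes rotation pushing the cloud toward the first quadrant,
    (3) run multiplicative NMF there. Random init replaces FGDA here."""
    rng = np.random.default_rng(seed)
    L, M = R.shape
    # Step 1: rotation only -- project without removing the mean from the data.
    _, eigvecs = np.linalg.eigh(np.cov(R))
    V = eigvecs[:, ::-1][:, :p - 1]          # top p-1 principal directions
    X = V.T @ R                              # (p-1) x M, may contain negatives
    # Step 2: Procrustes rotation of the mean vector onto the all-ones diagonal.
    a = np.ones((p - 1, 1))
    b = X.mean(axis=1, keepdims=True)
    U, _, Wt = np.linalg.svd(a @ b.T)
    Q = U @ Wt
    X = np.clip(Q @ X, 0, None)              # clip stray negatives (our guard)
    # Step 3: multiplicative NMF with the ASC delta-row augmentation.
    E = rng.uniform(0.1, 1.0, size=(p - 1, p))
    C = rng.uniform(0.1, 1.0, size=(p, M))
    eps = 1e-9
    for _ in range(maxiter):
        Xb = np.vstack([X, delta * np.ones((1, M))])
        Eb = np.vstack([E, delta * np.ones((1, p))])
        C *= (Eb.T @ Xb) / (Eb.T @ Eb @ C + eps)
        E *= (X @ C.T) / (E @ C @ C.T + eps)
    # Map endmembers back to the original space through the two rotations.
    return V @ Q.T @ E, C
```

Since the iterations run in the \((p-1)\)-dimensional PC space rather than the \(L\)-dimensional band space, each update costs far less, which is the source of the speedup reported in Section 5.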

## 5 Experiments

We have conducted tests on both simulated and real data to evaluate the performance of PCNMF, in terms of both endmember accuracy and computation time. For a fair comparison, we set the same initializations of \(E\) and \(C\), the same maximum iteration number maxiter, and the same ASC weight \(\delta\) for both NMF and PCNMF in all experiments.

### 5.1 Simulated data

Here, we design three experiments to evaluate the performance of FGDA, NMF, and PCNMF. The spectra of three minerals (Alunite, Calcite, and Kaolinite) from the U.S. Geological Survey (USGS) Digital Spectral Library are selected as endmember signatures. Then 2000 mixture vectors are generated according to Eqs. (1) and (2), with abundance fractions following a Dirichlet distribution. To ensure that no pure pixel exists, fractions are not allowed to be larger than 0.9. The choice of the maximum iteration number, maxiter, may differ for different data sets. For the simulated data sets used in this study, the NMF algorithms can generally produce acceptable results when the iteration number is around 4000, so we set maxiter to 4000. The ASC weight is set to \(\delta = 13\) for all experiments. In addition, since the data are randomly generated, the average result of 10 runs is presented in the following. To evaluate the performance of these methods, the metrics of rmsSAD, rmsSID (Nascimento and Bioucas-Dias, 2005), and the relative abundance error (RAE) (Geng *et al.*, 2015) are used, where SAD stands for the spectral angle distance, and SID for the spectral information divergence. The abundance matrix of FGDA is estimated by a least-squares method with the ASC weight also set to 13.
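The simulated-data protocol can be sketched as follows; the USGS mineral spectra are replaced by random stand-ins, and the rejection loop enforcing the no-pure-pixel rule is one straightforward reading of the 0.9 threshold:

```python
import numpy as np

# Sketch of the simulation: 2000 mixtures of p = 3 endmembers with Dirichlet
# abundances, rejecting any mixture whose largest fraction exceeds 0.9.
rng = np.random.default_rng(0)
L, p, M = 224, 3, 2000
E = rng.uniform(0.0, 1.0, size=(L, p))     # stand-in endmember signatures

C = np.empty((p, 0))
while C.shape[1] < M:
    batch = rng.dirichlet(np.ones(p), size=M).T
    keep = batch.max(axis=0) <= 0.9        # enforce the no-pure-pixel rule
    C = np.hstack([C, batch[:, keep]])
C = C[:, :M]

R = E @ C                                  # linear mixtures per Eqs. (1)-(2)
```

Dirichlet samples satisfy the ANC and ASC by construction, so only the 0.9 cap needs explicit filtering.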

#### 5.1.1 Experiment 1: accuracy test for noiseless data

#### 5.1.2 Experiment 2: accuracy test for noisy data

As can be seen from Fig. 4, the proposed PCNMF is consistently better than both FGDA and NMF applied to the original full-dimensional data. When SNR ≥ 15 dB, FGDA has the worst performance. When SNR = 10 dB, NMF has the lowest accuracy in all metrics, which indicates that NMF is sensitive to noise. The superiority of PCNMF can be attributed to the fact that PCA improves the SNR and suppresses noise, so NMF in the PC space is more accurate, particularly when the SNR is low.

#### 5.1.3 Experiment 3: computation time

Since PCNMF updates \(E\) and \(C\) in the low-dimensional PC space, PCNMF costs much less time than NMF (Table 1).

Table 1 Computation time for different methods

Method | Time (s)
---|---
FGDA | 0.0093
NMF | 12.5288
PCNMF | 0.7324

### 5.2 Real data

The real data used in this experiment were collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Cuprite mining district, Nevada (Green *et al.*, 1998). AVIRIS is a high-quality, low-noise hyperspectral instrument that acquires data in 224 contiguous spectral bands ranging from 365 to 2500 nm. The selected subscene is shown in Fig. 5, with a size of 191 × 250 pixels. Due to water absorption or low SNR, bands 1–3, 104–113, 148–167, and 221–224 were removed. The maximum SADs of the original, PC transformed, and OP transformed data are listed in Table 2. The maximum SAD of the original data is around 23°, far less than 90°, and the PC and PC-OP transformations have little effect on the SADs between pixels.

Table 2 The maximum SAD of the original, PC transformed, and OP transformed data

Data | Maximum SAD (degree)
---|---
Original | 23.7468
PC transformed | 28.9216
OP transformed | 28.9216

According to previous studies (*et al.*, 2009) and the ground truth (Swayze *et al.*, 1992), we set the number of endmembers to *p* = 14. For a fair comparison, FGDA is also conducted on the data after dimensionality reduction by the PC-OP transformation. Also, the 14 endmembers extracted by FGDA are used as the initial endmembers for PCNMF. The maximum iteration number and ASC weight are the same as in the simulations, i.e., maxiter = 4000 and \(\delta = 13\). The resampled spectra from the USGS Digital Spectral Library are selected as the ground truth for comparison. For each mineral, the library spectrum that has both small SAD and small SID with the endmember extracted by FGDA is selected as the reference spectrum. The comparison of FGDA and PCNMF in terms of SAD is shown in Table 3. It can be seen that 10 out of the 14 endmembers extracted by PCNMF outperform those by FGDA, and PCNMF has a smaller average SAD than FGDA.

Table 3 The SAD between USGS reference spectra and extracted endmembers by FGDA and PCNMF

No. | Substance | FGDA SAD (degree) | PCNMF SAD (degree)
---|---|---|---
1 | Muscovite IL107 | | 5.9231
2 | Desert Varnish GDS141 | 9.4029 |
3 | Alunite GDS84 Na03 | 4.6300 |
4 | Kaolin/Smect KLF508 | | 5.8254
5 | Montmorillonite SWy-1 | 6.9298 |
6 | Kaolinite CM7 | | 4.8635
7 | Buddingtonite NHB2301 | 5.4664 |
8 | Alunite GDS82 Na82 | 12.4593 |
9 | Montmorillonite+Illi CM42 | 6.2363 |
10 | Chalcedony CU91-6A | 5.5653 |
11 | Alunite AL706 | 7.7288 |
12 | Montmorillonite+Illi CM37 | 4.8154 |
13 | Kaolin/Smect KLF511 | | 4.3025
14 | Kaolin/Smect H89-FR-5 | 4.2887 |
| Average | 6.2034 |

## 6 Conclusions

In this paper, we have explored the possibility of applying the PCA dimensionality reduction technique to the multiplicative update rules of NMF. The main obstacle is that data after PC transformation may contain negatives, which can be caused by both steps of PCA (i.e., translation and rotation). We have proved that the rotation matrix of PCA can be eliminated in the multiplicative learning rule for \(C\), but not in that for \(E\). To solve the non-negativity problem of PCA data, one possible way is to add a large positive vector \(r_0\) to all data. However, different \(r_0\) will lead to different \(E\) and \(C\), so translation is not an advisable way for practical applications. According to our observation, the maximum SAD of real hyperspectral data is not large (generally less than 45°). Since the rotation operation does not change the SADs between data vectors, we thereby introduce the OP transformation to forcibly rotate the data cloud into the first quadrant of the PC space. This NMF-based unmixing analysis method for data after the PC rotation and OP transformations is named PCNMF. Compared with the original NMF, PCNMF is more robust to noise. Moreover, since the data dimensionality is reduced to \(p - 1\), our method greatly saves computation time.

## References

- Ambikapathi, A., Chan, T.H., Ma, W.K., *et al.*, 2011. Chance-constrained robust minimum-volume enclosing simplex algorithm for hyperspectral unmixing. *IEEE Trans. Geosci. Remote Sens.*, **49**(11):4194–4209. http://dx.doi.org/10.1109/TGRS.2011.2151197
- Berman, M., Kiiveri, H., Lagerstrom, R., *et al.*, 2004. ICE: a statistical approach to identifying endmembers in hyperspectral images. *IEEE Trans. Geosci. Remote Sens.*, **42**(10):2085–2095. http://dx.doi.org/10.1109/TGRS.2004.835299
- Bioucas-Dias, J.M., 2009. A variable splitting augmented Lagrangian approach to linear spectral unmixing. 1st Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, p.1–4. http://dx.doi.org/10.1109/WHISPERS.2009.5289072
- Bioucas-Dias, J.M., Plaza, A., Dobigeon, N., *et al.*, 2012. Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches. *IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens.*, **5**(2):354–379. http://dx.doi.org/10.1109/JSTARS.2012.2194696
- Boardman, J.W., 1992. Automated spectral unmixing of AVIRIS data using convex geometry concepts. Summaries of the 4th Annual JPL Airborne Geoscience Workshop, p.11–14.
- Chan, T.H., Chi, C.Y., Huang, Y.M., *et al.*, 2009. A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing. *IEEE Trans. Signal Process.*, **57**(11):4418–4432. http://dx.doi.org/10.1109/TSP.2009.2025802
- Chang, C.I., Wu, C.C., Liu, M., *et al.*, 2006. A new growing method for simplex-based endmember extraction algorithm. *IEEE Trans. Geosci. Remote Sens.*, **44**(10):2804–2819. http://dx.doi.org/10.1109/TGRS.2006.881803
- Craig, M.D., 1994. Minimum-volume transforms for remotely sensed data. *IEEE Trans. Geosci. Remote Sens.*, **32**(3):542–552. http://dx.doi.org/10.1109/36.297973
- Geng, X.R., Ji, L.Y., Zhao, Y.C., *et al.*, 2013a. A new endmember generation algorithm based on a geometric optimization model for hyperspectral images. *IEEE Geosci. Remote Sens. Lett.*, **10**(4):811–815. http://dx.doi.org/10.1109/LGRS.2012.2224635
- Geng, X.R., Xiao, Z.Q., Ji, L.Y., *et al.*, 2013b. A Gaussian elimination based fast endmember extraction algorithm for hyperspectral imagery. *ISPRS J. Photogr. Remote Sens.*, **79**:211–218. http://dx.doi.org/10.1016/j.isprsjprs.2013.02.020
- Geng, X.R., Sun, K., Ji, L.Y., *et al.*, 2015. Optimizing the endmembers using volume invariant constrained model. *IEEE Trans. Image Process.*, **24**(11):3441–3449. http://dx.doi.org/10.1109/TIP.2015.2446196
- Green, B.F., 1952. The orthogonal approximation of an oblique structure in factor analysis. *Psychometrika*, **17**(4):429–440. http://dx.doi.org/10.1007/BF02288918
- Green, R.O., Eastwood, M.L., Sarture, C.M., *et al.*, 1998. Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS). *Remote Sens. Environ.*, **65**(3):227–248. http://dx.doi.org/10.1016/S0034-4257(98)00064-9
- Heinz, D.C., Chang, C.I., 2001. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. *IEEE Trans. Geosci. Remote Sens.*, **39**(3):529–545. http://dx.doi.org/10.1109/36.911111
- Hendrix, E.M.T., Garcia, I., Plaza, J., *et al.*, 2012. A new minimum-volume enclosing algorithm for endmember identification and abundance estimation in hyperspectral data. *IEEE Trans. Geosci. Remote Sens.*, **50**(7):2744–2757. http://dx.doi.org/10.1109/TGRS.2011.2174443
- Heylen, R., Burazerovic, D., Scheunders, P., 2011. Fully constrained least squares spectral unmixing by simplex projection. *IEEE Trans. Geosci. Remote Sens.*, **49**(11):4112–4122. http://dx.doi.org/10.1109/TGRS.2011.2155070
- Huck, A., Guillaume, M., Blanc-Talon, J., 2010. Minimum dispersion constrained nonnegative matrix factorization to unmix hyperspectral data. *IEEE Trans. Geosci. Remote Sens.*, **48**(6):2590–2602. http://dx.doi.org/10.1109/TGRS.2009.2038483
- Ji, L.Y., Geng, X.R., Yu, K., *et al.*, 2013. A new non-negative matrix factorization method based on barycentric coordinates for endmember extraction in hyperspectral remote sensing. *Int. J. Remote Sens.*, **34**(19):6577–6586. http://dx.doi.org/10.1080/01431161.2013.804223
- Ji, L.Y., Geng, X.R., Sun, K., *et al.*, 2015. Modified N-FINDR endmember extraction algorithm for remote-sensing imagery. *Int. J. Remote Sens.*, **36**(8):2148–2162. http://dx.doi.org/10.1080/01431161.2015.1034895
- Jia, S., Qian, Y.T., 2009. Constrained nonnegative matrix factorization for hyperspectral unmixing. *IEEE Trans. Geosci. Remote Sens.*, **47**(1):161–173. http://dx.doi.org/10.1109/TGRS.2008.2002882
- Jolliffe, I.T., 2002. Principal Component Analysis. Springer.
- Keshava, N., Mustard, J.F., 2002. Spectral unmixing. *IEEE Signal Process. Mag.*, **19**(1):44–57. http://dx.doi.org/10.1109/79.974727
- Lee, D.D., Seung, H.S., 1999. Learning the parts of objects by non-negative matrix factorization. *Nature*, **401**(6755):788–791. http://dx.doi.org/10.1038/44565
- Li, J., Bioucas-Dias, J.M., 2008. Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data. IEEE Int. Geoscience and Remote Sensing Symp., p.250–253. http://dx.doi.org/10.1109/IGARSS.2008.4779330
- Liu, J.M., Zhang, J.S., 2012. A new maximum simplex volume method based on householder transformation for endmember extraction. *IEEE Trans. Geosci. Remote Sens.*, **50**(1):104–118. http://dx.doi.org/10.1109/TGRS.2011.2158829
- Liu, X.S., Xia, W., Wang, B., *et al.*, 2011. An approach based on constrained nonnegative matrix factorization to unmix hyperspectral data. *IEEE Trans. Geosci. Remote Sens.*, **49**(2):757–772. http://dx.doi.org/10.1109/TGRS.2010.2068053
- Miao, L.D., Qi, H.R., 2007. Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization. *IEEE Trans. Geosci. Remote Sens.*, **45**(3):765–777. http://dx.doi.org/10.1109/TGRS.2006.888466
- Nascimento, J.M.P., Bioucas-Dias, J.M., 2005. Vertex component analysis: a fast algorithm to unmix hyperspectral data. *IEEE Trans. Geosci. Remote Sens.*, **43**(4):898–910. http://dx.doi.org/10.1109/TGRS.2005.844293
- Neville, R.A., Staenz, K., Szeredi, T., *et al.*, 1999. Automatic endmember extraction from hyperspectral data for mineral exploration. Canadian Symp. on Remote Sensing, p.21–24.
- Parente, M., Plaza, A., 2010. Survey of geometric and statistical unmixing algorithms for hyperspectral images. 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, p.1–4. http://dx.doi.org/10.1109/WHISPERS.2010.5594929
- Plaza, A., Martinez, P., Perez, R., *et al.*, 2004. A quantitative and comparative analysis of endmember extraction algorithms from hyperspectral data. *IEEE Trans. Geosci. Remote Sens.*, **42**(3):650–663. http://dx.doi.org/10.1109/TGRS.2003.820314
- Schönemann, P.H., 1966. A generalized solution of the orthogonal procrustes problem. *Psychometrika*, **31**(1):1–10. http://dx.doi.org/10.1007/BF02289451
- Sun, K., Geng, X.R., Wang, P.S., *et al.*, 2014. A fast endmember extraction algorithm based on Gram determinant. *IEEE Geosci. Remote Sens. Lett.*, **11**(6):1124–1128. http://dx.doi.org/10.1109/LGRS.2013.2288093
- Swayze, G., Clark, R.N., Kruse, F., *et al.*, 1992. Ground-truthing AVIRIS mineral mapping at Cuprite, Nevada. Summaries of the 3rd Annual JPL Airborne Geoscience Workshop, p.47–49.
- Tao, X.T., Wang, B., Zhang, L.M., *et al.*, 2007a. A new endmember extraction algorithm based on orthogonal bases of subspace formed by endmembers. IEEE Int. Geoscience and Remote Sensing Symp., p.2006–2009. http://dx.doi.org/10.1109/IGARSS.2007.4423223
- Tao, X.T., Wang, B., Zhang, L.M., *et al.*, 2007b. A new scheme for decomposition of mixed pixels based on nonnegative matrix factorization. IEEE Int. Geoscience and Remote Sensing Symp., p.1759–1762. http://dx.doi.org/10.1109/IGARSS.2007.4423160
- Winter, M.E., 1999. N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data. *SPIE*, **3753**:266–275.
- Zhang, J.K., Rivard, B., Rogge, D.M., 2008. The successive projection algorithm (SPA), an algorithm with a spatial constraint for the automatic search of endmembers in hyperspectral data. *Sensors*, **8**(2):1321–1342. http://dx.doi.org/10.3390/s8021321
- Zhu, F.Y., Wang, Y., Xiang, S.M., *et al.*, 2014. Structured sparse method for hyperspectral unmixing. *ISPRS J. Photogr. Remote Sens.*, **88**:101–118.
- Zymnis, A., Kim, S.J., Skaf, J., *et al.*, 2007. Hyperspectral image unmixing via alternating projected subgradients. 41st Asilomar Conf. on Signals, Systems and Computers, p.1164–1168. http://dx.doi.org/10.1109/ACSSC.2007.4487406