
Tight Semi-nonnegative Matrix Factorization

  • MATHEMATICAL THEORY OF IMAGES AND SIGNALS REPRESENTING, PROCESSING, ANALYSIS, RECOGNITION, AND UNDERSTANDING
  • Published in: Pattern Recognition and Image Analysis

Abstract

Nonnegative matrix factorization is a widely used, flexible matrix decomposition with applications in biology, image and signal processing, and information retrieval, among other areas. Here we present a related matrix factorization: a multi-objective optimization problem finds conical combinations of templates that approximate a given data matrix. The templates are chosen so that, as far as possible, only the original data set can be represented this way. However, the templates themselves are required to be neither nonnegative nor convex combinations of the original data.
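To make the notion of "conical combinations of templates" concrete, the sketch below shows the plain semi-nonnegative building block the abstract refers to: a data matrix X is approximated by W H, where the coefficient matrix H is nonnegative (so each data column is a conical combination of the columns of W) while the templates W are unconstrained. This is only a generic alternating least-squares/NNLS heuristic, not the paper's tight multi-objective formulation; the function name semi_nmf, the initialization, and the iteration count are illustrative assumptions.

import numpy as np
from scipy.optimize import nnls

def semi_nmf(X, k, n_iter=100, seed=0):
    """Illustrative semi-NMF sketch: X (m x n) ~= W (m x k) @ H (k x n), H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.standard_normal((m, k))          # templates: free to be negative
    H = np.abs(rng.standard_normal((k, n)))  # conical coefficients: kept nonnegative
    for _ in range(n_iter):
        # Update the templates by unconstrained least squares: W = X H^+ (pseudoinverse).
        W = X @ np.linalg.pinv(H)
        # Update each coefficient column by nonnegative least squares.
        for j in range(n):
            H[:, j], _ = nnls(W, X[:, j])
    return W, H

if __name__ == "__main__":
    X = np.random.default_rng(1).standard_normal((20, 50))
    W, H = semi_nmf(X, k=4)
    print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))

The "tight" variant described in the abstract additionally penalizes cones that are larger than necessary, so that points outside the original data set are, as far as possible, not representable; that extra objective is not reproduced in this sketch.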



Author information

Corresponding author

Correspondence to David W. Dreisigmeyer.

Ethics declarations

Any opinions and conclusions expressed herein are those of the author and do not necessarily represent the views of the U.S. Census Bureau. The research in this paper does not use any confidential Census Bureau information. This article was authored by an employee of the U.S. national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Additional information

David Wayne Dreisigmeyer was born in 1971. He graduated from Juniata College (BS in Pre-law, 1994) and Colorado State University (MS in Mathematics, 1999, and PhD in Electrical Engineering, 2004). He currently works at the United States Census Bureau's Center for Economic Studies, formerly served as a Federal Data Strategy Fellow, and is the author of 10 papers. His scientific interests include uses of differential geometry in optimization and pattern analysis.


About this article

Cite this article

Dreisigmeyer, D.W. Tight Semi-nonnegative Matrix Factorization. Pattern Recognit. Image Anal. 30, 632–637 (2020). https://doi.org/10.1134/S1054661820040124
