The Discrete Basis Problem

  • Pauli Miettinen
  • Taneli Mielikäinen
  • Aristides Gionis
  • Gautam Das
  • Heikki Mannila
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4213)


Matrix decomposition methods represent a data matrix as a product of two smaller matrices: one containing basis vectors that represent meaningful concepts in the data, and another describing how the observed data can be expressed as combinations of the basis vectors. Decomposition methods have been studied extensively, but many of them return real-valued matrices. If the original data is binary, real-valued basis vectors are hard to interpret. We describe a matrix decomposition formulation, the Discrete Basis Problem, which seeks a Boolean decomposition of a binary matrix, allowing the user to easily interpret the basis vectors. We show that the problem is computationally difficult and give a simple greedy algorithm for solving it. We present experimental results for the algorithm. The method gives intuitively appealing basis vectors. On the other hand, the continuous decomposition methods often give better reconstruction accuracy. We discuss the reasons for this behavior.
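As a rough illustration of the setting (not the authors' algorithm or notation), the following sketch shows the Boolean matrix product that underlies such a decomposition: an entry of the reconstruction is 1 if at least one basis vector covers it, and the quality of a decomposition can be measured by counting disagreeing entries. The function names `boolean_product` and `reconstruction_error` are our own, chosen for this example.

```python
import numpy as np

def boolean_product(B, X):
    """Boolean matrix product: (B o X)[i, j] = OR_k (B[i, k] AND X[k, j])."""
    # Integer matrix product followed by thresholding implements OR-of-ANDs.
    return (B.astype(int) @ X.astype(int) > 0).astype(int)

def reconstruction_error(A, B, X):
    """Number of entries where the Boolean product B o X differs from A."""
    return int(np.sum(A != boolean_product(B, X)))

# A 3x3 binary matrix that decomposes exactly with k = 2 basis vectors:
B = np.array([[1, 0],    # object 1 uses basis vector 1
              [1, 1],    # object 2 uses both basis vectors
              [0, 1]])   # object 3 uses basis vector 2
X = np.array([[1, 1, 0],   # basis vector 1
              [0, 1, 1]])  # basis vector 2
A = boolean_product(B, X)
```

Note that unlike ordinary matrix multiplication, the Boolean product lets basis vectors overlap without "double counting": the middle column of `A` above is covered by both basis vectors yet remains 1, not 2.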


Keywords: Basis Vector, Association Rule, Reconstruction Error, Latent Dirichlet Allocation, Binary Matrix



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Pauli Miettinen (1)
  • Taneli Mielikäinen (1)
  • Aristides Gionis (1)
  • Gautam Das (2)
  • Heikki Mannila (1)
  1. HIIT Basic Research Unit, Department of Computer Science, University of Helsinki, Finland
  2. Computer Science and Engineering Department, University of Texas at Arlington, Arlington, USA
