An effective and interpretable method for document classification

Regular Paper, Knowledge and Information Systems

Abstract

As the number of documents has been increasing rapidly in recent years, automatic text categorization has become a more important and fundamental task in information retrieval and text mining. Accuracy and interpretability are two important aspects of a text classifier: while accuracy measures the ability to classify unseen data correctly, interpretability is the ability of the classifier to be understood by humans and to provide the reasons why each data instance is assigned to a label. This paper proposes an interpretable classification method that exploits the Dirichlet process mixture model of von Mises–Fisher distributions for directional data. By using the label information of the training data explicitly and determining the number of topics for each class automatically, the learned topics are coherent, relevant and discriminative; they help interpret as well as distinguish classes. Our experimental results show the advantages of our approach in terms of separability, interpretability and effectiveness in classification tasks on datasets with high dimension and complex distribution. Our method is highly competitive with state-of-the-art approaches.


Notes

  1. Mview-LDA is inferred by a variational method. It is based on the source code at http://www.cs.cmu.edu/~pengtaox/code/.

  2. The source code of GLDA is provided by Li.

  3. Source code is available on Zhu’s home page http://bigml.cs.tsinghua.edu.cn/~jun/medlda.shtml.

  4. http://www.csie.ntu.edu.tw/~cjlin/libsvm/index.html.

  5. http://www.shoup.net/ntl/.

  6. http://www.forbes.com/sites/hughmcintyre/2015/03/08/these-are-the-most-streamed-women-on-spotify/.

References

  1. Linh NV, Anh NK, Than K, Tat NN (2015) Effective and interpretable document classification using distinctly labeled Dirichlet process mixture models of von Mises–Fisher distributions. In: Database systems for advanced applications. Springer, Switzerland, pp 139–153

  2. Delgado MF, Cernadas E, Barro S, Amorim DG (2014) Do we need hundreds of classifiers to solve real world classification problems? J Mach Learn Res 15(1):3133–3181

  3. Van de Merckt T, Decaestecker C (1995) About breaking the trade-off between accuracy and comprehensibility in concept learning. In: IJCAI’95 workshop on machine learning and comprehensibility

  4. Hofmann T (2001) Unsupervised learning by probabilistic latent semantic analysis. Mach Learn 42(1–2):177–196

  5. Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022

  6. Ramage D, Manning CD, Dumais S (2011) Partially labeled topic models for interpretable text mining. In: Proceedings of the 17th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 457–465

  7. Ahmed A, Xing EP (2010) Staying informed: supervised and semi-supervised multi-view topical analysis of ideological perspective. In: Proceedings of the 2010 conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 1140–1150

  8. Anh NK, Tam NT, Linh NV (2013) Document clustering using Dirichlet process mixture model of von Mises–Fisher distributions. In: 4th international symposium on information and communication technology, pp 131–138

  9. Shotwell MS, Slate EH et al (2011) Bayesian outlier detection with Dirichlet process mixtures. Bayesian Anal 6(4):665–690

  10. Manning CD, Raghavan P, Schütze H et al (2008) Introduction to information retrieval, vol 1. Cambridge University Press, Cambridge

  11. Anh NK, Van Linh N, Ky LH, Tam NT (2013) Document classification using semi-supervised mixture model of von Mises–Fisher distributions on document manifold. In: Proceedings of the fourth symposium on information and communication technology. ACM, pp 94–100

  12. Anh NK, Tam NT, Linh NV (2013) Document clustering using mixture model of von Mises–Fisher distributions on document manifold. In: International conference on soft computing and pattern recognition, pp 140–145

  13. Gopal S, Yang Y (2014) Von Mises–Fisher clustering models. In: Proceedings of the 31st international conference on machine learning, pp 154–162

  14. Reisinger J, Waters A, Silverthorn B, Mooney RJ (2010) Spherical topic models. In: Proceedings of the 27th international conference on machine learning (ICML-10), pp 903–910

  15. Zhong S, Ghosh J (2005) Generative model-based document clustering: a comparative study. Knowl Inf Syst 8:374–384

  16. Zhu J, Ahmed A, Xing EP (2012) MedLDA: maximum margin supervised topic models. J Mach Learn Res 13(1):2237–2278

  17. Blei DM, McAuliffe JD (2007) Supervised topic models. In: Advances in neural information processing systems 20, proceedings of the twenty-first annual conference on neural information processing systems, Vancouver, BC, Canada, December 3–6, 2007, pp 121–128

  18. Wang C, Blei DM, Li F (2009) Simultaneous image classification and annotation. In: IEEE computer society conference on computer vision and pattern recognition (CVPR 2009), 20–25 June 2009, Miami, Florida, USA, pp 1903–1910

  19. Anh NK, Linh NV, Toi NK, Tam NT (2013) Multi-labeled document classification using semi-supervised mixture model of Watson distributions on document manifold. In: International conference on soft computing and pattern recognition, pp 123–128

  20. Than K, Ho TB, Nguyen DK (2014) An effective framework for supervised dimension reduction. Neurocomputing 139:397–407

  21. Lacoste-Julien S, Sha F, Jordan MI (2008) DiscLDA: discriminative learning for dimensionality reduction and classification. In: Koller D, Schuurmans D, Bengio Y, Bottou L (eds) Advances in neural information processing systems 21. Curran Associates, Inc., pp 897–904

  22. Ramage D, Hall D, Nallapati R, Manning CD (2009) Labeled LDA: a supervised topic model for credit attribution in multi-labeled corpora. In: Proceedings of the 2009 conference on empirical methods in natural language processing, volume 1. Association for Computational Linguistics, pp 248–256

  23. Banerjee A, Dhillon IS, Ghosh J, Sra S (2005) Clustering on the unit hypersphere using von Mises–Fisher distributions. J Mach Learn Res 6:1345–1382

  24. Xie P, Xing EP (2013) Integrating document clustering and topic modeling. In: Proceedings of the twenty-ninth conference on uncertainty in artificial intelligence, Bellevue, WA, USA, August 11–15, 2013

  25. Li X, OuYang J, Lu Y, Zhou X, Tian T (2015) Group topic model: organizing topics into groups. Inf Retr J 18(1):1–25

  26. Ferguson TS (1973) A Bayesian analysis of some nonparametric problems. Ann Stat 1(2):209–230

  27. Sethuraman J (1994) A constructive definition of Dirichlet priors. Stat Sin 4:639–650

  28. Ishwaran H, James LF (2001) Gibbs sampling methods for stick-breaking priors. J Am Stat Assoc 96(453):161–173

  29. Neal RM (2000) Markov chain sampling methods for Dirichlet process mixture models. J Comput Graph Stat 9(2):249–265

  30. Blei DM, Jordan MI (2006) Variational inference for Dirichlet process mixtures. Bayesian Anal 1(1):121–144

  31. Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J (2013) Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems, pp 3111–3119

  32. Jiang Q, Zhu J, Sun M, Xing EP (2012) Monte Carlo methods for maximum margin supervised topic models. In: Advances in neural information processing systems, pp 1592–1600

  33. Van der Maaten L, Hinton G (2008) Visualizing data using t-SNE. J Mach Learn Res 9:2579–2605

  34. Röder M, Both A, Hinneburg A (2015) Exploring the space of topic coherence measures. In: Proceedings of the eighth ACM international conference on web search and data mining, WSDM 2015, Shanghai, China, February 2–6, 2015, pp 399–408

  35. Niyogi P (2013) Manifold regularization and semi-supervised learning: some theoretical analyses. J Mach Learn Res 14(1):1229–1250

  36. Ng AY, Jordan MI (2001) On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. In: Advances in neural information processing systems, pp 841–848

Acknowledgments

This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant Number 102.05-2014.28 and by the Air Force Office of Scientific Research (AFOSR), Asian Office of Aerospace Research & Development (AOARD), and US Army International Technology Center, Pacific (ITC-PAC) under Award Number FA2386-15-1-4011.

Author information

Corresponding author

Correspondence to Ngo Van Linh.

Additional information

A part of this work appears in [1].

Appendix

1.1 Lower bound function

The log likelihood of the corpus is bounded from below by applying Jensen’s inequality:

$$\begin{aligned}&\log P\left( \mathbf{X}^v,\mathbf{Z}^v\,|\,\alpha _{v},\zeta _{v},\{\kappa _{v,t}\}_{v=1,t=1}^{V,T},\rho _{v}\right) \nonumber \\&\quad \ge E_{q}[\log P(\mathbf{X}^v,\mathbf{Z}^v,\mathbf{U}_v,\varvec{\mu }_v)] - E_{q}[\log q(\mathbf{U}_v,\varvec{\mu }_v,\mathbf{Z}^v\,|\,\gamma _v,\tilde{\mu }_v,\tilde{\kappa }_v,\phi )] \nonumber \\ \end{aligned}$$
(30)

We expand this lower bound as follows

$$\begin{aligned} L(\varvec{\gamma }_{v},\tilde{\varvec{\mu }}_{v},\tilde{\varvec{\kappa }}_{v},\varvec{\phi })&= E_{q}[\log P(\mathbf{X}^v\,|\,\mathbf{Z}^v,\varvec{\kappa }_{v},\varvec{\mu }_v)] + E_{q}[\log P(\mathbf{Z}^v\,|\,\mathbf{U}_v)] \nonumber \\&\quad + E_{q}[\log P(\mathbf{U}_v\,|\,\alpha _v)] + E_{q}[\log P(\varvec{\mu }_v\,|\,\zeta _{v},\rho _{v})] - E_{q}[\log q(\mathbf{U}_v\,|\,\varvec{\gamma }_{v})] \nonumber \\&\quad - E_{q}\left[ \log q(\mathbf{Z}^v\,|\,\varvec{\phi })\right] - E_{q}\left[ \log q(\varvec{\mu }_v\,|\,\tilde{\varvec{\mu }}_{v},\tilde{\varvec{\kappa }}_{v})\right] \nonumber \\&= \sum _{n=1}^{N_v}\left( E_{q}[\log P(x_n^v\,|\,z_n^v,\varvec{\kappa }_{v},\varvec{\mu }_v)] + E_{q}\left[ \log P(z_n^v\,|\,\mathbf{U}_v)\right] \right) \nonumber \\&\quad + \sum _{t=1}^{\infty }E_{q}[\log P(u_{v,t}\,|\,\alpha _v)] + \sum _{t=1}^{\infty }E_{q}[\log P(\mu _{v,t}\,|\,\zeta _{v},\rho _{v})] \nonumber \\&\quad - \sum _{t=1}^{\infty }E_{q}[\log q(u_{v,t}\,|\,\gamma _{v,t_1},\gamma _{v,t_2})] - \sum _{n=1}^{N_v}E_{q}[\log q(z_n^v\,|\,\phi _n)] \nonumber \\&\quad - \sum _{t=1}^{\infty }E_{q}[\log q(\mu _{v,t}\,|\,\tilde{\mu }_{v,t},\tilde{\kappa }_{v,t})] \end{aligned}$$
(31)

We note that \(q(z_{n}^v>T)=0\) and \(E_q[\log (1-u_{v,T})]=0\) when truncating topics. Therefore,

$$\begin{aligned} L&= \sum _{n=1}^{N_v}\sum _{t=1}^{T} \phi _{n,t}E_q\left[ \log \mathrm{vMF}(x_n^v\,|\,\mu _{v,t},\kappa _{v,t})\right] + \sum _{n=1}^{N_v} E_{q}\left[ \log \prod _{t=1}^{T}(1-u_{v,t})^{1[z_{n}^v>t]}u_{v,t}^{1[z_n^v=t]}\right] \nonumber \\&\quad + \sum _{t=1}^{T}E_{q}[\log \mathrm{Beta}(u_{v,t}\,|\,1,\alpha _v)] + \sum _{t=1}^{T}E_{q}[\log \mathrm{vMF}(\mu _{v,t}\,|\,\zeta _{v},\rho _{v})] \nonumber \\&\quad - \sum _{t=1}^{T}E_{q}[\log \mathrm{Beta}(u_{v,t}\,|\,\gamma _{v,t_1},\gamma _{v,t_2})] - \sum _{n=1}^{N_v}\sum _{t=1}^{T} \phi _{n,t}\log \phi _{n,t} - \sum _{t=1}^{T}E_{q}[\log \mathrm{vMF}(\mu _{v,t}\,|\,\tilde{\mu }_{v,t},\tilde{\kappa }_{v,t})] \nonumber \\&= \sum _{n=1}^{N_v}\sum _{t=1}^{T} \phi _{n,t}\left( \log C_d(\kappa _{v,t})+ \kappa _{v,t}A_d(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}x_{n}^v\right) \nonumber \\&\quad +\sum _{n=1}^{N_v}\sum _{t=1}^{T}\left( \sum _{j=t+1}^{T} \phi _{n,j}(\varPsi (\gamma _{v,t_{2}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}})) + \phi _{n,t}(\varPsi (\gamma _{v,t_{1}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}}))\right) \nonumber \\&\quad + \sum _{t=1}^{T}(\alpha _v-1)(\varPsi (\gamma _{v,t_{2}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}})) + \sum _{t=1}^{T}\left( \log C_d(\rho _{v})+\rho _{v}A_d(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}\zeta _{v}\right) \nonumber \\&\quad + \sum _{t=1}^{T}\left( (\gamma _{v,t_{1}}-1)(\varPsi (\gamma _{v,t_{1}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}})) + (\gamma _{v,t_{2}}-1)(\varPsi (\gamma _{v,t_{2}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}}))\right) \nonumber \\&\quad - \sum _{n=1}^{N_v}\sum _{t=1}^{T} \phi _{n,t}\log \phi _{n,t} - \sum _{t=1}^{T}\left( \log C_d(\tilde{\kappa }_{v,t})+ \tilde{\kappa }_{v,t}A_d(\tilde{\kappa }_{v,t})\right) \end{aligned}$$
(32)

We now optimize this objective function to infer the variational parameters and to estimate the hyperparameters.
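
For concreteness, every update below evaluates the vMF log normalizer \(C_d(\kappa )\) and the Bessel-function ratio \(A_d(\kappa )=I_{d/2}(\kappa )/I_{d/2-1}(\kappa )\). The following is a minimal sketch (our own illustrative NumPy/SciPy code, not the authors’ implementation); it uses the exponentially scaled Bessel function `ive` for numerical stability at large \(\kappa \):

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function, I_nu(k) * exp(-k)

def log_C_d(kappa, d):
    """Log normalizer of the d-dimensional vMF density:
    C_d(kappa) = kappa^(d/2 - 1) / ((2*pi)^(d/2) * I_{d/2-1}(kappa)).
    Since ive(nu, k) = iv(nu, k) * exp(-k), we add k back inside the log."""
    nu = d / 2.0 - 1.0
    return nu * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) - (np.log(ive(nu, kappa)) + kappa)

def A_d(kappa, d):
    """Bessel ratio A_d(kappa) = I_{d/2}(kappa) / I_{d/2-1}(kappa);
    the exponential scaling factors of ive cancel in the ratio."""
    return ive(d / 2.0, kappa) / ive(d / 2.0 - 1.0, kappa)
```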

Variational parameter \(\phi _{n,t}\)

We know that \(\sum _{t=1}^T \phi _{n,t}=1\). Isolating the terms that contain \(\phi _{n}\) in the objective function (Eq. 32) and adding a Lagrange multiplier \(\lambda \), we obtain

$$\begin{aligned} L_{\phi _n}&= \sum _{t=1}^{T} \phi _{n,t}\left( \log C_d(\kappa _{v,t})+ \kappa _{v,t}A_d(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}x_{n}^v\right) \nonumber \\&\quad +\,\sum _{t=1}^{T}\left( \sum _{j=t+1}^{T} \phi _{n,j}\left( \varPsi (\gamma _{v,t_{2}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}})\right) \right. \nonumber \\&\left. \quad +\,\phi _{n,t}\left( \varPsi (\gamma _{v,t_{1}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}})\right) \right) -\sum _{t=1}^{T} \phi _{n,t}\log \phi _{n,t} + \lambda \left( \sum _{t=1}^T \phi _{n,t}-1\right) \end{aligned}$$
(33)

We compute the derivative with respect to \(\phi _{n,t}\) as follows

$$\begin{aligned} \frac{\delta L_{\phi _{n}}}{\delta \phi _{n,t}} =&\left( \log C_d(\kappa _{v,t})+ \kappa _{v,t}A_d(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}x_{n}^v\right) +\sum _{j=1}^{t-1}\left( \varPsi (\gamma _{v,j_{2}})-\varPsi (\gamma _{v,j_{1}}+\gamma _{v,j_{2}})\right) \nonumber \\&+ \left( \varPsi (\gamma _{v,t_{1}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}})\right) -1-\log \phi _{n,t} + \lambda \end{aligned}$$
(34)

When the \(\{\kappa _{v,t}\}_{t=1}^{T}\) are set to a common value (so that \(\log C_d(\kappa _{v,t})\) is constant across topics) and Eq. 34 is set to zero, we have

$$\begin{aligned} \phi _{n,t} \propto \exp (S_{n,t}) \end{aligned}$$
(35)

where

$$\begin{aligned} S_{n,t}&= \kappa _{v,t} A_{d}(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}x_{n}^v + \left( \varPsi (\gamma _{v,t_{1}})-\varPsi (\gamma _{v,t_{1}}+\gamma _{v,t_{2}})\right) \nonumber \\&\quad + \sum _{j=1}^{t-1}\left( \varPsi (\gamma _{v,j_{2}})-\varPsi (\gamma _{v,j_{1}}+\gamma _{v,j_{2}})\right) \end{aligned}$$
(36)
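
Eqs. 35–36 translate directly into a vectorized update. The sketch below is hypothetical code under the assumptions that `X` is the \((N_v \times d)\) matrix of unit-norm document vectors of class \(v\), the per-topic quantities are length-\(T\) arrays, and the helpers above are in scope; it also includes the standard stick-breaking updates for \(\gamma _{v,t_1},\gamma _{v,t_2}\) in the form of Blei and Jordan [30], which this appendix does not derive:

```python
from scipy.special import digamma

def update_phi(X, kappa, mu_tilde, kappa_tilde, gamma1, gamma2, d):
    """Eqs. 35-36: phi[n, t] proportional to exp(S[n, t]);
    the kappa_{v,t} are assumed equal across topics, so log C_d drops out."""
    # kappa_t * A_d(kappa_tilde_t) * mu_tilde_t^T x_n, for all n, t at once
    S = (kappa * A_d(kappa_tilde, d)) * (X @ mu_tilde.T)            # shape (N, T)
    e_log_u = digamma(gamma1) - digamma(gamma1 + gamma2)            # E[log u_t]
    e_log_1mu = digamma(gamma2) - digamma(gamma1 + gamma2)          # E[log(1 - u_t)]
    # stick-breaking weights: E[log pi_t] = E[log u_t] + sum_{j<t} E[log(1 - u_j)]
    S = S + e_log_u + np.concatenate(([0.0], np.cumsum(e_log_1mu)[:-1]))
    S -= S.max(axis=1, keepdims=True)                               # normalize in log space
    phi = np.exp(S)
    return phi / phi.sum(axis=1, keepdims=True)

def update_gamma(phi, alpha):
    """Stick-breaking updates for q(u_t) = Beta(gamma1_t, gamma2_t), cf. [30]."""
    col = phi.sum(axis=0)                        # sum_n phi[n, t]
    tail = col[::-1].cumsum()[::-1] - col        # sum_n sum_{j>t} phi[n, j]
    return 1.0 + col, alpha + tail
```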

Variational parameter \(\tilde{\mu }_{v,t}\)

We maximize Eq. 32 with respect to \(\tilde{\mu }_{v,t}\) subject to \(||\tilde{\mu }_{v,t}||=1\). By adding the appropriate Lagrange multiplier, we have

$$\begin{aligned}&L_{\tilde{\mu }_{v,t}} = \sum _{n=1}^{N_v} \phi _{n,t}\kappa _{v,t}A_d(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}x_{n}^v + \rho _{v}A_d(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}\zeta _{v} - \lambda \left( \tilde{\mu }_{v,t}^{T}\tilde{\mu }_{v,t}-1\right) \end{aligned}$$
(37)

Taking derivatives with respect to \(\tilde{\mu }_{v,t}\), we obtain

$$\begin{aligned} \frac{\delta L}{\delta \tilde{\mu }_{v,t}} = \sum _{n=1}^{N_v} \phi _{n,t}\kappa _{v,t}A_d(\tilde{\kappa }_{v,t})x_{n}^v + \rho _{v}A_d(\tilde{\kappa }_{v,t})\zeta _{v} - 2\lambda \tilde{\mu }_{v,t} \end{aligned}$$
(38)

Setting it to zero and noting that the positive scalar \(A_d(\tilde{\kappa }_{v,t})\) is a common factor of both terms, and therefore cancels under the normalization, we get

$$\begin{aligned} \tilde{\mu }_{v,t} = \frac{\sum _{n=1}^{N_v}\kappa _{v,t} \phi _{n,t}x_{n}^v+\rho _{v}\zeta _{v}}{\Vert \sum _{n=1}^{N_v}\kappa _{v,t} \phi _{n,t}x_{n}^v+\rho _{v}\zeta _{v}\Vert } \end{aligned}$$
(39)
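
In code, this update is a weighted resultant vector projected back onto the unit sphere; the canceled factor \(A_d(\tilde{\kappa }_{v,t})\) never needs to be computed. A sketch reusing the conventions above, where `zeta` is the prior mean direction \(\zeta _v\) and `rho` the prior concentration \(\rho _v\):

```python
def update_mu_tilde(X, phi, kappa, rho, zeta):
    """Eq. 39: normalized resultant of the weighted documents plus the prior direction."""
    # r_t = sum_n kappa_t * phi[n, t] * x_n + rho * zeta, for all t at once
    r = kappa[:, None] * (phi.T @ X) + rho * zeta[None, :]    # shape (T, d)
    return r / np.linalg.norm(r, axis=1, keepdims=True)
```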

Variational parameter \(\tilde{\kappa }_{v,t}\)

We maximize Eq. 32 with respect to \(\tilde{\kappa }_{v,t}\).

$$\begin{aligned}&L_{\tilde{\kappa }_{v,t}} = \sum _{n=1}^{N_v} \phi _{n,t} \kappa _{v,t}A_d(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}x_{n}^v+ \rho _{v}A_d(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}\zeta _{v} - (\log C_d(\tilde{\kappa }_{v,t})+ \tilde{\kappa }_{v,t}A_d(\tilde{\kappa }_{v,t})) \end{aligned}$$
(40)

We take the derivative with respect to \(\tilde{\kappa }_{v,t}\)

$$\begin{aligned} \frac{\delta L}{\delta \tilde{\kappa }_{v,t}} =&\sum _{n=1}^{N_v} \phi _{n,t} \kappa _{v,t}A_d^{'}(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}x_{n}^v+ \rho _{v}A_d^{'}(\tilde{\kappa }_{v,t})\tilde{\mu }_{v,t}^{T}\zeta _{v} \nonumber \\&- \left( \frac{C_d^{'}(\tilde{\kappa }_{v,t})}{C_d(\tilde{\kappa }_{v,t})} +A_d(\tilde{\kappa }_{v,t})+\tilde{\kappa }_{v,t}A_d^{'}(\tilde{\kappa }_{v,t})\right) \end{aligned}$$
(41)

where \(A_d^{'}(\tilde{\kappa }_{v,t})= \frac{\delta A_d(\tilde{\kappa }_{v,t})}{\delta \tilde{\kappa }_{v,t}}\) and \(C_d^{'}(\tilde{\kappa }_{v,t})= \frac{\delta C_d(\tilde{\kappa }_{v,t})}{\delta \tilde{\kappa }_{v,t}}\). Using the identity \(C_d^{'}(\kappa )/C_d(\kappa )=-A_d(\kappa )\) and setting this derivative to zero, we have:

$$\begin{aligned} \tilde{\kappa }_{v,t} = \sum _{n=1}^{N_v} \phi _{n,t} \kappa _{v,t}\tilde{\mu }_{v,t}^{T}x_{n}^v+ \rho _{v}\tilde{\mu }_{v,t}^{T}\zeta _{v} \end{aligned}$$
(42)
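
Eq. 42 is already in closed form. A sketch of the update, together with an illustrative outer loop that cycles the four updates for one class (the fixed iteration count is our assumption; in practice one would monitor the bound in Eq. 32 for convergence):

```python
def update_kappa_tilde(X, phi, kappa, rho, mu_tilde, zeta):
    """Eq. 42, derived via the identity C_d'(k) / C_d(k) = -A_d(k)."""
    # sum_n phi[n, t] * kappa_t * mu_tilde_t^T x_n + rho * mu_tilde_t^T zeta
    return kappa * np.einsum('nt,nt->t', phi, X @ mu_tilde.T) + rho * (mu_tilde @ zeta)

def coordinate_ascent(X, kappa, rho, zeta, alpha, T, n_iters=100):
    """Cycle the variational updates for one class v (illustrative only)."""
    N, d = X.shape
    rng = np.random.default_rng(0)
    mu_tilde = rng.normal(size=(T, d))
    mu_tilde /= np.linalg.norm(mu_tilde, axis=1, keepdims=True)   # random unit directions
    kappa_tilde = np.ones(T)
    gamma1, gamma2 = np.ones(T), np.full(T, float(alpha))
    for _ in range(n_iters):
        phi = update_phi(X, kappa, mu_tilde, kappa_tilde, gamma1, gamma2, d)
        gamma1, gamma2 = update_gamma(phi, alpha)
        mu_tilde = update_mu_tilde(X, phi, kappa, rho, zeta)
        kappa_tilde = update_kappa_tilde(X, phi, kappa, rho, mu_tilde, zeta)
    return phi, gamma1, gamma2, mu_tilde, kappa_tilde
```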


About this article

Cite this article

Van Linh, N., Anh, N.K., Than, K. et al. An effective and interpretable method for document classification. Knowl Inf Syst 50, 763–793 (2017). https://doi.org/10.1007/s10115-016-0956-6

