Abstract
As the number of documents continues to grow rapidly, automatic text categorization has become a fundamental task in information retrieval and text mining. Accuracy and interpretability are two important aspects of a text classifier: while accuracy measures the ability to classify unseen data correctly, interpretability is the ability of the classifier to be understood by humans and to provide reasons why each data instance is assigned to a label. This paper proposes an interpretable classification method that exploits the Dirichlet process mixture model of von Mises–Fisher distributions for directional data. By using the label information of the training data explicitly and determining the number of topics for each class automatically, the learned topics are coherent, relevant and discriminative: they help interpret as well as distinguish classes. Our experimental results show the advantages of our approach in terms of separability, interpretability and effectiveness in classifying datasets with high dimension and complex distribution. Our method is highly competitive with state-of-the-art approaches.
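As a concrete illustration of the directional-data view (a minimal sketch, not the authors' implementation; the tf-idf-like features and parameter values are assumptions), the following Python snippet L2-normalizes document vectors onto the unit hypersphere and evaluates the von Mises–Fisher log-density \(\log f(x;\mu ,\kappa ) = \log C_d(\kappa ) + \kappa \mu ^\top x\):

import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function I_nu

def log_vmf_density(x, mu, kappa):
    # log C_d(kappa) = (d/2 - 1) log kappa - (d/2) log(2 pi) - log I_{d/2-1}(kappa);
    # ive(nu, k) = I_nu(k) * exp(-k), which keeps the computation stable for large kappa.
    d = x.shape[0]
    nu = d / 2.0 - 1.0
    log_bessel = np.log(ive(nu, kappa)) + kappa
    log_norm = nu * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) - log_bessel
    return log_norm + kappa * mu @ x

# Hypothetical tf-idf-like vectors; documents live on the unit sphere after L2 normalization.
rng = np.random.default_rng(0)
docs = np.abs(rng.normal(size=(3, 50)))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

mu = docs.mean(axis=0)
mu /= np.linalg.norm(mu)  # mean direction (unit vector)
print(log_vmf_density(docs[0], mu, kappa=100.0))

This density and normalizer follow Banerjee et al.'s formulation of vMF clustering cited in the references; in the paper, such vMF components serve as the mixture atoms of a per-class Dirichlet process.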
Notes
Mview-LDA is inferred by a variational method. It is based on the source code at http://www.cs.cmu.edu/~pengtaox/code/.
The source code of GLDA is provided by Li.
Source code is available on Zhu’s home page: http://bigml.cs.tsinghua.edu.cn/~jun/medlda.shtml.
References
Linh NV, Anh NK, Than K, Tat NN (2015) Effective and interpretable document classification using distinctly labeled Dirichlet process mixture models of von Mises–Fisher distributions. In: Database systems for advanced applications. Springer, Switzerland, pp 139–153
Delgado MF, Cernadas E, Barro S, Amorim DG (2014) Do we need hundreds of classifiers to solve real world classification problems? J Mach Learn Res 15(1):3133–3181
Van de Merckt T, Decaestecker C (1995) About breaking the trade off between accuracy and comprehensibility in concept learning. In: IJCAI’95 workshop on machine learning and comprehensibility
Hofmann T (2001) Unsupervised learning by probabilistic latent semantic analysis. Mach Learn 42(1–2):177–196
Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022
Ramage D, Manning CD, Dumais S (2011) Partially labeled topic models for interpretable text mining. In: Proceedings of the 17th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 457–465
Ahmed A, Xing EP (2010) Staying informed: supervised and semi-supervised multi-view topical analysis of ideological perspective. In: Proceedings of the 2010 conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 1140–1150
Anh NK, Tam NT, Linh NV (2013) Document clustering using Dirichlet process mixture model of von Mises–Fisher distributions. In: 4th International symposium on information and communication technology, pp 131–138
Shotwell MS, Slate EH et al (2011) Bayesian outlier detection with Dirichlet process mixtures. Bayesian Anal 6(4):665–690
Manning CD, Raghavan P, Schütze H et al (2008) Introduction to information retrieval, vol 1. Cambridge University Press, Cambridge
Anh NK, Van Linh N, Ky LH, Tam NT (2013) Document classification using semi-supervised mixture model of von Mises–Fisher distributions on document manifold. In: Proceedings of the fourth symposium on information and communication technology. ACM, pp 94–100
Anh NK, Tam NT, Linh NV (2013) Document clustering using mixture model of von Mises–Fisher distributions on document manifold. In: International conference on soft computing and pattern recognition, pp 140–145
Gopal S, Yang Y (2014) Von Mises–Fisher clustering models. In: Proceedings of the 31st international conference on machine learning, pp 154–162
Reisinger J, Waters A, Silverthorn B, Mooney RJ (2010) Spherical topic models. In: Proceedings of the 27th international conference on machine learning (ICML-10), pp 903–910
Zhong S, Ghosh J (2005) Generative model-based document clustering: a comparative study. Knowl Inf Syst 8:374–384
Zhu J, Ahmed A, Xing EP (2012) MedLDA: maximum margin supervised topic models. J Mach Learn Res 13(1):2237–2278
Blei DM, McAuliffe JD (2007) Supervised topic models. In: Advances in neural information processing systems 20, proceedings of the twenty-first annual conference on neural information processing systems, Vancouver, BC, Canada, December 3–6, 2007, pp 121–128
Wang C, Blei DM, Li F (2009) Simultaneous image classification and annotation. In: IEEE computer society conference on computer vision and pattern recognition (CVPR 2009), 20–25 June 2009, Miami, Florida, USA, pp 1903–1910
Anh NK, Linh NV, Toi NK, Tam NT (2013) Multi-labeled document classification using semi-supervised mixture model of Watson distributions on document manifold. In: International conference on soft computing and pattern recognition, pp 123–128
Than K, Ho TB, Nguyen DK (2014) An effective framework for supervised dimension reduction. Neurocomputing 139:397–407
Lacoste-Julien S, Sha F, Jordan MI (2008) DiscLDA: discriminative learning for dimensionality reduction and classification. In: Koller D, Schuurmans D, Bengio Y, Bottou L (eds) Advances in neural information processing systems 21. Curran Associates, Inc., pp 897–904
Ramage D, Hall D, Nallapati R, Manning CD (2009) Labeled LDA: a supervised topic model for credit attribution in multi-labeled corpora. In: Proceedings of the 2009 conference on empirical methods in natural language processing, volume 1. Association for Computational Linguistics, pp 248–256
Banerjee A, Dhillon IS, Ghosh J, Sra S (2005) Clustering on the unit hypersphere using von Mises–Fisher distributions. J Mach Learn Res 6:1345–1382
Xie P, Xing EP (2013) Integrating document clustering and topic modeling. In: Proceedings of the twenty-ninth conference on uncertainty in artificial intelligence, Bellevue, WA, USA, August 11–15, 2013
Li X, OuYang J, Lu Y, Zhou X, Tian T (2015) Group topic model: organizing topics into groups. Inf Retr J 18(1):1–25
Ferguson TS (1973) A Bayesian analysis of some nonparametric problems. Ann Stat 1(2):209–230
Sethuraman J (1994) A constructive definition of Dirichlet priors. Stat Sin 4:639–650
Ishwaran H, James LF (2001) Gibbs sampling methods for stick-breaking priors. J Am Stat Assoc 96(453):161–173
Neal RM (2000) Markov chain sampling methods for Dirichlet process mixture models. J Comput Graph Stat 9(2):249–265
Blei DM, Jordan MI (2006) Variational inference for Dirichlet process mixtures. Bayesian Anal 1(1):121–144
Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J (2013) Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems, pp 3111–3119
Jiang Q, Zhu J, Sun M, Xing EP (2012) Monte Carlo methods for maximum margin supervised topic models. In: Advances in neural information processing systems, pp 1592–1600
Van der Maaten L, Hinton G (2008) Visualizing data using t-SNE. J Mach Learn Res 9:2579–2605
Röder M, Both A, Hinneburg A (2015) Exploring the space of topic coherence measures. In: Proceedings of the eighth ACM international conference on web search and data mining, WSDM 2015, Shanghai, China, February 2–6, 2015, pp 399–408
Niyogi P (2013) Manifold regularization and semi-supervised learning: some theoretical analyses. J Mach Learn Res 14(1):1229–1250
Ng AY, Jordan MI (2001) On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. In: Advances in neural information processing systems, pp 841–848
Acknowledgments
This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant Number 102.05-2014.28 and by the Air Force Office of Scientific Research (AFOSR), Asian Office of Aerospace Research & Development (AOARD), and US Army International Technology Center, Pacific (ITC-PAC) under Award Number FA2386-15-1-4011.
Additional information
A part of this work appears in [1].
Appendices
Appendix
1.1 Lower bound function
The log likelihood of the corpus is bounded from below by applying Jensen's inequality:
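In its generic mean-field form (a sketch; the paper's bound instantiates the latent variables \(Z\) with the topic assignments and stick-breaking variables), Jensen's inequality gives

\[ \log p(\mathcal{D}) = \log \int p(\mathcal{D}, Z)\,\frac{q(Z)}{q(Z)}\, dZ \;\ge\; \mathbb{E}_q[\log p(\mathcal{D}, Z)] - \mathbb{E}_q[\log q(Z)] = \mathcal{L}(q). \]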
We expand this lower bound as follows
We note that \(q(z_{n}^v>T)=0\) and \(E_q[\log (1-u_{v,T})]=0\) when truncating the topics at level \(T\). Therefore,
Now, we optimize this objective function to infer variational parameters and estimate hyperparameters.
Variational parameter \(\phi _{n,t}\)
We know that \(\sum _{t=1}^T \phi _{n,t}=1\); by isolating the terms that contain \(\phi _{n}\) in the objective function (Eq. 32) and adding a Lagrange multiplier \(\lambda \), we obtain
We compute the derivative with respect to \(\phi _{n,t}\) as follows
When \(\{\kappa _{v,t}\}_{t=1}^{T}\) are set equal and Eq. 34 is set to zero, we have
where
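A sketch of the standard form of this update for truncated variational DP mixtures (assuming the stick-breaking factorization of Blei and Jordan, cited above; the exact terms depend on Eq. 32):

\[ \phi _{n,t} \;\propto\; \exp \Big ( \mathbb {E}_q[\log u_{v,t}] + \sum _{j<t} \mathbb {E}_q[\log (1-u_{v,j})] + \mathbb {E}_q\big [\log f(x_n \mid \mu _{v,t}, \kappa _{v,t})\big ] \Big ), \]

normalized so that \(\sum _{t=1}^T \phi _{n,t}=1\).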
Variational parameter \(\tilde{\mu }_{v,t}\)
We maximize Eq. 32 with respect to \(\tilde{\mu }_{v,t}\) subject to \(||\tilde{\mu }_{v,t}||=1\). By adding the appropriate Lagrange multiplier, we have
Taking derivatives with respect to \(\tilde{\mu }_{v,t}\), we obtain
Setting it to zero, we get
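For a vMF likelihood with a vMF prior \(\mathrm{vMF}(\mu _0, \kappa _0)\) on the mean direction (an assumption; the paper's prior may differ), the resulting unit-norm update typically reads

\[ \tilde{\mu }_{v,t} \;=\; \frac{\kappa _0 \mu _0 + \kappa _{v,t} \sum _n \phi _{n,t}\, x_n}{\big \Vert \kappa _0 \mu _0 + \kappa _{v,t} \sum _n \phi _{n,t}\, x_n \big \Vert }, \]

i.e., the normalized sum of the prior direction and the responsibility-weighted data.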
Variational parameter \(\tilde{\kappa }_{v,t}\)
We maximize Eq. 32 with respect to \(\tilde{\kappa }_{v,t}\).
We take the derivative with respect to \(\tilde{\kappa }_{v,t}\)
where \(A_d^{'}(\tilde{\kappa }_{v,t})= \frac{\partial A_d(\tilde{\kappa }_{v,t})}{\partial \tilde{\kappa }_{v,t}}\) and \(C_d^{'}(\tilde{\kappa }_{v,t})= \frac{\partial C_d(\tilde{\kappa }_{v,t})}{\partial \tilde{\kappa }_{v,t}}\). Setting this derivative to zero, we have:
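Under the same vMF-prior assumption as above, and using the standard identities \(C_d'(\kappa )/C_d(\kappa ) = -A_d(\kappa )\) and \(\mathbb {E}_q[\mu _{v,t}] = A_d(\tilde{\kappa }_{v,t})\,\tilde{\mu }_{v,t}\), the stationarity condition reduces to a sketch of the form

\[ A_d'(\tilde{\kappa }_{v,t}) \Big ( \tilde{\mu }_{v,t}^{\top }\big (\kappa _0 \mu _0 + \kappa _{v,t} \sum _n \phi _{n,t}\, x_n \big ) - \tilde{\kappa }_{v,t} \Big ) = 0, \]

and since \(A_d'(\kappa )>0\) for \(\kappa >0\), this gives \(\tilde{\kappa }_{v,t} = \tilde{\mu }_{v,t}^{\top }\big (\kappa _0 \mu _0 + \kappa _{v,t} \sum _n \phi _{n,t}\, x_n \big )\).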
Cite this article
Van Linh, N., Anh, N.K., Than, K. et al. An effective and interpretable method for document classification. Knowl Inf Syst 50, 763–793 (2017). https://doi.org/10.1007/s10115-016-0956-6