Automated Learning of RVM for Large Scale Text Sets: Divide to Conquer
Three methods for automated learning of Relevance Vector Machines (RVMs) on large-scale text sets are investigated and presented. The RVM's probabilistic Bayesian formulation provides both predictive distributions on test instances and model-based selection that yields a parsimonious solution. However, the baseline algorithm does not scale to most digital information processing applications. We examine the properties of the baseline RVM algorithm and propose new scaling approaches based on choosing appropriate working sets that retain the most informative data. Incremental, ensemble, and boosting algorithms are deployed to improve classification performance by exploiting the large training set available. Results on Reuters-21578 show performance gains while maintaining sparse solutions that can be deployed in distributed environments.
Keywords: Support Vector Machine, Sparse Solution, Relevance Vector Machine, Automated Learning, Sparse Bayesian Learning
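The incremental working-set idea described in the abstract (train on a small subset, then grow it with the most informative remaining examples) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a plain logistic-regression learner stands in for the RVM, "most informative" is approximated by proximity to the decision boundary, and all function names and parameters (`train_logistic`, `incremental_working_set`, `init`, `add`, `rounds`) are hypothetical.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=200):
    # Stand-in base learner (the paper uses an RVM; logistic regression
    # is substituted here only to keep the sketch self-contained).
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                                # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def incremental_working_set(X, y, init=20, add=10, rounds=5, seed=0):
    # Start from a small random working set, then repeatedly retrain and
    # pull in the unlabeled-pool examples the current model is least sure
    # about (closest to the decision boundary).
    rng = np.random.default_rng(seed)
    pool = np.arange(len(y))
    work = rng.choice(pool, size=init, replace=False)
    rest = np.setdiff1d(pool, work)
    for _ in range(rounds):
        w, b = train_logistic(X[work], y[work])
        if len(rest) == 0:
            break
        p = 1.0 / (1.0 + np.exp(-(X[rest] @ w + b)))
        picked = rest[np.argsort(np.abs(p - 0.5))[:add]]  # most uncertain
        work = np.concatenate([work, picked])
        rest = np.setdiff1d(rest, picked)
    return work, (w, b)
```

On well-separated synthetic data this keeps the working set far smaller than the full training set while the final model still classifies the whole set accurately, which is the trade-off the paper's scaling approaches target.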