Abstract
In information retrieval, Latent Semantic Analysis (LSA) is a method for handling large, sparse document vectors. LSA reduces the dimension of document vectors by statistically deriving a set of topics from the documents and terms. Consequently, it requires a sufficiently large number of words and takes no account of the semantic relations between words.
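As a minimal sketch of the LSA baseline described above, the following uses scikit-learn's `TruncatedSVD` on a TF-IDF term-document matrix; the four toy documents are hypothetical stand-ins (the paper's experiments use the BBC dataset):

```python
# Sketch of LSA dimension reduction on a hypothetical toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the stock market fell sharply today",
    "shares and stocks dropped in the market",
    "the team won the football match",
    "a great goal decided the football game",
]

# Term-document matrix: large and sparse in realistic settings.
tfidf = TfidfVectorizer().fit_transform(docs)

# LSA: truncated SVD projects each document onto k latent topics,
# reducing the vector dimension from vocabulary size to k.
k = 2
lsa = TruncatedSVD(n_components=k, random_state=0)
doc_vectors = lsa.fit_transform(tfidf)  # shape: (n_documents, k)
```

The topics are purely statistical: they emerge from term co-occurrence, which is why LSA needs enough text to work with.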
In this paper, the dimension of document vectors is instead reduced to the number of word clusters by clustering words according to their semantic distances. Word distances can be calculated using WordNet or Word2Vec, so the method does not depend on the number of words or documents. For very small documents in particular, we use each word's definition in a dictionary and calculate the similarities between documents. To demonstrate the standard case, we use the classification problem on the BBC dataset and evaluate the accuracies of document clusters produced by LSA, by word clustering with WordNet, and by word clustering with Word2Vec.
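The word-clustering idea can be sketched as follows. The semantic distance matrix here is hypothetical hand-filled data; in practice it would come from WordNet similarities (e.g. Wu-Palmer) or Word2Vec cosine distances, and the clustering method shown (average-linkage hierarchical clustering) is one plausible choice, not necessarily the paper's:

```python
# Sketch: represent a document by word-cluster counts instead of raw terms.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

words = ["car", "automobile", "fruit", "apple"]
# Hypothetical pairwise semantic distances (0 = identical meaning).
dist = np.array([
    [0.0,  0.1,  0.9,  0.85],
    [0.1,  0.0,  0.9,  0.85],
    [0.9,  0.9,  0.0,  0.2 ],
    [0.85, 0.85, 0.2,  0.0 ],
])

# Average-linkage hierarchical clustering on the semantic distances.
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")  # cluster id per word

# Document vector: one dimension per word cluster.
doc_tokens = ["car", "apple", "automobile"]
doc_vector = np.zeros(labels.max())
for tok in doc_tokens:
    doc_vector[labels[words.index(tok)] - 1] += 1
```

Because the cluster count is fixed by the vocabulary's semantic structure rather than by corpus statistics, the resulting dimension does not grow with the number of words or documents.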
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Deguchi, T., Ishii, N. (2021). Document Similarity by Word Clustering with Semantic Distance. In: Sanjurjo González, H., Pastor López, I., García Bringas, P., Quintián, H., Corchado, E. (eds) Hybrid Artificial Intelligent Systems. HAIS 2021. Lecture Notes in Computer Science, vol 12886. Springer, Cham. https://doi.org/10.1007/978-3-030-86271-8_1
Print ISBN: 978-3-030-86270-1
Online ISBN: 978-3-030-86271-8