Abstract
Word segmentation of ancient Chinese texts is fundamental to the intelligent processing of ancient books. In this paper, an unsupervised lexicon-construction algorithm based on the minimum entropy model is applied to a large-scale corpus of ancient texts, and a lexicon of frequently co-occurring neighboring characters is extracted. Two experiments were performed with this lexicon. First, segmentation results on ancient texts are compared before and after the lexicon is imported into a word segmentation tool. Second, personal names, place names, official titles, and interpersonal relationships from CBDB are added to the lexicon, and segmentation results before and after importing this optimized lexicon are compared. The two experiments show that the lexicon improves segmentation to different degrees for ancient texts of different periods, while the optimization with CBDB data yields no obvious improvement. This article is one of the few works that applies monolingual word segmentation to ancient Chinese, and it enriches research in related fields.
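The lexicon-construction step described above can be illustrated with a minimal sketch: scan the corpus, count adjacent character pairs, and keep pairs that co-occur far more often than chance. The function name, thresholds, and the pointwise-mutual-information criterion below are illustrative assumptions; Su's minimum entropy formulation additionally weighs the coding length saved by merging characters and handles candidates longer than two characters.

```python
from collections import Counter
from math import log

def extract_bigram_lexicon(texts, min_count=5, min_pmi=1.0):
    """Collect frequently co-occurring neighbor characters as candidate words.

    A simplified PMI-style sketch of unsupervised word discovery; thresholds
    are hypothetical, not the paper's reported settings.
    """
    char_counts = Counter()
    pair_counts = Counter()
    for text in texts:
        chars = list(text)
        char_counts.update(chars)
        pair_counts.update(zip(chars, chars[1:]))  # adjacent character pairs

    total_chars = sum(char_counts.values())
    total_pairs = sum(pair_counts.values())
    lexicon = {}
    for (a, b), n in pair_counts.items():
        if n < min_count:
            continue  # discard rare pairs before scoring
        p_ab = n / total_pairs
        p_a = char_counts[a] / total_chars
        p_b = char_counts[b] / total_chars
        pmi = log(p_ab / (p_a * p_b))  # high PMI = co-occurrence above chance
        if pmi >= min_pmi:
            lexicon[a + b] = pmi
    return lexicon
```

In practice such a candidate lexicon would then be fed to a segmentation tool as a user dictionary, which is the role the extracted lexicon plays in the experiments above.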
References
Su, J.: Principle of minimum entropy (2): lexicon construction [J/OL] (2018). https://kexue.fm/archives/5476
Harvard University, Academia Sinica, and Peking University: China Biographical Database Project (CBDB) [M/OL]. https://projects.iq.harvard.edu/chinesecbdb. Accessed 24 Apr 2018
Chu, C., Nakazawa, T., Kawahara, D., et al.: Chinese-Japanese machine translation exploiting Chinese characters. ACM Trans. Asian Lang. Inf. Process. 12(4), 1–25 (2013)
Che, C., Zhao, H., Wu, X., Zhou, D., Zhang, Q.: A word segmentation method of ancient Chinese based on word alignment. In: Tang, J., Kan, M.-Y., Zhao, D., Li, S., Zan, H. (eds.) NLPCC 2019. LNCS (LNAI), vol. 11838, pp. 761–772. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32233-5_59
Zheng, X., Chen, H., Xu, T.: Deep learning for Chinese word segmentation and POS tagging. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA, October 2013. Association for Computational Linguistics
Liu, Y., Che, W., Guo, J., Qin, B., Liu, T.: Exploring segment representations for neural segmentation models. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pp. 2880–2886. AAAI Press, New York (2016)
Su, J.: NLP library based on the principle of minimum entropy: nlp_zero [J/OL] (2018). https://kexue.fm/archives/5597
Acknowledgement
This work is funded by the Characteristic Innovation Project (No. 19TS15) of Guangdong University of Foreign Studies.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Li, Y., Liang, J., Huang, X. (2021). Ancient Chinese Lexicon Construction Based on Unsupervised Algorithm of Minimum Entropy and CBDB Optimization. In: Zu, Q., Tang, Y., Mladenović, V. (eds) Human Centered Computing. HCC 2020. Lecture Notes in Computer Science, vol 12634. Springer, Cham. https://doi.org/10.1007/978-3-030-70626-5_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-70625-8
Online ISBN: 978-3-030-70626-5
eBook Packages: Computer Science; Computer Science (R0)