Abstract
Chinese input recommendation plays an important role in reducing the human effort of typing Chinese words, especially in mobile applications. The fundamental problem is to predict the conditional probability of the next word given the sequence of preceding words. Statistical language models, i.e., n-gram based models, have therefore been used extensively for this task in real applications. However, highly diverse typing behaviors usually lead to a severe sparsity problem, under which even n-gram models with smoothing fail. A reasonable approach to tackling this problem is to use recently proposed neural models, such as the neural probabilistic language model (NLM), the recurrent neural network (RNN) and word2vec, which can leverage semantically similar words when estimating the probability. However, there is no conclusion on which of the two families works better in real applications. In this paper, we conduct an extensive empirical study of the differences between statistical and neural language models. The experimental results show that the two approaches have their own advantages, and that a hybrid approach brings a significant improvement.
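To make the prediction task concrete, the sketch below estimates P(w | w1, w2) with a trigram model and add-k smoothing, then recommends the most probable next words. This is an illustrative assumption, not the system studied in the paper: the toy corpus, the smoothing constant k and the helper names are made up for exposition.

```python
from collections import Counter, defaultdict

# Toy trigram recommender with add-k smoothing (illustrative only).
# The tokens stand in for Chinese words after segmentation.
corpus = ["我 想 吃 苹果", "我 想 吃 香蕉", "我 想 去 北京"]

vocab = set()
trigram_counts = defaultdict(Counter)  # (w1, w2) -> Counter of next words
for sentence in corpus:
    words = sentence.split()
    vocab.update(words)
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        trigram_counts[(w1, w2)][w3] += 1

def next_word_probs(w1, w2, k=0.1):
    """P(w | w1, w2) with add-k smoothing over the whole vocabulary."""
    counts = trigram_counts[(w1, w2)]
    total = sum(counts.values()) + k * len(vocab)
    return {w: (counts[w] + k) / total for w in vocab}

# Recommend the three most probable next words for the context "我 想".
probs = next_word_probs("我", "想")
for word, p in sorted(probs.items(), key=lambda kv: -kv[1])[:3]:
    print(word, round(p, 3))
```

With realistic data the context counts become sparse very quickly, which is exactly the failure mode that motivates both smoothing and the neural models compared in the paper.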
Notes
1. N-grams of order up to 5 (i.e., 4 words of context) have been reported; due to data sparsity, however, most predictions are made with a much shorter context. A simplified back-off sketch is given after these notes.
2. Since word2vec was proposed as a simplified version of the NLM, and the RNN can be viewed as a more complex model than the NLM, we conduct the discussions as word2vec vs. NLM and RNN vs. NLM, respectively.
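As a companion to Note 1, here is a hedged sketch of backing off to shorter contexts when a longer context was never observed. The discount factor alpha and the count structures are illustrative assumptions in the spirit of simple back-off schemes, not the smoothing actually used in the experiments.

```python
from collections import Counter, defaultdict

# Illustrative back-off: try the longest context first, then shorten it.
# ngram_counts[n] maps an (n-1)-word context tuple to a Counter of next words.
ngram_counts = {n: defaultdict(Counter) for n in range(1, 6)}

def train(sentences):
    for sentence in sentences:
        words = sentence.split()
        for n in range(1, 6):
            for i in range(len(words) - n + 1):
                context = tuple(words[i:i + n - 1])
                ngram_counts[n][context][words[i + n - 1]] += 1

def backoff_prob(context, word, alpha=0.4):
    """Score `word` given up to 4 words of context, dropping the earliest
    context word and discounting by alpha each time we back off."""
    context = tuple(context[-4:])
    discount = 1.0
    while True:
        counts = ngram_counts[len(context) + 1][context]
        if counts[word] > 0:
            return discount * counts[word] / sum(counts.values())
        if not context:
            return 0.0  # word never seen, even as a unigram
        context = context[1:]  # back off to a shorter context
        discount *= alpha

train(["我 想 吃 苹果", "我 想 吃 香蕉"])
# The 3-word context "他 想 吃" was never seen, so the model backs off
# to "想 吃" and still recommends a sensible continuation.
print(backoff_prob(["他", "想", "吃"], "苹果"))  # 0.4 * 1/2 = 0.2
```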
Acknowledgments
This work was funded by the 973 Program of China under Grant No. 2014CB340401, the National Key R&D Program of China under Grant No. 2016QY02D0405, the National Natural Science Foundation of China (NSFC) under Grants No. 61232010, 61472401, 61433014, 61425016, and 61203298, the Key Research Program of the CAS under Grant No. KGZD-EW-T03-2, and the Youth Innovation Promotion Association CAS under Grants No. 20144310 and 2016102.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Zhang, H., Lan, Y., Guo, J., Xu, J., Cheng, X. (2017). Neural or Statistical: An Empirical Study on Language Models for Chinese Input Recommendation on Mobile. In: Wen, J., Nie, J., Ruan, T., Liu, Y., Qian, T. (eds) Information Retrieval. CCIR 2017. Lecture Notes in Computer Science, vol 10390. Springer, Cham. https://doi.org/10.1007/978-3-319-68699-8_1
Print ISBN: 978-3-319-68698-1
Online ISBN: 978-3-319-68699-8