
Neural or Statistical: An Empirical Study on Language Models for Chinese Input Recommendation on Mobile

  • Conference paper
  • In: Information Retrieval (CCIR 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10390)


Abstract

Chinese input recommendation plays an important role in reducing the human effort of typing Chinese words, especially in mobile applications. The fundamental problem is to predict the conditional probability of the next word given the sequence of previous words. Statistical language models, i.e., n-gram-based models, have therefore been used extensively for this task in real applications. However, extremely diverse typing behaviors usually lead to a serious sparsity problem, under which even n-gram models with smoothing fail. A reasonable way to tackle this problem is to use recently proposed neural models, such as the probabilistic neural language model, the recurrent neural network, and word2vec, which can leverage semantically similar words when estimating the probability. However, there is no conclusion on which of the two approaches works better in real applications. In this paper, we conduct an extensive empirical study of the differences between statistical and neural language models. The experimental results show that the two approaches have their own advantages, and that a hybrid approach brings a significant improvement.
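The abstract does not specify how the hybrid model combines the two approaches; a common choice is linear interpolation of the component next-word distributions. The sketch below is a minimal illustration under that assumption (the names `hybrid_prob`, `recommend`, `ngram_prob`, `neural_prob`, and the weight `lam` are hypothetical, not taken from the paper):

```python
# Minimal sketch of a hybrid next-word recommender via linear interpolation.
# Assumption: the paper's actual combination scheme is not given on this page.

def hybrid_prob(ngram_prob, neural_prob, history, word, lam=0.5):
    """P(word | history) as a convex combination of two component models.

    ngram_prob and neural_prob are callables (history, word) -> probability;
    lam is the interpolation weight given to the statistical model.
    """
    return lam * ngram_prob(history, word) + (1.0 - lam) * neural_prob(history, word)

def recommend(candidates, ngram_prob, neural_prob, history, k=3, lam=0.5):
    """Rank candidate next words by hybrid probability and return the top k."""
    scored = [(w, hybrid_prob(ngram_prob, neural_prob, history, w, lam))
              for w in candidates]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]
```

Provided each component distribution is normalized over the same vocabulary and `lam` lies in [0, 1], the interpolated scores again form a valid probability distribution.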


Notes

  1. N-gram orders of up to 5 (i.e., 4 words of context) have been reported; due to data sparsity, however, most predictions are made with a much shorter context (see the back-off sketch after these notes).

  2. Since word2vec was proposed as a simplified version of the NLM, and the RNN can be viewed as more complicated than the NLM, we frame the discussion as word2vec vs. NLM and RNN vs. NLM, respectively.
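To make note 1 concrete, the sketch below shows one simple back-off scheme ("stupid back-off" with a fixed penalty, an illustrative assumption rather than the smoothing method used in the paper): when a high-order n-gram has never been observed, the model repeatedly shortens the context, which is why most predictions effectively use far fewer than 4 words of history.

```python
from collections import Counter

def count_ngrams(tokens, max_order=5):
    """Count every n-gram of order 1..max_order in a token list."""
    counts = Counter()
    for n in range(1, max_order + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def backoff_score(counts, history, word, alpha=0.4):
    """'Stupid back-off': shorten the history until the n-gram has been seen."""
    history = tuple(history)
    total_unigrams = sum(c for g, c in counts.items() if len(g) == 1)
    score = 1.0
    while True:
        if counts[history + (word,)] > 0:
            denom = counts[history] if history else total_unigrams
            return score * counts[history + (word,)] / denom
        if not history:
            return 0.0            # the word was never observed at all
        history = history[1:]     # drop the oldest word: back off one order
        score *= alpha            # fixed penalty per back-off step
```

Note that stupid back-off returns relative scores rather than normalized probabilities; properly normalized smoothing schemes such as Katz back-off or Kneser-Ney achieve the same effect of gracefully falling back to shorter contexts.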


Acknowledgments

The work was funded by the 973 Program of China under Grant No. 2014CB340401, the National Key R&D Program of China under Grant No. 2016QY02D0405, the National Natural Science Foundation of China (NSFC) under Grants No. 61232010, 61472401, 61433014, 61425016, and 61203298, the Key Research Program of the CAS under Grant No. KGZD-EW-T03-2, and the Youth Innovation Promotion Association CAS under Grants No. 20144310 and 2016102.

Author information


Corresponding author

Correspondence to Hainan Zhang.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Zhang, H., Lan, Y., Guo, J., Xu, J., Cheng, X. (2017). Neural or Statistical: An Empirical Study on Language Models for Chinese Input Recommendation on Mobile. In: Wen, J., Nie, J., Ruan, T., Liu, Y., Qian, T. (eds) Information Retrieval. CCIR 2017. Lecture Notes in Computer Science, vol 10390. Springer, Cham. https://doi.org/10.1007/978-3-319-68699-8_1


  • DOI: https://doi.org/10.1007/978-3-319-68699-8_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-68698-1

  • Online ISBN: 978-3-319-68699-8

  • eBook Packages: Computer Science (R0)
