
Cluster Computing, Volume 22, Supplement 1, pp. 2089–2100

Who is answering whom? Finding “Reply-To” relations in group chats with deep bidirectional LSTM networks

  • Gaoyang Guo
  • Chaokun Wang (corresponding author)
  • Jun Chen
  • Pengcheng Ge
  • Weijun Chen

Abstract

Social networks facilitate communication among Internet users while generating large volumes of online short-text conversations every day. This leads to a huge number of free-style asynchronous conversations in which multiple users are involved and multiple topics are discussed at the same time in the same place, e.g., an instant group chat in WeChat. An interesting problem emerges here: with so many users and topics, the conversation structure can become tangled, which often prevents users from finding the messages they are interested in. For example, when a user enters a conversation, (s)he usually does not want to read all the historical messages, but only hopes to get the messages most relevant to those (s)he cares about. It is therefore an essential task to understand the logical correlations among messages, which benefits text mining, natural language processing, and web intelligence techniques. In this paper, we focus on “reply-to” relations between messages in group chats, such as Q&A pairs. First, a model called LSTM-RT, based on deep bidirectional LSTM networks, is presented to predict the “reply-to” relations between messages. Then, three versions of the LSTM-RT model are proposed: the first is based on a non-siamese architecture that processes ordered message pairs; the other two are end-to-end models operating at the word level and the sentence level, respectively. Finally, experiments conducted on two real-world group chat data sets demonstrate the effectiveness of the proposed model.
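
As a rough illustration of the architecture the abstract describes, here is a minimal sketch, assuming a PyTorch implementation, of the non-siamese variant: a deep (two-layer) bidirectional LSTM encodes each message of an ordered pair, and a feed-forward head scores whether the later message replies to the earlier one. Everything below (the ReplyToScorer name, mean pooling, layer sizes) is an illustrative assumption, not the authors' implementation.

    # Hypothetical sketch, NOT the paper's code: a bidirectional-LSTM
    # "reply-to" scorer for ordered message pairs.
    import torch
    import torch.nn as nn

    class ReplyToScorer(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            # Deep (2-layer) bidirectional LSTM shared by both messages.
            self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                                bidirectional=True, batch_first=True)
            # Non-siamese head: the two encodings are concatenated in order,
            # so swapping the pair changes the prediction.
            self.classify = nn.Sequential(
                nn.Linear(4 * hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )

        def encode(self, token_ids):
            out, _ = self.lstm(self.embed(token_ids))  # (B, T, 2*hidden_dim)
            return out.mean(dim=1)                     # mean-pool over time

        def forward(self, earlier_ids, later_ids):
            pair = torch.cat([self.encode(earlier_ids),
                              self.encode(later_ids)], dim=-1)
            return torch.sigmoid(self.classify(pair)).squeeze(-1)  # P(reply-to)

    # Toy usage: a batch of two message pairs, six token ids each.
    model = ReplyToScorer(vocab_size=1000)
    earlier = torch.randint(1, 1000, (2, 6))
    later = torch.randint(1, 1000, (2, 6))
    print(model(earlier, later))  # two probabilities in (0, 1)

The ordered, non-symmetric head matters because "message B replies to message A" and "message A replies to message B" are different predictions, which is why the pair is concatenated in a fixed order rather than encoded symmetrically as in a siamese network.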

Keywords

Group chats · “Reply-to” relations · LSTM networks

Notes

Acknowledgements

This work was supported in part by the Intelligent Manufacturing Comprehensive Standardization and New Pattern Application Project of Ministry of Industry and Information Technology (Experimental validation of key technical standards for trusted services in industrial Internet), the National Natural Science Foundation of China (No. 61373023), and the China National Arts Fund (No. 20164129).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Gaoyang Guo (1)
  • Chaokun Wang (1) (corresponding author)
  • Jun Chen (1)
  • Pengcheng Ge (2)
  • Weijun Chen (3)

  1. School of Software, Tsinghua University, Beijing, China
  2. Lenovo Information Technology Ltd, Beijing, China
  3. Department of Computer Science and Technology, Tsinghua University, Beijing, China
