Abstract
Recently, many works have proposed to cast human-machine interaction as a sentence-generation problem. Neural network models can learn to generate a probable sentence based on the user's statement along with a partial view of the dialogue history. While appealing to some extent, these approaches require huge training sets of general-purpose data and lack a principled way to intertwine language generation with information retrieval from back-end resources, which would fuel the dialogue with up-to-date and precise knowledge. As a practical alternative, in this paper we present Lilia, a showcase for the fast bootstrap of conversation-like dialogues based on a goal-oriented system. First, a comparison of goal-oriented and conversational system features is carried out; then a conversion process for the fast bootstrap of a new system is described, finalised with on-line training of the system's main components. Lilia is dedicated to a chit-chat task in which speakers exchange viewpoints on a displayed image while collaboratively trying to infer its author's intention. Evaluations with user trials showed its efficiency in a realistic setup.
Acknowledgments
This work has been partially supported by grants ANR-16-CONV-0002 (ILCB) and ANR-11-LABX-0036 (BLRI).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Riou, M., Jabaian, B., Huet, S., Lefèvre, F. (2019). Lilia, A Showcase for Fast Bootstrap of Conversation-Like Dialogues Based on a Goal-Oriented System. In: Martín-Vide, C., Purver, M., Pollak, S. (eds.) Statistical Language and Speech Processing. SLSP 2019. Lecture Notes in Computer Science, vol. 11816. Springer, Cham. https://doi.org/10.1007/978-3-030-31372-2_3
Print ISBN: 978-3-030-31371-5
Online ISBN: 978-3-030-31372-2