The Effects of Continuous Conversation and Task Complexity on Usability of an AI-Based Conversational Agent in Smart Home Environments
Conversational agents have gained increasing popularity over the last decade in a variety of personal, public, and occupational settings due to rapid advances in artificial intelligence (AI) and natural language processing (NLP). However, how users interact with such technologies remains understudied. The objective of this study was to investigate the effects of type of conversation (presence vs. absence of continuous conversation) and task complexity (high vs. low) on usability metrics (i.e., task completion time, number of queries used in completing tasks, and perceived system usability) for conversational agents in smart home environments. Eighteen participants took part in this study and completed the required tasks. The results showed a significant effect of type of conversation on task completion time and number of queries per task. Higher task complexity significantly lengthened task completion time and increased the number of queries per task. These results may inform the design of more usable conversational agents.
Keywords: Continuous conversation · Conversational agent · Smart home · Usability
Compliance with Ethical Standards
The study was approved by the Logistics Department for Civilian Ethics Committee of Alibaba Group. All subjects who participated in the experiment were provided with, and signed, an informed consent form. All relevant ethical safeguards regarding subject protection have been met.