How Do You Want Your Chatbot? An Exploratory Wizard-of-Oz Study with Young, Urban Indians

  • Indrani Medhi Thies
  • Nandita Menon
  • Sneha Magapu
  • Manisha Subramony
  • Jacki O’Neill
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10513)

Abstract

As text-messaging chatbots become increasingly “human”, it will be important to understand the kinds of personal interactions users seek with a chatbot. What chatbot personalities are most compelling to young, urban users in India? To explore this question, we first conducted exploratory Wizard-of-Oz (WoZ) studies with 14 users, simulating interactions with a hypothetical chatbot. We evaluated three personalities for the chatbot: Maya, a productivity-oriented bot with nerd wit; Ada, a fun, flirtatious bot; and Evi, an emotional buddy bot. We followed up with one-on-one interviews in which users discussed their experiences with each of the chatbots, what they liked, and what they did not. Overall, our results show that users wanted a chatbot like Maya, one that could add value to their lives while being a friend, for example by making useful recommendations. But they also wanted preferred traits of Ada and Evi infused into Maya.

Keywords

Chatbots · Wizard-of-Oz · Urban India


Copyright information

© IFIP International Federation for Information Processing 2017

Authors and Affiliations

  • Indrani Medhi Thies (1)
  • Nandita Menon (2)
  • Sneha Magapu (2)
  • Manisha Subramony (2)
  • Jacki O’Neill (1)

  1. Microsoft Research, Bangalore, India
  2. Microsoft India Development Centre, Hyderabad, India
