
The Challenges of Designing a Robot for a Satisfaction Survey: Surveying Humans Using a Social Robot

  • Scott Heath
  • Jacki Liddle
  • Janet Wiles

Abstract

The field of social robotics promises robots that can interact with humans in a variety of naturalistic ways, owing to their embodiment, considered form, and social abilities. For administering a satisfaction survey, a social robot is theoretically capable of providing some of the benefits of a face-to-face interview, relative to a web-based form, without requiring a human interviewer. In this paper we set up our social robot, Opie, with a dialog-enabled chat-bot built from off-the-shelf technologies to run a satisfaction survey. We collected audio and transcripts during the interaction, and measured attitudes towards the survey after the interaction. Twenty-one participants were recruited for the study; each played two games on a tablet and answered survey questions both to the robot and through an electronic form. The results indicated that while participants were able to answer the questions, none of the robot's components was robust to all of the situations that emerged during the survey. From these results, we discuss how errors affected survey answers (compared with the electronic form) and attitudes towards the robot. We conclude with recommendations for a set of non-trivial abilities that are needed before social robot surveyors become a reality.

Keywords

Social robots · Chat-bots · Language · Communication · Survey

Notes

Acknowledgements

The authors would like to acknowledge the ARC Centre of Excellence for the Dynamics of Language (CoEDL) for funding (grant no. CE140100041), support, and discussion about social robots and language; the OPAL team for help using Opie; the HARLIE team for their AIML “brain”; and Chanon Kachornvuthidej for help with transcription.

Funding

This study was funded by the ARC Centre of Excellence for the Dynamics of Language (CoEDL) (Grant No. CE140100041).

Compliance with ethical standards

Conflicts of interest

The authors declare that they have no conflict of interest.

Ethical statement

This study was approved by the ethics committee at the University of Queensland’s School of Information Technology and Electrical Engineering (reference no. 2017001053).

Open practices

All non-identifiable data have been made publicly available on GitLab (https://gitlab.com/opal_robotics/robot_satisfaction_survey). This includes the anonymised collected data, with the exception of the audio recordings.


Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
