
A model to measure QoE for virtual personal assistant


Abstract

To date, virtual personal assistants (such as Siri, Google Now and Cortana) have been confined primarily to voice input and output. Is there a justification for this voice-only confinement, or can the user experience be enhanced by adding visual output? We hypothesized that a higher level of visual/auditory immersion would enhance the quality of the user experience. To test this hypothesis, we first developed four variants of a virtual assistant, each with a different level of audio/visual immersion: audio only; audio with a 2D visual display; audio with a 3D visual display; and audio with an immersive 3D visual display. We then designed and conducted usability testing of all four variants with 30 subjects against eight dependent variables: presence, involvement, attention, reliability, dependency, easiness, satisfaction and expectations. Each subject rated these variables on a scale of 1 to 5, with 5 being the highest value. The raw data collected from the usability testing were analyzed with several tools to determine the factors contributing to the quality of experience for each of the four variants, and the significant factors were then used to develop a model that measures the quality of the user experience. We found that each variant had a different set of significant variables; hence, rating each system requires a scale that depends on the unique set of variables for the respective variant. Furthermore, variant 4 (audio with an immersive 3D visual display) scored highest for Quality of Experience (QoE). Finally, several other qualitative conclusions were drawn from this research that will guide future work in the field of virtual assistants.
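The analysis pipeline outlined above can be illustrated with a minimal sketch (assumed, not the authors' implementation): a 30-subject by 8-variable matrix of 1-5 ratings for one variant is checked for factorability with Bartlett's test of sphericity, an exploratory factor analysis highlights the strongly loading variables, and a simple per-variant QoE score is computed from those variables. The random placeholder data, the 0.4 loading cutoff, the two retained factors, the mean-based scoring, and the use of scikit-learn's FactorAnalysis in place of whatever tools the study actually used are all illustrative assumptions, not the published model.

```python
# Sketch of the analysis: factorability check, exploratory factor analysis,
# and an illustrative QoE score for one virtual-assistant variant.
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import FactorAnalysis

VARIABLES = ["presence", "involvement", "attention", "reliability",
             "dependency", "easiness", "satisfaction", "expectations"]

rng = np.random.default_rng(0)
# Placeholder ratings: 30 subjects x 8 variables, integers from 1 to 5.
ratings = rng.integers(1, 6, size=(30, len(VARIABLES))).astype(float)

# Bartlett's test of sphericity on the correlation matrix: a small p-value
# indicates the variables are correlated enough for factor analysis.
n, p = ratings.shape
R = np.corrcoef(ratings, rowvar=False)
stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
dof = p * (p - 1) / 2
print(f"Bartlett sphericity: chi2 = {stat:.2f}, p = {chi2.sf(stat, dof):.4f}")

# Exploratory factor analysis; variables loading strongly on the retained
# factors are treated as the "significant" variables for this variant.
fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
loadings = fa.components_.T                      # shape: (n_variables, n_factors)
significant = [v for v, load in zip(VARIABLES, loadings)
               if np.abs(load).max() >= 0.4]     # cutoff chosen for illustration
if not significant:                              # random toy data may load weakly
    significant = list(VARIABLES)
print("Significant variables:", significant)

# Illustrative per-variant QoE score: mean rating over the significant variables.
idx = [VARIABLES.index(v) for v in significant]
print(f"QoE score (1-5 scale): {ratings[:, idx].mean():.2f}")
```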




Acknowledgments

Permission to make digital or hardcopies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credits permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or permissions@acm.org.

Author information

Corresponding author

Correspondence to Mohamad Eid.

Cite this article

Saad, U., Afzal, U., El-Issawi, A. et al. A model to measure QoE for virtual personal assistant. Multimed Tools Appl 76, 12517–12537 (2017). https://doi.org/10.1007/s11042-016-3650-5

