
Requirements Engineering, Volume 23, Issue 3, pp 401–424

Improving the identification of hedonic quality in user requirements: a second controlled experiment

  • Andreas Maier
  • Daniel M. Berry
RE 2017

Abstract

Systematically engineering a good user experience (UX) into a computer-based system under development demands that the user requirements of the system reflect all needs, including emotional needs, of all stakeholders. User requirements address two different types of qualities: pragmatic qualities (PQs), which address system functionality and usability, and hedonic qualities (HQs), which address a stakeholder’s psychological well-being. Studies show that users tend to describe satisfying UXes mainly with PQs and that some users seem to believe that they are describing an HQ when they are actually describing a PQ. The problem addressed is to determine whether classifying any user requirement as PQ-related or HQ-related is difficult, and if so, why. We conducted two controlled experiments involving the same twelve requirements-engineering and UX professionals, hereinafter called “analysts.” The first experiment, in which the twelve analysts classified each of 105 user requirements as PQ-related or HQ-related, showed that neither (1) an analyst’s involvement in the project from which the requirements came nor (2) the analyst’s use of a detailed model of the qualities, in addition to the standard definitions of “PQ” and “HQ,” has a positive effect on the consistency of the analyst’s classifications with those of the other analysts. The first experiment also revealed that classifying user requirements is considerably harder than initially assumed. The second experiment, in which the twelve analysts classified each of a set of 50 user requirements derived from the 105 of the first experiment, showed that the difficulties seem to be caused both by analysts’ lacking skill in applying the definitions of “PQ” and “HQ” and by poorly written user requirement specifications.
The second experiment also provided evidence that the difficulties can be mitigated by the combination of (1) training analysts in applying the definitions of “PQ” and “HQ” and (2) casting user requirement specifications in a new template that forces provision of the information needed for reliable classification. The experiment shows also that neither training analysts nor casting user requirement specifications in the new template, by itself, mitigates the difficulty in classifying user requirements.
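The consistency of one analyst's PQ/HQ classifications with those of the other analysts is the kind of inter-rater agreement that is commonly quantified, for nominal categories and many raters, with Fleiss' kappa. As a minimal illustrative sketch (the function name and the count-matrix data layout below are assumptions of this example, not artifacts of the study), each row counts how many analysts placed one requirement in each category:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for nominal agreement among many raters.

    counts: one row per item (e.g., per user requirement); each row gives,
    per category (e.g., PQ, HQ), how many raters assigned the item to that
    category. Every item must be rated by the same number of raters.
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Mean observed per-item agreement: for each item, the proportion of
    # agreeing rater pairs among all rater pairs.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items
    # Expected agreement by chance, from the marginal category proportions.
    total = n_items * n_raters
    p_e = sum(
        (sum(row[j] for row in counts) / total) ** 2
        for j in range(len(counts[0]))
    )
    # Kappa: observed agreement beyond chance, normalized.
    return (p_bar - p_e) / (1 - p_e)
```

For example, three raters who unanimously classify every item (rows like `[3, 0]` and `[0, 3]`) yield a kappa of 1, while split votes drive kappa toward or below 0; Landis and Koch's conventional scale is often used to interpret the resulting value.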

Keywords

Hedonic quality · Pragmatic quality · Controlled experiment · User experience · Project involvement · Definitions of pragmatic and hedonic qualities · Quality model · Classifier training · User story template

Acknowledgements

The authors thank this paper’s anonymous reviewers for RE’17 and for this special issue of REJ, and Sebastian Adam, Joerg Doerr, and Andreas Jedlitschka for their comments on earlier drafts of this paper. Daniel Berry’s work was supported in part by Canadian NSERC grant NSERC-RGPIN227055-15.


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science, Technical University of Kaiserslautern, Kaiserslautern, Germany
  2. Fraunhofer Institute for Experimental Software Engineering IESE, Kaiserslautern, Germany
  3. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada
