Empirical Software Engineering, Volume 12, Issue 1, pp 3–33

Pair-wise comparisons versus planning game partitioning—experiments on requirements prioritisation techniques

  • Lena Karlsson
  • Thomas Thelin
  • Björn Regnell
  • Patrik Berander
  • Claes Wohlin


The process of selecting the right set of requirements for a product release depends on how well the organisation succeeds in prioritising the candidate requirements. This paper describes two consecutive controlled experiments comparing different requirements prioritisation techniques, with the objective of understanding differences in time consumption, ease of use and accuracy. The first experiment evaluates Pair-wise comparisons and a variation of the Planning game. As the Planning game turned out to be superior, the second experiment was designed to compare the Planning game with Tool-supported pair-wise comparisons. The results indicate that manual pair-wise comparison is the most time-consuming of the techniques, and also the least easy to use. Tool-supported pair-wise comparison is the fastest technique and is as easy to use as the Planning game. The techniques do not differ significantly regarding accuracy.
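The pair-wise comparison technique studied here is in the style of the Analytic Hierarchy Process (AHP): every pair of requirements is compared, and priority weights are derived from the resulting reciprocal judgement matrix. As a rough illustration only, and not the authors' exact experimental procedure, the sketch below computes weights with the row geometric-mean approximation; the requirement names and judgement values are invented.

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority weights from a reciprocal pair-wise
    comparison matrix using the row geometric-mean method."""
    n = len(matrix)
    # Geometric mean of each row captures that row's overall dominance.
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    # Normalise so the weights sum to 1.
    return [g / total for g in gms]

# Hypothetical judgements for three requirements A, B, C:
# A is judged 3x as important as B and 5x as important as C;
# B is judged 2x as important as C. Lower triangle holds reciprocals.
m = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_priorities(m)
print([round(w, 3) for w in weights])
```

With n requirements this exhaustive scheme needs n(n−1)/2 comparisons, which is why the paper finds manual pair-wise comparison time-consuming and why tool support (and incomplete-matrix variants) matters at scale.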


Keywords: Requirements engineering, Requirements prioritisation, Release planning, Decision making, Controlled experiment





The authors would like to thank all experiment participants for contributing with their time and effort.



Copyright information

© Springer Science + Business Media, LLC 2006

Authors and Affiliations

  • Lena Karlsson (1)
  • Thomas Thelin (1)
  • Björn Regnell (1)
  • Patrik Berander (2)
  • Claes Wohlin (2)
  1. Department of Communication Systems, Lund University, Sweden
  2. Department of Systems and Software Engineering, School of Engineering, Blekinge Institute of Technology, Sweden
