Empirical Software Engineering, Volume 13, Issue 1, pp 39–62

Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment

  • Muhammad Ali Babar
  • Barbara Kitchenham
  • Ross Jeffery

Abstract

Scenario-based methods for evaluating software architecture require a large number of stakeholders to be collocated for evaluation meetings. Collocating stakeholders is often an expensive exercise. To reduce this expense, we have proposed a framework for supporting the software architecture evaluation process with groupware systems. This paper presents a controlled experiment that we conducted to assess the effectiveness of one of the key activities of the proposed groupware-supported evaluation process: developing scenario profiles. We used a cross-over design involving 32 three-person teams of 3rd- and 4th-year undergraduate students. We found that the quality of scenario profiles developed by distributed teams using a groupware tool was significantly better than the quality of scenario profiles developed by face-to-face teams (p < 0.001). However, questionnaires indicated that most participants (82%) preferred the face-to-face arrangement, and 60% thought the distributed meetings were less efficient. We conclude that distributed meetings for developing scenario profiles are extremely effective, but tool support must be of a high standard or participants will not find distributed meetings acceptable.
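To make the statistical comparison concrete, the sketch below shows how per-team quality scores from a two-period cross-over design might be compared with a paired test. The scores are hypothetical placeholders, not the study's data, and the authors' actual analysis may have differed.

    # Minimal sketch: paired comparison of team quality scores in a
    # two-period cross-over design. All numbers are hypothetical
    # placeholders, not data from the experiment reported above.
    import numpy as np
    from scipy import stats

    # One quality score per team under each meeting mode; in a
    # cross-over design every team experiences both conditions,
    # so each team acts as its own control.
    distributed = np.array([7.8, 8.1, 7.5, 8.4, 7.9, 8.2, 7.7, 8.0])
    face_to_face = np.array([6.9, 7.2, 7.0, 7.5, 7.1, 7.3, 6.8, 7.4])

    # Paired t-test on the within-team differences.
    t_stat, p_value = stats.ttest_rel(distributed, face_to_face)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A fuller cross-over analysis would additionally split the within-team differences by presentation order to check for period and carry-over effects before interpreting the treatment effect.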

Keywords

Architecture evaluation · Process improvement · Controlled experiments · Groupware support · Scenario development

Notes

Acknowledgment

We greatly appreciate the anonymous reviewers’ comments, which helped us improve this paper. We are grateful to the participants of this controlled experiment. Xiaowen Wang helped prepare the reference scenario profile and mark the scenario profiles. The first author was working with National ICT Australia when the reported work was performed.

Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • Muhammad Ali Babar (1)
  • Barbara Kitchenham (2)
  • Ross Jeffery (2)

  1. Lero, University of Limerick, Limerick, Ireland
  2. National ICT Australia, Sydney, Australia