An evolutionary testbed for software technology evaluation

  • Mikael Lindvall
  • Ioana Rus
  • Forrest Shull
  • Marvin Zelkowitz
  • Paolo Donzelli
  • Atif Memon
  • Victor Basili
  • Patricia Costa
  • Roseanne Tvedt
  • Lorin Hochstein
  • Sima Asgari
  • Chris Ackermann
  • Dan Pech
Article

Abstract

Empirical evidence and technology evaluation are needed to close the gap between the state of the art and the state of the practice in software engineering. However, evaluating technologies on the basis of empirical evidence presents several difficulties: insufficient specification of context variables, the cost of experimentation, and the risks associated with trying out new technologies. In this paper, we propose the idea of an evolutionary testbed for addressing these problems. We demonstrate the utility of the testbed through empirical studies in which two different research technologies were applied to it, and we report the results of these studies. The work is part of NASA’s High Dependability Computing Project (HDCP), in which we are evaluating a wide range of new technologies for improving the dependability of NASA mission-critical systems.



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Mikael Lindvall (2)
  • Ioana Rus (2)
  • Forrest Shull (2)
  • Marvin Zelkowitz (1, 2)
  • Paolo Donzelli (1)
  • Atif Memon (1)
  • Victor Basili (1, 2)
  • Patricia Costa (2)
  • Roseanne Tvedt (2)
  • Lorin Hochstein (1)
  • Sima Asgari (1)
  • Chris Ackermann (2)
  • Dan Pech (2)
  1. University of Maryland, USA
  2. Fraunhofer Center for Experimental Software Engineering, MD, USA