Verification and Validation of Reusable Ada Components

  • C. K. Bullard
  • D. S. Guindi
  • W. B. Ligon
  • W. M. McCracken
  • S. Rugaber

Abstract

This paper discusses the verification and validation (V&V) of reusable software written in the Ada programming language. The research includes methodological and experimental studies of aspects of V&V that are affected when reusable components are considered.

There are two aspects of reuse of concern to V&V: portability and adaptability. In the former case, the reusable component must be moved from one hardware/operating-system environment to another. Usually the intended functional behavior remains the same, and the main V&V concern is to ensure that assumptions the software makes about its computing environment are not violated.
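
For example (a hypothetical sketch, not code from the paper), a component that silently assumes a 32-bit Integer embodies exactly such an environmental assumption; declaring the range the component actually needs makes the assumption explicit and portable:

    --  Non-portable: relies on the implementation-defined range of
    --  Integer; fails on any target where Integer'Last < 100_000.
    procedure Non_Portable is
       Counter : Integer := 100_000;
    begin
       null;
    end Non_Portable;

    --  Portable: the required range is stated explicitly, and the
    --  compiler chooses a suitable representation on every target.
    procedure Portable is
       type Count is range 0 .. 1_000_000;
       Counter : Count := 100_000;
    begin
       null;
    end Portable;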

The latter case, adaptability, concerns incorporating a component into a new application environment. The Ada generic facility supports this by allowing developers to “instantiate” a component in a variety of ways, depending on the requirements of the new application. The characteristics that may be altered are delineated so that the developer need only read the specification, not the body, of a generic component in order to understand what it does.
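
As an illustration (a minimal sketch, not one of the components studied here), the generic below exposes a single alterable characteristic, the element type, in its specification; two instantiations adapt the same verified body to different applications without touching its source text:

    generic
       type Element is private;
    package Stacks is
       procedure Push (Item : in Element);
       function  Pop  return Element;
    end Stacks;

    package body Stacks is
       Store : array (1 .. 100) of Element;
       Top   : Natural := 0;

       procedure Push (Item : in Element) is
       begin
          Top := Top + 1;
          Store (Top) := Item;
       end Push;

       function Pop return Element is
       begin
          Top := Top - 1;
          return Store (Top + 1);
       end Pop;
    end Stacks;

    with Stacks;
    procedure Demo is
       package Int_Stack   is new Stacks (Element => Integer);
       package Float_Stack is new Stacks (Element => Float);
    begin
       Int_Stack.Push (42);
       Float_Stack.Push (3.14);
    end Demo;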

We have taken two approaches to investigating the V&V of reusable software. The first is methodological. It looks at the total software development life cycle to understand how reuse perturbs traditional methodologies. Its purpose is to characterize reuse errors and propose techniques for limiting them.

The other approach is experimental. We have adapted an existing methodology (Mutation Analysis) to detect instances of reuse errors. This tells us whether reuse errors are easily detectable and how readily existing methods and tools can be modified. In particular, it can detect instances of Ada code that are not portable to new environments or not adaptable to different applications.
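
For instance (a hypothetical example of the idea, not necessarily one of the mutant operators used in the experiments), a mutation system can freeze an implementation-defined attribute at the value it has on the development host; a test set that never distinguishes the mutant from the original cannot reveal the component's hidden dependence on that host:

    procedure Buffer_Check (Size : in Integer; OK : out Boolean) is
    begin
       --  Original predicate, written portably in terms of the
       --  target's own Integer'Last:
       OK := Size <= Integer'Last / 2;
       --  Environment mutant (the value of Integer'Last / 2 on a
       --  32-bit development host):
       --     OK := Size <= 1_073_741_823;
       --  Killing this mutant forces test data near the boundary,
       --  exposing any unstated 32-bit assumption.
    end Buffer_Check;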

Keywords

Application Environment, Garbage Collection, Reusable Software, Reusable Component, Adequacy Criterion

Copyright information

© Plenum Press, New York 1990

Authors and Affiliations

  • C. K. Bullard
  • D. S. Guindi
  • W. B. Ligon
  • W. M. McCracken
  • S. Rugaber

  1. Software Engineering Research Center, Georgia Institute of Technology, Atlanta, USA
