Software Quality Journal, Volume 3, Issue 1, pp 45–58

Dual programming approach to software testing

  • M. Ghiassi
  • K. I. S. Woldman

Abstract

The testing phase of the software development process consumes about one-half of the total development time and resources. This paper addresses the automation of the analysis stage of testing. Dual programming is introduced as one approach to implementing this automation: it uses a higher-level language to duplicate the functionality of the software under test. We contend that a higher-level language (HLL) needs fewer lines of code than a lower-level language (LLL) to achieve the same functionality, so testing the HLL program requires less effort than testing the LLL equivalent. The HLL program then becomes the oracle for the LLL version. The paper describes experiments carried out on different categories of applications and identifies those most likely to profit from this approach; a metric is used to quantify the savings realized. The results of the research are: (a) that dual programming can be used to automate the analysis stage of software testing; (b) that substantial savings in the cost of this testing phase can be realized when an appropriate pairing of primal and dual languages is made; and (c) that it is now possible to build a totally automated testing system. Recommendations are made regarding the applicability of the method to specific classes of applications.
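
The oracle arrangement described above lends itself to a short illustration. The following is a minimal sketch, assuming Python for both halves of the pair: a hand-rolled insertion sort stands in for the lower-level "primal" program under test, and a one-line call to the built-in sorted stands in for its higher-level "dual", which acts as the oracle during the automated analysis stage. The names primal_sort, dual_sort, and analyse, and the sorting task itself, are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of the dual-programming idea: the "primal" stands in for the
# lower-level program under test (in the paper this would be written in
# an LLL such as C or Fortran); the "dual" is a far shorter higher-level
# rendering of the same specification, used as the test oracle.
# All names and the sorting example are illustrative, not from the paper.

import random

def primal_sort(xs):
    """Primal program: a hand-rolled insertion sort (stand-in for LLL code)."""
    out = list(xs)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

def dual_sort(xs):
    """Dual program: one HLL line expressing the same functionality."""
    return sorted(xs)

def analyse(n_cases=1000):
    """Automated analysis stage: run both versions on generated inputs
    and collect every case where the primal disagrees with its oracle."""
    failures = []
    for _ in range(n_cases):
        case = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        if primal_sort(case) != dual_sort(case):
            failures.append(case)
    return failures

if __name__ == "__main__":
    bad = analyse()
    print(f"{len(bad)} disagreement(s) between primal and dual")
```

The economics of the pairing are visible in the line counts: the dual's single statement can be checked by inspection, so trusting it as an oracle costs far less effort than testing the primal directly, which is the source of the savings the abstract claims.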

Keywords

software testing, test automation, dual programming

Copyright information

© Chapman & Hall 1994

Authors and Affiliations

  • M. Ghiassi
  • K. I. S. Woldman

  1. Department of Decision and Information Sciences, Santa Clara University, Santa Clara, USA
