Back-to-Back Testing Framework Using a Machine Learning Method

Part of the Studies in Computational Intelligence book series (SCI, volume 443)

Abstract

In back-to-back testing of software, expected outputs (test oracles) are generated from software that is similar to the SUT (software under test) and are compared with the test outputs of the SUT in order to reveal faults. Back-to-back testing has two advantages: the creation of expected outputs, one of the most costly processes in software testing, can be automated, and the expected outputs are detailed rather than limited to a specific aspect such as state transitions. However, it is not easy to automatically classify the differences between the test outputs and the expected outputs into two groups: differences resulting from failures of the SUT, and differences resulting from intended functional differences between the SUT and the similar software. Manual classification is too costly, so back-to-back testing can hardly be applied unless the functions of the similar software are exactly equal to the intended functions of the SUT. To solve this costly classification problem, this paper proposes a novel back-to-back testing framework in which an SVM (support vector machine) performs the classification automatically.
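To make the proposed workflow concrete, the following is a minimal sketch of the three steps the abstract describes: run the SUT and the similar (reference) software on the same input, turn each output difference into a feature vector, and let a trained SVM decide whether the difference indicates a failure or an intended functional difference. The feature set, the sample outputs, and the labels are hypothetical illustrations, not the authors' implementation, and scikit-learn's SVC stands in for whatever SVM configuration the paper uses.

```python
# Hypothetical sketch of SVM-based back-to-back testing; the feature
# extraction and training data below are illustrative assumptions only.
from sklearn.svm import SVC

def diff_features(sut_output: str, expected_output: str) -> list:
    """Turn an output difference into a numeric feature vector.

    Assumed features: character-length difference, number of differing
    lines, and whether the outputs differ at all.
    """
    sut_lines = sut_output.splitlines()
    exp_lines = expected_output.splitlines()
    differing = sum(1 for a, b in zip(sut_lines, exp_lines) if a != b)
    differing += abs(len(sut_lines) - len(exp_lines))
    return [
        float(abs(len(sut_output) - len(expected_output))),
        float(differing),
        1.0 if sut_output != expected_output else 0.0,
    ]

# Training data: output differences already labeled by a test engineer.
# Label 1 = failure of the SUT; label 0 = intended functional difference
# between the SUT and the similar software.
train_pairs = [
    ("result: 42\n", "result: 41\n"),           # wrong value -> failure
    ("result: 42\nlog: v2\n", "result: 42\n"),  # extra log line -> intended
]
train_labels = [1, 0]

clf = SVC(kernel="rbf")
clf.fit([diff_features(s, e) for s, e in train_pairs], train_labels)

# Back-to-back step for a new test case: compare the two outputs and,
# if they differ, classify the difference instead of inspecting it manually.
sut_output = "result: 40\n"       # stand-in for running the SUT
expected_output = "result: 41\n"  # stand-in for running the similar software
if sut_output != expected_output:
    verdict = clf.predict([diff_features(sut_output, expected_output)])[0]
    print("failure" if verdict == 1 else "intended difference")
```

The design point is that the classifier replaces only the manual triage of differences; test execution and output comparison remain ordinary back-to-back testing.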

Keywords

Support Vector Machine, Software Testing, Test Engineer, Software Reliability, Output Difference

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Tomohiko Takagi¹
  • Takeshi Utsumi²
  • Zengo Furukawa¹

  1. Faculty of Engineering, Kagawa University, Takamatsu-shi, Japan
  2. Graduate School of Engineering, Kagawa University, Takamatsu-shi, Japan