Adaptive Random Testing Through Iterative Partitioning

  • T. Y. Chen
  • De Hao Huang
  • Zhi Quan Zhou
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4006)


Abstract

Random testing (RT) is a fundamental software testing technique. Based on the observation that failure-causing inputs tend to cluster together in the input domain, Adaptive Random Testing (ART) has been proposed to improve the fault-detection capability of RT. ART uses the locations of previously executed test cases to enforce an even spread of random test cases over the entire input domain. Several implementations (algorithms) of ART exist, based on different intuitions and principles, and each has its own advantages and disadvantages. Most of them require intensive computation to ensure that test cases are evenly spread, and hence incur high overhead. In this paper, we propose the notion of iterative partitioning, which reduces the amount of computation while retaining a high fault-detection capability. As a result, the cost-effectiveness of ART is improved.
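To make the abstract's idea concrete, the following Python is a minimal illustrative sketch of ART by iterative partitioning, not the paper's exact algorithm. It assumes a numeric input domain normalized to the unit hypercube [0, 1)^dim, a hypothetical failure oracle run_test, and plausible readings of the cell-classification and grid-refinement rules: cells that contain, or neighbour a cell containing, an executed test case are excluded, and the grid is refined when no candidate cells remain.

    import itertools
    import random

    def ipt_art(run_test, max_tests=1000, dim=2):
        # Sketch of ART through iterative partitioning over [0, 1)^dim.
        # run_test(point) -> True when the point triggers a failure
        # (hypothetical oracle, supplied by the tester).
        executed = []   # previously executed test cases
        n = 1           # current grid resolution: n cells per axis
        tests_run = 0
        while tests_run < max_tests:
            # Map each executed test case to the grid cell containing it.
            occupied = {tuple(int(x * n) for x in t) for t in executed}
            # Candidate cells are neither occupied nor adjacent to an
            # occupied cell (Chebyshev distance > 1), so a test case
            # generated there is far from all previous test cases.
            candidates = [
                cell for cell in itertools.product(range(n), repeat=dim)
                if all(max(abs(c - o) for c, o in zip(cell, occ)) > 1
                       for occ in occupied)
            ]
            if not candidates:
                n += 1      # no empty region left: refine the grid
                continue
            cell = random.choice(candidates)
            # Pick a uniformly random point inside the chosen cell.
            point = tuple((c + random.random()) / n for c in cell)
            tests_run += 1
            if run_test(point):
                return point    # failure-causing input found
            executed.append(point)
        return None

As a rough usage example, run_test could wrap the program under test and report whether its output disagrees with an oracle; when the failure-causing inputs form a contiguous block of the unit square, the enforced even spread tends to hit that block with fewer test executions, on average, than pure random testing.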


Keywords: Grid Cell, Random Testing, Failure Pattern, Partitioning Scheme, Test Case Generation



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • T. Y. Chen (1)
  • De Hao Huang (1)
  • Zhi Quan Zhou (2)

  1. Faculty of Information & Communication Technologies, Swinburne University of Technology, Hawthorn, Australia
  2. School of IT & Computer Science, University of Wollongong, Wollongong, Australia
