Enhancing adaptive random testing for programs with high dimensional input domains or failure-unrelated parameters
Adaptive random testing (ART), an enhancement of random testing (RT), aims to select test cases randomly while spreading them evenly across the input domain. Recently, it has been observed that the effectiveness of some ART algorithms may deteriorate as the number of program input parameters (the dimensionality) increases. In this article, we analyse various problems of one ART algorithm, namely fixed-sized-candidate-set ART (FSCS-ART), in the high dimensional input domain setting, and study how FSCS-ART can be further enhanced to address these problems. We propose adding a filtering process for inputs to FSCS-ART to achieve a more even spread of test cases and better failure detection effectiveness in high dimensional space. Our study shows that this solution, termed FSCS-ART-FE, can improve FSCS-ART not only in the case of high dimensional space, but also in the case of failure-unrelated parameters. Both cases are common in real-life programs. Therefore, we recommend using FSCS-ART-FE instead of FSCS-ART whenever possible. Other ART algorithms may face similar problems to FSCS-ART; hence our study also brings insight into the improvement of other ART algorithms in high dimensional space.
Keywords: Software testing · Random testing · Adaptive random testing · Fixed-sized-candidate-set ART · High dimension problem · Failure-unrelated parameters
This research project is supported by an Australian Research Council Discovery Grant (DP0880295).
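The abstract does not spell out the FSCS-ART algorithm, but its standard form is well known: in each round, draw a fixed-size set of random candidates and execute the candidate whose distance to its nearest previously executed test case is largest. The sketch below illustrates that loop under stated assumptions; the function name `fscs_art`, the toy `oracle`, and the hyper-rectangular failure region are all hypothetical illustration choices, not the paper's experimental setup. The FE variant discussed in the article would add a filtering step over the candidate set before the distance comparison; its exact criterion is not given in the abstract, so it is omitted here.

```python
import math
import random

def fscs_art(test_oracle, dim, k=10, max_tests=2000, seed=0):
    """Fixed-sized-candidate-set ART sketch (hypothetical helper):
    each round, draw k random candidates from the unit hypercube and
    execute the one farthest (Euclidean) from all executed tests."""
    rng = random.Random(seed)
    # The first test case is chosen purely at random.
    executed = [[rng.random() for _ in range(dim)]]
    if test_oracle(executed[0]):
        return executed[0], 1
    for n in range(2, max_tests + 1):
        candidates = [[rng.random() for _ in range(dim)] for _ in range(k)]
        # Distance from a candidate to its nearest executed test case.
        def nearest_dist(c):
            return min(math.dist(c, e) for e in executed)
        # Maximise the nearest-neighbour distance to spread tests evenly.
        best = max(candidates, key=nearest_dist)
        executed.append(best)
        if test_oracle(best):
            return best, n
    return None, max_tests

# Toy failure region for illustration: a small hyper-rectangle.
def oracle(x):
    return all(0.4 <= xi <= 0.5 for xi in x)

found, n_tests = fscs_art(oracle, dim=2)
```

As the dimensionality grows, the candidate maximising Euclidean distance tends to drift toward the boundary of the input domain, which is one reason the article investigates filtering candidates in high dimensional settings.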