The FITTEST Tool Suite for Testing Future Internet Applications

  • Tanja E. J. Vos
  • Paolo Tonella
  • I. S. Wishnu B. Prasetya
  • Peter M. Kruse
  • Onn Shehory
  • Alessandra Bagnato
  • Mark Harman
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8432)

Abstract

Future Internet applications are expected to be much more complex and powerful, exploiting various dynamic capabilities. For testing, this is very challenging: the range of possible behaviour to test is much larger, and it may change at run time, quite frequently and significantly, with respect to the behaviour assumed and tested prior to the release of such an application. The traditional way of testing will not be able to keep up with such dynamics. The Future Internet Testing (FITTEST) project (http://crest.cs.ucl.ac.uk/fittest/), a research project funded by the European Commission (grant agreement no. 257574) from 2010 until 2013, set out to explore new testing techniques that improve our capacity to deal with the challenges of testing Future Internet applications. Such techniques should not be seen as a replacement for traditional testing, but rather as a way to complement it. This paper gives an overview of the set of tools produced by the FITTEST project, implementing those techniques.

Keywords

Service Composition · System Under Test · Test Case Generation · Test Case Prioritization · Audit Testing

Notes

Acknowledgments

This work has been funded by the European Union FP7 project FITTEST (grant agreement no. 257574). The work presented in this paper is due to the contributions of many researchers, including Sebastian Bauersfeld, Nelly O. Condori, Urko Rueda, Arthur Baars, Roberto Tiella, Cu Duy Nguyen, Alessandro Marchetto, Alex Elyasov, Etienne Brosse, Alessandra Bagnato, Kiran Lakhotia, Yue Jia, Bilha Mendelson, Daniel Citron and Joachim Wegener.

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Tanja E. J. Vos, Universidad Politécnica de Valencia, Valencia, Spain
  • Paolo Tonella, Fondazione Bruno Kessler, Trento, Italy
  • I. S. Wishnu B. Prasetya, Universiteit van Utrecht, Utrecht, The Netherlands
  • Peter M. Kruse, Berner & Mattner, Berlin, Germany
  • Onn Shehory, IBM Research Haifa, Haifa, Israel
  • Alessandra Bagnato, Softeam, Paris, France
  • Mark Harman, University College London, London, UK
