
RETORCH: Resource-Aware End-to-End Test Orchestration

  • Cristian Augusto
  • Jesús Morán
  • Antonia Bertolino
  • Claudio de la Riva
  • Javier Tuya
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1010)

Abstract

Continuous integration practices introduce incremental changes in the code to both improve its quality and add new functionality. These changes can introduce faults that can be timely detected through continuous testing, i.e., by automating the test cases and re-executing them at each code change. However, re-executing all test cases at each change may not always be feasible, especially for test cases that make intensive use of resources, such as End-to-End test cases that require a complex test infrastructure. This paper focuses on optimizing the usage of the resources employed during End-to-End testing (e.g., storage, memory, web servers or database tables, among others) through a resource-aware test orchestration technique in the context of continuous integration in the cloud. To optimize both the cost/usage of resources and the execution time, the approach proposes to (i) identify the resources required by the End-to-End test cases, (ii) group together the tests that need the same resources, (iii) deploy the tests in dependency-isolated and elastic environments, and (iv) schedule their parallel execution on several machines.
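
To illustrate steps (ii) and (iv), the following self-contained Java sketch groups a few hypothetical End-to-End tests by the resource sets they declare and then assigns the groups to machines with a simple greedy scheduler. It is only an illustration of the idea under stated assumptions: the TestCase record, the resource names, the time estimates and the scheduling heuristic are invented for the example and are not part of RETORCH or its implementation.

    // Illustrative sketch only: hypothetical test cases, resource names and a
    // simple greedy heuristic; this is NOT the RETORCH implementation.
    import java.util.*;
    import java.util.stream.Collectors;

    public class ResourceAwareGroupingSketch {

        // (i) Each test case is modeled with the resources it declares to need
        // and a rough duration estimate (both invented for the example).
        record TestCase(String name, Set<String> resources, int estimatedSeconds) {}

        public static void main(String[] args) {
            List<TestCase> suite = List.of(
                new TestCase("loginTest",        Set.of("webServer", "usersTable"),  30),
                new TestCase("enrollCourseTest", Set.of("webServer", "usersTable"),  45),
                new TestCase("videoCallTest",    Set.of("webServer", "mediaServer"), 90),
                new TestCase("chatTest",         Set.of("webServer", "mediaServer"), 40));

            // (ii) Group together the tests that declare the same resource set, so
            // that each group can share one isolated deployment of those resources.
            Map<Set<String>, List<TestCase>> groups = suite.stream()
                .collect(Collectors.groupingBy(TestCase::resources));

            // Estimated execution time of each group (sum of its tests).
            Map<Set<String>, Integer> groupTime = new HashMap<>();
            groups.forEach((resources, tests) -> groupTime.put(resources,
                tests.stream().mapToInt(TestCase::estimatedSeconds).sum()));

            // (iv) Greedy scheduling: longest groups first, each one onto the
            // machine with the least accumulated load.
            List<Set<String>> ordered = new ArrayList<>(groups.keySet());
            ordered.sort((a, b) -> groupTime.get(b) - groupTime.get(a));

            int machines = 2;                 // size of the worker pool
            int[] load = new int[machines];   // accumulated seconds per machine
            Map<Integer, List<Set<String>>> plan = new HashMap<>();
            for (Set<String> group : ordered) {
                int target = 0;
                for (int m = 1; m < machines; m++) {
                    if (load[m] < load[target]) target = m;
                }
                load[target] += groupTime.get(group);
                plan.computeIfAbsent(target, k -> new ArrayList<>()).add(group);
            }

            plan.forEach((machine, resourceSets) ->
                System.out.println("machine-" + machine + " runs groups needing " + resourceSets));
        }
    }

Running the sketch prints one line per machine with the resource sets it would serve. In the approach summarized above, each such group would additionally be deployed in its own dependency-isolated, elastic environment (step (iii)) before being executed in parallel.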

Keywords

Software testing · Continuous integration · Continuous testing · Testing in the cloud · End-to-End testing · Test orchestration

Acknowledgments

This work was supported in part by the Spanish Ministry of Economy and Competitiveness under the TestEAMoS project (TIN2016-76956-C3-1-R) and ERDF funds, and by the European project ElasTest under the Horizon 2020 research and innovation programme (GA No. 731535).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Computer Science Department, University of Oviedo, Gijón, Spain
  2. ISTI-CNR, Consiglio Nazionale delle Ricerche, Pisa, Italy
