
Comparison of open-source runtime testing tools for microservices

Abstract

In recent years, the number of software applications developed using the microservices architectural pattern has grown steadily. This trend is driven by the benefits microservices offer over more traditional N-tier architectural patterns, which use monolithic designs for each tier. The value of the microservices architectural pattern, particularly in the cloud, has been demonstrated by pioneers such as Netflix and Google, which have created protocols and tools to support the development of cloud-based applications. However, testing microservices applications remains challenging because of the added complexity of network communication between collaborating services. Moreover, the growing number of tools available for testing microservices-based applications makes selecting the most appropriate tool(s) a difficult task. In this article, we compare several open-source tools that support the testing of microservices, based on testing levels, the scaffolding required, the languages used for test cases, and the type of interface used to interact with the applications under test. We describe a prototype microservices-based application, Rideshare, that allows users to reserve rides from available drivers. Using the Rideshare application, we conducted a study with a subset of the selected open-source tools to determine the overhead each tool adds. We present the results of the study and describe our experiences configuring the tools to test the Rideshare application using different testing approaches.
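To illustrate the kind of testing-level distinction the abstract refers to, the following is a minimal, hypothetical sketch of component-level testing for a Rideshare-style service: the service under test is exercised in isolation by replacing its collaborating service with a test double, so no network scaffolding is required. All names here (`ReservationService`, `DriverRegistry`, `StubRegistry`) are illustrative assumptions, not identifiers from the Rideshare prototype itself.

```python
# Hypothetical component-level test sketch for a microservice that
# depends on a collaborating service. In production, the collaborator
# would be reached over the network; here it is replaced by a stub.

class DriverRegistry:
    """Interface to the collaborating driver-registry service."""
    def available_drivers(self):
        raise NotImplementedError


class ReservationService:
    """Service under test: reserves a ride from an available driver."""
    def __init__(self, registry):
        self.registry = registry

    def reserve(self, rider_id):
        drivers = self.registry.available_drivers()
        if not drivers:
            return None  # no drivers available, no ride possible
        return {"rider": rider_id, "driver": drivers[0]}


class StubRegistry(DriverRegistry):
    """Test double that stands in for the remote driver registry."""
    def __init__(self, drivers):
        self._drivers = drivers

    def available_drivers(self):
        return self._drivers


# Component-level checks: no network, no deployment scaffolding.
service = ReservationService(StubRegistry(["driver-42"]))
assert service.reserve("rider-1") == {"rider": "rider-1", "driver": "driver-42"}

empty = ReservationService(StubRegistry([]))
assert empty.reserve("rider-1") is None
```

At higher testing levels (integration and end-to-end), the stub would be replaced by the real collaborating service, which is where the network-related complexity and the tool-specific scaffolding discussed in the article come into play.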



Notes

  1. https://github.com/AITestingOrg/microservices-sandbox

  2. https://stem-cyle.cis.fiu.edu/

  3. https://github.com/FudanSELab/train-ticket


Acknowledgements

The authors would like to thank the Test.ai and Ultimate Kronos Group (UKG) teams for the contributions, feedback, and insight provided in preparing this manuscript. The authors would also like to thank the reviewers for their meaningful comments toward improving the article.

Funding

This work is supported in part by Test.ai and Ultimate Kronos Group (UKG). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Test.ai and UKG.

Corresponding author

Correspondence to Juan P. Sotomayor.

Ethics declarations

Financial interests

Authors Dionny Santiago and Tariq M. King have worked for both companies, but the funding of this project is independent of their work for those companies. The testbed application used in the experiments of this research was developed in collaboration with UKG. The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions


About this article


Cite this article

Sotomayor, J.P., Allala, S.C., Santiago, D. et al. Comparison of open-source runtime testing tools for microservices. Software Qual J (2022). https://doi.org/10.1007/s11219-022-09583-4


Keywords

  • Microservices
  • Open-source testing tools
  • Software testing
  • Testing levels
  • Microservices testing