Abstract
Modern software operates in complex ecosystems and is exposed to multiple sources of uncertainty that emerge in different phases of the development lifecycle, ranging from early requirements analysis to late testing and in-field monitoring. This paper envisions a novel methodology to deal with uncertainty in online model-based testing. We use model-based reinforcement learning to gather runtime evidence and to spot and quantify uncertainties of the system under test. Preliminary experiments show that our testing approach has the potential to overcome the major weaknesses of existing online testing techniques tailored to uncertainty quantification.
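The core idea, maintaining a Bayesian model of the system under test, updating it with runtime evidence, and steering test inputs towards the most uncertain behaviour, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: it assumes the system under test is abstracted as a small Markov decision process with hidden transition probabilities (`TRUE_P`), uses one Dirichlet posterior per state-action pair to quantify epistemic uncertainty, and picks the next test action simply by maximizing posterior variance; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system under test: a small Markov decision process whose
# true transition probabilities are unknown to the tester.
N_STATES, N_ACTIONS = 3, 2
TRUE_P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))

# Bayesian model learning: one Dirichlet posterior per (state, action),
# starting from a uniform prior (all pseudo-counts equal to 1).
counts = np.ones((N_STATES, N_ACTIONS, N_STATES))

def posterior_variance(s, a):
    """Total variance of the Dirichlet posterior: a simple uncertainty measure."""
    alpha = counts[s, a]
    a0 = alpha.sum()
    return float((alpha * (a0 - alpha)).sum() / (a0**2 * (a0 + 1)))

state = 0
for _ in range(2000):
    # Uncertainty-driven exploration: exercise the action whose outcome
    # distribution is currently least known.
    action = max(range(N_ACTIONS), key=lambda a: posterior_variance(state, a))
    next_state = rng.choice(N_STATES, p=TRUE_P[state, action])
    counts[state, action, next_state] += 1  # runtime evidence update
    state = next_state

# Posterior means converge towards the hidden transition probabilities,
# so the residual gap quantifies the remaining uncertainty.
post_mean = counts / counts.sum(axis=-1, keepdims=True)
print(np.abs(post_mean - TRUE_P).max())
```

The sketch shows the feedback loop the abstract alludes to: each test execution produces evidence, the evidence sharpens the posterior, and the posterior in turn decides where to test next.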
Acknowledgements
This work has been partially funded by MUR PRIN project 2017TWRCNB SEDUCE, and the PNRR MUR project VITALITY (ECS00000041) Spoke 2 ASTRA - Advanced Space Technologies and Research Alliance.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Camilli, M., Mirandola, R., Scandurra, P., Trubiani, C. (2023). Towards Online Testing Under Uncertainty Using Model-Based Reinforcement Learning. In: Batista, T., Bureš, T., Raibulet, C., Muccini, H. (eds) Software Architecture. ECSA 2022 Tracks and Workshops. ECSA 2022. Lecture Notes in Computer Science, vol 13928. Springer, Cham. https://doi.org/10.1007/978-3-031-36889-9_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-36888-2
Online ISBN: 978-3-031-36889-9
eBook Packages: Computer Science (R0)