Towards Inverse Uncertainty Quantification in Software Development (Short Paper)

  • Matteo Camilli
  • Angelo Gargantini
  • Patrizia Scandurra
  • Carlo Bellettini
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10469)

Abstract

With the goal of delivering more robust systems, this paper revisits the problem of Inverse Uncertainty Quantification, which concerns the discrepancy between data measured at runtime (while the system executes) and the formal specification (i.e., a mathematical model) of the system under consideration, as well as the calibration of unknown parameter values in the model. We advocate an approach that quantifies and mitigates system uncertainty during the development cycle by combining Bayesian reasoning with online model-based testing.
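To make the combination of Bayesian reasoning and runtime observation more concrete, the sketch below shows one minimal form such calibration can take: a conjugate Beta prior over an unknown transition probability is updated with counts gathered while exercising the running system. This is an illustrative assumption, not the authors' actual procedure; all names and numbers are hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's implementation):
# Bayesian calibration of an unknown transition probability in a model,
# using observations collected during online model-based testing.

from dataclasses import dataclass


@dataclass
class BetaPrior:
    alpha: float = 1.0  # pseudo-count of "transition taken"
    beta: float = 1.0   # pseudo-count of "transition not taken"

    def update(self, taken: int, not_taken: int) -> "BetaPrior":
        """Conjugate posterior after observing runtime data."""
        return BetaPrior(self.alpha + taken, self.beta + not_taken)

    @property
    def mean(self) -> float:
        """Posterior mean: calibrated estimate of the unknown parameter."""
        return self.alpha / (self.alpha + self.beta)

    @property
    def variance(self) -> float:
        """Posterior variance: residual (epistemic) uncertainty."""
        a, b = self.alpha, self.beta
        return (a * b) / ((a + b) ** 2 * (a + b + 1))


if __name__ == "__main__":
    # Uninformative prior on the unknown transition probability.
    prior = BetaPrior()
    # Suppose testing observed the transition 42 times out of 50 visits
    # to its source state (hypothetical figures).
    posterior = prior.update(taken=42, not_taken=8)
    print(f"calibrated p = {posterior.mean:.3f}, "
          f"variance = {posterior.variance:.5f}")
```

As more runtime data arrive, the posterior variance shrinks, which is one way the residual uncertainty on the model parameter can be tracked during development.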


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Matteo Camilli (1)
  • Angelo Gargantini (2)
  • Patrizia Scandurra (2)
  • Carlo Bellettini (1)

  1. Department of Computer Science, Università degli Studi di Milano, Milan, Italy
  2. Department of Management, Information and Production Engineering (DIGIP), Università degli Studi di Bergamo, Bergamo, Italy