Prediction of Coverage of Expensive Concurrency Metrics Using Cheaper Metrics

  • Bohuslav Křena
  • Hana Pluháčková (email author)
  • Shmuel Ur
  • Tomáš Vojnar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10672)

Abstract

Testing concurrent programs is difficult since scheduling non-determinism requires one to test a huge number of different thread interleavings. Moreover, simply repeating test executions will typically examine similar interleavings only. One popular way to deal with this problem is noise injection, which is, however, parametrized by many parameters whose suitable values are difficult to find. Finding such values requires running many experiments and evaluating them with some metric. Measuring the achieved coverage can, however, slow down the experiments. To minimize this problem, we show that there are correlations between metrics of different cost and that one can find a test and noise setting that maximizes coverage under a costly metric by experimenting with a cheaper metric.
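The core idea can be illustrated with a small sketch (not from the paper; the noise settings and coverage values below are invented for illustration): measure both a cheap and a costly coverage metric on a few configurations, check that they correlate, and then tune the noise parameters using the cheap metric only.

```python
# Illustrative sketch: if a cheap and a costly coverage metric correlate
# strongly across noise settings, the cheap one can guide parameter tuning.
# All names and numbers below are hypothetical.

from statistics import mean

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical results: noise setting -> (cheap coverage, costly coverage)
results = {
    "no-noise":      (120, 310),
    "yield-freq-10": (180, 450),
    "sleep-freq-50": (210, 520),
    "busy-wait-100": (160, 400),
}

cheap = [c for c, _ in results.values()]
costly = [e for _, e in results.values()]

r = pearson(cheap, costly)
print(f"correlation between cheap and costly metric: {r:.2f}")

# With a strong correlation, select the noise setting by the cheap metric
# alone and expect coverage under the costly metric to be high as well.
best = max(results, key=lambda k: results[k][0])
print(f"best setting by cheap metric: {best}")
```

In practice the cheap metric might count, e.g., covered synchronization points, while the costly one tracks more detailed interleaving information; the paper's contribution is establishing that such correlations exist and can be exploited.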

Acknowledgement

The work was supported by the Czech Science Foundation (project 17-12465S), the internal BUT project FIT-S-17-4014, and the IT4IXS: IT4Innovations Excellence in Science project (LQ1602).

References

  1. Avros, R., Hrubá, V., Křena, B., Letko, Z., Pluháčková, H., Ur, S., Vojnar, T., Volkovich, Z.: Boosted decision trees for behaviour mining of concurrent programs. In: Proceedings of MEMICS 2014. NOVPRESS (2014)
  2. Avros, R., Hrubá, V., Křena, B., Letko, Z., Pluháčková, H., Ur, S., Vojnar, T., Volkovich, Z.: Boosted decision trees for behaviour mining of concurrent programs. Extended version of [1], under submission (2017)
  3. Bensalem, S., Havelund, K.: Dynamic deadlock analysis of multi-threaded programs. In: Ur, S., Bin, E., Wolfsthal, Y. (eds.) HVC 2005. LNCS, vol. 3875, pp. 208–223. Springer, Heidelberg (2006). https://doi.org/10.1007/11678779_15
  4. Bron, A., Farchi, E., Magid, Y., Nir, Y., Ur, S.: Applications of synchronization coverage. In: Proceedings of PPoPP 2005. ACM Press (2005)
  5. Ciancarini, P., Poggi, F., Rossi, D., Sillitti, A.: Mining concurrency bugs. Embed. Multi-Core Syst. Mixed Criticality Summit, CPS Week (2016). http://www.artemis-emc2.eu/fileadmin/user_upload/Publications/2016_EMC2_Summit_Wien/15RCiancariniPoggiRossiSillittiConcurrencyBugs.pdf
  6. Edelstein, O., Farchi, E., Nir, Y., Ratsaby, G., Ur, S.: Multithreaded Java program test generation. IBM Syst. J. 41, 111–125 (2002)
  7. Edelstein, O., Farchi, E., Goldin, E., Nir, Y., Ratsaby, G., Ur, S.: Framework for testing multi-threaded Java programs. Concurrency Comput.: Pract. Experience 15(3–5), 485–499 (2003)
  8. Elmas, T., Qadeer, S., Tasiran, S.: Goldilocks: a race and transaction-aware Java runtime. In: Proceedings of PLDI 2007. ACM Press (2007)
  9. Fiedor, J., Hrubá, V., Křena, B., Letko, Z., Ur, S., Vojnar, T.: Advances in noise-based testing of concurrent software. Softw. Test. Verification Reliab. 25(3), 272–309 (2015)
  10. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. SSS. Springer, New York (2009). https://doi.org/10.1007/978-0-387-84858-7
  11. Hrubá, V., Křena, B., Letko, Z., Ur, S., Vojnar, T.: Testing of concurrent programs using genetic algorithms. In: Fraser, G., Teixeira de Souza, J. (eds.) SSBSE 2012. LNCS, vol. 7515, pp. 152–167. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33119-0_12
  12. Hrubá, V., Křena, B., Letko, Z., Pluháčková, H., Vojnar, T.: Multi-objective genetic optimization for noise-based testing of concurrent software. In: Le Goues, C., Yoo, S. (eds.) SSBSE 2014. LNCS, vol. 8636, pp. 107–122. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-09940-8_8
  13. Hwang, G.-H., Lin, H.-Y., Lin, S.-Y., Lin, C.-S.: Statement-coverage testing for concurrent programs in reachability testing. J. Inf. Sci. Eng. 30(4), 1095–1113 (2014)
  14. James, G., Witten, D., Hastie, T., Tibshirani, R.: An Introduction to Statistical Learning. STS, vol. 103. Springer, New York (2013). https://doi.org/10.1007/978-1-4614-7138-7
  15. Křena, B., Letko, Z., Vojnar, T.: Coverage metrics for saturation-based and search-based testing of concurrent software. In: Khurshid, S., Sen, K. (eds.) RV 2011. LNCS, vol. 7186, pp. 177–192. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29860-8_14
  16. Křena, B., Vojnar, T.: Automated formal analysis and verification: an overview. Int. J. Gen. Syst. 42(4), 335–365 (2013). Taylor and Francis
  17. Kwanghue, J., Amarmend, D., Geunseok, Y., Jung-Won, L., Byungjeong, L.: Bug severity prediction by classifying normal bugs with text and meta-field information. Adv. Sci. Technol. Lett. 129 (2016)
  18. Lu, S., Tucek, J., Qin, F., Zhou, Y.: AVIO: detecting atomicity violations via access interleaving invariants. In: Proceedings of ASPLOS 2006. ACM Press (2006)
  19. Savage, S., Burrows, M., Nelson, G., Sobalvarro, P., Anderson, T.: Eraser: a dynamic data race detector for multi-threaded programs. In: Proceedings of SOSP 1997. ACM Press (1997)
  20. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Roy. Stat. Soc. Ser. B 58(1), 267–288 (1996)
  21. Trainin, E., Nir-Buchbinder, Y., Tzoref-Brill, R., Zlotnick, A., Ur, S., Farchi, E.: Forcing small models of conditions on program interleaving for detection of concurrent bugs. In: Proceedings of PADTAD 2009. ACM Press (2009)
  22. Yu, J., Narayanasamy, S., Pereira, C., Pokam, G.: Maple: a coverage-driven testing tool for multithreaded programs. In: Proceedings of OOPSLA 2012. ACM Press (2012)

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Bohuslav Křena (1)
  • Hana Pluháčková (1, email author)
  • Shmuel Ur (1)
  • Tomáš Vojnar (1)
  1. IT4Innovations Centre of Excellence, FIT, Brno University of Technology, Brno, Czech Republic