On the relation of control-flow and performance feature interactions: a case study

  • Sergiy Kolesnikov
  • Norbert Siegmund
  • Christian Kästner
  • Sven Apel

Abstract

Detecting feature interactions is imperative for accurately predicting the performance of highly configurable systems. State-of-the-art performance prediction techniques rely on supervised machine learning to detect feature interactions, which, in turn, relies on time-consuming performance measurements to obtain training data. By providing information about potentially interacting features, we can reduce the number of required performance measurements and make the overall performance-prediction process more time-efficient. We expect that information about potentially interacting features can be obtained by analyzing the source code of a highly configurable system, which is computationally cheaper than performing multiple performance measurements. To this end, we conducted an in-depth qualitative case study on two real-world systems (mbedTLS and SQLite), in which we explored the relation between internal (specifically, control-flow) feature interactions, detected through static program analysis, and external (specifically, performance) feature interactions, detected by performance-prediction techniques based on performance measurements. We found that such a relation exists and that it can potentially be exploited to predict performance interactions.
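
To make the notion of a performance feature interaction more concrete, the following minimal sketch (a hypothetical illustration, not the authors' implementation; the option names, the synthetic measurements, and the use of scikit-learn are assumptions) shows how such an interaction surfaces as a non-zero coefficient on a pairwise term when a linear performance-influence model is fitted to measured configurations:

```python
# Minimal sketch (not the authors' tooling): learning a performance-influence
# model over binary configuration options plus one pairwise interaction term.
# Option names and measurements are hypothetical, for illustration only.
import itertools

import numpy as np
from sklearn.linear_model import LinearRegression

options = ["ENCRYPTION", "COMPRESSION"]                 # hypothetical options
configs = np.array(list(itertools.product([0, 1], repeat=len(options))))

# Hypothetical measurements: each option adds cost on its own, and enabling
# both together adds 7 extra units -- that surplus is the interaction.
perf = 10 + 5 * configs[:, 0] + 3 * configs[:, 1] + 7 * configs[:, 0] * configs[:, 1]

# Feature matrix: the individual options plus the pairwise term A*B.
X = np.column_stack([configs, configs[:, 0] * configs[:, 1]])
model = LinearRegression().fit(X, perf)

terms = options + ["ENCRYPTION*COMPRESSION"]
print(dict(zip(terms, model.coef_.round(2))))
# A clearly non-zero coefficient on the interaction term indicates that the
# two options influence performance jointly, not merely individually.
```

Obtaining the training data for such a model requires measuring many configurations, which is why static hints about which options interact (e.g., via control flow) can reduce the measurement effort.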

Keywords

Highly configurable software system · Feature · Feature interaction · Feature-interaction prediction · Control-flow feature interaction · Performance feature interaction · Variability

Acknowledgements

Kolesnikov’s and Apel’s work has been supported by the German Research Foundation (AP 206/5, AP 206/6, AP 206/7, AP 206/11) and by the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT), project no. 849928. Siegmund’s work has been supported by the German Research Foundation under contracts SI 2171/2 and SI 2171/3. Kästner’s work has been supported in part by the National Science Foundation (awards 1318808, 1552944, and 1717022), the Science of Security Lablet (H9823014C0140), and AFRL and DARPA (FA8750-16-2-0042).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. University of Passau, Passau, Germany
  2. Bauhaus-University Weimar, Weimar, Germany
  3. Carnegie Mellon University, Pittsburgh, USA
  4. Saarland University, Saarbrücken, Germany
