The increasing adoption of open source software (OSS) components in software systems introduces new quality risks and testing challenges. OSS components are developed and maintained by open communities, and fluctuations in community membership and structure can destabilize software quality. An investigation is therefore needed to analyze the impact of open-community dynamics, such as the level and trends of internal communications and content distribution, on the quality of the OSS. The analysis results provide inputs that drive selective testing for effective validation and verification of OSS components. This paper suggests an approach for continuously monitoring community dynamics, including communications such as email and blogs, and repositories of bugs and fixes. Patterns detected in the monitored behavior, such as changes in traffic levels within and across clusters, can in turn be used to drive testing efforts. Our proposal is demonstrated on the XWiki OSS, a Java-based environment that allows structured data to be stored and server-side scripts to be executed within the wiki interface. We illustrate the concepts and methods behind this approach to risk-based testing of OSS.
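The monitoring idea described above (tracking communication traffic per community cluster and reacting to sharp changes) can be sketched in a few lines. The following is an illustrative sketch only, not the authors' implementation: the cluster names, traffic data, and the z-score threshold are hypothetical assumptions chosen for the example.

```python
# Sketch: flag community clusters whose recent communication traffic
# deviates sharply from their historical baseline, as a trigger for
# selective testing of the related OSS components.
from statistics import mean, stdev

def flag_unstable_clusters(weekly_counts, recent_weeks=4, z_threshold=2.0):
    """weekly_counts: {cluster_name: [messages_in_week_1, week_2, ...]}.
    Returns (cluster, z-score) pairs whose mean traffic over the last
    `recent_weeks` lies more than z_threshold standard deviations
    from the historical baseline."""
    flagged = []
    for cluster, counts in weekly_counts.items():
        baseline, recent = counts[:-recent_weeks], counts[-recent_weeks:]
        if len(baseline) < 2:
            continue  # not enough history to estimate a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1e-9  # guard against perfectly flat baselines
        z = abs(mean(recent) - mu) / sigma
        if z > z_threshold:
            flagged.append((cluster, round(z, 2)))
    return flagged

# Hypothetical weekly message counts for two clusters.
traffic = {
    "core-devs": [40, 42, 38, 41, 39, 40, 12, 10, 11, 9],   # sharp drop
    "docs-team": [15, 14, 16, 15, 14, 15, 15, 16, 14, 15],  # stable
}
print(flag_unstable_clusters(traffic))  # only "core-devs" is flagged
```

A sudden drop in a cluster's traffic (here, "core-devs") would mark the components that cluster maintains as higher-risk, prioritizing them for testing; in practice one would use a richer change-detection method than a simple z-score.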


Keywords: Open Source Software · Confusion Matrix · Software Quality · Email Communication · Open Source Software Project





Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Inbal Yahav (1)
  • Ron S. Kenett (2, 3, 4)
  • Xiaoying Bai (5)

  1. Graduate School of Business Administration, Bar Ilan University, Israel
  2. The KPA Group, Israel
  3. Univ. of Torino, Italy
  4. NYU-Poly, USA
  5. Dept. of Comp. Science and Technology, Tsinghua University, China
