Detection of Seed Methods for Quantification of Feature Confinement

  • Andrzej Olszak
  • Eric Bouwers
  • Bo Nørregaard Jørgensen
  • Joost Visser
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7304)

Abstract

The way features are implemented in source code has a significant influence on multiple quality aspects of a software system. Hence, it is important to regularly evaluate the quality of feature confinement. Unfortunately, existing approaches to such measurement rely on expert judgement for tracing links between features and source code, which hinders the ability to perform cost-efficient and consistent evaluations over time or across a large portfolio of systems.

In this paper, we propose an approach to automating the measurement of feature confinement by detecting the methods that play a central role in the implementations of features, the so-called seed methods, and using them as starting points for a static slicing algorithm. We show that this approach achieves the same level of performance as the use of manually identified seed methods. Furthermore, we illustrate the scalability of the approach by tracking the evolution of feature scattering and tangling in an open-source project over a period of ten years.
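The abstract leaves the seed-detection criterion and the slicing algorithm unspecified; the following Java sketch only illustrates the overall shape of such a pipeline under assumed choices: a fan-in threshold as the seed heuristic and call-graph reachability as a stand-in for static slicing. The class names, method names, and toy call graph are all hypothetical.

import java.util.*;

/**
 * Minimal sketch of the two-step idea from the abstract: detect "seed methods"
 * (here via a simple fan-in heuristic, an assumed criterion; the paper's actual
 * detection rules are not given in this excerpt) and grow a feature's extent by
 * a static slice, approximated here as a reachability walk over a call graph.
 */
public class SeedSliceSketch {

    // Toy adjacency list: caller -> callees. Method names are "Class.method".
    static final Map<String, List<String>> CALLS = Map.of(
        "Ui.savePressed",  List.of("Store.save"),
        "Api.saveRequest", List.of("Store.save"),
        "Cli.saveCommand", List.of("Store.save"),
        "Store.save",      List.of("Store.serialize", "Log.write"),
        "Store.serialize", List.of(),
        "Log.write",       List.of()
    );

    /** Seed detection: methods whose fan-in meets a threshold (assumed heuristic). */
    static Set<String> detectSeeds(int minFanIn) {
        Map<String, Integer> fanIn = new HashMap<>();
        CALLS.values().forEach(cs -> cs.forEach(c -> fanIn.merge(c, 1, Integer::sum)));
        Set<String> seeds = new TreeSet<>();
        fanIn.forEach((m, n) -> { if (n >= minFanIn) seeds.add(m); });
        return seeds;
    }

    /** "Slice": every method statically reachable from the seeds. */
    static Set<String> slice(Set<String> seeds) {
        Set<String> seen = new TreeSet<>(seeds);
        Deque<String> work = new ArrayDeque<>(seeds);
        while (!work.isEmpty()) {
            for (String callee : CALLS.getOrDefault(work.pop(), List.of())) {
                if (seen.add(callee)) work.push(callee);
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        Set<String> seeds = detectSeeds(3);            // picks Store.save in the toy graph
        Set<String> extent = slice(seeds);
        Set<String> classes = new TreeSet<>();
        extent.forEach(m -> classes.add(m.split("\\.")[0]));
        System.out.println("Seeds:   " + seeds);
        System.out.println("Extent:  " + extent);
        // Scattering could then be quantified, e.g., as the number of classes touched.
        System.out.println("Classes: " + classes.size() + " " + classes);
    }
}

On this toy input the sketch reports a feature extent spanning two classes, which hints at how a scattering measure could be read off the slice; the real approach would operate on a full program's call graph and data dependences rather than a hand-coded map.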

Keywords

Source Code · Ground Truth · Code Unit · Feature Implementation · Static Trace
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Andrzej Olszak (1)
  • Eric Bouwers (2, 3)
  • Bo Nørregaard Jørgensen (1)
  • Joost Visser (2)

  1. University of Southern Denmark, Odense, Denmark
  2. Software Improvement Group, Amsterdam, The Netherlands
  3. Delft University of Technology, Delft, The Netherlands