Information Systems Frontiers, Volume 18, Issue 5, pp 825–853

Ranking software components for reuse based on non-functional properties


Abstract

One of the biggest obstacles to software reuse is the cost involved in evaluating the suitability of candidate reusable components. In recent years, code search engines have made significant progress in establishing the semantic suitability of components for new usage scenarios, but the problem of ranking components according to their non-functional suitability has largely been neglected. The main difficulty is that a component’s non-functional suitability for a specific reuse scenario is usually influenced by multiple, “soft” criteria, and the relative weighting of metrics for these criteria is rarely known quantitatively. What is required, therefore, is an effective and reliable strategy for ranking software components based on their non-functional properties without requiring users to provide quantitative weighting information. In this paper we present a novel approach for achieving this based on the non-dominated sorting of components, driven by a specification of the relative importance of non-functional properties as a partial ordering. After describing the ranking algorithm and its implementation in a component search engine, we provide an exploratory study of its properties on a sample set of components harvested from Maven Central.
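The core idea of non-dominated sorting mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm (which additionally incorporates a user-supplied partial ordering over property importance); it only shows the generic "Pareto front peeling" step on hypothetical components, each described by a tuple of quality metrics where higher values are assumed to be better:

```python
# Minimal sketch of non-dominated sorting: components are ranked into
# successive "fronts", where front 0 contains the components that no other
# component dominates. All names and metric values below are hypothetical.

def dominates(a, b):
    """a dominates b if a is at least as good on every metric (higher is
    better here) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_sort(components):
    """Peel off non-dominated fronts; returns a list of fronts (best first)."""
    remaining = dict(components)  # name -> tuple of metric values
    fronts = []
    while remaining:
        # A component belongs to the current front if nothing else dominates it.
        front = [name for name, m in remaining.items()
                 if not any(dominates(other, m)
                            for k, other in remaining.items() if k != name)]
        fronts.append(sorted(front))
        for name in front:
            del remaining[name]
    return fronts

# Hypothetical metric tuples: (test coverage, documentation ratio)
components = {
    "libA": (0.9, 0.7),
    "libB": (0.6, 0.9),   # incomparable with libA, so same front
    "libC": (0.5, 0.5),   # dominated by libA, so a later front
}
print(non_dominated_sort(components))  # -> [['libA', 'libB'], ['libC']]
```

The key property, in contrast to a weighted-sum score, is that mutually incomparable components (here `libA` and `libB`) end up in the same rank without the user ever supplying numeric weights.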

Keywords

Software quality · Software metrics · Software components · Non-functional requirements · Software testing · Software reuse · Software component ranking · Sorting · Multi-criteria decision making


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

University of Mannheim, Mannheim, Germany
