Data-Driven Decisions and Actions in Today’s Software Development

Abstract

Today’s software development is all about data: data about the software product itself, about the process and its different stages, about the customers and markets, about the development, the testing, the integration, the deployment, or the runtime aspects in the cloud. We use static and dynamic data of various kinds and quantities to analyze market feedback, feature impact, code quality, architectural design alternatives, or effects of performance optimizations. Development environments are no longer limited to desktop IDEs but span the Internet, using live programming environments such as Cloud9, large-volume repositories such as Bitbucket, GitHub, and GitLab, and knowledge platforms such as Stack Overflow. Software development has become “live” in the cloud, be it the coding, the testing, or the experimentation with different product options on the Internet. The inherent complexity puts a further burden on developers, since they need to stay alert when constantly switching between tasks in different phases. Research has been analyzing the development process, its data, and its stakeholders for decades and is working on various tools that can help developers in their daily tasks to improve the quality of their work and their productivity. In this chapter, we critically reflect on the challenges faced by developers in a typical release cycle, identify inherent problems of the individual phases, and present the current state of research that can help overcome these issues.


Author information

Correspondence to Sebastian Proksch.

Rights and permissions

This chapter is published under an open access license.

Copyright information

© 2018 The Author(s)

About this chapter

Cite this chapter

Gall, H. et al. (2018). Data-Driven Decisions and Actions in Today’s Software Development. In: Gruhn, V., Striemer, R. (eds) The Essence of Software Engineering. Springer, Cham. https://doi.org/10.1007/978-3-319-73897-0_9

  • DOI: https://doi.org/10.1007/978-3-319-73897-0_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-73896-3

  • Online ISBN: 978-3-319-73897-0

  • eBook Packages: Computer Science (R0)