An empirical assessment of best-answer prediction models in technical Q&A sites

Abstract

Technical Q&A sites have become essential for software engineers, who constantly seek help from other experts to solve problems in their work. Despite the success of these sites, many questions remain unresolved, sometimes because the asker does not acknowledge any helpful answer. In these cases, an information seeker can only browse all the answers within a question thread to assess their quality as potential solutions. We approach this time-consuming problem as a binary-classification task in which a best-answer prediction model is built to identify the accepted answer within a resolved question thread, as well as the candidate solutions to questions that have received answers but are still unresolved. In this paper, we report on a study assessing 26 best-answer prediction models in two steps. First, we study how the models perform when predicting best answers on Stack Overflow, the most popular Q&A site for software engineers. Then, we assess performance in a cross-platform setting where the prediction models are trained on Stack Overflow and tested on other technical Q&A sites. Our findings show that the choice of classifier and automated parameter tuning have a large impact on the prediction of the best answer. We also demonstrate that our approach to the best-answer prediction problem generalizes across technical Q&A sites. Finally, we provide practical recommendations for Q&A platform designers to curate and preserve the crowdsourced knowledge shared through these sites.
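To make the approach concrete, the following is a minimal sketch of the setup described in the abstract: train a best-answer classifier on Stack Overflow data, tune its parameters automatically, and test it on answers from a different technical Q&A site. This is an illustration, not the authors' implementation; the feature names, file names, and the choice of a random forest with random-search tuning are assumptions (the study itself compares 26 classifiers, using tools such as R with caret, Weka, and scikit-learn; see the Notes below).

    # Minimal sketch (not the authors' code): best-answer prediction as binary
    # classification with automated parameter tuning and cross-platform testing.
    # Feature names and CSV files are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import randint
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import RandomizedSearchCV

    FEATURES = ["answer_length", "code_blocks", "readability",
                "answerer_reputation", "answer_rank"]  # illustrative features

    # Each row is one answer; is_accepted = 1 if it was marked as the solution.
    so = pd.read_csv("stackoverflow_answers.csv")       # hypothetical export
    other = pd.read_csv("other_platform_answers.csv")   # e.g., another forum dump

    X_train, y_train = so[FEATURES], so["is_accepted"]
    X_test, y_test = other[FEATURES], other["is_accepted"]

    # Automated parameter tuning via random search over the classifier's
    # hyperparameters; class_weight="balanced" counters the imbalance that
    # comes from having one accepted answer among many in a thread.
    search = RandomizedSearchCV(
        RandomForestClassifier(class_weight="balanced", random_state=42),
        param_distributions={"n_estimators": randint(100, 1000),
                             "max_depth": randint(3, 30)},
        n_iter=20, scoring="roc_auc", cv=5, random_state=42)
    search.fit(X_train, y_train)

    # Cross-platform assessment: evaluate on a different Q&A site using AUC,
    # a threshold-independent metric that is robust to class imbalance.
    probs = search.predict_proba(X_test)[:, 1]
    print(f"Cross-platform AUC: {roc_auc_score(y_test, probs):.3f}")

Training on one platform and evaluating on another mirrors the paper's cross-platform step; since the held-out platform's feature distribution may shift, a threshold-independent metric such as AUC is a sensible default for the comparison.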

Notes

  1. http://stackoverflow.com/research/developer-survey-2016
  2. http://data.stackexchange.com/stackoverflow/query/759408/daily-posts-in-2016
  3. https://answers.yahoo.com
  4. http://scn.sap.com
  5. https://github.com/collab-uniba/emse_best-answer-prediction
  6. https://business.stackoverflow.com/enterprise
  7. https://www.r-project.org
  8. http://www.cs.waikato.ac.nz/ml/weka
  9. http://scikit-learn.org
  10. https://archive.org/details/stackexchange
  11. https://www.docusign.com
  12. https://www.dwolla.com
  13. http://scrapy.org
  14. http://www.seleniumhq.org/projects/webdriver
  15. https://cran.r-project.org/package=caret
  16. https://github.com/collab-uniba/emse_best-answer-prediction/tree/master/additional_material
  17. https://m2.icm.edu.pl/boruta
  18. https://stackoverflow.com/questions/20864634
  19. https://stackoverflow.com/a/20864795
  20. https://stackoverflow.com/a/20864807
  21. https://stackoverflow.com/questions/27727589
  22. https://stackoverflow.com/questions/27727433
  23. https://stackoverflow.com/a/27727523
  24. https://stackoverflow.com/a/27729675
  25. https://github.com/collab-uniba/emse_best-answer-prediction/tree/master/additional_material

Acknowledgements

We thank Stack Overflow for providing their data. We also thank Burak Turhan for his comments on cross-context defect prediction, and Margaret-Anne Storey, Alexey Zagalsky, and Daniel M. German for their feedback on the study. This work is partially supported by the project "EmoQuest - Investigating the Role of Emotions in Online Question & Answer Sites", funded by the Italian Ministry of Education, University and Research (MIUR) under the program "Scientific Independence of young Researchers" (SIR). The computational work has been executed on the IT resources made available by two projects, ReCaS and PRISMA, funded by MIUR under the program "PON R&C 2007-2013."

Author information

Corresponding author

Correspondence to Fabio Calefato.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Communicated by: David Lo

About this article

Cite this article

Calefato, F., Lanubile, F. & Novielli, N. An empirical assessment of best-answer prediction models in technical Q&A sites. Empir Software Eng 24, 854–901 (2019). https://doi.org/10.1007/s10664-018-9642-5

Keywords

  • Cross-platform prediction
  • Q&A
  • Stack Overflow
  • Crowdsourcing
  • Knowledge sharing
  • Imbalanced datasets