
An empirical assessment of best-answer prediction models in technical Q&A sites


Abstract

Technical Q&A sites have become essential for software engineers, who constantly seek help from other experts to solve their work problems. Despite their success, many questions remain unresolved, sometimes because the asker does not acknowledge any helpful answer. In these cases, an information seeker can only browse all the answers within a question thread to assess their quality as potential solutions. We approach this time-consuming problem as a binary-classification task in which a best-answer prediction model is built to identify both the accepted answer within a resolved question thread and the candidate solutions to questions that have received answers but remain unresolved. In this paper, we report on a study assessing 26 best-answer prediction models in two steps. First, we study how the models perform when predicting best answers on Stack Overflow, the most popular Q&A site for software engineers. Then, we assess performance in a cross-platform setting where the prediction models are trained on Stack Overflow and tested on other technical Q&A sites. Our findings show that the choice of classifier and automated parameter tuning have a large impact on best-answer prediction. We also demonstrate that our approach to the best-answer prediction problem generalizes across technical Q&A sites. Finally, we provide practical recommendations to Q&A platform designers for curating and preserving the crowdsourced knowledge shared through these sites.
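
To make the setup concrete, the sketch below illustrates the binary-classification framing described in the abstract, using scikit-learn (one of the toolkits listed in the notes): each answer becomes a feature vector, the positive class marks the accepted answer, hyperparameters are tuned automatically, and a model trained on one platform is evaluated on another. The synthetic data, the feature names, the random-forest classifier, and the tuning grid are illustrative assumptions for this sketch only, not the paper's actual pipeline, feature set, or any of its 26 evaluated models.

```python
# Illustrative sketch of best-answer prediction as binary classification.
# All data and feature names are fabricated for demonstration purposes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)

def synthetic_answers(n):
    """Fake per-answer features: length, score, answerer reputation, age."""
    X = rng.random((n, 4))
    # Accepted answers are a small minority, as in real Q&A threads.
    y = (X[:, 1] + 0.3 * rng.standard_normal(n) > 0.8).astype(int)
    return X, y

X_so, y_so = synthetic_answers(2000)       # training platform
X_other, y_other = synthetic_answers(500)  # cross-platform test set

# Automated hyperparameter tuning, which the study reports has a large
# impact on prediction performance.
search = RandomizedSearchCV(
    RandomForestClassifier(class_weight="balanced", random_state=0),
    param_distributions={
        "n_estimators": [100, 300, 500],
        "max_depth": [None, 5, 10, 20],
        "min_samples_leaf": [1, 5, 10],
    },
    n_iter=10,
    scoring="roc_auc",  # threshold-free metric, robust to class imbalance
    cv=5,
    random_state=0,
)
search.fit(X_so, y_so)

# Cross-platform evaluation: train on one site, test on another. Within a
# thread, answers can be ranked by their predicted acceptance probability.
probs = search.predict_proba(X_other)[:, 1]
print(f"Cross-platform AUC: {roc_auc_score(y_other, probs):.3f}")
```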


Notes

  1. http://stackoverflow.com/research/developer-survey-2016

  2. http://data.stackexchange.com/stackoverflow/query/759408/daily-posts-in-2016

  3. https://answers.yahoo.com

  4. http://scn.sap.com

  5. https://github.com/collab-uniba/emse_best-answer-prediction

  6. https://business.stackoverflow.com/enterprise

  7. https://www.r-project.org

  8. http://www.cs.waikato.ac.nz/ml/weka

  9. http://scikit-learn.org

  10. https://archive.org/details/stackexchange

  11. https://www.docusign.com

  12. https://www.dwolla.com

  13. http://scrapy.org

  14. http://www.seleniumhq.org/projects/webdriver

  15. https://cran.r-project.org/package=caret

  16. https://github.com/collab-uniba/emse_best-answer-prediction/tree/master/additional_material

  17. https://m2.icm.edu.pl/boruta

  18. https://stackoverflow.com/questions/20864634

  19. https://stackoverflow.com/a/20864795

  20. https://stackoverflow.com/a/20864807

  21. https://stackoverflow.com/questions/27727589

  22. https://stackoverflow.com/questions/27727433

  23. https://stackoverflow.com/a/27727523

  24. https://stackoverflow.com/a/27729675

  25. https://github.com/collab-uniba/emse_best-answer-prediction/tree/master/additional_material


Acknowledgements

We thank Stack Overflow for providing their data. We also thank Burak Turhan for his comments on cross-context defect prediction, and Margaret-Anne Storey, Alexey Zagalsky, and Daniel M. German for their feedback on the study. This work is partially supported by the project ‘EmoQuest - Investigating the Role of Emotions in Online Question & Answer Sites’, funded by the Italian Ministry of Education, University and Research (MIUR) under the program “Scientific Independence of young Researchers” (SIR). The computational work has been executed on the IT resources made available by two projects, ReCaS and PRISMA, funded by MIUR under the program “PON R&C 2007-2013.”

Author information


Corresponding author

Correspondence to Fabio Calefato.

Additional information

Communicated by: David Lo



About this article


Cite this article

Calefato, F., Lanubile, F. & Novielli, N. An empirical assessment of best-answer prediction models in technical Q&A sites. Empir Software Eng 24, 854–901 (2019). https://doi.org/10.1007/s10664-018-9642-5

