
Research Progress in the Processing of Crowdsourced Test Reports

  • Naiqi Wang
  • Lizhi Cai
  • Mingang Chen
  • Chuwei Zhang
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 309)

Abstract

In recent years, crowdsourced testing, which uses collective intelligence to solve complex software testing tasks, has gained widespread attention in academia and industry. However, because a large number of workers participate in crowdsourced testing tasks, the set of submitted test reports is often very large, making it difficult for developers to review the reports. How to effectively process and integrate crowdsourced test reports is therefore a significant challenge in the crowdsourced testing process. This paper surveys the processing of crowdsourced test reports, reviews recent achievements in this field, and classifies, summarizes, and compares existing research results along four directions: duplicate report detection, test report aggregation and classification, priority ranking, and report summarization. Finally, it explores possible research directions, opportunities, and challenges for crowdsourced test reports.
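As an illustration of the first direction (duplicate report detection), the following is a minimal sketch, not taken from the paper, that flags likely duplicate reports using TF-IDF cosine similarity over report text; the sample reports and the similarity threshold are hypothetical, and a real pipeline would also exploit screenshots, device/environment fields, and reproduction steps.

```python
# Illustrative sketch (not the surveyed authors' method): flag likely
# duplicate crowdsourced test reports via TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical report texts submitted by crowd workers.
reports = [
    "App crash when tapping the login button on Android",
    "Tapping the login button makes the app crash on Android",
    "Video playback stutters after resuming from background",
]

# Vectorize the report texts and compute pairwise cosine similarity.
vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
similarity = cosine_similarity(vectors)

# Hand-picked threshold; pairs above it are duplicate candidates.
THRESHOLD = 0.5
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Reports {i} and {j} are duplicate candidates "
                  f"(similarity = {similarity[i, j]:.2f})")
```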

Keywords

Software testing · Crowdsourced testing · Reports processing

Notes

Acknowledgment

This work is funded by the National Key R&D Program of China (No. 2018YFB1403400).


Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2020

Authors and Affiliations

  • Naiqi Wang (1, 2)
  • Lizhi Cai (1, 2)
  • Mingang Chen (2), corresponding author
  • Chuwei Zhang (3)
  1. School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
  2. Shanghai Key Laboratory of Computer Software Testing and Evaluating, Shanghai Development Center of Computer Software Technology, Shanghai, China
  3. Shanghai Foreign Affairs Service Center, Shanghai, China
