Expert Sourcing to Support the Identification of Model Elements in System Descriptions

Conference paper
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 302)


Context. Expert sourcing is a novel approach to support quality assurance: it applies methods and tooling from crowdsourcing research to split model quality assurance tasks and to parallelize task execution across several expert users. Typical quality assurance tasks focus on checking an inspection object, e.g., a model, against a reference document, e.g., a requirements specification, that is considered correct. For example, given a text-based system description and a corresponding model such as an Extended Entity Relationship (EER) diagram, experts are guided towards inspecting the model based on so-called expected model elements (EMEs). EMEs are entities, attributes, and relations that appear in the text and are reflected by the corresponding model. In common inspection tasks, EMEs are not explicitly expressed but only implicitly available in the textual description. A key improvement is therefore to make EMEs explicit by using crowdsourcing mechanisms to drive model quality assurance among experts.

Objective and Method. In this paper, we investigate the effectiveness of identifying EMEs through expert sourcing. To that end, we perform a feasibility study in which we compare EMEs identified through expert sourcing with EMEs provided by a task owner who has deep knowledge of the entire system specification text.

Conclusions. The data analysis shows that the effectiveness of the crowdsourcing-style EME acquisition is influenced by the complexity of the EMEs: entity EMEs can be harvested with high recall and precision, whereas the lexical and semantic variations of attribute EMEs hamper automatic aggregation and consensus building (these EMEs are harvested with high precision but limited recall). Based on these lessons learned, we propose a new task design for expert sourcing of EMEs.
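To illustrate the comparison described above, the following minimal sketch (in Python) aggregates EME mentions reported by several experts via majority vote and scores the result against the task owner's gold-standard EMEs using precision and recall. All function names, the vote threshold, and the toy data are illustrative assumptions, not the paper's actual tooling or study data.

```python
# Minimal sketch (illustrative only, not the paper's tooling): aggregate
# expert-sourced EME mentions by majority vote and compare the harvested
# set against the task owner's gold-standard EMEs.
from collections import Counter

def aggregate_emes(expert_answers, min_votes=2):
    """Keep an EME if at least `min_votes` experts reported it.

    expert_answers: list of sets, one set of normalized EME labels per
    expert, e.g. {("entity", "customer"), ("attribute", "customer.name")}.
    """
    votes = Counter(eme for answers in expert_answers for eme in answers)
    return {eme for eme, count in votes.items() if count >= min_votes}

def precision_recall(harvested, gold):
    """Score harvested EMEs against the gold-standard EME set."""
    true_positives = len(harvested & gold)
    precision = true_positives / len(harvested) if harvested else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Toy usage: the entity EMEs reach consensus, but the attribute EME is
# labeled differently by each expert and fails to aggregate.
experts = [
    {("entity", "customer"), ("entity", "order"), ("attribute", "customer.name")},
    {("entity", "customer"), ("entity", "order")},
    {("entity", "customer"), ("attribute", "customer.full_name")},
]
gold = {("entity", "customer"), ("entity", "order"), ("attribute", "customer.name")}

print(precision_recall(aggregate_emes(experts), gold))  # (1.0, 0.667)
```

In this toy example, the entity EMEs are harvested correctly, while the attribute EME is lost because the experts used different labels for it, yielding high precision but limited recall and mirroring the pattern summarized in the conclusions.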


Keywords: Review · Models · Model quality assurance · Model elements · Empirical study · Feasibility study · Crowdsourcing · Task design



We would like to thank the students of the software quality course at Vienna University of Technology in the winter term 2016/2017 for participating in the study.



Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Institute of Software Technology and Interactive Systems, Vienna University of Technology, Vienna, Austria
