
E-CAT: Evaluating Crowdsourced Android Testing

  • Conference paper
  • First Online:

Data Science (ICPCSEE 2018)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 901)

Abstract

Every day, millions of crowdsourcing tasks are completed in exchange for payment. Pricing plays an important role in crowdsourcing campaigns, not only for the interests of requesters and workers, but also for fair competition among crowdsourcing markets and their sustainable development. Previous pricing strategies are all based on evaluating results; however, in the scenario of crowdsourced Android testing (CAT), a worker's testing process is a factor that cannot be overlooked. In this paper, we propose a unified model that combines Evaluation of both the process and the results of CAT (E-CAT). Based on E-CAT, we can then construct a pricing strategy for CAT. On one hand, E-CAT enables requesters to examine a worker's testing process in terms of both depth and width. On the other hand, it helps requesters evaluate the outcomes produced by each worker.
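To make the idea concrete, the following is a minimal illustrative sketch, not the paper's actual model: it assumes a hypothetical process score derived from the depth and width of a worker's exploration of the app under test, blends it with a result score, and uses the combined value to price the worker's contribution. All names, normalisation caps, weights, and formulas below are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class WorkerReport:
    max_activity_depth: int   # deepest screen/activity reached in the worker's test traces
    activities_visited: int   # distinct screens/activities covered ("width")
    total_activities: int     # activities known to exist in the app under test
    valid_bug_reports: int    # reports the requester accepted as genuine bugs
    submitted_reports: int    # all reports the worker submitted


def ecat_score(r: WorkerReport, alpha: float = 0.5) -> float:
    """Blend process coverage and result quality into one score in [0, 1]."""
    depth = min(r.max_activity_depth / 10.0, 1.0)               # normalised depth (cap of 10 assumed)
    width = r.activities_visited / max(r.total_activities, 1)   # fraction of the app explored
    process = 0.5 * depth + 0.5 * width                         # process side: depth and width
    result = r.valid_bug_reports / max(r.submitted_reports, 1)  # result side: precision of reports
    return alpha * process + (1 - alpha) * result


def payment(r: WorkerReport, base_reward: float = 10.0) -> float:
    """Toy pricing rule: pay proportionally to the unified score."""
    return base_reward * ecat_score(r)


if __name__ == "__main__":
    worker = WorkerReport(max_activity_depth=6, activities_visited=12,
                          total_activities=20, valid_bug_reports=3,
                          submitted_reports=4)
    print(f"score={ecat_score(worker):.2f}, pay=${payment(worker):.2f}")

The linear blend with a single weight alpha is only one way such a combination could be made; the point of the sketch is that the process signals (depth, width) and the result signal enter the price jointly rather than the price depending on results alone.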



Acknowledgment

This work is supported in part by the National Key Research and Development Program of China (2016YFC0800805), and the National Key Technology Research and Development Program of China (2015BAJ04B00).

Author information


Corresponding author

Correspondence to Tieke He.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Lian, H., Qin, Z., Song, H., He, T. (2018). E-CAT: Evaluating Crowdsourced Android Testing. In: Zhou, Q., Gan, Y., Jing, W., Song, X., Wang, Y., Lu, Z. (eds) Data Science. ICPCSEE 2018. Communications in Computer and Information Science, vol 901. Springer, Singapore. https://doi.org/10.1007/978-981-13-2203-7_39

Download citation

  • DOI: https://doi.org/10.1007/978-981-13-2203-7_39

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-2202-0

  • Online ISBN: 978-981-13-2203-7

  • eBook Packages: Computer Science, Computer Science (R0)
