
Crowdsourcing in QoE Evaluation

A chapter in the book Quality of Experience, part of the T-Labs Series in Telecommunication Services (TLABS).

Abstract

Crowdsourcing opens up new possibilities for QoE evaluation by moving the evaluation task from the traditional laboratory environment onto the Internet, giving researchers easy access to a global pool of subjects. This not only makes it possible to include a more diverse population and real-life environments in the evaluation, but also significantly reduces the turn-around time and increases the number of subjects participating in an evaluation campaign, since the bottlenecks of the traditional laboratory setup are circumvented. To utilise these advantages, however, the differences between laboratory-based and crowd-based QoE evaluation must be taken into account; in this chapter we therefore discuss these differences and their impact on QoE evaluation.
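
As a concrete illustration of how crowd-based evaluation differs from the laboratory, the following minimal Python sketch shows one common pattern in crowdsourced QoE campaigns: screening workers against gold-standard questions before aggregating their ratings into mean opinion scores (MOS). All worker names, stimuli, rating values, and the tolerance threshold below are hypothetical examples, not material from the chapter.

    import statistics

    # Hypothetical crowdsourced ratings: worker id -> {stimulus id: rating on a 5-point ACR scale}.
    ratings = {
        "worker_a": {"clip_1": 4, "clip_2": 2, "gold_clip": 5},
        "worker_b": {"clip_1": 5, "clip_2": 1, "gold_clip": 5},
        "worker_c": {"clip_1": 1, "clip_2": 5, "gold_clip": 1},  # fails the gold-standard check
    }

    # Gold-standard stimuli with known expected answers, used to screen out unreliable workers.
    gold_answers = {"gold_clip": 5}
    TOLERANCE = 1  # accept gold ratings within +/- 1 of the expected value

    def is_reliable(worker_ratings):
        """A worker passes if every gold-standard rating is close to the expected value."""
        return all(
            abs(worker_ratings.get(stimulus, -10) - expected) <= TOLERANCE
            for stimulus, expected in gold_answers.items()
        )

    reliable = {w: r for w, r in ratings.items() if is_reliable(r)}

    # Mean opinion score (MOS) per test stimulus, computed only from screened workers.
    stimuli = {s for r in reliable.values() for s in r if s not in gold_answers}
    mos = {
        s: statistics.mean(r[s] for r in reliable.values() if s in r)
        for s in stimuli
    }
    print(mos)  # e.g. {'clip_1': 4.5, 'clip_2': 1.5}

In practice such content checks are usually combined with further reliability measures, for example consistency questions or the analysis of implausibly short task completion times.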

Author information

Correspondence to Tobias Hoßfeld.

Copyright information

© 2014 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Hoßfeld, T., Keimel, C. (2014). Crowdsourcing in QoE Evaluation. In: Möller, S., Raake, A. (eds) Quality of Experience. T-Labs Series in Telecommunication Services. Springer, Cham. https://doi.org/10.1007/978-3-319-02681-7_21

  • DOI: https://doi.org/10.1007/978-3-319-02681-7_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-02680-0

  • Online ISBN: 978-3-319-02681-7
