Abstract
Crowdsourcing enables new possibilities for QoE evaluation by moving the evaluation task from the traditional laboratory environment into the Internet, allowing researchers to easily access a global pool of subjects. This not only makes it possible to include a more diverse population and real-life environments in the evaluation, but also significantly reduces the turnaround time and increases the number of subjects participating in an evaluation campaign, since the bottlenecks of the traditional laboratory setup are circumvented. In order to exploit these advantages, however, the differences between laboratory-based and crowd-based QoE evaluation must be taken into account. In this chapter, we therefore discuss both these differences and their impact on QoE evaluation.
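One practical consequence of this shift is that neither the test environment nor the subjects' diligence can be controlled as in the laboratory, so crowd-based studies typically screen the collected ratings for unreliable participants before computing mean opinion scores (MOS). The following Python sketch illustrates one simple screening idea; the correlation-based criterion, the threshold value and all names are illustrative assumptions, not the method of this chapter.

from statistics import mean
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation of two equal-length rating vectors.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    denom = sqrt(var_x * var_y)
    return cov / denom if denom > 0 else 0.0

def screened_mos(ratings, min_corr=0.5):
    # ratings: {worker: [score per stimulus]} on a 1-5 ACR scale.
    # Drops workers whose ratings correlate poorly with the rest of
    # the crowd, then returns the per-stimulus mean opinion scores.
    workers = list(ratings)
    kept = []
    for w in workers:
        others = [ratings[v] for v in workers if v != w]
        # Leave-one-out crowd mean for each stimulus
        crowd = [mean(col) for col in zip(*others)]
        if pearson(ratings[w], crowd) >= min_corr:
            kept.append(w)
    return [mean(col) for col in zip(*(ratings[w] for w in kept))]

# Example: three consistent raters and one random clicker.
votes = {
    "w1": [5, 4, 2, 1], "w2": [5, 5, 2, 1],
    "w3": [4, 4, 3, 2], "w4": [1, 5, 1, 5],  # inconsistent rater
}
print(screened_mos(votes))  # MOS over the three reliable workers

Running the sketch drops the inconsistent fourth worker and returns the MOS over the remaining three; in practice, content questions, consistency checks or gold-standard stimuli are commonly used for the same purpose.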
Copyright information
© 2014 Springer International Publishing Switzerland
About this chapter
Cite this chapter
Hoßfeld, T., Keimel, C. (2014). Crowdsourcing in QoE Evaluation. In: Möller, S., Raake, A. (eds) Quality of Experience. T-Labs Series in Telecommunication Services. Springer, Cham. https://doi.org/10.1007/978-3-319-02681-7_21
DOI: https://doi.org/10.1007/978-3-319-02681-7_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-02680-0
Online ISBN: 978-3-319-02681-7
eBook Packages: Engineering (R0)