Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments

Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, Revised Contributions

  • Daniel Archambault
  • Helen Purchase
  • Tobias Hoßfeld

Part of the Lecture Notes in Computer Science book series (LNCS, volume 10264)

Also part of the Information Systems and Applications, incl. Internet/Web, and HCI book sub series (LNISA, volume 10264)

Table of contents

  1. Front Matter
    Pages I-VII
  2. Daniel Archambault, Helen C. Purchase, Tobias Hoßfeld
    Pages 1-5
  3. Ujwal Gadiraju, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault et al.
    Pages 6-26
  4. David Martin, Sheelagh Carpendale, Neha Gupta, Tobias Hoßfeld, Babak Naderi, Judith Redi et al.
    Pages 27-69
  5. Matthias Hirth, Jason Jacques, Peter Rodgers, Ognjen Scekic, Michael Wybrow
    Pages 70-95
  6. Rita Borgo, Bongshin Lee, Benjamin Bach, Sara Fabrikant, Radu Jianu, Andreas Kerren et al.
    Pages 96-138
  7. Darren J. Edwards, Linda T. Kaastra, Brian Fisher, Remco Chang, Min Chen
    Pages 139-153
  8. Sebastian Egger-Lampl, Judith Redi, Tobias Hoßfeld, Matthias Hirth, Sebastian Möller, Babak Naderi et al.
    Pages 154-190
  9. Ujwal Gadiraju, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault et al.
    Pages E1-E1
  10. Back Matter
    Pages 191-191

About these proceedings

Introduction

As the outcome of the Dagstuhl Seminar 15481 on Crowdsourcing and Human-Centered Experiments, this book is a primer for computer science researchers who intend to use crowdsourcing technology for human-centered experiments.

The focus of this Dagstuhl seminar, held at Dagstuhl Castle in November 2015, was to discuss experiences and methodological considerations when using crowdsourcing platforms to run human-centered experiments that test the effectiveness of visual representations. The inspiring Dagstuhl atmosphere fostered discussion and brought together researchers from different research directions. The papers cover crowdsourcing technology and experimental methodologies; comparisons between crowdsourcing and laboratory experiments; the use of crowdsourcing for empirical studies in visualization, psychology, quality of experience (QoE), and HCI; and, finally, the nature of crowdworkers and their work, their motivation and demographic background, and the relationships among the people who form the crowdsourcing community.

Keywords

crowdsourcing, crowdsourcing technology, visualization, information visualization, empirical studies, study design and design of experiments, laboratory experiments, crowdsourcing experiments, human-computer interaction, cognitive information theories, quality of experience, crowd demographics, motivation, ethics

Editors and affiliations

  • Daniel Archambault (1)
  • Helen Purchase (2)
  • Tobias Hoßfeld (3)
  1. Department of Computer Science, Swansea University, Swansea, United Kingdom
  2. University of Glasgow, Glasgow, United Kingdom
  3. Modellierung adaptiver Systeme, Universität Duisburg-Essen, Essen, Germany

Bibliographic information

  • DOI https://doi.org/10.1007/978-3-319-66435-4
  • Copyright Information Springer International Publishing AG 2017
  • Publisher Name Springer, Cham
  • eBook Packages Computer Science
  • Print ISBN 978-3-319-66434-7
  • Online ISBN 978-3-319-66435-4
  • Series Print ISSN 0302-9743
  • Series Online ISSN 1611-3349