Multimodal Corpora, pp. 122–137

Part of the Lecture Notes in Computer Science book series (LNCS, volume 5509)

On the Contextual Analysis of Agreement Scores

  • Dennis Reidsma
  • Dirk Heylen
  • Rieks op den Akker
Chapter

Abstract

This paper explores the relation between inter-annotator agreement, data quality, and machine learning, using the AMI corpus. It describes a novel approach that uses contextual information from other modalities to determine a more reliable subset of the data for annotations that have low overall agreement.
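The approach sketched in the abstract, selecting the subset of items on which a contextual cue from another modality backs up the annotation, and checking whether inter-annotator agreement improves on that subset, can be illustrated as follows. This is a minimal, hypothetical sketch: the labels, the gaze-based cue, and the use of Cohen's kappa are illustrative assumptions, not the paper's actual data or exact method.

```python
# Hypothetical sketch: filter low-agreement annotations by a contextual cue
# from another modality, then compare Cohen's kappa on full vs. filtered data.
# All data below is made up for illustration; it is not from the AMI corpus.
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each annotator's marginal label distribution.
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators' addressee labels for the same utterances (illustrative).
ann1 = ["A", "B", "A", "group", "B", "A", "group", "B"]
ann2 = ["A", "A", "A", "group", "B", "B", "group", "A"]
# Contextual cue from another modality: e.g. whether the speaker's gaze
# target (from head-pose tracking) is consistent with an addressee label.
gaze_supports = [True, False, True, True, True, False, True, False]

kappa_all = cohens_kappa(ann1, ann2)
subset = [i for i, ok in enumerate(gaze_supports) if ok]
kappa_sub = cohens_kappa([ann1[i] for i in subset],
                         [ann2[i] for i in subset])
print(f"kappa (all items):   {kappa_all:.2f}")
print(f"kappa (gaze-backed): {kappa_sub:.2f}")
```

In this toy example agreement is low on the full data but high on the gaze-backed subset, which is the pattern the paper's subset-selection idea is looking for.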

Keywords

reliability, annotation, corpus, multimodal, context



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Dennis Reidsma (1)
  • Dirk Heylen (1)
  • Rieks op den Akker (1)

  1. Human Media Interaction, University of Twente, Enschede, The Netherlands
