Human-Computer Question Answering: The Case for Quizbowl

  • Conference paper
  • In: The NIPS '17 Competition: Building Intelligent Systems

Abstract

This article describes the Human-Computer Question Answering competition held at NIPS 2017. We first describe the setting, the game of quiz bowl, and argue why it is a suitable game for human-computer competition; we then describe the logistics and preparation for our competition. After reviewing the results of the 2017 competition, we examine how the competition can be improved in future years.


Notes

  1. https://github.com/pinafore/qb-api

  2. https://youtu.be/gNWU5TKaZ2Q

  3. https://youtu.be/0kgnEUDMeug

  4. https://competitions.codalab.org/

  5. In a sense, our computers are playing quizbowl in the same way deaf students play.

  6. https://www.theverge.com/2018/1/17/16900292/ai-reading-comprehension-machines-humans


Author information

Correspondence to Jordan Boyd-Graber.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Boyd-Graber, J., Feng, S., Rodriguez, P. (2018). Human-Computer Question Answering: The Case for Quizbowl. In: Escalera, S., Weimer, M. (eds) The NIPS '17 Competition: Building Intelligent Systems. The Springer Series on Challenges in Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-319-94042-7_9


  • DOI: https://doi.org/10.1007/978-3-319-94042-7_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-94041-0

  • Online ISBN: 978-3-319-94042-7

  • eBook Packages: Computer Science, Computer Science (R0)
