Evidence Distilling for Fact Extraction and Verification

  • Yang Lin
  • Pengyu Huang
  • Yuxuan Lai
  • Yansong Feng
  • Dongyan Zhao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11838)

Abstract

There has been increasing attention to the task of fact checking. Among others, FEVER is a recently popular fact verification task in which a system must extract information from given Wikipedia documents and verify a given claim. In this paper, we present a four-stage model for this task, comprising document retrieval, sentence selection, evidence sufficiency judgement, and claim verification. Unlike most existing models, we design a new evidence sufficiency judgement model that judges whether the evidence retrieved for each claim is sufficient and dynamically controls the size of the evidence set. Experiments on FEVER show that our model is effective in judging the sufficiency of the evidence set and achieves a better evidence F1 score with comparable claim verification performance.
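The four-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: every function body below is a toy heuristic of our own (the paper's actual components are neural models), and all names are hypothetical, not the authors' code. The key idea it mirrors is stage 3 growing the evidence set one sentence at a time and stopping as soon as it is judged sufficient.

```python
# Hedged sketch of a four-stage FEVER-style pipeline. All logic here is a
# toy stand-in (assumption), not the paper's implementation.

def retrieve_documents(claim, corpus):
    # Stage 1: document retrieval (toy heuristic: any token overlap).
    claim_tokens = set(claim.lower().split())
    return [doc for doc in corpus if claim_tokens & set(doc.lower().split())]

def select_sentences(claim, documents, k=5):
    # Stage 2: sentence selection (toy heuristic: rank by token overlap).
    claim_tokens = set(claim.lower().split())
    sentences = [s.strip() for doc in documents
                 for s in doc.split(".") if s.strip()]
    sentences.sort(key=lambda s: len(claim_tokens & set(s.lower().split())),
                   reverse=True)
    return sentences[:k]

def is_sufficient(claim, evidence):
    # Stage 3: evidence sufficiency judgement (toy test: the evidence
    # collectively covers every token of the claim).
    claim_tokens = set(claim.lower().split())
    evidence_tokens = set(" ".join(evidence).lower().split())
    return claim_tokens <= evidence_tokens

def verify(claim, evidence):
    # Stage 4: claim verification over the distilled evidence set.
    return "SUPPORTED" if evidence and is_sufficient(claim, evidence) \
        else "NOT ENOUGH INFO"

def pipeline(claim, corpus):
    docs = retrieve_documents(claim, corpus)
    candidates = select_sentences(claim, docs)
    evidence = []
    for sentence in candidates:
        # Dynamic control of the evidence set: add sentences one at a
        # time and stop once the set is judged sufficient.
        evidence.append(sentence)
        if is_sufficient(claim, evidence):
            break
    return verify(claim, evidence), evidence
```

In this sketch the sufficiency judge, rather than a fixed top-k cutoff, decides how many sentences the verifier sees, which is the dynamic evidence-set control the abstract describes.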

Keywords

Claim verification · Fact checking · Natural language inference

Notes

Acknowledgment

This work is supported in part by the NSFC (Grants No. 61672057, 61672058, and 61872294) and the National Hi-Tech R&D Program of China (No. 2018YFC0831900). For any correspondence, please contact Yansong Feng.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yang Lin (1)
  • Pengyu Huang (2)
  • Yuxuan Lai (1)
  • Yansong Feng (1, corresponding author)
  • Dongyan Zhao (1)
  1. Institute of Computer Science and Technology, Peking University, Beijing, China
  2. Beijing University of Posts and Telecommunications, Beijing, China