Abstract
Fact-checking of online health information has become necessary as more and more people turn to the internet for medical advice. A plethora of false information is available to the public, which can put people in harm's way. To aid the fact-checking process, recent research has leveraged advances in NLP and deep learning techniques. The majority of existing techniques rely on labelled data, which is very limited. In this work, we explore an unsupervised approach to identifying evidence sentences, the key task in the claim verification process. Through experiments on a publicly available dataset, we show that our method achieves performance comparable to that of state-of-the-art supervised techniques. We also show how the proposed method can be adapted in cases where labelled data is available.
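The core unsupervised step described above, identifying which sentences in a document best support or refute a claim, can be illustrated as a similarity-ranking problem: embed the claim and each candidate sentence, then rank sentences by their similarity to the claim. The sketch below is a minimal, hypothetical illustration using toy bag-of-words vectors and cosine similarity; the actual approach would use pretrained sentence encoders (e.g. Sentence-BERT or BioSentVec), and all names and example data here are assumptions for demonstration only.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a
    # pretrained sentence encoder instead of token counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_evidence(claim, sentences, top_k=2):
    # Unsupervised evidence selection: score every candidate
    # sentence against the claim and keep the top_k matches.
    c = embed(claim)
    scored = [(cosine(c, embed(s)), s) for s in sentences]
    return [s for _, s in sorted(scored, reverse=True)[:top_k]]

# Hypothetical claim and abstract sentences, for illustration only.
claim = "vitamin C cures the common cold"
abstract = [
    "We conducted a randomized trial of vitamin C supplementation.",
    "Vitamin C did not reduce the incidence of the common cold.",
    "Participants were followed for twelve weeks.",
]
print(rank_evidence(claim, abstract, top_k=1))
```

No labels are required at any point: the ranking relies solely on the similarity between claim and sentence representations, which is what makes the approach applicable when annotated evidence data is scarce.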
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Deka, P., Jurek-Loughrey, A., P, D. (2022). Evidence Extraction to Validate Medical Claims in Fake News Detection. In: Traina, A., Wang, H., Zhang, Y., Siuly, S., Zhou, R., Chen, L. (eds) Health Information Science. HIS 2022. Lecture Notes in Computer Science, vol 13705. Springer, Cham. https://doi.org/10.1007/978-3-031-20627-6_1
Print ISBN: 978-3-031-20626-9
Online ISBN: 978-3-031-20627-6
eBook Packages: Computer Science (R0)