Pseudoscience Detection Using a Pre-trained Transformer Model with Intelligent ReLabeling

  • Conference paper

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 176)

Abstract

Often dismissed as a harmless pastime for the gullible, pseudoscience nonetheless has devastating effects, including the loss of life. Its menace is amplified by the difficulty of differentiating pseudoscience from science, especially for the untrained eye. This paper presents a novel method that recognizes pseudoscience in text using a fine-tuned Robustly Optimized Bidirectional Encoder Representation from Transformers Approach (RoBERTa) model. The dataset of 112,720 full-text articles used in this work is made publicly available to remedy the lack of datasets related to pseudoscience. A novel technique, Intelligent ReLabeling (IRL), is employed to minimize mislabeled data, enabling the rapid creation of high-quality textual datasets. IRL eliminates the need for expensive manual verification and reduces the domain expertise required in many applications. The final model trained with IRL achieves an F1 score of 0.929 on a separate, manually labeled test dataset.
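
The abstract and the notes below indicate that the classifier is RoBERTa-base (12 layers, 768 hidden units, 12 attention heads, roughly 125 M parameters) fine-tuned with the Hugging Face transformers library. The following is a minimal illustrative sketch of such a setup, not the authors' released code: the placeholder texts and labels, the learning rate, the batch size, and the single training epoch are all assumptions.

```python
# Minimal sketch (assumptions noted above), not the authors' released code:
# fine-tune RoBERTa-base as a binary science-vs-pseudoscience classifier
# using the Hugging Face transformers library cited in the notes.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Placeholder corpus: (article text, label) pairs with 1 = pseudoscience.
texts = [
    "The supplement realigns the body's energy field and cures all disease.",
    "The randomized controlled trial reported a statistically significant effect.",
]
labels = [1, 0]

# Truncate to the model's 512-position limit; the paper's notes mention that
# 3 of these positions are reserved for special tokens.
enc = tokenizer(texts, truncation=True, max_length=512,
                padding="max_length", return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # assumed learning rate
model.train()
for _ in range(1):  # assumed single epoch for this sketch
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```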

Notes

  1. See Sect. 3.1 for details.

  2. The manual verification was performed by the authors through fact-checking against reliable sources as appropriate.

  3. We will make the code for IRL publicly available, together with an experimental demonstration of its effectiveness on the IMDB reviews dataset [25]. We artificially add noise to the labels of the IMDB reviews dataset, substantially degrading the performance of models trained on the noisy data, and then show that applying IRL restores model performance close to, or equal to, the levels achieved before adding noise (a minimal illustrative sketch follows these notes).

  4. See Sect. 3.3.

  5. 12 layers, 768 hidden units, 12 attention heads, 125 M parameters.

  6. https://github.com/huggingface/transformers.

  7. 3 tokens reserved for special tokens.

  8. https://github.com/NVIDIA/apex.
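
As described in note 3, the IRL validation experiment corrupts the labels of the IMDB reviews dataset [25] and then repairs them. The sketch below illustrates only that experimental setup: the noise rate, the confidence threshold, and the simple confidence-based relabeling pass are assumptions, and the relabel step is a generic stand-in rather than the IRL algorithm itself, whose details are not given in this excerpt.

```python
# Minimal sketch of the noise-injection experiment from note 3.
# Assumptions: 20% label noise, a 0.95 confidence threshold, and a generic
# confidence-based relabeling pass standing in for IRL (not the authors' code).
import random

def add_label_noise(labels, noise_rate=0.2, seed=0):
    """Flip a random subset of binary labels to simulate mislabeled data."""
    rng = random.Random(seed)
    noisy = list(labels)
    for i in rng.sample(range(len(noisy)), k=int(noise_rate * len(noisy))):
        noisy[i] = 1 - noisy[i]
    return noisy

def relabel(labels, model_probs, threshold=0.95):
    """Overwrite a label when a model trained on the noisy data disagrees
    with it at high confidence (a stand-in for the relabeling step)."""
    cleaned = list(labels)
    for i, p in enumerate(model_probs):  # p = model's P(label == 1)
        pred = int(p >= 0.5)
        if pred != cleaned[i] and max(p, 1.0 - p) >= threshold:
            cleaned[i] = pred
    return cleaned

# Toy usage: ten reviews, 20% of labels flipped, then cleaned with a
# hypothetical model that happens to score every example correctly.
true_labels = [0, 1] * 5
noisy_labels = add_label_noise(true_labels)
model_probs = [0.98 if y == 1 else 0.02 for y in true_labels]
cleaned = relabel(noisy_labels, model_probs)
print(sum(a == b for a, b in zip(cleaned, true_labels)), "of 10 labels recovered")
```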

References

  1. Boudry, M., Blancke, S., Pigliucci, M.: What makes weird beliefs thrive? The epidemiology of pseudoscience. Philos. Psychol. 28(8), 1177–1198 (2015)

  2. Hansson, S.O.: Science and pseudo-science (2017)

  3. Hansson, S.O.: Science denial as a form of pseudoscience. Stud. Hist. Philos. Sci. Part A 63 (2017)

  4. Pigliucci, M., Boudry, M.: The dangers of pseudoscience. New York Times 21, 1735–1741 (2013)

  5. Douglas, K.M., Sutton, R.M.: Climate change: why the conspiracy theories are dangerous. Bull. Atom. Sci. 71(2), 98–106 (2015)

  6. Torabi Asr, F., Taboada, M.: Big data and quality data for fake news and misinformation detection. Big Data Soc. 6(1), 2053951719843310 (2019)

  7. Lilienfeld, S.O., Ammirati, R., David, M.: Distinguishing science from pseudoscience in school psychology: science and scientific thinking as safeguards against human error. J. Sch. Psychol. 50 (2012)

  8. Bond, C.F., Jr., DePaulo, B.M.: Accuracy of deception judgments. Pers. Soc. Psychol. Rev. 10(3), 214–234 (2006)

  9. Shu, K., Sliva, A., Wang, S., Tang, J., Liu, H.: Fake news detection on social media: a data mining perspective. ACM SIGKDD Explor. Newslett. 19(1), 22–36 (2017)

  10. Conroy, N.J., Rubin, V.L., Chen, Y.: Automatic deception detection: methods for finding fake news. Proc. Assoc. Inf. Sci. Technol. 52(1), 1–4 (2015)

  11. Shvets, A.: A method of automatic detection of pseudoscientific publications. In: Intelligent Systems’ 2014, pp. 533–539. Springer (2015)

  12. Figueira, Á., Oliveira, L.: The current state of fake news: challenges and opportunities. Procedia Comput. Sci. 121, 817–825 (2017)

  13. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)

  14. Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)

  15. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)

  16. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

  17. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., Bowman, S.R.: GLUE: a multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461 (2018)

  18. Lai, G., Xie, Q., Liu, H., Yang, Y., Hovy, E.: RACE: large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683 (2017)

  19. Rajpurkar, P., Jia, R., Liang, P.: Know what you don’t know: unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822 (2018)

  20. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)

  21. Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al.: Google’s neural machine translation system: bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 (2016)

  22. Pew Research Center: State of the news media 2016 (2004)

  23. Zimdars, M.: False, misleading, clickbait-y, and satirical “news” sources. https://docs.google.com/document/d/10eA5-mCZLSS4MQY5QGb5ewC3VAL6pLkT53V_81ZyitM/preview (2016)

  24. Aker, A., Vincentius, K., Bontcheva, K.: Credibility and transparency of news sources: data collection and feature analysis (2019)

  25. Maas, A., Daly, R.E., Pham, P.T., Huang, D., Ng, A.Y., Potts, C.: Learning word vectors for sentiment analysis. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150. Association for Computational Linguistics, Portland, Oregon, USA (2011)

Author information

Corresponding author

Correspondence to Thilina C. Rajapakse.

Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Rajapakse, T.C., Nawarathna, R.D. (2021). Pseudoscience Detection Using a Pre-trained Transformer Model with Intelligent ReLabeling. In: Shakya, S., Balas, V.E., Haoxiang, W., Baig, Z. (eds) Proceedings of International Conference on Sustainable Expert Systems. Lecture Notes in Networks and Systems, vol 176. Springer, Singapore. https://doi.org/10.1007/978-981-33-4355-9_25

  • DOI: https://doi.org/10.1007/978-981-33-4355-9_25

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-33-4354-2

  • Online ISBN: 978-981-33-4355-9

  • eBook Packages: Engineering, Engineering (R0)
