
To protect science, we must use LLMs as zero-shot translators

Comment

From Nature Human Behaviour


Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. To ensure our use of LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another.
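To make the distinction concrete, the sketch below contrasts the two usage patterns in Python. It is a minimal illustration only: `complete()` is a hypothetical placeholder for whatever LLM API is available, and the function names and prompt wording are illustrative assumptions, not part of the article.

```python
# Sketch of the "zero-shot translator" pattern: the LLM receives accurate,
# vetted source material and is asked only to change its *form*, never to
# supply facts of its own. `complete` is a placeholder, not a real API.

def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("wire this to an LLM provider of choice")

def risky_knowledge_query(question: str) -> str:
    # Anti-pattern: treats the model as a source of facts, so a fluent
    # but wrong answer is indistinguishable from a correct one.
    return complete(question)

def zero_shot_translate(source_text: str, target_form: str) -> str:
    # Pattern advocated here: all factual content comes from the vetted
    # source text; the model only converts it into another form
    # (e.g. a plain-language summary, bullet points, another language).
    prompt = (
        f"Rewrite the following text as {target_form}. "
        "Use only information contained in the text; do not add facts.\n\n"
        f"{source_text}"
    )
    return complete(prompt)
```

In the first pattern, errors originate inside the model and are hard for the user to detect; in the second, the factual accuracy of the output is bounded by the accuracy of the source material supplied, which is the property that protects scientific use.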


Fig. 1: Two hypothetical use cases for LLMs based on real prompts and responses demonstrate the effect of inaccurate responses on user beliefs.


Acknowledgements

This work has been supported through research funding provided by the Wellcome Trust (grant no. 223765/Z/21/Z), Sloan Foundation (grant no. G-2021-16779), the Department of Health and Social Care, and Luminate Group to support the Trustworthiness Auditing for AI project and Governance of Emerging Technologies research programme at the Oxford Internet Institute, University of Oxford. The funders had no role in the decision to publish or the preparation of this manuscript.

Author information

Corresponding author

Correspondence to Brent Mittelstadt.

Ethics declarations

Competing interests

B.M. and S.W. declare no competing interests. C.R. was an employee of Amazon Web Services during part of the writing of this article; he did not contribute to it in his capacity as an Amazon employee.


About this article


Cite this article

Mittelstadt, B., Wachter, S. & Russell, C. To protect science, we must use LLMs as zero-shot translators. Nat Hum Behav 7, 1830–1832 (2023). https://doi.org/10.1038/s41562-023-01744-0

