
Should artificial intelligence be interpretable to humans?

  • Comment

From Nature Reviews Physics


As artificial intelligence (AI) makes ever more impressive contributions to science, scientists increasingly want to understand how it reaches its conclusions. Matthew D. Schwartz discusses what it means to understand AI and whether such a goal is achievable — or even needed.

Fig. 1: The evolution of biological and artificial intelligence takes place on dramatically different timescales.


Author information


Corresponding author

Correspondence to Matthew D. Schwartz.

Ethics declarations

Competing interests

The author declares no competing interests.


About this article


Cite this article

Schwartz, M.D. Should artificial intelligence be interpretable to humans?. Nat Rev Phys 4, 741–742 (2022). https://doi.org/10.1038/s42254-022-00538-z

