
Dowsing for Math Answers

  • Conference paper
Experimental IR Meets Multilinguality, Multimodality, and Interaction (CLEF 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12880)

Abstract

Mathematical Information Retrieval (MathIR) focuses on using mathematical formulas and terminology to search and retrieve documents that include mathematical content. To index mathematical documents, we convert each formula into a token list that is compatible with natural language text. Then, given a natural language query that includes formulas, we select key terms and formulas from the query, again convert the query formulas into token lists, and finally search and rank results using standard search engine techniques. In this paper, we describe our approach in detail for a Community Question Answering task and evaluate the weight to be given to formula tokens versus text tokens. We also evaluate a regression-based approach to re-ranking based on metadata associated with the documents returned from the search.
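The indexing step described above converts each formula into tokens that can live alongside ordinary words in a standard inverted index. The sketch below illustrates the idea with a simple symbol-and-symbol-pair scheme loosely inspired by Tangent-L's math features [5, 21]; the token format, prefixes, and function names are our own illustrative assumptions, not the authors' actual tokenizer:

```python
# Hypothetical sketch: convert a linearized formula into text-compatible
# tokens so it can be indexed alongside ordinary words. The symbol-pair
# scheme is loosely inspired by Tangent-L's features; it is NOT the
# paper's actual tokenizer.

def tokenize_formula(symbols):
    """Emit one token per symbol plus one per adjacent symbol pair."""
    tokens = [f"math${s}" for s in symbols]                       # unigrams
    tokens += [f"math${a}~{b}" for a, b in zip(symbols, symbols[1:])]
    return tokens

def index_tokens(text_words, formula_symbols):
    """Merge text tokens and formula tokens into one indexable list."""
    return [w.lower() for w in text_words] + tokenize_formula(formula_symbols)

# Example: index the phrase "area of circle" with the formula pi r ^ 2
doc_tokens = index_tokens(["area", "of", "circle"], ["pi", "r", "^", "2"])
```

Because formula tokens and text tokens end up in one token list, the relative weight given to each type (the question the paper evaluates) can be controlled at ranking time by standard per-field or per-token weighting in the search engine.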

Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CLEF 2021, 21–24 September 2021, Bucharest, Romania.


Notes

  1. https://math.stackexchange.com.

  2. https://cs.uwaterloo.ca/brushsearch.

  3. A formula in LaTeX representation can be converted into MathML by using LaTeXML (https://dlmf.nist.gov/LaTeXML/).

  4. https://www.cs.rit.edu/~dprl/ARQMath/.

  5. All extracted formulas and keywords can be found in our Working Notes [14].

  6. As our teachers admonished: “Always include the question as part of your answer!”

  7. For completeness, all one-way links between posts are converted to two-way links.

  8. Having many unjudged answers implies that the evaluation might not be truly informative.

  9. One of the five baselines, Linked MSE posts, uses privately held data that is not available to Lab participants. The other four are traditional text or math-aware search systems adapted for the task.

  10. We acknowledge that we have not tested this hypothesis by substituting another math-aware search engine for Tangent-L within our experimental apparatus. However, such engines were used for four baselines and by other Lab participants.

  11. An NVIDIA GeForce MX150 graphics card with 2 GB of on-card RAM is available on the machine, but it is not used for the experiments.

  12. https://scikit-learn.org.
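The regression-based re-ranking mentioned in the abstract can be illustrated minimally: fit a model that predicts relevance from document metadata, then reorder the retrieved list by predicted relevance. The sketch below uses a single hypothetical vote-count feature and closed-form ordinary least squares in pure Python; the paper itself uses scikit-learn (note 12) and richer metadata, so the feature choice and training data here are purely illustrative assumptions:

```python
# Hypothetical sketch of regression-based re-ranking: learn a linear
# model mapping a metadata signal (here: answer vote count) to relevance,
# then reorder retrieved results by the model's prediction. This is a
# toy stand-in for the paper's scikit-learn pipeline.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def rerank(results, a, b):
    """Reorder (doc_id, votes) pairs by predicted relevance, best first."""
    return sorted(results, key=lambda r: a * r[1] + b, reverse=True)

# Illustrative training pairs: (votes, judged relevance) from assessed topics
a, b = fit_linear([0, 2, 5, 10], [0.0, 0.4, 1.0, 2.0])
ranked = rerank([("d1", 1), ("d2", 8), ("d3", 3)], a, b)
```

In practice the re-ranker would combine the initial retrieval score with several metadata features rather than replace it, but the fit-then-reorder structure is the same.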

References

  1. Abacha, A.B., Agichtein, E., Pinter, Y., Demner-Fushman, D.: Overview of the medical question answering task at TREC 2017 LiveQA. In: TREC 2017. NIST Special Publication, vol. 500-324 (2017)

  2. Aizawa, A., Kohlhase, M., Ounis, I.: NTCIR-10 math pilot task overview. In: NTCIR-10, pp. 654–661 (2013)

  3. Aizawa, A., Kohlhase, M., Ounis, I., Schubotz, M.: NTCIR-11 math-2 task overview. In: NTCIR-11, pp. 88–98 (2014)

  4. Astrakhantsev, N.A., Fedorenko, D.G., Turdakov, D.Y.: Methods for automatic term recognition in domain-specific text collections: a survey. Program. Comput. Softw. 41(6), 336–349 (2015)

  5. Fraser, D.J., Kane, A., Tompa, F.W.: Choosing math features for BM25 ranking with Tangent-L. In: DocEng 2018, pp. 17:1–17:10 (2018)

  6. Guidi, F., Sacerdoti Coen, C.: A survey on retrieval of mathematical knowledge. Math. Comput. Sci. 10(4), 409–427 (2016)

  7. Hopkins, M., Le Bras, R., Petrescu-Prahova, C., Stanovsky, G., Hajishirzi, H., Koncel-Kedziorski, R.: SemEval-2019 task 10: math question answering. In: SemEval-2019, pp. 893–899, June 2019

  8. Lv, Y., Zhai, C.: Lower-bounding term frequency normalization. In: CIKM 2011, pp. 7–16 (2011)

  9. Mansouri, B., Zanibbi, R., Oard, D.W.: Characterizing searches for mathematical concepts. In: JCDL 2019, pp. 57–66. IEEE (2019)

  10. Miner, R.R., Carlisle, D., Ion, P.D.F.: Mathematical markup language (MathML) version 3.0, 2nd edn. W3C recommendation, W3C, April 2014

  11. Nakov, P., et al.: SemEval-2017 task 3: community question answering. In: SemEval-2017, pp. 27–48, December 2018

  12. Nakov, P., Màrquez, L., Magdy, W., Moschitti, A., Glass, J., Randeree, B.: SemEval-2015 task 3: answer selection in community question answering. In: SemEval-2015, pp. 269–281 (2015)

  13. Nakov, P., et al.: SemEval-2016 task 3: community question answering. In: SemEval-2016, pp. 525–545 (2016)

  14. Ng, Y.K., et al.: Dowsing for math answers with Tangent-L. In: CLEF 2020. CEUR Workshop Proceedings, vol. 2696 (2020)

  15. Olvera-Lobo, M.-D., Gutiérrez-Artacho, J.: Question answering track evaluation in TREC, CLEF and NTCIR. In: Rocha, A., Correia, A.M., Costanzo, S., Reis, L.P. (eds.) New Contributions in Information Systems and Technologies. AISC, vol. 353, pp. 13–22. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16486-1_2

  16. Pineau, D.C.: Math-aware search engines: physics applications and overview. CoRR abs/1609.03457 (2016)

  17. Sojka, P., Novotný, V., Ayetiran, E.F., Lupták, D., Stefánik, M.: Quo Vadis, math information retrieval. In: RASLAN 2019, pp. 117–128. Tribun EU (2019)

  18. Stoica, E., Evans, D.: Dynamic term selection in learning a query from examples. In: RIAO 2000, pp. 1703–1719. CID (2000)

  19. Zanibbi, R., Aizawa, A., Kohlhase, M., Ounis, I., Topić, G., Davila, K.: NTCIR-12 MathIR task overview. In: NTCIR-12, pp. 299–308 (2016)

  20. Zanibbi, R., Blostein, D.: Recognition and retrieval of mathematical expressions. Int. J. Doc. Anal. Recognit. 15(4), 331–357 (2012)

  21. Zanibbi, R., Davila, K., Kane, A., Tompa, F.W.: Multi-stage math formula search: using appearance-based similarity metrics at scale. In: SIGIR 2016, pp. 145–154 (2016)

  22. Zanibbi, R., Oard, D.W., Agarwal, A., Mansouri, B.: Overview of ARQMath 2020 (updated working notes version): CLEF lab on answer retrieval for questions on math. In: CLEF 2020. CEUR Workshop Proceedings, vol. 2696 (2020)

  23. Zanibbi, R., Orakwue, A.: Math search for the masses: multimodal search interfaces and appearance-based retrieval. In: Kerber, M., Carette, J., Kaliszyk, C., Rabe, F., Sorge, V. (eds.) CICM 2015. LNCS (LNAI), vol. 9150, pp. 18–36. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-20615-8_2

Acknowledgements

This research has been funded by the Waterloo-Huawei Joint Innovation Lab and NSERC, the Natural Sciences and Engineering Research Council of Canada. George Labahn, Mirette Marzouk, and Kevin Wang provided useful guidance during our weekly research meetings. Gordon Cormack provided his research machine for indexing the corpus. The ARQMath Lab organizers (including, notably, Behrooz Mansouri) developed the idea for the Lab, submitted the proposal to CLEF, and prepared the dataset, the topics, the manual translation of the topic questions into formulas and keywords, and the relevance assessments. The NTCIR Math-IR dataset was made available through an agreement with the National Institute of Informatics. Andrew Kane and anonymous reviewers made valuable suggestions for improving our presentation.

Author information

Correspondence to Frank Wm. Tompa.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Ng, Y.K., Fraser, D.J., Kassaie, B., Tompa, F.W. (2021). Dowsing for Math Answers. In: Candan, K.S., et al. Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2021. Lecture Notes in Computer Science, vol. 12880. Springer, Cham. https://doi.org/10.1007/978-3-030-85251-1_16


  • DOI: https://doi.org/10.1007/978-3-030-85251-1_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85250-4

  • Online ISBN: 978-3-030-85251-1

  • eBook Packages: Computer Science (R0)
