Attentive deep neural networks for legal document retrieval

  • Original Research
  • Published in: Artificial Intelligence and Law

Abstract

Legal text retrieval serves as a key component in a wide range of legal text processing tasks such as legal question answering, legal case entailment, and statute law retrieval. The performance of legal text retrieval depends, to a large extent, on the representation of text, both queries and legal documents. With good representations, a legal text retrieval model can effectively match queries to their relevant documents. Because legal documents often contain long articles of which only some parts are relevant to a query, representing such documents is a challenge for existing models. In this paper, we study the use of attentive neural network-based text representation for statute law document retrieval. We propose a general approach using deep neural networks with attention mechanisms and, based on it, develop two hierarchical architectures with sparse attention to represent long sentences and articles, named Attentive CNN and Paraformer. The methods are evaluated on datasets of different sizes and characteristics in English, Japanese, and Vietnamese. Experimental results show that: (i) attentive neural methods substantially outperform non-neural methods in retrieval performance across datasets and languages; (ii) pretrained transformer-based models achieve better accuracy on small datasets at the cost of high computational complexity, while the lighter-weight Attentive CNN achieves better accuracy on large datasets; and (iii) our proposed Paraformer outperforms state-of-the-art methods on the COLIEE dataset, achieving the highest recall and F2 scores in the top-N retrieval task.
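As a rough illustration of the attentive representation idea described in the abstract, the sketch below pools per-sentence vectors into a single document vector using query-conditioned attention weights. All names are illustrative: the paper's models learn the scoring with CNN/transformer encoders and use sparse attention (e.g. sparsemax) rather than this plain dot-product softmax.

```python
import math

def attentive_pool(sent_vecs, query_vec):
    """Collapse per-sentence vectors into one document vector by attention.

    Illustrative sketch only: a plain dot-product softmax stands in for
    the learned scorers used by Attentive CNN / Paraformer.
    """
    # Relevance score of each sentence vector against the query vector.
    scores = [sum(s * q for s, q in zip(sent, query_vec)) for sent in sent_vecs]
    # Softmax (shifted by the max score for numerical stability).
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    weights = [e / total for e in exp]
    # Weighted sum of sentence vectors -> fixed-size document vector.
    dim = len(sent_vecs[0])
    return [sum(w * sent[d] for w, sent in zip(weights, sent_vecs))
            for d in range(dim)]
```

The attention weights let one relevant sentence dominate the representation of a long article, which is the motivation the abstract gives for attentive architectures over flat document encoders.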


Notes

  1. https://www.uscourts.gov/statistics-reports/judicial-business-2020.

  2. https://thuvienphapluat.vn/van-ban-moi.

  3. https://thuvienphapluat.vn.

  4. https://keiji.vbest.jp.

  5. https://www.anwalt.de.

  6. https://sites.ualberta.ca/rabelo/COLIEE2021/

  7. http://vbpl.vn/tw/pages/home.aspx.

  8. https://thuvienphapluat.vn.

  9. https://hdpl.moj.gov.vn/Pages/home.aspx.

  10. http://hethongphapluat.com/hoi-dap-phap-luat.html.

  11. https://hoidapphapluat.net.

  12. https://www.elastic.co/.

  13. https://pypi.org/project/rank-bm25/.

References

  • Bach NX, Duy TK, Phuong TM (2019) A POS tagging model for Vietnamese social media text using BiLSTM-CRF with rich features. In: Proceedings of the 16th Pacific Rim international conference on artificial intelligence (PRICAI), Part III, pp 206–219

  • Bach NX, Thuy NTT, Chien DB, Duy TK, Hien TM, Phuong TM (2019) Reference extraction from Vietnamese legal documents. In: Proceedings of the 10th international symposium on information and communication technology (SOICT), pp 486–493

  • Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P et al. (2020) Language models are few-shot learners. arXiv:2005.14165

  • Chalkidis I, Kampas D (2019) Deep learning in law: early adaptation and legal word embeddings trained on large corpora. Artif Intell Law 27(2):171–198

  • Chen Q, Zhu X, Ling ZH, Wei S, Jiang H, Inkpen D (2017) Enhanced LSTM for natural language inference. In: Proceedings of the 55th annual meeting of the association for computational linguistics (volume 1: long papers), pp 1657–1668

  • Conneau A, Khandelwal K, Goyal N, Chaudhary V, Wenzek G, Guzmán F, Stoyanov V (2019) Unsupervised cross-lingual representation learning at scale. arXiv:1911.02116

  • Cooper WS (1971) A definition of relevance for information retrieval. Inf Storage Retr 7(1):19–37

  • Devlin J, Chang MW, Lee K, Toutanova K (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pp 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota

  • Frank S, Dhivya C, Kanika M, Jinane H, Andrew V, Hiroko B, John H (2021) A pentapus grapples with legal reasoning. In: COLIEE workshop in ICAIL, pp 78–83

  • Huang PS, He X, Gao J, Deng L, Acero A, Heck L (2013) Learning deep structured semantic models for web search using clickthrough data. In: Proceedings of the 22nd ACM international conference on information & knowledge management, pp 2333–2338

  • Husa VJM (2016) Future of legal families. Oxford handbooks online: scholarly research reviews. Oxford University Press, Oxford

  • Ito S (2008) Lecture series on ultimate facts. Shojihomu (in Japanese)

  • Kien PM, Nguyen HT, Bach NX, Tran V, Nguyen ML, Phuong TM (2020) Answering legal questions by learning neural attentive text representation. In: Proceedings of the 28th international conference on computational linguistics, pp 988–998. International Committee on Computational Linguistics, Barcelona, Spain (Online). https://aclanthology.org/2020.coling-main.86, https://doi.org/10.18653/v1/2020.coling-main.86

  • Kim MY, Rabelo J, Okeke K, Goebel R (2022) Legal information retrieval and entailment based on BM25, transformer and semantic thesaurus methods. Rev Socionetw Strateg 16(1):157–174

  • Kim Y (2014) Convolutional neural networks for sentence classification. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp 1746–1751

  • Kowalski R, Datoo A (2021) Logical English meets legal English for swaps and derivatives. Artif Intell Law 30:163–197

  • Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Zettlemoyer L (2019) Bart: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv:1910.13461

  • Luhn HP (1957) A statistical approach to mechanized encoding and searching of literary information. IBM J Res Dev 1(4):309–317

  • Martins A, Astudillo R (2016) From softmax to sparsemax: a sparse model of attention and multi-label classification. International conference on machine learning, pp 1614–1623

  • Masaharu Y, Youta S, Yasuhiro A (2021) BERT-based ensemble methods for information retrieval and legal textual entailment in COLIEE statute law task. In: COLIEE workshop in ICAIL, pp 78–83

  • Mikolov T, Grave E, Bojanowski P, Puhrsch C, Joulin A (2018) Advances in pre-training distributed word representations. In: Proceedings of the international conference on language resources and evaluation (LREC 2018)

  • Mikolov T, Kombrink S, Burget L, Černockỳ J, Khudanpur S (2011) Extensions of recurrent neural network language model. In: 2011 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 5528–5531

  • Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J (2013) Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26. https://proceedings.neurips.cc/paper/2013/hash/9aa42b31882ec039965f3c4923ce901b-Abstract.html

  • Mueller J, Thyagarajan A (2016) Siamese recurrent architectures for learning sentence similarity. In: Thirtieth AAAI conference on artificial intelligence

  • Nguyen HT, Nguyen PM, Vuong THY, Bui QM, Nguyen CM, Dang BT, Satoh K (2021) JNLP team: deep learning approaches for legal processing tasks in COLIEE 2021. arXiv:2106.13405

  • Nguyen HT, Nguyen VH, Vu VA (2017) A knowledge representation for Vietnamese legal document system. In: 2017 9th international conference on knowledge and systems engineering (KSE), pp 30–35

  • Nguyen HT, Tran V, Nguyen PM, Vuong THY, Bui QM, Nguyen CM, Satoh K (2021) ParaLaw Nets: cross-lingual sentence-level pretraining for legal text processing. arXiv:2106.13403

  • Nguyen HT, Vuong HYT, Nguyen PM, Dang BT, Bui QM, Vu ST, Nguyen ML (2020) JNLP team: deep learning for legal processing in COLIEE 2020. arXiv:2011.08071

  • Nguyen TS, Nguyen LM, Tojo S, Satoh K, Shimazu A (2018) Recurrent neural network-based models for recognizing requisite and effectuation parts in legal texts. Artif Intell Law 26(2):169–199

  • Palangi H, Deng L, Shen Y, Gao J, He X, Chen J, Ward R (2016) Deep sentence embedding using long short-term memory networks: analysis and application to information retrieval. IEEE/ACM Trans Audio Speech Lang Process 24(4):694–707

  • Pennington J, Socher R, Manning CD (2014) GloVe: global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp 1532–1543

  • Rabelo J, Kim MY, Goebel R, Yoshioka M, Kano Y, Satoh K (2019) A summary of the COLIEE 2019 competition. In: JSAI international symposium on artificial intelligence, pp 34–49

  • Radford A, Narasimhan K, Salimans T, Sutskever I (2018) Improving language understanding by generative pre-training. The University of British Columbia Repository

  • Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I (2019) Language models are unsupervised multitask learners. OpenAI Blog 1(8):9

  • Reimers N, Gurevych I (2019) Sentence-BERT: sentence embeddings using Siamese BERT-networks. arXiv:1908.10084

  • Salton G, Buckley C (1988) Term-weighting approaches in automatic text retrieval. Inf Process Manag 24(5):513–523

  • Satoh K, Asai K, Kogawa T, Kubota M, Nakamura M, Nishigai Y, Takano C (2010) PROLEG: an implementation of the presupposed ultimate fact theory of Japanese civil code by Prolog technology. In: JSAI international symposium on artificial intelligence, pp 153–164

  • Šavelka J, Ashley KD (2021) Legal information retrieval for understanding statutory terms. Artif Intell Law 30:245–289

  • Severyn A, Moschitti A (2015) Learning to rank short text pairs with convolutional deep neural networks. In: Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pp 373–382

  • Shao Y, Mao J, Liu Y, Ma W, Satoh K, Zhang M, Ma S (2020) BERT-PLI: modeling paragraph-level interactions for legal case retrieval. In: IJCAI, pp 3501–3507

  • Shen Y, He X, Gao J, Deng L, Mesnil G (2014) A latent semantic model with convolutional-pooling structure for information retrieval. In: Proceedings of the 23rd ACM international conference on conference on information and knowledge management, pp 101–110

  • Sugathadasa K, Ayesha B, de Silva N, Perera AS, Jayawardana V, Lakmal D, Perera M (2018) Legal document retrieval using document vector embeddings and deep learning. In: Science and information conference, pp 160–175

  • Tang D, Qin B, Liu T (2015) Document modeling with gated recurrent neural network for sentiment classification. In: Proceedings of the 2015 conference on empirical methods in natural language processing, pp 1422–1432

  • Thanh NH, Quan BM, Nguyen C, Le T, Phuong NM, Binh DT et al. (2021) A summary of the ALQAC 2021 competition. In: 2021 13th international conference on knowledge and systems engineering (KSE), pp 1–5

  • Tran V, Le Nguyen M, Tojo S, Satoh K (2020) Encoded summarization: summarizing documents into continuous vector space for legal case retrieval. Artif Intell Law 28:441–467

  • Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Polosukhin I (2017) Attention is all you need. Advances in Neural Information Processing Systems, 30. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html

  • Wang Y, Huang M, Zhu X, Zhao L (2016) Attention-based LSTM for aspect-level sentiment classification. In: Proceedings of the 2016 conference on empirical methods in natural language processing, pp 606–615

  • Wehnert S, Sudhi V, Dureja S, Kutty L, Shahania S, De Luca EW (2021) Legal norm retrieval with variations of the BERT model combined with TF-IDF vectorization. In: Proceedings of the eighteenth international conference on artificial intelligence and law, pp 285–294

  • Yilmaz ZA, Wang S, Yang W, Zhang H, Lin J (2019) Applying BERT to document retrieval with Birch. In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP): system demonstrations, pp 19–24

  • Yoshioka M, Aoki Y, Suzuki Y (2021) BERT-based ensemble methods with data augmentation for legal textual entailment in COLIEE statute law task. In: Proceedings of the eighteenth international conference on artificial intelligence and law, pp 278–284

  • Yoshioka M, Kano Y, Kiyota N, Satoh K (2018) Overview of Japanese statute law retrieval and entailment task at COLIEE-2018. In: Twelfth international workshop on juris-informatics (JURISIN 2018)

Acknowledgements

This work was supported by JSPS Kakenhi Grant Number 20K20406. The research was also supported in part by the Asian Office of Aerospace Research and Development (AOARD), Air Force Office of Scientific Research (Grant No. FA2386-19-1-4041). The work would not be complete without valuable data from COLIEE.

Author information

Corresponding author

Correspondence to Ha-Thanh Nguyen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This paper is an improved and extended work of Kien et al. (2020).

Appendices

Appendix 1 Data examples

See Tables 7, 8, 9.

Table 7 A sample in the Vietnamese dataset with highlighted parts
Table 8 A sample in the Japanese dataset
Table 9 A sample in the English dataset

Appendix 2 Grid search table for tuning Paraformer*

Top_BM25=10

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7531 | 0.7099 | 0.7147 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.6154 | 0.5615 | 0.5675 | 0.7901 | 0.7346 | 0.7407 |
| 1.0 | 0.5231 | 0.4462 | 0.4547 | 0.3827 | 0.3457 | 0.3498 |

Top_BM25=20

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7778 | 0.7284 | 0.7339 |
| 0.9 | 0.5846 | 0.5385 | 0.5436 | 0.7654 | 0.7160 | 0.7215 |
| 1.0 | 0.4154 | 0.3462 | 0.3538 | 0.2840 | 0.2593 | 0.2620 |

Top_BM25=30

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7778 | 0.7284 | 0.7339 |
| 0.9 | 0.5692 | 0.5308 | 0.5350 | 0.7654 | 0.7160 | 0.7215 |
| 1.0 | 0.3077 | 0.2538 | 0.2598 | 0.1605 | 0.1543 | 0.1550 |

Top_BM25=40

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7778 | 0.7284 | 0.7339 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7778 | 0.7284 | 0.7339 |
| 1.0 | 0.2308 | 0.1821 | 0.1871 | 0.1481 | 0.1420 | 0.1427 |

Top_BM25=50

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7778 | 0.7284 | 0.7339 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7778 | 0.7284 | 0.7339 |
| 1.0 | 0.2462 | 0.1974 | 0.2025 | 0.1481 | 0.1420 | 0.1427 |

Top_BM25=60

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7778 | 0.7284 | 0.7339 |
| 1.0 | 0.2308 | 0.1821 | 0.1871 | 0.1358 | 0.1296 | 0.1303 |

Top_BM25=70

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7778 | 0.7284 | 0.7339 |
| 1.0 | 0.2154 | 0.1846 | 0.1880 | 0.1358 | 0.1296 | 0.1303 |

Top_BM25=80

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7778 | 0.7284 | 0.7339 |
| 1.0 | 0.2000 | 0.1692 | 0.1726 | 0.1111 | 0.1049 | 0.1056 |

Top_BM25=90

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7778 | 0.7284 | 0.7339 |
| 1.0 | 0.1538 | 0.1308 | 0.1333 | 0.1111 | 0.1049 | 0.1056 |

Top_BM25=100

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7654 | 0.7160 | 0.7215 |
| 1.0 | 0.1385 | 0.1231 | 0.1248 | 0.0988 | 0.0926 | 0.0933 |

Top_BM25=110

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7654 | 0.7160 | 0.7215 |
| 1.0 | 0.1385 | 0.1231 | 0.1248 | 0.0741 | 0.0679 | 0.0686 |

Top_BM25=120

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5692 | 0.5205 | 0.5256 | 0.7654 | 0.7160 | 0.7215 |
| 1.0 | 0.1231 | 0.1154 | 0.1162 | 0.0741 | 0.0679 | 0.0686 |

Top_BM25=130

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5538 | 0.5154 | 0.5197 | 0.7654 | 0.7160 | 0.7215 |
| 1.0 | 0.1231 | 0.1154 | 0.1162 | 0.0741 | 0.0679 | 0.0686 |

Top_BM25=140

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5538 | 0.5154 | 0.5197 | 0.7654 | 0.7160 | 0.7215 |
| 1.0 | 0.1231 | 0.1154 | 0.1162 | 0.0741 | 0.0679 | 0.0686 |

Top_BM25=150

| α   | Validation P | Validation R | Validation F2 | Test P | Test R | Test F2 |
|-----|--------------|--------------|---------------|--------|--------|---------|
| 0.1 | 0.5077 | 0.4692 | 0.4735 | 0.6790 | 0.6481 | 0.6516 |
| 0.2 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.3 | 0.5231 | 0.4846 | 0.4889 | 0.6790 | 0.6481 | 0.6516 |
| 0.4 | 0.5846 | 0.5462 | 0.5504 | 0.6914 | 0.6543 | 0.6584 |
| 0.5 | 0.6000 | 0.5615 | 0.5658 | 0.6914 | 0.6543 | 0.6584 |
| 0.6 | 0.6308 | 0.5923 | 0.5966 | 0.7160 | 0.6790 | 0.6831 |
| 0.7 | 0.6462 | 0.6000 | 0.6051 | 0.7654 | 0.7222 | 0.7270 |
| 0.8 | 0.6154 | 0.5692 | 0.5744 | 0.7654 | 0.7160 | 0.7215 |
| 0.9 | 0.5538 | 0.5154 | 0.5197 | 0.7654 | 0.7160 | 0.7215 |
| 1.0 | 0.1231 | 0.1154 | 0.1162 | 0.0741 | 0.0679 | 0.0686 |
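The grid above sweeps two hyperparameters: Top_BM25, the number of candidate articles retained from a first-stage BM25 ranker, and α, which balances the neural matching score against the lexical BM25 score (the α=1.0 rows, where performance collapses, correspond to discarding the BM25 evidence). The sketch below shows such a two-stage pipeline with a hypothetical linear combination rule (the exact formula is not spelled out in this excerpt) plus the F2 metric reported in the table.

```python
def fbeta(p, r, beta=2.0):
    """F-beta score; the paper reports F2, which weights recall over precision."""
    if p == 0.0 and r == 0.0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

def rerank(bm25_scores, neural_scores, alpha, top_bm25, k):
    """Hypothetical two-stage pipeline matching the swept hyperparameters:
    keep the top_bm25 candidates by lexical BM25 score, then rescore them
    as alpha * neural + (1 - alpha) * bm25 and return the top-k doc ids.
    Scores are assumed pre-normalised to [0, 1].
    """
    # Stage 1: lexical candidate pruning (the Top_BM25 column).
    candidates = sorted(range(len(bm25_scores)),
                        key=lambda i: bm25_scores[i], reverse=True)[:top_bm25]
    # Stage 2: interpolate neural and lexical evidence (the alpha column).
    combined = {i: alpha * neural_scores[i] + (1 - alpha) * bm25_scores[i]
                for i in candidates}
    return sorted(combined, key=combined.get, reverse=True)[:k]
```

Note that a document scored highly by the neural model but pruned in stage 1 can never be retrieved, which is why very small Top_BM25 pools and very large α values both hurt. The table's F2 values are macro-averaged per query, so they cannot be recomputed from the averaged P and R columns with the formula above.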

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Nguyen, HT., Phi, MK., Ngo, XB. et al. Attentive deep neural networks for legal document retrieval. Artif Intell Law 32, 57–86 (2024). https://doi.org/10.1007/s10506-022-09341-8

