
A Hybrid Summarization Method for Legal Judgment Documents Based on Lawformer

  • Conference paper
Natural Language Processing and Chinese Computing (NLPCC 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14303)


Abstract

Legal Judgment Summarization (LJS) is a crucial task in Legal Artificial Intelligence (LegalAI), since it can improve the efficiency of case retrieval in judicial work. However, most existing LJS methods struggle with the long text and complex structure of legal judgment documents. To address these issues, we propose a hybrid extractive-abstractive summarization method with Lawformer as the encoder to enhance the quality of LJS. In this method, a long legal judgment document is first segmented into three relatively short parts according to its specific structure. Lawformer, a pre-trained language model for long legal documents, is then applied as the encoder to handle the long-text problem. A different summarization model is applied to each part according to its structural characteristics, and the resulting partial summaries are integrated into a high-quality summary that captures both semantic and structural information. Extensive experiments verify the performance of our method, and the comparative results show that the summaries it produces outperform all baselines in matching the reference summaries. These results indicate that our method is effective for LJS and promising for LegalAI applications.
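
As a purely illustrative companion to the abstract, the following minimal Python sketch shows how such a segment-encode-summarize pipeline could be wired together. It assumes the publicly released Hugging Face checkpoint thunlp/Lawformer; the segmentation heuristic, the centroid-based extractive scorer, and the part-to-summarizer assignment are placeholders introduced here for illustration, not the authors' implementation.

```python
# Illustrative sketch only: segment a judgment document into three parts,
# encode with Lawformer, and summarize each part before merging.
# The splitter, the extractive scorer, and the per-part summarizer
# assignment below are placeholders, not the paper's released code.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("thunlp/Lawformer")
encoder = AutoModel.from_pretrained("thunlp/Lawformer")


def segment_document(document: str) -> tuple[str, str, str]:
    """Placeholder splitter; a real system would use the judgment's
    structural cues (e.g. party information, trial process, result)."""
    third = max(len(document) // 3, 1)
    return document[:third], document[third:2 * third], document[2 * third:]


def encode_sentences(sentences: list[str]) -> torch.Tensor:
    """Encode sentences with Lawformer and take the [CLS] vector of each."""
    batch = tokenizer(sentences, return_tensors="pt", padding=True,
                      truncation=True, max_length=512)
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0, :]  # (num_sentences, hidden_size)


def extractive_summary(part: str, k: int = 3) -> str:
    """Toy extractive step: keep the k sentences closest to the part's
    centroid embedding, preserving their original order."""
    sentences = [s for s in part.split("。") if s.strip()]
    if len(sentences) <= k:
        return "。".join(sentences)
    embeddings = encode_sentences(sentences)
    centroid = embeddings.mean(dim=0, keepdim=True)
    scores = torch.nn.functional.cosine_similarity(embeddings, centroid)
    keep = sorted(torch.topk(scores, k).indices.tolist())
    return "。".join(sentences[i] for i in keep)


def summarize(document: str) -> str:
    """Summarize each structural part and integrate the partial summaries."""
    parts = segment_document(document)
    # The abstract pairs each part with the summarizer suited to its
    # structure; here every part goes through the same toy extractor, and
    # an abstractive decoder over the Lawformer encodings would replace it
    # where appropriate.
    return "。".join(extractive_summary(p) for p in parts)
```

This follows only the high-level flow stated in the abstract (segment, encode with Lawformer, summarize per part, integrate); the actual per-part model choices and training details are described in the paper body.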

Notes

  1. http://cail.cipsc.org.cn/

  2. https://github.com/china-ai-law-challenge/CAIL2020/tree/master/sfzy/baseline

Author information

Corresponding author

Correspondence to Yuming Wang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dan, J., Hu, W., Xu, L., Wang, Y., Wang, Y. (2023). A Hybrid Summarization Method for Legal Judgment Documents Based on Lawformer. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds) Natural Language Processing and Chinese Computing. NLPCC 2023. Lecture Notes in Computer Science (LNAI), vol 14303. Springer, Cham. https://doi.org/10.1007/978-3-031-44696-2_61

  • DOI: https://doi.org/10.1007/978-3-031-44696-2_61

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44695-5

  • Online ISBN: 978-3-031-44696-2

  • eBook Packages: Computer Science (R0)
