Artificial Intelligence and Predictive Justice: Limitations and Perspectives

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10868)

Abstract

One of the main barriers to effective prediction systems in the legal domain is the very limited availability of relevant data. This paper discusses the particular case of the Federal Court of Canada and describes some perspectives on how best to overcome these problems. Part of the process involves an automatic annotation system supervised by a manual annotation process. Several state-of-the-art methods for related tasks are presented, as well as promising approaches that leverage recent advances in natural language processing, such as word vector representations and recurrent neural networks. The insights outlined in the paper will be explored further in the near future, as this work is still in progress.
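To make the prediction task concrete, the snippet below gives a minimal sketch of a standard text-classification baseline of the kind surveyed in the paper. It is not the authors' system: it assumes a scikit-learn environment, and the toy decision texts and binary outcome labels are invented placeholders purely for illustration. It pairs TF-IDF features with a linear SVM, two of the classical techniques referenced in this line of work.

```python
# Minimal sketch (not the authors' system): TF-IDF features + linear SVM
# for predicting a binary case outcome from decision text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy corpus: one string per decision, one outcome label each
# (1 = granted, 0 = dismissed). A real corpus would hold full judgments.
decisions = [
    "the application for judicial review is granted",
    "the appeal is dismissed with costs",
    "the motion for an extension of time is granted",
    "the application is dismissed as moot",
]
labels = [1, 0, 1, 0]

# Pipeline: unigram/bigram TF-IDF weighting followed by a linear SVM classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(decisions, labels)

# Predict the outcome of an unseen (invented) decision text.
print(model.predict(["the application for judicial review is dismissed"]))
```

The word-embedding and recurrent-network approaches mentioned in the abstract would replace the TF-IDF features with learned dense representations of the text; the overall train-then-predict pipeline stays the same.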

Keywords

Legal artificial intelligence · Predictive justice · Natural language processing · Machine learning

Notes

Acknowledgments

We thank José Bonneau for his description of the difficulties in accessing legal court decisions. We also thank Diego Maupomé and Antoine Briand for their valuable comments.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Université du Québec à Montréal, Montréal, Canada
