
Journal of Experimental Criminology, Volume 11, Issue 2, pp 299–318

Isolating modeling effects in offender risk assessment

  • Zachary Hamilton
  • Melanie-Angela Neuilly
  • Stephen Lee
  • Robert Barnoski

Abstract

Objectives

Recent developments in actuarial research suggest that machine-learning and data-mining strategies can improve risk-model utility by producing statistical models, such as classification/decision-tree analysis and neural networks, that are said to mimic the decision-making of practitioners. The current article compares these actuarial modeling methods with a traditional logistic regression approach to risk-assessment development.

Methods

Using a large purposive sample of Washington State offenders (N = 297,600), the current study examines and compares the predictive validity of the currently used Washington State Static Risk Assessment (SRA) instrument with that of classification tree analysis/random forest and neural network models.
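
To make the comparison concrete, the following is a minimal sketch of how a logistic regression, a random forest, and a neural network might be fit and compared by AUC on a held-out sample, assuming a flat file of static criminal-history items with a binary recidivism outcome. The file name, feature handling, and tuning settings are illustrative assumptions, not the authors' SRA development pipeline.

```python
# Minimal sketch of a three-way model comparison (logistic regression,
# random forest, neural network). The CSV, column names, and settings
# are hypothetical; only the overall workflow mirrors the study design.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("offender_history.csv")      # hypothetical data extract
X = df.drop(columns=["recidivated"])          # static criminal-history items
y = df["recidivated"]                         # 1 = new conviction, 0 = none

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

models = {
    "logistic regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(
        n_estimators=500, random_state=42),
    "neural network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=42)),
}

# Fit each model on the training split and report AUC on the held-out split.
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```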

Results

Overall findings varied depending on the outcome of interest, with the best model for each method yielding AUCs ranging from 0.732 to 0.762. The advanced machine-learning methodologies produced some improvements in predictive performance, yet the logistic regression models demonstrated comparable predictive performance.

Conclusions

The study concludes that while data-mining techniques hold potential for improvement over traditional methods, regression-based models demonstrate comparable, and often better, predictive performance while remaining more parsimonious and more interpretable.
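
One way to see the interpretability advantage attributed to the regression-based approach: the coefficients of a fitted logistic model can be read directly as odds ratios, which tree ensembles and neural networks do not offer in the same way. The snippet below continues the illustrative sketch from the Methods section and assumes the objects defined there; the feature names remain hypothetical.

```python
# Continuing the sketch above: odds ratios from the logistic model make each
# risk item's contribution directly readable, which underpins the parsimony
# and interpretability argument. Because features were standardized in the
# pipeline, each ratio reflects a one-standard-deviation increase.
import numpy as np

logit = models["logistic regression"].named_steps["logisticregression"]
odds_ratios = np.exp(logit.coef_[0])
for feature, oratio in zip(X.columns, odds_ratios):
    print(f"{feature}: odds ratio = {oratio:.2f} per one-SD increase")
```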

Keywords

Random forest · Neural network · Recidivism · Risk assessment


Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • Zachary Hamilton (1)
  • Melanie-Angela Neuilly (2)
  • Stephen Lee (3)
  • Robert Barnoski (4)

  1. Department of Criminal Justice and Criminology, Washington State University, Spokane, USA
  2. Department of Criminal Justice and Criminology, Washington State University, Pullman, USA
  3. Bioinformatics and Computational Biology, University of Idaho, Moscow, USA
  4. Department of Criminal Justice and Criminology, Washington State University, Spokane, USA
