Law and Human Behavior, Volume 32, Issue 3, pp 266–278

Predicting Sex Offender Recidivism. I. Correcting for Item Overselection and Accuracy Overestimation in Scale Development. II. Sampling Error-Induced Attenuation of Predictive Validity Over Base Rate Information

Original Article

Abstract

The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using the data (N = 256) of Epperson et al. [2003, Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores; retrieved November 18, 2006 from the Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. The validity (area under the receiver operating characteristic curve, AUC) reported by Epperson et al. was .77, with 16 items selected. The present analysis yielded an asymptotically unbiased estimate of AUC = .58. The present article also examines the degree to which sampling error renders estimated cutting scores (chosen to suit local, and therefore varying, recidivism base rates) nonoptimal, so that the long-run performance of these estimated cutting scores (measured by the correct fraction, the total proportion of correct classifications) is poor when they are applied to their parent populations (with assumed values of AUC and recidivism rate). This question was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except at AUC values higher than any yet cross-validated, combined with recidivism base rates severalfold higher than the literature average [Hanson & Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis (User report 2004-02), Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument performing similarly to the MnSOST-R cannot expect to achieve a correct fraction notably in excess of what is achievable from knowing the population recidivism rate alone. The authors discuss the legal implications of these findings for procedural and substantive due process in relation to state sexually violent person commitment statutes and the Supreme Court's Kansas v. Hendricks decision regarding the constitutionality of such statutes.
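As a rough illustration of the first part of the abstract, the sketch below shows one common way to correct an apparent AUC for the optimism introduced when item selection and validation use the same development sample (an Efron-style optimism bootstrap). It is not the authors' procedure: the helper names (auc, select_items, optimism_corrected_auc), the correlation-threshold selection rule, and all numeric settings are hypothetical placeholders.

```python
# A minimal sketch, assuming a simple correlation-threshold item selection rule;
# this is an optimism-bootstrap correction, not the article's exact method.
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, y):
    """Area under the ROC curve via the Mann-Whitney pairwise-comparison identity."""
    pos, neg = scores[y == 1], scores[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def select_items(items, y, r_min=0.10):
    """Hypothetical selection rule: keep items correlating at least r_min with the outcome."""
    keep = [j for j in range(items.shape[1])
            if np.corrcoef(items[:, j], y)[0, 1] >= r_min]
    return keep or [0]                       # keep at least one item so the scale is non-empty

def optimism_corrected_auc(items, y, n_boot=200):
    """Apparent AUC minus the average optimism estimated from bootstrap resamples."""
    apparent = auc(items[:, select_items(items, y)].sum(axis=1), y)
    optimism = 0.0
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        keep = select_items(items[idx], y[idx])          # redo item selection in the resample
        score_boot = items[idx][:, keep].sum(axis=1)
        score_orig = items[:, keep].sum(axis=1)
        optimism += (auc(score_boot, y[idx]) - auc(score_orig, y)) / n_boot
    return apparent - optimism

# Example with purely synthetic data: 256 cases, 20 candidate dichotomous items.
n, k = 256, 20
items = rng.integers(0, 2, size=(n, k))
y = rng.integers(0, 2, size=n)               # outcome unrelated to the items here
print(optimism_corrected_auc(items, y))      # should land near .50 despite item selection
</code>
```

The second part of the abstract can likewise be illustrated with a small Monte Carlo under an assumed equal-variance binormal ROC model: a cutting score is chosen to maximize the correct fraction in a development sample, and its long-run correct fraction in the parent population is compared with the correct fraction obtainable from the base rate alone, max(p, 1 - p). The AUC, base rate, sample size, and replication count below are illustrative values, not those reported in the article.

```python
# A minimal sketch under an assumed equal-variance binormal model:
# nonrecidivists ~ N(0, 1), recidivists ~ N(d', 1), AUC = Phi(d' / sqrt(2)).
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
Z = NormalDist()                              # standard normal cdf / inverse cdf

def population_correct_fraction(cut, d_prime, p):
    """Long-run correct fraction at a fixed cut in the parent population."""
    sens = 1.0 - Z.cdf(cut - d_prime)         # P(score >= cut | recidivist)
    spec = Z.cdf(cut)                         # P(score <  cut | nonrecidivist)
    return p * sens + (1.0 - p) * spec

def simulate(auc_true=0.70, p=0.15, n=256, n_reps=500):
    d_prime = np.sqrt(2.0) * Z.inv_cdf(auc_true)
    cf_estimated_cut = []
    for _ in range(n_reps):
        y = rng.random(n) < p                 # recidivism indicator at base rate p
        scores = rng.normal(0.0, 1.0, n) + d_prime * y
        # choose the cut that maximizes the correct fraction *in the sample*;
        # the last candidate corresponds to predicting that no one recidivates
        candidates = np.append(np.sort(scores), scores.max() + 10.0)
        best_cut, best_cf = candidates[0], -1.0
        for c in candidates:
            cf = np.mean((scores >= c) == y)
            if cf > best_cf:
                best_cut, best_cf = c, cf
        cf_estimated_cut.append(population_correct_fraction(best_cut, d_prime, p))
    return np.mean(cf_estimated_cut), max(p, 1.0 - p)

cf_cut, cf_base_rate = simulate()
print(f"mean correct fraction with estimated cut: {cf_cut:.3f}")
print(f"correct fraction from base rate alone:    {cf_base_rate:.3f}")
```

Comparing the two printed numbers across different (auc_true, p) settings reproduces the kind of comparison the abstract describes, though the article's actual design and parameter grid may differ.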

Keywords

Base rate · Bootstrap · Cutting score · Incremental validity · Minnesota Sex Offender Screening Tool-Revised · Recidivism · Sex offender

References

1. Amenta, A. E., Guy, L. S., & Edens, J. F. (2003). Sex offender risk assessment: A cautionary note regarding measures attempting to quantify violence risk. Journal of Forensic Psychology Practice, 3, 39–50.
2. Bamber, D. (1975). The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology, 12, 387–415.
3. Barbaree, H. E., Seto, M. C., Langton, C. M., & Peacock, E. J. (2001). Evaluating the predictive accuracy of six risk assessment instruments for adult sex offenders. Criminal Justice and Behavior, 28, 490–521.
4. Bartosh, D. L., Garby, T., Lewis, D., & Gray, S. (2003). Differences in the predictive validity of actuarial risk assessments in relation to sex offender type. International Journal of Offender Therapy and Comparative Criminology, 47, 422–438.
5. Campbell, T. W. (2000). Sexual predator evaluations and phrenology: Considering issues of evidentiary reliability. Behavioral Sciences and the Law, 18, 111–130.
6. Campbell, T. W. (2003). Sex offenders and actuarial risk assessments: Ethical considerations. Behavioral Sciences and the Law, 21, 269–279.
7. Epperson, D. L., Kaul, J. D., Huot, S. J., Hesselton, D., Alexander, W., & Goldman, R. (2000, November). Cross-validation of the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R). Paper presented at the meeting of the Association for the Treatment of Sexual Abusers, San Diego, CA. Retrieved March 30, 2006 from http://www.psychology.iastate.edu/faculty/epperson/atsa2000/sld001.htm
8. Epperson, D. L., Kaul, J. D., Huot, S., Goldman, R., & Alexander, W. (2003). Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm
9. Federal Rules of Evidence Rule 702, Pub. L. No. 93-595, §1, 88 Stat. 1937 (1975).
10. Federal Rules of Evidence Rule 403, Pub. L. No. 93-595, §1, 88 Stat. 1932 (1975).
11. Grove, W. M., Zald, D. H., Hallberg, A. M., Lebow, B., Snitz, E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12, 19–30.
12. Hanley, J. A., & McNeil, B. J. (1983a). The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143, 29–36.
13. Hanley, J. A., & McNeil, B. J. (1983b). A method for comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology, 148, 839–843.
14. Hanson, R. K. (1997). The development of a brief actuarial risk scale for sexual offense recidivism (User report 1997-04). Ottawa: Department of the Solicitor General of Canada.
15. Hanson, R. K., & Morton-Bourgon, K. E. (2004). Predictors of sexual recidivism: An updated meta-analysis (User report 2004-02). Ottawa: Public Safety and Emergency Preparedness Canada.
16. Hanson, R. K., & Thornton, D. (1999). Static 99: Improving actuarial risk assessments for sex offenders (User report 1999-02). Ottawa: Department of the Solicitor General of Canada.
17. Harris, G. T., Rice, M. E., & Quinsey, V. L. (1993). Violent recidivism of mentally disordered offenders: The development of a statistical prediction instrument. Criminal Justice and Behavior, 20, 315–335.
18. Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. New York: Academic Press.
19. Janus, E. S. (2000). Sex predator commitment laws: Constitutional but unwise. Psychiatric Annals, 30, 411–420.
20. Janus, E. S., & Meehl, P. E. (1997). Assessing the legal standard for predictions of dangerousness in sex offender commitment proceedings. Psychology, Public Policy, and the Law, 3, 33–64.
21. Janus, E. S., & Prentky, R. A. (2003). Forensic use of actuarial risk assessment with sex offenders: Accuracy, admissibility and accountability. American Criminal Law Review, 40, 1443–1499.
22. Kansas v. Hendricks (1997). 117 S. Ct. 2072.
23. Langton, C. M. (2003). Contrasting approaches to risk assessment with adult male sexual offenders: An evaluation of recidivism prediction schemes and the utility of supplementary clinical information for enhancing predictive accuracy (Unpublished doctoral dissertation, Institute of Medical Science, University of Toronto, Toronto, 2003). Dissertation Abstracts International, 64(4-B), 1907. (UMI No. 2003-95020-071).
24. Langton, C. M., Barbaree, H. E., Seto, M. C., Peacock, E. J., Harkis, L., & Hansen, K. T. (in press). Actuarial assessment of risk for reoffence among adult sex offenders: Evaluating the predictive accuracy of the Static-2002 and five other instruments. Criminal Justice and Behavior.
25. Litwack, T. R. (2001). Actuarial versus clinical assessments of dangerousness. Psychology, Public Policy, and the Law, 7, 409–443.
26. Meehl, P. E. (1965). Detecting latent clinical taxa by fallible quantitative indicators lacking an accepted criterion (Report No. PR-65-2). Minneapolis: University of Minnesota, Research Laboratories of the Department of Psychiatry.
27. Meehl, P. E., & Rosen, A. (1955). Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores. Psychological Bulletin, 52, 194–216.
28. Minnesota Board of Psychology. (2005). Minnesota Board of Psychology psychology practice act. Minneapolis: Minnesota Board of Psychology.
29. Minnesota Department of Corrections. (2000a). Sex offender policy board and management study. St. Paul: Author.
30. Minnesota Department of Corrections. (2000b). Sex offender supervision: 2000 report to the Legislature. St. Paul: Author.
31. Mossman, D. (1994a). Further comments on portraying the accuracy of violence prediction. Law and Human Behavior, 18, 587–593.
32. Mossman, D. (1994b). Assessing predictions of violence: Being accurate about accuracy. Journal of Consulting and Clinical Psychology, 62, 783–792.
33. Nuffield, J. (1982). Parole decision-making in Canada: Research towards decision guidelines. Ottawa: Solicitor General of Canada Research Division.
34. Prentky, R. A., Lee, A. F. S., Knight, R. A., & Cerce, D. (1997). Recidivism rates among child molesters and rapists: A methodological analysis. Law and Human Behavior, 21, 635–658.
35. Quinsey, V. L., Harris, G. T., Rice, M. E., & Cormier, C. A. (1998). Violent offenders: Appraising and managing risk. Washington, DC: American Psychological Association.
36. Rice, M. E., & Harris, G. T. (1995). Violent recidivism: Assessing predictive validity. Journal of Consulting and Clinical Psychology, 63, 737–748.
37. SAS Institute, Inc. (2005). SAS language reference: Dictionary, Version 9. Cary, NC: Author.
38. Scott, D. W. (1992). Multivariate density estimation: Theory, practice, and visualization. New York: Wiley.
39. Seto, M. C. (2005). Is more better? Combining actuarial risk scales to predict recidivism among adult sex offenders. Psychological Assessment, 17, 156–167.
40. Shao, J. (1996). Bootstrap model selection. Journal of the American Statistical Association, 91, 655–665.
41. SPSS, Inc. (2005). SPSS 14.0 Base user's guide. New York: Prentice-Hall.
42. Szmukler, G. (2001). Violence risk prediction in practice. British Journal of Psychiatry, 178, 84–85.
43. Venables, W. N., Smith, D. M., & the R Development Core Team. (2002). An introduction to R. Bristol, UK: Network Theory, Ltd.
44. Wollert, R. W. (2002). The importance of cross-validation in actuarial test construction: Shrinkage in the risk estimates for the Minnesota Sex Offender Screening Tool-Revised. Journal of Threat Assessment, 2, 87–102.
45. Wollert, R. W. (2003). Additional flaws in the Minnesota Sex Offender Screening Tool-Revised. Journal of Threat Assessment, 2, 65–78.

Copyright information

© Springer Science + Business Media, LLC 2007

Authors and Affiliations

1. Department of Psychology, University of Minnesota, Twin Cities Campus, Minneapolis, USA