
Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues

  • Review
  • Published in: Neuropsychology Review

Abstract

Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published that aim to develop and validate methods for diagnosing malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering: (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. The Part II companion paper examines three closely related research methodological issues. Statistical issues are explored through conceptual analysis, statistical simulation, and reanalysis of findings from prior validation studies. Findings suggest that extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) use of logistic regression, and (c) adoption of the consensus model on the “transparent reporting of multivariate prediction models for individual prognosis or diagnosis” (TRIPOD) in the malingering literature.
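To make recommendation (a) concrete, the sketch below computes a full set of confusion-table statistics (sensitivity, specificity, positive and negative predictive power) with 95% confidence intervals from a single 2 × 2 table. The cell counts are hypothetical, and the Wilson score interval is one reasonable choice among several; neither is taken from the article.

# Illustrative sketch (not from the article): full confusion-table statistics
# with 95% confidence intervals from hypothetical cell counts.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    if n == 0:
        return (float("nan"), float("nan"))
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

def confusion_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV, each with a Wilson 95% CI."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "PPV":         (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "NPV":         (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }

# Hypothetical counts for one validity test against a malingering criterion
for name, (est, (lo, hi)) in confusion_stats(tp=40, fp=15, fn=10, tn=135).items():
    print(f"{name}: {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")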


Availability of Data and Materials

No parts of this work have been presented elsewhere. However, in compliance with data sharing requirements, data and analysis presented in Fig. 4 and Table 3 have been publicly shared under the following Figshare DOIs: Fig. 4 Data and Analysis (chi square and rtet for all 2-validity-measure concordances), https://doi.org/10.6084/m9.figshare.14256470.v2; Table 3 Data and Analysis (Prediction of malingering from 7 validity measures), https://doi.org/10.6084/m9.figshare.14368727.v3.
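As a rough illustration of the kind of two-validity-measure concordance analysis shared at the first DOI above, the sketch below computes a chi-square test of association and an approximate tetrachoric correlation (rtet) for a hypothetical 2 × 2 concordance table. The cell counts are invented, and the cosine-pi approximation stands in for a full maximum-likelihood tetrachoric estimate; neither reflects the shared data or the article's actual computations.

# Illustrative sketch (hypothetical counts): chi-square test of association and an
# approximate tetrachoric correlation for a 2 x 2 PVT concordance table.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: PVT1 fail / pass; columns: PVT2 fail / pass (hypothetical frequencies)
table = np.array([[30, 10],
                  [15, 145]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.4f}")

# Cosine-pi approximation to the tetrachoric correlation
a, b = table[0]
c, d = table[1]
odds = (a * d) / (b * c)
rtet_approx = np.cos(np.pi / (1 + np.sqrt(odds)))
print(f"approximate rtet = {rtet_approx:.2f}")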

Notes

  1. The term “forensic” shall refer to any situation in which there is an actual or potential legal question regarding the veracity of an examinee's presentation, such as determinations about the presence of sexual abuse; eligibility for disability benefits; fitness to fulfill a particular role, such as parent or police officer; ability to stand trial; presence of neurocognitive and/or psychiatric conditions that may affect the guilt, innocence, or sentencing of a criminal defendant; or tort situations involving questions of the veracity of neurocognitive abilities and related conditions. Such actual or potential legal questions arise most frequently in medicolegal settings, but may also arise in clinical contexts (cf. Sherman et al., 2020, p. 9; Sweet et al., 2021, p. 1059).

  2. A “confusion table” or “confusion matrix” is a 2 × 2 diagnostic classification table (see Figs. 1 and 2). It is a special case of the 2 × 2 contingency table often used in psychological research to show the association of two binary variables.

  3. In Martin et al. (2020, pp. 99–100; Table 5), 53 TOMM studies are listed with an additional two studies listed on p. 112 for a total of 55 studies. However, there is inconsistency in how studies from articles that report multiple studies are listed in Martin et al. (2020). For consistency, in Parts I and II of the present review, each separate study, whether reported in a publication with other studies or by itself, is counted separately.

  4. The terms “positive predictive power” and “posterior probability” are sometimes considered conceptually distinct: positive predictive power is an operating characteristic of a test in a given setting, computed as the ratio of true-positive cases to the total number of examinees with a positive diagnostic finding, whereas posterior probability is the probability that a given examinee with a positive test result is in fact a true-positive case (cf. Fletcher et al., 2014, p. 118). Despite this conceptual distinction, both classification accuracy statistics are computed identically (the computation is written out after these notes). In this article, the terms are used in keeping with their conceptual distinction.

  5. Knowledge of the base rate of a condition is required for the calculation of classification accuracy statistics. Statistical modelling and analyses of hypothetical simulation data in this review will be based on commonly accepted estimates of base rates from the malingering literature. Examination of whether these estimates are tenable considering the findings of the present review is beyond the scope of this review.

  6. This quantity may also be referred to as the conditional correlation of \(\mathrm{PVT}_1\) and \(\mathrm{PVT}_2\), conditioned on \(\mathrm{M}^{-}\).

  7. This quantity may also be referred to as the conditional correlation of \(\mathrm{PVT}_1\) and \(\mathrm{PVT}_2\), conditioned on \(\mathrm{M}^{+}\).

  8. This quantity may also be referred to as the unconditional correlation of \(\mathrm{PVT}_1\) and \(\mathrm{PVT}_2\), that is, the correlation is not conditioned on malingering status (a computational sketch of conditional versus unconditional correlations follows these notes).

  9. In the original work (Larrabee et al., 2019, Table 4, p. 1362), the average absolute skew among 11 validity tests is reported as −.942. This value is arithmetically impossible because the average of absolute values cannot be negative. Moreover, while Table 4 reports skew for 16 validity tests, the reported average is based on a subset of only 11 of them, with no rationale given for excluding the other 5. Therefore, in the present analysis, the average absolute skew was recalculated from all 16 skew values as reported in the original work; this is the value given in the text above (average absolute skew = 1.00).

  10. An additional method for estimating posterior probabilities from multiple PVTs may be Markov Chain Monte Carlo algorithms that correctly implement multivariate Bayesian modelling (cf. Al-Khairullah & Al-Baldawi, 2021). When directly compared to frequentist approaches, such as logistic regression, such models have been shown to yield diagnostic accuracy comparable to logistic regression (e.g., Wang et al., 2014; Witteveen et al., 2018).
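The formulas below spell out the computation referenced in Notes 4 and 5: positive predictive power, which is numerically identical to the posterior probability of malingering given a positive test result, expressed first in terms of confusion-table counts and then, via Bayes' theorem, in terms of sensitivity, specificity, and the base rate. These are standard identities restated for convenience; the illustrative numbers are not taken from the article.

\[
\mathrm{PPP} = \frac{TP}{TP + FP},
\qquad
P(\mathrm{M}^{+} \mid T^{+}) = \frac{\mathrm{Se}\,p}{\mathrm{Se}\,p + (1-\mathrm{Sp})(1-p)},
\]

where \(TP\) and \(FP\) are the true-positive and false-positive counts, \(\mathrm{Se}\) is sensitivity, \(\mathrm{Sp}\) is specificity, and \(p\) is the base rate of malingering in the setting. For example, with \(\mathrm{Se} = .80\), \(\mathrm{Sp} = .90\), and \(p = .40\), the posterior probability is \(.80 \times .40 / (.80 \times .40 + .10 \times .60) \approx .84\).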
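As a computational illustration of Notes 6 through 8, the sketch below simulates binary pass/fail outcomes on two PVTs and computes their correlation within the non-malingering group (M−), within the malingering group (M+), and unconditionally across both groups. The simulation parameters are arbitrary assumptions chosen only to show the distinction; they are not estimates from the article.

# Illustrative simulation (arbitrary parameters): conditional vs. unconditional
# correlation of two binary PVT outcomes, conditioned on malingering status.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_neg, n_pos = 800, 200          # hypothetical M- and M+ group sizes (base rate .20)

# Hypothetical failure probabilities: low in M-, high in M+
fail_p = np.r_[np.full(n_neg, 0.10), np.full(n_pos, 0.80)]
df = pd.DataFrame({
    "malingering": np.r_[np.zeros(n_neg, dtype=int), np.ones(n_pos, dtype=int)],
    "pvt1_fail": rng.binomial(1, fail_p),
    "pvt2_fail": rng.binomial(1, fail_p),
})

r_within_neg = df.loc[df.malingering == 0, ["pvt1_fail", "pvt2_fail"]].corr().iloc[0, 1]
r_within_pos = df.loc[df.malingering == 1, ["pvt1_fail", "pvt2_fail"]].corr().iloc[0, 1]
r_uncond = df[["pvt1_fail", "pvt2_fail"]].corr().iloc[0, 1]

print(f"conditional r within M-: {r_within_neg:.2f}")
print(f"conditional r within M+: {r_within_pos:.2f}")
print(f"unconditional r:         {r_uncond:.2f}")

Because the two tests are generated independently within each group, the within-group correlations hover near zero while the unconditional correlation is substantial, since malingering status drives failures on both tests; this is the pattern that the distinction drawn in Notes 6 through 8 is meant to capture.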
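Finally, as a minimal sketch of the frequentist comparator mentioned in Note 10, the code below fits a logistic regression that aggregates several binary PVT outcomes into a single estimated posterior probability of malingering. The simulated data and the scikit-learn implementation are assumptions for illustration; they do not reproduce any model reported in the article or in the cited comparisons.

# Illustrative sketch (simulated data): aggregating multiple PVTs with logistic
# regression to obtain an estimated posterior probability of malingering.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_neg, n_pos, n_pvt = 800, 200, 7          # hypothetical sample and 7 PVTs
y = np.r_[np.zeros(n_neg, dtype=int), np.ones(n_pos, dtype=int)]

# Hypothetical per-PVT failure rates: .10 in M-, .70 in M+
X = rng.binomial(1, np.where(y[:, None] == 1, 0.70, 0.10), size=(y.size, n_pvt))

model = LogisticRegression().fit(X, y)

# Estimated posterior probability of malingering for an examinee failing 2 of 7 PVTs
examinee = np.array([[1, 1, 0, 0, 0, 0, 0]])
print(f"estimated P(M+ | profile) = {model.predict_proba(examinee)[0, 1]:.2f}")

In practice, any such prediction model would need to be reported and validated along the lines of the TRIPOD guidance recommended in the abstract.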

References


Author information


Contributions

Not applicable because this manuscript has only a single author.

Corresponding author

Correspondence to Christoph Leonhard.

Ethics declarations

Ethics Approval

Not applicable because this is a review and not a study where human or animal data were collected.

Competing Interests

The author declares no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 251 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Leonhard, C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues. Neuropsychol Rev 33, 581–603 (2023). https://doi.org/10.1007/s11065-023-09601-7

