Detection System for Malingered PTSD and Related Response Biases

Published in: Psychological Injury and Law

Abstract

This article consists mostly of an appendix on the detection of feigned/malingered PTSD that was developed after analysis of extant malingering detection systems and presented in Young (2014a) as a long table. The journal's reviewers considered it appropriate that, although the system had been published in book format, it be opened to peer-review commentary to address errors of omission and commission, leading to relevant changes, if any, before further use beyond serving as a guide to assessments in the area. In this regard, reviews, comments, criticisms, suggestions for change, and so on are solicited, with a response (rebuttal) to follow. The present malingered PTSD detection system constitutes the first in the field. It incorporates multiple corrections and additions relative to the extant systems on which it is based (MND, Malingered Neurocognitive Dysfunction, Slick, Sherman, & Iverson, 1999; MPRD, Malingered Pain-Related Disability, Bianchini, Greve, & Glynn, 2005). It includes very specific rules and procedures both for testing and for considering inconsistencies/discrepancies in the file history. It is therefore comprehensive and lengthy, taking about ten times as long to present in tabular format as the MND and MPRD systems on which it is based (portions in italics indicate what is new to the system). It was constructed to permit the creation of equivalent systems for neurocognition and pain, also presented in Young (2014a). The system is useful to mental health professionals not well versed in psychological testing because, aside from its testing component, it includes extensive procedures for evaluating inconsistencies/discrepancies in examinee files. The system needs evaluation of its reliability and validity, as well as its clinical utility.

References

  • Ben-Porath, Y. S., & Tellegen, A. (2008/2011). MMPI-2-RF: Manual for administration, scoring, and interpretation. Minneapolis, MN: University of Minnesota Press.

  • Bianchini, K. J., Greve, K. W., & Glynn, G. (2005). On the diagnosis of malingered pain-related disability: Lessons from cognitive malingering research. The Spine Journal, 5, 404–417.

  • Biehn, T. L., Elhai, J. D., Seligman, L. D., Tamburrino, M., & Forbes, D. (2013). Underlying dimensions of DSM-5 posttraumatic stress disorder and major depressive disorder symptoms. Psychological Injury and Law, 6, 290–298.

  • Boone, K. B. (2011). Clarification or confusion? A review of Rogers, Bender, and Johnson’s a critical analysis of the MND criteria for feigned cognitive impairment: Implications for forensic practice and research. Psychological Injury and Law, 4, 157–162.

  • Briere, J. (2001). Detailed assessment of posttraumatic stress professional manual. Odessa, FL: Psychological Assessment Resources.

  • Briere, J. (2011). Trauma Symptom Inventory (TSI-2) professional manual (2nd ed.). Odessa, FL: Psychological Assessment Resources.

  • Bruns, D., & Disorbio, J. M. (2003). Battery for health improvement 2 manual. Minneapolis, MN: Pearson Assessment Systems.

  • Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). Manual for the Restandardized Minnesota Multiphasic Personality Inventory: MMPI-2. An interpretive guide. Minneapolis, MN: University of Minnesota Press.

  • Butcher, J. N., Graham, J. R., Ben-Porath, Y. S., Tellegen, A., Dahlstrom, W. G., & Kaemmer, G. (2001). Minnesota multiphasic personality inventory-2: Manual for administration and scoring (2nd ed.). Minneapolis, MN: University of Minnesota Press.

  • Carone, D. A., & Bush, S. S. (2013). Mild traumatic brain injury: System validity assessment and malingering. New York: Springer.

  • Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S. Ct. 2786 (1993).

  • Demakis, G. J., & Elhai, J. D. (2011). Neuropsychological and psychological aspects of malingered posttraumatic stress disorder. Psychological Injury and Law, 4, 24–31.

  • Disorbio, J. M., & Bruns, D. (2002). Brief battery for health improvement 2 manual. Minneapolis, MN: Pearson Assessment Systems.

  • Frederick, R. I. (1997). Validity indicator profile manual. Minnetonka, MN: NCS Assessments.

  • Friedman, M. J., Keane, T. M., & Resick, P. A. (2014). Handbook of PTSD: Science and practice (2nd ed.). New York: Guilford Press.

  • Gervais, R. O., Ben-Porath, Y. S., Wygant, D. B., & Green, P. (2007). Development and validation of a Response Bias Scale (RBS) for the MMPI-2. Assessment, 14, 196–208.

  • Green, P. (2005). Green’s Word Memory Test for Windows: User’s manual. Edmonton: Green’s.

  • Greve, K. W., Curtis, K. L., & Bianchini, K. J. (2013). Symptom validity testing: A summary of recent research. In S. Koffler, J. Morgan, I. S. Baron, & M. F. Greiffenstein (Eds.), Neuropsychology: Science & practice I (pp. 61–94). New York: Oxford University Press.

  • Hathaway, S. R., & McKinley, J. C. (1943). Manual for the Minnesota multiphasic personality inventory. New York: Psychological Corporation.

  • Henry, G. K., Heilbronner, R. L., Mittenberg, W., & Enders, C. (2006). The Henry-Heilbronner Index: A 15-item empirically derived MMPI-2 subscale for identifying probable malingering in personal injury litigants and disability claimants. The Clinical Neuropsychologist, 20, 786–797.

  • Kane, A. W., & Dvoskin, J. A. (2011). Evaluation for personal injury claims. New York: Oxford University Press.

  • Larrabee, G. J. (2012a). Assessment of malingering. In G. J. Larrabee (Ed.), Forensic neuropsychology: A scientific approach (2nd ed., pp. 116–159). New York: Oxford University Press.

  • Larrabee, G. J. (2012b). Forensic neuropsychology: A scientific approach. New York: Oxford University Press.

  • Lees-Haley, P. R., English, L. T., & Glenn, W. J. (1991). A fake bad scale for the MMPI-2 for personal injury claimants. Psychological Reports, 68, 203–210.

  • Miller, H. A. (2001). M-FAST: Miller-forensic assessment of symptoms test professional manual. Odessa, FL: Psychological Assessment Resources.

  • Morel, K. R. (1995). Use of the binomial theorem in detecting fictitious posttraumatic stress disorder. Anxiety Disorders Practice Journal, 2, 55–62.

  • Morel, K. R. (1998). Development and preliminary validation of a forced-choice test of response bias for posttraumatic stress disorder. Journal of Personality Assessment, 70, 299–314.

  • Morey, L. (1991). Personality Assessment Inventory: Professional manual. Odessa, FL: Psychological Assessment Resources.

  • Morey, L. (2007). Personality assessment inventory: Professional manual (2nd ed.). Lutz, FL: Psychological Assessment Resources.

  • Odland, A., Lammy, A., Martin, P., Grote, C., & Mittenberg, W. (2015). Advanced administration and interpretation of multiple validity tests. Psychological Injury and Law, 8, 46–63.

  • Reynolds, C. R., & Horton, A. M., Jr. (2012). Detection of malingering during head injury litigation. New York: Springer Science + Business Media.

  • Reynolds, C. R., & Kamphaus, R. W. (2004). BASC-2: Behavior assessment system for children (2nd ed.). Circle Pines, MN: American Guidance Service.

  • Rogers, R. (Ed.). (2008). Clinical assessment of malingering and deception (3rd ed.). New York: Guilford.

  • Rogers, R., Bagby, R. M., & Dickens, S. E. (1992). Structured interview of reported symptoms. Odessa, FL: Psychological Assessment Resources.

  • Rogers, R., Bender, S. D., & Johnson, S. F. (2011a). A critical analysis of the MND criteria for feigned cognitive impairment: Implications for forensic practice and research. Psychological Injury and Law, 4, 147–156.

  • Rogers, R., Bender, S. D., & Johnson, S. F. (2011b). A commentary on the MND model and the Boone critique: “Saying it doesn’t make it so”. Psychological Injury and Law, 4, 162–167.

  • Rogers, R., Sewell, K. W., & Gillard, N. D. (2010). Structured Interview of Reported Symptoms, second edition: Professional manual. Lutz, FL: Psychological Assessment Resources.

  • Rubenzer, S. (2009). Posttraumatic stress disorder: Assessing response style and malingering. Psychological Injury and Law, 2, 114–142.

  • Ruff, R. M., & Hibbard, K. M. (2003). RNBI Ruff Neurobehavioral Inventory professional manual. Odessa, FL: Psychological Assessment Resources.

  • Schutte, C., Millis, S., Axelrod, B., & VanDyke, S. (2011). Derivation of a composite measure of embedded symptom validity indices. The Clinical Neuropsychologist, 25, 454–462.

  • Sleep, C. E., Petty, J. A., & Wygant, D. B. (2015). Framing the results: Assessment of response bias through select self-report measures in psychological injury evaluations. Psychological Injury and Law, 8, 27–39.

  • Slick, D. J., Hopp, G., Strauss, E., & Thompson, G. B. (1997/2005). Victoria Symptom Validity Test: Professional manual. Odessa, FL: Psychological Assessment Resources.

  • Slick, D. J., & Sherman, E. M. S. (2012). Differential diagnosis of malingering and related clinical presentations. In E. M. S. Sherman & B. L. Brooks (Eds.), Pediatric forensic neuropsychology (pp. 113–135). New York: Oxford University Press.

  • Slick, D. J., & Sherman, E. M. S. (2013). Differential diagnosis of malingering. In D. A. Carone & S. S. Bush (Eds.), Mild traumatic brain injury: System validity assessment and malingering (pp. 57–72). New York: Springer.

  • Slick, D. J., Sherman, E. M. S., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13, 545–561.

  • Tombaugh, T. N. (1996). TOMM: The test of memory malingering manual. North Tonawanda, NY: Multi-Health Systems.

  • Vasterling, J. J., Bryant, R. A., & Keane, T. M. (2012). PTSD and mild traumatic brain injury. New York: Guilford.

  • Young, G. (2014a). Malingering, feigning, and response bias in psychiatric/psychological injury: Implications for practice and court. Dordrecht, Netherlands: Springer Science + Business Media.

  • Young, G. (2014b). Psychological injury and law II: Implications for mental health policy and ethics. Mental Health Law and Policy Journal, 3, 418–470.

  • Young, G. (2014c). Resource material for ethical psychological assessment of symptom and performance validity, including malingering. Psychological Injury and Law, 7, 206–235.

  • Young, G. (2015). Psychological injuries, law, malingering, PTSD, and a new detection system. Unpublished manuscript, Department of Psychology, Glendon College, York University, Toronto, Ontario, Canada.

  • Young, G., & Drogin, E. Y. (2014). Psychological injury and law I: Causality, malingering, and PTSD. Mental Health Law and Policy Journal, 3, 373–417.

  • Young, G., Lareau, C., & Pierre, B. (2014). One quintillion ways to have PTSD comorbidity: Recommendations for the disordered DSM-5. Psychological Injury and Law, 7, 61–74.

  • Zoellner, L. A., Bedard-Gilligan, M. A., Jun, J. J., Marks, L. H., & Garcia, N. M. (2013). The evolving construct of posttraumatic stress disorder (PTSD): DSM-5 criteria changes and legal implications. Psychological Injury and Law, 6, 277–289.


Conflict of interest

The author has no conflicts of interest related to this paper. He does mostly rehabilitation and some plaintiff work.

Disclaimer

The author receives royalties from his 2014 book mentioned above.

Author information

Corresponding author

Correspondence to Gerald Young.

Appendix

Proposed Criteria for Non-Credible Feigned Posttraumatic Stress Disorder and Related Disability/Dysfunction

Introduction

The present system has been developed to help in the detection of malingering and related response bias in forensic disability and related evaluations. The system is referred to as the Psychological Injury Disability/Dysfunction—Feigning/Malingering/Response Bias System (PID-FMR-S). It is composed of three systems that are quite uniform—the Feigned Posttraumatic Stress Disorder Disability/Dysfunction (F-PTSDR-D), the Feigned Neurocognitive Related Disability/Dysfunction (F-NCR-D), and the Feigned Pain-Related Disability/Dysfunction (F-PR-D) systems. These three systems cover the major psychological injuries of PTSD, TBI, and pain, respectively. The systems should be used as part of comprehensive evaluations that use state-of-the-art testing and search for inconsistencies/discrepancies. The overall system has been constructed as an impartial, middle-of-the-road one that is scientifically informed. It is published in the book by the system’s author, Gerald Young (Malingering, Feigning, and Response Bias in Psychiatric/Psychological Injury: Implications for Practice and Court; Springer Science+Business Media, 2014). In the book, Young considers alternate systems and builds on them (for neurocognition, the Malingered Neurocognitive Dysfunction, MND; Slick, Sherman, & Iverson, 1999; for pain, the Malingered Pain-Related Disability, MPRD; Bianchini et al., 2005). In addition, the book reviews the literature on malingering, especially in Larrabee (2012b) and Reynolds and Horton (2012).

Aside from examining the MND and MPRD systems, the Young book considers the work of Larrabee (2012a) in particular. It carefully analyzes the proposals that (a) even one below-chance 1 performance on a forced-choice test or (b) below-cut-off performance on three, or perhaps two, validity indicators from a battery is sufficient to attribute malingering. This analysis has led to a more conservative, middle-of-the-road approach to the testing criteria in the present system. At the same time, the inconsistency/discrepancy criteria are greatly elaborated in the present system compared to other systems. Moreover, other checks and balances have been included. Therefore, in many ways the present system has aspects that are comparable to Larrabee's proposals. To conclude, even for its testing criteria, the present system does not simply dismiss the prior work but builds on it.

As an introduction to the specifics of the system, and to reinforce that it respects and builds on the work of Larrabee (2012a), the following briefly summarizes the diverse ways that the system's levels of definite malingering, definite response bias, and probable response bias can be reached.

Aside from cases with extremely compelling evidence, such as frank admission or indisputable videographic evidence, definite malingering can be attributed in cases in which: (a) two or more forced-choice measures are failed at the below-chance 1 level; (b) there are five or more test failures on other valid psychometric measures; (c) there are three or more compelling inconsistencies; (d) combinations of these types of evidence are found; or (e) other evidence replaces the weighting of these three types of evidence, such as extreme scores on valid psychometric tests or an overall judgment of the file that adds weight. In the latter case, when numerical data can be gathered, three test failures could be sufficient to attribute malingering, everything else being equal.

As for assigning definite response bias, the criteria above apply, except that they involve one forced-choice test rather than two, four other tests rather than five or more, and two compelling inconsistencies rather than three or more, with no requirement of extreme scores. For probable response bias, the criteria exclude forced-choice test failure but require three other test failures rather than four, and one compelling inconsistency rather than two.

The reader will note that Larrabee (2012a) emphasized three if not two failures on relevant tests as very strong evidence of malingering. All things considered, the present system arrives at a protocol that might give a comparable weighting to such test failures.

Overall, those who had hoped for a system that catches either most evaluees or almost no evaluees in its malingering net will be disappointed, but those who adhere to a science-first approach will find the system rational and balanced. In this regard, the system has been constructed so that its application should yield similar ratings by different raters, that is, good inter-rater reliability. In addition, the system appears to have the elements needed for adequate validity (e.g., construct, content, criterion). Its state-of-the-art and middle-of-the-road approach constitutes an important principle underlying validity.

Given these considerations, use of the present system in practice has the potential to meet admissibility criteria in court, perhaps more so than other systems, and should stand one's practice in good stead. A worksheet has been developed to accompany its use. Note that through its inconsistencies/discrepancies criteria, the system should be quite helpful to mental health professionals who are not trained in psychological testing, such as psychiatrists.

Criteria

Criterion A: Evidence of significant external incentive. At least one clearly identified and substantial external incentive for conscious exaggeration or fabrication of symptoms is present at the time of examination (e.g., personal injury litigation, workers compensation benefits, psychiatric/psychological disability pension).

Criterion B: Evidence from psychological testing. Evidence that the evaluee's psychiatric, psychological, emotional, coping, and related capacities as indicated by formal psychometric testing (e.g., in the context of a psychological or neuropsychological evaluation) are consistent with exaggeration or feigning of functional psychiatric/psychological disability.

A. Different Degrees of Certainty of Response Bias, According to Psychological Testing

A1) Definite Malingering.

i) The evidence is incontrovertible, even when the rest of the data gathered is considered. Below-chance performance (p < .05) on two or more forced-choice measures of psychiatric/psychological (e.g., cognitive or perceptual) function, e.g., below-chance 1 performance on the TOMM [scores below tests’ clinical/threshold cut scores but that are higher than chance performance are dealt with in the next level], the VSVT, and the WMT. Also consider the VIP.

Or,

ii) Performance on five or more well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

A2) Definite negative response bias.

i) Below-chance performance (p < .05) on one forced-choice measure of psychiatric/psychological (e.g., cognitive or perceptual) function, e.g., below-chance 1 performance on the TOMM [scores below tests’ clinical/threshold cut scores but that are higher than chance performance are dealt with in the next level].

Note. If only one forced-choice test is administered and the evaluee fails at the below-chance 1 level, a second one is administered to determine whether the person reaches the definite malingering rating.

Or,

ii) Performance on four well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

Note. Failure on forced-choice measures that is not below-chance 1 but does meet pass-fail thresholds according to normative cut scores is considered for this criterion; i.e., failure to reach critical thresholds based on normative or otherwise validly selected and justified cut scores. That is, forced-choice test results at the latter level, as opposed to the below-chance 1 level, could be included among the “well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms.” Note that the same rule applies in the next categories.

A3) Probable negative response bias.

 Performance on three well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

A3-4) Intermediate (Probable to possible, gray zone) negative response bias.

i) The data meet the requirements for classification of possible negative response bias but not the classification of probable negative response bias. Nevertheless, there are supplementary data available about the evaluee that raise the rating to the intermediate level.

For test data, this would refer to results for extra tests that had not been used for the primary ratings because of the scoring rules described below, such as a second personality test with numerous effort/validity detector scales, not all of which had been used for the primary rating, with one or two indicating performance below accepted criteria for lack of effort/validity. That is, in addition to meeting the criteria for A4, performance on two well-validated supplementary (not primary) tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

Or,

ii) The data do not even meet the requirements for classification of possible negative response bias. Nevertheless, there are supplementary data available about the evaluee that raise the rating to this intermediate level. For test data, this would refer to results for extra tests that had not been used for the primary ratings because of the scoring rules described below, such as a second personality test with numerous effort/validity detector scales, not all of which had been used for the primary rating, with three or more indicating performance below accepted criteria for lack of effort/validity. That is, performance on three or more well-validated supplementary (not primary) tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

A4) Possible negative response bias.

 i) Performance on two well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

Or

ii) Criteria for Definite or Probable Response Bias are met except for Criterion D (i.e., primary psychiatric, neurological, or developmental, or other etiologies cannot be fully ruled out). In such cases, the alternate etiologies that cannot be ruled out should be specified.

A5) Minimal negative response bias.

i) Performance on one well-validated test designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity. When only one instrument is used, and the evaluee does not reach acceptable criteria, a second one should be used to establish by performance whether the response bias is classifiable as possible or minimal.

Or,

ii) Just-below cut score performance on two well-validated tests so that performance is at most partially consistent with exaggeration of diminished functional psychiatric/psychological capacity.

A6) No evident response bias.

i) Performance on not even one well-validated test designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

ii) There might be just-below cut score performance on one well-validated test but, despite this, performance is not even partially consistent with exaggeration of diminished functional psychiatric/psychological capacity.
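Taken as count-based clauses, levels A1 through A6 form a descending ladder that can be expressed as a minimal sketch. The Python below is illustrative only and not part of the published system: the function name is hypothetical, and it deliberately omits the incontrovertibility requirement of A1, the intermediate A3-4 gray zone, the Criterion D qualification for A4, and the just-below-cut-score provisions of A5 and A6, all of which require clinical judgment over the full data set.

```python
def rate_response_bias(below_chance_failures: int,
                       validity_test_failures: int) -> str:
    """Hypothetical sketch of the count-based clauses of levels A1-A6.

    below_chance_failures: number of forced-choice measures failed at
        the below-chance (p < .05) level.
    validity_test_failures: number of failures on other well-validated
        tests of exaggeration/fabrication (including forced-choice
        failures at the cut-score, not below-chance, level).
    """
    if below_chance_failures >= 2 or validity_test_failures >= 5:
        return "A1: definite malingering"
    if below_chance_failures == 1 or validity_test_failures == 4:
        return "A2: definite negative response bias"
    if validity_test_failures == 3:
        return "A3: probable negative response bias"
    if validity_test_failures == 2:
        return "A4: possible negative response bias"
    if validity_test_failures == 1:
        return "A5: minimal negative response bias"
    return "A6: no evident response bias"
```

For example, an evaluee with no below-chance failures but four failed validity indicators would rate A2 on this count basis, pending the examination of the full data set that the system requires.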

Weighting Rules for Test Batteries

As for the nature of the 60 rules included in the present system for test use, they have been constructed to apply equally to the system developed for PTSD and its alteration for conditions of pain and TBI. The rules were constructed according to 10 pertinent principles and parameters, as specified in the following.

(a) There are two tracks in the system, Regular (for PTSD, pain) and Neuropsychological/Cognitive.

(b) There are multiple test types, including forced-choice, personality, and dedicated. They can be used in the system if scientifically supported for the question at hand.

(c-e) Some test types are more critical than others (e.g., forced-choice); some criteria are more critical than others (e.g., below-chance 1 performance); and some tests are more reliable and valid than others for the purposes at hand (e.g., the MMPI-2-RF).

(f) Any one test can provide one to several validity indicators, depending on the research findings in the area.

(g) The tests should include 10–15 primary measures specified beforehand, with 5–8 positive findings, and at most 3–4 from any one instrument, needed to conclude significant feigning or related response bias, including of malingering.

(h) Tests that are correlated can be used within specified limits and their acknowledgment.

(i) Malingering can be concluded only when there is incontrovertible evidence after examination of the full reliable data set gathered.

(j) In general, test selection and score interpretations must be undertaken scientifically, impartially, and comprehensively, while considering the limits of the evaluees.

In terms of the categories within which the 60 rules fall, they group in the following ways: (a) Pathways/tracks in the system: 1, 13, 17–18; (b) Testing/tests: 2–9, 26–28, 56; (c) Criteria: 10–12, 25, 29; (d) Supplementary/secondary factors: 14–16; (e) Independence/correlation: 19–24; (f) Rating adjustment: 30–32; (g) Test preselection: 33–35; (h) Administration: 36–40; (i) Cognitive/Neuropsychological: 41–45; (j) Less testing: 46–50; (k) Comparison with Larrabee: 51; (l) Evaluators: 52–55; (m) Altering the system: 57–58; (n) Using all the data: 59–60.

These 60 rules are quite explicit, and qualify how to obtain and use all needed validity measures to detect malingering and related response biases in the present system. However, the rules should not be used in a box score fashion to arrive at conclusions about malingering and related response biases. The evaluator needs to examine the full data set gathered in comprehensive, scientifically-informed, impartial ways. The ratings are only a guide toward this end, albeit objective ones to the degree possible.

Rule 1: Two pathways. Note that the present rating system is sufficiently flexible to accommodate (a) a Regular pathway/system in the rating without cognitive/neuropsychological testing and (b) a second pathway of cognitive/neuropsychological testing. The rules provide clear instructions on how to use one pathway, the other, or both. That being said, most of the following rules apply to the Regular system and extra ones for the cognitive/neuropsychological system are given toward the end.

Rule 2: Forced-choice. With respect to forced-choice measures, evaluators are advised to include in their assessments “well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms,” and criteria have been described above for determining the level of malingering/response bias according to the results obtained on forced-choice tests. Essentially, there are two levels to consider: (a) below-chance 1 performance, considered more problematic, and (b) failing to reach critical thresholds based on normative or otherwise validly selected and justified cut scores.

Rule 3: Tests. The inclusion in the criteria of “well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms” includes psychological tests other than forced-choice ones that might provide evidence in formal psychological evaluation that the person has significantly misrepresented current status (e.g., exaggerated or minimized psychological symptoms/distress) in a manner that emphasizes the injury for which compensation is sought.

Rule 4: MMPI family. For example, responses on self-report measures of psychological function suggest impairment in the context of elevations on well-validated validity scales or indices consistent with exaggeration of physical/somatic (e.g., MMPI-2 FBS, MMPI-2-RF FBS-r or SVT-r) or emotional symptoms (e.g., MMPI-2 F, Fb, or Fp, or related MMPI-2-RF scales), or newer effort detection scales (e.g., RBS, HHI); or, on these measures, as well, evidence of vehement denial of psychological problems in a manner consistent with extreme defensiveness regarding psychological symptoms in order to further emphasize psychological complaints (e.g., MMPI-2 L or K at noted cut-offs, or their MMPI-2-RF equivalents).

Rule 5: Other tests needed. The underlying assumption in listing all these instruments is that they provide relevant information for the present ratings; but they do vary in the information that they provide, the levels of the cut-offs used, etc. Therefore, evaluators need to be aware of further tests that could be used in evaluations; these are described below and scoring rules for them are listed.

Rule 6: Improbable symptoms, etc. Well-validated instruments might include structured interview ones that aim to detect improbable symptoms, or extreme, too frequent, or otherwise non-credible ones, such as detected on the SIRS/SIRS-2 and the M-FAST.

Rule 7: PTSD. In addition, tests might include dedicated PTSD ones, such as the DAPS or perhaps the TSI-2, that have embedded evaluee validity scales for under- and over-reporting.

Rule 8: Pain. Tests aimed at other types of disability determinations, such as the BBHI-2 for pain and the RNBI for neurobehavioral symptoms, might be applicable, depending on the nature of the evaluee’s assessment taking place, given the equivalent embedded evaluee validity scales in these instruments, for under- and over-reporting.

Rule 9: Cognitive (embedded). Further, even when an assessment is not neuropsychological, good use could be made of embedded cognitive measures of invalidity/poor effort, such as digit span.

Rule 10: 10–15 Primary. Of all the tests/measures/scales/indicators administered that are not forced-choice tests or embedded neuropsychological/cognitive measures, 10–15 should be considered primary, that is, designated to furnish the critical information that the present system needs for assessing malingering and related response biases.

Rule 11: 5–8 Critical. The criteria of the present system indicate that, aside from below-chance results from forced-choice and neuropsychological/cognitive testing, 5–8 invalidity results, at most, are needed from among the 10–15 primary measures to obtain maximal scores/levels in the system. Note that because there are 10–15 primary indices and doing poorly on 5–8 of them casts significant doubt on the credibility of the evaluee, doing poorly on about 50% (or more) of the primary indices is critical in establishing the evaluee’s performance/effort quality. This rule has face validity.
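Rules 10 and 11 amount to a simple counting criterion. As a minimal illustrative sketch (all names are hypothetical; the system itself is a clinical rubric, not software), the check could be expressed as:

```python
# Hypothetical sketch of Rules 10-11: 10-15 primary measures are selected,
# and 5-8 invalidity ("failed") results among them -- roughly 50% or more --
# are treated as critical for the maximal ratings.
def critical_threshold_met(primary_results, min_failures=5):
    """primary_results: list of booleans, True = an invalidity result
    on one of the 10-15 primary measures (Rule 10)."""
    n_primary = len(primary_results)
    if not 10 <= n_primary <= 15:
        raise ValueError("Rule 10 expects 10-15 primary measures")
    n_failed = sum(primary_results)
    # Rule 11: 5-8 failures needed; also report the failure proportion
    return n_failed >= min_failures, n_failed / n_primary

# Example: 6 invalidity results out of 12 primary measures (50%).
met, proportion = critical_threshold_met([True] * 6 + [False] * 6)
```

The sketch only illustrates the arithmetic; the actual designation of measures as primary, and the weighting of results, remains a clinical and evidentiary judgment under the rules that follow.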

Rule 12: Not at cut-off. Note that below-chance performance on forced-choice testing is not counted among the primary indices, given its use elsewhere in the system. However, performance on these tests that does not meet cut-offs (even if higher than below-chance performance) can count among the 10–15 primary indices of the system, if specified beforehand.

Rule 13: Neuropsychology. Aside from stand-alone forced-choice tests such as the VSVT, structured interviews such as the SIRS/SIRS-2, and tests such as the MMPI family ones, when the assessment is neurocognitive or neuropsychological, many different embedded validity/effort detector tests/measures/scales can be used, given the tens of domains tested and the utility of having more than one for each domain, as needed.

Rule 14: Supplementary tests. However, the data obtained from these instruments should not be used as part of the 10–15 primary ones needed for purposes of obtaining ratings in the present system. That is, essentially, they should be used separately from the Regular system and stand apart from it, for use in the cognitive/neuropsychological one.

Rule 15: Secondary information. That is, these extra data sources might contribute secondary information to the Regular rating system, at best, aside from any data that they furnish for purposes outside the Regular rating system to the cognitive/neuropsychological one.

Rule 16: Pattern analysis. The same applies to neurocognitive/neuropsychological test pattern analysis derived from these tests; normally, it should not be considered for use in the Regular system.

Rule 17: Limited cognitive testing. Note that if limited cognitive testing is given, rather than full-blown cognitive/neuropsychological testing, and there are not many validity indicators/tests/measures/scales available because of this decision, it might be best to consider them for rating on the Regular and not the cognitive/neuropsychological path.

Rule 18: Neuropsychological path. That being said, there are rules given below (see Rules 41 to 44) that apply to rating the present system for the second path when full-blown cognitive/neurocognitive testing is administered.

Rule 19: Test independence. The selection of instruments chosen in an assessment must be carefully organized so that, to the degree possible, they are relatively independent and tap different aspects of psychological function/response bias.

Rule 20: Prioritizing. For example, if two similar results are obtained for two tests that are aimed at measuring the same type of response bias, they should not both be considered as primary in the present rating system and both used to inflate the ratings.

Rule 21: Exception 1. One exception to this rule is when the better measure of the two yields negative results and the second one yields positive results; perhaps valid arguments are possible to justify using the secondary measure as the primary one.

Rule 22: Exception 2. Moreover, tests are never perfectly correlated, and even if they are substantially correlated, they might reflect different constructs to a degree. Therefore, consistent with the multitrait-multimethod approach, two very similar tests having positive results could be used in the ratings with the present system, if this decision can be appropriately justified.

Rule 23: Exception 3. Nevertheless, in general, to repeat, evaluators should avoid such reduplication in obtaining scores from tests administered in their batteries for rating purposes. They can accomplish this by selecting measures that are relatively independent and aimed at different categories of psychological function/response bias. For example, if the MMPI-2-RF is administered, any scores from another personality inventory that might be administered should not be considered as primary in calculating level of response bias in the present system. That being said, if a secondary omnibus instrument, such as a personality inventory, has a useful scale that is considered better for the purposes of the evaluation relative to those in the primary one, that scale in the secondary one can be used in ratings with the present system.

Rule 24: Exception 4. Note that this rule about generally trying to avoid duplication/overlap/correlated tests in establishing ratings with the present system does not apply to the needed use of several stand-alone, forced-choice tests, because they are cardinal in determining the presence of malingering.

Rule 25: Maximum use 1. For instruments that have more than one scale aimed at detecting effort or feigning, such as the MMPI family of tests, or in cognitive evaluation, the rule should be that any instrument of this type should contribute at most 3–4 primary measures among the 10–15 maximum that are needed in the present system to arrive at ratings, even if the instrument includes and has been scored on more than 3–4 of them. This rule is needed to avoid relying on only one such instrument to obtain the 5–8 critical validity results among the 10–15 primary measures required for a maximum rating in the present system.

Rule 26: Omnibus tests. In cases where assessors use two or more omnibus instruments with more than one relevant validity measure, as mentioned, one must be considered primary, with its validity scores used rather than any of the others. For this rule, everything else being equal, the MMPI family of tests is considered primary in such cases for rating with the present system.

Rule 27: Dedicated Tests. For PTSD or pain assessments, when two or more dedicated tests, such as the DAPS for PTSD, are used, normally only one should provide scores as primary measures for purposes of the present ratings.

Rule 28: Nondedicated tests. When validity indicators of feigning are used from tests that do not directly apply to PTSD or pain, or when there is no research showing their applicability to the population at hand, their use must be justified. Moreover, for any one assessment, only one such test and, further, only one score from it should be used in the ratings.

Rule 29: Maximum use 2. If these tests are dedicated ones to detecting feigning, such as the SIRS, as long as they are validated for the population at hand, weighting of 2–3 of their measures could be used as part of the 10–15 primary ones for rating in the present system.
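Rules 25 through 29 jointly constrain how many primary measures any one instrument may contribute. A minimal sketch of that selection logic (hypothetical names and data; the caps are the ones stated in the rules above) might look like:

```python
# Hypothetical sketch of Rules 25-29: cap the number of primary measures
# contributed by any single multi-scale instrument (e.g., at most 3-4 from
# an MMPI family test), while filling at most 10-15 primary slots overall.
from collections import Counter

def select_primary(candidates, per_instrument_cap=4, max_primary=15):
    """candidates: (instrument, scale) pairs, pre-ordered by the
    evaluator's a-priori justification (see Rule 33)."""
    counts = Counter()
    selected = []
    for instrument, scale in candidates:
        if counts[instrument] >= per_instrument_cap:
            continue  # Rule 25: this instrument has reached its cap
        counts[instrument] += 1
        selected.append((instrument, scale))
        if len(selected) == max_primary:
            break
    return selected

# Example: six MMPI-2-RF validity scales offered, but at most four count.
candidates = [("MMPI-2-RF", f"scale_{i}") for i in range(6)]
candidates += [("SIRS-2", "primary_scales"), ("DAPS", "NB")]
primary = select_primary(candidates)
```

The pre-ordering of candidates is doing the clinical work here; the code merely enforces the numeric caps, and any exception under Rules 21–23 would have to be argued and documented, not automated.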

Rule 30: Adjusted rating, lowering it. When evaluees (a) score in the superior range for good effort on a validity indicator, if applicable, and/or (b) pass a majority of the validity tests/measures/scales given in the full battery, and/or (c) score positive for measures related to symptom minimization or underreporting of post-event symptoms at claim, they should be credited a half-level for each case in the reverse direction on the rating scale, up to a maximum of one full level in the reverse direction on the scale.

Rule 31: Adjusted rating, raising it. When evaluees (a) score in the superior range (e.g., 98th percentile) for poor effort on a validity indicator, if applicable, and/or (b) fail a majority of the validity tests/measures/scales given in the full battery, and/or (c) score positive for measures related to symptom minimization or underreporting of pre-event symptoms at claim, they should be credited a half-level for each case in the higher direction on the rating scale, up to a maximum of one full level in the higher direction on the scale.

Rule 32: Patterns. Clinical scales might prove informative for their patterns, such as on personality inventories. For example, in the MMPI family of tests, certain codes are associated with problematic clinical presentations with respect to effort and evaluee validity. Patterns such as this should be considered for half-level adjustment (lower, higher), as part of the prior two rules.
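The half-level adjustment logic of Rules 30 to 32 can be sketched as follows (an illustrative, hypothetical reading in which lowering and raising findings are tallied separately, each capped at one full level):

```python
# Hypothetical sketch of Rules 30-32: each qualifying finding moves the
# rating a half-level, with total movement capped at one full level in
# each direction (lowering and raising tallied separately).
def adjusted_rating(base_level, lowering_findings, raising_findings):
    """lowering_findings / raising_findings: counts of qualifying cases
    under Rules 30 and 31, including pattern findings under Rule 32."""
    decrease = min(0.5 * lowering_findings, 1.0)  # cap: one full level down
    increase = min(0.5 * raising_findings, 1.0)   # cap: one full level up
    return base_level - decrease + increase
```

For example, three raising findings would move a rating up by only one full level, not one and a half, because of the cap. Whether opposing findings should offset one another in this way is not settled by the rules as written and would need to be justified case by case.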

Rule 33: Preselection. In choosing usable measures from batteries that had been administered for rating purposes, decisions about which measures to use should be made beforehand, including the weightings involved, as justified and based on the scientific literature.

Rule 34: Fishing expeditions. Evaluators should avoid fishing expeditions of selecting just-right tests, and once the data are gathered, just-right scores, in order to get just-right conclusions to assessments, thereby lacking impartiality, comprehensiveness, and scientific underpinnings.

Rule 35: No exceptions. Evaluators should not ignore pre-selected measures, ones chosen for use beforehand according to the requirements of the present system, and they should not avoid administering obvious ones to use for rating in the battery, such as the MMPI family ones.

Rule 36: Ecological validity. Evaluators should administer the tests in a way that has ecological validity, e.g., spreading them out and not giving one after the other.

Rule 37: Warnings. Evaluators should consider the issue of advising evaluees about tests, especially forced-choice ones, according to prevailing professional guidelines.

Rule 38: Qualifications. Only mental health professionals who are professionally qualified should select, administer, and interpret psychological tests.

Rule 39: State-of-the-art. It is important to note that the evaluator needs to use the most current, psychometrically and forensically valid instruments available, and not just the ones mentioned in this version of the F-PTSDR-D written in 2014.

Rule 40: No harm. In short, aside from using an appropriate battery of measures for the ratings that can be derived from the present system, each instrument selected should be administered in a way that does not harm the evaluee, while still permitting the required information to be gathered.

Rule 41: Cognitive/Neuropsychological testing. When an evaluation includes cognitive/neuropsychological testing, the procedures described in the present system can be complemented by a second path or track. Typically, in cognitive/neuropsychological testing, there are tens of evaluee validity indicators/tests/measures/scales that might be administered. The present system allows for 10–15 primary measures outside of cognitive/neuropsychological testing and, from among these, 5–8 critical validity indicators/tests/measures/scales with (positive) data are selected. In this regard, an additional 10–15 primary measures, and from among them 5–8 critical validity indicators/tests/measures/scales, can be selected from among the cognitive/neuropsychological tests administered.

Rule 42: Rating cognitive/neuropsychological tests. The rules of the present system should be applied to the cognitive/neuropsychological primary measures and critical results that are derived from application of Rule 41. That is, they will help arrive at evaluations of Definite to Probable Response Bias, in particular.

Rule 43: Cognitive/Neuropsychological and Regular rating. When the Regular path of the present rating system and the supplementary cognitive/neuropsychological one are both positive and lead to high ratings of response bias for an evaluee, this should be indicated.

Rule 44: Positive results for only one of the two paths. When either cognitive/neuropsychological or Regular rating leads to high ratings of response bias for an evaluee, but not both, this should be indicated. Conclusions to evaluations should note the difference in the two ratings and its implications.

Rule 45: Cognitive/Neuropsychological path alone. Of course, evaluators might want to proceed with just cognitive/neuropsychological testing in the second pathway of the system, and not use at all the Regular pathway. In this regard, they would use simply the embedded cognitive/neuropsychological validity indicators/tests/measures/scales with forced-choice measures, and none of the personality, structured interviews, and specific dedicated measures.

Rule 46: Test selection. The system is very flexible and, when testing is involved, the number of tests/measures/scales administered can range from as low as several to as high as multiples of 10.

Rule 47: Minimal testing. Minimally, at least when the Regular path or track is taken, appropriate use of the system requires a good omnibus personality test, such as the MMPI-2-RF or the PAI, a good feigning-detection interview instrument, such as the SIRS/SIRS-2 or M-FAST, a specific, dedicated test, and one or more stand-alone forced-choice measures, such as the VSVT or the TOMM. (Recommendations for 2014.)

Rule 48: Less than minimal testing. If evaluators choose to administer even less testing than this, they risk not having the option of getting sufficient critical tests/measures/scales/indicators that can be used to rate the upper levels of the rating system.

Rule 49: Less testing yet doing enough. That being said, there are both testing and non-testing rules that could be used to supplement below-minimum test use, for example, the one concerning especially high failure performance on tests (98th percentile or more; see above) and the one for the whole file (see below).

Rule 50: Justify less testing. A problematic practice is that evaluators who are trained in psychological testing use less testing in assessments than the recommended minimum even when more testing can be administered. For example, it is conceivable that partially sufficient information can be gathered just in administering an MMPI family test, a structured interview, or one forced-choice test. However, this option is strongly recommended against, unless it can be clearly justified, e.g., due to the level of concomitant physical or brain injuries, language barriers, etc. In such cases, it might be sufficient to use fewer than the recommended minimum of tests.

Rule 51: Larrabee (2012a). As an aside, it is noted that the structure established through the present system’s rules enables evaluators to arrive at high ratings of malingering and definite response bias. For example, the system enables high ratings when there are positive results or performance on three or even two tests/measures/scales/validity indicators, which is consistent with the spirit of the work of Larrabee (2012a). Indeed, given the rules developed, the system might even be more sensitive in obtaining results at these higher levels than Larrabee’s procedures. That being said, consideration of the whole file and alternative explanations, such as a cry for help, might render it less sensitive. This illustrates the middle-of-the-road, balanced approach that characterizes the present system, which was constructed with a sound rationale, with logical, scientific, and practical perspectives, and with consideration of other systems, published recommendations for their change, and other state-of-the-art literature. Evaluators should function from the same middle-of-the-road, state-of-the-art perspective in applying the system to their evaluees. They might want to check the conclusions derived from the present system against those of Larrabee (e.g., likelihood ratios, positive predictive power, probability of multiple positive findings), or any other actuarial, algorithmic system for malingering detection, assuming the literature supports its use, in a compare-contrast format, to help justify the use of the present system and the conclusions it allows for the assessment at hand.

Rule 52: Supplementary evaluators. Evaluators not trained in testing can acquire the services of those trained and competent to administer the types of tests recommended for use in the present system.

Rule 53: Seconding team work. Note that the evaluator who acquires such testing services is responsible for applying the present system to the case at hand, but only the testing evaluator can be responsible for interpreting the test data portion of the evaluation.

Rule 54: Leading team work. Or, evaluators might be trained and competent in testing, but prefer to have a second evaluator help seek inconsistencies/discrepancies in the file. The testing evaluator would be responsible for the inconsistencies/discrepancies noted and for combining all the information gathered for present rating purposes.

Rule 55: Interdisciplinary assessments. Evaluators using the present system might be functioning within the context of interdisciplinary teams of assessors. In contributing to and/or signing any executive summary, they are responsible as much as the others for how the ratings are used and for any overall alterations in equivalent ratings by the team.

Rule 56: Specific dedicated tests. [As of 2014.] If tests dedicated to specific psychological injuries are administered, such as in the Regular track, the DAPS and perhaps the TSI-2 make sense for PTSD, and the BBHI-2 or BHI-2 would be good for pain. In this regard, there are multiple cognitive or related measures that could be used. Some other relevant instruments include the RNBI, the VIP, the WMT, and the MENT.

Rule 57: Altering rules on testing and test battery. As of 2014, the test battery rules and the testing procedures and tests indicated in the present system are the ones that can be scientifically and practically justified. However, as concepts and research accumulate, reliable and valid recommendations to change the present system might appear in the scientific literature. Or, assessors might alter a rule or rules, or the use of the present system and its proposed testing battery, in a way that is scientifically and practically justified. For example, the numbers of primary and critical tests and measures are presently set at 10–15 and 5–8, respectively, but slight variations in these amounts might be acceptable at the scientific and practical levels.

Rule 58: Special populations. The usual cautions about using the correct norms for scoring and being sensitive to gender, minorities, age, and related differences apply to testing for the present system. Note that for children, the BASC-2 has appropriate validity checks.

Rule 59: Consider whole file. The rating of any level of negative response bias that is attributed to an evaluee according to the present system can be adjusted higher or lower by one-half to one full rating level on the scale depending on any additional reliable information in the assessment that is not considered elsewhere. These factors might include evaluator ones, evaluee ones, or systemic ones. The rationale for this decision must be documented. For example, litigation distress might be evident, but that could reflect either (a) non-merited factors, such as apprehension at continued evaluations that have reliably found difficulties with presentation/performance in the evaluee, or (b) genuine externally generated stress related to the case, e.g., by third parties.

Rule 60: Combining test data with inconsistencies/discrepancies. Criterion C elaborates rules for combining test data with inconsistencies/discrepancies, after presentation of 30 possible inconsistencies/discrepancies.

Criterion C : Evidence from Inconsistencies/Discrepancies, With or Without Test Data Considered.

Inconsistency/discrepancy criteria can be used separately from those of the B set, or in conjunction with them, as presented in the second part of the C criteria. Inconsistencies/discrepancies can be found at two levels: either marked/substantial or moderate/nontrivial evidence of inconsistency/discrepancy is possible. Moreover, marked/substantial inconsistencies/discrepancies can be divided into those that are less or most extremely compelling, such as in cases of frank admission, videographic evidence of working after work activity has been denied, and frank evidence elsewhere in the file, e.g., related to collateral information. Trivial evidence in these regards should be ignored. For the two levels of inconsistencies/discrepancies possible, with the more blatant ones receiving the highest rating, there is a subjective element in classifying them. Therefore, evaluators should be conservative when characterizing them as marked or substantial rather than moderate or nontrivial, and should justify all such classifications with clear material from the file and careful argument. Note that in section B3-4ii below, 15 examples are provided of possible inconsistencies/discrepancies, aside from the few examples provided in the sections that follow.

 a) Inconsistencies/Discrepancies in Conjunction with Testing

 a1) Inconsistency/Discrepancy between cognitive/neurocognitive test data and known patterns of brain functioning (e.g., as related to PTSD). This refers to a pattern of test performance that is either markedly/substantially or moderately/nontrivially inconsistent/discrepant with currently accepted models of normal and abnormal central nervous system (CNS) function. The inconsistency/discrepancy must be consistent with an attempt to exaggerate or fabricate psychological dysfunction in testing (e.g., the patient reports that she/he does not sleep at all). (Inconsistency #1)

 a2) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and known patterns of physiological reactivity. (Inconsistency #2)

 a2i) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and known patterns of physiological reactivity in the ambulance, at hospital, or shortly thereafter (e.g., no heart-rate increase with significant change in subjective traumatic reaction report). (Inconsistency #2, first example)

 a2ii) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and known patterns of physiological reactivity in psychotherapy (e.g., no increase in neurovegetative signs during exposure therapy or systematic desensitization).

 a2iii) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and known patterns of physiological reactivity to psychotropic medication (e.g., no decrease in neurovegetative signs to symptom-relevant medication).

 a3) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data and self-report. (Inconsistency #3)

a3i) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data on psychological status prior to event at claim and self-reported background history in interview. (Inconsistency #3, first example)

 a3ii) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and self-reported behavior/symptoms/complaints/limitations/functions in interview.

a4) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and verbal and/or nonverbal observed behavior/symptoms/complaints/limitations/functions. (Inconsistency #4)

a4i) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and observed behavior/symptoms/complaints/limitations/functions while unaware of being observed. (Inconsistency #4, first example)

a4ii) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and observed behavior/symptoms/complaints/limitations/functions while aware of being observed (e.g., evaluee endorses items indicating extreme fear in driving yet is observed to/indicates that driving to and from the session was okay).

a5) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data and information reported by reliable informants/collaterals. (Inconsistency #5)

a5i) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms on psychological status prior to event at claim and information reported by reliable informants/collaterals, such as primary care physicians and spouses, about background history. (Inconsistency #5, first example)

a5ii) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and information reported by reliable informants/collaterals, such as primary care physicians and spouses, about behavior/symptoms/complaints/limitations/functions (e.g., evaluee endorses items indicating extreme fear in driving yet is reported by spouse to drive without a problem).

a6) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data and information reported in reliable documents. (Inconsistency #6)

a6i) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data on psychological status prior to event at claim and information reported in reliable documents, such as by primary care physicians and other mental health professionals, about background history. (Inconsistency #6, first example)

a6ii) Inconsistency/Discrepancy, either marked/substantial or moderate/nontrivial, between test data of PTSD-related symptoms after event at claim and information reported in reliable documents, such as by primary care physicians and other mental health professionals, about behavior/symptoms/complaints/limitations/functions (e.g., there is no documented history of psychological trauma in the ambulance or ER reports, yet the evaluee consistently endorses extreme traumatic reactions in the ambulance, at the hospital, or shortly thereafter).

b) Inconsistencies/Discrepancies in Conjunction with Self-Report (other than with testing)

Evidence that the evaluee’s self-reported behaviors, symptoms, complaints, or limitations and functions related to PTSD and related disorder/dysfunction are clearly consistent with exaggeration or feigning of physical, cognitive, or emotional/psychological components of the PTSD-related disability in that there is either a marked/substantial or moderate/nontrivial inconsistency/discrepancy between such self-report and any of the following:

b1) Known patterns of brain function. (Inconsistency #7)

b2) Known patterns of physiological function. (Inconsistency #8)

 [Self-reported PTSD-related symptoms are clearly discrepant with known patterns of physiological or neurological functioning (e.g., PTSD complaints by themselves should not be able to elicit marked/substantial, or moderate/nontrivial complaints of remote memory loss; PTSD complaints should not be able to elicit repetitive nightmares that exactly repeat the traumatic event and no other nightmares).]

b3) Observed behavior/symptoms/complaints/limitations/functions. (Inconsistency #9)

b3i) Observed behavior/symptoms/complaints/limitations/functions while unaware of being observed. (Inconsistency #9, first example)

b3ii) Observed behavior/symptoms/complaints/limitations/functions while aware of being observed.

 [Self-reported PTSD-related symptoms are clearly inconsistent/discrepant with reliable observations of behavior. Reported symptoms in a given behavioral domain (i.e., physical, cognitive, emotional; PTSD-related) are markedly/substantially or moderately/nontrivially inconsistent/discrepant with behavioral observations (e.g., patient complains of being unable to sleep well but appears quite alert). Such observation may occur in the context of formal evaluation.]

b4) Information reported by reliable informants/collaterals, such as primary care physicians and spouses. (Inconsistency #10)

b4i) Information reported by reliable informants/collaterals, such as primary care physicians and spouses, about background history. (Inconsistency #10, first example)

b4ii) Information reported by reliable informants/collaterals, such as primary care physicians and spouses, about behavior/symptoms/complaints/limitations/functions.

 [Self-reported PTSD-related symptoms are clearly discrepant with reliable observations of behavior. Reported symptoms in a given behavioral domain (i.e., physical, cognitive, emotional; PTSD-related) are markedly/substantially or moderately/nontrivially inconsistent/discrepant with behavioral observations (e.g., patient complains of being unable to sleep well but appears quite alert). Such observation may derive from the report of reliable collateral informants (e.g., evaluee’s friends or relatives).]

b5) Information reported in reliable documents, such as by primary care physicians and other mental health professionals. (Inconsistency #11)

b5i) Information reported in reliable documents, such as by primary care physicians and other mental health professionals, about background history. (Inconsistency #11, first example)

b5ii) Information reported in reliable documents, such as by primary care physicians and other mental health professionals, about behavior/symptoms/complaints/limitations/functions.

 [Self-reported history is clearly inconsistent/discrepant with documented history, the evidence for which is reliable. For example, minimization or denial of marked/substantial or moderate/nontrivial concurrent or prior illness/injury (broadly defined) in a manner that emphasizes the injury for which compensation is sought. Also included would be marked/substantial or moderate/nontrivial overstatement of academic, vocational, or other achievement in a way that exaggerates the magnitude of loss due to the injury in question.]

c) Inconsistencies/Discrepancies in Conjunction with Observations (other than with testing and with self-report)

Evidence that the evaluee’s verbal and/or nonverbal observed behaviors, symptoms, complaints, or limitations and functions related to PTSD and related disorder/dysfunction are clearly consistent with exaggeration or feigning of physical, cognitive, or emotional/psychological components of the PTSD-related disability in that there is either a marked/substantial or moderate/nontrivial inconsistency/discrepancy between such observations and any of the following:

c1) Known patterns of brain function. (Inconsistency #12)

c2) Known patterns of physiological function. (Inconsistency #13)

c3) Information reported by reliable informants/collaterals, such as primary care physicians and spouses. (Inconsistency #14)

c3i) Information reported by reliable informants/collaterals, such as primary care physicians and spouses, about background history. (Inconsistency #14, first example)

c3ii) Information reported by reliable informants/collaterals, such as primary care physicians and spouses, about behavior/symptoms/complaints/limitations/functions.

c4) Information reported in reliable documents, such as by primary care physicians and other mental health professionals. (Inconsistency #15)

c4i) Information reported in reliable documents, such as by primary care physicians and other mental health professionals, about background history. (Inconsistency #15, first example)

c4ii) Information reported in reliable documents, such as by primary care physicians and other mental health professionals, about behavior/symptoms/complaints/limitations/functions.

d) Inconsistencies/Discrepancies in Conjunction with Collateral Information (other than with testing, self-report, and observations)

Evidence that the evaluee’s collaterally reported behaviors, symptoms, complaints, or limitations and functions related to PTSD and related disorder/dysfunction are clearly consistent with exaggeration or feigning of physical, cognitive, or emotional/psychological components of the PTSD-related disability in that there is either a marked/substantial or moderate/nontrivial inconsistency/discrepancy between such reports and any of the following:

d1) Known patterns of brain function. (Inconsistency #16)

d2) Known patterns of physiological function. (Inconsistency #17)

d3) Information reported in reliable documents, such as by primary care physicians and other mental health professionals. (Inconsistency #18)

d3i) Information reported in reliable documents, such as by primary care physicians and other mental health professionals, about background history. (Inconsistency #18, first example)

d3ii) Information reported in reliable documents, such as by primary care physicians and other mental health professionals, about behavior/symptoms/complaints/limitations/functions.

 e) Inconsistencies/Discrepancies in Conjunction with Documentation (other than with testing, self-report, observations, and collateral information)

Evidence that the evaluee’s documented behaviors, symptoms, complaints, or limitations and functions related to PTSD and related disorder/dysfunction are clearly consistent with exaggeration or feigning of physical, cognitive, or emotional/psychological components of the PTSD-related disability in that there is either a marked/substantial or moderate/nontrivial inconsistency/discrepancy between such documentation and any of the following:

e1) Known patterns of brain function. (Inconsistency #19)

e2) Known patterns of physiological function. (Inconsistency #20)

f) Inconsistencies/Discrepancies Within Major Data Sources (not between them, which are scored above)

f1) Known patterns of brain function. (Inconsistency #21)

f2) Known patterns of physiological function. (Inconsistency #22)

f3) Self-report. (Inconsistency #23)

f3i) Self-report of background history. (Inconsistency #23, first example)

f3ii) Self-report of behavior/symptoms/complaints/limitations/functions.

f4) Observed behavior/symptoms/complaints/limitations/functions. (Inconsistency #24)

f4i) Observed behavior/symptoms/complaints/limitations/functions while unaware of being observed. (Inconsistency #24, first example)

[Compelling self-presentation inconsistency/discrepancy. Compelling self-presentation inconsistencies/discrepancies occur when the difference in the way an evaluee presents verbally and/or nonverbally when being evaluated compared with when not aware of being evaluated is marked/substantial or moderate/nontrivial and such that it is not reasonable to believe the evaluee is not purposely controlling the difference and other explanations do not readily apply.]

f4ii) Observed behavior/symptoms/complaints/limitations/functions while aware of being observed.

f5) Information reported by reliable informants/collaterals. (Inconsistency #25)

f5i) Information reported by reliable informants/collaterals, such as primary care physicians and spouses, about background history. (Inconsistency #25, first example)

f5ii) Information reported by reliable informants/collaterals, such as primary care physicians and spouses, about behavior/symptoms/complaints/limitations/functions.

f6) Information reported in reliable documents. (Inconsistency #26)

f6i) Information reported in reliable documents, such as by primary care physicians and other mental health professionals, about background history. (Inconsistency #26, first example)

f6ii) Information reported in reliable documents, such as by primary care physicians and other mental health professionals, about behavior/symptoms/complaints/limitations/functions.

 g) Other, Miscellaneous Inconsistencies/Discrepancies (e.g., there is evidence of no material causation for alleged psychological/psychiatric effects of event at claim)

[Self-reported symptoms are clearly discrepant with claimed causal factors, such as an index event. There are marked/substantial or moderate/nontrivial multiple pre-existing and concurrent, but incidental, extraneous factors, reliably ascertained, that can clearly account for the evaluee’s presentation pertaining to the diagnosis and disorder/disability at issue much more than the event at claim does, or even fully, yet the evaluee keeps insisting that the event at claim explains all of or a good portion of the sequelae in his/her presentation. Arguments of this nature must be made clearly by the evaluator, given the possible confounding counter-arguments.]

g1) No causality attributable to the event at claim, despite the evaluee’s insistence. (Inconsistency #27)

g2) Only minimal causality attributable, and out of the material range, despite the evaluee’s insistence. (Inconsistency #28)

g3) Material-level causality attributable to the event at claim, but not to the degree insisted by the evaluee. (Inconsistency #29)

g4) Other. (Inconsistency #30)

B. Different Degrees of Certainty of Response Bias, According to Inconsistencies/Discrepancies

B1) Definite Malingering.

i) One extremely compelling inconsistency/discrepancy that takes the form of (a) outright admission, (b) incontrovertible evidence on videographic surveillance, such as working after denial that it is taking place, or (c) reliable collateral information in these regards. Other compelling inconsistencies of a less red-handed, extreme nature require three pieces of evidence for consideration at this level.

Or

ii) The evidence is incontrovertible (blatant, indisputable) when all the data gathered are considered: three or more marked/substantial inconsistencies/discrepancies from items a–g above,

Or,

iii)

a) One marked/substantial inconsistency/discrepancy from items a–g, and

b) Performance on four (not five) well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

Or,

iv)

a) Two marked/substantial inconsistencies/discrepancies from items a–g, and

b) Performance on three (not five) well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

B2) Definite negative response bias.

i) Two marked/substantial inconsistencies/discrepancies from items a–g,

Or,

ii)

a) One marked/substantial inconsistency/discrepancy from items a–g, and

b) Performance on three (not four) well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

B3) Probable negative response bias.

i) One marked/substantial inconsistency/discrepancy from items a–g,

Or,

ii)

a) Five moderate/nontrivial inconsistencies/discrepancies from items a–g, and

b) Performance on two (not three) well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

B3-4) Intermediate (Probable to possible, gray zone) negative response bias.

The data meet the requirements for classification of possible negative response bias but not those for probable negative response bias. Nevertheless, supplementary data are available about the evaluee that raise the rating. For inconsistencies/discrepancies that have not been rated elsewhere in the system as marked/substantial or moderate/nontrivial, this could refer to:

i) Inconsistencies/discrepancies are reliably found in other assessments, such as by different specialists in a multidisciplinary assessment of the evaluee who address pertinent mental health issues.

Or,

ii) There is clear evidence of confounding factors that might cast doubt on the validity of either the evaluee’s presentation or performance, although this would have to be clearly documented. In this regard, the evaluee would have to show five or more of the following 15 factors, as supported by clear evidence (five are needed because these factors are often hard to determine, so that even with some evidence in their support, five is considered the minimum needed to use this option in the present scoring system).

That being said, when only one to four of these criteria are evident, so that they cannot be used as part of the data for rating Probable Negative Response Bias, as per the above, the evaluator should use them as part of the ratings for Possible Negative Response Bias, as per below, including them with the other inconsistencies/discrepancies in items a–g therein. Also, if the rating of Probable Negative Response Bias is almost attained but one or more moderate/nontrivial inconsistencies/discrepancies from items a–g are lacking, ones from this list for Intermediate Negative Response Bias can be used.

a) Personality disorder of a problematic nature, e.g., (i) antisocial personality disorder according to the DSM, or (ii) features of/subsyndromal expressions of one, or (iii) confrontational/uncooperative, resisting/refusing behavior, without clear signs that the behavior is related to the claimed injury or other conditions, such as schizophrenia, etc.

b) Blaming everyone and anything, overly suspicious, etc., without clear signs that the behavior is related to the claimed injury or other conditions, such as schizophrenia, etc.

c) Not trying to mitigate loss; not being active in recommended therapy; not being a compliant patient adhering to treatment regimens, etc.

d) Unduly adopting the sick role, accepting overly solicitous behavior, etc.

e) Somatization effects not related to the influences of the claimed psychiatric/psychological injury.

f) Failure to treat substance abuse impeding progress, whether pre-event or post-event related, including abuse of prescribed event-related medications.

g) Failure to take recommended medications, such as anti-depressants or needed pain medications, if applicable, for invalid medical reasons.

h) Refusing a work-hardening trial, refusing modified duties, refusing training for new work within residual capacities and transferable skills, etc., as long as these options are psychiatrically/psychologically (and medically) indicated.

i) Catastrophizing/crying out for help at a level clearly beyond the nature of the injuries, even after education about it (if not used elsewhere).

j) Any other confound that is documentable, such as attorney or similar coaching.

As well, five factors derived from the pre-event background are considered as possible confounding factors that might cast doubt on the validity of the evaluee, although resilience to these stressors should be considered in balance:

k) Psychiatric/self harm/substance abuse history.

l) Criminal/legal/problematic military history; history of deceit/fraud.

m) History of irregularity in or dissatisfaction with work or other role at issue.

n) History of irregularity in or dissatisfaction with family, partners, friends, or social life.

o) History of financial stresses/bankruptcies/unsupported claims.
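The five-of-fifteen counting rule above amounts to a simple tally against a threshold. The sketch below illustrates it; the flag names are hypothetical shorthand for factors a–o, and the truth values are invented purely for illustration, not drawn from any case.

```python
# Illustrative tally for the B3-4 rule: at least five of the fifteen
# factors (a-o) must be supported by clear evidence before this option
# contributes to the Probable-level rating. Flag names are hypothetical
# shorthand for the factors listed above; values here are invented.
evidenced = {
    "a_personality_disorder": False,
    "b_externalized_blame": True,
    "c_failure_to_mitigate_loss": True,
    "d_undue_sick_role": False,
    "e_unrelated_somatization": False,
    "f_untreated_substance_abuse": True,
    "g_medication_noncompliance": False,
    "h_refused_return_to_work": True,
    "i_catastrophizing": False,
    "j_coaching_or_other_confound": False,
    "k_psychiatric_history": True,
    "l_criminal_or_fraud_history": False,
    "m_work_role_dissatisfaction": False,
    "n_relational_dissatisfaction": False,
    "o_financial_stresses": False,
}
count = sum(evidenced.values())       # five factors evidenced here
meets_intermediate_rule = count >= 5  # option may be used at this level
# With only one to four factors evidenced, the factors instead feed the
# Possible Negative Response Bias rating, as the text directs.
```

With only one to four factors evidenced, `meets_intermediate_rule` is false and, per the text, the evidenced factors are folded into the Possible-level rating instead.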

B4) Possible negative response bias.

i) Four moderate/nontrivial inconsistencies/discrepancies from items a–g,

Or,

ii)

a) Three moderate/nontrivial inconsistencies/discrepancies from items a–g, and

b) Performance on one (not two) well-validated tests designed to measure exaggeration or fabrication of psychiatric/psychological (e.g., cognitive or perceptual) symptoms, including forced-choice measures, is consistent with exaggeration of diminished functional psychiatric/psychological capacity.

B5) Minimal negative response bias.

i) Two moderate/nontrivial inconsistencies/discrepancies from items a–g

Or,

ii)

a) One moderate/nontrivial inconsistency/discrepancy from items a–g, and

b) Just-below cut score performance on one (not two or more) well-validated tests so that performance is at most partially consistent with exaggeration of diminished functional psychiatric/psychological capacity.

B6) No evident response bias.

Not even one moderate/nontrivial inconsistency/discrepancy from items a–g.
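The graded thresholds in B1 through B6 form a decision cascade over counts of inconsistencies/discrepancies and failed validity tests. The sketch below encodes those thresholds for illustration only: the `Evidence` fields and function names are hypothetical, and for brevity it omits the intermediate B3-4 level, the borderline-test route to B5, and the three-pieces-of-evidence variant of B1. No such code substitutes for the full clinical judgment the system requires.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """Hypothetical tally of findings; field names are illustrative only."""
    marked: int               # marked/substantial inconsistencies (items a-g)
    moderate: int             # moderate/nontrivial inconsistencies (items a-g)
    failed_tests: int         # well-validated validity tests failed
    compelling: bool = False  # admission, surveillance, or equivalent (B1.i)

def classify(e: Evidence) -> str:
    """Decision cascade over the B1-B6 thresholds, checked top-down."""
    if e.compelling or e.marked >= 3:                     # B1.i / B1.ii
        return "Definite malingering"
    if (e.marked >= 1 and e.failed_tests >= 4) or \
       (e.marked >= 2 and e.failed_tests >= 3):           # B1.iii / B1.iv
        return "Definite malingering"
    if e.marked >= 2 or (e.marked >= 1 and e.failed_tests >= 3):      # B2
        return "Definite negative response bias"
    if e.marked >= 1 or (e.moderate >= 5 and e.failed_tests >= 2):    # B3
        return "Probable negative response bias"
    if e.moderate >= 4 or (e.moderate >= 3 and e.failed_tests >= 1):  # B4
        return "Possible negative response bias"
    if e.moderate >= 2:                                   # B5.i
        return "Minimal negative response bias"
    return "No evident response bias"                     # B6
```

For example, `classify(Evidence(marked=1, moderate=0, failed_tests=4))` reaches the Definite Malingering level via option B1.iii, whereas the same single marked inconsistency with no failed tests yields only Probable Negative Response Bias.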

Criterion D: Behaviors meeting necessary criteria from groups B and C are not fully accounted for by psychiatric, neurologic, developmental, or other factors.

The behaviors meeting the above criteria represent a likely (inferred but evident) volitional act aimed at achieving some secondary gain and cannot be fully accounted for by other disorders that result in significantly diminished capacity to appreciate laws or mores against malingering or inability to conform behavior to such standards. The simple presence of objectively documented pathology, illness, or injury (including psychiatric illness) expressly does not preclude a diagnosis of malingering. However, the “diagnostic” system presented should be used conservatively and prudently, especially because of the harm to evaluees that can be caused by false attributions of malingering and related presentation/performance response biases. For example, the options of probable, intermediate, and possible levels of response bias expressly do not preclude validity of the evaluee’s presentation, at least in part. Moreover, in arriving at conclusions about definite response bias, the evaluator is reminded (a) to evaluate the full data gathered for the evaluee and not just scores on one or more psychometric measures or computer interpretations of test results, and (b) to ensure that the data are gathered comprehensively, scientifically, and impartially. For example, an evaluee failing according to cut-off on three validity indicators might pass many more in the full battery administered, and allowances could be made for these credible results, depending on other factors, such as their pattern. Importantly, attributions of overt malingering must especially take these factors and other relevant ones into account before concluding that malingering is present with incontrovertible evidence, or that other high ratings in the system are present at the level of “more likely than not” in the evaluee.
That being said, when warranted, the astute evaluator can use language that clearly denies the credibility of the evaluee, even to significant degrees, despite lacking clear evidence about or knowledge of underlying motivation, and therefore without directly imputing motivation.

Note. The present rating system for evaluating non-credible, feigning/malingering, and other response biases and presentations/performances in the psychiatric/psychological injury context is meant to be applicable to adult evaluees in particular. It can be used with adolescents, but with caution, e.g., in terms of using different tests/measures/scales of validity/effort. An important general reminder is that any assessment and interpretation of instrument results needs to be sensitive to relevant age, gender, cultural/minority, and related differences.

1. A reviewer recommended that I footnote all unqualified mentions of below-chance performance on forced-choice tests as statistically significant.

Adopted from Young (2014a), Table 6.1.

Adapted from Bianchini et al. (2005), which in turn was adapted from Slick et al. (1999).

Note. All relevant changes from the pain-related “diagnostic” system (MPRD) of Bianchini et al. (2005) are italicized for the present application to PTSD and related presentations.

Note for practice use of the table. The F-PTSDR-D rating system allows for evaluation of non-credible, feigned, or malingered evaluee presentation/performance by either (a) psychometric testing, (b) finding major inconsistencies/discrepancies in an evaluee’s data, or both. As such, the present F-PTSDR-D system is a malingering-related “diagnostic” system, or classificatory model, that is usable by psychiatrists, psychologists, and other mental health professionals.

Also, for evaluees presenting with simultaneous neuropsychological/cognitive, pain-related, and/or polytrauma disorder/disability/dysfunction in conjunction with PTSD claims, aside from the present PTSD-related system, the assessor should consult the revised systems that have been developed to replace the MND (Malingered Neurocognitive Dysfunction) and MPRD (Malingered Pain-Related Disability) systems of Slick et al. (1999) and Bianchini et al. (2005), respectively. See the tables on the F-NCR-D and F-PR-D systems, respectively, and the recommendations for their simultaneous use.

Abbreviations. PTSD posttraumatic stress disorder; TBI traumatic brain injury; TOMM Test of Memory Malingering (Tombaugh, 1996); VSVT Victoria Symptom Validity Test (Slick, Hopp, Strauss, & Thompson, 1997/2005); WMT Word Memory Test (Green, 2005); VIP Validity Indicator Profile (Frederick, 1997); MMPI Minnesota Multiphasic Personality Inventory (Hathaway & McKinley, 1943); MMPI-2 Minnesota Multiphasic Personality Inventory, Second Edition (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989; Butcher et al., 2001); FBS (SVS) Fake Bad Scale (Symptom Validity Scale) (Ben-Porath & Tellegen, 2008/2011; Lees-Haley, English, & Glenn, 1991); MMPI-2-RF Minnesota Multiphasic Personality Inventory, Second Edition, Restructured Form (Ben-Porath & Tellegen, 2008/2011); r revised (Ben-Porath & Tellegen, 2008/2011); Fb Infrequent Responses, back (Ben-Porath & Tellegen, 2008/2011); Fp Infrequent Psychopathology Responses (Ben-Porath & Tellegen, 2008/2011); RBS Response Bias Scale (Gervais, Ben-Porath, Wygant, & Green, 2007); HHI Henry Heilbronner Index (Henry, Heilbronner, Mittenberg, & Enders, 2006); L Uncommon Virtues, Lie scale (Bianchini et al., 2005); K Adjustment Validity, Correction scale (Bianchini et al., 2005); SIRS Structured Interview of Reported Symptoms (Rogers, Bagby, & Dickens, 1992); SIRS-2 Structured Interview of Reported Symptoms, Second Edition (Rogers, Sewell, & Gillard, 2010); M-FAST Miller Forensic Assessment of Symptoms Test (Miller, 2001); DAPS Detailed Assessment of Posttraumatic Stress (Briere, 2001); TSI-2 Trauma Symptom Inventory, Second Edition (Briere, 2011); BBHI-2 Brief Battery for Health Improvement, Second Edition (Disorbio & Bruns, 2002); RNBI Ruff Neurobehavioral Inventory (Ruff & Hibbard, 2003); PAI Personality Assessment Inventory (Morey, 1991, 2007); BHI-2 Battery for Health Improvement, Second Edition (Bruns & Disorbio, 2003); MENT Morel Emotional Numbing Test (Morel, 1995, 1998); BASC-2 Behavior Assessment System for Children, Second Edition (Reynolds & Kamphaus, 2004).


Young, G. Detection System for Malingered PTSD and Related Response Biases. Psychol. Inj. and Law 8, 169–183 (2015). https://doi.org/10.1007/s12207-015-9226-2
