Malingering is defined by the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013) as “the intentional production of false or grossly exaggerated physical or psychological symptoms, motivated by external incentives, such as avoiding military duty, avoiding work, obtaining financial compensation, evading criminal prosecution, or obtaining drugs” (p. 726). As malingering can compromise the efficiency of the whole mental health system and cause massive economic losses (Chafetz & Underhill, 2013; Rogers & Bender, 2018), a key component of many psychological evaluations, especially in legal settings, consists of assessing the credibility of presented problems (Giromini et al., 2022). In this article, we use the expression “Symptom Validity Assessment” (SVA) to refer broadly to the whole set of procedures, tools, and techniques (including interviews, tests, and observational materials) used by professionals to assess the credibility of presented problems.

The two guest editors of the current special issue of Psychological Injury and Law, Dr. Thomas Merten and Dr. Brechje Dandachi-FitzGerald, together with their colleagues and collaborators, undertook a major effort several years ago to provide a comprehensive overview of the current state of European practices and beliefs regarding SVA. In particular, Merten et al. (2013) examined two European survey studies on SVA, namely, a study by McCarter et al. (2009) in the UK, and a study by Dandachi-FitzGerald et al. (2013) conducted in Denmark, Finland, Norway, the Netherlands, Italy, and Germany. According to the latter study, the majority of the Italian neuropsychologists who completed the survey (n = 49) included SVTs or PVTs in less than 20% of cases in both clinical (87.7%) and forensic (53.2%) assessments. In addition, only 21.9% of Italian respondents used SVTs and/or PVTs in every or almost every case in forensic assessments. More broadly, the results of both surveys highlighted that there is little consensus among practitioners from different European countries on what procedures should be followed when conducting SVAs, thus warranting additional systematic research on SVAs in Europe. Importantly for our purposes, Merten et al. (2013) also highlighted that there was “no or almost no evidence for substantial research activities in some major European nations like France or Italy” (p. 131). As such, additional research on SVA from these two specific countries was deemed especially necessary.

In an attempt to help fill these gaps in the literature, the current article summarizes responses provided by a sample of Italian psychologists to a subset of items on practices and beliefs related to malingering and SVA, drawn from a larger survey originally designed to investigate the general psychological assessment practices and beliefs of Italian practitioners. The items on malingering and SVA selected for the current study were aimed at addressing the following two research questions: (a) What procedures do Italian psychologists use to assess the credibility of problems presented by their evaluees? (b) How often do Italian psychologists believe they encounter malingered presentations in their assessments?

Regarding our decision to focus on Italian psychologists, it should be noted that in recent years, Italian authors have published quite a number of works on SVA, malingering, and related topics (e.g., Di Girolamo et al., 2021; Giromini & Viglione, 2022; Giromini et al., 2018, 2019, 2020, 2021; Mazza et al., 2019; Monaro et al., 2018; Orrù et al., 2021; Pace et al., 2019; Pignolo et al., 2021; Roma et al., 2020). Considering that research and practice are highly intertwined in this field, when we initiated this project, we determined that an update on the state of the art regarding malingering and SVA practices and beliefs in Italy was particularly needed. Additionally, with regard to our choice to focus on psychologists only–thereby leaving out other mental health professionals such as psychiatrists and counselors–it should be noted that although in Italy both psychologists and psychiatrists are allowed to conduct psychological assessments, the professional who administers and interprets psychological tests in both clinical and forensic contexts is typically a psychologist. Therefore, we concluded that a survey of Italian practices and beliefs regarding malingering and SVA should probably focus primarily on responses provided by psychologists.

Method

Procedure

First, the authors of the current article generated a pool of items generally focused on psychological assessment practices and beliefs, based on previously published surveys (Dandachi-FitzGerald et al., 2013; McCarter et al., 2009; Wright et al., 2017) as well as on their own clinical, forensic, and research experience. This pool of items also included a few questions concerning SVA and malingering. Next, a research project proposal was prepared and submitted to the Institutional Review Board (IRB) of the University of Turin, Italy. Once the project received approval (protocol number: 203037), the president of the National Board of Italian Psychologists (“Ordine degli Psicologi”) advertised the study in a formal newsletter of the board, inviting all Italian psychologists (117,762 as of 2020) to participate in the survey. Responses were collected online, using LimeSurvey.

Respondents

For the main research project, broadly focused on psychological assessment practices and beliefs, the only inclusion criterion was that the respondent had to be a licensed psychologist duly registered in the national professional register. Of the 527 psychologists who started the survey, 451 responded that they were currently performing or had previously performed psychological assessments in their practice. Of these 451, 116 dropped out soon after responding to a few general questions regarding their standard assessment routines, so that the dataset with valid responses on the assessment practices and beliefs of Italian psychological assessors was reduced to 335 cases.

For the current article, another 178 respondents were excluded because they had not performed any malingering-related evaluations in their career, so that the sample size was further reduced to 157. More specifically, these 178 respondents answered “No” to the question, “Have you ever performed psychological evaluations in which the evaluee could potentially have had an interest in intentionally producing false or grossly exaggerated physical or psychological symptoms, given the presence of external incentives?” It is worth mentioning that the format of this question was explicitly (and evidently) derived from the DSM-5 definition of the term “malingering” (American Psychiatric Association, 2013, p. 726), and that these 178 cases were excluded because most of the malingering- and SVA-related items of our survey were presented only to respondents who answered “Yes” to that question.

Finally, of the 157 psychologists who reported having conducted at least one malingering-related evaluation in their career, 19 left the survey before reaching the SVA-related items, and another 28 abandoned it before reaching the end of the survey. Accordingly, the total number of respondents who completed all of the items included in this study was 110.

Selected Items

Relevant survey items selected for the current article addressed several questions related to malingering and SVA practices and beliefs. These questions can be grouped into the following three content areas. A first content area (Practitioner’s Demographics and Assessment Routines) includes items about (a) respondents’ age, gender, educational qualification, and nationality, (b) how many years respondents have conducted psychological assessments and how frequently they currently do so, (c) whether or not respondents use psychological tests in their assessment routines, and (d) which psychological test(s) respondents use in their assessment routines, if they use tests. A second content area (Practitioner’s Procedures to Address Noncredible Responding) consists of a series of items focused, more specifically, on which instruments and procedures the respondents use in their standard SVA practice. Lastly, the third content area (Practitioner’s Beliefs Regarding Malingering) includes (a) items focused on what types of incentives to feign the respondents believe their evaluee(s) had in their most recent evaluation, and (b) items inspecting the respondents’ estimated base rates of malingering in various evaluation contexts. An English translation of all survey items examined in this article can be found in Appendix A.

Results

Practitioner’s Demographics and Assessment Routines

More than half (53.5%) of the respondents included in our initial sample (N = 157) described their gender as “woman”; the remaining respondents either described their gender as “man” (14.0%) or “non-binary” (< 1.0%), or left that survey item unanswered (31.8%). Ages ranged from 28 to 73 years, with a mean of 44.6 (SD = 10.1). About two-thirds reported having a PsyD (66.4%), less than one-third reported having a Master’s degree (27.1%), and the remaining respondents reported having either a PhD (1.9%) or both a PhD and a PsyD (4.7%). All reported Italian nationality.

The majority of respondents reported having conducted psychological assessments for 10 or more years (65.0%), about a fifth (21.7%) for 5 or more years (but less than 10), and the remaining 13.4% for less than 5 years. The frequency with which respondents conducted psychological assessments varied from “every day” to “once a year.” These responses are difficult to categorize because several respondents wrote that the frequency with which they were conducting psychological assessments was influenced by many factors (e.g., time of the year, contextual factors) and was therefore highly variable. Among those who provided a numeric estimate (n = 134), 35.1% reported that they were conducting psychological assessments either on a daily basis or more than once a week, 26.9% reported an estimated frequency of about once a week, 24.6% reported an estimated frequency of about once a month, and the remaining 13.4% reported an estimated frequency of less than once a month.

The great majority of this initial sample, i.e., 136 out of 157 (86.6%), answered “Yes” to the question, “Do you use psychological tests in your assessment routines?”; only 21 (13.4%) answered “No.” That is, almost 90% of the Italian assessors who had performed at least one malingering-related evaluation in their career reported using psychological tests in their assessment routines. As shown in Table 1, the two most widely utilized instruments (percentage of use > 50%) were the Wechsler Scales (and/or similar or related measures) and the MMPI (any version). Other widely utilized measures (percentage of use > 33%) were various performance-based (projective) instruments, various neuropsychological measures, the Rorschach, various symptom-specific measures for adults, and/or various clinician reports. Importantly for our purposes, only 13.2% reported routinely using one or more stand-alone symptom validity tests (SVTs) or performance validity tests (PVTs) in their assessments. Nevertheless, 66.9% reported using the Wechsler Scales, 65.4% the MMPI (any version), 30.9% the MCMI (any version), and 15.4% the PAI–all instruments that include embedded indicators of performance or symptom validity.

Table 1 Frequency of assessment measure use (N = 136)

A few caveats regarding Table 1 need to be pointed out. First, it is important to appreciate that Table 1 summarizes the frequency with which different tests are used in the assessment routines of Italian professionals who reported having performed, at least once in their career, malingering-related psychological evaluations. Table 1 thus does not directly indicate which tests Italian practitioners use when they perform SVA–this latter question is addressed in more detail in the next section (“Practitioner’s Procedures to Address Noncredible Responding”). Second, it should be noted that the format with which the different types of tests are grouped in Table 1, as well as the wording of the corresponding survey item, was derived and freely adapted from Wright et al. (2017).

Practitioner’s Procedures to Address Noncredible Responding

Before reaching the survey items focused on the procedures used by Italian psychologists to assess noncredible responding, 19 individuals dropped out, so that the following subset of items was completed by 138 of the initial 157 respondents. At the beginning of this series of items, an open-ended question broadly asked, “In your assessment work, which procedures do you use to assess the credibility of presented symptoms and/or the presence of feigning?” The responses provided to this question are difficult to categorize and summarize, but the great majority of respondents (> 60%) spontaneously mentioned relying on interview data (with the client and/or with other people who are close to the client) and/or on psychological testing; additionally, more than 10% indicated (or added) that they (also) consider observational materials and/or anamnestic information. With regard to psychological testing, only 33 respondents explicitly named one or more of the tests they tended to use in their SVA practice. Of those 33, 21 (63.6%) cited the MMPI (any version), 10 (30.3%) the IOP-29, 5 (15.2%) the SIMS, 3 (9.1%) the TOMM, and the remaining respondents cited various tests such as the PAI, SIRS, GSS, Rorschach task, Dot Counting Test, or CBA.

Next, four items investigated the respondents’ perceived utility of four different sources of information: (a) general clinical impressions formed during the clinical interview with the evaluee; (b) behaviors (e.g., eye contact, blushing) observed in the evaluee during the assessment; (c) SVT and/or PVT scores; and (d) patterns of elevations on multiple clinical scales from personality inventories. For each item, the response options ranged from 1 (“not useful at all”) to 4 (“extremely useful”). Although all four sources of information received notably high ratings (on average, all exceeded 3), there was a statistically significant difference in their perceived utility, F(3, 411) = 10.820, p < .001 (Table 2 and Fig. 1). More specifically, the Cohen’s d effect sizes associated with the statistically significant differences between the perceived utility of the behaviors observed during the psychological evaluation and the perceived utility of SVT/PVT results and of clinical scales’ scores were .41 and .58, respectively. The Cohen’s d value associated with the difference between the perceived utility of the general clinical impressions formed during the interview and that of the clinical scales’ scores was .39. These effect sizes may be characterized as medium, according to Cohen (1988). Thus, overall, the behaviors observed during the psychological evaluation and the general clinical impressions formed during the interview appeared to be the most useful sources of information to surveyed practitioners, with the results of SVTs and PVTs, as well as those of the clinical scales of personality inventories, playing a relatively less important role.

Table 2 Perceived utility of different sources of information: descriptive statistics (N = 138)
Fig. 1 Perceived utility of different sources of information: graphical representation (histograms). Note. Perceived utility was measured on a Likert scale as follows: 1 = not useful at all; 2 = somewhat useful; 3 = very useful; 4 = extremely useful
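For readers who wish to reproduce this kind of effect size computation, a Cohen’s d based on the pooled standard deviation of two rating samples can be sketched as follows. This is an illustrative sketch only: the rating vectors below are hypothetical placeholder values, not our survey data, and we assume the common pooled-SD formulation of d rather than a repeated-measures variant.

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical 1-4 utility ratings (NOT the actual survey responses)
observed_behavior = [4, 3, 4, 3, 4, 3, 3, 4]
svt_pvt_scores = [3, 3, 2, 3, 3, 2, 3, 3]

d = cohens_d(observed_behavior, svt_pvt_scores)
```

With real paired ratings, a repeated-measures formulation (e.g., dividing by the standard deviation of the difference scores) may be preferable; the pooled-SD version above is shown only because it is the most widely recognized variant.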

A subset of three items then focused on the respondents’ approach to interpreting the results of SVTs and/or PVTs. These three items presented three possible test outcomes and inquired whether each outcome could be considered “sufficient evidence” for a presentation to be characterized as noncredible (Table 3). For 10.9% of respondents, a single score above (or below) the suggested cut-off on one validity check (outcome #1) was deemed sufficient, whereas for 89.1%, this outcome would not be considered sufficient evidence. For 46.4%, a score notably above (or below) the suggested cut-off on one validity check (outcome #2) was deemed sufficient, whereas for 53.6%, this outcome would still not be considered sufficient evidence. Lastly, for 65.9% of respondents, failing two validity checks (outcome #3) would be considered sufficient evidence to characterize a clinical presentation as noncredible, whereas for 34.1%, this outcome would still not be enough to make such an inference.

Table 3 Outcomes considered by respondents as “sufficient evidence” for a presentation to be noncredible
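As a purely schematic illustration, the majority position just described (two validity-check failures as a minimal threshold for characterizing a presentation as noncredible) can be expressed as a simple counting rule. The sketch below is not a validated scoring algorithm: the test names, cut-off values, and failure-direction conventions are hypothetical placeholders of our own invention.

```python
def count_failures(scores, cutoffs):
    """Count validity checks on which the observed score crosses the failure cut-off.

    `scores` maps each validity check to the observed score; `cutoffs` maps it to a
    (threshold, direction) pair, where "above" means scores above the threshold count
    as failures (e.g., symptom over-report) and "below" means scores below it do
    (e.g., poor performance). All conventions here are illustrative assumptions.
    """
    failures = 0
    for check, score in scores.items():
        threshold, direction = cutoffs[check]
        if (direction == "above" and score > threshold) or \
           (direction == "below" and score < threshold):
            failures += 1
    return failures

def noncredible(scores, cutoffs, min_failures=2):
    """Two-failure threshold, mirroring the rule endorsed by 65.9% of respondents."""
    return count_failures(scores, cutoffs) >= min_failures

# Hypothetical validity checks and cut-offs, for illustration only
example_cutoffs = {"svt_a": (16, "above"), "pvt_b": (45, "below")}
flagged = noncredible({"svt_a": 20, "pvt_b": 40}, example_cutoffs)
```

In actual practice, of course, no simple count replaces clinical integration of all available data; the sketch merely makes the arithmetic of the “two failures” position explicit.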

Lastly, respondents were asked whether, before initiating an SVA, they would typically inform their clients that such an evaluation could include measures aimed at identifying a possible lack of effort or cooperation with the assessment procedures, possible symptom exaggeration, and/or possible feigning. About one-third of the respondents (32.7%) answered “Yes” to this question, whereas the remaining two-thirds (67.3%) answered “No.”

Practitioner’s Beliefs Regarding Malingering

Right after asking “Have you ever performed psychological evaluations in which the evaluee could potentially have an interest in intentionally producing false or grossly exaggerated physical or psychological symptoms, given the presence of external incentives?,” respondents who answered “Yes” were asked what kinds of incentives to feign they believed their evaluees had in their most recent evaluation. Each response option addressed one specific potential motivation to feign (primarily derived from the DSM-5 definition of malingering), and respondents could endorse more than one option, with the option “Other” offering an open-ended response field. As shown in Table 4, the most frequently endorsed incentive to feign consisted of financial motivations, such as obtaining a disability check or financial compensation from an insurance company (51.0%). The second most frequently endorsed option involved obtaining adjustments or accommodations to one’s own working conditions, such as a reduced number of working hours or the possibility to work from home (35.7%). Other possible incentives (e.g., obtaining drugs, evading criminal prosecution) obtained remarkably lower endorsement rates (≤ 28.0%).

Table 4 Presumed incentives to feign in the most recent evaluation (N = 157)

Later in the survey, another subset of items examined the respondents’ estimated base rates of malingering in six different situations/contexts. At this point, the number of respondents still in the survey had decreased to 110; these final items asked respondents to enter a number indicating the frequency with which, in their opinion, malingering would occur in various assessment contexts. The malingering base rates estimated by our surveyed practitioners are reported in Table 5 (and graphically represented in Fig. 2). These rates ranged from an average of 14.6% (SD = 15.9) to an average of 49.9% (SD = 23.5); the grand mean calculated by averaging the estimated rates across all evaluation contexts was 32.8% (SD = 15.3).

Table 5 Estimated malingering base rates by evaluation target: descriptive statistics
Fig. 2 Estimated malingering base rates by evaluation target: graphical representation (histograms)

It should be pointed out that these estimated base rates statistically differed by evaluation context/target, F(5, 480) = 65.856, p < .001. More specifically, higher estimated base rates were observed for evaluations related to culpability and competence to stand trial, psychological injury, and work-related stress, whereas lower estimated base rates were observed for evaluations related to ADHD and specific learning disabilities (results of Bonferroni-corrected pairwise comparisons are reported in the last column of Table 6).

Table 6 Estimated malingering base rates by evaluation target: Cohen’s d effect sizes
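The Bonferroni correction underlying pairwise comparisons of this kind is simple arithmetic: each raw p-value is multiplied by the number of comparisons (six evaluation contexts yield 6 × 5 / 2 = 15 pairs) and capped at 1. The sketch below is illustrative only; the context labels and raw p-values are hypothetical placeholders, not our actual results.

```python
from itertools import combinations

def bonferroni(p_values):
    """Bonferroni adjustment: multiply each raw p-value by the number of tests, cap at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Six hypothetical evaluation contexts -> 15 pairwise comparisons
contexts = ["culpability", "competence", "injury", "work_stress", "adhd", "sld"]
pairs = list(combinations(contexts, 2))

raw_p = [0.001] * len(pairs)  # hypothetical raw p-values, one per pair
adjusted = bonferroni(raw_p)
```

A comparison is then declared significant only if its adjusted p-value remains below the chosen alpha (conventionally .05), which keeps the family-wise error rate at that level.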

Additional Analyses

As noted above, of the 335 Italian psychologists who reported conducting or having conducted psychological assessments, 178 were excluded from the current study because they answered “No” to the question, “Have you ever performed psychological evaluations in which the evaluee could potentially have had an interest in intentionally producing false or grossly exaggerated physical or psychological symptoms, given the presence of external incentives?” The fact that more than half of the psychological assessors we surveyed (178 of 335, or 53.1%) denied ever having conducted an evaluation in which a patient/evaluee may have been malingering is noteworthy. Indeed, regardless of which estimates one believes to be true, the likelihood that a patient is malingering is certainly greater than 0%, even in clinical settings. Therefore, we performed some additional analyses, wondering whether those who answered “No” to this question might simply have had little experience with psychological assessment. The data, however, suggested otherwise.

More than half of these 178 Italian assessors (50.6%) reported that they had been providing psychological assessments for 10 or more years, 19.7% for 5 or more years (but less than 10), and 29.8% for less than 5 years. Regarding the frequency with which they were conducting psychological evaluations, a sizable proportion of respondents (38.2%) indicated that they were conducting psychological evaluations once or more than once a week. All in all, these results suggest that lack of experience with psychological assessment is not a likely explanation for respondents’ denying ever having conducted an assessment in which a patient/evaluee may have been malingering.

General Discussion

In 2013, an article describing the state of the art of SVA practices and beliefs in European countries highlighted that there was little research activity in Italy and that Italian practitioners were less inclined to use SVTs and PVTs compared with their counterparts from other major European countries such as the Netherlands, Norway, or Germany (Merten et al., 2013). However, in recent years, a number of articles on malingering and SVA have been published by Italian authors (e.g., Di Girolamo et al., 2021; Giromini et al., 2018, 2019, 2020, 2021; Mazza et al., 2019; Monaro et al., 2018; Orrù et al., 2021; Pace et al., 2019; Pignolo et al., 2021; Roma et al., 2020), so an update on Italian professionals’ practices and beliefs regarding malingering and SVA was deemed necessary. To fill this gap, the current article extracted, from a larger survey investigating the general psychological assessment practices and beliefs of Italian professionals, a subset of items more specifically focused on malingering and SVA, and analyzed the responses provided to these items by a sample of Italian psychologists who had some experience with malingering-related evaluations. The selected items shed light on (a) which procedures Italian psychologists typically use when they need to evaluate the credibility of problems presented by their evaluees, and (b) Italian psychologists’ estimated base rates of malingering in various evaluation contexts.

A first consideration to be made is that of the 335 Italian psychologists who reported practicing or having practiced psychological assessment, the percentage of those who also reported having performed at least one malingering-related evaluation in their career was 46.9% (i.e., 157 out of 335). That is, about one in two Italian psychological assessors declared that at least once in their career they had to consider the possibility that their evaluee could be malingering. Although the authors of the current article believe that the credibility/validity of presented symptoms should always be questioned (and tested)–in all psychological evaluations, even when there are no evident incentives to feign–this datum speaks to the relevance that the phenomenon has gained in recent years within the Italian context. Essentially, about one in two Italian assessors agrees that at least some of the individuals they have tested in their careers may have had an interest in deliberately exaggerating their symptoms in order to obtain a benefit such as disability-related compensation or prescription drugs. This is probably good news, because admitting that the person tested may have had an interest in overreporting their symptoms in at least one of their assessments is an important step toward recognizing that the credibility of the symptoms presented by the patient or evaluee cannot simply be taken for granted, but must be assessed. Looking at the other side of the same coin, however, it is somewhat concerning that about one in two Italian assessors believes that none of their patients or evaluees may have had an interest in exaggerating their symptoms to gain an external advantage. In our opinion, this finding suggests that some efforts should be made to sensitize Italian assessors to the risk of malingering in forensic and clinical contexts.

Another interesting result of this study is that even though only 13.2% of surveyed psychologists reported using stand-alone SVTs or PVTs routinely in their assessments, more than 60% spontaneously mentioned relying on these or similar validity checks when asked about their SVA routines. Thus, one might say that although Italian psychologists do not always question the credibility of presented symptoms, when they do, they are relatively prone to using SVTs and/or PVTs to assist their decision-making processes. Based on the small subset of respondents who provided explicit information on this matter, the most widely used validity checks for these kinds of evaluations currently appear to be the validity scales of the MMPI (any version), the IOP-29, the SIMS, and the TOMM.

Overall, when performing SVA, surveyed practitioners seemed to trust their own observations, impressions, and overall clinical judgment more than the results of standardized tests (SVTs, PVTs, and/or clinical scales’ scores). Indeed, as shown in Table 2, the perceived utility of test scores received lower ratings compared with other sources of information, such as the clinical judgment or the observations the clinician makes during the assessment process (test administration, etc.). These findings are in line with those reported by Dandachi-FitzGerald et al. (2013), in which methods based on subjective clinical judgment (e.g., discrepancies between records, self-reports, and observed behavior, or implausible self-reported symptoms in interview) were often used to determine symptom validity. Nevertheless, about two-thirds of our surveyed psychologists (65.9%) responded that failing two validity checks could be considered, on its own, sufficient evidence for a symptom presentation to be characterized as noncredible. This position is somewhat in line with Larrabee (2008) and Sherman et al. (2020), who suggested that two PVT failures could be a reasonable threshold to invalidate a given presentation. Conversely, notably fewer respondents (46.4%) considered a score notably above (or below) the suggested cut-off on one single validity check as sufficient evidence of noncredibility, and only about one in ten (10.9%) considered a single failure, regardless of its magnitude, as sufficient evidence.

Notably, as many as one in three surveyed psychologists (32.7%) reported warning their evaluees that the assessment instruments could include measures aimed at identifying possible feigning. Overall, we were not surprised by this result, given that a survey by Wetter and Corrigan (1995) found that the majority of law students and attorneys felt an ethical obligation to warn their clients about the presence of validity checks in the psychological assessment process. Moreover, in the Dandachi-FitzGerald et al. (2013) survey, one in four surveyed European neuropsychologists (25.4%) gave such a warning in almost every case (> 95%) in forensic assessments, whereas only 6.7% did so in almost every case (> 95%) in clinical settings. Similarly, the North American survey by Martin et al. (2015) found that 22.2% of surveyed neuropsychologists reported always warning their examinees about the presence of indicators of poor effort, exaggeration, or faking within the testing instruments.

To our knowledge, there are no specific ethical guidelines in Italy on how to deal with this delicate issue. Therefore, we encourage Italian practitioners to consider the broader international context and literature. As summarized by Iverson (2006), on the one hand, warning examinees that there are indicators of poor effort and exaggeration may be appropriate (Slick & Iverson, 2003), so that they can give fully informed consent for their evaluation. On the other hand, warning examinees immediately before administering SVA measures deviates from the standard instructions used in the normative samples from which the test accuracy data were derived, possibly reducing their diagnostic efficacy (e.g., Gervais et al., 2001; Suhr & Gunstad, 2000) or leading to more subtle forms of malingering (Youngjohn et al., 1999). In terms of future perspectives, this finding should therefore alert scholars that research on the consequences of warnings (e.g., Banovic et al., 2021; Gegner et al., 2021; Jelicic et al., 2011) is also highly needed within the Italian context.

Responses from our surveyed psychologists also suggested that SVA practice might be particularly relevant, in Italy, in the presence of possible financial (e.g., disability check, reimbursement from an insurance company) or work-related (e.g., avoidance of return to work, reduction of working hours) incentives to feign. Conversely, a relatively small number of respondents identified other possible incentives to feign (e.g., those related to criminal responsibility, prescription of drugs, or academic accommodations) as the most relevant ones in their latest SVA. In this regard, however, we would like to highlight that in real-life practice the assessor may not know that a given incentive to feign exists until after the evaluation is over, and that at times the real incentive(s) to feign may remain unknown even after the whole evaluation is finished. Accordingly, as noted above, our recommendation would be to always assess the credibility of presented symptoms, in both clinical and forensic contexts.

Lastly, with regard to estimated malingering base rates, our respondents’ opinion was that malingering would occur, in Italy, in about one out of three evaluations with a possible incentive to feign (M = 32.8%; SD = 15.3), with notable variability from one context to another. According to our respondents, malingering would be particularly frequent in evaluations related to culpability and competence to stand trial, psychological injury, and work-related stress, with estimated malingering prevalence between 40% and 50%. In other evaluations, addressing, for instance, possible ADHD or specific learning disabilities, the estimated base rates of malingering were notably lower, from 15% to 30%. All in all, our values are quite in line with Larrabee’s (2003) suggestion that malingering might occur in about 40% of assessments, with notable variability from one setting to another (in his article, Larrabee pointed out that base rate estimates ranged from 15 to 64%). Nevertheless, it should be emphasized that: (a) these estimated base rates are notably higher than those found in other surveys cited by Rogers and Bender (2018), in which the average base rate values were 15.7% and 17.4%; (b) the categories used in our study are tailored to the Italian context and therefore differ substantially from those used in previously published studies (e.g., Martin & Schroeder, 2020; Mittenberg et al., 2002; Rogers & Bender, 2018); and (c) the base rate of 40% estimated by Larrabee (2003) is likely an overestimate, according to Young (2015).

Comparing our results with those of Dandachi-FitzGerald et al. (2013), we can observe how SVA practices have changed over time in Italy. Before doing so, two differences should be noted. First, Dandachi-FitzGerald et al. (2013) recruited participants by forwarding an email to the Italian Society of Neuropsychology (“Società Italiana di Neuropsicologia”), which includes both physicians and psychologists, whereas our survey included only licensed psychologists. Second, Dandachi-FitzGerald et al. (2013) used the term “SVT” to refer to “stand-alone performance validity tests, embedded indicators of symptom validity, and self-report measures of negative response bias” (p. 771), whereas, in our survey, we distinguished stand-alone SVTs and/or PVTs from embedded indicators of symptom validity. The only result reported separately by country by Dandachi-FitzGerald et al. (2013) refers to the frequency of inclusion of SVTs in clinical and forensic assessments. In 2013, only 10.2% of Italian neuropsychologists (n = 49) reported including SVTs in the majority (i.e., more than 50%) of their clinical assessments, and only 37.5% reported doing so in the majority of their forensic assessments. In 2021, only a few of the surveyed Italian psychologists (n = 136) reported routinely using stand-alone SVTs or PVTs (13.2%), whereas the majority reported routinely using instruments with embedded indicators of symptom validity, such as the MMPI (any version; 65.4%), the MCMI (any version; 30.9%), and/or the PAI (15.4%). Although the format of the questions was not the same in the two surveys, our findings seem to indicate an improvement in the inclusion of SVA indicators in the routine assessment practice of Italian psychologists.

Like any other research project, this study has a number of limitations that need to be considered. First, because SVA was not the main target of the administered survey, which instead primarily focused on the general assessment practices and beliefs of Italian psychologists, several research questions are left unanswered by this study. For instance, how many Italian practitioners would agree that the credibility of presented problems should always be questioned and tested in forensic contexts? How many would agree that SVA should be performed in clinical contexts too? Which SVTs and PVTs do Italian practitioners rely on the most? Which ones do they trust the least? How do they integrate the results of different SVTs and PVTs when there are disagreements and inconsistencies? Unfortunately, the answers to these and several other similar questions await future research.

Another limitation of this study is that we included in our dataset only responses from Italian practitioners who reported having conducted at least one malingering-related assessment in their careers. Although this choice ensured that respondents had a minimum level of familiarity with the topic, it likely affected the outcome of our analyses in several ways. For example, it may have artificially inflated respondents’ estimated base rates of malingering, because professionals who never suspected that a client may have been malingering in any of their evaluations are likely to believe that malingering is a very rare phenomenon. Similarly, those who did suspect that their patients were malingering are more likely to administer SVTs and PVTs and to have clearer ideas about what methods should be used to identify invalid symptom presentations.

In addition, the fact that our respondents reported using various kinds of tests (for children, for adults, performance-based tests, self-report measures, etc.) suggests that they work in a variety of evaluation settings, and the context in which a professional works is likely to influence their practices and beliefs about malingering and SVA. For example, if a psychologist works in a setting where many evaluations are conducted to assess work ability for disability claims, their judgments will likely be biased toward viewing disability or financial incentives as the most common incentives for malingering, compared with colleagues who conduct mostly academic assessments or work in academic settings. However, our article did not explore this possible relationship.

Last but not least, there are currently more than 100,000 psychologists in Italy: even though not all of them conduct psychological assessment, and likely only a subgroup of them also deals with SVA, the limited sample size of our study (a little more than a hundred participants) poses serious limits to the generalizability of our findings. Indeed, the psychologists who responded to our survey may differ from those who did not, which would introduce non-response error (Guterbock & Marcopulos, 2019). For example, not all Italian psychologists read the newsletter in which the survey was advertised. Moreover, we advertised the survey only once in the newsletter and did not send reminders; as such, only those who read the newsletter regularly saw the advertisement of the survey. Finally, we were not able to target only our population of interest (i.e., Italian psychologists who perform SVA), because it is unknown how many Italian psychologists perform psychological assessment in general, and SVA in particular; the list of the National Board of Italian Psychologists was thus the only one available to reach this population. Despite these limitations, our study still has the merit of providing a timely update on Italian practitioners’ practices and beliefs related to malingering and SVA, which, as noted above, had been called for by several leading scholars.