Early intervention in psychiatry: scotomas, representativeness, and the lens of clinical populations

  • Jai L. Shah
  • Matthew I. Peters
Commentary

Abstract

Evidence supporting early intervention in mental health has gained prominence in recent years, with services for first episode psychosis having led the way. Despite this momentum, however, the extent to which rapidly accumulating data has been collected in samples resembling real-world clinical populations remains unclear. In this issue, Kline et al. compare and contrast two groups experiencing a first episode of psychosis: research participants, and a clinical sample receiving early intervention services at the same health centre. They find key differences—including the underrepresentation of vulnerable groups and surprisingly little overlap between the two samples—that should prompt reflection about blind spots, filters between research and clinical care, and how to tie the generation of evidence to practice-based research.

Keywords

Early psychosis · Schizophrenia · Representativeness · Generalizability · Implementation

The early intervention (EI) movement in psychiatry has historically championed two broad objectives: provision of phase-specific interventions and reduction of treatment delays, both of which are intimately tied to complex issues of access to care. In first episode psychosis (FEP), waves of studies have established the efficacy and effectiveness of individual therapies and multicomponent intervention packages across a range of settings [1, 2]. Despite these successes, however, it is unclear whether the evidence base is truly generalizable or if it rests primarily on studies conducted in selective subsets of the FEP population. In other words, a major scotoma or ‘blind spot’ in EI remains the extent to which reported FEP research samples are representative of real-world service users.

To their credit, Kline and colleagues tackle this challenge directly in the current issue of SPPE [3]. They do so by comparing a convenience sample of FEP patients from an urban American EI service with an overlapping set of subjects who are participants in FEP research projects organized by the same centre. Some of their results are perhaps unsurprising, such as the higher proportions of “not otherwise specified” and affective psychosis diagnoses found in clinical compared to more strictly defined research samples. The research subgroup also had less ethnic and racial diversity, greater educational attainment, and higher estimated premorbid IQ than the clinical participants (although still lower than healthy controls). A similar pattern was found for cognitive tasks and social/role functioning, with the clinical sample having the poorest scores, healthy volunteers the best and research subjects intermediate between the two.

These substantive differences between samples provide cause for reflection about the state of clinically applicable knowledge in the field. EI patients who also participated in research performed better on a range of assessments than non-participating patients, while at the same time vulnerable groups (whether defined in terms of ethnicity, education, cognition and/or functioning) were underrepresented in the research sample. Such sampling issues are, of course, unlikely to be limited to this study alone: the complexity of the authors’ findings underscores the challenges of external validity, especially as we enter an era when practice-based research holds increasing prominence [4].

Several filters could be contributing to the differences seen across research and clinical samples. Inclusion and exclusion criteria set out by researchers are designed to attain a relatively homogeneous study population. In doing so, however, these criteria may limit the generalizability of reported results to somewhat narrow groups. The precision of study criteria will also vary based on the type and purpose of the investigation being conducted; for example, effectiveness studies may have more flexible criteria than clinical trials or neuroimaging studies. Next, even beyond inclusion and exclusion criteria, prospective research participants are often suggested by front-line clinicians, who carry their own conscious and unconscious biases: patients who are seen as more capable (whether due to higher functional or cognitive capacity, motivation, ability to engage fruitfully with the research question, and so on) may be preferentially approached. Finally, a range of potentially intersecting factors may influence the ability, willingness or interest of individuals themselves to participate in such studies when offered the opportunity: poverty and associated economic (dis)incentives, skepticism regarding research participation among marginalized groups, and even stigma may be prominent in specific subpopulations.

These factors alone could account for why white, more highly educated, and overall “squeaky-clean” participants (with fewer comorbidities) were overrepresented in the research sample compared to the clinical one. However, it is also worth noting that of the 44 research subjects and 77 EI service users, only 15 were members of both groups. The fact that most individuals were either patients or research participants (and not both) raises the possibility that these samples are actually drawing from different populations. Here Kline et al. acknowledge that the FEP research sample was selective, but refer to the overall clinical group as “naturalistic” even though it too was probably the product of idiosyncratic referral pathways to the EI service. While there are no insurance requirements to enter the PREP clinical service, one can imagine that these gaps might be (at least in part) a function of the fragmented nature of care pathways in health systems with generally substantial barriers to access. In contrast, clinical samples in broadly accessible EI services with well-publicized pathways and few systemic hurdles to entry are likely to be both larger and more representative of actual community samples, from which research samples could then be derived.
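To put the size of that overlap in perspective, the minimal sketch below works only from the counts quoted above (44 research subjects, 77 EI service users, 15 individuals in both); the derived quantities are simple arithmetic on those figures, not additional data from the study.

```python
# Overlap between the research and clinical samples, using only the
# counts reported in the study: 44 research subjects, 77 EI service
# users, and 15 individuals belonging to both groups.
n_research = 44
n_clinical = 77
n_both = 15

research_only = n_research - n_both           # 29 studied but not in the clinic sample
clinical_only = n_clinical - n_both           # 62 treated but never studied
n_union = n_research + n_clinical - n_both    # 106 unique individuals overall

jaccard = n_both / n_union                    # overlap as a share of all individuals (~0.14)
clinic_coverage = n_both / n_clinical         # share of service users also in research (~0.19)

print(f"Research-only: {research_only}, clinic-only: {clinical_only}")
print(f"Jaccard overlap of the two samples: {jaccard:.2f}")
print(f"Share of the clinical sample represented in research: {clinic_coverage:.2f}")
```

On these numbers, roughly one in five service users appears in the research sample, and the two groups share only about 14% of the individuals they jointly cover, which is why the possibility of two distinct underlying populations deserves to be taken seriously.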

In recognition of these layers of complexity, EI teams have begun to situate their research findings in the context of the larger clinical population they serve [5], compared to peers who are matched on a number of dimensions [6], or even to those with similar diagnostic and treatment features who do not access EI treatment programs [7]. Ultimately, both clinical and research infrastructures must capture substantially larger samples (such as through well-publicized catchment-based services) if they are to accurately represent the total affected clinical population [8]. And since improving representativeness is itself an important implementation task, EI learning networks are now being designed with sampling and case identification strategies that reflect local realities [4, 9].

Set in this context, the investigation by Kline et al. can spark practical recommendations to strengthen both representativeness and service-based research in EI settings. First, prioritizing the establishment of a robust clinical EI service (within which research studies can then be nested) should itself guide more individuals to evidence-based care. Second, research samples should be recruited primarily, or ideally solely, from the index clinical service, such that participation in research projects is consistently offered to all eligible patients receiving EI care. Third, within this framework, particular attention could be paid to whether vulnerable or underrepresented populations are present in both clinical and research samples, with corresponding adjustments and/or outreach efforts to bridge any identified gaps. Finally, the systematic collection of basic, clinically relevant individual- and service-level data in all those meeting criteria for treatment would allow for ongoing assessments of representativeness alongside measurement-based care and service improvement, not just the execution of defined research projects.
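As a toy illustration of what such ongoing monitoring could look like, the sketch below compares research participants against the rest of a clinical population on routinely collected variables and flags large standardized differences. The records, variable names, and cutoff are hypothetical and purely illustrative; they are not drawn from the Kline et al. dataset or from any particular service's data model.

```python
import math

# Hypothetical routinely collected records: one dict per patient meeting
# criteria for EI treatment, with a flag for research participation.
# Field names and values are illustrative only.
patients = [
    {"age": 21, "years_education": 11, "in_research": False},
    {"age": 24, "years_education": 14, "in_research": True},
    {"age": 19, "years_education": 10, "in_research": False},
    {"age": 26, "years_education": 16, "in_research": True},
    {"age": 22, "years_education": 12, "in_research": False},
]

def standardized_mean_difference(values_a, values_b):
    """Cohen's d-style gap between two groups on one continuous variable."""
    mean_a = sum(values_a) / len(values_a)
    mean_b = sum(values_b) / len(values_b)
    var_a = sum((x - mean_a) ** 2 for x in values_a) / (len(values_a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in values_b) / (len(values_b) - 1)
    pooled_sd = math.sqrt((var_a + var_b) / 2)
    return (mean_a - mean_b) / pooled_sd if pooled_sd > 0 else 0.0

research = [p for p in patients if p["in_research"]]
clinic_only = [p for p in patients if not p["in_research"]]

# Flag variables on which the research subsample drifts away from the
# rest of the clinical population (|SMD| > 0.5 as an arbitrary cutoff).
for variable in ("age", "years_education"):
    smd = standardized_mean_difference(
        [p[variable] for p in research],
        [p[variable] for p in clinic_only],
    )
    flag = "  <-- review recruitment" if abs(smd) > 0.5 else ""
    print(f"{variable}: SMD = {smd:+.2f}{flag}")
```

In practice such a check would run over the full service registry and include the sociodemographic and clinical dimensions discussed above (ethnicity, education, cognition, functioning), so that gaps in representativeness surface as part of routine service monitoring rather than only at the end of a defined research project.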

Changes such as these should address at least some of the scotomas that currently limit the ability of research projects to reflect their clinical populations and surrounding context. And though these steps undoubtedly come with challenges of their own, they can hopefully push the field towards a more transparent accounting of the dimensions in which reported research samples do (and do not) resemble clinical ones.

Notes

Acknowledgements

JLS wishes to thank the late Larry J Seidman for his generosity over many years.

Compliance with ethical standards

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

References

  1. Correll CU, Galling B, Pawar A, Krivko A, Bonetto C, Ruggeri M, Craig TJ, Nordentoft M, Srihari VH, Guloksuz S, Hui CLM, Chen EYH, Valencia M, Juarez F, Robinson DG, Schooler NR, Brunette MF, Mueser KT, Rosenheck RA, Marcy P, Addington J, Estroff SE, Robinson J, Penn D, Severe JB, Kane JM (2018) Comparison of early intervention services vs treatment as usual for early-phase psychosis: a systematic review, meta-analysis, and meta-regression. JAMA Psychiatry 75(6):555–565. https://doi.org/10.1001/jamapsychiatry.2018.0623
  2. Srihari VH, Shah J, Keshavan MS (2012) Is early intervention for psychosis feasible and effective? Psychiatr Clin N Am 35(3):613–631. https://doi.org/10.1016/j.psc.2012.06.004
  3. Kline E, Hendel V, Friedman-Yakoobian M, Mesholam-Gately RI, Findeisen A, Zimmet S, Wojcik JD, Petryshen TL, Woo T-UW, Goldstein JM, Shenton ME, Keshavan MS, McCarley RW, Seidman LJ (2018) A comparison of neurocognition and functioning in first episode psychosis populations: do research samples reflect the real world? Soc Psychiatry Psychiatr Epidemiol. https://doi.org/10.1007/s00127-018-1631-x
  4. RFA-MH-19-150 (2019) Early psychosis intervention network (EPINET): practice-based research to improve treatment outcomes (R01 Clinical Trial Optional). https://grants.nih.gov/grants/guide/rfa-files/RFA-MH-19-150.html. Accessed 11 Feb 2019
  5. Shah JL, Crawford A, Mustafa SS, Iyer SN, Joober R, Malla AK (2017) Is the clinical high-risk state a valid concept? Retrospective examination in a first-episode psychosis sample. Psychiatr Serv 68(10):1046–1052. https://doi.org/10.1176/appi.ps.201600304
  6. Phutane VH, Tek C, Chwastiak L, Ratliff JC, Ozyuksel B, Woods SW, Srihari VH (2011) Cardiovascular risk in a first-episode psychosis sample: a ‘critical period’ for prevention? Schizophr Res 127(1–3):257–261. https://doi.org/10.1016/j.schres.2010.12.008
  7. Anderson KK, Norman R, MacDougall A, Edwards J, Palaniyappan L, Lau C, Kurdyak P (2018) Effectiveness of early psychosis intervention: comparison of service users and nonusers in population-based health administrative data. Am J Psychiatry 175(5):443–452. https://doi.org/10.1176/appi.ajp.2017.17050480
  8. Anderson KK, Norman R, MacDougall AG, Edwards J, Palaniyappan L, Lau C, Kurdyak P (2018) Estimating the incidence of first-episode psychosis using population-based health administrative data to inform early psychosis intervention services. Psychol Med. https://doi.org/10.1017/s0033291718002933
  9. Malla A, Iyer S, Shah J, Joober R, Boksa P, Lal S, Fuhrer R, Andersson N, Abdel-Baki A, Hutt-MacLeod D, Beaton A, Reaume-Zimmer P, Chisholm-Nelson J, Rousseau C, Chandrasena R, Bourque J, Aubin D, Levasseur MA, Winkelmann I, Etter M, Kelland J, Tait C, Torrie J, Vallianatos H (2018) Canadian response to need for transformation of youth mental health services: ACCESS open minds (esprits ouverts). Early Interv Psychiatry. https://doi.org/10.1111/eip.12772

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Prevention and Early Intervention Program for Psychoses (PEPP-Montréal), Douglas Mental Health University Institute, Montreal, Canada
  2. Department of Psychiatry, McGill University, Montreal, Canada