Drawing inspiration from the evidence-based movement in medicine (Sackett et al. 1996), a number of evidence-based practice (EBP) initiatives have been undertaken in professional psychology over the last two decades. For example, the American Psychological Association convened a Task Force in 2005 that produced a report describing psychology’s fundamental commitment to sophisticated EBP (American Psychological Association Presidential Task Force on Evidence-based Practice 2006), and a number of ad hoc working groups have been convened to help identify and disseminate empirically supported interventions in school psychology (Kratochwill 2007; Kratochwill and Shernoff 2004). As defined by the Task Force, EBP in psychology “is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (p. 273).

As with the broader EBP movement in psychology, attempts to advance EBP in school psychology have, to this point, focused almost exclusively on interventions. Whereas the importance of assessment has been alluded to in various practice guidelines, installation efforts for evidence-based assessment (EBA) in the field remain noticeably absent. This unevenness is particularly concerning in our discipline given the prominent role of assessment and assessment-related activities in professional practice.

The Role of EBA in School Psychology

Advancing EBP in applied assessment is complicated by the fact that numerous pieces of information compete for our attention and that it can be difficult to discern when information is both scientifically valid and clinically relevant (Youngstrom et al. 2015). As a safeguard, Lilienfeld et al. (2012) suggest that “all school psychologists, regardless of the setting in which they operate, need to develop and maintain a skill set that allows them to distinguish evidence-based from non-evidence based practices” (p. 8). According to Hunsley and Mash (2007),

A truly evidence-based approach to assessment, therefore, would involve an evaluation of the accuracy and usefulness of this complex decision-making task in light of potential errors in data synthesis and interpretation, the costs associated with the assessment process and, ultimately, the impact the assessment had on clinical outcomes for the person(s) being assessed (p. 30).

To aid in this endeavor, the EBA framework focuses specifically on the “3 Ps” of clinical assessment (Youngstrom 2013): prediction (how well do assessment data relate to a target criterion?), prescription (how well does assessment information inform treatment?), and process (are the assessment data useful for identifying variables that mediate treatment progress?). Traditional approaches to scale validation, such as factor analysis, remain important; however, the EBA framework places particular emphasis on using base rates, validity data, and Bayesian reasoning to reduce clinical ambiguity when making decisions at the level of the individual. Put simply, the goal of EBA is to put our preferred assessment methods and models to the test so that we can determine which procedures are most useful for our charges.
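
To make the arithmetic behind this reasoning concrete, the sketch below illustrates Bayesian updating in the odds form used by probability nomograms: a local base rate (the prior probability) is combined with a diagnostic likelihood ratio derived from a test’s validity data to yield a posttest probability. The base rate and likelihood ratio values are assumptions chosen only for demonstration and are not drawn from any study cited here.

```python
# A minimal sketch of Bayesian updating in the odds form used by probability
# nomograms. All numbers are hypothetical and chosen only for illustration.

def posttest_probability(base_rate: float, likelihood_ratio: float) -> float:
    """Update a base rate (prior probability) with a diagnostic likelihood ratio."""
    pretest_odds = base_rate / (1.0 - base_rate)       # probability -> odds
    posttest_odds = pretest_odds * likelihood_ratio    # Bayes' rule in odds form
    return posttest_odds / (1.0 + posttest_odds)       # odds -> probability


if __name__ == "__main__":
    base_rate = 0.10        # hypothetical local base rate of the condition
    dlr_positive = 4.0      # hypothetical likelihood ratio for a positive finding
    dlr_negative = 0.4      # hypothetical likelihood ratio for a negative finding

    print(f"After a positive finding: {posttest_probability(base_rate, dlr_positive):.2f}")  # ~0.31
    print(f"After a negative finding: {posttest_probability(base_rate, dlr_negative):.2f}")  # ~0.04
```

In this hypothetical case, even a moderately informative indicator moves a 10% base rate only to roughly 31%, which is precisely the kind of residual ambiguity the EBA framework asks practitioners to quantify rather than assume away.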

Special Issue and Future Research

In a previous issue of this journal, McGill (2018) utilized the EBA framework to illustrate the futility of using cognitive scatter analysis for specific learning disability (SLD) classification. In this study, positive predictive values (PPV [i.e., the probability that SLD is present when an individual exhibits significant cognitive scatter]) and diagnostic likelihood ratios were obtained for increasing levels of scatter among participants in the KABC-II normative sample. Probability nomograms were then constructed for each level of scatter (see Pendergast et al. 2018 for an in-depth treatment of nomograms and their potential applications in school psychology). These nomograms clearly illustrate that the assessment information afforded by scatter analysis does not improve the probability of correct SLD classification beyond a priori base rates. Put simply, these assessment data likely have no diagnostic value for SLD identification. As a result of this study, a special issue was commissioned to further elaborate on EBA in school psychology.
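
For readers unfamiliar with these statistics, the sketch below shows how positive predictive values and diagnostic likelihood ratios are derived from a 2 × 2 classification table. The cell counts are hypothetical and do not reproduce the KABC-II analyses described above.

```python
# A minimal sketch of the statistics reported in studies of this kind. The 2 x 2
# cell counts below are hypothetical and do not reproduce McGill's (2018) results.

def diagnostic_statistics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute PPV and diagnostic likelihood ratios from classification counts."""
    sensitivity = tp / (tp + fn)          # P(significant scatter | SLD present)
    specificity = tn / (tn + fp)          # P(no significant scatter | SLD absent)
    return {
        "PPV": tp / (tp + fp),                        # P(SLD | significant scatter)
        "DLR+": sensitivity / (1.0 - specificity),    # likelihood ratio, scatter present
        "DLR-": (1.0 - sensitivity) / specificity,    # likelihood ratio, scatter absent
    }


if __name__ == "__main__":
    # Hypothetical counts: tp/fn index cases with SLD, fp/tn index cases without SLD.
    stats = diagnostic_statistics(tp=30, fp=120, fn=20, tn=330)
    for name, value in stats.items():
        print(f"{name}: {value:.2f}")   # e.g., PPV = 0.20, DLR+ = 2.25, DLR- = 0.55
```

A likelihood ratio near 1.0 means the finding barely moves the a priori base rate, which is consistent with the pattern described above for scatter analysis.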

School psychology scholars and practitioners were asked to report original research addressing the application of EBA in the discipline. Styck, Aman, and Watkins examine the utility of patterns of performance on scores from the Wechsler Scales for differentiating children and adolescents with autism. Beaujean and Benson describe issues with cognitive test development and score interpretation and illustrate that it may be difficult for commercial ability measures to simultaneously measure both general and specific abilities well, a finding that may have broader implications for the use of diagnostic frameworks (i.e., patterns of strengths and weaknesses [PSW] approaches) that focus on the clinical interpretation of part scores. In one of the few studies of its kind, Parkin and Frisby examine the factor structure of the KTEA-3 and find inconsistent support for the measurement model suggested by the publisher, indicating that comprehensive norm-referenced tests of achievement may suffer from some of the same structural validity concerns that have been raised about intelligence tests.

Switching the focus to curriculum-based measurement, Van Norman and colleagues and Diggs and Christ examine, respectively, the utility of multiple measures for gated screening and the incremental validity of reading comprehension data above and beyond oral reading fluency data. Gross, Farmer, and Ochs conduct a comprehensive survey of assessment practices in the field and outline the various strengths and weaknesses associated with particular approaches. Finally, invited commentaries were solicited from two respected assessment scholars. Both Ward and Canivez outline potential strengths of the EBA approach while also identifying barriers to its dissemination.

We believe that the research contained in this issue is important for advancing the science of clinical assessment (McFall 1991). Ultimately, more investigation is needed to expand our understanding of the EBA framework, and these efforts should go beyond traditional psychometric investigations of individual tests. Future research (a) examining the utility of decision-making frameworks, (b) identifying the factors that influence assessment-related knowledge dissemination and training in the field, and (c) enhancing the utility of clinical judgment, in particular, would be welcomed.