
Assessment in the Every Student Succeeds Act: Considerations for School Psychologists

  • Systematic Review
  • Published in Contemporary School Psychology

Abstract

The Every Student Succeeds Act (ESSA) aims to ensure that all students are college- and career-ready by requiring all schools to implement high-quality accountability systems and services for students. The ESSA affects assessment practices in schools by requiring staff to account for a broader range of variables related to student well-being, both academic and non-academic (e.g., student mental health). Schools likely will (and should) rely on the expertise of school psychologists in designing and implementing high-quality assessment systems. Thus, school psychologists must be prepared to review a variety of assessment materials and to select instruments that are best suited for specific purposes, contexts, and populations. The aim of this article is to familiarize school psychologists with the ESSA requirements for school accountability as well as with critical issues in evaluating assessment systems. More specifically, this article presents considerations for evaluating the validity and reliability of assessment tools and procedures. Implications for school psychologists engaged in implementing the ESSA are also described.
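
To make the reliability considerations concrete, the sketch below is a minimal, hypothetical illustration (not code from the article) of one classical internal-consistency index, Cronbach's coefficient alpha, computed in Python from an invented examinee-by-item score matrix:

    import numpy as np

    def cronbach_alpha(scores):
        """Coefficient alpha for an (examinees x items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical Likert-type responses: 5 examinees x 4 items
    responses = [
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 4, 5, 4],
        [1, 2, 1, 2],
        [3, 3, 4, 3],
    ]
    print(f"alpha = {cronbach_alpha(responses):.3f}")

Higher values indicate greater internal consistency; in practice, school psychologists would weigh such indices alongside validity evidence when selecting instruments for a given purpose, context, and population.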



Author information

Correspondence to Sally L. Grapin.

Ethics declarations

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Conflict of Interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Grapin, S.L., Benson, N.F. Assessment in the Every Student Succeeds Act: Considerations for School Psychologists. Contemp School Psychol 23, 211–219 (2019). https://doi.org/10.1007/s40688-018-0191-0

