Abstract
The specific learning disabilities (SLD) identification literature is replete with competing narratives concerning the advantages and disadvantages of various techniques and methods. Until a widely accepted and empirically proven SLD identification methodology is universally supported, evaluators should seek to improve the existing alternatives. This article describes the value of using norm-referenced testing of intellectual development to comprehensively identify SLD, as advocated by the Core-Selective Evaluation Process (C-SEP). To this end, we define intellectual development and describe practices such as integrated data analysis and task demand analysis.
Since the inception of the specific learning disability (SLD) category in 1975, the best and most accurate method of identification has been controversial and continues to perplex the field (Colker, 2011; Decker et al., 2013). From the 2006 Federal Regulations emerged “third method” approaches that incorporate pattern of strengths and weaknesses (PSW) data analysis to identify the presence of SLD. This addition has further exacerbated the current controversy (Büttner & Hasselhorn, 2011; Decker et al., 2013; McGill et al., 2018). A broad range of perspectives and practices drives SLD identification (Cottrell & Barrett, 2017) and implementation (Maki & Adams, 2018, 2019; Stuebing et al., 2012), resulting in little agreement. Some scholars oppose SLD identification methods that utilize cognitive tests (i.e., PSW models) and instead advocate for intervention-based approaches (Fletcher & Miciak, 2019; Kranzler et al., 2019; Miciak et al., 2016). Advocates for third-method approaches (Dehn, 2014; Flanagan & Schneider, 2016; Hale et al., 2010; Schultz & Stephens, 2018) endorse thoroughly examining PSWs in achievement, performance, or both, including cognition, to explain low achievement. Administering norm-referenced tests to understand the relationship between psychological processing and learning is a critical underpinning of this method. While the Individuals with Disabilities Education Improvement Act of 2004 (IDEA) allows for an ability-achievement discrepancy approach, it is not strongly advocated as a viable method of SLD identification like RTI and PSW (Maki & Adams, 2018) and, therefore, will not be discussed in any detail in this article.
All “third method” PSW approaches (e.g., Dual Discrepancy/Consistency [DD/C], Concordance-Discordance [C-DM], Discrepancy/Consistency [D-CM]) utilize norm-referenced tests as part of the identification process (Alfonso & Flanagan, 2018; Flanagan et al., 2010; McGill et al., 2015; Stuebing et al., 2012; Taylor et al., 2017). A detailed review of these models is beyond the scope of this paper; however, a discussion of shared features that make these models problematic follows. While PSW models represent a significant improvement over ability-achievement methods, some practices are reformulated revisions of the ability-achievement method. For example, all models mentioned above use standard score discrepancies between cognition and achievement to identify the presence of SLD (Schultz & Stephens, 2018; Taylor et al., 2017). Schultz and Stephens (2018), authors of C-SEP, believe using norm-referenced tests for cognitive-achievement discrepancies is of little value, as norm-referenced “achievement” tests are integrated cognitive and linguistic representations using academic skills as the stimulus. For example, removing working memory and language comprehension from a reading comprehension measure is impossible. Instead, using norm-referenced tests helps explain the lack of appropriate progress and provides deeper insight into learning. Strengths and weaknesses in achievement, performance, or both can be determined without a norm-referenced test; however, measuring intellectual development without valid and reliable tools (i.e., norm-referenced tests) contradicts the procedural safeguards. When using C-SEP, responsible and efficient testing practices (Schultz et al., 2021; Schultz & Stephens, 2018) measure the construct of intellectual development.
What distinguishes C-SEP (Schultz & Stephens, 2018) from other third-method approaches is what evaluators use norm-referenced tests to measure (i.e., intellectual development) and how evaluators analyze testing data. C-SEP provides a framework that integrates instructional response data with norm-referenced tests to provide a more complete understanding of a student with SLD. Instead of using norm-referenced tests to identify discrepancies between cognition and achievement to support the presence of SLD, C-SEP uses norm-referenced tests only to measure intellectual development (Schultz et al., 2021). Understanding intellectual development provides critical information to help explain why a student is (a) not meeting grade-level academic standards, (b) demonstrating poor instructional response, and (c) exhibiting a pattern of strengths and weaknesses in achievement, performance, or both.
Intellectual development will be further defined below, with the rest of this article dedicated to understanding student learning by integrating norm-referenced testing results, instructional response outcomes, and other data sources. Specific strategies for integrated data analysis (Curran & Hussong, 2009; O’Cathain et al., 2010), specifically pattern analysis and task demand analysis, will be explained. Finally, C-SEP testing practices are contrasted with other third-method approaches.
Intellectual Development Explained
When examining a PSW, the term “intellectual development” was intentionally used in the statute to describe one method of comparison, along with age and State-approved grade-level standards. It was left undefined in the statute and regulations despite stakeholder requests to clarify it in the Code of Federal Regulations (2006). Some excerpts include:
Comment
Several commenters requested that the regulations include a definition of
"Intellectual Development"
Discussion
We do not believe it is necessary to define “intellectual development” in these regulations. Intellectual development is included in § 300.309(a)(2)(ii) as one of three standards of comparison, along with age and State-approved grade-level standards. The reference to “intellectual development” in this provision means that the child exhibits a pattern of strengths and weaknesses in performance relative to a standard of intellectual development such as commonly measured by IQ tests. Use of the term is consistent with the discretion provided in the Act in allowing the continued use of discrepancy models (p. 46,654).
Comment
Some commenters recommended using “cognitive ability” in place of “intellectual development” because “intellectual development” could be narrowly interpreted to mean performance on an IQ test. One commenter stated that the term “cognitive ability” is preferable because it reflects the fundamental concepts underlying SLD and can be assessed with various appropriate assessment tools. A few commenters stated that the reference to identifying a child’s pattern of strengths and weaknesses that are not related to intellectual development should be removed because a cognitive assessment is critical and should always be used to make a determination under the category of SLD.
Discussion
We believe the term “intellectual development” is the appropriate reference in this provision. Section 300.309(a)(2)(ii) permits the assessment of patterns of strengths and weaknesses in performance, including performance on assessments of cognitive ability. As stated previously, “intellectual development” is included as one of three methods of comparison, along with age and State-approved grade-level standards. The term “cognitive” is not the appropriate reference to performance because cognitive variation is not a reliable marker of SLD and is not related to intervention.
Changes
None (p. 46,654).
Legal language used in federal regulations is intentional, and unless explicitly stated otherwise, words retain their plain, ordinary, and literal meanings. Explicitly stated in the comments of the regulations is that “cognitive ability” is not a suitable synonym or replacement for “intellectual development.” The department also pointed out that a PSW in performance can include cognitive ability. This refusal to define the term leaves “intellectual development” undefined. The Cambridge Dictionary defines “intellectual” as “relating to your ability to think and understand, especially complicated ideas,” and “development” as “the process in which someone or something grows or changes and becomes more advanced” (Cambridge Dictionary, 2023). Many of these terms are included in the federal definition of SLD (“Statute and regulations,” 2022), for example, the ability to “listen, think, speak” and “understanding language,” as well as the process of thinking (psychological processing).
Schultz et al. (2021) provide a practical definition of intellectual development comprising the most essential elements of intelligence: abstract thinking or reasoning, problem-solving ability, capacity to store knowledge (including academic knowledge), memory, environmental adaptation, mental speed, and linguistic competence. The work of Snyderman and Rothman (1987) and contemporary Cattell-Horn-Carroll (CHC; McGrew, 2009, 2021) theory forms the basis of this definition. In simple terms, one’s intellectual development is “developed” via the interaction of four variables: innate cognitive ability, language development, acquired academic skills, and interaction with the environment.
Intellectual development is not a static or fixed construct (Blackwell et al., 2007; Macnamara & Rupani, 2017; Resnick & Schantz, 2015); instead, it is a dynamic construct that continues to develop with increased acculturation, education, and environmental interaction. It is difficult, if not impossible, to separate the highly correlated and interdependent constructs of cognition, language, and academic skills. Gottfredson’s (1997) definition of “intelligence” includes these interactions:
“Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings: ‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do” (p. 13).
Several empirical studies underscore the dynamic aspects of intelligence. According to Plomin and Deary (2014), intelligence is one of the most heritable behavioral traits. They explored the relationship between genetics and intelligence by integrating twin studies with Genome-wide Complex Trait Analysis. The correlation between intelligence, education (academic knowledge and skills), and environment is significant for educators and psychologists. In addition, the researchers reported increases in heritability across the life course. This relationship supports the notion that intellectual development is dynamic, innate, and “under development,” and recognizes the role of education, where one learns academic knowledge, skills, and advanced language. This study and others do not suggest that intelligence is 100% heritable; rather, genetics accounts for a significant proportion of its composition. Deary et al. (2009) report a substantial heritability of g (general intelligence), from 30% in childhood to 50% in adulthood, but point out that environmental variables play a critical role in intellectual development.
Parents’ level of education is an environmental factor with a strong and consistent correlation with intellectual development (Cave et al., 2022; Lemos et al., 2011). This environmental variable is a demographic consideration for well-known test batteries. Children from lower socioeconomic status (SES) backgrounds tend to have lower IQ scores and more impacted areas of cognition than children from higher SES backgrounds (von Stumm & Plomin, 2015). Rindermann et al. (2010) studied 1,555 children to investigate the effects of parents’ level of education and socioeconomic status on crystallized intelligence and fluid reasoning. Their findings suggest that these two variables have a slightly stronger effect on crystallized intelligence than on fluid reasoning.
Additionally, parents’ level of education significantly impacts language development in children, which is usually attributed to a higher quality and quantity of child-directed language (Rowe, 2012; Thomas et al., 2013). The links between language development, cognition, and academic success are well documented (Berninger & O’Malley May, 2011; Hulme et al., 2020; Lauro et al., 2020; Oommen, 2014; Perlovsky, 2009).
To summarize, “intellectual development” is an individual’s innate cognitive ability, language, achievement, and environment operating collectively. Each variable interacts with and influences the others; understanding this constellation of abilities requires accurate measurement. Intellectual development is much more than the static view of “intelligence”; the word “development” signals a construct perpetually under construction. In addition, intellectual development was the term deliberately chosen as a method of comparison in IDEA. Arguably, nothing impacts intellectual development more than education. As authors of C-SEP, we contend that we can only truly understand learning, or the lack of learning, by comprehensively measuring this construct. The remainder of this paper is dedicated to measuring intellectual development using C-SEP and the subsequent data analysis.
Core-Selective Evaluation Process (C-SEP)
C-SEP is a well-designed problem-solving model that incorporates best practices in assessment to inform the evaluator’s professional judgment. C-SEP advocates strict adherence to the standardized testing procedures outlined in test publishers’ manuals and to the legal regulations that guide our profession (see Alfonso & Flanagan, 2018; Dombrowski, 2020; Schultz & Stephens, 2015; Schultz & Stephens-Pisecco, 2017; Schrank et al., 2017). C-SEP involves strategically using norm-referenced tests, considering the instructional response, and thoroughly examining exclusionary factors to explain underachievement. Various assessment tools and strategies generate valuable data that inform professional judgment. Careful consideration of all the data using mixed-method research design principles (i.e., convergent design) and analysis (integrated data analysis, mixed analysis, pattern analysis [Schultz et al., 2012]) informs decision-making. This type of analysis is better suited to analyzing and integrating the quantitative and qualitative data sets collected as part of an SLD evaluation. C-SEP recognizes the limitations of models that rely on standard score discrepancies as well as the limitations of RTI models. C-SEP minimizes these limitations by collecting complete and comprehensive data (i.e., RTI and norm-referenced data) and using accepted data analysis principles to inform professional judgment. If the preponderance of the evidence is consistent with the SLD definition under the state’s criteria, then the student would be considered to have an SLD.
Critics of C-SEP (Benson et al., 2018; Fletcher & Miciak, 2019) note that the lack of a research base voids C-SEP as an evidence-based practice or a superior way to identify students with SLD. Schultz and Stephens (2018), authors of C-SEP, agree with that sentiment, with the caveat that none of the models of SLD identification can be considered the “gold standard” of SLD identification (Stuebing et al., 2012). The complex nature of SLD, which comprises multiple interacting variables, renders it nearly impossible to control for in empirical studies (Schultz & Stephens, 2018). Research deeming PSW models ineffective (Kranzler et al., 2016; Miciak et al., 2014; Stuebing et al., 2012; Taylor et al., 2017) uses formula configurations and discrepancies as evidence. Absent in these studies are discussions of the heterogeneous nature of SLD, the integration of informal data, the influence of environmental factors, the examiner’s professional judgment, quality of instruction, student age, and so on. These limitations are not sufficiently addressed in these studies. The “rebuttal literature” addresses many of the claims made by adversaries of PSW models (Christo et al., 2016; Flanagan & Schneider, 2016; Schneider & Kaufman, 2016; Schultz & Stephens, 2018). Absent consensus, Schultz and Stephens (2018) recognize the need for improved and informed professional judgment. We also recognize that evaluators must use accepted research principles to evaluate students. Consequently, until a consensus is reached, C-SEP is a way to bridge this divide and give practitioners a framework they can use in their current practice.
Schultz and Stephens (2015) contend that C-SEP is the “best” practice because it addresses the limitations of the other models, utilizes data analysis procedures beyond standard scores, and leads to more diagnostically precise identification. Below are some common criticisms and responses:
1. Statistical limitations of models that use discrepancy formulas are concerning (Kranzler et al., 2019; Miciak, 2016; Taylor et al., 2017). While we share that concern to a degree, we also assert that the problem lies with examiners’ over-reliance on these formulas instead of using them to inform professional judgment. In C-SEP, all norm-referenced data are anchored to other actual data sources to help control for this limitation.
2. We recognize that standard third-method “PSW” approaches utilize standard score discrepancies between cognition and achievement tests to identify the presence of SLD (Alfonso & Flanagan, 2018; Schultz & Stephens-Pisecco, 2018a, b). These procedures are reminiscent of the ability-achievement discrepancy models. In C-SEP, the basis of the data analysis is understanding the relationships between cognition, language, and academics at a much deeper level to better understand the learner. Instead of considering cognition and achievement as separate entities, we consider these two constructs and language as parts of one’s overall “intellectual development.” C-SEP endorses data analysis techniques more suited to understanding a student’s PSW than relying on standard scores alone.
3. We recognize the limitations of RTI-based approaches: they rely too much on assumptive evidence instead of direct evidence, and an evaluation must identify the disorder in psychological processes at the core of SLD to claim comprehensiveness. As Schneider and Kaufman (2017) stated, “Knowing things is preferable to not knowing things” (p. 13). Language is a salient feature of SLD, yet the RTI literature is noticeably silent regarding oral language specific learning disabilities. Not only is oral language an eligibility category, but poor oral language development is highly associated with written language deficits (Foorman et al., 2018; Hulme et al., 2020; Kendeou, 2009) and executive functioning (Cutting et al., 2009). Without measuring language using valid and reliable tools (i.e., norm-referenced tests), it would again be challenging to claim comprehensiveness. In addition, speech/language problems are often missed in students with disabilities (McGregor, 2020; Nation et al., 2004). With C-SEP, we formally assess language in all evaluations and describe how it contributes to learning problems (Schultz & Stephens, 2015, 2018; Schultz et al., 2023). Language assessment enhances the diagnostic precision of the method.
4. RTI-based identification proponents often claim that RTI-based models lead to better intervention. The unanswered question is, “How does poor intervention response lead to better intervention?” Theoretically, intense instruction has already been provided and proven ineffective, or there would be no referral to special education. When using C-SEP, evaluators conduct a comprehensive evaluation to improve diagnostic precision. Special education includes adapting materials and methods, providing related services, and individualizing accommodations and modifications. “Poor treatment response” alone does not lead to individualized support; creating an individualized program requires examining intra-individual factors not easily observed. In C-SEP, careful analysis of a student’s intellectual development assists in designing truly individualized programs.
5. Most importantly, as scholar-practitioners, we do not foresee a consensus anytime soon on whether RTI or PSW models are the preferred model supported by empirical research. As the theoretical debate continues in the literature, school psychologists, educational diagnosticians, and IEP teams need a method (i.e., C-SEP) that utilizes the strengths of all known models.
When using the C-SEP framework of SLD identification, the examiner must be familiar with the instructions provided by the test publisher. Of great importance is an understanding of the core set of tests that provides the broadest coverage of the intended construct, as well as the selective tests that provide deeper insight into the learner. The examiner must also be able to collect and analyze instructional response data (formative data), analyze multiple other data sources, and describe a student’s intellectual development as it relates to, or helps explain, underachievement. Professional judgment is crucial to weighing evidence and sound decision-making.
Professional Judgment: A Vital Component of C-SEP
Professional judgment is defined as “the reasoned application of clear guidelines to the specific data and circumstances related to each unique individual. Professional judgment adheres to high standards based on research and informed practices established by professional organizations or agencies” (Iowa Department of Education, 2006, as cited in Schultz & Stephens, 2009). The National Center for Learning Disabilities (NCLD; 2023) developed a resource in conjunction with 11 national organizations (e.g., National Association of School Psychologists [NASP], Learning Disability Association [LDA], Council for Exceptional Children [CEC]) that lays out the critical elements of a quality evaluation process when SLD is suspected. This resource references professional judgment and provides guidance on using cognitive assessments.
Maki et al. (2022) studied professional judgment and decision-making using the discrepancy/consistency PSW model and multiple data sets (pre-referral data, teacher and parent information, observation, and exclusionary factors). This study used nine vignettes and a national sample (N = 343) of school psychologists. The vignettes presented SLD-positive, SLD-negative, and ambiguous data. The findings indicated a high accuracy rate (97.5%) with the SLD-positive cases but an overall accuracy of 65.5% across all three conditions, demonstrating a need for advanced data analysis skills beyond practitioners’ current repertoire.
Miller et al. (2016) compared three PSW models (DD/C, C-DM, and the Psychological Processing Analyzer [PPA] software) using 11 case studies. Their findings indicated that the DD/C model had 100% agreement with the expert panel, the C-DM had 54%, and the PPA software was too limited for the study. Interestingly, the 18 experts reached a consensus on all 11 case studies, and that consensus agreed with the DD/C model every time. A question for further study is what factored into the experts’ professional judgment and how they reached their conclusions without the DD/C. C-SEP holds that solid professional judgment is contingent on collecting multiple data sets under different conditions, combined with strong analytical skills, to make informed decisions.
Formative Data
The consideration of multiple data sets when using C-SEP is required by special education policy. The first is data obtained during instruction measuring instructional response. Achieving a preponderance of evidence requires some instructional response information. According to the IDEA, data collection for an SLD evaluation, regardless of method, must at a minimum include the following: data that demonstrate that prior to, or as a part of, the referral process, the child was provided appropriate instruction in regular education settings, delivered by qualified personnel; and data-based documentation of repeated assessments of achievement at reasonable intervals, reflecting formal assessment of student progress during instruction, which was provided to the child’s parents (Sect. 300.309, Determining the Existence of a Specific Learning Disability, 2018). A comprehensive RTI framework or multi-tiered system of support (MTSS), used as a service delivery model, is ideal for obtaining this information.
Historical Data
The historical data set carries much weight when using C-SEP. Information obtained from the family history, including health concerns, developmental history, family history of disability, and outside reports, is very useful in getting a complete picture of the student (Dombrowski, 2020; Schultz et al., 2012). A school history that includes report cards, intervention history, health records, state assessment results, and discipline records will provide preliminary evidence of when this learner’s struggles began to surface. Exclusionary factors are assessed with this data as well.
Other Data
All methods require evidence that a child is not meeting state-approved standards in one of the qualifying areas. The IDEA also requires that each evaluation include an observation of the child and that the examiner note any relevant behavior related to the child’s academic functioning. Additional data collected for a comprehensive evaluation include teacher reports and interviews, permanent products, and curriculum samples (Kwiatek & Schultz, 2014). If a student is not meeting grade-level expectations despite appropriate instruction, and the exclusionary factors have been considered, the scale of evidence tips toward an SLD designation. Despite the wealth of data obtained without any norm-referenced tests, additional data are needed for an SLD identification. Evaluators must examine the student’s PSW relative to intellectual development.
Measuring Intellectual Development
Typical intellectual development in school-age children is characterized by cognition, language, and academic skills working simultaneously in a manner conducive to successfully meeting the academic demands of school. In typical development, these abilities develop concurrently and consistently. Students suspected of SLD do not have typical intellectual development and, by definition, are not making adequate progress. These students are characterized by inconsistent abilities, displaying a PSW in aspects of intellectual development.
The IDEA, as part of the evaluation in the PSW statute, states:
(ii) The child exhibits a pattern of strengths and weaknesses in performance, achievement, or both relative to age, State-approved grade-level standards, or intellectual development that is determined by the group to be relevant to the identification of a specific learning disability, using appropriate assessments, consistent with § 300.304 and § 300.305; and (IDEA Regulations, 2023).
When using a PSW method to identify SLD, assessing intellectual development provides a more in-depth and detailed picture of learning than examining discrepancies and is more aligned with the SLD definition and the statute. The most salient features of SLD are aligned with the construct of intellectual development and the subsequent process to measure it. Tests of cognition measure psychological processes, language tests measure the involvement of language, and tests of achievement and other data measure the academic constructs (e.g., math calculation, basic reading, written expression). Evaluators consider environmental factors and examine exclusionary factors to determine whether they are “exclusionary” or “contributory.” C-SEP testing practices and other assessment practices are consistent with the statute and the definition of SLD.
C-SEP Testing Practices
C-SEP testing practices are designed to provide sufficient coverage of intellectual development. The data are interpreted through a PSW lens instead of a discrepancy lens. A critical added value of testing, seldom mentioned, is the information obtained from the testing process itself. Providing a structured set of problems to gain insight into learning gives the examiner, usually a diagnostician or psychologist, the ability to collect an objective data set and observe behavior. SLD models that include testing give the examiner a primary source of data instead of relying on secondary sources. The basic steps of C-SEP are briefly described below in Table 1:
C-SEP testing practices are consistent with the recommendations of the National Center for Learning Disabilities (NCLD; 2023). Principle 7 states, “Assessments that measure aspects of cognitive functioning may be used to rule out intellectual disabilities or to inform educational decisions by documenting areas in which the student is struggling or excelling.” Since it is impossible to measure language or achievement without the influence of cognition, we apply this principle to all norm-referenced tests, as each test measures aspects of cognition. Since its inception, C-SEP has adhered to this principle regarding norm-referenced testing (Schultz & Stephens-Pisecco, 2015; Schultz & Stephens-Pisecco, 2017).
C-SEP Data Analysis
Truly understanding a child’s intellectual development, and how it manifests in the individual child, requires practices that go beyond typical standard score analysis. A low standard score signals the examiner to find out why it is low, locate supporting evidence, and help answer the question, “What are the implications of this low score?” As mentioned, the examiner should cluster the low scores and look for similarities and contradictions between them. C-SEP uses task demand analysis with low scores to understand their impact on learning. The test manuals contain specific instructions on what processes, language, and skills are measured and how to conduct a demand analysis.
The Wechsler Individual Achievement Test, 4th Edition (WIAT-4; Breaux, 2020) Technical and Interpretive Manual summarizes the input, cognitive processing, and output demands for each subtest to support task analysis. Analyzing the task demands of a test provides a more informative interpretation of performance, especially when determining strengths and weaknesses for intervention. According to the manual, input and output difficulties are likely related to sensory, motor, cultural, or linguistic factors, whereas processing difficulties are consistent with neuropsychological disorders (e.g., SLD, ADHD). This type of analysis is consistent with measuring “intellectual development” and aids in diagnostic precision and determining instructional accommodations. The relationship to a definitional aspect of SLD, specifically the ability to “listen (input), think (processing), speak (output),” is clear. A practical example would be an interpretive statement such as this: The student has consistently struggled with tasks that require receptive language and may benefit from added visual supports during instruction.
The Kaufman Tests of Educational Achievement, 3rd Edition (KTEA-3; 2014) also provide an interpretive option based on task demands. The neuropsychological and cognitive processes required by each subtest are aligned with the academic skill, reinforcing the measurement of intellectual development and how to interpret it. The Woodcock-Johnson Tests of Achievement-IV (WJ IV-ACH; Schrank et al., 2014a, b, c) manual supports the notion that achievement tests measure cognitive processes by stating that most achievement tests require the integration of multiple cognitive abilities, allowing the examiner to obtain information about processing. The Woodcock-Johnson Tests of Oral Language-IV (WJ IV-OL; 2014) manual states that tests of language also measure cognition. They further state that using the WJ IV assessment system (WJ IV tests of cognition, achievement, and oral language) can help the examiner consider the relationships among oral language, cognition, and achievement. These instructions are consistent with measuring intellectual development and C-SEP testing procedures.
To illustrate a practical application of task demand analysis, consider the emergent bilingual student. After testing, the examiner can review low scores to analyze their language and cultural demands and the effects of those demands on performance (Stephens et al., 2023). Alternatively, consider a student who is unable to read with fluency. Because fluency requires both processing speed and word reading, a task demands analysis helps the examiner determine appropriate accommodations or instruction. For example, a student who can read words accurately but has weak processing speed would not benefit from decoding strategies; appropriate supports would instead be extended time and fluency interventions (e.g., repeated readings, timed readings). Decoding strategies would be appropriate for a student who has adequate processing speed but poor word reading skills. A common criticism of norm-referenced tests is that they fail to contribute meaningfully to intervention (Fletcher & Miciak, 2017, 2019; McGill et al., 2018; Miciak et al., 2016); however, this claim is countered by evidence to the contrary (Adlof, 2020; Flanagan et al., 2006; Schneider & Kaufman, 2017).
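The fluency example above amounts to a small decision rule: which instructional response fits depends on which task demand is weak. The sketch below is purely illustrative — the `suggest_support` function, the score values, and the cut-off of 85 (one standard deviation below a mean of 100) are hypothetical assumptions for demonstration, not clinical criteria.

```python
# Illustrative sketch of the task-demand reasoning for poor reading fluency.
# Inputs are standard scores (mean 100, SD 15); the threshold is hypothetical.

def suggest_support(word_reading: int, processing_speed: int,
                    weak: int = 85) -> str:
    """Return an illustrative instructional suggestion for a fluency problem."""
    if word_reading >= weak and processing_speed < weak:
        # Words are read accurately but slowly: build speed, not decoding.
        return "extended time; fluency intervention (repeated/timed readings)"
    if word_reading < weak and processing_speed >= weak:
        # Speed is adequate but word reading is weak: target decoding.
        return "explicit decoding instruction"
    if word_reading < weak and processing_speed < weak:
        return "combined decoding and fluency support"
    return "fluency difficulty not explained by these two scores; examine other task demands"

print(suggest_support(98, 78))  # adequate word reading, slow processing speed
print(suggest_support(78, 98))  # adequate processing speed, weak word reading
```

The point of the sketch is not automation but transparency: making the task-demand rationale explicit shows why two students with identical fluency scores can warrant different supports.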
The reliability of PSW models has frequently been criticized; however, alternative models, such as the “hybrid model” (Fletcher & Miciak, 2017), have yet to clearly articulate how poor instructional response, low achievement, and exclusionary factors lead to better intervention outcomes, and the reliability of such models is likewise unknown (Fletcher & Miciak, 2019; Spencer et al., 2014). Because no SLD method can claim both reliability and a direct link to positive treatment outcomes, the identification process must at least be reliable. This means collecting enough trustworthy data, representing multiple points in time (i.e., history, during instruction), to measure current performance and predict future performance.
C-SEP does not rely on a “smoking gun” or arbitrary cut-offs to identify SLD; instead, identification rests on careful analysis of each data source, an understanding of the data’s limitations, and adherence to state criteria. As with all models, it requires professional judgment combined with objective data. Thoughtful professional judgment requires advanced training, critical thinking skills, data collection strategies, an understanding of child development, and the ability to objectively integrate, interpret, and analyze data (Schultz & Stephens, 2009). An SLD designation is based on a preponderance of evidence using integrated data analysis. Oxford defines preponderance as “the quality or fact of being greater in number, quantity, or importance.” “Preponderance” also carries a legal meaning as a burden of proof: “more likely to occur than not” (Preponderance of the Evidence, n.d.).
When using C-SEP, an identification of SLD is recommended when the evidence shows that (a) the student is not achieving adequately for the child’s age or meeting state-approved standards in one or more of the qualifying areas, (b) the student exhibits poor instructional response to both core and supplemental instruction (RTI), (c) the student shows atypical intellectual development related to specific areas of learning (e.g., math, writing) despite other areas demonstrating the ability to think, reason, and learn (e.g., listening comprehension), and (d) exclusionary factors have been considered and ruled out as the primary reason for the student’s failure. The decision is ultimately one of professional judgment using the preponderance standard.
Some studies and other scholarly work have demonstrated limitations of professional judgment, including inconsistent decision-making and confirmation bias (Benson & Newman, 2010; Maki & Adams, 2020). To reduce these limitations, C-SEP endorses integrated data analysis (IDA) procedures (see Curran & Hussong, 2009; Hussong et al., 2013) for practitioners working with multiple data sets. IDA is defined as pooling multiple data sources into one analysis (Curran & Hussong, 2009). This type of analysis is most appropriate for mixed-methods research. Since no empirical research has conclusively settled the “which method should we use” question, an empirical approach to each SLD referral and its resulting data is required.
A well-planned SLD evaluation has the same elements as a well-planned mixed-methods convergent design. In a convergent design, the qualitative and quantitative data are collected during a similar timeframe. Data collection instruments, or various tools and strategies, are intentionally selected to include related items so that both instruments elucidate data about the same phenomena (Moseholm & Fetters, 2017). Pattern-seeking techniques are described in Schultz et al. (2012) and include the following steps for analyzing the data sets.
1. Examine the chain of evidence, including information derived from informal assessments such as progress monitoring data, benchmark testing, historical data, curriculum-based assessments, and standardized testing results. Compare these data to the referral concerns. Do they support the hypothesized academic problem? Are the data comprehensive, and do they reflect learning under several conditions?
2. Conduct pattern-seeking analysis by examining the trustworthiness of the data. To complete this step, the examiner must understand the limitations of different types of data. Questions that help determine trustworthiness include: What was the treatment integrity of RTI? What are the expectations of the referral source, and did they affect the objectivity of the data? Do these data reflect learning, or one of the exclusionary factors?
3. Triangulate the data. When examining a PSW, three primary data sets are collected: the historical data set, the formative data set, and the norm-referenced data set. The advantage of triangulating multiple types of data is that the strengths of one type offset the weaknesses of the others. For example, a norm-referenced math test usually measures a relatively limited number of subskills in one administration, whereas the formative and historical data sets cover many more skills and, because they represent actual achievement, carry greater ecological validity.
4. Cross-validate the data. In this step, the data are further examined to determine whether they consistently confirm the hypothesis or whether discrepancies exist between data sources. Can the discrepancies be explained or reconciled? When establishing a PSW with C-SEP, multiple data points must support each strength and weakness.
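Steps 3 and 4 above can be illustrated with a minimal sketch of triangulation and cross-validation: each data set independently flags areas of concern, and only areas supported by multiple data points survive. The data-set contents, the flagged areas, and the "at least two of three sources" rule are hypothetical assumptions for demonstration, not C-SEP criteria.

```python
# Illustrative triangulation of the three primary C-SEP data sets.
# Flagged areas of weakness per data set are hypothetical examples.
from collections import Counter

historical = {"math calculation", "math problem solving"}   # grades, records
formative = {"math calculation", "reading fluency"}         # CBM, benchmarks
norm_referenced = {"math calculation"}                      # standardized tests

counts = Counter()
for source in (historical, formative, norm_referenced):
    counts.update(source)

# Cross-validation (hypothetical rule): a weakness counts as supported by a
# preponderance of evidence only when at least two data sets flag it.
converging = {area for area, n in counts.items() if n >= 2}
discrepant = {area for area, n in counts.items() if n == 1}

print("supported by multiple data points:", converging)  # {'math calculation'}
print("discrepancies to reconcile:", discrepant)
```

Areas flagged by only one source are not discarded; as step 4 notes, the examiner must explain or reconcile the discrepancy before reaching a conclusion.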
To reiterate, when using C-SEP, the data collected must converge, and the preponderance of data must be consistent and relevant to identifying SLD. Contextual assessment practices assist in the proper “weighing” of data (Schultz et al., 2021). Task demand analysis assists in identifying the specific psychological processes that are considered disordered; it also aids diagnostic precision, as each student has a unique “learning profile” based on numerous factors, including those not measured by norm-referenced tests. While cognitive profiles are often criticized (McGill et al., 2018), other evidence supports their use (Allen & Hancock, 2008; Berninger & O’Malley May, 2011; Capin et al., 2021; Compton et al., 2011; Valdois et al., 2020). Cognitive profiles alone are insufficient; however, when language and all other factors are considered, a learner profile can be obtained to help with differential diagnosis, instructional decisions, and accommodations and modifications.
Conclusion
The best method of SLD identification will continue to be a subject of much debate as we move well into the second decade since IDEA 2004. C-SEP practices encompass the eight principles of SLD identification in the NCLD Joint Principles Document (NCLD, 2023) and advocate for responsible and strategic testing practices. The measurement of intellectual development is more closely aligned with the definition of SLD and replaces discrepancy practices. Testing data document areas in which a student is struggling or excelling and offer insight into a child’s unique PSW to create a learner profile. Professional judgment is informed by the use of a variety of tools and strategies. C-SEP endorses several interpretive strategies: task demand analysis, contextual assessment, integrated data analysis, and pattern-seeking analysis. When examiners adhere to the instructions and interpretive options provided by the test publisher and to all state and federal regulations, C-SEP remains a viable option for identifying SLD.
References
Adlof, S. M. (2020). Promoting reading achievement in children with developmental language disorders: What can we learn from research on specific language impairment and dyslexia? Journal of Speech Language and Hearing Research, 63(10), 3277–3292. https://doi.org/10.1044/2020_jslhr-20-00118.
Alfonso, V. C., & Flanagan, D. P. (2018). Essentials of Specific Learning Disability Identification. 2nd Edition. New York: Wiley.
Allen, K. D., & Hancock, T. E. (2008). Reading comprehension improvement with individualized cognitive profiles and metacognition. Literacy Research and Instruction, 47(2), 124–139. https://doi.org/10.1080/19388070801938320.
Benson, N., & Newman, I. (2010). Potential utility of actuarial methods for identifying specific learning disabilities. Psychology in the Schools, 47(6), 538–550. https://doi.org/10.1002/pits.20489.
Benson, N. F., Beaujean, A. A., McGill, R. J., & Dombrowski, S. C. (2018). Critique of the core-selective evaluation process. The Dialog, 47(2), 14–18.
Berninger, V. W., & O’Malley May, M. (2011). Evidence-based diagnosis and treatment for specific learning disabilities involving impairments in written and/or oral language. Journal of Learning Disabilities, 44(2), 167–183. https://doi.org/10.1177/0022219410391189.
Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78(1), 246–263. https://doi.org/10.1111/j.1467-8624.2007.00995.x.
Breaux, K. C. (2020). WIAT 4: Wechsler individual achievement test administration manual (4th ed.).
Büttner, G., & Hasselhorn, M. (2011). Learning disabilities: Debates on definitions, causes, subtypes, and responses. International Journal of Disability Development and Education, 58(1), 75–87. https://doi.org/10.1080/1034912x.2011.548476.
Cambridge dictionary: Find definitions, meanings & translations. (n.d.). Cambridge Dictionary| English Dictionary, Translations & Thesaurus. https://dictionary.cambridge.org/us/.
Capin, P., Cho, E., Miciak, J., Roberts, G., & Vaughn, S. (2021). Examining the reading and cognitive profiles of students with significant reading comprehension difficulties. Learning Disability Quarterly, 44(3), 183–196. https://doi.org/10.1177/0731948721989973.
Cave, S. N., Wright, M., & Von Stumm, S. (2022). Change and stability in the association of parents’ education with children’s intelligence. Intelligence, 90, 101597. https://doi.org/10.1016/j.intell.2021.101597.
Christo, C., D’Incau, B. J., & Ponzuric, J. (2016). Response to McGill and Busse, when theory trumps science: A critique of the PSW model for SLD identification. Contemporary School Psychology, 21(1), 19–22. https://doi.org/10.1007/s40688-016-0098-6.
Colker, R. (2011). The learning disability mess. Journal of Gender Social Policy & the Law, 19(4), 101–125. https://doi.org/10.18574/nyu/9780814708101.003.0013.
Compton, D. L., Fuchs, L. S., Fuchs, D., Lambert, W., & Hamlett, C. (2011). The cognitive and academic profiles of reading and mathematics learning disabilities. Journal of Learning Disabilities, 45(1), 79–95. https://doi.org/10.1177/0022219410393012.
Cottrell, J. M., & Barrett, C. A. (2017). Examining school psychologists’ perspectives about specific learning disabilities: Implications for practice. Psychology in the Schools, 54(3), 294–308. https://doi.org/10.1002/pits.21997.
Curran, P. J., & Hussong, A. M. (2009). Integrative data analysis: The simultaneous analysis of multiple data sets. Psychological Methods, 14(2), 81–100. https://doi.org/10.1037/a0015914.
Cutting, L. E., Materek, A., Cole, C. A., Levine, T. M., & Mahone, E. M. (2009). Effects of fluency, oral language, and executive function on reading comprehension performance. Annals of Dyslexia, 59(1), 34–54. https://doi.org/10.1007/s11881-009-0022-0.
Deary, I. J., Johnson, W., & Houlihan, L. M. (2009). Genetic foundations of human intelligence. Human Genetics, 126(1), 215–232. https://doi.org/10.1007/s00439-009-0655-4.
Decker, S. L., Hale, J. B., & Flanagan, D. P. (2013). Professional practice issues in the assessment of cognitive functioning for educational applications. Psychology in the Schools, 50(3), 300–313. https://doi.org/10.1002/pits.21675.
Dehn, M. J. (2014). Enhancing SLD diagnoses through the identification of psychological processing deficits. The Australian Educational and Developmental Psychologist, 30, 119–139.
Dombrowski, S. C. (2020). Psychoeducational assessment and report writing. Springer Nature.
U.S. Department of Education. (n.d.). IDEA final regulations. https://sites.ed.gov/idea/files/finalregulations.pdf.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., & Mascolo, J. T. (2006). The achievement test desk reference, 2nd ed. (ATDR-2). New York: Wiley.
Flanagan, D. P., & Schneider, W. J. (2016). Cross-battery assessment? Xba PSW? A case of mistaken identity: A commentary on Kranzler and colleagues’ classification agreement analysis of cross-battery assessment in the identification of specific learning disorders in children and youth. International Journal of School & Educational Psychology, 4(3), 137–145. https://doi.org/10.1080/21683603.2016.1192852.
Flanagan, D. P., Fiorello, C. A., & Ortiz, S. O. (2010). Enhancing practice through the application of Cattell-Horn-Carroll theory and research: A third method approach to specific learning disability identification. Psychology in the Schools, 47(7), 739–760. https://doi.org/10.1002/pits.20501.
Fletcher, J. M., & Miciak, J. (2017). Comprehensive cognitive assessments are not necessary for the identification and treatment of learning disabilities. Archives of Clinical Neuropsychology, 32(1), 2–7. https://doi.org/10.1093/arclin/acw103.
Fletcher, J. M., & Miciak, J. (2019). The identification of specific learning disabilities: A summary of research on best practices. Texas Center for Learning Disabilities.
Foorman, B. R., Petscher, Y., & Herrera, S. (2018). Unique and common effects of decoding and language factors in predicting reading comprehension in grades 1–10. Learning and Individual Differences, 63, 12–23. https://doi.org/10.1016/j.lindif.2018.02.011.
Gottfredson, L. (1997). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence, 24, 13–23.
Hale, J., Alfonso, V., Berninger, V., Bracken, B., Christo, C., Clark, E., Cohen, M., Davis, A., Decker, S., Denckla, M., Dumont, R., Elliott, C., Feifer, S., Fiorello, C., Flanagan, D., Fletcher-Janzen, E., Geary, D., Gerber, M., Gerner, M., & Yalof, J. (2010). Critical issues in response-to-Intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert white paper consensus. Learning Disability Quarterly, 33(3), 223–236. https://doi.org/10.1177/073194871003300310.
Hulme, C., Snowling, M. J., West, G., Lervåg, A., & Melby-Lervåg, M. (2020). Children’s language skills can be improved: Lessons from psychological science for educational policy. Current Directions in Psychological Science, 29(4), 372–377. https://doi.org/10.1177/0963721420923684.
Hussong, A. M., Curran, P. J., & Bauer, D. J. (2013). Integrative data analysis in clinical psychology research. Annual Review of Clinical Psychology, 9(1), 61–89. https://doi.org/10.1146/annurev-clinpsy-050212-185522.
IDEA regulations, 34 CFR §§ 300.1–300.818 (2023).
Individuals with Disabilities Education Act (IDEA) (2022, October 11). Individuals with Disabilities Education Act. https://sites.ed.gov/idea/.
Kaufman, A. S., & Kaufman, N. L. (2014). Kaufman Test of Educational Achievement, Third Edition. Bloomington, MN: NCS Pearson.
Kendeou, P., Van den Broek, P., White, M. J., & Lynch, J. S. (2009). Predicting reading comprehension in early elementary school: The independent contributions of oral language and decoding skills. Journal of Educational Psychology, 101(4), 765–778. https://doi.org/10.1037/a0015956.
Kranzler, J. H., Floyd, R. G., Benson, N., Zaboski, B., & Thibodaux, L. (2016). Cross-battery assessment pattern of strengths and weaknesses approach to the identification of specific learning disorders: Evidence-based practice or pseudoscience? International Journal of School & Educational Psychology, 4(3), 146–157. https://doi.org/10.1080/21683603.2016.1192855.
Kranzler, J. H., Gilbert, K., Robert, C. R., Floyd, R. G., & Benson, N. F. (2019). Further examination of a critical assumption underlying the dual-discrepancy/Consistency approach to specific learning disability identification. School Psychology Review, 48(3), 207–221. https://doi.org/10.17105/spr-2018-0008.v48-3.
Kwiatek, R., & Schultz, E. K. (2014). Using informal assessment data to support the diagnosis of specific learning disability. The Dialog, 43, 12–15.
Lauro, J., Core, C., & Hoff, E. (2020). Explaining individual differences in trajectories of simultaneous bilingual development: Contributions of child and environmental factors. Child Development, 91(6), 2063–2082. https://doi.org/10.1111/cdev.13409.
Lemos, G. C., Almeida, L. S., & Colom, R. (2011). Intelligence of adolescents is related to their parents’ educational level but not to family income. Personality and Individual Differences, 50(7), 1062–1067. https://doi.org/10.1016/j.paid.2011.01.025.
Macnamara, B. N., & Rupani, N. S. (2017). The relationship between intelligence and mindset. Intelligence, 64, 52–59. https://doi.org/10.1016/j.intell.2017.07.003.
Maki, K. E., & Adams, S. R. (2018). A current landscape of specific learning disability identification: Training, practices, and implications. Psychology in the Schools, 56(1), 18–31. https://doi.org/10.1002/pits.22179.
Maki, K. E., & Adams, S. R. (2019). Specific learning disabilities identification: Do the identification methods and data matter? Learning Disability Quarterly, 43(2), 63–74. https://doi.org/10.1177/0731948719826296.
Maki, K. E., Kranzler, J. H., & Moody, M. E. (2022). Dual discrepancy/consistency pattern of strengths and weaknesses method of specific learning disability identification: Classification accuracy when combining clinical judgment with assessment data. Journal of School Psychology, 92, 33–48. https://doi.org/10.1016/j.jsp.2022.02.003.
McGill, R. J., Styck, K. M., Palomares, R. S., & Hass, M. R. (2015). Critical issues in specific learning disability identification. Learning Disability Quarterly, 39(3), 159–170. https://doi.org/10.1177/0731948715618504.
McGill, R. J., Dombrowski, S. C., & Canivez, G. L. (2018). Cognitive profile analysis in school psychology: History, issues, and continued concerns. Journal of School Psychology, 71, 108–121. https://doi.org/10.1016/j.jsp.2018.10.007.
McGregor, K. K. (2020). How we fail children with developmental language disorder. Language Speech and Hearing Services in Schools, 51(4), 981–992. https://doi.org/10.1044/2020_lshss-20-00003.
McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37(1), 1–10. https://doi.org/10.1016/j.intell.2008.08.004.
McGrew, K. S. (2021). The cognitive-affective-motivation model of learning (CAMML): Standing on the shoulders of giants. Canadian Journal of School Psychology, 37(1), 117–134. https://doi.org/10.1177/08295735211054270.
McGrew, K. S., LaForte, E. M., & Schrank, F. A. (2014a). Technical Manual. Woodcock-Johnson IV. Riverside.
Miciak, J., Fletcher, J. M., Stuebing, K. K., Vaughn, S., & Tolar, T. D. (2014). Patterns of cognitive strengths and weaknesses: Identification rates, agreement, and validity for learning disabilities identification. School Psychology Quarterly, 29(1), 21–37. https://doi.org/10.1037/spq0000037.
Miciak, J., Williams, J. L., Taylor, W. P., Cirino, P. T., Fletcher, J. M., & Vaughn, S. (2016). Do processing patterns of strengths and weaknesses predict differential treatment response? Journal of Educational Psychology, 108(6), 898–909. https://doi.org/10.1037/edu0000096.
Miller, D. C., Maricle, D. E., & Jones, A. M. (2016). Comparing three patterns of strengths and weaknesses models for identifying specific learning disabilities. Learning Disabilities: A Multidisciplinary Journal, 21(2), 31–45. https://doi.org/10.18666/ldmj-2016-v21-i2-7349.
Moseholm, E., & Fetters, M. D. (2017). Conceptual models to guide integration during analysis in convergent mixed methods studies. Methodological Innovations, 10(2), 205979911770311. https://doi.org/10.1177/2059799117703118.
Nation, K., Clarke, P., Marshall, C. M., & Durand, M. (2004). Hidden language impairments in children. Journal of Speech Language and Hearing Research, 47(1), 199–211. https://doi.org/10.1044/1092-4388(2004/017).
National Center for Learning Disabilities. (2022, May 17). SLD & eligibility under IDEA: Resources to improve practice & policy. NCLD. https://www.ncld.org/get-involved/understand-the-issues/sld-eligibility-under-idea-resources-to-improve-practice-policy/.
Oommen, A. (2014). Factors influencing intelligence quotient. Journal of Neurology & Stroke, 1(4). https://doi.org/10.15406/jnsk.2014.01.00023.
Perlovsky, L. (2009). Language and cognition. Neural Networks, 22(3), 247–257. https://doi.org/10.1016/j.neunet.2009.03.007.
Plomin, R., & Deary, I. J. (2014). Genetics and intelligence differences: Five special findings. Molecular Psychiatry, 20(1), 98–108. https://doi.org/10.1038/mp.2014.105.
Resnick, L. B., & Schantz, F. (2015). Re-thinking intelligence: Schools that build the mind. European Journal of Education, 50(3), 340–349. https://doi.org/10.1111/ejed.12139.
Rindermann, H., Flores-Mendoza, C., & Mansur-Alves, M. (2010). Reciprocal effects between fluid and crystallized intelligence and their dependence on parents’ socioeconomic status and education. Learning and Individual Differences, 20(5), 544–548. https://doi.org/10.1016/j.lindif.2010.07.002.
Rowe, M. L. (2012). A longitudinal investigation of the role of quantity and quality of child-directed speech in vocabulary development. Child Development, 83(5), 1762–1774. https://doi.org/10.1111/j.1467-8624.2012.01805.x.
Schneider, W. J., & Kaufman, A. S. (2017). Let’s not do away with comprehensive cognitive assessments just yet. Archives of Clinical Neuropsychology. https://doi.org/10.1093/arclin/acw104.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014a). Woodcock-Johnson IV tests of oral language. Riverside.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014b). Woodcock-Johnson IV tests of achievement. Riverside Publishing.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014c). Woodcock-Johnson IV tests of oral language. Riverside Publishing.
Schrank, F. A., Stephens-Pisecco, T. L., & Schultz, E. K. (2017). The core-selective evaluation process applied to identification of specific learning disability (Woodcock-Johnson IV Assessment Service Bulletin No. 8). Houghton Mifflin Harcourt.
Schultz, E. K., & Stephens, T. L. (2009). Utilizing Professional Judgment within the SLD eligibility determination process: Guidelines for Educational diagnosticians and ARD Committee members. The Dialog, 38, 3–6.
Schultz, E. K., & Stephens, T. L. (2015). Core-selective evaluation process: An efficient & Comprehensive Approach to identify students with SLD using the WJ IV. The DiaLog, 44(2), 5–12.
Schultz, E. K., & Stephens, T. L. (2017). Using the core-selective evaluation process (C-SEP) to identify a pattern of strengths and weaknesses. The Dialog, 46(1), 9–15.
Schultz, E., & Stephens, T. (2018). Using the core-selective evaluation process to identify a PSW: Integrating research, practice, and policy. Special Education Research, Practice and Policy, 138–155.
Schultz, E. K., & Stephens-Pisecco, T. L. (2018). Exposing Educational Propaganda: A response to Benson et al. (2018) critique of C-SEP. The Dialog, 48(1), 10–16.
Schultz, E. K., Simpson, C. G., & Lynch, S. (2012). Specific learning disability identification: What constitutes a pattern of strengths and weaknesses? Learning Disabilities, 18(2), 87–95.
Schultz, E. K., Rutherford, E., & Cavitt, D. (2021). Intellectual development and specific learning disability: The role of norm-referenced tests. Special Education Research, Policy & Practice, Fall 2021.
Schultz, E. K., Ramirez, K., & Stephens, T. L. (2023). Differentiating speech-language impairment and specific learning disability: Implications for comprehensive evaluations. The DiaLog: Journal of the Texas Educational Diagnosticians’ Association, 52(1), 12–17.
Section 300.309 determining the existence of a specific learning disability. (2018, May 25). Individuals with Disabilities Education Act. https://sites.ed.gov/idea/regs/b/d/300.309.
Snyderman, M., & Rothman, S. (1987). Survey of expert opinion on intelligence and aptitude testing. American Psychologist, 42(2), 137–144. https://doi.org/10.1037/0003-066x.42.2.137.
Spencer, M., Wagner, R. K., Schatschneider, C., Quinn, J. M., Lopez, D., & Petscher, Y. (2014). Incorporating RTI in a hybrid model of reading disability. Learning Disability Quarterly, 37(3), 161–171. https://doi.org/10.1177/0731948714530967.
Stuebing, K. K., Fletcher, J. M., Branum-Martin, L., Francis, D. J., & VanDerHeyden, A. (2012). Evaluation of the technical adequacy of three methods for identifying specific learning disabilities based on cognitive discrepancies. School Psychology Review, 41(1), 3–22. https://doi.org/10.1080/02796015.2012.12087373.
Taylor, W. P., Miciak, J., Fletcher, J. M., & Francis, D. J. (2017). Cognitive discrepancy models for specific learning disabilities identification: Simulations of psychometric limitations. Psychological Assessment, 29(4), 446–457. https://doi.org/10.1037/pas0000356.
The preponderance of the evidence. (n.d.). LII / Legal Information Institute. https://www.law.cornell.edu/wex/preponderance_of_the_evidence.
Thomas, M. S. C., Forrester, N. A., & Ronald, A. (2013). Modeling socioeconomic status effects on language development. Developmental Psychology, 49(12), 2325–2343. https://doi.org/10.1037/a0032301.
Valdois, S., Reilhac, C., Ginestet, E., & Line Bosse, M. (2020). Varieties of cognitive profiles in poor readers: Evidence for a VAS-impaired subtype. Journal of Learning Disabilities, 54(3), 221–233. https://doi.org/10.1177/0022219420961332.
Von Stumm, S., & Plomin, R. (2015). Socioeconomic status and the growth of intelligence from infancy through adolescence. Intelligence, 48, 30–36. https://doi.org/10.1016/j.intell.2014.10.002.
Funding
Open access funding provided by SCELC, Statewide California Electronic Library Consortium. The authors declare that no funds, grants, or other supports were received during the preparation of this manuscript.
Ethics declarations
Conflict of Interest
The authors declare there is no conflict of interest with this manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Schultz, E.K., Stephens, T. & Olvera, P. Intellectual Development and the Core-Selective Evaluation Process: Gaining Insight and Understanding of Students with Specific Learning Disabilities. Contemp School Psychol 28, 353–364 (2024). https://doi.org/10.1007/s40688-024-00499-3