Student Evaluation

  • Caroline I. Magyar
  • Vincent Pandolfi

Chapter Learning Objectives

  • Increase understanding of the purposes of student assessment

  • Increase understanding of the methods of assessment used in the ASD Classroom Model

  • Identify the assessment measures used within the ASD Classroom Model

  • Increase understanding of how assessment data inform intervention and program planning



Assessing the educational and behavioral support needs of students with ASD and evaluating their progress are essential activities in the ASD Classroom or Inclusion Model (referred to in this chapter as the ASD Classroom Model). Data pertaining to student characteristics, instructional levels, and the quality of the instructional context help inform development of an education plan and help the ASD Classroom or Inclusion Team (referred to in this chapter as the ASD Classroom Team) evaluate student outcomes. Within this model, student evaluation (i.e., psychoeducational assessment) involves the continuous assessment of student performance and learning. Continuous assessment allows the ASD Classroom Team to monitor changes in student performance, identify specific learning and behavioral difficulties, and measure response to intervention and achievement. It also provides the team with the data needed to resolve, in a timely manner, specific learning and/or behavioral difficulties: factors that can otherwise adversely affect student achievement.

The chapter begins with a review of the purposes of psychoeducational assessment in the ASD Classroom Model and describes the assessment methods that are routinely used in the assessment protocol. A table summarizes this information and includes the developmental or skill area assessed and the corresponding measure(s) that are used. The chapter describes how assessment data inform intervention and program planning and problem solving and concludes with a description of considerations in assessing students with ASD.

Purpose of Psychoeducational Assessment in the ASD Classroom Model

Psychoeducational assessment is the process used for gathering information about a student and the instructional context in order to develop an appropriate education and intervention plan (Salvia & Ysseldyke, 1978). Data are also used for progress monitoring and for problem-solving student learning and behavioral difficulties. Assessment data enable the team to understand the dynamic interaction between student learning characteristics and the instructional context so they can make decisions that ensure that instruction (intervention) is tailored to each student’s unique needs. This is the approach taken to student assessment in the ASD Classroom Model.

A proactive evaluation of instructional levels and intervention needs is required for students with ASD because these students present with highly variable developmental, academic, and behavioral profiles that change over time and across contexts. The assessment protocol applied here is standardized and continuous, which helps ensure that similar assessments are provided for all students and allows for efficient decision-making and timely intervention.

There are three primary purposes of assessment in the ASD Classroom Model. The first is to describe the student’s learning characteristics. This information determines the broad ability profile and learning style of the student and helps identify general curricular and behavior support need areas. The team can then decide on the types of general classroom instructional supports and modifications that are likely to be needed to maximize student participation and learning.

The second purpose of assessment focuses on routine progress monitoring and evaluation of student achievement. Progress monitoring serves several purposes. First, the team can evaluate the effectiveness of instruction and behavior support plans. Second, it allows for timely problem identification and subsequent intervention. Finally, progress monitoring is useful for detecting misalignment of instruction and student learning characteristics, even prior to the onset of learning and/or behavior problems. By regularly monitoring instructional levels and appropriately differentiating instruction, students are likely to be more actively engaged in learning and less likely to exhibit interfering behaviors. This proactive approach is essential to maintaining the (inclusive) classroom’s focus on instruction and not on behavior management.

The third purpose of assessment is for problem-solving specific learning and/or behavioral difficulties. Once a problem has been identified, the ASD Classroom Team needs to complete a functional assessment to determine the contextual and student-specific variables that predict and maintain learning and/or behavioral difficulties. Below is a description of the student evaluation protocol.

Student Evaluation Protocol

The ASD Classroom Model applies a standardized student evaluation protocol. This protocol includes two components: (a) guidelines to evaluate student learning characteristics and achievement, and (b) a student performance data system for progress monitoring and problem-solving student learning and behavioral difficulties. This protocol is used in conjunction with school, district, and state evaluation policies, procedures, and practices.

The student evaluation protocol applies a broad-based, multi-method model of assessment. Assessment methods include norm- and criterion-referenced testing, curriculum-based assessment, ecological assessment, interview, and observation. In addition, the ASD Team uses a standardized data collection system (Magyar, 2006e) to assist in the activities related to the three purposes of assessment. The specific methods and measures used by a particular ASD Team at any point in time are determined by the purpose of the assessment and the specific data required to make student program planning decisions. Each of the assessment methods is described briefly below.

Assessment Methods

Norm-Referenced Assessment

Norm-referenced assessment helps identify broad student learning characteristics. The data reflect performance levels compared to age- or grade-level peers and indicate a student's relative strengths and weaknesses within his or her own ability profile (e.g., see Sattler, 2001). These data establish a baseline level of functioning within each developmental domain assessed and provide a point of reference for evaluating student achievement over time. Developmental domains that are routinely evaluated using norm-referenced measures include cognition, communication and language, adaptive behavior, and social-emotional behavior. Table 8.1 indicates those measures routinely used in the ASD Classroom Model.
Table 8.1

Select measures used in the ASD Classroom Model for evaluating student learner characteristics and instructional context

Developmental area/skill: Measure(s)

Core ASD characteristics

  • ASD symptoms: The Childhood Autism Rating Scale (Schopler et al., 1988)

  • Socialization and play: Social Skills Rating Scale (SSRS; Gresham & Elliot, 1990); Vineland Adaptive Behavior Scales-II Socialization Domain (Sparrow et al., 2005); Skillstreaming Checklist (Goldstein & McGinnis, 1997); Student Performance Data System (Magyar, 2006e)

  • Language and communication: various measures, including the Peabody Picture Vocabulary Test-Fourth Edition (Dunn & Dunn, 2007); Test of Language Development-Primary, Fourth Edition (Newcomer & Hammill, 2008); observation; Student Performance Data System (Magyar, 2006e)

Related characteristics

  • Adaptive, classroom participation, and prosocial behavior: Vineland Adaptive Behavior Scales-II (Sparrow et al., 2005); Kindergarten Survival Skills Checklist (Vincent et al., 1980); Student Performance Data System (Magyar, 2006e)

  • Cognition: Battelle Developmental Inventory-Second Edition (Newborg, 2005); Mullen Scales of Early Learning (Mullen, 1995); Wechsler Intelligence Scale for Children-Fourth Ed. (Wechsler, 2003); Stanford-Binet-Fifth Edition (Roid, 2003); Differential Ability Scales-II (Elliot, 2007)

  • Academic achievement: Peabody Individual Achievement Test-Revised/Normative Update (PIAT-R/NU; Markwardt, 1997); Student Performance Data System (Magyar, 2006e); teacher-made tests

  • Mental health/behavior: Child Behavior Checklist 1.5–5 (CBCL 1.5–5; Achenbach & Rescorla, 2000); Child Behavior Checklist 6–18 (CBCL 6–18; Achenbach & Rescorla, 2001); Functional Assessment Interview (see O'Neill et al., 1997)

Instructional context

  • Quality of instructional context: Classroom Observation Form (COF; Magyar & Pandolfi, 2006c)

  • Personnel performance: Personnel Performance Scale-Third Edition (PPS-3; Magyar & Pandolfi, 2006b)

  • Student engagement: Academic Engagement Form (Magyar & Pandolfi, 2006a)

Norm-referenced measures are not designed for an in-depth assessment of the student’s functioning within these developmental areas; rather, they provide data that describe samples of behavior or ability. Results assist the ASD Classroom Team in identifying general supports, accommodations, curricular content (core and supplemental), and instructional format. For example, if intelligence test data reflect working memory deficits, the teacher may make a curriculum modification such as reducing the number of math problems provided per independent work session. A related instructional support might include multiple opportunities to rehearse new information to facilitate the student’s acquisition of the instructional targets. If test data suggest that a student’s nonverbal skills are significantly better developed than his or her verbal skills, the teacher may include visual supports during instruction to improve verbal information processing and conduct regular comprehension checks to ensure learning.

Criterion-Referenced Testing

Criterion-referenced tests are used to measure a student’s performance in relation to some pre-established standard (Sattler, 2001). Many tests are commercially available, but teachers can also construct their own tests. Items are often linked to instructional goals (Salvia & Ysseldyke, 2001). Test data reflect the percentage of correct responses or rate of behavior and provide information pertaining to student instructional level, accuracy, retention, and mastery within a specific skill area (Salvia & Ysseldyke, 2001).

Criterion-referenced testing is used extensively in the ASD Classroom Model for those students receiving the supplemental curriculums (Magyar, 1998, 2000, 2001, 2003, 2006; described in Chap. 7). Students are routinely evaluated against specific performance criteria contained in these curriculums. Lesson plans provide detail on the learning objectives, performance and mastery criteria, and instructional procedures, materials, and activities. Personnel collect student data during each lesson; the data are graphed at the end of the lesson or school day, and progress is reviewed. If the student shows progress, the instructor continues with the lesson as written, and once the student attains the mastery criteria for the instructional target set, he or she moves on to the next learning objective. If the student's rate of learning is below his or her typical learning rate in that domain (known from previous assessment), the instructor determines whether modifications are needed to the lesson (e.g., pace of instruction, error correction procedure, reinforcement system) or whether additional data are needed to ascertain the reason for the learning difficulty (skill vs. performance deficit).
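The data-based decision process described above can be sketched in code. This is an illustrative sketch only: the function name and the mastery criterion (90% correct across two consecutive sessions) are hypothetical examples, not the curriculums' actual criteria.

```python
# Hypothetical sketch of a lesson-level decision rule based on per-session
# percent-correct data; thresholds here are illustrative assumptions.

def lesson_decision(session_accuracies, mastery=0.90, consecutive=2):
    """Return a next-step decision from per-session percent-correct data."""
    recent = session_accuracies[-consecutive:]
    # Mastery: criterion met across the required number of consecutive sessions
    if len(recent) == consecutive and all(a >= mastery for a in recent):
        return "advance to next learning objective"
    # Progress without mastery: continue the lesson as written
    if len(session_accuracies) >= 2 and session_accuracies[-1] > session_accuracies[-2]:
        return "continue lesson as written"
    # Flat or declining performance: revisit the lesson or gather more data
    return "review lesson: consider modifications or further assessment"

decision = lesson_decision([0.70, 0.85, 0.92, 0.95])  # hypothetical data
```

In practice the team, not a script, makes these judgments; the sketch only makes the branching logic of the text explicit.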

Other criterion-referenced measures can be used to identify relative academic strengths and weaknesses and to screen for potential learning difficulties (Shapiro, 1996). For example, the teacher may select a particular reading inventory to determine if a student is having difficulty with decoding and oral reading rate. Criterion-referenced data are shared with the other team members, who use them for intervention planning and for monitoring student response to the intervention.

Curriculum-Based Assessment

Curriculum-based measurement (CBM) and curriculum-based assessment (CBA; Gickling, Shane, & Croskery, 1989) are routinely used to assess student performance within the local curriculum. CBM utilizes curriculum materials and standardized measurement and scoring rules (e.g., number of words read correctly in 1 min). It serves several purposes, including (a) evaluating the effects of an instructional program, typically in the areas of mathematics, reading, spelling, and written expression (Deno, 2003), and (b) assessing a student's performance in specific academic areas (Howell & Nolet, 2000). These data are compared to the student's previous levels of performance. Through repeated measurement, the teacher is able to detect small but meaningful changes in student performance and obtain the data needed for timely decision-making about a student's response to specific academic interventions. CBM provides valuable information about the need for modifications to improve student achievement.
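As an illustration of how repeated CBM probes support decision-making, the sketch below computes a simple least-squares trend across weekly words-correct-per-minute (WCPM) scores. The function and the probe data are hypothetical assumptions, not part of the model's protocol.

```python
# Illustrative only: a least-squares slope over equally spaced CBM probes,
# e.g., weekly 1-minute reading probes scored as words correct per minute.

def trend_slope(scores):
    """Ordinary least-squares slope of scores across equally spaced probes."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

wcpm = [22, 24, 23, 27, 29, 30]   # six hypothetical weekly probe scores
slope = trend_slope(wcpm)         # average WCPM gain per week
improving = slope > 0             # crude indicator of response to instruction
```

In practice, teachers usually plot these probes and compare the trend line against an aimline; the slope computation above is just the arithmetic behind that comparison.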

With CBA, the teacher assesses the quality of the instructional context, student performance, and the functional relationship between the two (Lentz & Shapiro, 1986). The assessment data allow educators to evaluate the match between the instruction being provided and the student's ability, and to make decisions regarding curriculum and/or instructional adjustments to ensure an appropriate match (Gravois & Gickling, 2002). This method provides data to help determine if the student's learning difficulty is the result of a skill deficit, a performance (i.e., motivational) issue, or a combination of both. Both CBA and CBM are used to identify areas of misalignment between student instructional levels and the instructional context. Findings from the assessment are used to adjust instruction, modify curriculum, and/or develop a behavior support or other intervention plan that increases the student's skill, motivation to learn, or both.

Functional Behavior Assessment

Functional behavior assessment (FBA) is a type of ecological inventory designed to identify situations that predict and maintain behaviors that interfere with learning and socialization. FBA helps the ASD Classroom Team identify supports and interventions a student may need in order to gain full access to the curriculum and instructional process. The kinds of behaviors that interfere with student participation generally reflect behavioral excesses and behavioral deficits (Kanfer & Saslow, 1969), as well as problems with stimulus control. Behavioral excesses are behaviors that occur too often, with too much intensity, for long durations, or are not considered socially appropriate (e.g., stereotypies, monologues, circumscribed interests, aggression). Behavioral deficits are behaviors that occur with insufficient frequency, intensity, or duration, or with inadequate topography (e.g., social communication skills, social withdrawal, coping skills, and leisure skills). Problems with stimulus control refer to behaviors that occur at the wrong place or time (e.g., getting out of seat to play on the computer during a whole group lesson, laughing when a classmate is distressed).

Conceptualizing behavior problems in this way helps guide intervention planning because different types of interventions may be better suited for each type of problem. For example, interventions for behavioral excesses often include behavior reduction procedures such as differential reinforcement or extinction. Behavioral deficits are often addressed through direct instruction of skills and positive reinforcement procedures. Problems with stimulus control are often addressed through discrimination training and rule-based contingency management. In most instances, more than one type of intervention procedure is needed to address the complex behavior support needs of students with ASD.

The FBA protocol used in the ASD Classroom Model includes a combination of direct and indirect methods of assessment (e.g., see O’Neill et al., 1997). It includes on-going observation of the student and routine data collection regarding the antecedents and consequences of interfering and problematic social behaviors. This information is used to determine what persons, activities, settings, and/or times are associated with both the occurrence and nonoccurrence of problem behavior. Initial hypotheses are developed about the function(s) a behavior serves for a student, which most often include attention-seeking, avoidance/escape, tangible/activity-seeking, and sensory functions. Direct observation is also very helpful in identifying possible prosocial alternatives to problem behavior. These alternatives may include functionally equivalent behaviors and/or coping skills.

In addition to this general student monitoring, direct observation using various time-sampling methods is used to assess student behavior in different situations. Time sampling is useful for quantifying problem behavior when continuous observation throughout the school day is not possible. When using time sampling, it is important that observations be planned so that a representative sample of behavior is collected across relevant contexts.

Indirect assessment methods include rating scales and interview. Two rating scales are widely used to identify behavioral function: the Motivation Assessment Scale (Durand & Crimmins, 1992) and the Questions About Behavioral Function (Vollmer & Matson, 1999) scale. These measures are completed by team members, parents, and/or others, and provide data to assist with identifying the function of the target behavior. One semi-structured interview, the Functional Assessment Interview (O'Neill et al., 1997), allows for a thorough description of problem behavior as well as the situations that predict and maintain it. These indirect methods are not a substitute for direct observation; rating scale data should be used only to supplement findings from direct observation. Interview data are often helpful in clearly defining behaviors of concern and can be especially helpful for planning the observation schedule.


Interview

Interviews gather information from parents, school personnel, the student, and community providers about the student's perceived progress and needs. Interviewing collects information that cannot be obtained through direct observation and/or supplements data obtained from other measures. Several types of interviews are used in the ASD Classroom Model: structured, semi-structured, and informal. Structured and semi-structured interviews are typically used to gather information about a specific assessment target, such as a student's ability to communicate a need for a break as part of an FBA, symptoms of an anxiety disorder, or a student's adaptive behavior skills. Informal interviews are a routine part of the daily communication among team members to proactively assess the student's overall functioning and maintain alertness to possible learning and/or behavioral difficulties.

Interviews can assist with collecting information about parent/teacher perceptions of student strengths and weaknesses, gathering information on student sleep quality, ascertaining parental concerns, determining the community resources available to the family, and identifying the frequency and quality of interactions among key stakeholders (e.g., parent–teacher or teacher–administrator interactions). Evaluators can use interviews to clarify possible reasons for discrepancies in results obtained from various measures and informants to achieve a better understanding of the student and his/her needs.


Direct Observation

Direct observation is an indispensable part of the student evaluation protocol. This method allows for the collection of quantitative and qualitative information about some aspect of the student's performance or behavior and/or an element of the instructional context. Common observation methods used in the student evaluation protocol include direct observation of permanent products (e.g., work samples), event recording (e.g., frequency of aggression), duration recording (e.g., length of time out of seat), latency recording (e.g., length of time required to respond to a question), and interval recording/time sampling (e.g., percentage of 5-s intervals the student remained in seat). The specific method chosen is based on the purpose of the assessment, the nature of the behavior, and the data needed to make informed decisions. As noted above, it is important to conduct observations in a manner that yields a representative sample of the student's behavior.
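The interval recording/time sampling arithmetic mentioned above can be illustrated as follows; the function and the interval data are hypothetical examples, not part of the protocol itself.

```python
# Illustrative only: percentage of observation intervals in which a target
# behavior occurred (e.g., in-seat behavior scored in 5-second intervals).

def percent_intervals(observations):
    """observations: list of booleans, one per interval (True = occurred)."""
    if not observations:
        raise ValueError("no intervals observed")
    return 100.0 * sum(observations) / len(observations)

# Twelve hypothetical 5-second intervals; True means the student was in seat
in_seat = [True, True, False, True, True, True,
           False, True, True, True, True, False]
pct = percent_intervals(in_seat)   # 9 of 12 intervals = 75.0%
```

Collected across several sessions and settings, such percentages give the representative sample of behavior that the text emphasizes.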

Linking Student Evaluation Data to Program Planning

The overarching goal of student evaluation is to gather data on student needs and on how student learning relates to the instructional context. The way data are used depends on the purpose of the assessment. If the purpose is to describe student learning characteristics, then descriptive data analysis (quantitative and qualitative) summarizes the student's learning profile. If the purpose is to monitor progress, then data are gathered that show the student's response to instruction and/or intervention over time (e.g., graphs depicting performance across baseline and intervention phases). If the purpose is to examine more fully the student's instructional needs, then data on student characteristics and the instructional context are studied jointly to guide the selection or modification of curricular, instructional, and/or behavior supports.

Describing Student Learning Characteristics

Understanding each student's unique learning characteristics is necessary to guide the general design of the classroom (e.g., schedules, visual supports, functional communication supports) and instructional context (i.e., curriculum, instructional method, and behavior supports). The evaluation protocol routinely used in the ASD Classroom Model includes measures to assess autism symptoms, developmental and cognitive profile, language and social communication skills, personal and social self-sufficiency skills (adaptive skills), classroom adaptive skills, and emotional and behavioral needs. The instructional context is also assessed (described below) to evaluate its alignment with student needs, which helps ensure student participation in the learning process and improves the likelihood of positive outcomes. Multiple methods are used, including record review, interview, observation, and several norm-referenced tests and caregiver/teacher checklists and rating scales. The specific measures selected are determined by the purpose of the assessment, the scope of information needed, student-specific characteristics (e.g., presence of intellectual disability or severe language impairment), and characteristics of the measure itself (e.g., types of abilities/skills assessed, format, technical properties). Table 8.1 provides information on the student measures routinely used in the ASD Classroom Model.

Autism Symptoms

A variety of methods and measures are used to assess autism symptoms. The most common include direct observation, a review of diagnostic reports and records, and administration of the Childhood Autism Rating Scale (CARS; Schopler, Reichler, & Renner, 1988). The CARS is a 15-item paper-and-pencil measure that quantifies the severity of behaviors associated with autism. Items are rated on a scale from 1 (“normal”) to 4 (“severely abnormal”). Total scores at or above 30 strongly suggest the presence of autism: scores from 30 to 36 indicate mild symptom presentation, and scores at or above 37 indicate moderate to severe autism symptoms. The CARS is well suited for use by most ASD Classroom Teams because it requires relatively little training to administer and is widely used in the assessment of individuals with autism (Saemundsen, Magnusson, Smari, & Sigurdardottir, 2003).
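The CARS scoring logic described above can be expressed as a short sketch. The function and the item ratings are illustrative only; the cutoffs are those stated in the text, and interpretation of an actual administration should follow the test manual.

```python
# Illustrative sketch of CARS-style totaling and classification, using the
# cutoffs described in the text (>= 30 suggests autism; 30-36 mild;
# >= 37 moderate to severe). Ratings below are hypothetical.

def cars_severity(item_ratings):
    """Sum 15 item ratings (each 1-4) and classify by the stated cutoffs."""
    if len(item_ratings) != 15:
        raise ValueError("the CARS has 15 items")
    total = sum(item_ratings)
    if total < 30:
        category = "below autism cutoff"
    elif total <= 36:
        category = "mild symptom presentation"
    else:
        category = "moderate to severe symptoms"
    return total, category

# Hypothetical ratings for one student
total, category = cars_severity([2.5] * 10 + [3] * 5)   # total = 40.0
```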

Developmental and Cognitive Ability

Several developmental and cognitive measures can be used to inform intervention planning. For young children, two measures are used routinely: the Battelle Developmental Inventory-Second Edition (Newborg, 2005; birth to 7:11) and the Mullen Scales of Early Learning (Mullen, 1995; birth to 68 months). The Battelle assesses personal-social, adaptive behavior, motor, communication, and cognitive skills. It can be used as a norm-referenced, criterion-referenced, or curriculum-based tool. The Mullen is a widely used norm-referenced measure that assesses motor, visual processing, and language skills. Because the items on these measures are developmentally sequenced, the data contribute to the identification of functional academic goals. The choice of which measure to use should be based on the availability of the measure, examiner familiarity with administration and interpretation, and the age and ability level of the student. It is the first author's experience that both measures contribute good information to the determination of student ability and curriculum planning. However, the Mullen norms are now dated, so caution should be exercised when interpreting standard scores.

Several different measures are available for school-age students. These include the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; Wechsler, 2003), the Stanford-Binet-Fifth Edition (SB-5; Roid, 2003), and the Differential Ability Scales-Second Edition (DAS-II; Elliot, 2007). Each assesses a wide range of skills, but the measures differ from one another with respect to the specific abilities they assess and their test characteristics. For example, the WISC-IV assesses verbal and perceptual reasoning, working memory, and processing speed. The test includes sample and teaching items, and de-emphasizes speeded performance. The WISC-IV appears best suited for students functioning at or above the moderate range of intellectual disability. The SB-5 assesses five broad areas in both verbal and nonverbal domains: fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory. Subtest starting points are tailored to ability level, and criterion-referenced change-sensitive scores allow for progress monitoring. An Extended IQ (Full Scale only) is available to assess both extremely high-functioning and profoundly disabled individuals, and test accommodations exist for hearing/visual impairments, expressive language deficits, and severe orthopedic impairments. The DAS-II assesses verbal, nonverbal, and spatial reasoning skills. Out-of-level testing procedures allow for the assessment of students functioning at the extreme ranges of intelligence, and, like the WISC-IV, the DAS-II provides teaching items. Although each measure provides for an assessment of nonverbal abilities, none is truly a nonverbal measure (i.e., administered solely through gestures and/or pantomime), as all require at least some receptive language.

Social Communication

Across the age range, individuals with ASD demonstrate a wide range of social communication deficits. For some students, skill deficits in key areas such as joint attention and social referencing may be evident throughout the lifespan. Social communication deficits challenge the student with ASD to adapt socially in school. The nature of these difficulties is related to factors such as age, language, cognitive ability, mental health status, and environmental factors. For example, some students will demonstrate only emerging skills such as eye contact, compliance, observing others, identifying familiar people, and imitating. Other students will demonstrate beginning social skills, such as joining a group, turn taking, brief reciprocal exchange, pretend play, and basic coping skills. Still others will demonstrate advanced skills such as maintaining a conversation topic that is of interest to another person, asking a friend out for a community activity, self-advocacy, managing feelings such as anger, and social problem solving. Maintaining appropriate supports over time is challenging because students with ASD demonstrate atypical developmental trajectories. Therefore, evaluation of social communication skills requires repeated assessment of the student and the social environment.

In the ASD Classroom Model, individual social skills are subsumed under several categories:
  • Basic interpersonal/friendship making (e.g., eye contact, joining in, turn-taking, reciprocal conversation skills, giving a compliment)

  • Classroom/community skills (e.g., waiting turn, raising hand, sharing, participating in group activities)

  • Expressing and managing feelings (e.g., coping skills such as asking for help and problem-solving)

The evaluation protocol includes assessment of the student’s
  • Basic language skills

  • Social skills

  • Pragmatics and use of language across settings

  • Interpersonal interactions

  • Social problem solving skills

  • Self-management skills (includes self-monitoring, self-awareness, self-regulation)

A variety of methods and measures are used in the evaluation protocol and are listed in Table 8.1. These include:
  • Interview (caregiver, teacher, student, other)

  • Direct observation (analogue; structured and unstructured settings; quantitative and qualitative methods; student and environment), with a focus on initiation, response, and duration of interaction (turns); time sampling assessing the target student vs. peers across settings and interactions

  • Rating scales (e.g., Social Skills Rating Scale, Gresham & Elliot, 1990; Vineland-II Socialization Domain)

  • Criterion-referenced and curriculum-based measures (e.g., Skillstreaming Checklist; Goldstein & McGinnis, 1997)

  • Functional behavioral assessment

  • Peer ratings

Data from the assessment describe the student's social communication repertoire and the skills that the student needs to apply across different school and community environments. These data guide selection of instructional targets used in the classroom's positive behavior support system and the student's specific social communication training plan.

Prosocial and Adaptive Behavior

Adaptive behavior refers to an individual’s capacity for personal and social self-sufficiency in everyday real life situations (see AAMR, 2002). Students with ASD often demonstrate adaptive functioning significantly below the levels predicted by their age and/or level of cognitive functioning. Moreover, while students may show the acquisition of adaptive skills over time, the rate of acquisition is significantly below age expectations (Chawarska & Bearss, 2008). Social and communication skills are often observed to be lower relative to personal care and community living skills (e.g., Volkmar et al., 1987), and deficits in social communication skills can persist into adulthood (Shattuck et al., 2007). These deficits may compromise the student’s ability to understand and cope with the interpersonal and instructional demands within the school setting, and adversely affect his/her participation in the curriculum and instructional process.

Evaluation of a student's adaptive skills informs our understanding of how well the student is functioning in the school and home environments. Assessment data identify specific skills needing improvement for successful academic and social engagement. Norm-referenced assessment is often used as part of the evaluation given its relevance in eligibility determinations; however, direct observation and semi-structured interviews are also used. Data are typically collected from the school and home environments to capture differential performance that might be a function of the setting or of informant bias. Data represent the frequency at which the student typically performs specific behaviors without assistance and, in some instances, the quality of that performance (e.g., fluency). The evaluation includes a functional assessment to distinguish between skill- and performance-based (motivational) deficits across settings, a distinction that has implications for intervention: skill-based deficits require direct skills instruction, whereas performance-based deficits require an increased rate of individualized reinforcement or a specific behavior support plan.

Two measures of adaptive skills are used in the ASD Classroom Model: the Vineland Adaptive Behavior Scales-Second Edition (Sparrow, Cicchetti, & Balla, 2005) and the Kindergarten Survival Skills Checklist (Vincent et al., 1980). The Vineland is widely used in autism research and treatment (Klin, Saulnier, Tsatsanis, & Volkmar, 2005, p. 793) and is appropriate for all school-age students. The Vineland assesses four adaptive behavior domains: socialization, communication, daily living skills, and motor skills (birth to 5 years only). Each broad domain consists of sub-domains that reflect more specific areas of functioning, such as interpersonal and friendship skills (Socialization Domain), understanding and speaking (Communication Domain), grooming and hygiene and domestic skills (both in the Daily Living Skills Domain), and fine and gross motor skills (Motor Skills Domain). An Interview Form, Teacher/Classroom Form, and Parent/Caregiver Report Form are available for multi-informant evaluation and contribute data to planning comprehensive school–home–community support plans.

The Kindergarten Survival Skills Checklist (KSSC) is a criterion-referenced tool that measures the skills needed for successful classroom adjustment. The scale consists of 11 domains: Independent Task, Group Attending, Group Participation, Following Class Routine, Appropriate Classroom Behavior, Problem-Solving, Self-Care, Direction Following, Social and Play Skills, Game Playing Skills, and Functional Communication. The 81 items are rated on a scale from 1 “Always performs skill when required to do so” to 5 “Never performs skill when required to do so.” The Domain and Total scores indicate the degree of independence demonstrated by students in each adaptive area, with lower scores indicative of greater independent functioning. Preliminary studies found the scale to have adequate reliability and concurrent validity in samples of students with ASD (see Mruzek, Geiger, Magyar, & Smith, 2005; Pandolfi & Magyar, 2007). The KSSC appears to be sensitive to meaningful changes in adaptive classroom functioning and has potential for use in progress monitoring and assessing response to interventions (Pandolfi & Magyar, 2007). However, replication of these findings with larger samples is needed. The KSSC’s item content seems relevant to a wide variety of adaptive curricula.
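The Domain and Total score structure described above can be illustrated with a short sketch. This is a hypothetical illustration only: it assumes each score is the simple sum of its 1–5 item ratings, which may differ from the KSSC’s published scoring procedure, and the function and domain names are not part of the instrument.

```python
def kssc_scores(ratings_by_domain):
    """Illustrative domain and total scoring for a KSSC-style checklist.

    `ratings_by_domain` maps a domain name to its item ratings, each on the
    1 ("Always performs skill") to 5 ("Never performs skill") scale, so
    lower scores indicate greater independent functioning.

    NOTE: summing item ratings is an assumption for illustration, not the
    published KSSC scoring rule.
    """
    domain_scores = {}
    for domain, ratings in ratings_by_domain.items():
        if any(r < 1 or r > 5 for r in ratings):
            raise ValueError(f"ratings in {domain!r} must be between 1 and 5")
        domain_scores[domain] = sum(ratings)
    total = sum(domain_scores.values())
    return domain_scores, total

# Hypothetical ratings for two of the 11 domains.
domains, total = kssc_scores({
    "Self-Care": [1, 2, 1],
    "Group Attending": [3, 1],
})
```

Because 1 means the skill is always performed, lower domain and total sums correspond to greater independence, mirroring the interpretation described above.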

Academic and Learning Difficulties

Academic profiles have been studied predominantly in school-age students and adults with milder or no intellectual disability, and few comprehensive studies provide detailed information on the specific academic profiles associated with ASD subtypes. Typically, students with ASD and no intellectual disability show intact basic reading, spelling, and math skills (Loveland & Tunali-Kotoski, 2005). However, academic achievement may not be consistent with measured intellectual ability for some students, and some may demonstrate specific learning disabilities, particularly in the areas of written expression, reading, and math (e.g., Mayes & Calhoun, 2008; Reitzel & Szatmari, 2003). In addition, executive function difficulties can manifest as inattention, poor self-management, and a decreased likelihood of completing independent seatwork and homework. Circumscribed interests may also interfere with academic engagement and self-management; therefore, FBA is often used to help distinguish a skill deficit from a performance deficit.

A combination of assessment methods is used to describe a student’s academic and learning profile. Historically, norm-referenced achievement tests have been used in conjunction with intelligence tests to identify the presence of a specific learning disability. The norm-referenced Peabody Individual Achievement Test-Revised/Normative Update (PIAT-R/NU; Markwardt, 1997) is the measure used in the student evaluation protocol. The PIAT-R/NU samples a range of academic content areas and provides a broad screen of academic achievement. Several subtests utilize multiple-choice response formats that are useful for assessing students with poor expressive language ability (Sattler, 2001). Criterion-referenced and curriculum-based assessment are also applied within a Response to Intervention (RTI) model. These data are used not only to help identify learning disabilities, but also to assess the extent of student knowledge and the rate of learning in various content areas.

Emotional and Behavioral Disorders

Recent research suggests that rates of emotional and behavioral disorders (EBD) in children with ASD can range from 65 to 80% (e.g., DeBruin, Ferdinand, Meester, deNijs, & Verheij, 2006), and a recent study of children with ASD ages 5–17 found a rate of 72% (Leyfer et al., 2006). Rates vary due to methodological differences across studies and sample characteristics. The most common EBDs include internalizing disorders such as depression (e.g., Ghaziuddin, Ghaziuddin, & Greden, 2002), anxiety (e.g., Kim, Szatmari, Bryson, Streiner, & Wilson, 2000), and tic disorders (e.g., Baron-Cohen, Mortimore, Moriarty, Izaguirre, & Robertson, 1999; Gillberg & Billstedt, 2000); and behavioral disorders such as Attention Deficit Hyperactivity Disorder (e.g., Gillberg & Billstedt, 2000) and Oppositional Defiant Disorder (Gadow, DeVincent, & Drabick, 2008). Aggression and self-injury are reported to occur at lower rates than other EBDs (e.g., Holden & Gitlesen, 2006; Volkmar, Lord, Bailey, Schultz, & Klin, 2004), but the rate of behavior problems tends to increase as the severity of intellectual impairment increases (Wing & Gould, 1979). Given that students with ASD are at risk for developing an EBD, it is essential to screen for the presence of these disorders. Failure to identify and treat EBDs in a timely manner is problematic because these disorders can persist over time (see Gadow, DeVincent, Pomeroy, & Azizian, 2004; Mash & Dozois, 2003), are associated with poorer outcomes (Howlin, Goode, Hutton, & Rutter, 2004), and may moderate response to ASD-specific treatment.

More studies of EBD measures are needed to support their use in individuals with ASD (Ozonoff, Goodlin-Jones, & Solomon, 2005). However, two promising measures used in the ASD Classroom Model are the Child Behavior Checklist 1.5–5 (CBCL 1.5–5; Achenbach & Rescorla, 2000) and the Child Behavior Checklist 6–18 (CBCL 6–18; Achenbach & Rescorla, 2001). These instruments are well researched and widely used in the general pediatric population (see Achenbach, 2006). The CBCL 1.5–5 is a norm-referenced parent report measure of emotional and behavioral problems in children aged 18 months to 5 years. The CBCL 1.5–5 assesses for internalizing (emotional) and externalizing (behavioral) syndromes across seven scales that include Emotionally Reactive, Anxious/Depressed, Somatic Complaints, Withdrawn, Sleep Problems, Attention Problems, and Aggressive Behavior. A recent confirmatory factor analysis indicated that the CBCL 1.5–5 measures the same constructs in children with ASD as it does in the general pediatric population (Pandolfi, Magyar, & Dill, 2009). Therefore, the CBCL 1.5–5 appears to be a useful tool to help identify EBD syndromes requiring specific intervention.

The CBCL 6–18 was developed for youth aged 6–18 years. It assesses for internalizing and externalizing problems across eight syndrome scales: Anxious/Depressed, Withdrawn/Depressed, Somatic Complaints, Social Problems, Thought Problems, Attention Problems, Aggressive Behavior, and Rule-Breaking Behavior. Preliminary research indicates that the CBCL measures the same constructs in youth with ASD as it does in the general population (Pandolfi, Magyar, & Dill, in review) and has utility in helping to identify co-occurring emotional and behavioral problems in youth with ASD (Magyar, Pandolfi, & Dill, 2008).

Evaluating the Instructional Context

Because learning represents the dynamic process between student characteristics and instruction, data are needed on the quality of the instructional context. The instructional context consists of many variables including (a) the method of instruction, (b) quality of the student–teacher interaction in terms of opportunities to respond, pace of instruction, accuracy in responding, and error correction methods, (c) duration of student engagement, and (d) effectiveness of the behavior support system to reduce interfering behavior and increase engagement and prosocial behavior (e.g., Howell, 1986).

A variety of methods and measures evaluate the quality of the instructional context; these were reviewed in Chap. 4. The Classroom Observation Form (COF; Magyar & Pandolfi, 2006c) is a direct observation measure that assesses two dimensions of the instructional context: the behavior support system and the quality of instruction. Variables assessed include the teacher’s interactions with students, the pace of instruction, the provision of opportunities to respond, the use of reinforcement to prevent disruptive behavior, and the management of problem behavior when it does occur.

Student engagement is measured using the Academic Engagement Form (Magyar & Pandolfi, 2006a), also described in Chap. 4. This tool uses interval recording within a 1-h observation block divided into 12 5-min recording intervals. Scores greater than or equal to 80% are considered acceptable engagement levels.
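The engagement score reduces to simple arithmetic: the percentage of recording intervals scored as engaged, checked against the 80% criterion. A minimal sketch, assuming one boolean rating per interval (the function name and data layout are illustrative, not part of the published form):

```python
def engagement_percentage(intervals):
    """Percentage of observation intervals scored as engaged.

    `intervals` is a list of booleans, one per recording interval
    (e.g., 12 five-minute intervals for a 1-h observation block).
    """
    if not intervals:
        raise ValueError("at least one interval is required")
    return 100.0 * sum(intervals) / len(intervals)

# Hypothetical observation: engaged in 10 of 12 five-minute intervals.
observed = [True] * 10 + [False] * 2
score = engagement_percentage(observed)     # about 83.3%
acceptable = score >= 80.0                  # 80% is the acceptability criterion
```

Here a student engaged in 10 of 12 intervals meets the criterion; a student engaged in 9 or fewer would not.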

To evaluate how well personnel are applying ASD instructional supports and methods, the Personnel Performance Scale-3 (PPS-3; Magyar & Pandolfi, 2006b) is recommended. The scale evaluates 23 skills across five domains: instruction, curriculum, environmental supports, positive behavior supports, and data collection and evaluation. Skills include application of discrete trial and direct instruction methods, incidental teaching methods, differentiated instruction, schedules of reinforcement, positive behavior supports, and collecting and evaluating student performance data. It is completed by selecting a 20- or 30-min observation block, divided into 5-min recording intervals. The observation blocks should be representative of personnel performance across a variety of teacher–student interactions. Scores of .80 or better represent acceptable or better levels of personnel performance (see Chap. 4).

All of these measures require the evaluator to interview the teacher or related service provider to ascertain the specific methods of instruction, the elements of the behavior support system (e.g., target prosocial skills, schedules of reinforcement, behavior support plan), and any other pertinent information relevant to the observation. Data obtained from the evaluation of the instructional context are combined with the student’s learning characteristics and used to design the basic structure of the student’s educational plan, which includes adding supports and accommodations to the instructional context and/or modifying or enhancing it in some way. Progress monitoring and problem-solving assessments are used to fine-tune the plan and assist with addressing student learning or behavioral difficulties. This is described next.

Monitoring Student Progress and Evaluating Outcomes

Students learn and make progress when instruction is aligned with their prior knowledge and information-processing abilities. Curriculum-based assessment (CBA) and curriculum-based measurement (CBM) assist educators in optimizing student learning and improving outcomes. CBA allows the educator to plan instruction by gathering data on the student’s instructional needs within a particular curriculum (What needs to be taught?) and designing lessons to differentially instruct and provide corrective feedback to students during the learning process (How best to teach?). CBM provides a standardized way to evaluate progress in the local curriculum and can be used to compare a student’s performance to his or her previous performance (e.g., during baseline or other intervention conditions) or to the performance of age- or grade-level peers. CBA and CBM are central components of progress monitoring assessments.
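One common way to summarize repeated CBM probes is as a rate of improvement, for example the least-squares slope of equally spaced probe scores, which can then be compared across baseline and intervention conditions. A hedged sketch (the probe metric and helper name are illustrative, not prescribed by the model):

```python
def slope(scores):
    """Ordinary least-squares slope of equally spaced probe scores
    (e.g., words read correctly per minute on weekly CBM probes)."""
    n = len(scores)
    if n < 2:
        raise ValueError("need at least two probes")
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    denominator = sum((x - mean_x) ** 2 for x in xs)
    return numerator / denominator

# Hypothetical weekly probes before and after an instructional change.
baseline = [20, 22, 21]          # roughly flat trend
intervention = [24, 27, 31, 34]  # improving trend
```

In this made-up example, the intervention slope (about 3.4 points per probe) clearly exceeds the near-flat baseline trend, suggesting a positive response; in practice, such comparisons are interpreted alongside the student-specific and contextual factors the chapter describes.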

Ongoing considerations regarding child-specific factors as well as the instructional context guide interpretation of progress monitoring assessments. A student’s learning characteristics, ASD-specific symptoms, classroom adaptive skills, and behavioral adjustment may all affect performance differentially over time. Newly emerging problems within these areas and developmental maturation (e.g., improved self-awareness) can affect decisions regarding instruction. Personnel performance data may also be helpful in understanding reasons for a student’s rate of progress. The integrity of instruction and intervention should be carefully assessed prior to making significant (and perhaps unneeded) changes to the student’s education plan. Thus, in addition to the assessment of performance, ongoing assessment of student-specific and contextual factors is needed to help determine the correct instructional level for any given student, which helps maximize learning and improve outcomes (Gravois & Gickling, 2002).

Problem-Solving Student Learning and Behavior Problems

Ensuring that student needs align with the instructional context is challenging because the performance of any given student with ASD will vary over time and across developmental and educational curriculum areas. Thus, decision making about curriculum and instructional modifications is best informed by ongoing assessment of the student’s instructional needs and response to interventions. The problem-solving framework described by Deno (2005) is applied in a systematic manner to assist the ASD Classroom Team in managing misalignment issues. This framework includes:
  • Identifying that a performance problem exists

  • Defining the performance problem

  • Designing an intervention plan

  • Implementing the intervention

  • Resolving the performance issue

The model is applied continuously and in a flexible manner, often in conjunction with CBA and CBM, to assist with instructional planning. Through curriculum-based evaluation, the ASD Classroom Team routinely monitors student performance to identify when a performance problem arises. Then, if a problem does arise, the team or a subset of the team (e.g., consultant teacher, teacher and parent) meets to discuss the specific behavior(s) of concern and the potential functional nature of the problem (steps 1 and 2). Additional data may be needed to further define the nature of the problem. Next, the team discusses potential strategies to address the performance problem and devises an intervention plan. The plan specifies the supports or procedures to be implemented, and by whom, and how progress will be monitored (steps 3 and 4). Data obtained on the student’s response to the intervention are reviewed during regular team meetings to determine if the performance issue has been resolved. If the student continues to exhibit difficulties, then the team will continue to implement the problem-solving process until an effective solution has been identified and the difficulty is resolved (step 5).

Considerations in the Assessment of Students with ASD

A number of school personnel and members of the ASD Classroom Team will complete components of the evaluation. The specific person or persons involved is based on the purpose of the assessment and the methods used. Typically, the school psychologist completes cognitive and behavioral (including mental health) assessments. The teacher conducts formal and informal academic testing and routinely monitors student progress in the curriculum. The speech-language pathologist completes formal and informal testing of the student’s speech and language, pragmatic, and social communication skills. Occupational and physical therapists assess the student’s motor, sensory, adaptive, and recreation and leisure skills. The team works collaboratively to translate assessment findings into educational and other intervention plans.

Features of an ASD can adversely affect a student’s ability to participate in an assessment and, therefore, compromise the validity of the data needed to make informed educational intervention decisions. Prior to the assessment, evaluators should determine what supports will be needed to assist the student in his/her participation and increase the probability of obtaining valid results. If a student requires a one-to-one aide in the school setting, the aide’s role in the assessment should be clarified so that standardized testing procedures can be followed to the greatest extent possible. For evaluators unfamiliar with a particular student, interviews with teachers and parents/caregivers, a brief record review, and student observation can help identify needed supports. The evaluator should ascertain the supports that are currently in place for the student and be familiar with their use so they can be applied appropriately during the evaluation. If needed, additional supports can be developed specifically for the assessment. These include visual supports, environmental modifications, reinforcement systems, communication supports, and multiple testing sessions. Each is described briefly below.

Visual supports are additional stimuli that prompt the student to engage in behavior that is needed for a particular circumstance. They can take various forms (e.g., picture, gesture, written) and should be designed to improve the student’s understanding of the evaluation activities, their presentation sequence, and the expected participation behaviors. For example, an evaluator may develop a written schedule to list the tasks to be completed during cognitive testing, with information on the schedule to signal when a break will occur in the sequence. At the completion of each task, the evaluator can cross off the activity and provide a verbal cue to the student that the next activity will begin, while pointing to the next task listed. Visual supports should be displayed prominently and referred to for correspondence with the activity.

Communication supports assist students with functional communication and include augmentative or alternative systems such as sign language, the Picture Exchange Communication System (PECS, Frost & Bondy, 1994), communication boards, and other symbol-based systems. The evaluator should be thoroughly familiar with the student’s mode of communication and level of proficiency in using an alternative or augmentative system, prior to testing. This helps the evaluator determine the most appropriate measures for the student and the extent to which the use of alternative and augmentative communication systems represents a departure from standard administration.

Some students may demonstrate low motivation to participate in an evaluation. In this situation, the evaluator may want to consider the use of a reinforcement system. The evaluator needs to identify participation behaviors that will be reinforced and how reinforcement will be provided. Interview data and a preference assessment can provide some information to the evaluator on potential reinforcers. Ideally, further assessment should determine whether identified stimuli actually function as reinforcers. For students who understand secondary reinforcement, a token system can be developed and used. Verbal praise should always be paired with reinforcement. Short scheduled breaks (e.g., 2 min) that allow a student to engage in preferred activities in between tasks can also improve a student’s participation in the evaluation.

The evaluation context can affect a student’s participation. When it has been determined that a student may be sensitive to changes in his/her schedule and/or easily distracted by environmental stimuli, the evaluator may want to consider environmental supports. He/she should adequately prepare the evaluation site with comfortable seating and a workspace arrangement that permits easy access to all test-relevant materials but limits the student’s ability to disrupt them, and remove all unnecessary or distracting stimuli. This setup is especially important when evaluating younger or more impulsive students, students who do not have much testing experience, or students who may be easily distracted by ambient noise or wall hangings. For students who have difficulty remaining seated, it may be helpful to position the student so that he/she is sitting between the table and a wall, with the evaluator sitting in the “open” area. This allows the evaluator to assist the student in remaining seated, or to let him/her get up for a break when needed and/or requested. Many test manuals provide guidance on appropriate seating positions for optimal test administration. The evaluator should be very familiar with all measures that he/she is using in order to ensure an adequate pace of task presentation. The evaluator may also want to consider multiple testing sessions and in-class testing to improve the chances that the student will have a positive experience.

Some students may require preparation for an evaluation. The less familiar the student is with the evaluation method and/or evaluator, the more likely he/she will benefit from specific preparation. The evaluator can prepare a student in a number of ways, but typically meeting ahead of time with the student and reviewing the purpose of the assessment and the various assessment activities, either through a Social Story™ or verbal description, can help reduce anxiety for most students. For students with more significant functional impairment, the evaluation may best be completed in a familiar setting across multiple short sessions.

Once the student arrives for the evaluation, the examiner works to develop rapport and provides a brief description of the schedule of activities. He/she implements and/or refers to the various supports that are available to assist the student in full participation. Once the evaluation has begun, the evaluator should monitor the student for behaviors that may affect the validity of the data. Adjustments may need to be made to the evaluation plan as testing progresses. Most evaluators find it beneficial to take a small amount of time immediately following the testing to record notes regarding the student’s behavior and performance before observations and information are forgotten. In addition, the evaluator can determine if he/she has obtained all the necessary information and/or if any additional evaluation or information gathering is needed.


  1. Achenbach, T. M. (2006). Bibliography of published studies using the Achenbach System of Empirically Based Assessment (2006 ed.). Burlington, VT: University of Vermont, Research Center for Children, Youth, & Families.
  2. Achenbach, T. M., & Rescorla, L. A. (2000). Manual for the ASEBA preschool forms & profiles. Burlington, VT: University of Vermont, Research Center for Children, Youth, & Families.
  3. Achenbach, T. M., & Rescorla, L. A. (2001). Manual for the ASEBA school-age forms & profiles. Burlington, VT: University of Vermont, Research Center for Children, Youth, & Families.
  4. American Association on Mental Retardation. (2002). Mental retardation: Definition, classification, and systems of support. Washington, DC: Author.
  5. Baron-Cohen, S., Mortimore, C., Moriarty, J., Izaguirre, J., & Robertson, M. (1999). The prevalence of Gilles de la Tourette’s syndrome in children and adolescents with autism. Journal of Child Psychology and Psychiatry, 40(2), 213–218.
  6. Chawarska, K., & Bearss, K. (2008). Assessment of cognitive and adaptive skills. In K. Chawarska, A. Klin, & F. R. Volkmar (Eds.), Autistic spectrum disorders in infants and toddlers: Diagnosis, assessment, and treatment (pp. 50–75). New York: Guilford Press.
  7. DeBruin, E. I., Ferdinand, R. F., Meester, S., deNijs, P. F. A., & Verheij, F. (2006). High rates of psychiatric co-morbidity in PDD-NOS. Journal of Autism and Developmental Disorders, 37, 877–886.
  8. Deno, S. L. (2003). Developments in curriculum-based measurement. Journal of Special Education, 37(3), 184–192.
  9. Deno, S. L. (2005). Problem-solving assessment. In R. Brown-Chidsey (Ed.), Assessment for intervention: A problem-solving approach (pp. 10–40). New York: Guilford Press.
  10. Dunn, L. M., & Dunn, D. M. (2007). Peabody picture vocabulary test (4th ed.). Minneapolis, MN: Pearson Assessment.
  11. Durand, V. M., & Crimmins, D. B. (1992). The Motivation Assessment Scale administrative guide. Topeka, KS: Monaco & Associates.
  12. Elliot, C. D. (2007). Differential Ability Scales (2nd ed.). San Antonio, TX: Harcourt Assessment, Inc.
  13. Frost, L. A., & Bondy, A. S. (1994). The picture exchange communication system training manual. Cherry Hill, NJ: PECS, Inc.
  14. Gadow, K. D., DeVincent, C. J., & Drabick, D. A. G. (2008). Oppositional defiant disorder as a clinical phenotype in children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 38, 1302–1310. doi: 10.1007/s10803-007-0516-8.
  15. Gadow, K. D., DeVincent, C. J., Pomeroy, J., & Azizian, A. (2004). Psychiatric symptoms in preschool children with PDD and clinical and comparison samples. Journal of Autism and Developmental Disorders, 34, 379–393.
  16. Ghaziuddin, M., Ghaziuddin, N., & Greden, J. (2002). Depression in persons with autism: Implications for research and clinical care. Journal of Autism and Developmental Disorders, 32(4), 299–306.
  17. Gickling, E. E., Shane, R. L., & Croskery, K. M. (1989). Developing mathematics skills in low achieving high school students through curriculum-based assessment. School Psychology Review, 18, 344–355.
  18. Gillberg, C., & Billstedt, E. (2000). Autism and Asperger syndrome: Coexistence with other clinical disorders. Acta Psychiatrica Scandinavica, 102, 321–330.
  19. Goldstein, A. P., & McGinnis, E. (1997). Skillstreaming the adolescent: New strategies and perspectives for teaching prosocial skills (rev. ed.). Champaign, IL: Research Press.
  20. Gravois, T. A., & Gickling, E. E. (2002). Best practices in curriculum-based assessment. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 885–898). Bethesda, MD: National Association of School Psychologists.
  21. Gresham, F. M., & Elliot, S. N. (1990). The social skills rating system. Circle Pines, MN: American Guidance Service.
  22. Holden, B., & Gitlesen, J. (2006). A total population study of challenging behavior in the county of Hedmark, Norway: Prevalence, and risk markers. Research in Developmental Disabilities, 27, 456–465.
  23. Howell, K. W. (1986). Direct assessment of academic performance. School Psychology Review, 15(3), 324–335.
  24. Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and decision making (3rd ed.). Florence, KY: Wadsworth Publishing Company.
  25. Howlin, P., Goode, S., Hutton, J., & Rutter, M. (2004). Adult outcome for children with autism. Journal of Child Psychology and Psychiatry and Allied Disciplines, 45, 212–229.
  26. Kanfer, F. H., & Saslow, G. (1969). Behavioral diagnosis. In C. M. Franks (Ed.), Behavior therapy: Appraisal and status (pp. 417–444). New York: McGraw-Hill.
  27. Kim, J. A., Szatmari, P., Bryson, S. E., Streiner, D. L., & Wilson, F. J. (2000). The prevalence of anxiety and mood problems among children with autism and Asperger syndrome. Autism, 4, 117–131.
  28. Klin, A., Saulnier, C., Tsatsanis, K., & Volkmar, F. R. (2005). Clinical evaluation in autism spectrum disorders: Psychological assessment within a transdisciplinary framework. In F. R. Volkmar, R. Paul, A. Klin, & D. Cohen (Eds.), Handbook of autism and pervasive developmental disorders (3rd ed., Vol. 2, pp. 772–798). Hoboken, NJ: John Wiley & Sons, Inc.
  29. Lentz, F. E., Jr., & Shapiro, E. S. (1986). Functional assessment of the academic environment. School Psychology Review, 15, 346–357.
  30. Leyfer, O. T., Folstein, S. E., Bacalman, S., Davis, N. O., Dinh, E., Morgan, J., et al. (2006). Comorbid psychiatric disorders in children with autism: Interview development and rates of disorders. Journal of Autism and Developmental Disorders, 36, 849–861.
  31. Loveland, K. A., & Tunali-Kotoski, B. (2005). The school-age child with an autistic spectrum disorder. In F. R. Volkmar, R. Paul, A. Klin, & D. Cohen (Eds.), Handbook of autism and pervasive developmental disorders (3rd ed., Vol. 1, pp. 247–287). Hoboken, NJ: John Wiley & Sons, Inc.
  32. Magyar, C. I. (1998, 2000, 2001, 2003, 2006). Supplemental Curriculums for ASD Program Development Model. Unpublished curriculum, University of Rochester, Rochester, NY.
  33. Magyar, C. I. (2006e). Student Performance Data System. Unpublished data packet, University of Rochester, Rochester, NY.
  34. Magyar, C. I., & Pandolfi, V. (2006c). Classroom Observation Form. Unpublished scale, University of Rochester, Rochester, NY.
  35. Magyar, C. I., Pandolfi, V., & Dill, C. A. (2008). Utility of the CBCL 6–18 in screening for psychopathology in youth with autism spectrum disorders. Poster presented at the International Meeting for Autism Research, London, UK.
  36. Magyar, C. I., & Pandolfi, V. (2006a). Academic Engagement Form. Unpublished scale, University of Rochester, Rochester, NY.
  37. Magyar, C. I., & Pandolfi, V. (2006b). Personnel Performance Scale-3rd Revision. Unpublished scale, University of Rochester, Rochester, NY.
  38. Markwardt, F. C. (1997). Peabody individual achievement test – revised/normative update. Circle Pines, MN: American Guidance Service.
  39. Mash, E. J., & Dozois, D. J. A. (2003). Child psychopathology: A developmental-systems perspective. In E. J. Mash & R. A. Barkley (Eds.), Child psychopathology (2nd ed., pp. 3–71). New York: Guilford Press.
  40. Mayes, S. D., & Calhoun, S. L. (2008). WISC-IV and WIAT-II profiles in children with high functioning autism. Journal of Autism and Developmental Disorders, 38, 428–439.
  41. Mruzek, D., Geiger, T., Magyar, C. I., & Smith, T. (2005). The Kindergarten Survival Skills Checklist: Psychometric properties with typically developing children and children with autism. Poster presented at the Association for Behavior Analysis Annual Conference, Chicago, IL.
  42. Mullen, E. M. (1995). Mullen Scales of Early Learning. Circle Pines, MN: American Guidance Service.
  43. Newborg, J. (2005). Battelle developmental inventory (2nd ed.). Itasca, IL: Riverside Publishing.
  44. Newcomer, P. L., & Hammill, D. D. (2008). Test of language development – primary (4th ed.). Austin, TX: ProEd.
  45. O’Neill, R. E., Horner, R. H., Albin, R. W., Sprague, J. R., Storey, K., & Newton, J. S. (1997). Functional assessment and program development for problem behavior: A practical handbook. Pacific Grove, CA: Brooks/Cole Publishing Company.
  46. Ozonoff, S., Goodlin-Jones, B. L., & Solomon, M. (2005). Evidence-based assessment of autism spectrum disorders in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 523–540.
  47. Pandolfi, V., Magyar, C. I., & Dill, C. A. (in review). Confirmatory factor analysis of the CBCL 6–18 in a sample of youth with autism spectrum disorders. Journal of Autism and Developmental Disorders.
  48. Pandolfi, V., & Magyar, C. I. (2007). Kindergarten Survival Skills Checklist and autism: Preliminary findings. Poster presented at the National Association of School Psychologists Annual Conference, New York, NY.
  49. Pandolfi, V., Magyar, C. I., & Dill, C. A. (2009). Confirmatory factor analysis of the CBCL 1.5–5 in a sample of children with autism spectrum disorders. Journal of Autism and Developmental Disorders, 39, 986–995. doi: 10.1007/s10803-009-0716-5.
  50. Reitzel, J., & Szatmari, P. (2003). Cognitive and academic problems. In M. Prior (Ed.), Learning and behavior problems in Asperger syndrome (pp. 35–54). New York: Guilford Press.
  51. Roid, G. H. (2003). Stanford-Binet Intelligence Scales (5th ed.). Itasca, IL: Riverside.
  52. Saemundsen, E., Magnusson, P., Smari, J., & Sigurdardottir, S. (2003). Autism Diagnostic Interview-Revised and the Childhood Autism Rating Scale: Convergence and discrepancy in diagnosing autism. Journal of Autism and Developmental Disorders, 33, 319–328.
  53. Salvia, J., & Ysseldyke, J. E. (1978). Assessment in special and remedial education. Boston: Houghton Mifflin.
  54. Salvia, J., & Ysseldyke, J. E. (2001). Assessment (8th ed.). Boston: Houghton Mifflin.
  55. Sattler, J. M. (2001). Assessment of children: Cognitive applications (4th ed.). San Diego, CA: Author.
  56. Schopler, E., Reichler, R. J., & Renner, B. R. (1988). The Childhood Autism Rating Scale (CARS). Los Angeles: Western Psychological Services.
  57. Shapiro, E. S. (1996). Academic skill problems: Direct assessment and intervention (2nd ed.). New York: Guilford Press.
  58. Shattuck, P. T., Seltzer, M. M., Greenberg, J. S., Orsmond, G. I., Bolt, D., Kring, S., et al. (2007). Change in autism symptoms and maladaptive behaviors in adolescents and adults with autism spectrum disorder. Journal of Autism and Developmental Disorders, 37, 1735–1747.
  59. Sparrow, S. S., Cicchetti, D. V., & Balla, D. A. (2005). Vineland Adaptive Behavior Scales (2nd ed.). Circle Pines, MN: American Guidance Service.
  60. Vincent, L., Salisbury, C., Walter, G., Brown, P., Gruenewald, L., & Powers, M. (1980). Program evaluation and curriculum development in early childhood/special education: Criterion of the next environment. In W. Sailor, B. Wilcox, & L. Brown (Eds.), Methods of instruction for severely handicapped students (pp. 303–328). Baltimore: Paul H. Brookes Publishing Co.Google Scholar
  61. Volkmar, F., Lord, C., Bailey, A., Schultz, R., & Klin, A. (2004). Autism and pervasive developmental disorders. Journal of Child Psychology and Psychiatry and Allied Disciplines, 45, 135–155.CrossRefGoogle Scholar
  62. Volkmar, F., Sparrow, S. S., Goudreau, D., Cicchetti, D. V., Paul, R., & Cohen, D. J. (1987). Social deficits in autism: An operational approach using the Vineland Adaptive Behavior Scales. Journal of the American Academy of Child and Adolescent Psychiatry, 26(2), 156–161.CrossRefPubMedGoogle Scholar
  63. Vollmer, T. R., & Matson, J. L. (1999). Questions about behavioral function manual. Baton Rouge, LA: Scientific Publishers.Google Scholar
  64. Wechsler, D. (2003). Wechsler intelligence scale for children (4th ed.). San Antonio, TX: The Psychological Corporation.Google Scholar
  65. Wing, L., & Gould, J. (1979). Severe impairments of social interaction and associated abnormalities. Journal of Autism and Developmental Disorders, 9, 11–29.CrossRefPubMedGoogle Scholar

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Department of Pediatrics, School of Medicine and Dentistry, University of Rochester, Rochester, USA