Journal of Behavioral Education, Volume 20, Issue 1, pp 55–76

Use of Computer-Based Interventions to Teach Communication Skills to Children with Autism Spectrum Disorders: A Systematic Review

  • Sathiyaprakash Ramdoss
  • Russell Lang
  • Austin Mulloy
  • Jessica Franco
  • Mark O’Reilly
  • Robert Didden
  • Giulio Lancioni
Open Access
Original Paper

Abstract

The purpose of this review is to provide a systematic analysis of studies involving the use of computer-based interventions (CBI) to teach communication skills to children with autism spectrum disorders (ASD). This review evaluates intervention outcomes, appraises the certainty of evidence, and describes software and system requirements for each included study. This review has three main aims: (a) to evaluate the evidence-base regarding CBI, (b) to inform and guide practitioners interested in using CBI, and (c) to stimulate and guide future research aimed at improving the efficiency and effectiveness of CBI in communication for individuals with ASD. Results suggest that CBI should not yet be considered a research-based approach to teaching communication skills to individuals with ASD. However, CBI does seem a promising practice that warrants future research.

Keywords

Autism · Computer · Computer-based · Computer-assisted · Communication · Intervention

Introduction

Impairment in communication is one of the defining features of autism spectrum disorder (ASD) (American Psychiatric Association, 2000) and is often the earliest observed symptom (Eigsti et al. 2007). Within the ASD population, 25–61% of children have a total absence of verbal communication (Weitz et al. 1997). Even when spoken communication is present, children with ASD may still use speech in limited or unusual ways (e.g., Lee et al. 1994; Lord and Paul 1997; McEvoy et al. 1988; Rapaport et al. 1985). Individuals with severe communication impairment have an increased risk of developing challenging behavior and often have fewer opportunities for school and community involvement (Sigafoos et al. 2006). If untreated, the communication deficits associated with ASD are likely to persist across an individual’s lifespan (National Research Council 2001).

A considerable amount of research has focused on developing interventions for improving communication skills of children with ASD (e.g., Schlosser and Wendt 2008; Sigafoos et al. 2009). In addition to improvements in verbalization, mean length of utterance, and spontaneity of language use, successful communication intervention has also been associated with decreases in problem behavior, increases in positive affect, and higher levels of joint attention (e.g., Carr and Durand 1985; Charlop-Christy and Trasowech 1991; Harding et al. 2005). Communication skills are often among the initial goals in educational programming for children with ASD (National Research Council 2001).

Communication interventions are often complex, requiring low student-to-teacher ratios, specific procedures to train therapists or teachers, and many hours of intervention per week (Graff et al. 1998; Lang et al. 2009; National Research Council 2001). These complexities may present logistical obstacles to the implementation of communication interventions within settings where resources may be scarce (e.g., group homes, schools, and children’s homes). To the extent that these obstacles impede accurate implementation and/or reduce intervention dosage, communication interventions may be less effective.

The use of technology to more efficiently or accurately provide intervention or deliver instruction is not a recent development (e.g., Marrou 1956; Pressey 1926, 1932; Skinner 1968). However, tremendous advances in computer technology over the last two decades have increased technology versatility and reduced financial expense such that computers are now common in schools and children’s homes (Barron et al. 2006). Computers are now often used as instructional tools for children without disabilities (Inan et al. 2010).

Many researchers have suggested potential reasons why computer-based interventions (CBI) may be particularly effective with individuals with ASD. For example, software programs may be created that establish clear routines and expectations, reduce distractions, and provide additional controls for the influence of autism-specific characteristics such as stimulus overselectivity (Moore et al. 2000; Panyan 1984; Silver and Oakes 2001). Additionally, software programs may perform many of the tasks often found to be too time-consuming or cumbersome for classrooms with high student-to-teacher ratios, such as providing immediate reinforcement, systematically fading prompts based upon performance, and collecting data on every response (Higgins and Boone 1996).

Despite these potential advantages, researchers have also expressed concern that computers may exacerbate existing problems associated with ASD. For example, because CBI may involve a reduction in the amount of interaction between the individual with ASD and the teacher, therapist, or parent responsible for intervention, CBI may result in increased social isolation and a reduced opportunity to practice social interactions (e.g., Bernard-Opitz et al. 1990). Additionally, because children with ASD may have a tendency to perseverate on computer use, CBI may result in the development or strengthening of computer-based stereotypies, challenging behavior maintained by computer access, and obsessive compulsive-type behaviors (e.g., Powell 1996).

A few reviews of the research literature relevant to the use of CBI for children with ASD have been conducted. Higgins and Boone (1996) identified the software programs that could be used in CBI for individuals with ASD prior to 1996 and found that these programs were able to perform many of the functions necessary to deliver instruction to children with ASD. Blischak and Schlosser (2003) reviewed the research involving word processing software with synthetic speech capabilities and found that CBI using this software is a potential means for improving the spelling and frequency of spontaneous utterances of individuals with ASD. Finally, Fitzgerald et al.’s (2008) review reported improvements in academics (i.e., reading, mathematics, writing, social studies, and science) following the use of CBI by students with mild or high incidence disabilities. However, a systematic review of CBI to improve communication skills of children with ASD has not been conducted. Given the importance of communication intervention for children with ASD, the obstacles impeding implementation of communication interventions present in many settings, and the potential advantages and disadvantages of CBI, a systematic review of intervention research on this topic is warranted.

The purpose of this review is to provide a systematic analysis of studies involving the use of CBI to teach communication skills to children with ASD. This review describes characteristics of the included studies, evaluates intervention outcomes, and appraises the certainty of evidence. This review has three main aims: (a) to evaluate the evidence-base, (b) to inform and guide practitioners interested in the use of CBI, and (c) to stimulate and guide future research aimed at improving the efficiency and effectiveness of CBI in communication for individuals with ASD.

Method

This review involved a systematic analysis of studies that focused on the use of CBI to teach communication skills to individuals with ASD. Each identified study that met predetermined inclusion criteria was analyzed and summarized in terms of: (a) participant characteristics, (b) communication skills targeted, (c) details regarding the CBI, (d) outcomes of the intervention, and (e) certainty of evidence.

Search Procedures

Systematic searches were conducted in four electronic databases: Education Resources Information Center (ERIC), Medline, Psychology and Behavioral Sciences Collection, and PsycINFO. The search was limited to English language and peer-reviewed studies. On all four databases, the terms (autis* or asperger* or ASD or developmental disability or pervasive developmental disorder or PDD-NOS) and (computer* or computer-assisted or computer-based or computer-aided or software) and (language or communication or speech or social) were inserted into the keywords field. The abstracts of the resulting 222 studies were reviewed to identify studies for possible inclusion (see Inclusion and Exclusion Criteria below). The reference lists for studies meeting these criteria were also reviewed to identify additional articles for possible inclusion. Hand searches, covering January to June 2010, were then completed for the journals that had published the included studies. Our search of the databases, journals, and reference lists occurred during June and July 2010.

Inclusion and Exclusion Criteria

In order to be included in this review, an article had to be an intervention study that examined the effectiveness of CBI intended to improve the communication skills of at least one individual diagnosed with an ASD. Specifically, studies had to meet three inclusion criteria. First, the study had to contain at least one participant diagnosed with autism, Asperger’s, or PDD-NOS. Second, a communication skill had to be a dependent variable. For the purposes of this review, “communication skills” were defined as skills related to expressive language (e.g., vocal imitation, response to questions, commenting, making requests, or greetings) or receptive language (e.g., identification of target vocabulary words). Finally, the primary component of the intervention had to be delivered via a computer software program.

Studies were excluded from this review based upon the following four criteria. First, given the immense changes in the size, capacity, and versatility of technology over the last 20 years, studies published prior to 1990 were excluded so as to focus on studies involving technology still potentially relevant today (e.g., Colby 1973; Goldenberg 1979). Second, in order to focus on technologies logistically practical within present-day schools and homes, studies involving elaborate and highly technical virtual reality rendering machines were also excluded (e.g., Strickland et al. 1996). Several recent reviews have covered the use of video modeling with individuals with ASD (e.g., Delano 2007; McCoy and Hermansen 2006; Shukla-Mehta et al. 2010); therefore, studies in which computers were used solely as a means to deliver video modeling interventions were excluded (e.g., Kinney et al. 2003; Mechling and Langone 2000; Sansosti and Powell-Smith 2008). However, if the video modeling intervention also included a component that required the participant to provide input (e.g., mouse click, keyboard stroke, or screen touch) and this input in some way altered the course of the software program (e.g., presented a reinforcer, provided error correction, or delivered a prompt), then the study was still considered for inclusion. Finally, studies in which access to a computer was provided only as a reinforcer for communication were excluded (e.g., Bernard-Opitz et al. 1990).

Data Extraction

Each identified study was first assessed for inclusion or exclusion. Then, studies selected for inclusion were summarized in terms of the following features: (a) participant characteristics, (b) communication skills targeted, (c) details regarding the CBI, (d) intervention outcomes, and (e) certainty of evidence. Various procedural aspects were also noted, including setting, experimental design, and inter-observer agreement (IOA). Outcomes of CBI on communication skills were summarized for each study. If an intervention study targeted both communication skills and other skills (e.g., academics), only variables relevant to communication skills were included in data analysis (e.g., Heimann et al. 1995). For studies that employed group designs or analyzed data at the group level, standardized mean difference effect sizes were estimated from F-statistics or repeated-measures data using unbiased calculations of Hedges’ g (Cooper and Hedges 1994; Hedges and Olkin 1985). Hedges and Olkin demonstrated that Hedges’ g is less subject to error than other effect size calculations when used with small samples (i.e., n < 30). For single-subject design studies, the Nonoverlap of All Pairs (NAP; Parker and Vannest 2009) was calculated from the graphed data.
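For readers who want to reproduce the group-design effect sizes, the following sketch shows the standard unbiased Hedges' g computation for two groups and the conventional conversion from a two-group F statistic. It is illustrative only: the exact repeated-measures variant the authors applied is not spelled out above, and the function names are ours rather than from the reviewed studies.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Unbiased standardized mean difference (Hedges' g) for two groups.

    Standard formulas from Hedges and Olkin (1985): Cohen's d from the pooled
    standard deviation, corrected by the small-sample factor J = 1 - 3/(4*df - 1).
    """
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df)
    d = (m1 - m2) / s_pooled            # Cohen's d
    j = 1 - 3 / (4 * df - 1)            # bias correction for small samples
    return j * d

def d_from_f(f_value, n1, n2):
    """Convert a two-group, between-subjects F statistic (F = t^2) to Cohen's d."""
    return math.sqrt(f_value * (n1 + n2) / (n1 * n2))
```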

NAP is an index of data overlap between single-subject design phases, similar to the Percent of Nonoverlapping Data (PND; Scruggs and Castro 1987), the Percent of All Nonoverlapping Data (PAND; Parker et al. 2007), and the Percentage Exceeding Median (PEM; Ma 2006). However, NAP equals or outperforms PND, PAND, and PEM (Parker and Vannest 2009). For one, the external validity of NAP exceeds that of PND and PEM: NAP values correlate more strongly than PND and PEM values with visual analysis judgments and R² effect sizes. Additionally, confidence intervals may be calculated for NAP values, whereas they may not for PND and PEM because those statistics’ sampling distributions are unknown (Scotti et al. 1991; Ma 2006). Lastly, NAP is more robust than PND, PAND, and PEM to the influence of outliers (e.g., a maximum value appearing one time during baseline). The addition of a single outlier to a data set can greatly alter PND, PAND, and PEM values, whereas the NAP statistic will not be substantially skewed. Consequently, NAP can more accurately represent the dominant trends in the data.

NAP is calculated by comparing every baseline “A” data point with every intervention phase “B” data point. In studies of treatments designed to increase behavior, a “nonoverlapping pair” is an “AB” pair in which the “B” point is higher than the “A” point. The NAP is calculated by summing the number of comparison pairs not showing overlap and one-half the number of tied comparison pairs, and then dividing by the total number of comparisons. Mathematically, NAP is expressed as:
$$ \text{NAP} = \frac{(N_{\text{A}} \times N_{\text{B}}) - (O + 0.5T)}{N_{\text{A}} \times N_{\text{B}}} \quad (1) $$
where \(N_{\text{A}}\) = the number of data points in the “A” or baseline phase, \(N_{\text{B}}\) = the number of data points in the “B” or treatment phase, O = the number of overlapping pairs of data points from the “A” and “B” phases, and T = the number of comparisons in which both data points have the same y-value/dependent score.

Using the guidelines for interpretation recommended by Parker and Vannest (2009), NAP scores between 0 and .65 can be classified as “weak effects”, .66 to .92 as “medium effects”, and .93 to 1.0 as “strong effects”. For more complete details on NAP calculation procedures and statistical validation, see Parker and Vannest (2009).
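As a concrete illustration of Eq. 1, the short sketch below computes NAP for a behavior expected to increase from baseline to treatment. The function name and example data are ours, not taken from the reviewed studies.

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs (Parker and Vannest 2009), following Eq. 1 above.

    O counts pairs in which the treatment point falls below the baseline point
    (overlap, for a behavior expected to increase); T counts tied pairs.
    """
    n_a, n_b = len(baseline), len(treatment)
    overlap = sum(1 for a in baseline for b in treatment if b < a)
    ties = sum(1 for a in baseline for b in treatment if b == a)
    return (n_a * n_b - (overlap + 0.5 * ties)) / (n_a * n_b)

# Hypothetical data: baseline [2, 3, 4], treatment [5, 6, 4] gives 8 nonoverlapping
# pairs and 1 tie out of 9 comparisons, so NAP = (9 - 0.5) / 9 ≈ 0.94 ("strong").
```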

Certainty of evidence was evaluated by considering the results in light of the research design and other methodological details (Schlosser and Sigafoos 2007). The certainty of evidence for each study was rated as “suggestive”, “preponderance”, or “conclusive”. This classification system was adapted from the descriptions provided by Smith (1981) and Simeonsson and Bailey (1991). The lowest level of certainty was classified as suggestive evidence. Studies within this category may have utilized an AB or intervention-only design, but did not involve a true experimental design (e.g., a group design with random assignment, a multiple baseline, or an ABAB design). The second level of certainty was classified as preponderance of evidence. Studies within this level contained the following five qualities. First, studies in this category utilized an experimental design; for single-subject designs, this also required demonstration of experimental control (e.g., divergence in data paths within an alternating treatment design). Second, adequate IOA and treatment fidelity measures were reported (i.e., a minimum of 20% of sessions with 80% or higher agreement or reliability). Third, dependent variables were operationally defined. Fourth, sufficient detail to enable replication was provided. The fifth quality of studies at the preponderance level was that, despite these four attributes, they were in some substantial way limited in their ability to control for alternative explanations for treatment effects. For example, if concurrent interventions (e.g., CBI and discrete trial training) simultaneously targeted the same or related dependent variables and no design feature controlled for the non-CBI intervention’s influence on the communication dependent variable, the study would be classified at the preponderance level. The highest level of certainty was classified as conclusive. Within this level, studies had all the attributes of the preponderance level, but also provided at least some control for alternative explanations for treatment gains (e.g., a multiple baseline across participants in which the introduction of the CBI is staggered and concurrent interventions are held constant, or a group design with appropriate blinding and randomization).
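The decision logic of this three-level classification can be summarized in a short sketch. The boolean parameters below are our simplification of the narrative criteria, and the actual ratings involved reviewer judgment rather than a mechanical rule.

```python
def certainty_of_evidence(true_experimental_design, adequate_ioa_and_tf,
                          operational_definitions, replicable_detail,
                          controls_alternative_explanations):
    """Approximate the suggestive/preponderance/conclusive rubric described above."""
    meets_core = (true_experimental_design and adequate_ioa_and_tf
                  and operational_definitions and replicable_detail)
    if not meets_core:
        # e.g., AB or intervention-only designs, or inadequate reporting
        return "suggestive"
    if controls_alternative_explanations:
        return "conclusive"
    # experimental and well reported, but alternative explanations remain plausible
    return "preponderance"
```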

Reliability of Search Procedures and Inter-rater Agreement

In order to ensure the accuracy of the systematic search, two authors independently applied the inclusion and exclusion criteria to the list of 18 studies that resulted following the initial screening of the 222 abstracts. The two authors then independently made an initial determination as to whether each of the 18 studies identified met inclusion criteria. Ten studies met the criteria for inclusion. Agreement as to whether a study should be included or excluded was 94% (i.e., agreement was obtained on 17 of the 18 studies). Mechling et al. (2002) was identified for inclusion by one author and exclusion by the other. Ultimately, this article was excluded because the dependent variable (e.g., reading aloud) was considered to be more academic than communicative.

After the list of included studies was agreed upon, the first author extracted information to develop an initial summary of the 10 included studies. The accuracy of these summaries was independently checked by one of the remaining co-authors using a checklist that included the initial summary of the study and five questions regarding various details of the study. Specifically, (a) is this an accurate description of the participants? (b) Is this an accurate description of the communication skills being targeted? (c) Is this an accurate summary of the CBI? (d) Is this an accurate description of the results? (e) Is this an accurate summary of the certainty of evidence? Co-authors were asked to read the study and the summary and then complete the checklist. In cases where the summary was not considered accurate, the co-authors were asked to edit the summary to improve its accuracy. This process was continued until co-authors were in 100% agreement regarding the accuracy of the summaries. The resulting summaries were then used to create Table 1.
Table 1

Summary and analysis of reviewed studies. Each entry lists the citation, participant characteristics, communication skill(s) targeted, details of the computer-based intervention, and results and certainty of evidence.

Bernard-Opitz et al. (1999)
Participants: 9 male and 1 female, nonverbal with severe autism, 3 to 7 years old (M = 6 years)
Communication skill(s) targeted: Vocal imitation of syllables
Software: IBM SpeechViewer system (IBM, 1988)
Procedure: Trainer/parent sat next to the participant and operated the microphone. The microphone was turned off during inappropriate vocalizations. Praise, small edibles, and computer animations were used as reinforcers.
Setting: University-based laboratory
Time: 5 min per session for 10 sessions
Results: CBI was more effective than person-implemented instruction (\( \hat{\delta }_{\text{F-test}} = 2.913 \)).
Certainty: Suggestive; TF measures were NR but were needed, because the accuracy of turning off the microphone during inappropriate vocalizations would affect computer data collection and outcomes.

Bosseler and Massaro (2003)
Participants: 7 male and 1 female, with moderate to severe autism, 7 to 12 years old (M = 10 years)
Communication skill(s) targeted: Receptive identification of pictures and the production of spoken words
Software: “Baldi” (Animated Speech Corp., 2010)
Procedure: CBI progressed in difficulty, ultimately requiring speech from participants. A digital smiling or frowning face was delivered as feedback for target behavior.
Setting: School-based program for children with autism
Time: 10 to 40 min per session, 2 sessions per week, for 6 months
Results: Experiment 1: Number of vocabulary words increased (\( \hat{\delta }_{\text{RM}} = 0.710 \)). Experiment 2: Number of vocabulary words increased (\( \hat{\delta }_{\text{RM}} = 2.884 \)).
Certainty: Conclusive [a]; although TF was not measured, the second experiment served the same function as TF by providing evidence that gains were due to CBI. The second experiment also provided a control for alternative explanations for treatment gains.

Coleman-Martin et al. (2005)
Participants: 1 female [b], with moderate autism, 12 years old
Communication skill(s) targeted: Receptive identification of vocabulary words
Software: Microsoft PowerPoint
Procedure: Three conditions were compared: CBI only, CBI + teacher instruction, and teacher instruction only. CBI involved displaying PowerPoint slides for each target word. The first slide displayed the target word and prompted the student to say the word while the computer also said the word. Subsequent slides presented the word by phoneme and the student repeated the phonemes. The final slide presented and said the whole word. Upon completion, a slide displayed a colorful picture and said “Excellent”.
Setting: Self-contained classroom for students with autism
Time: NR
Results: Number of vocabulary words increased (NAP = 100%). Person-implemented instruction was also associated with a NAP value of 100%.
Certainty: Suggestive; the design utilized was an ABACAD, in which “A” was baseline, “B” teacher instruction, “C” teacher + CBI, and “D” CBI only. Therefore, the effect of CBI was not replicated for the participant with autism, and the sequencing of conditions may have influenced results.

Heimann et al. (1995)
Participants: A total of 30 children were assigned to 1 of 3 groups based upon diagnosis. The autism group included 9 males and 2 females with mild to moderate autism, 7 to 14 years old (M = 9 years)
Communication skill(s) targeted: Sentence imitation, verbal expression, and phonological awareness
Software: The Alpha program (Nelson and Prinz 1991)
Procedure: Alpha provided visual and audio stimuli intended to be reinforcing and delivered a specialized curriculum of 4 modules and 112 lessons. Participants progressed through the program as test scores indicated mastery of each level.
Setting: Schools and daycares in Sweden
Time: Sessions were 32 min (SD = 12.6 min). On average, children with autism received 25.6 sessions (SD = 7.5 sessions) across 16.9 weeks (SD = 5.7 weeks)
Results: For the autism group, sentence imitation improved (\( \hat{\delta }_{\text{RM}} = 0.428 \)), verbal expression improved (\( \hat{\delta }_{\text{RM}} = 0.446 \)), and phonological awareness improved (\( \hat{\delta }_{\text{RM}} = 0.164 \)).
Certainty: Preponderance; group assignment was not random, and teachers did not administer outcome measures to participants whom they believed would be too difficult to test, which likely biased data toward positive results.

Hetzroni and Shalem (2005)
Participants: 3 male and 3 female, with autism and moderate intellectual disability, 10 to 13 years old (M = 11 years)
Communication skill(s) targeted: Identifying and matching words to pictures of food items to be used within picture-based communication boards
Software: Designed by the researchers using the C++ programming language
Procedure: The program implemented a 7-step gradual fading procedure (Krantz and McClannahan 1998) that moved from a picture of a food item (e.g., a bag of chips) with a written label to only the written label. Smiley face graphics were used as reinforcement. Following CBI training, students matched written text to food items in the classroom as a means to request.
Setting: School computer room and classroom
Time: 8 to 11 sessions (M = 9 sessions), session length NR
Results: Correct matches between text and food items improved for all participants (M NAP [c] = 90.3%, values ranged from 79% to 97.9%); results were maintained over time.
Certainty: Preponderance; although TF was NR, the researcher observed all intervention sessions to ensure the computer program ran as designed.

Hetzroni and Tannous (2004)
Participants: 3 male and 2 female, with moderate autism, 7 to 12 years old (M = 10 years)
Communication skill(s) targeted: Decreasing delayed echolalia, immediate echolalia, and irrelevant speech, and increasing relevant speech and communication initiation
Software: A researcher-developed program created for the study
Procedure: The program provided animations of common settings from daily routines (e.g., mealtime). For each setting, an animation of an adult asking a question (e.g., “What would you like to eat?”) was played and the child selected from a list of potential responses. The program then played an animation depicting the results of the chosen response (e.g., the mother delivering the requested food item).
Setting: Computer room at school
Time: Exposure time increased as sessions progressed: initial sessions were 10 min and final sessions were 25 min, across 6 to 18 sessions.
Results: Delayed echolalia decreased (\( \hat{\delta }_{\text{RM}} = 1.156 \)), immediate echolalia decreased (\( \hat{\delta }_{\text{RM}} = 0.314 \)), irrelevant speech decreased (\( \hat{\delta }_{\text{RM}} = 0.552 \)), relevant speech increased (\( \hat{\delta }_{\text{RM}} = 1.273 \)), and communication initiations increased (\( \hat{\delta }_{\text{RM}} = 0.819 \)) [d].
Certainty: Suggestive; not all of the single-subject data were reported.

Massaro and Bosseler (2006)
Participants: 4 male and 1 female, with mild to moderate autism, 8 to 13 years old (M = 10 years)
Communication skill(s) targeted: Receptive identification of pictures and the production of spoken words
Software: Baldi
Procedure: The program provided 5 exercises (pre-test, presentation, recognition, elicitation, and post-test). The program presented each novel item, prompted the student to select parts of items with the mouse, provided praise and a happy face as reinforcement, provided a sad face for incorrect responses, and eventually asked the student to verbally name the items or state the items’ function.
Setting: School program for children with autism
Time: 14 sessions, 30 min per session
Results: The overall average of correct receptive responses pooled across lessons increased (\( \hat{\delta }_{\text{F-test}} = 3.694 \)). The animated talking head plus synthesized voice resulted in more correct responses than the voice alone.
Certainty: Suggestive; the data were collapsed across participants in the alternating treatment design and pre-/post-testing.

Moore and Calvart (2000)
Participants: 12 male and 2 female, mild to severe autism, 3 to 6 years old
Communication skill(s) targeted: Identification of vocabulary words
Software: A researcher-developed program created for the study
Procedure: The program gave digitized voice commands (e.g., “touch the ladybug”) and provided animations, praise, and interesting sounds as reinforcers contingent upon correct responses.
Setting: Specialized school for children with autism
Time: CBI took place over a 2-day period, but the duration of sessions was NR
Results: Number of words correctly identified increased (\( \hat{\delta }_{\text{F-test}} = 1.651 \)).
Certainty: Suggestive; the hardware and the piloted software developed for this study were not described in sufficient detail to enable replication.

Parsons and La Sorte (1993)
Participants: 5 male and 1 female, all with at least some verbal language and mild to moderate autism, 4.5 to 6.5 years old (M = 6 years)
Communication skill(s) targeted: Frequency of spontaneous utterances, including responses to questions, requests, comments, and initiations, and decreasing echolalia
Software: Keytalk
Procedure: A clinician sat next to the participant and prompted the typing of words. The computer spoke the typed words, and participants’ verbal utterances were counted during sessions.
Setting: Empty room within school
Time: 30 min per session, 3 sessions per week, for 10 weeks
Results: Number of utterances increased from baseline to the speech condition (M NAP = 99.1%, values ranged from 97.2% to 100%); from baseline to the no-speech condition, M NAP = 12% (values ranged from 0% to 27.8%).
Certainty: Suggestive; CBI with and without synthesized speech was compared by alternating between conditions in a counterbalanced way. However, the effect of CBI with or without speech was evaluated in two AB designs, and TF measures were not reported.

Simpson et al. (2004)
Participants: 2 male and 2 female, with mild to moderate autism, 5 to 6 years old (M = 5.5 years)
Communication skill(s) targeted: Spontaneous verbal greetings to peers
Software: HyperStudio 3.2 (Roger Wagner Publishing, Inc., 1993–1998)
Procedure: HyperStudio presented a series of screens (“stacks”) that provided written instruction, synthesized speech, and video examples. Participants selected icons depicting different skills in order to watch instructional videos and hear synthesized speech.
Setting: Special education classroom
Time: 1 session per day for 24 days; each session contained 36 trials (12 trials in the morning, 12 after lunch, and 12 at the end of the day). Participants were allotted 45 min to complete each set of trials, but the total amount of time spent was NR
Results: Social greetings increased for all participants (M NAP = 97%, values ranged from 88.1% to 100%).
Certainty: Conclusive; a multiple baseline across participants controlled for the influence of concurrent interventions.

M mean, CBI computer-based intervention, \( \hat{\delta }_{\text{F-test}} \) standardized mean difference effect size estimated from an F-statistic using unbiased calculations of Hedges’ g, TF treatment fidelity, NR not reported, \( \hat{\delta }_{\text{RM}} \) standardized mean difference effect size estimated from repeated-measures data using unbiased calculations of Hedges’ g, NAP Nonoverlap of All Pairs statistic

[a] Conclusive certainty of evidence requires the consideration of results from Experiments 1 and 2 in tandem

[b] Two other participants who did not have autism were also involved; however, because the length of the initial baseline phase was not staggered across participants, the design cannot be considered an experimental multiple baseline

[c] Mean nonoverlapping pairs across all participants

[d] Because the single-subject data were incomplete, \( \hat{\delta }_{\text{RM}} \) was calculated for data compiled across participants

This approach was intended to ensure accuracy in the summary of studies and to provide a measure of inter-rater agreement on data extraction and analysis. There were 50 items on which there could be agreement or disagreement (i.e., 10 studies with 5 questions per study). Initial agreement was obtained on 45 items (90%), and the summaries were then corrected until agreement reached 100%.

Results

Table 1 summarizes the following: (a) participant characteristics, (b) communication skills, (c) details of the CBI, (d) outcomes, and (e) certainty of evidence.

Participant Characteristics

Collectively, the 10 studies provided intervention to a total of 70 participants. The sample size of individual studies ranged from 1 to 14 (M = 7). Participant ages ranged from 3 to 14 years (M = 8.2 years). Fifty-four (77%) of the participants were male and 16 (23%) were female. The participants in these studies were all diagnosed with autism. Across studies, various methods were used to describe participant characteristics. Using the information available in each study, it appears that the majority of participants could be classified as having mild to moderate autism. Only three studies mention including participants with characteristics suggesting severe autism (i.e., Bernard-Opitz et al. 1999; Bosseler and Massaro 2003; Moore and Calvart 2000).

Communication Skills Targeted

A variety of communication skills were measured, and most studies targeted multiple communication skills. Five studies evaluated changes in receptive language following the use of CBI to teach novel vocabulary words (Bosseler and Massaro 2003; Coleman-Martin et al. 2005; Hetzroni and Shalem 2005; Massaro and Bosseler 2006; Moore and Calvart 2000). Two studies were designed to increase the frequency of vocal imitation. Specifically, Bernard-Opitz et al. (1999) targeted imitation of syllables and Heimann et al. (1995) targeted imitation of spoken sentences. Three studies taught social and conversation initiations (Hetzroni and Tannous 2004; Parsons and La Sorte 1993; Simpson et al. 2004). Six studies were designed to increase the frequency of spoken utterances (Bosseler and Massaro 2003; Massaro and Bosseler 2006; Heimann et al. 1995; Hetzroni and Tannous 2004; Parsons and La Sorte 1993; Simpson et al. 2004). One study taught phonological awareness (Heimann et al. 1995), and one taught responding to questions (Parsons and La Sorte 1993). Two studies sought to improve communication by decreasing echolalia and other inappropriate speech (Hetzroni and Tannous 2004; Parsons and La Sorte 1993).

Hardware

The majority (n = 8) of the studies implemented CBI using desktop computers equipped with a typical monitor, keyboard, and mouse. One study did not describe the computer (Moore and Calvart 2000), and one used a laptop (Massaro and Bosseler 2006). A few studies also used microphones (e.g., Bernard-Opitz et al. 1999) or touch screens (e.g., Bosseler and Massaro 2003) to allow participants to provide input. Across studies in which the hardware was described in detail, the hardware utilized was well below the performance ability of common store-bought computers available today. For example, all studies in which the processing speed and memory capabilities of the hardware were reported identified computers with less than 2 GHz of processing speed and less than 512 MB of RAM. Table 2 provides a description of the minimum system requirements for each of the software programs used.
Table 2

Summary of software characteristics. Each entry lists the software’s capabilities, availability and price, minimum system requirements, and a citation for product information.

Alpha Program
Capabilities: Student selects words or makes sentences and the computer then displays the words/sentences; animated color graphics; gives audible speech output and American Sign Language translation.
Availability and price: Discontinued
Minimum system requirements: Designed for use with Apple IIe & Apple IIGS
Citation: AbleData (2006)

Baldi/Timo
Capabilities: Realistic animated talking head models craniofacial movements of actual speech and produces synthesized speech.
Availability and price: Available as “Timo” from Animated Speech Corporation; 1 user, less than US $100; 5 users, less than US $250; Timo Lesson Creator, US $250.
Minimum system requirements: 600 MHz CPU, 128 MB RAM, 16-bit or better color monitor, Sound Blaster or compatible 16-bit sound card, speakers or headphones, microphone (optional for some sections), CD-ROM drive, 300 MB disk space
Citation: ASC (2005)

HyperStudio
Capabilities: The multimedia presentation tool features video, sound (e.g., voice), animations, text, and interactive options that can be embedded in stacks, similar to PowerPoint slides.
Availability and price: Available as HyperStudio version 5; price less than US $200
Minimum system requirements (Macintosh Edition): OS X 10.4.11 or later, G4 400 MHz CPU, 256 MB RAM, 800 × 600 monitor, 800 MB disk space
Minimum system requirements (Windows Edition): Windows XP or later, 600 MHz CPU, 512 MB RAM, 100% DirectX 9.0c compatible video card, 800 × 600 16-bit color monitor, 1 GB disk space
Citation: MacKiev (2010)

Keytalk
Capabilities: As words are typed, the computer displays the words on screen and produces synthesized speech.
Availability and price: Discontinued
Minimum system requirements: Designed for use with Apple IIe & Apple IIGS
Citation: AbleData (2006)

PowerPoint
Capabilities: The presentation software includes prerecorded sounds (e.g., speech), animations, and text that can be embedded within slides.
Availability and price: Available from Microsoft; price less than US $200
Minimum system requirements: 500 MHz CPU, 256 MB RAM, 1024 × 576 resolution color monitor, graphics card with 64 MB video memory, 1.5 GB disk space
Citation: Microsoft PowerPoint (1997–2003)

SpeechViewer
Capabilities: Converts speech into interactive graphic displays synchronized with audio playback; contains modules designed to improve pitch, loudness, timing, and vowel production.
Availability and price: Discontinued
Minimum system requirements: 512 MB RAM and IBM M-Audio Capture and Playback Adapter or Micro Channel, any color monitor
Citation: Synapse Adaptive (2010)

Software Programs

The software used included programs that were designed by researchers specifically for their studies (Hetzroni and Shalem 2005; Hetzroni and Tannous 2004; Moore and Calvart 2000) and programs that are or were mass produced and marketed. Two studies used software programs designed to deliver multimedia presentations (i.e., PowerPoint and HyperStudio). These programs required the researcher or teacher to create the presentation used in the intervention with the tools provided within the programs (Coleman-Martin et al. 2005; Simpson et al. 2004). Four studies used programs specifically designed to deliver speech and language interventions to children with developmental disabilities. Of the programs used in these studies, three have been discontinued and may be difficult to purchase (IBM SpeechViewer, Keytalk, and the Alpha Program). Table 2 provides a summary of each program’s capabilities, availability, price at the time this review was submitted, minimum system requirements, and citations for program product information.

Outcomes

All studies reported CBI was associated with participant improvement on communication-related dependent variables. When averaging across dependent outcomes within studies and then across studies, CBI was found to have a repeated-measures-derived effect size of 1.015, an F-statistic-derived effect size of 3.898, and a NAP of 96.6%. When interpreting the effect size estimates reported here, readers should be aware that single-group, repeated-measures \( \hat{\delta }{\text{s}} \) are larger than those resulting from independent group, post-test-only designs due to the correlation between pre- and post-tests (Dunlap et al. 1996; Rosenthal 1994). Confidence intervals for the effect size estimates and NAP statistics were not calculated, and statistical tests of significance were not performed, due to the inadequate size of study samples and the resulting instability of variance estimates (Hedges and Olkin 1985).

Six of the studies evaluated the effect of CBI across time for a single group or a single participant using repeated-measures (i.e., pre- and post-tests) or multiple baseline designs involving single AB phase pairs (Bosseler and Massaro 2003; Heimann et al. 1995; Hetzroni and Shalem 2005; Hetzroni and Tannous 2004; Moore and Calvart 2000; Simpson et al. 2004). In these studies, CBI was observed to be associated with improvements in participants’ (a) number of vocabulary words (\( \hat{\delta }_{\text{RM}} = 0.710 \), \( \hat{\delta }_{\text{RM}} = 2.884 \); Bosseler and Massaro 2003), (b) words correctly identified (\( \hat{\delta }_{\text{F-test}} = 1.651 \); Moore and Calvart 2000), (c) correct matches between text and food items (M NAP = 90.3%, values ranged from 79 to 97.9%; Hetzroni and Shalem 2005), (d) sentence imitation (\( \hat{\delta }_{\text{RM}} = 0.428 \); Heimann et al., 1995), (e) phonological awareness (\( \hat{\delta }_{\text{RM}} = 0.164 \); Heimann et al. 1995), (f) verbal expression (\( \hat{\delta }_{\text{RM}} = 0.446 \); Heimann et al. 1995), (g) communication initiations (\( \hat{\delta }_{\text{RM}} = 0.819 \); Hetzroni and Tannous 2004), (h) relevant speech (\( \hat{\delta }_{\text{RM}} = 1.273 \); Hetzroni and Tannous 2004), and (i) social greetings (M NAP = 97%, values ranged from 88.1% to 100%). Also, the studies reported CBI to be associated with decreases in (a) delayed echolalia (\( \hat{\delta }_{\text{RM}} = 1.156 \)), (b) immediate echolalia (\( \hat{\delta }_{\text{RM}} = 0.314 \)), and (c) irrelevant speech (\( \hat{\delta }_{\text{RM}} = 0.552 \)).

Two studies tested the effect of CBI by comparing it to person-implemented instruction (Bernard-Opitz et al., 1999; Coleman-Martin et al., 2005). Bernard-Opitz and colleagues (1999) found CBI to be associated with larger improvements in percentages of vocal imitations (\( \hat{\delta }_{\text{F-test}} = 2.913 \)). Coleman-Martin et al. (2005) found both CBI and person-implemented instruction to be associated with NAP values of 100%, as well as the participant’s correct identification of 100% of vocabulary words.

The final two studies inspected the effect of CBI by conducting component analyses of CBI programs (Massaro and Bosseler 2006; Parsons and La Sorte 1993). Massaro and Bosseler (2006) compared the effect of a software program that included an animated talking head plus synthesized voice with the effect of the same program that included the voice alone (i.e., the animated talking head was disabled). The animated talking head was associated with a higher overall average of correct receptive responses pooled across lessons (\( \hat{\delta }_{\text{F-test}} = 3.694 \)). Similarly, Parsons and La Sorte (1993) compared the effect of a software program that included a speech feature with the effect of the same program when the speech feature was disabled. Greater increases in participants’ number of utterances were observed from baseline to intervention when the speech feature was enabled (M NAP = 99.1%, values ranged from 97.2 to 100%) than from baseline to intervention when the speech feature was disabled (M NAP = 12%, values ranged from 0 to 27.8%).

Certainty of Evidence

The certainty of evidence in each study was classified as suggestive, preponderance, or conclusive. The certainty of evidence for an intervention effect was rated as conclusive for two studies (Bosseler and Massaro 2003; Simpson et al. 2004). Two studies were rated as providing the preponderance level of certainty (Heimann et al. 1995; Hetzroni and Shalem 2005). In the remaining six studies, the certainty of evidence for an intervention effect was judged to be suggestive. These suggestive ratings were due to the use of nonexperimental designs or a lack of experimental control over the improvement in the communication dependent variable. Table 1 gives the specific reason each study was rated at its level of certainty.

Discussion

Our systematic search yielded 10 studies involving the use of CBI to teach communication skills to individuals with autism. Summary and analysis of these studies revealed that the existing literature base is perhaps best described as limited in both scope and methodological quality. In terms of scope, the current database must be considered limited because of the sheer paucity of studies (n = 10) and the relatively small number of participants (n = 70). In terms of methodological quality, perhaps the most important limitation is that many of the studies contained research designs that could only provide a suggestive level of certainty regarding the ability of CBI to cause meaningful improvement in the communication of children with ASD. However, even though the data must be interpreted with caution, all studies did report some improvement in communication. Therefore, although CBI for communication skills of children with ASD should not yet be considered a research-based approach, it does seem a promising practice that certainly warrants future research.

In terms of our aim to inform and guide practitioners interested in the use of CBI, a few important considerations are raised by this review. First, the summary of the software programs utilized in CBI (see Table 2) reveals that only three of the software programs evaluated in peer-reviewed journals are currently being manufactured and marketed (i.e., HyperStudio, PowerPoint, and Baldi/Timo). Of this group, only Baldi (new version called “Timo”) is designed specifically for the purpose of improving communication. Both PowerPoint and HyperStudio require the teacher or therapist to create the curriculum or intervention using the software. As such, the effects of CBI implemented with these programs likely depend more on the qualities of the PowerPoint or HyperStudio presentation created than on the software itself.

CBI is an intervention delivery system, and just as in person-implemented intervention, the success of the intervention depends in large part on the extent to which the system (the person or the computer) is able to implement effective techniques (e.g., prompting, reinforcement, and breaking complex concepts down to simple components). Practitioners interested in the use of CBI should be careful to ensure that interventions created using software programs not specifically designed to teach communication (PowerPoint and HyperStudio) contain the functional properties of effective communication interventions. For example, if communication is to be increased, the intervention delivery system needs to have the ability to provide reinforcement (i.e., the mechanism by which behaviors become more frequent). The type and qualities of stimuli that are able to act as reinforcers differ across individuals, and therefore, communication intervention must be able to individualize reinforcers (i.e., to select the properties of reinforcers depending on child preference). When creating PowerPoint or HyperStudio presentations for CBI, the interests and preferences of the student should be considered. For example, a student interested in trains could have noises, animations, and graphics related to trains used as reinforcers in his or her program.

Communication interventions should also be designed to promote generalization (i.e., use of the target skill in the natural environment). Hetzroni and Tannous (2004) created a program that was based upon specific activities from the participants’ daily routine during which communication could be improved. The program involved graphic animations of playtime, mealtime, and a hygiene routine and played a recorded voice that asked children questions related to these activities (e.g., “What game do you want to play?” for playtime). The participant was then presented with three different pictures of options and was allowed to choose among the options using the cursor. The computer then presented a video clip depicting the results of the chosen response, for example, a video of a child engaged in the requested play activity. Data on the children’s communication collected in the natural environment during mealtime, playtime, and hygiene routines suggested the CBI improved communication in these real-world situations. CBI programs should be created to promote generalization by involving natural settings and providing representations of the natural consequences associated with specific communication behaviors in those settings.

A second issue for practitioners to consider prior to implementing CBI involves the ability of the individual to use the computer (Grynszpan et al. 2008). For example, some students may have deficits in visual discrimination, intellectual functioning, and fine motor skills that preclude their ability to move the mouse, press keys, or attend to the stimuli on the monitor (Wong et al. 2009). These potential skill deficits may be one of the reasons why the majority of studies reviewed involved individuals with mild to moderate autism, and few studies included individuals with severe autism. Very few interventions designed to improve the computer interface ability of the individuals with autism have been evaluated, and additional research in this area is warranted (Grynszpan et al. 2008; Wong et al. 2009).

The Baldi/Timo program differs from the other currently available software evaluated in the research literature because it was designed specifically to improve communication and not as a multipurpose presentation tool. Baldi/Timo can be individualized in a number of specific ways to meet the unique needs of an individual student. Specifically, the speaking, captioning, and graphic animations can be turned on or off depending upon the level of support desired and in the event that one or more of these functions is found to be distracting or aversive to the student. This ability to select the features used also allows a type of prompt selection that could be used in a manner consistent with different prompting hierarchies (e.g., least to most, graduated guidance, and errorless learning). Timo may be individualized further by purchasing an additional software component called “Timo’s Lesson Creator”, which allows modification of individual lesson titles, uploading of specific images (e.g., pictures of the student), and individualized recorded messages that can greet the student by name, deliver pre-specified reinforcers, and provide specific instructions or prompts based upon the child’s educational program. For example, if the child’s teacher says a specific phrase when it is time to line up or clean the classroom, that phrase can be spoken by the program (ASC 2005). In this way, elements of the natural environment may be included that may promote the generalization of acquired skills.

In terms of directions for future research regarding CBI in communication for individuals with autism, many potential research questions remain unanswered. Perhaps most important is the need for additional research capable of providing a high level of certainty with a larger number of participants with mild to severe autism. Additionally, few studies have addressed the relative efficacy of CBI versus person-implemented intervention or the potential adverse consequences associated with CBI (e.g., reduced opportunities for social interaction). One such study by Chen and Bernard-Opitz (1993) compared CBI to person-implemented instruction with four children with autism and found fewer problem behaviors during CBI than during person-implemented instruction, but no difference in learning rate. Future research in which similar comparisons are made using more current technology, and in which additional comparisons are examined, is needed.

Notes

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

*Studies included within the review

  1. AbleData. (2006). Alpha interactive language series/Gator super sentences. Retrieved July 18, 2010 from: http://www.abledata.com.
  2. American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: American Psychiatric Association.
  3. Barron, A. E., Harmes, C. J., & Kemker, K. J. (2006). Technology as a classroom tool: Learning with laptop computers. In T. W. H. Leo & R. Subramaniam (Eds.), Handbook of research on literacy in technology at the K-12 level. Hershey, PA: IGI Global.
  4. Bernard-Opitz, V., Ross, K., & Tuttas, M. (1990). Computer-assisted instruction for autistic children. Annals Academy of Medicine, 19, 611–616.
  5. *Bernard-Opitz, V., Sriram, N., & Sapuan, S. (1999). Enhancing vocal imitations in children with autism using the IBM Speechviewer. Autism, 3, 131–147.
  6. Blischak, D. M., & Schlosser, R. W. (2003). Use of technology to support independent spelling by students with autism. Topics in Language Disorders, 23, 293–304.
  7. *Bosseler, A., & Massaro, D. (2003). Development and evaluation of a computer-animated tutor for vocabulary and language learning in children with autism. Journal of Autism and Developmental Disorders, 33, 653–672.
  8. Carr, E. G., & Durand, V. M. (1985). Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis, 18, 111–126.
  9. Charlop-Christy, M., & Trasowech, J. (1991). Increasing autistic children’s spontaneous speech. Journal of Applied Behavior Analysis, 24, 141–161.
  10. Chen, S. H. A., & Bernard-Opitz, V. (1993). Comparison of personal and computer-assisted instruction for children with autism. Mental Retardation, 31, 368–376.
  11. Colby, K. (1973). The rationale for computer-based treatment of language difficulties in non-speaking autistic children. Journal of Autism and Childhood Schizophrenia, 3, 254–260.
  12. *Coleman-Martin, M., Heller, K., Cihak, D., & Irvine, K. (2005). Using computer-assisted instruction and the nonverbal reading approach to teach word identification. Focus on Autism & Other Developmental Disabilities, 20, 80–90.
  13. Cooper, H., & Hedges, L. V. (Eds.). (1994). The handbook of research synthesis. New York: Russell Sage Foundation.
  14. Delano, M. E. (2007). Video modeling interventions for individuals with autism. Remedial and Special Education, 28, 33–42.
  15. Dunlap, W. P., Cortina, J. M., Vaslow, J. B., & Burke, M. J. (1996). Meta-analysis of experiments with matched groups or repeated measures designs. Psychological Methods, 1, 170–177.
  16. Eigsti, I., Bennetto, L., & Dadlani, M. (2007). Beyond pragmatics: Morphosyntactic development in autism. Journal of Autism and Developmental Disorders, 37, 1007–1023.
  17. Fitzgerald, G., Koury, K., & Mitchem, K. (2008). Research on computer-mediated instruction for students with high incidence disabilities. Journal of Educational Computing Research, 38, 201–233.
  18. Goldenberg, E. P. (1979). Special technology for special children. Baltimore: University Park Press.
  19. Graff, R., Green, G., & Libby, M. (1998). Effects of two levels of treatment intensity on a young child with severe disabilities. Behavioral Interventions, 13, 21–41.
  20. Grynszpan, O., Martin, J., & Nadel, J. (2008). Multimedia factors for users with high functioning autism: An empirical investigation. International Journal of Human-Computer Studies, 66, 628–639.
  21. Harding, J., Wacker, D. P., Berg, W. K., Barretto, A., & Ringdahl, J. (2005). Evaluation of relations between specific antecedent stimuli and self-injury during functional analysis conditions. American Journal on Mental Retardation, 110, 205–215.
  22. Hedges, L., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando: Academic Press.
  23. *Heimann, M., Nelson, K. E., Tjus, T., & Gillberg, C. (1995). Increasing reading and communication skills in children with autism through an interactive multimedia computer program. Journal of Autism and Developmental Disorders, 25, 459–480.
  24. *Hetzroni, O., & Shalem, U. (2005). From logos to orthographic symbols: A multilevel fading computer program for teaching nonverbal children with autism. Focus on Autism & Other Developmental Disabilities, 20, 201–212.
  25. *Hetzroni, O., & Tannous, J. (2004). Computer-based intervention program on the communicative functions of children with autism. Journal of Autism and Developmental Disorders, 34, 95–113.
  26. Higgins, K., & Boone, R. (1996). Creating individualized computer-assisted instruction for students with autism using multimedia. Focus on Autism & Other Developmental Disabilities, 11, 69.CrossRefGoogle Scholar
  27. IBM. (1988). IBM personal System/2 SpeechViewer: A guide to clinical and educational applications. Atlanta, GA: International Business Machines Corporation.Google Scholar
  28. Inan, F. A., Lowther, D. L., Ross, S. M., & Strahl, D. (2010). Pattern of classroom activities during students’ use of computers: Relations between instructional strategies and computer applications. Teaching and Teacher Education, 26, 540–546.CrossRefGoogle Scholar
  29. Kinney, E., Vedora, J., & Stromer, R. (2003). Computer-presented video models to teach generative spelling to a child with an autism spectrum disorder. Journal of Positive Behavior Interventions, 5, 22–29.CrossRefGoogle Scholar
  30. Krantz, P. J., & McClannahan, L. E. (1998). Social interaction skills for children with autism: A script-fading procedure for beginning readers. Journal of Applied Behavior Analysis, 31, 191–202.PubMedCrossRefGoogle Scholar
  31. Lang, R., Machalicek, W., Rispoli, M. J., & Regester, A. (2009). Training parents to implement communication interventions for children with autism spectrum disorders: A systematic review of training procedures. Evidenced Based Communication Assessment and Intervention, 3, 174–190.CrossRefGoogle Scholar
  32. Lee, A., Hobson, R. P., & Chiat, S. (1994). I, you, me, and autism: An experimental study. Journal of Autism and Developmental Disorders, 24, 155–176.PubMedCrossRefGoogle Scholar
  33. Lord, C., & Paul, R. (1997). Language and communication in autism. In D. J. Cohen & F. R. Volkmar (Eds.), Handbook of autism and pervasive developmental disorders (2nd ed.). New York: Wiley.
  34. Ma, H. H. (2006). An alternative method for quantitative synthesis of single-subject research: Percentage of data points exceeding the median. Behavior Modification, 30, 598–617.
  35. MacKiev.com. (2010). Welcome to Roger Wagner’s HyperStudio©. Retrieved September 27, 2010 from: http://www.mackiev.com/hyperstudio/.
  36. Marrou, H. I. (1956). A history of education in antiquity (G. Lamb, Trans.). London: Sheed and Ward.
  37. *Massaro, D., & Bosseler, A. (2006). Read my lips: The importance of the face in a computer-animated tutor for vocabulary learning by children with autism. Autism: The International Journal of Research and Practice, 10, 495–510.
  38. McCoy, K., & Hermansen, E. (2006). Video modeling for individuals with autism: A review of model types and effects. Education and Treatment of Children, 30, 183–213.
  39. McEvoy, R. E., Loveland, K. A., & Landry, S. H. (1988). The functions of immediate echolalia in autistic children: A developmental perspective. Journal of Autism and Developmental Disorders, 18, 657–668.
  40. Mechling, L. C., Gast, D. L., & Langone, J. (2002). Computer-based video instruction to teach persons with moderate intellectual disability to read grocery aisle signs and locate items. The Journal of Special Education, 35, 224–240.
  41. Mechling, L., & Langone, J. (2000). The effects of a computer-based instructional program with video anchors on the use of photographs for prompting augmentative communication. Education & Training in Mental Retardation & Developmental Disabilities, 35, 90–105.
  42. Microsoft PowerPoint. (1997–2003). [Software]. Redmond, WA: Microsoft Corporation.
  43. *Moore, M., & Calvert, S. (2000). Vocabulary acquisition for children with autism: Teacher or computer instruction. Journal of Autism and Developmental Disorders, 30, 359–362.
  44. Moore, D., McGrath, P., & Thorpe, J. (2000). Computer-aided learning for people with autism: A framework for research and development. Innovations in Education and Training International, 37, 218–228.
  45. National Research Council. (2001). Educating children with autism. Washington, DC: National Academy Press.
  46. Nelson, K. E., & Prinz, P. M. (1991). Alpha interactive language series/gator super sentences [Computer Software]. Warriors Mark, PA: Super Impact Images.
  47. Panyan, M. (1984). Computer technology for autistic students. Journal of Autism and Developmental Disorders, 14, 375–382.
  48. Parker, R. I., Hagan-Burke, S., & Vannest, K. (2007). Percent of all non-overlapping data (PAND): An alternative to PND. Journal of Special Education, 40, 194–204.
  49. Parker, R. I., & Vannest, K. (2009). An improved effect size for single-case research: Nonoverlap of all pairs. Behavior Therapy, 40, 357–367.
  50. *Parsons, C. L., & La Sorte, D. (1993). The effect of computers with synthesized speech and no speech on the spontaneous communication of children with autism. Australian Journal of Human Communication Disorders, 21, 12–31.
  51. Powell, S. (1996). The use of computers in teaching people with autism. In P. Shattock & P. Linfoot (Eds.), Autism on the agenda, papers from NAS conference (pp. 128–132).
  52. Pressey, S. L. (1926). A simple device for teaching, testing, and research in learning. School and Society, 23, 373–376.
  53. Pressey, S. L. (1932). A third and fourth contribution towards the coming industrial revolution in education. School and Society, 36, 934.
  54. Rapaport, J. L., Rumsey, J. M., & Sceery, W. R. (1985). Autistic children as adults: Psychiatric, social, and behavioral outcomes. Journal of the American Academy of Child Psychiatry, 24, 456–474.
  55. Roger Wagner Publishing, Inc. (1993–1998). HyperStudio 3.2.
  56. Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. Hedges (Eds.), The handbook of research synthesis (pp. 231–244). New York, NY: Russell Sage Foundation.
  57. Sansosti, F. J., & Powell-Smith, K. A. (2008). Using computer-presented social stories and video models to increase the social communication skills of children with high functioning autism spectrum disorders. Journal of Positive Behavior Interventions, 10, 162–178.
  58. Schlosser, R. W., & Sigafoos, J. (2007). Editorial: Moving evidence-based practice forward. Evidence-Based Communication Assessment and Intervention, 1, 1–3.
  59. Schlosser, R. W., & Wendt, O. (2008). Effects of augmentative and alternative communication intervention on speech production in children with autism: A systematic review. American Journal of Speech-Language Pathology, 17, 212–230.
  60. Scotti, J., Evans, I., & Meyer, L. (1991). A meta-analysis of intervention research with problem behavior: Treatment validity and standards of practice. American Journal on Mental Retardation, 96, 233–256.
  61. Scruggs, T. E., Mastropieri, M. A., & Casto, G. (1987). The quantitative synthesis of single-subject research: Methodology and validation. Remedial and Special Education, 8, 24–33.
  62. Shukla-Mehta, S., Miller, T., & Callahan, K. J. (2010). Evaluating the effectiveness of video instruction on social and communication skills training for children with autism spectrum disorders: A review of the literature. Focus on Autism and Other Developmental Disabilities, 25, 23–36.
  63. Sigafoos, J., Arthur-Kelly, M., & Butterfield, N. (2006). Enhancing everyday communication for children with disabilities. Baltimore, MD: Paul H. Brookes Publishing Co.
  64. Sigafoos, J., O’Reilly, M., & Lancioni, G. E. (2009). Communication. In J. L. Matson (Ed.), Applied behavior analysis for children with autism spectrum disorders (pp. 109–127). New York: Springer Science.
  65. Silver, M., & Oakes, P. (2001). Evaluation of a new computer intervention to teach people with autism or Asperger syndrome to recognize and predict emotions in others. Autism: The International Journal of Research and Practice, 5, 299–316.
  66. Simeonsson, R., & Bailey, D. (1991). Evaluating programme impact: Levels of certainty. In D. Mitchell & R. Brown (Eds.), Early intervention studies for young children with special needs (pp. 280–296). New York: Chapman and Hall.
  67. *Simpson, A., Langone, J., & Ayres, K. M. (2004). Embedded video and computer based instruction to improve social skills for students with autism. Education and Training in Developmental Disabilities, 39, 240–252.
  68. Skinner, B. F. (1968). The technology of teaching. New York: Appleton-Century-Crofts.
  69. Smith, N. (1981). The certainty of evidence in health evaluations. Evaluation and Program Planning, 4, 273–278.
  70. Strickland, D., Marcus, L. M., Mesibov, G. B., & Hogan, K. (1996). Brief report: Two case studies using virtual reality as a learning tool for autistic children. Journal of Autism and Developmental Disorders, 26, 251–259.
  71. Synapse Adaptive. (2010). The therapy tool that has people talking. Retrieved July 20, 2010 from: http://www.synapseadaptive.com/edmark/prod/sv3/.
  72. Team Up with Timo: Animated Learning Tutor. (2005). [Software]. San Francisco, CA: Animated Speech Corporation.
  73. Weitz, C., Dexter, M., & Moore, J. (1997). AAC and children with developmental disabilities. In S. Glennen & D. De Coste (Eds.), Handbook of augmentative and alternative communication (pp. 395–431). San Diego, CA: Singular.
  74. Wong, A. W. K., Chan, C. C. H., Li-Tsang, C. W. P., & Lam, C. S. (2009). Competence of people with intellectual disabilities on using human-computer interface. Research in Developmental Disabilities, 30, 107–123.

Copyright information

© The Author(s) 2010

Authors and Affiliations

  • Sathiyaprakash Ramdoss (1, 6)
  • Russell Lang (2)
  • Austin Mulloy (1)
  • Jessica Franco (3)
  • Mark O’Reilly (1)
  • Robert Didden (4)
  • Giulio Lancioni (5)

  1. The Meadows Center for the Prevention of Educational Risk, The University of Texas at Austin, Austin, USA
  2. Texas State University, San Marcos, USA
  3. Autism Community Network, San Antonio, USA
  4. Radboud University Nijmegen, Nijmegen, The Netherlands
  5. University of Bari, Bari, Italy
  6. Department of Special Education, The University of Texas at Austin, Austin, USA
