Journal of Autism and Developmental Disorders, Volume 45, Issue 7, pp 1989–1996

Validation of Autism Spectrum Disorder Diagnoses in Large Healthcare Systems with Electronic Medical Records

  • Karen J. Coleman
  • Marta A. Lutsky
  • Vincent Yau
  • Yinge Qian
  • Magdalena E. Pomichowski
  • Phillip M. Crawford
  • Frances L. Lynch
  • Jeanne M. Madden
  • Ashli Owen-Smith
  • John A. Pearson
  • Kathryn A. Pearson
  • Donna Rusinak
  • Virginia P. Quinn
  • Lisa A. Croen
Original Paper


To identify factors associated with valid autism spectrum disorder (ASD) diagnoses recorded in electronic sources in large healthcare systems, we examined 1,272 charts from youth <18 years old with an ASD diagnosis. Expert reviewers classified each diagnosis as confirmed, probable, possible, ruled out, or not enough information. A total of 845 cases could be classified; of these, 81 % were confirmed, probable, or possible ASD diagnoses. The predictors of a valid ASD diagnosis were having two or more diagnoses in the medical record (OR 2.94; 95 % CI 2.03–4.25; p < 0.001) and being male (OR 1.51; 95 % CI 1.05–2.17; p = 0.03). In large integrated healthcare settings, requiring at least two diagnoses can be used to identify ASD patients for population-based research.


Keywords: Population-based · Racial/ethnic minorities · Chart review · Children · Adolescents


Autism spectrum disorders (ASD) are a heterogeneous group of complex neurodevelopmental disorders with early childhood onset that are characterized by impairments in communication and social interaction, and repetitive behavior (Lord et al. 2006). ASD can have devastating impacts on the development of the affected child and his/her family and community. While there is considerable controversy about whether the incidence of ASD is increasing, the most recent estimate suggests that the disorder affects 1 out of every 68 children in the US (CDC 2014), for a total of more than 3.5 million affected families. It is estimated that the lifetime cost for all currently affected youth in the US could be $61 billion per year (Buescher et al. 2014).

As part of healthcare reform, systems of care are being incentivized to establish integrated electronic medical records and to put them to meaningful use (Blumenthal and Tavenner 2010). Research using routinely collected health data is part of the process of meaningful use and should contribute to improvements in the quality of the services provided to healthcare system members. In order to use these increasingly rich sources of patient information, clinicians and researchers must be confident in the validity of the diagnoses of health conditions recorded in electronic medical records.

Although there have been a number of ASD studies that use electronic sources of medical information (Croen et al. 2011; Schendel et al. 2012; Zerbo et al. 2013), there has been very little work done on the validity of recorded ASD diagnoses in electronic health records of large healthcare systems. These electronic health records include diagnoses made by providers inside these systems as well as those diagnoses made by outside providers. The diagnoses from outside providers are available for research in the form of insurance claims databases. Recently, Burke et al. (2014) published a report on the validity of using one or more ASD diagnoses to identify ASD cases in insurance claims databases. They found that having two or more diagnoses in the claims records provided the best positive predictive value (PPV) for an ASD diagnosis.

The findings from that study were limited, however, because the authors evaluated only the number of ASD diagnoses as an indicator of a valid diagnosis, and because the sample was restricted to patients under 5 years old and to those who had received the majority of their lifetime healthcare in the selected healthcare system. Our study was designed to expand the work of Burke et al. (2014) by providing a more complete picture of the validity of ASD diagnoses across the range of youth found in large healthcare systems (school-aged children, adolescents, and those with less documentation in their medical records due to shorter membership in the healthcare system). We also considered a number of factors associated with a valid ASD diagnosis in the same analyses, not just whether or not a child had more than one diagnosis in the medical record. Our findings will further facilitate the use of electronic data to study population-based outcomes for ASD patients at low cost and with greater reliability.



Data were obtained from four sites participating in a study funded through the Mental Health Research Network (MHRN), a consortium of 19 public-domain research centers based in large healthcare systems (MHRN 2014). Together, these 19 systems provide insurance coverage and, in many cases, healthcare services to approximately 12.5 million people living in 15 states.

The four healthcare sites contributing to this study have a wide diversity of practice in identification and treatment of ASD. Two of the systems have specialized ASD evaluation, diagnosis, and treatment planning centers for youth. In these systems, youth suspected of having ASD are generally referred by their pediatrician to an ASD center, where they are evaluated by a multidisciplinary clinical team including pediatric psychiatrists and psychologists with expertise in autism, general and developmental pediatricians, and social workers. The larger of these two systems (Site 1 in Table 1) has 56 % of its ASD cases identified in these centers; in contrast, the smaller system (Site 3 in Table 1) has the majority (87 %) of its youth diagnosed outside of the specialized centers in pediatric, behavioral health, and primary care settings. The remaining two healthcare systems (Sites 2 and 4 in Table 1) participating in the study did not have specialized ASD centers, and thus all ASD diagnoses were made in pediatric, primary care, or behavioral health settings.
Table 1

Distribution of chart review outcomes for four large integrated healthcare systems






| Characteristic | Conservative PPVa,b | Broad PPVa,c | Ruled outd | NEIe |
|---|---|---|---|---|
| All cases | 33 % (275) | 81 % (682) | 19 % (163) | 34 % (427) |
| Site 1 | 32 % (55) | 76 % (130) | 24 % (41) | 46 % (145) |
| Site 2 | 44 % (118) | 83 % (223) | 17 % (45) | 47 % (240) |
| Site 3 | 14 % (36) | 79 % (205) | 21 % (56) | 5 % (13) |
| Site 4 | 46 % (66) | 86 % (124) | 14 % (21) | 17 % (29) |
| 1–4 years | 44 % (65) | 88 % (131) | 12 % (18) | 16 % (28) |
| 5–11 years | 36 % (136) | 79 % (298) | 21 % (79) | 31 % (167) |
| 12–17 years | 23 % (74) | 79 % (253) | 21 % (66) | 42 % (253) |
| Male | 35 % (187) | 83 % (437) | 17 % (91) | 32 % (251) |
| Female | 28 % (88) | 77 % (245) | 23 % (72) | 36 % (177) |
| Pediatrician | 26 % (125) | 81 % (392) | 19 % (94) | 38 % (296) |
| Specialist | 42 % (150) | 81 % (290) | 19 % (69) | 27 % (132) |
| 1 ASD diagnosis | 27 % (98) | 72 % (259) | 28 % (101) | 39 % (231) |
| >1 ASD diagnosis | 36 % (177) | 87 % (423) | 13 % (62) | 29 % (196) |
| White non-Hispanic | 31 % (112) | 76 % (277) | 24 % (87) | 33 % (179) |
| Black/African American | 47 % (36) | 86 % (65) | 14 % (11) | 34 % (41) |
| Hispanic | 41 % (32) | 82 % (64) | 18 % (14) | 41 % (55) |
| Asian | 31 % (19) | 85 % (52) | 15 % (9) | 34 % (32) |
| Otherh | 29 % (76) | 84 % (224) | 16 % (42) | 31 % (121) |

Data are presented as % (n)

a PPV positive predictive value calculated using the Validated column as the denominator

bConservative is defined as confirmed cases only

cBroad is defined as all confirmed, probable, and possible ASD cases combined

dRuled out as an ASD diagnosis; percentage of cases is calculated using the total validated column

e NEI not enough information to make a judgment about the diagnosis; percentage of cases is calculated using the total reviewed column

fTotal cases where a judgment could be made about the validity of the diagnosis

gTotal cases reviewed regardless of the outcome

hOther is defined as American Indian/Alaskan Native, Hawaiian/Pacific Islander, Mixed Race, and missing race/ethnicity

For all four settings, ASD diagnoses were recorded in electronic medical records or in insurance claims data that included care from providers outside the healthcare systems. Although we used insurance claims as a data source for this study, the majority of the diagnoses we abstracted came from internal electronic medical records; both data sources were used for analyses. Diagnoses may or may not have been preceded by a thorough clinical assessment using the DSM-IV diagnostic criteria for ASD (APA 2000), and diagnoses recorded in the medical record may have been made years before the youth was a member of the healthcare system.


To have a representative sample of youth for chart review, we used the following inclusion criteria: (1) current membership in the healthcare plans as of December, 2010; (2) aged <18 years in December 2010; and (3) having at least one diagnosis code for Autistic Disorder (299.0), Pervasive Developmental Disorder Not Otherwise Specified (PDD-NOS; 299.9), and/or Asperger Syndrome (299.8) during the period of 1995–2010. Using these criteria we identified 19,628 ASD cases (7,011 at Site 1, 10,285 at Site 2, 1,651 at Site 3, and 681 at Site 4). Among these cases 15,461 (79 %) were diagnosed in a setting other than a specialty ASD center.
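For readers working with similar extracts, the three inclusion criteria can be expressed as a simple record filter. The following Python sketch is illustrative only, not the study's actual extraction code; the record fields (`enrolled_dec_2010`, `birth_date`, `diagnoses`) are hypothetical names.

```python
from datetime import date

# Diagnosis codes named in the inclusion criteria (DSM-IV-era codes):
# 299.0 Autistic Disorder, 299.8 Asperger Syndrome, 299.9 PDD-NOS.
ASD_CODES = {"299.0", "299.8", "299.9"}

def meets_inclusion_criteria(member):
    """member is a hypothetical dict with keys 'enrolled_dec_2010' (bool),
    'birth_date' (date), and 'diagnoses' (list of (date, code) tuples)."""
    # (1) Current membership in the health plan as of December 2010.
    if not member["enrolled_dec_2010"]:
        return False
    # (2) Aged <18 years in December 2010.
    cutoff = date(2010, 12, 31)
    age_years = (cutoff - member["birth_date"]).days / 365.25
    if age_years >= 18:
        return False
    # (3) At least one qualifying ASD code recorded during 1995-2010.
    return any(
        date(1995, 1, 1) <= d <= date(2010, 12, 31) and code in ASD_CODES
        for d, code in member["diagnoses"]
    )

case = {
    "enrolled_dec_2010": True,
    "birth_date": date(2004, 6, 1),
    "diagnoses": [(date(2008, 3, 14), "299.0")],
}
print(meets_inclusion_criteria(case))  # True
```

Applied across a membership file, a filter of this shape yields the denominator population (19,628 cases in the study).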

Chart Review Sample

We did not include ASD diagnoses made at specialty ASD centers in any chart review. ASD diagnoses are made by clinical professionals at these centers who are trained using a standardized protocol based upon the Autism Diagnostic Observation Schedule (ADOS; Lord et al. 1989), and thus these diagnoses are assumed to be accurate because they are made using gold standard diagnostic methods. The focus of our analyses was to determine the validity of ASD diagnoses appearing in medical records or insurance claims for youth whose diagnosis was not made at an identifiable ASD center within one of the four study sites.

After expert consultation from healthcare personnel at each study site involved in diagnosing ASD and a review of the literature (CDC 2007; Idring et al. 2012; Windham et al. 2011), we identified several characteristics available in the electronic medical record that might be good determinants of a valid ASD diagnosis. These characteristics were (1) age of the youth (1–4, 5–11, 12–17 years), (2) gender (male, female), (3) provider recording the diagnosis [pediatrician, specialist (behavioral health/developmental pediatrics)], and (4) number of ASD diagnoses at least 1 day apart (1, 2+) in the entire medical record.

Each study site determined the number of youth meeting eligibility criteria in those categories in 2010 (n = 19,628 across all sites), and a sampling method was created to select the adequate number of youth from each stratum necessary to detect a PPV of 80 % (standard error < 8 %). Using this method, 1,272 individual ASD youth were selected for chart review across all study sites (6 % of the total diagnosed population).
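The per-stratum sample size implied by that precision target follows from the standard error of a binomial proportion, SE = sqrt(p(1 − p)/n). This is a back-of-the-envelope sketch, not the study's actual sampling program:

```python
def required_n(ppv, target_se):
    """Approximate sample size so that the standard error of an estimated
    proportion, sqrt(p * (1 - p) / n), equals target_se."""
    return ppv * (1 - ppv) / target_se ** 2

# Detecting a PPV of 80 % with SE of 8 % needs about 25 reviewable charts
# per stratum; more charts must be drawn when many end up with too little
# information to judge.
print(round(required_n(0.80, 0.08)))  # 25
```

This also illustrates why strata with high rates of "not enough information" outcomes (up to 47 % at one site) require oversampling to keep the PPV estimates stable.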

Chart Review Method


We developed a process of chart abstraction and expert chart review based on the CDC’s Metropolitan Atlanta Developmental Disabilities Surveillance Program (MADDSP) methods (Avchen et al. 2011; Rice et al. 2007). Briefly, 1–2 chart abstractors were trained at each site to gather information from electronic medical records and paper charts into a database using a standardized protocol (see Appendix I). Once data were abstracted, clinical experts from each site reviewed the information abstracted for each case using a standardized protocol (see Appendix II) and then completed a reviewer form with a final assessment of the validity of the diagnosis (confirmed, probable, possible, ruled out, or not enough information to make an assessment). Expert reviewers were developmental pediatricians, licensed social workers, or developmental psychologists who diagnosed and treated ASD youth in their respective healthcare systems.

There were a number of criteria necessary for reviewers to make a final assessment; these are detailed in Appendix II. Briefly, a confirmed diagnosis required a complete, documented assessment using DSM-IV diagnostic criteria for ASD (APA 2000). A probable diagnosis lacked some of the material necessary for a full assessment based upon DSM-IV diagnostic criteria; however, all of the following were required: (1) the diagnosis was made by a credible source (developmental pediatrician, educational testing center, etc.), (2) the documentation that did exist stated clearly that DSM-IV criteria were used to make the diagnosis, and (3) some patient behaviors consistent with elements of the DSM-IV criteria were documented. A possible case did not have enough detail to complete a DSM-IV checklist but required either a secondhand report of an ASD assessment by a professional or documentation of some behaviors associated with ASD. A ruled out diagnosis was one where the information in the chart conflicted with an ASD diagnosis or clearly indicated another condition. Finally, not enough information diagnoses were those where neither the electronic nor the paper charts contained enough detail to make a definitive decision about the diagnosis.


Training for all chart abstractors across all study sites was conducted by the second author (ML) and training for all expert chart reviewers across all sites was conducted by a developmental clinical psychologist who worked at a specialized ASD center at one of the healthcare sites in the study. Both trainings consisted of two stages. The first stage was a detailed review of the abstractors’ protocol for abstractors (Appendix I) and the expert reviewer protocol for the expert reviewers (Appendix II) and then practice in either abstraction (abstractors) or review (experts) with two known ASD cases for each group (four cases total). The ASD cases were then reviewed by the second author for the abstractors and by the developmental psychologist for the expert reviewers and a list of discrepancies was distributed separately to all abstractors/reviewers to discuss for further training.

In the second stage of the training, all chart abstractors and expert reviewers were provided with three known cases, one for each ASD diagnosis (Autistic Disorder, PDD-NOS, and Asperger Syndrome), and their results were verified by the second author/developmental psychologist. If there was not 100 % agreement on all information abstracted from the charts/expert reviewer form between the second author/developmental psychologist and the abstractors/reviewers, further training was conducted until all abstractors/reviewers had 100 % agreement with the trainers. Once data collection began, methods for actual abstraction and expert review differed slightly at each site due to variations in electronic medical records, documentation of diagnoses, and availability of historical information in paper based charts.


The primary outcome used for our analyses was the validity of the ASD diagnosis. We defined a valid diagnosis in two ways. The broad definition of a valid diagnosis included all confirmed, probable, and possible expert reviewer outcomes. The second definition was more conservative and used only confirmed diagnosis outcomes. Non-valid diagnoses were those that were classified as ruled out. Because we could not draw any conclusions about diagnoses for which there was not enough information for judgment, these cases were not included in our PPV calculations or in our multivariate analyses of determinants of ASD diagnosis validity. However, to understand the population of youth without enough information, we did conduct paired comparisons between these youth and the youth meeting the broad definition of a valid ASD diagnosis using the Chi square statistic.
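These paired comparisons use the ordinary Pearson Chi square statistic for a 2 × 2 table. A minimal sketch follows; the cell counts are illustrative only, since the study's exact cross-tabulations are not fully recoverable from the published tables.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]] -- e.g. group membership (not-enough-information
    vs. valid diagnosis) crossed with a dichotomized youth characteristic."""
    n = a + b + c + d
    observed = [a, b, c, d]
    # Expected count for each cell = row total * column total / grand total.
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: with 1 df, the statistic exceeds 3.84 when p < 0.05.
stat = chi_square_2x2(10, 20, 30, 40)
print(round(stat, 3))  # 0.794
```

In the study's comparisons, each reported X2(1) value arises from a table of this form.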

Data for each characteristic [age of the youth (1–4, 5–11, 12–17 years); gender (male, female), provider recording the diagnosis (pediatrician, behavioral health/developmental pediatrics), and number of ASD diagnoses at least 1 day apart (1, 2+) in the entire medical record] are presented first as unadjusted percentages and frequencies. In addition to the hypothesized youth characteristics, we were also interested, post hoc, in exploring the validity of ASD diagnoses for different races/ethnicities (non-Hispanic White, non-Hispanic Black, Hispanic, Asian, and Other). Other included American Indian/Alaskan Native, Hawaiian/Pacific Islander, Mixed Race, and Missing/Unknown because the sample sizes were too small for us to analyze as separate categories. Data were summarized for each characteristic for four outcome categories: conservative definition of validity, broad definition of validity, ruled out, and not enough information (see Table 1).

We calculated PPV for each of the two valid case definitions for each characteristic (see Table 1). For each characteristic, the PPV for the conservative definition of validity was confirmed cases/total cases with a judgment from chart review and the PPV for the broad definition of validity was confirmed, probable, and possible cases/total cases with a judgment from chart review.
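As a concrete check, the two overall PPVs can be recomputed from the totals reported in the Results (275 confirmed cases and 682 confirmed/probable/possible cases out of 845 cases with a judgment):

```python
def ppv(valid_cases, total_with_judgment):
    """Positive predictive value: valid cases over all cases for which
    the expert reviewers could make a judgment."""
    return valid_cases / total_with_judgment

conservative = ppv(275, 845)  # confirmed only
broad = ppv(682, 845)         # confirmed + probable + possible
print(f"{conservative:.0%}, {broad:.0%}")  # 33%, 81%
```

The same calculation, restricted to each row of Table 1, yields the characteristic-specific PPVs.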

To determine which factors were most likely to be associated with a valid ASD diagnosis when all factors were considered at once, two multivariate logistic regressions were run with all youth characteristics and healthcare site as predictors. The first regression used the broad definition of a valid case (referent) versus ruled out as the dichotomous outcome, and the second used the conservative definition of a valid case (referent) versus ruled out. Regressions were generated by entering all predictors (see Table 1) into the model at once and regressing them onto the dichotomous outcome; outcomes from these multivariate regressions are referred to as “adjusted” for the predictors in the model. Receiver operating characteristic (ROC) curves were generated for each model (see Fig. 1), and the c statistic is reported to evaluate the area under the curve. Analyses were conducted using SAS 9.3 (SAS Institute, Cary, NC).
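The c statistic reported for each model is the area under the ROC curve, i.e., the probability that a randomly chosen valid case receives a higher model-predicted probability than a randomly chosen ruled-out case. The study's models were fit in SAS; the following Python sketch only illustrates what the statistic measures, on toy data.

```python
def c_statistic(outcomes, predicted):
    """Concordance (c) statistic, equal to the area under the ROC curve.
    outcomes: 1 for a valid diagnosis, 0 for ruled out.
    predicted: model-predicted probabilities. Ties count as half-concordant."""
    positives = [p for y, p in zip(outcomes, predicted) if y == 1]
    negatives = [p for y, p in zip(outcomes, predicted) if y == 0]
    concordant = 0.0
    for pos in positives:
        for neg in negatives:
            if pos > neg:
                concordant += 1.0
            elif pos == neg:
                concordant += 0.5
    return concordant / (len(positives) * len(negatives))

# Toy data: two valid (1) and two ruled-out (0) cases.
print(c_statistic([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A c statistic of 0.5 indicates no discrimination; the study's values of 0.72 and 0.69 indicate modest discrimination between valid and ruled-out diagnoses.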
Fig. 1

Receiver operating characteristic (ROC) curves for the two multivariate logistic regression analyses: a confirmed diagnoses only as a true positive [area under the curve (c statistic = 0.72)] and b confirmed, probable, and possible as a true positive [area under the curve (c statistic = 0.69)]


Of the 1,272 charts reviewed, 54 % (n = 682) met the broad definition of a valid case (confirmed, probable, or possible). Only 13 % (n = 163) of cases were ruled out for an ASD diagnosis. The remaining 34 % of cases (n = 427) did not have enough information to make a judgment. These patients were more likely to be from the two largest healthcare sites (χ2(1) = 13.13; p < 0.001 for Site 1 and χ2(1) = 26.21; p < 0.001 for Site 2), be 12–17 years old (χ2(1) = 22.73; p < 0.001), and have only one ASD diagnosis in their medical record (χ2(1) = 4.50; p = 0.03). Race/ethnicity, type of provider making the diagnosis, and gender were not factors in the rates of missing information. For the 845 cases that had sufficient information to make a judgment, 33 % met the conservative definition of a valid diagnosis (confirmed) and 81 % met the broad definition of a valid diagnosis (confirmed, probable, and possible). PPVs ranged from 72 to 88 % when using the broad definition of valid diagnosis (see Table 1).

Table 2 provides the outcomes from the adjusted logistic regression analyses. Figure 1a shows the ROC curve for the model predicting the likelihood of a confirmed diagnosis only (conservative definition of validity; c = 0.72) and Fig. 1b the ROC curve for the model predicting the likelihood of a confirmed, probable, or possible diagnosis (broad definition of validity; c = 0.69). Regardless of definition of ASD diagnosis validity used, the strongest predictors were two or more ASD diagnoses in the medical record (conservative definition: OR 1.71, 95 % CI 1.24,2.38; broad definition: OR 2.94; 95 % CI 2.03,4.25) and male gender (conservative definition: OR 1.55, 95 % CI 1.12,2.15; broad definition: OR 1.51; 95 % CI 1.05,2.17).
Table 2

Youth characteristics associated with a valid ASD diagnosis recorded in medical records










| Predictor | Conservative definition, OR (95 % CI) | Broad definition, OR (95 % CI) |
|---|---|---|
| Site 2 versus Site 1 | 1.67 (1.02, 2.74) | 1.68 (0.90, 3.14) |
| Site 3 versus Site 1 | 0.41 (0.25, 0.68) | 1.35 (0.81, 2.24) |
| Site 4 versus Site 1 | 1.86 (1.20, 2.89) | 1.71 (1.01, 2.87) |
| Age ≥5 and <12 versus <5 years | 0.94 (0.61, 1.44) | 0.57 (0.31, 1.03) |
| Age ≥12 and <18 versus <5 years | 0.52 (0.33, 0.82) | 0.63 (0.34, 1.17) |
| Male versus female | 1.55 (1.12, 2.15) | 1.51 (1.05, 2.17) |
| Specialist versus pediatrician | 1.73 (1.26, 2.38) | 1.03 (0.70, 1.50) |
| ≥2 dx versus 1 dx | 1.71 (1.24, 2.38) | 2.94 (2.03, 4.25) |
| Black versus White non-Hispanic | 1.10 (0.63, 1.93) | 1.50 (0.71, 3.10) |
| Asian versus White non-Hispanic | 0.67 (0.05, 6.57) | 1.70 (0.77, 3.74) |
| Hispanic versus White non-Hispanic | 0.85 (0.48, 1.49) | 1.35 (0.68, 2.69) |
| Othera versus White non-Hispanic | 0.71 (0.48, 1.04) | 1.78 (1.16, 2.73) |



Findings are adjusted for all factors in the model

aOther race/ethnicity included American Indian/Alaskan Native, Hawaiian/Pacific Islander, Mixed Race, and missing race/ethnicity

If only confirmed ASD diagnoses are used as the outcome, youth between 12 and 17 years old had decreased odds of having a valid ASD diagnosis compared to children under 5 years old (OR 0.52; 95 % CI 0.33,0.82) and having at least one ASD diagnosis made by a specialist increased the odds of having a valid ASD diagnosis compared to a pediatrician (OR 1.73; 95 % CI 1.26,2.38). In addition, when using the more conservative validity definition, healthcare site was also a predictor of a confirmed ASD diagnosis.


This is one of the first studies to attempt to validate ASD diagnoses from electronic sources in large healthcare systems. Our study is unique in that it considered the largest possible population of youth with ASD diagnoses and included multiple characteristics at once to determine the most important factors to use when putting ASD diagnoses from electronic medical sources to “meaningful use” for surveillance and research. We found that having two or more ASD diagnoses in electronic sources and being male were the factors most associated with a valid ASD diagnosis. If a more conservative definition of validity was used (confirmed outcomes only), having at least one diagnosis by a specialist, and being <5 years old were also associated with a valid ASD diagnosis.

The only other published study on the validity of ASD diagnoses from large healthcare systems also found that more than one diagnosis was strongly associated with a valid ASD diagnosis (Burke et al. 2014). Using the most conservative criteria, Burke and colleagues reported an ASD diagnosis confirmation rate of 43 % whereas we were able to confirm 33 %. Using the broader criteria, Burke and colleagues reported a 74 % diagnostic confirmation rate while we found 81 %. The percent of diagnoses for which there was not enough information to make a judgment was similar between the two studies (26 % in the Burke study and 34 % in the present study). Likewise, PPVs were also similar between the two studies.

There are important differences in methodology between our study and the one by Burke et al. (2014) that should be noted, however. Those investigators tested only one a priori factor to determine validity: whether a child had one diagnosis or more than one diagnosis in electronic insurance claims databases. In addition, the authors pre-selected their sample to be heavily weighted toward younger children (<5 years old), those with longer enrollment history, and those with a "rich claims history". They also pre-selected the medical charts for validation based upon physicians who made more than one claim in the electronic insurance claims database and upon those who were specialists. Their findings were very positive, providing strong support for the two-diagnosis approach; however, they are not generalizable to electronic data sources in which diagnoses are made primarily by pediatric providers, and which include older children and adolescents, less complete documentation, shorter membership histories, and a good representation of racial/ethnic minorities. Taken together, our findings and those of Burke et al. (2014) provide good support for a two-diagnosis strategy for identifying ASD cases in the electronic health records of large healthcare systems.

There are a number of limitations to our findings that should be mentioned. One is the potential generalizability to a variety of healthcare settings. Low-income and disadvantaged populations may have been underrepresented in our study (10–25 % of youth across all four healthcare systems received some kind of public assistance including Medicaid). Another limitation was that, overall, one-third of those cases selected for chart review did not have enough information to make a judgment regarding validity of ASD diagnoses. Their diagnoses may have been made informally, or in an outside or prior healthcare setting without transfer of the documentation to electronic sources. Analyses revealed that these youth were more likely to be adolescents and have only one ASD diagnosis in their charts. They were also less likely to have a diagnosis from a specialist. These findings support the outcomes of the multivariate regression in that, to study the full range of the population with ASD diagnoses and be the most confident of the validity of these diagnoses, researchers should select cases that have more than one diagnosis of ASD in the medical record and have at least one of these diagnoses from a developmental specialist.

Finally, our study was not a formal validation study where samples of youth with and without an ASD diagnosis were then subjected to clinical diagnostic assessments by trained professionals. The purpose of our study was to provide a mechanism by which researchers could identify ASD youth at the population level and put large administrative insurance claims and electronic medical record databases to meaningful use for the study of a variety of factors contributing to ASD as well as outcomes resulting from treatment. Because we did not select a patient population without ASD to determine false negative rates, our case identification method is likely missing ASD cases, especially in those populations that are not recommended for routine screening such as older children and adolescents.

Despite these limitations, there are a number of strengths to this study. One is that the healthcare systems studied touched nearly 9 million families (adults and children), with 54 % of the patient membership being a racial/ethnic minority, and 55 % with a median household income of $75,000 or less. In addition, although all study sites were integrated healthcare systems with electronic medical records, the ASD diagnostic practices were quite variable across these sites, as evidenced by the variable rates of missing information and diagnostic validity across the study sites. One large and one small site had specialized ASD centers as part of their systems and the other two sites relied strongly on external contracted organizations, such as public school systems, to identify and treat ASD. We specifically limited our analyses to those ASD diagnoses made outside of specialized ASD centers to enhance the generalizability of our findings to more routine care settings such as small private practices and community-based healthcare clinics. It is likely that the rates of confirmed diagnoses would have been higher had we included diagnoses made at specialty ASD centers.


We recommend that researchers use at least two diagnoses at least 1 day apart in electronic medical records to identify the broadest sample of ASD patients for study. To be more conservative and confident that the ASD diagnoses are accurate, researchers could also add the criteria that at least one of these diagnoses be made by a specialist (behavioral health/developmental pediatrics). In addition, we recommend that ASD diagnoses made in older children and adolescents be treated with caution. Finally, in response to the Healthcare Reform Act and the requirements of meaningful use, there is an opportunity to create initiatives to improve documentation of ASD diagnoses so that population-based studies can be done with confidence across the whole range of youth with ASD.
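The recommended selection rule is straightforward to apply against a diagnosis table. The sketch below is a hedged illustration with hypothetical field names (the `provider_type` values are assumptions, not fields defined by the study):

```python
from datetime import date

def select_for_research(dx_events, require_specialist=False):
    """dx_events: list of (date, provider_type) tuples for recorded ASD
    diagnoses. Implements the recommendation: at least two diagnoses
    recorded at least 1 day apart and, for the more conservative rule,
    at least one diagnosis made by a specialist."""
    dates = sorted(d for d, _ in dx_events)
    # Require >=2 diagnoses with at least 1 day between first and last.
    if len(dates) < 2 or (dates[-1] - dates[0]).days < 1:
        return False
    if require_specialist and not any(p == "specialist" for _, p in dx_events):
        return False
    return True

events = [(date(2009, 4, 2), "pediatrician"), (date(2010, 1, 15), "specialist")]
print(select_for_research(events, require_specialist=True))  # True
```

Loosening `require_specialist` trades some confidence in diagnostic validity for a broader, more representative sample, mirroring the broad versus conservative definitions used in the analyses.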


Our findings can be used to identify ASD cases in future population-based studies using large electronic databases without extensive chart review and validation. This will facilitate efficient research into predictive factors of ASD and its severity as well as the study of healthcare outcomes and service utilization for ASD youth and their families. Our findings could lead to inexpensive, efficient methods for identifying families for large, population-based prospective observational and treatment research within the settings intended as models for healthcare reform.



Funding for this study was provided by the National Institutes of Mental Health (NIMH# U19 MH092201).

Supplementary material

Supplementary material 1 (PDF 311 kb)
Supplementary material 2 (PDF 194 kb)


  1. American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed.). Arlington: American Psychiatric Publishing, Inc.
  2. Avchen, R. N., Wiggins, L. D., Devine, O., Braun, K. V. N., Rice, C., Hobson, N. C., et al. (2011). Evaluation of a records-review surveillance system used to determine the prevalence of autism spectrum disorders. Journal of Autism and Developmental Disorders, 41(2), 227–236. doi:10.1007/s10803-010-1050-7.
  3. Blumenthal, D., & Tavenner, M. (2010). The “meaningful use” regulation for electronic health records. New England Journal of Medicine, 363(6), 501–504. doi:10.1056/NEJMp1006114.
  4. Buescher, A. V., Cidav, Z., Knapp, M., & Mandell, D. S. (2014). Costs of autism spectrum disorders in the United Kingdom and the United States. JAMA Pediatrics. doi:10.1001/jamapediatrics.2014.210.
  5. Burke, J. P., Jain, A., Yang, W., Kelly, J. P., Kaiser, M., Becker, L., et al. (2014). Does a claims diagnosis of autism mean a true case? Autism, 18(3), 321–330.
  6. Centers for Disease Control and Prevention (CDC). (2007). Prevalence of autism spectrum disorders—Autism and developmental disabilities monitoring network, six sites, United States, 2000; prevalence of autism spectrum disorders—Autism and developmental disabilities monitoring network, 14 sites, United States, 2002; evaluation of a methodology for a collaborative multiple source surveillance network for autism spectrum disorders—Autism and developmental disabilities monitoring network, 14 sites, United States, 2002. MMWR CDC Surveillance Summaries, 56(SS-1), 1–40.
  7. Centers for Disease Control and Prevention (CDC). (2014). Prevalence of autism spectrum disorder among children aged 8 years—Autism and developmental disabilities monitoring network, 11 sites, United States, 2010. MMWR Surveillance Summaries, 63(2), 1–21.
  8. Croen, L. A., Grether, J. K., Yoshida, C. K., Odouli, R., & Hendrick, V. (2011). Antidepressant use during pregnancy and childhood autism spectrum disorders. Archives of General Psychiatry, 68(11), 1104–1112.
  9. Idring, S., Rai, D., Dal, H., Dalman, C., Sturm, H., Zander, E., et al. (2012). Autism spectrum disorders in the Stockholm Youth Cohort: Design, prevalence and validity. PLoS One, 7(7), e41280. doi:10.1371/journal.pone.0041280.
  10. Lord, C., Rutter, M., Goode, S., Heemsbergen, J., Jordan, H., Mawhood, L., et al. (1989). Autism diagnostic observation schedule: A standardized observation of communicative and social behavior. Journal of Autism and Developmental Disorders, 19(2), 185–212.
  11. Lord, C., Spence, S., Moldin, S. O., & Rubenstein, J. L. R. (2006). Autism spectrum disorders: Phenotype and diagnosis. In S. O. Moldin & J. L. R. Rubenstein (Eds.), Understanding autism: From basic neuroscience to treatment. Boca Raton, FL: CRC Press.
  12. Mental Health Research Network. (2014). Introduction to the Mental Health Research Network. Accessed July 1, 2014.
  13. Rice, C. E., Baio, J., Braun, K. V. N., Doernberg, N., Meaney, F. J., & Kirby, R. S. (2007). A public health collaboration for the surveillance of autism spectrum disorders. Paediatric and Perinatal Epidemiology, 21(2), 179–190. doi:10.1111/j.1365-3016.2007.00801.x.
  14. Schendel, D. E., Diguiseppi, C., Croen, L. A., Fallin, M. D., Reed, P. L., Schieve, L. A., et al. (2012). The study to explore early development (SEED): A multisite epidemiologic study of autism by the centers for autism and developmental disabilities research and epidemiology (CADDRE) network. Journal of Autism and Developmental Disorders, 42(10), 2121–2140.
  15. Windham, G. C., Anderson, M. C., Croen, L. A., Smith, K. S., Collins, J., & Grether, J. K. (2011). Birth prevalence of autism spectrum disorders in the San Francisco Bay area by demographic and ascertainment source characteristics. Journal of Autism and Developmental Disorders, 41(10), 1362–1372. doi:10.1007/s10803-010-1160-2.
  16. Zerbo, O., Qian, Y., Yoshida, C., Grether, J. K., Van de Water, J., & Croen, L. A. (2013). Maternal infection during pregnancy and autism spectrum disorders. Journal of Autism and Developmental Disorders. Dec 24 [Epub ahead of print].

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Karen J. Coleman (1)
  • Marta A. Lutsky (2)
  • Vincent Yau (2)
  • Yinge Qian (2)
  • Magdalena E. Pomichowski (1)
  • Phillip M. Crawford (3)
  • Frances L. Lynch (3)
  • Jeanne M. Madden (4)
  • Ashli Owen-Smith (5)
  • John A. Pearson (3)
  • Kathryn A. Pearson (3)
  • Donna Rusinak (4)
  • Virginia P. Quinn (1)
  • Lisa A. Croen (2)

  1. Department of Research and Evaluation, Kaiser Permanente Southern California, Pasadena, USA
  2. Division of Research, Kaiser Permanente Northern California, Oakland, USA
  3. Center for Health Research, Kaiser Permanente Northwest, Portland, USA
  4. Harvard Medical School and Harvard Pilgrim Healthcare Institute, Boston, USA
  5. Center for Health Research, Kaiser Permanente Georgia, Atlanta, USA
