Most researchers and practitioners speak about the gap between research and practice; fewer speak about the gap between research and policy, which is arguably much greater, much more significant, and much more likely to be overlooked. A handful of articles each year shed some light on this gap, and one of these appears in a recent issue of the Journal.

In that paper, Barton et al. (2015) reviewed state-level practices and policies surrounding educational assessment and identification, and explored the relation between these practices and the proportion of children identified with ASD within the school system in each state. Their findings offer insights into the potential sources, extent, and impact of the gap between research and policy.

In many respects, the trajectory of ASD research over the past 30 years has followed the evolution of the DSM. The converging consensus surrounding the diagnosis of ASD is truly remarkable given its complexity, and is absolutely essential to any effort to compare results across different research samples. At the same time, recent changes in the DSM-5 (a new emphasis on a single autism spectrum, and the ‘grandfathering’ in of cases with previous DSM-IV diagnoses), together with the advent of RDoC and, eventually, the ICD-11, may contribute to some degree of confusion despite the essential consensus on the central diagnostic features of autism (Volkmar and McPartland 2014).

The findings of Barton and her colleagues raise questions about the likelihood of agreement between educational identification and current conceptualizations of ASD. For example, more than 80 % of the states simply adopt the federal statute, which has not been significantly amended since 1990, and which references many features that do not clearly align with the DSM-IV, let alone the DSM-5 (Doehring and Winterling 2011).

It can be argued that we should not seek any alignment between educational and medical definitions of ASD as captured in the DSM-5: for example, medical diagnosis functions to group together individuals whose difficulties might stem from a common biological cause, something of less concern to educators. Medical diagnosis and educational classification do share, however, the most important function: to group together individuals who might benefit from similar programs of intervention, particularly in the US. To the extent that many medical diagnoses and educational interventions focus on observable behavior and skill deficits, an updated federal policy that clearly aligns educational definitions with the relevant DSM criteria would group children meaningfully for purposes of educational planning.

The lack of alignment between the federal definition and the DSM over at least the past 15 years has hindered the ability of many different professionals to provide ASD services to individuals who do not also have a co-occurring intellectual disability, such as those identified with Asperger’s Disorder under the DSM-IV. The current federal definition specifies that, in the absence of an intellectual disability, autism cannot be considered if the individual’s difficulties are explained by an emotional disorder. It is our experience that uncertainty regarding diagnostic practice, particularly for higher cognitively functioning individuals in educational settings, has often led to confusion with emotional or conduct disorders, resulting in highly inappropriate educational programs. In some states, e.g., Delaware, state regulations were rewritten to align with the DSM-IV and thus include those with Asperger’s Disorder (Doehring and Winterling 2011). This shift in the educational definition in Delaware led the State Division of Developmental Disability Services to change its criteria to include adults with Asperger’s Disorder, as long as there is evidence of significant deficits in social or adaptive functioning. Unfortunately, many state developmental disabilities services do not extend their supports to adults with ASD who do not also have an intellectual disability. This is particularly unfortunate because, with early intervention, more and more individuals with ASD are moving into college and vocational settings (Wenzel and Brown 2014). The lack of appropriate and readily comparable definitions across states essentially means that we have sometimes markedly different practices and approaches, with no systematic way to relate them to each other or to outcomes.

The analyses of Barton and her colleagues also suggest a lack of consensus regarding the recommended assessment protocol: requirements for the use of observations, the types of evaluators, the choice of assessment tools, and the need for family input varied widely from state to state. This contrasts with the consensus achieved among researchers regarding the choice and proper use of tools to diagnose ASD and, to a lesser extent, to assess other characteristics and co-occurring conditions. While community-based assessments are invariably less rigorous than those conducted in highly specialized research settings, the lack of consistency across states is nonetheless striking. Many individual states have sought to rectify this by developing guides to assessment and identification (California Department of Developmental Services 2002; Connecticut State Department of Education 2005; Hepburn et al. 2014), but the lack of a national policy or program on this critical question of educational assessment is remarkable.

The impact of this policy gap is evident further downstream, in the training in assessment provided to practitioners as part of their initial licensure. This is occurring just as training in behavioral techniques is expanding, as is, hopefully, insurance coverage. In our experience, most practitioners need considerable post-graduate training to reach a reasonable level of rigor in their assessment of students with ASD, training that many local and state education agencies lack the resources to deliver effectively. The same problem is evident in medical settings, where the diagnoses provided by inexperienced practitioners are much less stable (Zablotsky et al. 2015).

In reviewing the available literature, Barton and her colleagues identified the most important potential impact of these inconsistent (and arguably outdated) policies: the failure to properly identify students with ASD.

Variation across programs and districts within and across states and the lack of state reciprocity can result in missed, inaccurate, or delayed identification. Missed or inaccurate identification might lead to the provision of ineffective services or services provided at the wrong intensity, which will significantly impact a child’s developmental trajectory and academic outcomes.

Barton found tremendous variation in the proportion of children receiving special education services who were identified with ASD, ranging from 1.1 % in Iowa to 17.9 % in California for the 6–21-year-old age group. Though not discussed by Barton, shifts in the relative rates of identification within states for 3–5-year-olds as compared to 6–21-year-olds were also inexplicable: in some states rates doubled in the older group (as in California, growing from 9.6 to 17.9 %), while in five other states (Arizona, Arkansas, Idaho, Mississippi, and Missouri) rates decreased by more than half. We know that the true prevalence of ASD does not vary significantly from one region to another, and so these differences are the direct result of policies related to assessment and identification, or of other elements derived from policy (such as funding or services). Similar discrepancies are observed in other programs, such as the federally funded Supplemental Security Income program (Boat and Wu 2015).

Changes in the best estimates of prevalence in the United States, derived from the Centers for Disease Control and Prevention, have generated more attention in both the popular literature and the scientific press than other pronouncements. The remarkable lack of correspondence between these estimates and the actual number of people identified with autism within state education or health systems (the administrative prevalence) has garnered much less attention. Though Barton did not consider variations in the overall prevalence of ASD across states, other researchers have. For example, the most recent autism census in Pennsylvania indicated that the prevalence per county ranges from 15.6 per 10,000 (or 1 in 640) to 57.1 per 10,000 (or 1 in 175) (Shea 2014). The total administrative prevalence in Pennsylvania is about 30 % of what is expected based on the 2014 CDC estimates. In other words, for every individual who had received public services under an autism label in Pennsylvania, there were at least two other individuals with ASD who had not. Whether such individuals indeed should have such a diagnosis remains an open question. Notwithstanding the reasons why the administrative prevalence will always be less than the best estimates of the “true” prevalence (for example, some individuals may not need services, or may seek privately funded services), this gap is likely to be very significant, has not been closed despite 20 years of public concern about ASD’s prevalence, and will continue to limit the impact of improved intervention on the population at large. A number of studies have suggested that this gap is likely greatest among families in poverty. Analyses at the state (Palmer et al. 2005) and national level (Mandell and Palmer 2005; Boat and Wu 2015) suggest links between lower identification rates and family income and/or per-pupil spending (which is itself often correlated with poverty). Barton and her colleagues found a similar relationship, but one that also depended on the use of independent evaluations.

Given the considerable state and federal expenditures on autism and the potentially substantial economic impact of long-term care needs (Knapp et al. 2014), these policy gaps deserve special attention because they are only one link in the chain that yokes better outcomes for the entire population of people with ASD to necessary improvements in services, research, training, and policy (Doehring 2013). Given the considerable expenditure of time and money the government is also making to have researchers submit their research data to a national database, it seems somewhat paradoxical that the data that might clarify the mysteries of state-to-state variation are lacking. Indeed, in many ways we are missing an important opportunity to understand what are essentially 50 (or more) different approaches to autism across the various states at all levels (schools and beyond). This lack of information seems particularly unfortunate since so many ASD initiatives fall short of expectations because they are based on the assumption that a change in one of these links will be sufficient: a new intervention will not be accessible if it cannot be implemented through publicly funded programs; a new mandate to provide insurance will not change outcomes without properly trained personnel; and so on. Ensuring that educational definitions of ASD are updated, or mandating that professionals conduct certain types of assessments, is just a start: either would depend upon properly trained personnel, which may require changes to graduate training programs; it may also require changes to ensure that these personnel have the time to conduct these assessments, which may affect the overall number of personnel needed and require new funding or a reorganization of existing resources. Weak links between research and policy can put individuals with ASD at risk for very negative outcomes that are entirely preventable given what is known about effective intervention (Doehring 2014).

Demonstrating how to close these gaps is itself a research priority. Understanding the most significant contributor(s) to these gaps in educational assessment could help us to understand other gaps in assessment or in intervention. Systematically documenting how programs integrate services, research, training, and policy to create better outcomes would help in the development of strategies at the local, state, or national level that make best use of available resources. This may require the convergence of best practices in research and in public or private sector innovation. Widening the focus from an emphasis on rigorous methods and careful consideration of specific findings to broader policy implications will make the work of researchers even more relevant.