Top 10 Responses to the Commentaries on Dixon, Reed, Smith et al. (2015)

  • Discussion and Review Paper
Behavior Analysis in Practice

Abstract

In a previous article (Dixon et al., Behavior Analysis in Practice, 8(1), 7–15, 2015), we presented data suggesting that most behavior analytic faculty do not publish in major behavior analytic journals, and that in only about 50% of behavior analysis programs have faculty collectively produced ten or more empirical articles. Several commentaries followed the release of our article, with content ranging from supporting our endeavors and confirming the dangerous position our field may be in to highlighting the need for further refinement of the procedures used to rank the quality of behavior analysis graduate training programs. In the present article, we offer our “top 10” responses to these commentaries.


Notes

  1. In at least one case, the day-to-day variation was not trivial. Our initial count omitted more than 20 publications of one faculty member that, for unknown reasons, appeared in the Google Scholar database only months after we collected our data. We did not publish an erratum because our article accurately described the data we obtained using the procedures we described. Still, this deviation underscores the difficulties of quantifying scholarly productivity.

  2. Another factor that influences data is how variables are operationally defined. Some who contacted us about our article suggested that our rankings omitted certain individuals who work primarily in clinical research settings, are affiliated in some fashion with a training program, and therefore contribute to the mentoring of graduate students. We had a similar concern while gathering our data but chose to uniformly apply the objective search criteria that were described in our Methods section. One difficulty that we experienced is that web sites and other public descriptions of graduate programs do not always accurately identify program faculty or specify the role that affiliated faculty play in a program. In the latter case, for instance, a faculty member might teach courses in a BCBA-approved course sequence, be listed as program faculty but teach only in other areas like behavioral neuroscience, or supervise students’ clinical work without teaching didactic courses at all. Similarly, a program-affiliated researcher may or may not routinely involve program students in his or her research program. If the goal is to hold programs accountable for how practitioners are trained, an obvious initial step is to standardize what is meant by “program faculty.” We emphasize, however, that the present lack of standardization is a characteristic of the field, not a weakness that is peculiar to our specific data collection methods.

  3. It is important here not to endorse stereotypes uncritically. Many of the people listed in our “Top 10” also have distinguished reputations for teaching frequently and effectively. Research productivity does not necessarily preclude teaching, and heavy teaching loads do not necessarily preclude research productivity.

  4. A ranking system also carries risks. One plausible administrative response to a low program ranking is to discontinue the program. That is not necessarily a bad thing.

References

  • Bailey, J. S., & Burch, M. R. (2002). Research methods in applied behavior analysis. Thousand Oaks: Sage.

  • Critchfield, T. S. (2015a). In dreams begin responsibility: why and how to measure the quality of graduate training in applied behavior analysis. Behavior Analysis in Practice.

  • Critchfield, T. S. (2015b). What counts as high-quality practitioner training in applied behavior analysis? Behavior Analysis in Practice, 8(1), 3–6.

  • Detrich, R. (2015). Are we looking for love in all the wrong places? Comment on Dixon et al. Behavior Analysis in Practice, 1–3.

  • Dixon, M. R., Reed, D. D., Smith, T., Belisle, J., & Jackson, R. E. (2015). Research rankings of behavior analytic graduate training programs and their faculty. Behavior Analysis in Practice, 8(1), 7–15.

  • Hayes, L. J. (2015). There’s a man goin’ round taking names. Behavior Analysis in Practice, 1–2.

  • Hayes, S. C. (1978). Theory and technology in behavior analysis. Behavior Analyst, 1, 35–41.

  • Kulkarni, A. V., Aziz, B., Shams, I., & Busse, J. W. (2009). Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals. Journal of the American Medical Association, 302, 1092–1096.

  • Maguire, R. W., & Allen, R. F. (2015). Another perspective on research as a measure of high-quality practitioner training: a response to Dixon, Reed, Smith, Belisle, and Jackson. Behavior Analysis in Practice, 1–2.

  • Wilder, D. A., Lipschultz, J. L., Kelley, D. P., III, Rey, C., & Enderli, A. (2015). An alternative measure of research productivity among behavior analytic graduate training programs: a response to Dixon et al. (2015). Behavior Analysis in Practice, 1–3.

Corresponding author

Correspondence to Jordan Belisle.

Cite this article

Dixon, M.R., Reed, D.D., Smith, T. et al. Top 10 Responses to the Commentaries on Dixon, Reed, Smith et al. (2015). Behav Analysis Practice 8, 165–169 (2015). https://doi.org/10.1007/s40617-015-0094-8
