School Mental Health, Volume 7, Issue 2, pp 92–104

Preliminary Investigation of the Impact of a Web-Based Module on Direct Behavior Rating Accuracy

  • Sandra M. Chafouleas
  • T. Chris Riley-Tillman
  • Rose Jaffery
  • Faith G. Miller
  • Sayward E. Harrison
Original Paper


The purpose of this study was to provide an initial evaluation of a web-based training module on rating accuracy when using Direct Behavior Rating (DBR). Components of the training module included (a) an overview familiarizing users with assessing student behavior using this method, (b) modeling that incorporated frame-of-reference training, and (c) multiple opportunities to practice and receive immediate corrective feedback. Participants were 90 undergraduate students assigned to one of six sessions (three experimental and three control). The outcome measure was rating accuracy, defined as the difference between the rater's score and a comparison score derived from an expert DBR or systematic direct observation (SDO). Rating targets included academically engaged, disruptive, and respectful behavior. Completing the DBR training module generally yielded ratings that more closely matched the scores obtained from DBR experts and SDO, although specific results were mixed across type of rating (i.e., behavior target and duration) and comparison (i.e., DBR expert and SDO). Limitations, future research directions, and implications for practice are discussed.


Keywords: Direct Behavior Rating · Behavior assessment · Rater accuracy · Teacher training



The authors would like to thank Rohini Sen for her assistance with data analyses and Austin Johnson for his editorial feedback. Preparation of this article was supported by funding provided by the Institute of Education Sciences, U.S. Department of Education (R324B060014). Opinions expressed herein do not necessarily reflect the position of the Institute or the U.S. Department of Education, and such endorsements should not be inferred.



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Sandra M. Chafouleas (1)
  • T. Chris Riley-Tillman (2)
  • Rose Jaffery (1)
  • Faith G. Miller (3)
  • Sayward E. Harrison (4)

  1. Department of Educational Psychology, University of Connecticut, Storrs, USA
  2. University of Missouri, Columbia, USA
  3. University of Minnesota, Minneapolis, USA
  4. East Carolina University, Greenville, USA
