Preliminary Investigation of the Impact of a Web-Based Module on Direct Behavior Rating Accuracy

Abstract

The purpose of this study was to provide an initial evaluation of a web-based training module on rating accuracy when using Direct Behavior Rating (DBR). Components of the training module included (a) an overview familiarizing users with assessing student behavior with this method, (b) modeling that included frame-of-reference training, and (c) multiple opportunities to practice and receive immediate corrective feedback. Participants included 90 undergraduate students assigned to one of six sessions (three experimental and three control). Rating accuracy, defined as the difference between the rater's score and a criterion score derived from an expert DBR or systematic direct observation (SDO), served as the outcome measure. Rating targets included academically engaged, disruptive, and respectful behavior. Completion of the DBR training module generally yielded ratings that aligned more closely with the scores obtained from DBR experts and SDO, yet specific results were mixed across type of rating (i.e., behavior target and duration) and comparison (i.e., DBR expert and SDO). Limitations, future research directions, and implications for practice are discussed.

Fig. 1


Acknowledgments

The authors would like to thank Rohini Sen for her assistance with data analyses and Austin Johnson for his editorial feedback. Preparation of this article was supported by funding provided by the Institute of Education Sciences, U.S. Department of Education (R324B060014). Opinions expressed herein do not necessarily reflect the position of the Institute or the U.S. Department of Education, and no official endorsement should be inferred.

Author information

Corresponding author

Correspondence to Sandra M. Chafouleas.

Appendix: Direct Behavior Rating (DBR) Form: 3 Standard Behaviors

Cite this article

Chafouleas, S.M., Riley-Tillman, T.C., Jaffery, R. et al. Preliminary Investigation of the Impact of a Web-Based Module on Direct Behavior Rating Accuracy. School Mental Health 7, 92–104 (2015). https://doi.org/10.1007/s12310-014-9130-z

Keywords

  • Direct Behavior Rating
  • Behavior assessment
  • Rater accuracy
  • Teacher training