Preliminary Investigation of the Impact of a Web-Based Module on Direct Behavior Rating Accuracy
The purpose of this study was to provide an initial evaluation of a web-based training module on rating accuracy when using Direct Behavior Rating (DBR). Components of the training module included (a) an overview familiarizing users with assessing student behavior via DBR, (b) modeling that incorporated frame-of-reference training, and (c) multiple opportunities to practice and receive immediate corrective feedback. Participants were 90 undergraduate students assigned to one of six sessions (three experimental and three control). The outcome measure was rating accuracy, defined as the difference between a participant's rating and a comparison score derived from either an expert DBR rating or systematic direct observation (SDO). Rating targets included academically engaged, disruptive, and respectful behavior. Completing the DBR training module generally yielded ratings that aligned more closely with the scores obtained from DBR experts and SDO, although specific results were mixed across rating type (i.e., behavior target and duration) and comparison standard (i.e., DBR expert vs. SDO). Limitations, directions for future research, and implications for practice are discussed.
Keywords: Direct Behavior Rating · Behavior assessment · Rater accuracy · Teacher training
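As a rough illustration of the accuracy metric described in the abstract, the sketch below computes difference scores between participant ratings and a comparison standard. The 0–10 rating scale, the variable names, and the use of absolute differences are illustrative assumptions, not details drawn from the study itself.

```python
# Minimal sketch of the difference-score accuracy metric described in the
# abstract. The 0-10 scale, the variable names, and the use of absolute
# differences are illustrative assumptions, not details from the study.

def rating_accuracy(participant_rating: float, comparison_score: float) -> float:
    """Return the absolute difference between a participant's DBR rating and
    a comparison score (expert DBR or SDO-derived); smaller = more accurate."""
    return abs(participant_rating - comparison_score)

# Hypothetical ratings of academically engaged behavior on an assumed 0-10 scale.
participant_ratings = [7, 4, 9]
expert_scores = [6, 5, 9]

differences = [rating_accuracy(p, e) for p, e in zip(participant_ratings, expert_scores)]
print(differences)                          # [1, 1, 0]
print(sum(differences) / len(differences))  # mean difference score, about 0.67
```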