
Journal of Computing in Higher Education, Volume 24, Issue 1, pp 58–69

Online versus paper evaluations: differences in both quantitative and qualitative data

  • William B. Burton
  • Adele Civitano
  • Penny Steiner-Grossman

Abstract

This study sought to determine if differences exist in the quantitative and qualitative data collected with paper and online versions of a medical school clerkship evaluation form. Data from six-and-a-half years of clerkship evaluations were used, some collected before and some after the conversion from a paper to an online evaluation system. The quantitative data consisted of a composite score based on the average of several Likert-type items; the qualitative data consisted of open-ended comments about the clerkships. Clerkship ratings were more positive in the online version. Students made significantly longer comments about both strengths and weaknesses on the online form than on the paper form. In addition, comments made on the online form were judged to be more informative and showed less evidence of “negativity” than those made on the paper form. The findings suggest that both quantitative and qualitative data obtained with online evaluation forms can differ in important ways from data collected with paper forms.

Keywords

Student feedback · Course evaluation questionnaires · Qualitative data · Inter-rater reliability · Factor analysis


Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • William B. Burton¹
  • Adele Civitano¹
  • Penny Steiner-Grossman¹

  1. Office of Educational Resources, Albert Einstein College of Medicine, Bronx, USA
