Measuring meaningful outcomes in consequential contexts: searching for a happy medium in educational technology research (Phase II)
Abstract
In a paper published 25 years ago, Ross and Morrison (Educ Technol Res Dev 37(1):19–33, 1989) called for a “happy medium” in educational technology research, to be achieved by balancing high rigor of studies (internal validity) with relevance to real-world applications (external validity). In this paper, we argue that, although contemporary research orientations have made substantial strides in capturing these two features, success in combining them and achieving the happy medium envisioned remains limited and disappointing. Highly prevalent today are (a) “technology effects studies,” which are strong in rigor but continue to view educational technology (ET) as a “treatment” rather than as a mode for delivering and potentially enhancing treatments; and (b) “surface learning studies,” which examine processes and outcomes of realistic ET applications, but often without including meaningful measures of student learning. To promote studies that more successfully bridge research and practice, we present suggestions and positive examples for finally achieving a happy medium in this “Phase II” quest.
Keywords
Educational technology research · Technology effect studies · Surface learning · Research methodology
References
- Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. The American Journal of Distance Education, 16(2), 83–97.
- Allen, E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Babson Park, MA: Babson Survey Research Group.
- Anderson, R. C. (1972). How to construct achievement tests to assess comprehension. Review of Educational Research, 42(2), 145–170.
- Bernard, R. M., Abrami, P. C., & Lou, Y. (2004a). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.
- Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004b). A methodological morass? How we can improve quantitative research in distance education. Distance Education, 25(2), 175–198. doi: 10.1080/0158791042000262094.
- Borman, G. D., Hewes, G. M., Overman, L. T., & Brown, S. (2003). Comprehensive school reform and achievement: A meta-analysis. Review of Educational Research, 73(2), 125–230.
- Campuzano, L., Dynarski, M., Agodini, R., Rall, K., & Pendleton, A. (2009). Effectiveness of reading and mathematics software products: Findings from two student cohorts. Washington, DC: Institute of Education Sciences.
- Cavanaugh, C. S. (2001). The effectiveness of interactive distance education technologies in K-12 learning: A meta-analysis. International Journal of Educational Telecommunications, 7(1), 73–88.
- Cheung, A. C. K., & Slavin, R. E. (2011). The effectiveness of education technology for enhancing reading achievement: A meta-analysis. Best Evidence Encyclopedia. Retrieved from http://www.bestevidence.org/word/tech_read_Feb_24_2011.pdf.
- Cheung, A. C. K., & Slavin, R. E. (2013). The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-analysis. Educational Research Review, 9, 88–113.
- Cho, Y., Park, S., Jo, S. J., & Suh, S. (2013). The landscape of educational technology viewed from the ETR&D journal. British Journal of Educational Technology, 44(5), 677–694. doi: 10.1111/j.1467-8535.2012.01338.x.
- Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445.
- Clark, R. E. (1985). Confounding in educational computing research. Journal of Educational Computing Research, 1(2), 137–148.
- Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21–29.
- Davis, F. D. (1993). User acceptance of information technology: System characteristics, user perceptions, and behavioral impacts. International Journal of Man-Machine Studies, 38(3), 475–487.
- Dynarski, M., Agodini, R., Heaviside, S. N., Carey, N., Campuzano, L., Means, B., et al. (2007). Effectiveness of reading and mathematics software products: Findings from the first student cohort. Washington, DC: Institute of Education Sciences.
- Edirisingha, P., Nie, M., Pluciennik, M., & Young, R. (2009). Socialisation for learning at a distance in a 3-D multi-user virtual environment. British Journal of Educational Technology, 40(3), 458–479. doi: 10.1111/j.1467-8535.2009.00962.x.
- Eisenhart, M., & Towne, L. (2003). Contestation and change in national policy on “scientifically based” education research. Educational Researcher, 32(7), 31–38.
- Ertmer, P., Sadaf, A., & Ertmer, D. (2011). Student-content interactions in online courses: The role of question prompts in facilitating higher-level engagement with course content. Journal of Computing in Higher Education, 23(2), 157–186. doi: 10.1007/s12528-011-9047-6.
- Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational Researcher, 31(8), 4–14.
- Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2–3), 87–105. doi: 10.1016/S1096-7516(00)00016-6.
- Hagler, P., & Knowlton, J. (1987). Invalid implicit assumption in CBI comparison research. Journal of Computer-Based Instruction, 14(3), 84–88.
- Hsieh, P., Acee, T., Chung, W. H., Hsieh, Y. P., Kim, H., Thomas, G. D., et al. (2005). Is educational intervention research on the decline? Journal of Educational Psychology, 97(4), 523.
- Kulik, C.-L. C., & Kulik, J. A. (1991). Effectiveness of computer-based instruction: An updated analysis. Computers in Human Behavior, 7(1–2), 75–94. doi: 10.1016/0747-5632(91)90030-5.
- Levin, B. (2004). Making research matter more. Education Policy Analysis Archives, 12, 56.
- Lust, G., Elen, J., & Clarebout, G. (2012). Adopting webcasts over time: The influence of perceptions and attitudes. Journal of Computing in Higher Education, 24, 40–57. doi: 10.1007/s12528-011-9052-9.
- Mayer, R. E., Dyck, J. L., & Cook, L. K. (1984). Techniques that help readers build mental models from scientific text: Definitions pretraining and signaling. Journal of Educational Psychology, 76(6), 1089–1105. doi: 10.1037/0022-0663.76.6.1089.
- Pane, J. F., McCaffrey, D. F., Slaughter, M. E., Steele, J. L., & Ikemoto, G. S. (2010). An experiment to evaluate the efficacy of cognitive tutor geometry. Journal of Research on Educational Effectiveness, 3(3), 254–281.
- Petkovich, M. D., & Tennyson, R. D. (1984). Clark’s “learning from media”: A critique. Educational Communications and Technology Journal, 32(4), 233–241.
- Quattrocchi, C. (2013). The nature of big edtech research. edSurge. Retrieved from https://www.edsurge.com/n/2013-08-27-the-nature-of-big-edtech-research/.
- Ross, S. M., & Morrison, G. R. (1989). In search of a happy medium in instructional technology research: Issues concerning external validity, media replications, and learner control. Educational Technology Research and Development, 37(1), 19–33. doi: 10.1007/bf02299043.
- Ross, S. M., & Morrison, G. R. (2012). Constructing a deconstructed campus: Instructional design as vital bricks and mortar. Journal of Computing in Higher Education, 24(2), 119–131.
- Ross, S. M., Morrison, G. R., Hannafin, M. J., Young, M., van den Akker, J., Kuiper, W., et al. (2008). Research designs. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (pp. 715–761). New York: Taylor & Francis.
- Rourke, L., & Kanuka, H. (2009). Learning in communities of inquiry: A review of the literature. Journal of Distance Education, 23(1), 19–48.
- Salomon, G., & Clark, R. E. (1977). Reexamining the methodology of research on media and technology in education. Review of Educational Research, 47(1), 99–120.
- Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R., Abrami, P. C., Wade, C. A., et al. (2009). Technology’s effect on achievement in higher education: A Stage 1 meta-analysis of classroom applications. Journal of Computing in Higher Education, 21, 95–109.
- Shavelson, R. J., Phillips, D. C., Towne, L., & Feuer, M. J. (2003). On the science of education design studies. Educational Researcher, 32(1), 25–28.
- Shea, P., & Bidjerano, T. (2009). Cognitive presence and online learner engagement: A cluster analysis of the community of inquiry framework. Journal of Computing in Higher Education, 21(3), 199–217. doi: 10.1007/s12528-009-9024-5.
- Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15–21. doi: 10.2307/3594400.
- Slavin, R. E. (2003). A reader's guide to scientifically based research. Educational Leadership, 60(5), 12–16.
- Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4–28.
- US Congress. (2001). No Child Left Behind Act of 2001. Public Law 107-110. Washington, DC: Government Printing Office.