Introduction

At the September 2014 meeting of the National Technology Leadership Summit (see http://www.ntls.info/) in Washington, DC, a group of journal editors unanimously agreed to pursue the publication of replication studies. Educational Technology Research & Development (ETR&D) was among the journals represented at that meeting. As Suppes (1978) lamented years ago, education research has had negligible impact on educational practice in the USA. That volume noted a number of promising studies, all of which involved technology; however, none of those projects resulted in large-scale impact, even though each was deemed successful. Makel and Plucker (2014) and Schmidt (2009) confirm this pattern (lack of impact on educational practice), and we believe it holds outside the USA as well. As a result, ETR&D hereby issues this call for papers reporting replication studies and for studies reporting large-scale, sustainable, systemic impact on educational practice. Such papers can be submitted to any ETR&D section following journal guidelines, and every effort will be made to bring these papers up to the publication standards of ETR&D.

The scope of papers that ETR&D considers can be found in Spector et al. (2014). Additional topics of concern to educational technology can be found in Richey et al. (2011), Spector (2012, 2015), Spector et al. (2014), and Woolf (2010). Moreover, emerging technologies are reviewed periodically in the Horizon Reports published by the New Media Consortium (see www.nmc.org).

Replication studies

Replication studies are widely accepted in other domains and are a hallmark of scientific research in the medical domain. Replication studies add confidence in findings and are necessary to generate a basis for generalization beyond the original project setting. Moreover, replication studies can identify potential biases in the original study and serve as a basis for confirming or disconfirming prior findings (Makel and Plucker 2014). Lykken (1968) identified three types of replication studies: (a) literal (exact duplication; nearly impossible to conduct and likely to suffer the same experimental biases), (b) operational (testing the validity of the original findings and instruments), and (c) constructive (tests aimed at the targeted construct of the original study). Schmidt (2009) re-categorized replication studies into direct replications (repeating the experimental procedure) and conceptual replications (different methods aimed at the same underlying hypotheses), with direct replications the path preferred by many because they preserve the original instrumentation and approach.

ETR&D is open to both direct and conceptual replication studies, and urges researchers to conduct such studies so as to make educational technology research more scientific and provide a firm and convincing foundation for large-scale implementations and impact studies. In addition, ETR&D also welcomes meta-analytic studies that analyze and synthesize experimental findings across a number of related studies.

Large-scale impact reports

There are a number of efforts around the globe dedicated to large-scale educational reform (see http://www.oecd.org/). As with replication studies, there are a few success stories (the Head Start program in the USA is one—see http://www.acf.hhs.gov/programs/ohs), but these are far fewer than one might expect given the high-level discussions of the need for large-scale educational reform. A classic piece on large-scale, systemic change in education is that by Reigeluth and Garfinkle (1994). Regrettably, little evidence of sustained, large-scale systemic change efforts exists.

As a result of the lack of published findings with regard to large-scale impact on educational practice, ETR&D also strongly encourages such reports to be submitted to any ETR&D section. ETR&D welcomes stories of successful large-scale impact as well as reports indicating problems and shortfalls in attempts to achieve large-scale impact. Conceptual frameworks for achieving systemic change and large-scale educational transformation that build off past successes, theory and lessons learned are also welcome.

Concluding remarks

We believe there are three main barriers that have prevented researchers and developers from pursuing such studies. The first pertains to funding. National funding agencies seldom support studies that extend beyond a few years. The European Commission has funded large networks of excellence (see, for example, http://www.teleurope.eu/pg/frontpage and http://www.galanoe.eu/), but these last only 4 years, and none has yet resulted in systemic change and large-scale impact of the type mentioned previously. The American National Science Foundation also funds centers of excellence, and these are expected to continue beyond their original funding (see, for example, https://www.nsf.gov/about/budget/fy2012/pdf/39_fy2012.pdf and http://www.op-tec.org/nsfate.php). Of all such large-scale, nationally funded efforts, perhaps the one that has achieved long-term impact is the National Center for Research on Evaluation, Standards, and Student Testing (CRESST; see https://www.cse.ucla.edu/index.php), but that prestigious and successful national center has not led to large-scale, systemic reform in educational practice, although its influence on educational research is clearly positive. National funding agencies need to promote more long-term efforts aimed at systemic and sustainable reform in educational practice.

The second barrier concerns the willingness of researchers and developers to conduct replication studies and meta-analyses, and to find funding for large-scale efforts. Researchers tend to want to create their own instruments and to engage in efforts closely tailored to their personal interests. While such efforts are laudable, they do not lead to progressive improvement of educational technology research as a science. One-off studies may get published and even bring some recognition to the investigators. However, they often fail to result in systemic and systematic improvement of educational practice. There is clearly a disconnect between what is personally valued by educational researchers and what is needed to improve learning and instructional practice. Because publication in high-quality refereed journals is typically required for promotion and tenure, ETR&D encourages replication studies, meta-analyses, and large-scale reports of impact on education to be submitted for publication.

The third barrier concerns the willingness of researchers to openly and freely share instruments and details of their research procedures with others. There are a few notable exceptions, such as the American National Science Foundation’s ITEST program (Innovative Technology Experiences for Students and Teachers; see the most recent NSF ITEST description at http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5467). However, too many researchers want to retain full rights and ownership of their instruments and all too often charge those who wish to use them a fee that is prohibitive for graduate students and junior faculty.

It is our hope that project investigators and senior faculty will encourage meta-analyses, replication studies, and large-scale impact reports so that our enterprise can become more scientific and provide a basis for transforming educational practice. By welcoming these studies, we aim to fulfill the overarching aims of ETR&D, which include publishing quality research and development findings that will improve learning and instruction around the world.