
How research informs educational technology decision-making in higher education: the role of external research versus internal research

  • Fiona Hollands
  • Maya Escueta
Open Access
Research Article

Abstract

Research use in educational decision-making has been encouraged and well documented at the K-12 education level in the United States but not in higher education, or more specifically for educational technology. We conducted a qualitative study to investigate the role of research in decisions about acquiring and using educational technology for teaching and learning in higher education. Results from 45 interviews with decision-makers in higher education show that they engage in different types of research activities throughout the decision-making process, but that in most cases the research lacks methodological rigor. Externally-produced, scientifically-rigorous research was mentioned in less than 20% of interviews. Decision-makers often conduct their own internal investigations of educational technology products and strategies, producing locally-relevant, but usually less-than-rigorous, evidence to inform decisions about continuing use of the technology or scaling up.

Keywords

Educational technology · Decision-making · Higher education · Research use · Evidence-based decision-making

Introduction

Research use in educational decision-making

The use of research in educational decision-making has become a topic of increasing importance among education researchers, practitioners, policymakers, and funders. With limited budgets and increasing demand for accountability in education, a call for the use of evidence and scientifically-based research in education decision-making has emerged over the past 10–20 years (Baker and Welner 2012; Maynard 2006). However, more has been done to support and enforce this call in K-12 education than in higher education. The No Child Left Behind Act (NCLB 2002) and the Every Student Succeeds Act (ESSA 2015) guide K-12 educators by establishing evidence-based decision-making practices and by providing definitions of scientifically-based research. This legislation is based on the theory that such a framework can support better outcomes for students (U.S. Department of Education 2016), although it is unclear whether this works in practice given the lack of enforcement. While the NCLB Act and ESSA provide an accountability framework for K-12 education, no such legislation calls for the same focus on evidence-based decision-making in higher education (Deming and Figlio 2016).

A significant body of literature has addressed whether and how research influences K-12 decision-makers in practice (Asen et al. 2013; Farley-Ripple et al. 2018; Farrell and Coburn 2017; Finnigan et al. 2013; Honig and Coburn 2008; Honig et al. 2017; Penuel et al. 2018). It has been well established that K-12 education decision-makers often consult three main types of information for decision-making: local knowledge, data, and scientifically-based research. Research findings may be used conceptually to influence decision-makers’ understanding of the decision problem, symbolically or politically as a tool of persuasion to justify a decision already made, or instrumentally to directly guide and shape decision-making (Finnigan et al. 2013; Honig and Coburn 2008; King and Pechman 1984; Nutley et al. 2007; Penuel et al. 2016; Tseng 2012; Weiss 1977). A comparable body of literature is missing for higher education (Chaffee 1983; Deming and Figlio 2016; Ho et al. 2006).

There is a general consensus that integrating externally-produced research into decision-making is difficult (Neal et al. 2018) since rigorous research about the educational program or practice in question may not be available. Even when it is, the findings need to be contextualized with “local data analyses, organizational history, and practice experience” (Tseng and Nutley 2014, p. 170) to apply them to the decision-maker’s own situation. This establishes an inherent tension between external, and often more rigorous, research and internal research. Externally-produced, rigorous research, such as randomized controlled trials (RCTs), is often expensive, may take too long to inform pressing decisions, and is often difficult to generalize to a decision-maker’s context. Locally-relevant, internal research, such as faculty and student surveys or pilot studies, may be more feasible to implement and may provide more timely information, especially to answer questions about whether an educational technology tool or strategy is meeting local needs. However, internal research may be less reliable for providing solid answers to questions about effectiveness for improving academic outcomes. There is substantial evidence that involving stakeholders in identifying educational needs and goals and in designing and conducting locally-relevant research and evaluation increases the likelihood that the findings are used for decision-making (Anderson and Shattuck 2012; Coburn and Penuel 2016; Dede 2005; Lewin 1946; Penuel et al. 2015; Penuel and Farrell 2017; The Design-Based Research Collective 2003). But the jury is still out as to whether evidence-based decision-making in education leads to improved student outcomes (Heinrich and Good 2018).

With many institutions of higher education (IHEs) including “research” in their mission statements, one might expect a greater inclination and capacity among higher education decision-makers to use research in decision-making. But universities have often been characterized as “organized anarchies” (Cohen et al. 1972, p. 1) in which faculty and students operate with a great deal of autonomy and administrators struggle to manage disparate interests (Birkland 2011). Rational decision-making at such organizations is hard to orchestrate, possibly leaving little room for research to influence choices. Cohen et al. suggest that, more often, decisions at universities are made according to the “garbage can model” (p. 1) in which the actors begin with solutions and then look for problems to solve with them. The dearth of studies about research use in higher education decision-making leaves these competing hypotheses untested.

The importance of research use in educational technology decision-making

Over the past few decades, with rapid developments in information and communications technology, practitioners and decision-makers have increasingly turned to educational technology products as potential solutions to long-standing challenges in higher education. These products include software and hardware tools, and initiatives or strategies that simultaneously integrate multiple tools. Most IHEs have adopted sophisticated learning management systems aiming to improve efficiency of instruction. Many have acquired adaptive software with the goal of individualizing instruction, or are using predictive analytics software in an effort to improve student performance, retention, and completion. According to MindWires Consulting, U.S. higher education technology investments in 2016 were estimated to be between $1.9 and $3.3 billion, which includes central information technology spending and educational technology services purchases made through academic programs (P. Hill, personal communication, July 8, 2017). Educational technology is ostensibly used to improve administrative efficiency, to increase access to educational opportunities, and to facilitate student learning. These investments may present promising solutions for the unique challenges of higher education, particularly for non-traditional learners who face obstacles in access and progression. Justification for these investments should vary depending on the purpose of the technology, for example, whether the intention is to increase access to higher education by a wider range of potential students, to share instructional content and materials more efficiently, or to improve academic outcomes. Yet, few IHEs systematically assess whether their goals are met in order to justify the enormous amounts of time, effort, and resources dedicated to implementing the technology.

Of greatest concern, there are few rigorous evaluations of the effects of educational technologies on student outcomes. For example, although almost one third of college students now participate in online courses (Allen and Seaman 2017), a recent review of rigorous evaluations of educational technology found that only a handful of experimental studies compare online education to traditional face-to-face instruction in an undergraduate university setting. These have found null to mixed results (Escueta et al. 2017). Decision-makers appear to be more influenced by market demands for online education and policy pressure to increase access to higher education than by evidence of what works to improve student academic achievement. In another rapidly growing application of educational technology, predictive analytics, the evidence to date on student outcomes is mostly limited to case studies (e.g., Sclater et al. 2016; Shacklock 2016). These may provide decision-makers with valuable information on potential uses of such innovations and how to implement them. However, initial decisions about large investments of limited resources may deserve stronger evidence than descriptive and correlational analyses if the goal is to improve academic achievement.

As investments in educational technology increase (Morrison 2017), it is important to understand how educational technology decision-makers in higher education make decisions about acquiring and using educational technology for the purposes of teaching and learning, whether and how they use research, and how to improve these practices to ensure positive academic returns on educational technology investments. With the exception of Acquaro (2017), almost no research has specifically investigated the use of evidence and research for decisions about educational technology in higher education. This study addresses this gap in the literature by asking the question: Do educational technology decision-makers in higher education use research to inform decisions about acquiring and using educational technology to facilitate teaching and learning and, if so, how? We summarize our findings from a set of interviews with decision-makers in higher education about their use of research when making decisions about educational technology to improve teaching and learning and provide two detailed examples of internal research conducted by IHEs.

Methods

Between September 2016 and April 2017, eight interviewers conducted 45 semi-structured interviews (Merriam and Tisdell 2015) with educational technology decision-makers from 42 IHEs in the U.S. Interviewees included e-Learning administrators such as Directors of Digital or Online Learning, Presidents, Chief Information Officers (CIOs), instructional technology personnel, general administrators, Chief Academic Officers, faculty members, and Innovation Officers. We specifically sought out individuals involved in making decisions about use of educational technology for teaching and learning purposes, as opposed to administrative functions such as enrollment, registration, and record-keeping, for which research on student academic outcomes would not be the most appropriate measure of effectiveness.

Sampling and recruitment

Two sampling strategies were employed to balance the likelihood of participation with the likelihood of obtaining a fair picture of the range of decision-making strategies across institutional types. Rather than trying to create a sample of institutions that proportionally represented the types of IHEs in the U.S., our goal was to have at least five institutions in each of six major categories of IHEs to ensure that we obtained a range of perspectives: 2-year private for-profit, 2-year private non-profit, 2-year public, 4-year private for-profit, 4-year private non-profit, 4-year public. First, a purposive sample of IHEs was established by soliciting suggestions from participants in the EdTech Efficacy Research Academic Symposium (2017). We asked for names of IHEs and interviewees who might provide useful insights into how educational technology decisions related to teaching and learning are made and what role research plays in the process. Potential interviewees were emailed an invitation to participate in the interview. One follow-up email was sent in the event the first invitation did not elicit a response. Individuals from 30 of 37 invited U.S. IHEs agreed to an interview or suggested someone else at the same institution more appropriate to invite, who subsequently agreed. The resulting participation rate was very high at 81%.

Second, to address the lack of broad representation of IHEs in the purposive sample, we created a stratified random sample of institutions from the Integrated Postsecondary Education Data System (IPEDS) database (National Center for Education Statistics 2016). We used the following criteria to identify our potential population of IHEs in IPEDS: U.S.-based; Title IV eligible; 750 or more undergraduate and/or graduate students. We included all 2-year and 4-year institutions meeting these criteria, but excluded less-than-2-year institutions. All institutions in IPEDS meeting these criteria were assigned a random value from 0 to 1 using the Stata function runiform(), and, for each of 12 categories of IHEs, we selected up to ten schools to invite by picking the lowest randomly assigned numbers. The 12 categories were IHEs with combinations of the following characteristics: 2-year or 4-year; for-profit or non-profit; public or private; and distance education offered or not.

After our first round of sample recruitment, we found participation from the private sector institutions was low, so we drew a second random sample of private 2-year and 4-year institutions. We followed the same procedure as above, but excluded the institutions already drawn in the first sample before drawing the second sample from the eligible institutions. We drew up to 20 institutions in each of the 2-year and 4-year private institution categories. There were fewer than ten eligible IHEs in some categories, so we obtained a total of 104 IHEs in the first random sample, and 66 additional IHEs from the second random sample. As Table 1 shows, very few institutions met our criteria in some of these categories, such as private, non-profit 2-year institutions, so we reached out to all institutions in those categories. We used public sources (e.g., the IHE’s website) to identify the CIO, Chief Technology Officer, or other educational technology decision-maker for each of the IHEs in the drawn sample. We emailed these individuals to request their participation in an interview or a recommendation for someone else at their institution we could invite. We contacted a total of 67 institutions from the random sample (and 79 individual decision-makers), and received agreement from 13 of these institutions to participate, yielding a participation rate of 19%. One of these IHEs was already in the purposive sample and was therefore only counted in the purposive sample.
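The two-round stratified draw described above can be sketched as follows. This is an illustrative Python analogue of the Stata procedure (assigning each institution a uniform random value via runiform() and keeping the lowest values per stratum); the stratum size, seeds, and institution names below are invented for demonstration, not taken from the study.

```python
import random

def draw_stratum(institutions, n, exclude=frozenset(), seed=0):
    """Pick up to n institutions from one stratum: assign each eligible
    institution a uniform [0, 1) value (the runiform() analogue) and
    keep those with the lowest values, skipping prior draws."""
    rng = random.Random(seed)
    eligible = [i for i in institutions if i not in exclude]
    values = {i: rng.random() for i in eligible}      # random value per institution
    return sorted(eligible, key=values.get)[:n]       # lowest n values win

# Hypothetical stratum of 14 eligible institutions (names invented)
stratum = [f"IHE_{k}" for k in range(14)]

# Round one: up to 10 per category
first_round = draw_stratum(stratum, n=10, seed=1)

# Round two: up to 20 per category, excluding round-one draws
second_round = draw_stratum(stratum, n=20, exclude=set(first_round), seed=2)

# The two rounds never overlap by construction
assert not set(first_round) & set(second_round)
```

With 14 eligible institutions, round one yields 10 and round two yields the remaining 4, mirroring how some categories in the study had fewer eligible IHEs than the draw limit.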
Table 1

Categories and numbers of U.S. institutions of higher education that participated in interviews

| Type of institution | No. of institutions in IPEDS (a) meeting our criteria | Purposive sample | Random sample | Total |
| --- | --- | --- | --- | --- |
| 2-Year for-profit | 88 | 2 | 3 | 5 |
| 2-Year private non-profit | 5 | 1 | 1 | 2 (b) |
| 2-Year public | 758 | 5 | 2 | 7 |
| 4-Year for-profit | 155 | 6 | 2 | 8 (c) |
| 4-Year private non-profit | 919 | 7 | 2 | 9 |
| 4-Year public | 678 | 9 | 2 | 11 |
| Total | 2603 | 30 | 12 | 42 |

(a) The Integrated Postsecondary Education Data System is a system of interrelated surveys conducted annually by the U.S. Department of Education’s National Center for Education Statistics that gathers information from every college, university, and technical and vocational institution that participates in the federal student financial aid programs

(b) Very few institutions fall into this category in IPEDS and all were contacted

(c) One institution in this category appeared in both the purposive and random sample but is only counted in the purposive sample

Among the 42 U.S. IHEs that agreed to participate in the study, 71% were from the purposive sample and 29% from the random sample; 19% were 2-year IHEs and 81% were 4-year IHEs; 19% were for-profits and 81% were non-profits; 33% were public IHEs and 67% were private IHEs. Table 1 summarizes the number of institutions in the IPEDS database that met our criteria and the number and types of institutions that participated in our interviews in both the purposive and random samples. We succeeded in recruiting five or more institutions in each of the six target categories of IHE, with the exception of 2-year private non-profits, of which there were only five total in IPEDS that met our criteria. Overall, compared to the population in IPEDS, our sample was skewed toward 4-year institutions as opposed to 2-year institutions: 81% of institutions in our sample were 4-year institutions compared with 67% in IPEDS. Our sample was also skewed toward for-profits as opposed to non-profits: 19% of institutions in our sample were for-profit institutions compared with 9% in the IPEDS list. It was also skewed toward private institutions compared with public institutions: 67% in our sample were private institutions, compared with 45% in the IPEDS list.

For most IHEs, we conducted one interview with a single person. In a few cases, two or three decision-makers from the IHE participated in one interview. For three IHEs, we interviewed a second person, yielding a total of 45 interviews. The small sample size in each category does not allow us to draw reliable conclusions about each category, but we believe the overall sample can provide insights into the range of decision-making processes and uses of research in U.S. IHEs. An additional interview was conducted with an Australian university. The purpose of this interview was to provide a counterpoint to the U.S. perspective and to investigate the observation of a symposium member that Australian universities are generally more sophisticated in their selection and evaluation processes for educational technology than U.S. universities. Lessons from this interview are shared in one of the highlighted examples in our findings section but the interview was not included in our analyses of research use by U.S. universities.

Interview procedure

Interview questions were based on the trajectory of rational decision-making processes (Simon 1957) and assumed multiple criteria were used in decision-making (Zopounidis and Doumpos 2017). The interview protocol was also influenced by Edwards and Newman’s (1982) methods for multiattribute evaluation and by qualitative studies of research use by Asen et al. (2013) and Finnigan et al. (2013). We began by eliciting information about where the interviewee obtains information on educational technology products and trends; what individuals or organizations the interviewee perceives as opinion leaders, change makers, or innovation leaders in educational technology; and who generally participates in educational technology decisions at the institution. Subsequently, the interviewee was asked to identify one particular educational technology decision in which she or he participated recently enough to remember the details of the process, and to answer a series of detailed questions about the goals of the decision, the stakeholders involved, and the decision-making process itself. Finally, interviewees were asked specifically what constitutes research, what role research plays in the IHE’s educational technology decision-making processes, and whether the IHE conducted any of its own investigations into how well an educational technology product works. Interviews lasted between 31 and 172 min, averaging 66 min. They were conducted face-to-face, by Skype, or by phone. Transcripts of recordings or notes were coded in NVivo software, using a combination of deductive and inductive theming and coding techniques (Merriam and Tisdell 2015). For the purposes of our analysis regarding the use of research by U.S. IHEs, we used interviews as the unit of analysis (n = 45) rather than IHEs (n = 42) because answers to some questions differed among individuals at the same institution.

Findings

Decision-makers described a variety of decisions about acquiring products to facilitate teaching and learning needs in their institution, including technologies that facilitate the teaching process, such as Learning Management Systems, online content and ebooks, and products aiming to directly improve outcomes, such as student retention tools or personalized adaptive programs. Overall, our interviewees displayed decision-making practices that ranged from reasonably rational (Simon 1957) to exemplifying the garbage can model (Cohen et al. 1972). In the former cases, decision-makers started with needs identified by faculty, administrators or students, and ended with the implementation of a carefully-vetted educational technology tool. In the latter cases, new educational technology tools were acquired with a view to later finding a use for them. However, the majority of the interviewees appeared to fall somewhere between these extremes, constantly scanning a variety of sources for information about new educational technology tools at the same time as they made efforts to regularly gather information about their community’s needs. The resulting decision-making process was perhaps more akin to match-making, with decision-makers cycling attention back and forth between solutions and needs. This strategy may be the most pragmatic given the swift pace of technology change which allows little time for evidence of effectiveness to accumulate before the technology has moved to its next incarnation.

What constitutes research for decision-makers in educational technology?

All interviewees claimed to conduct research when making educational technology decisions, but their definitions of research varied widely. These definitions included the collection of various forms of information, externally or internally, at each stage of the decision-making process. All interviewees reported gathering general background information throughout the year on technology trends, technology solutions to current IHE needs, and the variety and capabilities of educational technology products. These general information-gathering activities included consulting peers who have used similar products (mentioned in 29% of interviews), conducting surveys within decision-makers’ own institutions to understand technology needs (mentioned in 24% of interviews), or consulting literature reviews on relevant educational technology topics (mentioned in 13% of interviews).

More purposive, targeted searches for pedagogical strategies and practical solutions were conducted to address particular needs identified by students or faculty, for example, to search for strategies to improve retention. A handful of interviewees described efforts to use research-based instructional design strategies, such as strategies to improve learner retention in Massive Open Online Courses. Once a potential strategy had been identified, decision-makers searched for specific ways to operationalize it, for example, predictive analytics or student retention software compatible with the IHE’s existing student data systems. After identifying several options to consider, 80% of IHEs investigated potential product options through product demonstrations and pilots, primarily to determine ease of use and feasibility of implementation, but also in some cases to evaluate changes in student outcomes. While most IHEs engaged in these product demonstrations and pilots, only around one quarter of them considered these to be research activities. A few interviewees, particularly from for-profit IHEs, described investments in predictive analytics and subsequent tracking of student outcome data. Finally, post-implementation investigations of products and strategies were also conducted at most IHEs to assess impact on student outcomes, and to help inform decisions about continued use or scale up of the educational technology product or strategy.

Activities that interviewees counted as research and the percentage of interviews in which each was mentioned are summarized in Table 2. Overall, thirteen different types of research activities were mentioned. We indicate whether the research is external, that is, produced by a third party, or internal, that is, produced by personnel within the institution. We also note whether the activity can be deemed scientifically rigorous, as defined by use of an experimental or quasi-experimental design. Only a few of the activities described would meet this standard and these activities were mentioned in less than 20% of the interviews.
Table 2

Types of activities named as research used or conducted to inform decisions about educational technology products and strategies

| Internally (I) or externally (E) produced | Scientifically rigorous (a)? (Y/N/maybe) | Activities that counted as research | % of interviews in which activity was named (n = 45) |
| --- | --- | --- | --- |
| I | N | Conducting student, staff, and faculty interviews, surveys, or focus groups about educational technology issues | 40 |
| I | N | Looking at student outcomes after implementing an educational technology strategy or product | 38 |
| E | N | Reading industry, consortium, or trade publications, reports, or white papers about educational technology products | 33 |
| I/E | N | Participating in site visits/asking peers or references what educational technology products they use and for feedback on products | 31 |
| E | N/maybe | Reading vendor-provided information/literature/materials/white papers/case studies/efficacy studies | 31 |
| I | N | Reviewing data analytics based on own technology platform or tool use data | 24 |
| E | N | Reading forum, blog, or internet reviews about educational technology tools; gathering information about educational technology tools via social media and internet searches | 24 |
| I | Maybe | Conducting a pilot study in which an educational technology product or strategy is used by teachers and students | 22 |
| E | Maybe | Reading articles/reports/literature reviews/annotated bibliography/research materials on product (sources unspecified) | 20 |
| E | Y/maybe | Reading scholarly papers or journals about educational technology strategies | 18 |
| I | Maybe | Conducting investigations at own research centers or institutional research units on educational technology products or teaching and learning strategies | 16 |
| I | Y/maybe | Conducting comparison studies in which some teachers/students use educational technology product/strategy | 16 |
| E | N | Conferring with consultants about educational technology products and strategies | 13 |

(a) Scientifically rigorous is defined as quasi-experimental or experimental
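Percentages of the kind reported in Table 2 can be derived from coded interview data by counting the share of interviews in which each activity code appears. The sketch below is purely illustrative: the activity codes and the five toy interviews are invented stand-ins, not the study’s NVivo data.

```python
from collections import Counter

# Each interview is represented as the set of activity codes it mentions
# (codes below are hypothetical stand-ins for the study's coding scheme).
interviews = [
    {"surveys", "pilot"},
    {"surveys", "vendor_materials"},
    {"scholarly_papers"},
    {"surveys"},
    {"pilot", "consultants"},
]

# Count, per code, the number of interviews mentioning it; using sets
# ensures an activity counts at most once per interview.
counts = Counter(code for interview in interviews for code in interview)

n = len(interviews)
percentages = {code: round(100 * k / n) for code, k in counts.items()}
# e.g., "surveys" appears in 3 of 5 toy interviews -> 60%
```

Because the unit of analysis is the interview rather than the institution (as in the study, where n = 45), repeated mentions within a single interview do not inflate the tally.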

The most frequently mentioned activities that educational technology decision-makers counted as research were conducting student, staff, and faculty interviews, surveys, or focus groups about educational technology issues, mentioned in 40% of interviews, followed by looking at student outcomes after implementing an educational technology strategy or product, mentioned in 38% of interviews. Both of these activities were internally conducted by personnel within the interviewees’ own institutions. In slightly under a third of the interviews, reading a variety of vendor-provided information was counted as research, with “efficacy studies” among them. However, only one interviewee was able to provide a definition of efficacy that reflects the definition of efficacy research provided by the National Science Foundation and U.S. Department of Education’s Institute for Education Sciences (U.S. Department of Education, Institute for Education Sciences and National Science Foundation 2013), suggesting that the efficacy studies alluded to might not involve experimental or quasi-experimental methods.

The activity most likely to involve research that meets the scientifically-rigorous standard was reading scholarly papers or journals about educational technology strategies, mentioned in 18% of interviews. It is, however, unclear whether the interviewees were actually reading about quasi-experimental or experimental studies. Approximately one-sixth of the interviewees reported that their IHEs conducted internal comparison studies in which some teachers and students used an educational technology product or strategy while others did not. While interviewees may have termed these “experiments,” the level of rigor of these internal studies was often questionable because they relied on volunteer or “first-mover” faculty members who might not be representative of typical faculty members, did not include any form of pre-test, and did not assign subjects to treatment and control conditions at random.

Barriers to using research

Many interviewees described barriers to using research. Some pointed to the dearth of rigorous research on educational technology products and strategies, implying that they would use such evidence if it were available. For example, one interviewee remarked “I think where the What Works Clearinghouse fell down is that they just applied the gold standard to absolutely everything so in some categories almost nothing ‘passed’.” He suggested a “tiers of investment” approach for educational technology research that “calibrates the appropriate methodology to the level of investment.” In this approach, RCTs would only be used for high-stakes investments such as for adaptive learning strategies where the return on investment has the potential to be very high, while less rigorous research on lower-stakes investments would be appropriate as long as it is useful for decision-making.

Many interviewees noted that the length of time it takes to complete research is incompatible with the fast pace of technology change. By the time the research has been completed, the technology may have been upgraded or changed multiple times. Even during the course of a study, the technology might change, making it hard to pinpoint which iteration “works” or to precisely describe the intervention. Furthermore, every university and college appeared to believe that its students and faculty are unique, leading to reluctance to accept the relevance of results from a study executed in a different context.

Internal research conducted by decision-makers

In 78% of our interviews, decision-makers reported conducting their own investigations of educational technology products. These studies varied widely in goals and methodological rigor with a few resulting in peer-reviewed publications, but most not being shared publicly. Results of these investigations were used to continuously improve instruction or to decide whether to continue or scale up use of the educational technology product or strategy. For those decision-makers in positions of acquiring and supporting technology, useful investigations were often more about figuring out what technology is (or is not) being used by students, staff and faculty, and what needs to be supported, than about whether it improved student outcomes.

A few IHEs, both non-profit and for-profit, operate their own Research and Development (R&D) centers that focus on educational technology solutions to institutional challenges. One President described the work of three different educational technology-related R&D centers at his institution: one that searches for innovative, external educational technology companies that can help the IHE with “strategic concerns” such as improving student persistence or individualizing learning; one that focuses specifically on identifying effective simulation products in healthcare and publishes its findings in peer-reviewed journals; and one that focuses on developing and evaluating effective test preparation materials for high stakes professional examinations. The Chief Academic Technology Officer at a state university described the work of a dedicated institutional unit to continuously tinker with and improve entry-level courses using technology in an effort to help students succeed in progressing to more advanced courses. The latter interviewee observed that, to conduct meaningful research, scale is necessary in order to produce enough data to analyze (further details in Example 1 below). In both cases, the IHE has been less focused on sharing results externally, and more on generating research to inform internal practice.

Several interviewees provided detailed descriptions of the investigations they were conducting on specific educational technology products and strategies, some ongoing over extended periods. A for-profit university assessed whether using a video-enabled discussion forum tool improved student engagement in a liberal arts course as measured by time-on-task and whether it affected the quality of student work. A community college that had acquired a student retention system was comparing student retention and completion rates before and after the software acquisition. A large, state university was working with the vendor of an adaptive learning platform to carefully analyze student data in an effort to intervene early to support students at risk of failing or dropping out of a course. To date, the work had resulted in three peer-reviewed publications. A small, private university investigated the use of active learning strategies to increase student engagement by introducing clickers and online quizzes, and by “flipping” the classroom. Another small, private IHE described action research on how to increase student participation in online discussion forums. This work led to new discussion group procedures and structures being implemented in the IHE’s courses. In general, the involvement of faculty in leading investigations appeared to increase the likelihood that the findings would eventually be reported in a peer-reviewed publication. IHEs not undertaking investigations of educational technology products cited constraints such as costs, time, and capacity. Below, we explore in detail two examples in which decision-makers conducted their own investigations and used the results to directly inform their decision-making around the acquisition and use of educational technology for teaching and learning.
The first case represents a more typical approach to evaluating educational technology products that are already in place at the IHE with a view to continuous improvement. The second case provides a potential model for a systematic need-based and evidence-based approach to the initial acquisition of educational technology to support teaching and learning.

Example 1: piloting personalized learning software at a 4-year public institution

The Chief Academic Technology Officer of a 4-year public IHE described piloting personalized learning software through an in-house “action lab” dedicated to educational technology research. He stressed that the value of conducting this research was not necessarily to produce rigorous, peer-reviewed work, but to use the large-scale data available to continuously improve the courses and platforms the IHE is already using. He emphasized that the purpose of the action lab is to conduct use-oriented research that can help the institution deliver courses at scale:

One of the drivers for the Action Lab is to do use-oriented research. The way we approach creating and delivering courses at scale is pretty much product-driven and not focused primarily on producing peer-reviewed research, ‘Oh, we’re going to get a grant from NSF and then we’re going to set up a certain experiment and we’re going to run that …’ It’s more like we’re building an effective mechanism: the way I would describe this course is like it’s a laboratory apparatus. Once that apparatus is constructed and data is beginning to flow, then we can start to talk about experiments we can design on top of the apparatus, and use the data that comes back to have meaningful things to say. I don’t mean to say the peer-reviewed approach isn’t important. I think that as these tests we perform begin to bear fruit, many of them will pass peer-review muster. But we won’t be gated by that, we don’t view peer-review as having to come first in order for us to be able to make decisions because, in practice, people actually make decisions in much less data-driven ways than that. In a sense, we’re trying to land a basic implementation in the product space and then study the product and continuously improve it.

For the Chief Academic Technology Officer at this IHE, entry-level courses are key for student learning because they are where students “level up” and gain the basic skills they need to be successful learners for the remainder of their college careers. He detailed the deployment at scale of a promising adaptive learning platform for math in one of these entry level courses, College Algebra:

This last term we just deployed [a type of personalized learning software] for the first time at scale in College Algebra: for 3500 on-campus students, for almost 800 online students, and for another 50,000 students on the internet. What we get back from [the personalized software] is terabytes of data about where each student placed when they first entered the course, which topics they had already mastered, which topics they had not mastered. We can see how much time they spend day by day, week by week; how they spend that time. Do they watch videos? Do they do problems? When they do problems do they struggle, do they stall? When they struggle, how do they respond? Do they drop out? Do they persist? What help do they seek?

The IHE uses beta testing to investigate differences across groups who are exposed to various aspects of a particular technology. The results of these and other tests are used to iterate on the design of the platform. Such investigations serve more to support monitoring and continuous improvement than to inform decisions about whether to acquire a particular technology in the first place. The Chief Academic Technology Officer acknowledged that a technology must already be implemented at scale to permit the IHE to collect enough data to draw meaningful inferences. He outlined future plans to conduct comparison studies of new educational technology products.

Example 2: developing a streamlined process for piloting and scaling educational technology in an Australian University

The Academic Director in the Office of Learning and Teaching at an Australian university described a streamlined process that the IHE recently developed to make decisions about acquiring educational technology to support pedagogy. This systematic selection and piloting process for educational technology initiatives feeds into decisions about selection and scaled-up implementation of technologies campus-wide. It illustrates how both external and internal research contribute to decisions about acquiring and scaling up educational technology initiatives.

The university faced three challenges related to educational technology acquisition prior to developing this process: the IHE’s instructional technology services (ITS) was working in isolation from other groups and departments; ITS decisions were slow because the unit believed it needed to obtain full faculty agreement before acquiring any new technology; and vendors were directly contacting faculty, with ITS having no way of knowing which faculty were adopting which technologies. To address these issues and implement an ongoing process for testing the educational value of educational technology, the Academic Director (a faculty member) developed a multi-step strategy. First, he created a prioritized list of technology initiatives based on his own assessment of the IHE’s needs and a review of external research including Horizon reports,1 Gartner reports,2 and information about how various technologies have been used in the past to improve student outcomes. Examples of such initiatives include learning analytics, augmented reality, and eAssessments. He also reported seeking out research that supports the implementation of the technologies, but noted that such studies are generally conducted at small scale.

Once an initiative makes it onto the priority list, a market scan is conducted to identify vendors that can potentially serve the university’s needs and a pilot study or “alpha launch” is planned. The university distinguishes between “pilots” and “alpha launches” because, based on the IHE’s past experience, pilots have inevitably led to use of the technology long-term. By using the term “alpha launch” the messaging is that there is no guarantee the IHE will continue to license or support the software if initial results for students are not promising. An alpha launch runs for 6 weeks, or half a semester, with the idea that if it yields positive results, the technology can be in place campus-wide by the beginning of the following semester.

During the alpha launch, the university gathers its own evidence on the technology’s impact on pedagogy and potential for scalability. In addition to qualitative feedback from faculty and students, the university’s Office of Learning and Teaching tracks five metrics: student evaluations at the end of the unit; student grades; student engagement as measured by usage of the technology; evaluation of the course by faculty and head course designers; and “student resilience” which is measured via longer-term tracking of how students perform in future, related courses compared with students in past years who did not use the technology being assessed. During the second half of the semester, the Office of Learning and Teaching writes a report on the alpha launch and, if results available by that point appear to justify wider use of the technology, prepares a business case for enterprise-wide adoption of the technology the next semester. The Academic Director pointed to an online polling initiative that was recently implemented university-wide as a successful initiative that underwent an alpha launch and was subsequently scaled.
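The “student resilience” metric described above amounts to a cohort comparison: follow-on course performance for pilot students versus prior-year students who did not use the technology. As a purely illustrative sketch of how such alpha-launch metrics might be summarized (the metric names, record structure, and all values below are hypothetical, not taken from the university described):

```python
from statistics import mean

# Hypothetical alpha-launch records: one dict per pilot student.
pilot = [
    {"eval": 4.2, "grade": 78, "logins": 31},
    {"eval": 3.9, "grade": 85, "logins": 24},
    {"eval": 4.5, "grade": 90, "logins": 40},
]

# Follow-on course grades: prior-year cohort (no technology) vs. pilot cohort.
prior_follow_on = [70, 72, 76, 68]
pilot_follow_on = [74, 80, 77]

report = {
    "mean_student_eval": round(mean(s["eval"] for s in pilot), 2),
    "mean_grade": round(mean(s["grade"] for s in pilot), 1),
    "mean_engagement": round(mean(s["logins"] for s in pilot), 1),
    # "Resilience": how pilot students fare in the follow-on course
    # relative to earlier cohorts who did not use the technology.
    "resilience_gap": round(mean(pilot_follow_on) - mean(prior_follow_on), 1),
}
print(report)
```

A summary like this supports only descriptive monitoring; without randomly assigned comparison groups it cannot establish that the technology caused any grade differences.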

The Office of Learning and Teaching holds a showcase on campus every 6 weeks to keep the university community apprised of new initiatives and elicits regular feedback on the university’s educational technology tools and needs from students, faculty, and staff. The list of technologies with which to experiment is reviewed every 6 months and re-prioritized. This pattern has established a systematic and time-bound process for identifying and meeting technology needs through regular solicitations of stakeholder input, testing solutions in vivo but on a small scale, and using the results to decide whether to scale up the technology adoption.

Discussion and conclusions

Our findings suggest that educational technology decision-makers in higher education, even those taking a quasi-rational approach to their decisions, rarely use externally-produced, scientifically-rigorous research to inform those decisions. This is consistent with previous findings about K-12 decision-making that the types of research school district leaders find useful are not primarily peer-reviewed impact studies (Penuel et al. 2018). Instead, educational technology decision-makers are more likely to produce and use their own locally-relevant evidence such as findings from interviews, surveys or focus groups. Some decision-makers review student outcomes after implementing an educational technology product or strategy, but usually without setting up comparison groups. This finding is also consistent with studies indicating that K-12 school and district level decision-makers value local knowledge, and often consult their own student performance data to make decisions about school improvement (Finnigan et al. 2013; Honig and Coburn 2008).

Our two examples of internal investigations demonstrate concrete cases in which investigations directly inform decisions about how the IHE acquires or uses educational technology for teaching and learning. Our findings align with previous research indicating that when decision-makers are involved in the design and conduct of research, the results are more likely to be used (Anderson and Shattuck 2012; Lewin 1946; Penuel and Farrell 2017). However, neither internal investigation involves experimental or quasi-experimental approaches. The first example illustrates the use of platform-based analytics and other student data to continuously improve the implementation of technology-based courses. The university’s approach presumes the educational technology strategy can improve student outcomes, if only implementation can be perfected. It does not serve as a rigorous vetting mechanism to identify effective educational technology before it is scaled up. This assumption that educational technology will improve student outcomes if it is implemented correctly, and that repeated tinkering will eventually produce success, was not uncommon among our interviewees. These beliefs may preclude serious examination of the underlying mechanism by which technology use is expected to affect learning, and we recommend that decision-makers interrogate the theory of change before expending resources on high-quality implementation of educational technology.

The Australian university example illustrates a mechanism for prioritizing faculty and student needs, concentrating institutional resources on a limited number of initiatives at any one time, and vetting educational technology before investing in any product or strategy at scale. It also demonstrates efforts to draw on externally-produced and potentially more rigorous studies to inform the university’s decision-making. Appointing a faculty member to spearhead an IHE’s approach to technology may lead to more evidence-based decision-making, and IHEs should motivate and reward faculty members who contribute in this way. But while the internal efforts to investigate how technology affects teaching and learning at the university appear well-planned and systematic, the study design used in alpha launches is not sufficiently rigorous to allow for causal inferences. Furthermore, it appears that recommendations about scaling up are made before some important results are available: it is unclear how end-of-course grades and course ratings or the longer-term student resilience metrics could factor into the decision, and whether making a recommendation after only 6 weeks is justifiable. Revisiting efficacy of the technology-based strategy when longer-term student outcomes become available would be prudent.

If decision-makers are indeed more likely to use internally-produced and locally-relevant research to inform their decision-making, then concerted efforts to systematically document and share local evidence may improve the availability of relevant evidence that IHEs can use to inform their decisions about acquiring and using educational technology. Specifically, this may include strategies such as creating a cross-institutional repository of studies that pools findings from the internal investigations already being conducted within universities. If universities can share results more easily, this may increase the uptake and use of existing evidence, especially if it is possible for IHEs to find studies conducted by peer institutions serving similar populations and operating under similar conditions.

The attention to use of research evidence in K-12 settings has led to the development of tools and recommendations that could also serve IHEs well. For example, tools such as RCT-YES and RCE Coach are designed to facilitate the timely execution of rigorous studies by practitioners within their own contexts. Digital Promise has developed the Edtech Pilot Framework (https://edtech.digitalpromise.org/) which includes a checklist for evaluating existing studies of educational technology (http://digitalpromise.org/wp-content/uploads/2016/04/DP_EvaluatingTheResults.pdf). Bull et al. (2017) recommend enhancements to teacher and leader preparation programs to develop K-12 educators’ “assessment literacy” and “research literacy” with respect to educational technology. While faculty members in most IHEs are not required to receive explicit instruction on how to teach, parallel in-service professional development opportunities could be helpful, for example, to support faculty members in conducting systematic action research in their own courses or in finding relevant educational technology research conducted elsewhere. Action research would provide timely and contextually-relevant information to faculty members.

Given the challenges of implementing rigorous research that is relevant in a timely fashion, a model such as the Australian university’s alpha launches may be a workable solution that increases the quality of internal evidence and its frequency of use. By running these alpha launches over short time-frames with clear mechanisms for systematic collection of data on a few key measures, fast feedback, and assessment, universities may be more likely to gather evidence that is usable and locally relevant, and that can be integrated into decision-making. We caution, however, that such internal investigations may lack rigor and result in less-than-ideal evidence for informing consequential decisions about investing in or scaling up often costly educational technology products and strategies. A common pitfall in evaluating educational technology products in higher education is that they are initially implemented by volunteer faculty who are innovative and enthusiastic about technology; the faculty members’ enthusiasm and extra efforts become confounded with the technology intervention and cannot be replicated at scale. Application of more rigorous standards, for example implementing rapid cycle experiments, could help disentangle such extraneous factors that may contribute to the success or failure of an educational technology intervention.
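At its simplest, a rapid cycle experiment randomly assigns course sections to use or not use a tool and compares an outcome across the two groups. A minimal sketch of that core comparison using Welch’s t-statistic, on fabricated scores that are purely illustrative and not data from any institution in this study:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples
    with possibly unequal variances."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical end-of-course scores from randomly assigned sections.
with_tool = [82, 75, 90, 68, 88, 79]
without_tool = [70, 65, 84, 72, 60, 77]

t = welch_t(with_tool, without_tool)
print(round(t, 2))
```

With samples this small the statistic is only suggestive, but the key design feature is the random assignment itself: it is what separates volunteer enthusiasm from the technology’s own effect.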

Both the detailed examples of educational technology decision-making described earlier illustrate the importance of intentional structures and vehicles for making evidence-based decisions about educational technology. These cannot be developed without the leadership and support of senior administrators at the IHEs. Hollands and Escueta (2017) describe the decision-making infrastructure and processes established for the selection and implementation of enterprise-level educational technology at four different types of IHE. In all cases, these involve a substantial investment of personnel time, including that of senior leaders, and solicitation of stakeholder input. One university is highlighted for the culture of innovation instilled by its President who fosters risk-taking by focusing on fixing problems when education technology fails rather than punishing the experimenters.

Going forward, we recommend more frequent and sustained communication and collaboration between researchers and educational technology decision-makers at IHEs, either to render externally-produced, scientific research more useful in practice, or to build the capacity of decision-makers to produce more rigorous internal research. Concerted collaborations between researchers and practitioners in the design and implementation of both local and large-scale generalizable research about educational technology products and strategies should increase the usability and the rigor of research. In turn, this will improve the quality of evidence that influences educational technology decisions aimed at optimizing student learning. Funding such work is likely to be challenging, so it will be important to create internal incentives for faculty and administrators to engage in these studies. In some instances, educational technology vendors who can benefit from the production of high-quality research on their products may be willing to collaborate substantively with researchers. Dziuban et al. (2016) model this approach in their work with an adaptive learning platform.

Despite these critiques, we acknowledge that there is no rigorous evidence to show that educational technology decisions based on experimental or quasi-experimental research guarantee better teaching and learning outcomes than those based on less rigorous, internally-conducted research and pilot studies. Several of our interviewees argued that the success of a decision to adopt a new educational technology product, tool, or strategy depends most on building buy-in among faculty members and students, which in turn assures fidelity of implementation. These decision-makers would perhaps be more likely to rely on externally-produced, rigorous research if the studies clearly documented how buy-in was developed for use of the educational technology intervention and how fidelity of implementation was monitored and enforced. A responsible approach to educational technology decision-making might involve combining a number of the strategies that surfaced in this study: regular needs assessments, staying abreast of technology developments, match-making between solutions and needs, gathering evidence commensurate with the tier of investment, systematic stakeholder engagement, supportive leadership, educational technology professional development for faculty, and continuous improvement. But we also urge initial attention to the underlying theory of change for any educational technology tool or strategy to ensure there is a plausible mechanism by which it can lead to the intended outcomes. Once the tool or strategy is implemented, periodic reassessment of efficacy is warranted to determine whether the theory of change holds up in practice.

Footnotes

  1. Examples of Horizon reports can be found at https://www.nmc.org/publication-type/horizon-report/.

  2. Examples of Gartner research reports can be found at https://www.gartner.com/technology/research/.

Acknowledgements

The work reported in this manuscript constitutes part of the work conducted by Working Group B for the EdTech Efficacy Research Academic Symposium (http://symposium.curry.virginia.edu/), May 3-4, 2017, and was partially supported by a grant from Jefferson Education Accelerator (JEA) to Teachers College, Columbia University. We acknowledge several individuals who were a part of Working Group B, and helped with aspects of the study design and execution: Alison Griffin, Amy Bevilacqua, Bill Hansen, Bror Saxberg, David Kim, Deborah Quazzo, Emily Kinard, Fred Singer, Jerry Rekart, Kristin Palmer, Mark Triest, Matt Chingos, MJ Bishop, Phil Hill, Stephanie Moore, and Whitney Kilgore. We also acknowledge Kirsten Blagg at Urban Institute and Yilin Pan at the World Bank who kindly provided technical assistance in generating a random sample of colleges and universities from IPEDS. Yilin Pan also contributed to the literature review on evidence use in decision-making.

References

  1. Acquaro, P. E. (2017). Investigation of the selection, implementation, and support of online learning tools in higher education (Doctoral dissertation). Retrieved from ProQuest Dissertations & Theses Global (Accession No. 10259577).
  2. Allen, E., & Seaman, J. (2017). Distance education enrollment report 2017. Retrieved from: https://onlinelearningsurvey.com/reports/digtiallearningcompassenrollment2017.pdf.
  3. Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in education research? Educational Researcher, 41(1), 16–25.
  4. Asen, R., Gurke, D., Conners, P., Solomon, R., & Gumm, E. (2013). Research evidence and school board deliberations: Lessons from three Wisconsin school districts. Educational Policy, 27(1), 33–63.
  5. Baker, B., & Welner, K. G. (2012). Evidence and rigor: Scrutinizing the rhetorical embrace of evidence-based decision making. Educational Researcher, 41(3), 98–101.
  6. Birkland, T. A. (2011). An introduction to the policy process: Theories, concepts, and models of public policy making (3rd ed.). Armonk: ME Sharpe.
  7. Bull, G., Spector, J. M., Persichitte, K., & Meier, E. (2017). Preliminary recommendations regarding preparation of teachers and school leaders to use learning technologies. Contemporary Issues in Technology and Teacher Education, 17(1), 1–9.
  8. Chaffee, E. E. (1983). Rational decisionmaking in higher education (p. 92). Boulder: National Center for Higher Education Management Systems.
  9. Coburn, C. E., & Penuel, W. R. (2016). Research-practice partnerships in education: Outcomes, dynamics, and open questions. Educational Researcher, 45(1), 48–54.
  10. Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A garbage can model of organizational choice. Administrative Science Quarterly, 17(1), 1–25.
  11. Dede, C. (2005). Why design-based research is both important and difficult. Educational Technology, 45(1), 5–8.
  12. Deming, E. J., & Figlio, D. (2016). Accountability in US education: Applying lessons from K-12 experience to higher education. The Journal of Economic Perspectives, 30(6), 33–55.
  13. Dziuban, C., Moskal, P., Cassisi, J., & Fawcett, A. (2016). Adaptive learning in psychology: Wayfinding in the digital age. Online Learning, 20(3), 74–96.
  14. EdTech Efficacy Research Academic Symposium. (2017). Symposium hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator in Washington, DC. http://symposium.curry.virginia.edu/
  15. Edwards, W., & Newman, J. R. (1982). Multiattribute evaluation. Beverly Hills: Sage Publications.
  16. Escueta, M., Quan, V., Nickow, A. J., & Oreopoulos, P. (2017). Education technology: An evidence-based review (NBER Working Paper No. 23744). Retrieved from: http://www.nber.org/papers/w23744.
  17. Every Student Succeeds Act of 2015, Pub. L. No. 114-95 § 114 Stat. 1177 (2015–2016).
  18. Farley-Ripple, E., May, H., Karpyn, A., Tilley, K., & McDonough, K. (2018). Rethinking connections between research and practice in education: A conceptual framework. Educational Researcher, 47(4), 235–245.
  19. Farrell, C. C., & Coburn, C. E. (2017). Absorptive capacity: A conceptual framework for understanding district central office learning. Journal of Educational Change, 18(2), 135–159.
  20. Finnigan, K. S., Daly, A. J., & Che, J. (2013). Systemwide reform in districts under pressure: The role of social networks in defining, acquiring, using, and diffusing research evidence. Journal of Educational Administration, 51(4), 476–497.
  21. Heinrich, C. J., & Good, A. (2018). Research-informed practice improvements: Exploring linkages between school district use of research evidence and educational outcomes over time. School Effectiveness and School Improvement, 29(3), 418–445.
  22. Ho, W., Dey, P. K., & Higson, H. E. (2006). Multiple criteria decision-making techniques in higher education. International Journal of Educational Management, 20(5), 319–337.
  23. Hollands, F. M., & Escueta, M. (2017). EdTech decision-making in higher education. Center for Benefit-Cost Studies of Education, Teachers College, Columbia University. Retrieved from https://docs.wixstatic.com/ugd/cc7beb_39a11e93051142c8be0aa7a69d7eadee.pdf.
  24. Honig, M. I., & Coburn, C. E. (2008). Evidence-based decision making in school district central offices: Toward a policy and research agenda. Educational Policy, 22(4), 578–608.
  25. Honig, M. I., Venkateswaran, N., & McNeil, P. (2017). Research use as learning: The case of fundamental change in school district central offices. American Educational Research Journal, 54(5), 938–971.
  26. King, J. A., & Pechman, E. M. (1984). Pinning a wave to the shore: Conceptualizing evaluation use in school systems. Educational Evaluation and Policy Analysis, 6(3), 241–251.
  27. Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2(4), 34–46.
  28. Maynard, R. A. (2006). Presidential address: Evidence-based decision making—What will it take for the decision makers to care? Journal of Policy Analysis and Management, 25(2), 249–265.
  29. Merriam, S. B., & Tisdell, E. J. (2015). Qualitative research: A guide to design and implementation. San Francisco: Wiley.
  30. Morrison, N. (2017). Google leapfrogs rivals to be classroom king. Forbes Magazine. Retrieved from https://www.forbes.com/sites/nickmorrison/2017/05/09/google-leapfrogs-rivals-to-be-classroom-king/#5d449d0827a6.
  31. National Center for Education Statistics. (2016). The Integrated Postsecondary Education Data System [Data set]. Retrieved from https://nces.ed.gov/ipeds/use-the-data.
  32. Neal, Z., Neal, J. W., Mills, K., & Lawlor, J. (2018). Making or buying evidence: Using transaction cost economics to understand decision making in public school districts. Evidence & Policy: A Journal of Research, Debate and Practice, 14(4), 707–724.
  33. No Child Left Behind Act of 2001, P.L. 107–110, 20 U.S.C. § 6319 (2002).
  34. Nutley, S. M., Walter, I., & Davies, H. T. O. (2007). Using evidence: How research can inform public services. Bristol: Policy Press at the University of Bristol.
  35. Penuel, W. R., Allen, A. R., Coburn, C. E., & Farrell, C. (2015). Conceptualizing research–practice partnerships as joint work at boundaries. Journal of Education for Students Placed at Risk, 20(1–2), 182–197.
  36. Penuel, W. R., Briggs, D. C., Davidson, K. L., Herlihy, C., Sherer, D., Hill, H. C., et al. (2016). Findings from a national survey of research use among school and district leaders (Technical Report No. 1). Boulder: National Center for Research in Policy and Practice.
  37. Penuel, W. R., & Farrell, C. (2017). Research-practice partnerships and ESSA: A learning agenda for the coming decade. In E. Quintero-Corrall (Ed.), The social side of reform. Cambridge: Harvard Education Press.
  38. Penuel, W. R., Farrell, C. C., Allen, A. R., Toyama, Y., & Coburn, C. E. (2018). What research district leaders find useful. Educational Policy, 32(4), 540–568.
  39. Sclater, N., Peasgood, A., & Mullan, J. (2016). Learning analytics in higher education. London: JISC.
  40. Shacklock, X. (2016). From bricks to clicks: The potential of data and analytics in higher education. London: Higher Education Commission.
  41. Simon, H. (1957). Models of man: Social and rational. New York: Wiley.
  42. The Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.
  43. Tseng, V. (2012). The uses of research in policy and practice (Social Policy Report No. V26#2). Ann Arbor: Society for Research in Child Development.
  44. Tseng, V., & Nutley, S. (2014). Building the infrastructure to improve the use and usefulness of research in education. In K. S. Finnigan & A. J. Daly (Eds.), Using research evidence in education: From the schoolhouse door to Capitol Hill (pp. 163–175). Cham: Springer.
  45. U.S. Department of Education. (2016). Non-regulatory guidance: Using evidence to strengthen education investments. Retrieved from https://www2.ed.gov/policy/elsec/leg/essa/guidanceuseseinvestment.pdf.
  46. U.S. Department of Education, Institute of Education Sciences & National Science Foundation. (2013). Common guidelines for education research and development. Retrieved from Institute of Education Sciences, National Center for Education Statistics website. https://ies.ed.gov/pdf/CommonGuidelines.pdf.
  47. Weiss, C. H. (1977). Research for policy’s sake: The enlightenment function of social research. Political Analysis, 3, 531–545.
  48. Zopounidis, C., & Doumpos, M. (Eds.). (2017). Multiple criteria decision making: Applications in management and engineering. New York: Springer.

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Center for Benefit-Cost Studies of Education (CBCSE), Teachers College, Columbia University, New York, USA
