Journal of Computing in Higher Education

Volume 24, Issue 3, pp 164–181

Re-envisioning instructional technology research in higher education environments: a content analysis of a grant program

  • Trena M. Paulus
  • Gina Phipps
  • John Harrison
  • Mary Alice Varga

DOI: 10.1007/s12528-012-9062-2


Abstract

Within the field of instructional technology, scholars have long worked to define the scope and purpose of research and its role in informing practice. Increasingly, researchers outside of the instructional technology field are conducting studies to examine their use of technology in educational contexts. Few studies have been done on how researchers in other disciplines are designing such studies. We conducted a content analysis of 60 proposals submitted from 2006 to 2010 to our internal grant competition for faculty research on instructional technology to better understand the kinds of studies being proposed. Categories explored within each proposal included academic discipline, collaboration, knowledge of previous literature, context, goals of study, and research design. A majority of proposals came from outside of the education field and were submitted by individuals rather than collaborative teams. Just under half of the proposals cited previous literature to justify their study, and just over half sought to examine classroom contexts. Roughly a third proposed to study distance education contexts. Most proposals were to examine the implementation of a new instructional strategy (rather than to conduct a media comparison study) and just over half utilized a quantitative research design collecting performance or satisfaction data. We include recommendations for those who may be interested in how better to support researchers in designing effective studies to investigate instructional technology use, highlighting the use of design-based research as a viable methodology.

Keywords

Instructional technology research · Faculty development · Content analysis · Media comparison · Research design · Design-based research

Introduction

Within the field of instructional technology, scholars have long struggled to define the scope and purpose of research and its role in informing practice. As the Internet and other technologies have become more pervasive and accessible in higher education, faculty in many other disciplines have begun using technology in their teaching and even conducting research on its use (Reeves et al. 2005). Currently, however, there is a dearth of research on how scholars outside the instructional technology field are conceptualizing their studies. It is unclear, for example, whether these researchers are familiar with the literature on instructional technology use in education or with the major debates within the field.

While institutions are beginning to provide incentives to faculty to use instructional technology in their classes (Wolff 2008), few provide incentives to conduct research on the experience. Our university, in contrast, has been providing research incentives to faculty proposing research on instructional technology since 2006 and therefore has amassed data that can shed light on how faculty outside instructional technology design such studies. We undertook this content analysis of the proposals for these studies in order (a) to gain insights into how non-instructional technology faculty at our own university are conceptualizing research into instructional technology use and (b) to suggest how faculty and administrators across higher education might best design and support such studies.

Review of the literature

In the inaugural issue of Educational Technology Research & Development (ETR&D), Higgins et al. (1989) reported the results of a content analysis of previous issues of the two journals being merged (Educational Communication and Technology Journal and Journal of Instructional Development) and a survey of the Association for Educational Communications and Technology membership. Findings reflected a divergence in the field, one that still persists today, between research focused on descriptions of instructional development projects and theory-driven experimental design research. This distinction was captured by the editors’ decision to divide each issue of ETR&D into “research” and “development” sections, a practice that continues today. Through surveys, content analyses, and literature reviews, scholars have continued to seek direction for research in the field of instructional technology. These studies have appeared regularly in top journals and handbooks throughout the past decades and are summarized here.

Topic, design, and focus of previous research

Klein (1997) noted that most ETR&D studies from 1989 to 1997 focused on the development of new technologies rather than the development of instructional technologies, and he concluded that more data-driven studies were needed on a wide variety of topics. Since then, content analyses across instructional technology journals have noted that data-driven research studies indeed are being conducted on a wider variety of topics, typically in higher education contexts (Bekele and Menchaca 2008; Hew et al. 2007; Shih et al. 2008). Research topics have included Internet technologies (Bekele and Menchaca 2008; Hew et al. 2007), psychology of learning and instruction (Hew et al. 2007; Shih et al. 2008), interaction in learning environments (Hrastinski and Keller 2007b; Shih et al. 2008) and issues related to distance education (Rourke and Szabo 2002; Hrastinski and Keller 2007b).

However, recent reviews of ETR&D and other instructional technology journals have noted the early prevalence, and subsequent decline, of experimental designs and intervention studies. Ross and Morrison (2004) categorized articles in ETR&D (and its earlier iterations) from 1953 to 2001 and found that while, overall, 81 % of the studies used experimental designs and 19 % were descriptive, in the most recent decade examined (1993–2001) the percentage of descriptive studies increased to 45 % and the percentage of true experiments decreased to 53 %. Ross, Morrison, and Lowther (2010) conducted an informal “gap analysis” by reviewing 43 articles from the 2006–2008 issues of ETR&D, finding that 58 % of the articles were descriptive in nature (i.e., case studies, design-based studies, developmental research, formative evaluation, observation, surveys, and qualitative studies). Shih et al. (2008) reviewed 444 articles related to e-learning environments in five journals (Computers & Education, British Journal of Educational Technology, Innovations in Education and Teaching International, ETR&D, and Journal of Computer-Assisted Learning) from 2001 to 2005 and noted that, of the sixteen most-cited articles, most used descriptive techniques rather than experimental or development research designs.

At the same time that the decline in experimental and/or intervention studies was being reported, other scholars expressed concern over the lack of conceptual or theoretical work being published in instructional technology journals. Hrastinski and Keller (2007a) analyzed the content of four journals (Computers & Education, Educational Media International, Journal of Educational Computing Research, and the Journal of Educational Media) from 2000 to 2004 and noted that while over 68 % of published articles were empirical studies, there was a lack of conceptual or theoretical articles that could contribute to cumulativity in the field. This finding was attributed to the traditionally pragmatic focus of the field. Rourke and Szabo (2002) had noted the same concern in their earlier review of Journal of Distance Education articles from 1986 to 2000, and it was echoed again by Hrastinski and Keller (2007b) in their content analysis of articles related to computer-mediated communication in the same four journals from 2000 to 2004.

Media comparison studies and “what works”

Surprisingly, few of the content analyses and literature reviews (Bekele and Menchaca 2008; Hew et al. 2007) have differentiated between media comparison studies and other kinds of studies, despite the attention media comparison studies have received in the field (Ross et al. 2008; Ross and Morrison 2004). Hew et al. (2007) analyzed the content of articles published from 2000 to 2004 in three journals (ETR&D, Instructional Science, and Journal of Educational Computing Research) and found that nearly half of the topics (41 %) could be categorized as media studies. Media comparison studies have been a source of controversy in the instructional technology field for decades (Clark 1994a, b; Kozma 1994a, b). Ross et al. (2010) discussed the “resurgence of interest” in “technology’s viability as a causal treatment” (p. 22) in recent years as the “what works” movement in educational research has gained momentum, warning that “the fervor of identifying ‘what works’ carries the associated long-standing risk (Clark 1983) of confusing delivery modes with instructional strategies” (p. 24). The proliferation of meta-analyses examining the effect of technology on achievement, such as those conducted by Schmid et al. (2009) and Tamim et al. (2011), seems to exemplify this fervor.

The focus on “what works” may also help explain some scholars’ concerns that the impact of technology or other “interventions” on student learning outcomes is no longer being addressed by research studies (e.g. Bekele and Menchaca 2008). Rourke and Kanuka (2009) noted that of the 252 studies related to the community of inquiry model (Garrison et al. 2000), only 48 (19 %) were empirical studies and only five of these included a measure of student learning; other studies relied heavily on self-report data of perceived learning. Hsieh et al. (2005) noted a decline in the number of intervention studies published in four educational psychology journals (from 40 % to 26 %) and the American Educational Research Journal (from 33 % to 4 %) over a 21-year period. Ross et al. (2008) confirmed this trend in their analysis of the number of intervention studies in ETR&D, noting a decline from 75 % in 1983 to 44 % in the 1995–2004 period.

Media comparison studies are often viewed as conceptually simple, easy to design, and seemingly relevant each time a new technology appears (Surry and Ensminger 2001). Based on our conversations with faculty over the years we suspected that faculty researchers outside of the instructional technology field would view a media comparison study as the simplest and most direct way to study new technology use and its impact on outcomes. Warnick and Burbules (2007) deconstructed the assumptions underlying media comparison studies and pointed out aspects of conceptual confusion in such designs. They argued that the underlying metaphor of these studies, that of information transmission, is outdated, and that technology-rich environments should be viewed instead as spaces, with studies designed to understand the particular goals and contexts of technology use, to explore how new media make new instructional goals possible, and to recognize the unintended consequences and outcomes of introducing new technologies.

Design-based research

Reeves (1995, 2000) and Reeves et al. (2005) have been vocal critics of media comparison studies and the overall quality of instructional technology research, arguing for a shift to “socially responsible research” and a focus on theory building in the form of design studies (also known as design-based research or development studies). Reeves (2000) claimed that “the overall goal of development research is to solve real problems while at the same time constructing design principles that can inform future decisions” (p. 12). Design-based research (DBR) is defined by Barab and Squire (2004) as “a series of approaches with the intent of producing new theories, artifacts, and practices that account for and potentially impact learning and teaching in naturalistic settings” (p. 2). DBR focuses on complex problems, requires long term engagement in the field, requires collaboration among researchers and practitioners, and develops and tests design principles and theory as solutions are created and evaluated.

Winn (2002) also supported a shift to more DBR studies, arguing that the primary focus of educational technology research should be “the study of learning in complete, complex, and interactive learning environments” (p. 331) and suggested that “research methodologies should adjust to the demands of studying increasingly more complex interactions between students and their environments” (p. 347). Oh and Reeves (2010) compared DBR with more traditional instructional systems design approaches, arguing that DBR “combines both innovative design and socially responsible inquiry” (p. 272) and has the potential to transform practice in today’s complex learning environments.

Ross et al. (2010) explored research trends around the impact of technology on “school learning.” They identified common domains for research on technology use (technology as tutor, technology as teaching aid, and technology as learning tool) and described the history of research on instructional technology as shifting from viewing technology as a treatment or a delivery mode to the more recent focus on technology use in open-ended learning environments, situated learning contexts and DBR studies. They argued that in order to help solve “real-world educational problems,” studies should strike a balance between rigor and relevance and focus on “meaningful application topics” (p. 24) situated in classroom practices. Ross et al. (2010) argued that the field should continue “basic research on cognition and learning using technology” as well as “formative evaluation and design-based research” (p. 31) and that researchers should “reduce efforts to prove the ‘effectiveness’ of technology, while focusing on conducting rigorous and relevant mixed methods studies to explicate which technology applications work to facilitate learning, in which contexts, for whom, and why” (p. 31).

Purpose of the study

Some variation in the designs of published instructional technology studies (descriptive, experimental, design-based) can likely be attributed to differences in journal preferences for particular kinds of studies and topics of interest. However, published reviews of the literature across journals have noted the field’s interest in new media (e.g. the Internet and online learning), the prevalence of descriptive studies, the lack of theory-building research, and the decreasing focus on outcomes. While scholars have suggested DBR and/or mixed methods research as emergent paradigms for instructional technology research, none of the reviews focused specifically on the number of DBR studies being done. This is likely because, as Kirby et al. (2005) pointed out, the field of instructional technology and that of the learning sciences (within which DBR is a prevalent paradigm) have remained relatively distinct in terms of publication venues.

We have seen that there is still debate within the field around how instructional technology research should be conceptualized, designed and implemented. While this discussion has been occurring for years within the discipline, faculty from other fields are now also researching instructional technology use. Conceptualizing research on instructional technology as DBR, as opposed to simple media comparison studies, may lead to more fruitful outcomes for scholars in other disciplines seeking to understand technology use in their contexts. However, we do not yet know, for example, which disciplines are actively engaged in instructional technology research, which knowledge bases they are drawing upon, what contexts they are exploring, which research designs they are using, the goals of their studies, or the outcomes being examined. This study seeks to provide a baseline of answers to these questions.

Context of the study

Since 2006 our institution has held an internal grant competition to fund research studies by faculty on the use of instructional technology in education. Our institution is a research-intensive public university in the southeastern United States with an enrollment of close to 28,000 students. The program has been coordinated through the Office of Information Technology’s Instructional Support Group, in part by one of the authors of this paper, Phipps, who held the position of Manager within the group for 4 years. Paulus received a grant in 2007 and served as a Faculty Fellow and proposal reviewer during the 2008–2009 academic year. She served as the lead author on this research study with the assistance of two doctoral students (Harrison and Varga).

The call for proposals is available as supplemental material on this journal’s website. The call includes a variety of possible research questions and a description of past projects, as well as the required components of the proposal: a statement of the problem, project significance, methodology, timeline and budget. The award amount varies from $3,000 to $5,000, with three to seven proposals funded per year, depending on the university’s budget. A letter of support from the department head and proof of institutional review board approval are also required as part of the application process. While the university’s international outreach initiative is mentioned in the call for proposals, special consideration was not given to proposals tailored to this initiative. Sixty proposals were submitted from 2006 to 2010.

Methods

We undertook a content analysis of these proposals to better understand the kinds of studies faculty are proposing. All proposals were stripped of identifying information (other than the number of authors and their departmental affiliation) prior to analysis. Our research questions were:
  1. From which disciplines are faculty proposing instructional technology research studies?
  2. To what extent are faculty collaborating on their proposals?
  3. To what extent are faculty reviewing the literature prior to designing their studies?
  4. What are the contexts of the proposed studies?
  5. What are the goals of the proposed studies?
  6. What research designs are faculty using?
  7. What outcomes are being measured?

Bauer (2000) identified content analysis as a “hybrid technique” that “bridges statistical formalism and the qualitative analysis of materials” (p. 132). It is a systematic approach to making sense of written texts, such as these research proposals. A coding scheme is the primary tool of a content analysis study. We developed our coding scheme inductively, drawing upon our review of the literature and categories that emerged from the data. The process was iterative: we began with initial coding categories from the literature review, then added new categories and eliminated irrelevant ones during analysis. We continually refined the coding categories as we read through the data, applying the categories and adjusting them accordingly. The rationale for and development of the coding categories are described next.

Researcher discipline and collaboration

In order to gain a comprehensive picture of who is conducting research on instructional technology at our university, we coded the data for the disciplinary background of the researcher and whether the study was conducted individually or collaboratively. Discipline and collaboration categories were applied to the data based on the number of authors on the proposal and their departmental affiliations. Departmental affiliations were collapsed into broader discipline categories by first referring to the college structure at our institution and then modifying the categories as necessary. The final list of ten categories (business & computer science, communication, education, health sciences, social sciences, social work, natural sciences, math & statistics, agricultural sciences, arts/languages/music & humanities) encompassed all of the proposal data. Proposals were also categorized as to whether collaboration occurred between faculty within the same department or across departments. Given that this stage of coding was straightforward, one researcher applied the discipline and collaboration codes to the data.
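As a rough sketch of how this coding step can be operationalized (the department names and mapping below are hypothetical, not our actual data), each proposal's author departments can be collapsed into discipline categories and a collaboration code:

```python
# Hypothetical sketch of the discipline and collaboration coding step.
# The department-to-discipline mapping below is illustrative only.

DISCIPLINE_MAP = {
    "Educational Psychology": "education",
    "Nursing": "health sciences",
    "Plant Sciences": "agricultural sciences",
    "Computer Science": "business & computer science",
    # ... remaining departments collapsed into the ten discipline categories
}

def code_proposal(author_departments):
    """Return the discipline codes and collaboration code for one proposal."""
    disciplines = {DISCIPLINE_MAP[d] for d in author_departments}
    if len(author_departments) == 1:
        collaboration = "individual"
    elif len(set(author_departments)) == 1:
        collaboration = "collaboration within department"
    else:
        collaboration = "collaboration across departments"
    return disciplines, collaboration

# Example: a two-author proposal from the same department
print(code_proposal(["Nursing", "Nursing"]))
```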

Knowledge of previous literature

Previous reviews of instructional technology research have noted the lack of theoretical work in the field and the apparent disconnect among theoretical, research and practice-based articles. To better understand whether researchers were grounding their proposals in theoretical frameworks or previous research findings, we coded the data for whether or not they included citations to published work. Three categories emerged from the data: (a) those who did not refer to any literature or previous research; (b) those who alluded to previous research without explicitly citing the research; and (c) those who included citations and references to previous work.

Contexts of the proposed studies

Similar to studies by Hew et al. (2007) and Shih et al. (2008), we examined the contexts in which authors proposed to conduct their studies. Reeves (2000, 2005) argued that more work needs to focus on complex classroom environments. Additionally, while reviews of the literature prior to the advent of the Internet (Higgins et al. 1989; Klein 1997) noted that distance education was not a major topic of interest, distance/online learning and Internet communications have since become much more prevalent. Given our institution’s recent emphasis on increasing distance education programs, we were particularly interested in the number of proposals to study distance education contexts. Thus, two broad distinctions emerged from our initial readings of the data: (a) studies taking place in classroom contexts versus those that were not; and (b) studies concerned with distance education contexts versus those that were not.

Goals of the proposed studies

Hew et al. (2007), Shih et al. (2008), Rourke and Szabo (2002), and Hrastinski and Keller (2007b) informed our initial readings of the data in light of the goals of the study. Media comparison studies have been common in the research literature (Ross and Morrison 2004; Ross et al. 2008, Surry and Ensminger 2001), and we wondered whether they would be particularly prevalent in proposals submitted by faculty in other disciplines. Our four categories, similar to categories used by Hew et al. (2007), were: (a) media comparison; (b) implementation of a new instructional strategy; (c) tool/instrument development; and (d) learner/instructor characteristics.

Research design

We were interested in the proposed study designs in light of calls by Winn (2002), Reeves et al. (2005), and Ross et al. (2010) for new approaches to research in instructional technology. We started by comparing the coding categories used by Klein (1997), Hew et al. (2007), Rourke and Szabo (2002), and Hrastinski and Keller (2007b) to our data. We then refined the coding categories to five: (a) quantitative designs; (b) qualitative designs; (c) mixed methods designs; (d) development/evaluation/action/design-based research designs; and (e) no specified research design.

Measured outcomes

The measured outcomes category was of interest to us in light of Warnick and Burbules’ (2007) call for new metaphors for understanding technology environments. Our initial reading of the data revealed that most studies proposed to measure some outcome as part of their study design. Bekele and Menchaca’s (2008) study distinguished between (a) performance/achievement outcomes and (b) satisfaction/perception outcomes. These categories were consistent with our data. In addition to these two categories, some proposals (c) included measures of both types of outcomes, while others (d) did not include any outcome measures.

Inter-rater reliability

Once the coding categories were finalized, they were applied to the data by two independent raters to establish reliability. A third rater was used to resolve disagreements between the two initial raters. Details of the final coding categories used in the analysis are available as supplemental materials on this journal’s website. Cohen’s kappa was calculated to determine inter-rater reliability across the 60 cases. A kappa value between .40 and .59 is considered moderate agreement; .60 to .79, substantial; .80 and higher, outstanding (Landis and Koch 1977). Table 1 outlines the Cohen’s kappa values.
Table 1
Coding categories, percent agreement and Cohen's kappa

Category                            Percent agreement (%)    κ
Knowledge of previous literature    90                       .847
Classroom context                   83                       .607
Distance education context          96.7                     .927
Goal of the study                   88.3                     .814
Research design                     83                       .755
Measured outcomes                   83                       .685

All kappa values were either outstanding or substantial. We hypothesize that the lower values were due to the complexity of the coding categories. Classroom contexts can be interpreted in various ways, making judgments difficult. Research designs varied and were often not clearly stated, requiring the raters to make inferences. Finally, as with classroom context, the outcomes being measured were at times unclear, making the coding process difficult.
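For readers less familiar with the statistic, the following is a minimal sketch of how Cohen's kappa can be computed for one coding category; the rater codes shown are invented for illustration, not our actual ratings.

```python
# Minimal sketch of Cohen's kappa for two raters on one coding category.
# Kappa = (observed agreement - chance agreement) / (1 - chance agreement).
# The rater codes below are invented for illustration only.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    chance = sum((counts1[c] / n) * (counts2[c] / n)
                 for c in set(rater1) | set(rater2))
    return (observed - chance) / (1 - chance)

# Codes for "knowledge of previous literature": c = cited, a = alluded, n = none
rater1 = ["c", "c", "n", "a", "c", "n", "c", "a", "n", "c"]
rater2 = ["c", "c", "n", "a", "a", "n", "c", "a", "n", "c"]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.846
```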

Chi-square tests of independence

Chi-square tests of independence were calculated to further answer the research questions. First, the frequency of membership in each academic discipline was compared to each of the other categories—collaboration, knowledge of previous literature, classroom or distance education context, goals of the study, research design, and outcomes measured—to explore whether specific academic disciplines approached their proposals differently. Second, the frequency of collaboration was compared to each of the categories to gain insight into the level of collaboration involved in the proposal process. Finally, a Chi-square test of independence was calculated comparing type of research design with goals of the study; since appropriate research designs are needed to reach intended goals, this test explored whether the two were significantly related.
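As an illustration of the procedure (using an invented contingency table rather than the study's data), a chi-square test of independence can be computed as follows:

```python
# Illustrative chi-square test of independence on an invented
# discipline-by-collaboration contingency table (not the study's data).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: disciplines; columns: individual, within-department, across-department
table = np.array([
    [10, 8, 3],   # education
    [ 5, 2, 1],   # business & computer science
    [ 2, 4, 4],   # agricultural sciences
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```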

Findings

Disciplines proposing instructional technology studies

Education was the disciplinary affiliation of 33 % of the researchers who submitted proposals. Of the 67 % of proposals that came from outside of education, six disciplines accounted for 58 % of the proposals, while three accounted for just 9 %. The fewest proposals came from math and statistics and the natural sciences, with only one proposal from each. See Table 2.
Table 2
Disciplinary affiliation of proposal authors

Discipline                               Total    Percentage
Education                                21       33
Business and computer science            8        12
Health sciences                          7        11
Arts, languages, music and humanities    6        9
Social sciences                          6        9
Social work                              6        9
Agricultural sciences                    5        8
Communication                            3        5
Natural sciences                         1        2
Math and statistics                      1        2
Number of authors                        64       100

Researcher collaboration

A majority of the proposals (65 %) were submitted by individual researchers. Of the 21 proposals that were collaborative (35 %), 15 of those (25 %) were submitted by researchers in the same department, and only six (10 %) were submitted by researchers from different departments.

Chi-square analysis showed a significant relationship between academic discipline and collaboration (χ2(28) = 57.12, p < .01). An analysis of adjusted residuals (AR) was performed for each significant relationship to identify the types of collaboration within each academic discipline. Adjusted residuals with an absolute value of 1.96 or greater are considered statistically significant (Aspelmeier and Pierce 2009). The significant relationship between academic discipline and collaboration showed that collaboration within departments was higher than expected among proposals received from agricultural sciences (AR 2.3). The analysis also showed that collaboration across multiple departments was higher than expected among proposals received from agricultural sciences (AR 1.9) and social work (AR 2.4). Adjusted residuals for communications, education, health sciences, math and statistics, and social sciences showed no significant relationship to collaboration.
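For reference, the adjusted (standardized) residual for cell (i, j) of a contingency table is conventionally defined as shown below; this is the standard formula rather than one stated in the proposals.

```latex
% Adjusted residual for cell (i, j) of an r x c contingency table,
% where O_{ij} and E_{ij} are the observed and expected counts,
% n_{i.} and n_{.j} are the row and column totals, and n is the grand total.
\[
  r_{ij} \;=\; \frac{O_{ij} - E_{ij}}
  {\sqrt{E_{ij}\left(1 - n_{i\cdot}/n\right)\left(1 - n_{\cdot j}/n\right)}}
\]
```

Cells with |r_ij| of 1.96 or more depart from independence at roughly the .05 level, which is the threshold applied above.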

Knowledge of previous literature

Twenty-six (43 %) of the proposals included citations to previous literature. Nineteen (32 %) did not acknowledge any previous research or theoretical framework at all. Fifteen proposals (25 %) referred to, but did not cite, previous literature. Chi-square analysis showed a significant relationship between academic discipline and knowledge of previous literature (χ2(28) = 46.36, p < .01). An analysis of adjusted residuals (AR) was performed to identify how each academic discipline handled previous literature. The analysis showed social work (AR 2.4) to be the only academic discipline that grounded its grant proposals in previous literature more often than statistically expected. Proposals received from communications (AR 3.1) referred to, but did not cite, previous literature more often than would be expected. Proposals received from agricultural sciences (AR 3.4) were less likely than expected to acknowledge or reference previous literature as justification for their study. Academic disciplines including business and computer science, education, health sciences, math and statistics, natural sciences, and social sciences showed no strong relationship with the use of previous literature in their grant proposals.

Contexts of the proposed studies

Thirty-three (55 %) of the proposals were for studies that would not occur in the classroom, such as conducting large scale surveys or developing new tools or instruments. Twenty-seven (45 %) of the proposals were to conduct studies in classroom contexts. These included, for example, having students engage in literature discussions via our course management system, introducing web-based modules in agriculture courses, making neurobiology lectures available online, using digital audio technologies for language learning, and creating webinars for study sessions in veterinary medicine courses. Chi-square analysis was performed and showed no significant relationship between academic discipline and classroom context. Additional Chi-square analysis also showed no relationship between collaboration and classroom context.

Only twenty of the proposed studies (33 %) aimed to investigate an area of distance education, such as online testing, attrition rates in online classes, and measures of procrastination and academic honesty among distance learners. Chi-square analysis showed no significant relationship between distance education context and academic disciplines. Additional Chi-square analysis also showed no significant relationship between distance education context and collaboration.

Goals of the proposed studies

Only 13 of the 60 proposals (22 %) were to conduct a media comparison study. Twenty-six of the proposed studies (43 %) were to investigate the implementation of a new instructional strategy, such as whether and how handheld technologies enable cooperative learning in the nursing field. Twelve (20 %) sought to identify learner or teacher characteristics related to technology integration (such as motivation, technology competencies, and beliefs) and nine (15 %) of the proposals were for the development of an instrument or tool related to technology use. Development proposals were for projects such as statistical analysis modules for plant sciences, a computer-based sketching tool for computer science, and developing an online program in educational administration. Chi-square analysis showed no significant relationship between goals of proposed studies and academic discipline. Additional Chi-square analysis also showed no significant relationship between goals of proposed studies and collaboration.

Proposed research designs

We found that 32 (53 %) of the proposals used a quantitative research design, 18 (30 %) a mixed methods design, and seven (12 %) a qualitative design. Only three (5 %) proposals were to conduct a development/evaluation or action research study. No proposals utilized a design-based research design. Chi-square analysis comparing research design and goal of the study showed a significant relationship (χ2(9) = .00, p < .001). Analysis of adjusted residuals showed that development/evaluation designs were paired with the goal of developing new software, instruments, programs, or models more often than expected (AR 4.2). Mixed methods designs were paired with the goal of identifying learner/instructor characteristics less often than expected (AR −2.5), while quantitative designs were paired with the goals of identifying learner/instructor characteristics (AR 3.0) and media comparison (AR 1.9) more often than expected. Additional Chi-square analysis showed no significant relationship between research design and academic discipline.

Proposed measurable outcomes

In terms of measurable outcomes, 36 (60 %) of the proposals included a measure of performance outcomes (usually test scores) to answer their research questions and 20 (33 %) proposed to measure satisfaction outcomes (such as student course evaluations). Three proposals (5 %) included a measure of both types of outcomes, and only one proposal (2 %) did not intend to measure any outcomes. Chi-square analysis showed no significant relationship between measurable outcomes and academic discipline. Additional Chi-square analysis also showed no significant relationship between measurable outcomes and collaboration.

Discussion

We speculated that faculty outside of the education or instructional technology field might choose media comparison designs for their apparent simplicity (Warnick and Burbules 2007). Thus, we were pleasantly surprised to find this was not the case. Hew et al. (2007) also found fewer media comparison studies than studies on other topics. Similar to findings by Shih et al. (2008), nearly half of the proposals (45 %) were to explore the impact of instructional technologies on outcomes in classroom contexts. For example, faculty proposed to explore the impact of new technologies such as handheld devices, blogging, e-readers, digital audio and the Second Life virtual world on learning.

We found a range in the type of studies being designed, with 53 % choosing quantitative designs and 42 % utilizing qualitative or mixed methods designs. Quantitative designs have historically been more prevalent in the instructional technology field, though this has been variable over time (Ross and Morrison 2004; Ross et al. 2008, 2010). The focus on quantitative data may reflect the current emphasis on outcomes-based education and quasi-experimental designs encouraged by the National Research Council (2002) and other organizations advocating study designs resulting in generalizable claims. Or it may be that faculty on the whole are more familiar and/or comfortable with quantitative research designs.

Most importantly, findings from our Chi-square analyses point to the ability of faculty researchers to select research designs appropriate to their goals—development/evaluation studies for developing new media, quantitative (usually survey) designs for identifying learner/instructor characteristics, and quantitative designs for media comparison studies. Since there was no significant relationship between research design and academic discipline, faculty throughout higher education may be embracing various methodologies to best fit their research goals.

However, none of the proposals in our study utilized a DBR approach (Barab and Squire 2004), and only one mentioned action research as a framework. This is not surprising since our findings overall indicate minimal if any collaboration among disciplines and a dearth of literature reviews, two important aspects of a DBR approach.

We were pleased to find that only one of the 60 proposed designs did not include either a measure of performance or satisfaction as an outcome. While it seems important to have some indicator of success or change when introducing a new instructional method or strategy, it can be challenging to identify what those indicators should be. Media comparison studies and quantitative designs in particular emphasize manipulation of variables (such as the technology used) and measurement of outcomes, and studies of new approaches would do well to follow the suggestions of researchers like Warnick and Burbules (2007), Reeves et al. (2005), Ross et al. (2010), and Winn (2002). All of these authors promote studying instructional technology innovations in terms of the different ends they make possible, rather than viewing them as a means to an assumed end (such as performance or satisfaction variables).

We suggest, as Reeves et al. (2005) have previously, that DBR be considered a viable research design for instructional technology studies in classroom contexts (Greenhow et al. 2009; Ross et al. 2010). As Winn (2002) and Warnick and Burbules (2007) have pointed out, and Kozma (1994a, b) before them, understanding technology-supported learning environments is a complex undertaking, and research methods such as DBR are able to analyze them as spaces full of opportunities, rather than variables to be manipulated towards a pre-determined outcome.

Only 35 % of the proposals were collaboratively authored, with only 10 % representing collaborations across departments. Social work and agricultural sciences were found to be more likely to collaborate across departments, perhaps reflecting the culture of those fields. Encouraging greater collaboration in higher education contexts could be accomplished in several ways. While education was the most highly represented discipline among proposal authors, authoring 32 % of the proposals, the majority of authors (44 of the 64) came from outside of education. It could be that faculty in the education field are already comfortable designing studies in educational contexts, making them more likely to submit a proposal, whereas faculty in math and the natural sciences may have less experience with or interest in designing such studies. Encouraging collaboration may be a way not only to increase participation by faculty across campus, but also to strengthen study designs and lay the groundwork for future external grant funding. For faculty less experienced with social science research, we may want to encourage collaboration on the study design with doctoral students and faculty in educational research and/or instructional technology. Wolff (2008) found that a significant number of faculty submitting grant proposals for technology projects at their institution reported collaborating with colleagues both inside and outside of their academic units prior to submitting their proposals. It could be that our measure of collaboration did not fully capture collaborations that were actually occurring.

Encouraging collaboration across disciplines may result in a greater awareness of relevant studies and theoretical frameworks. It is troubling that less than half of the proposals (43 %) grounded their studies in previous literature, whether theoretical or empirical. As Higgins et al. (1989), Klein (1997) and others pointed out, there has often been a disconnect between instructional technology research, theory and practice. Reviewing previous studies would often provide answers to the questions that newcomers to instructional technology are likely to ask, eliminating the need for additional research in those areas. Grounding new studies in a theoretical framework or previous literature is essential for advancing the knowledge base of any field. DBR’s emphasis on the testing of theories would encourage faculty to review the literature in the field when determining what can reasonably be expected from introducing new technologies in particular ways. Furthermore, DBR includes both testing and generating new theories while solving complex instructional problems.

Only 45 % of the studies proposed to take place in classroom contexts. Reeves et al. (2005), in their call for more socially responsible research in higher education, argued that more instructional technology research should focus on changing classroom practice. With the recent proliferation of and interest in distance education programs at our institution, we expected a greater percentage of proposals to focus on distance education contexts. However, we found that only 33 % of the studies proposed to do so. If we want to encourage the design of studies more specifically focused on changing classroom practice, some modification may need to be made to the grant competition.

Implications for the design of the internal grant competition

A tacit purpose of our grant competition to date has been to encourage faculty to simply try new technologies and see how well they work. Similar to the internal grant program described by Wolff (2008), the focus has traditionally been on “innovative” uses of technology in teaching with little connection to particular outcomes. As Wolff (2008) discovered, however, views of what an innovative use of technology looks like vary widely across campus. An alternative model for a grant competition would be one that emphasizes conducting research in a way that makes a meaningful contribution to changing classroom practice or even to the knowledge base of a field. Doing so would require more of an emphasis on making connections with theoretical frameworks and previous literature, something that until now a majority of proposals have not done. Highlighting DBR as a viable methodology for collaborative faculty teams to explore technology use for changing classroom practices at our institution could become a guiding principle for the grant competition.

While faculty may understandably want to publish in a journal from their own discipline on the use of technology in their own teaching contexts, there are still research findings from the field of instructional technology that can inform those studies and help faculty avoid trying to reinvent the wheel. Faculty outside of education and outside of the instructional technology field may not be familiar with existing research on technology in educational contexts. For example, faculty should be aware of the no-significant difference phenomenon (Russell 1999) and the problems inherent in media comparison studies (Warnick and Burbules 2007). Encouraging faculty to collaborate with their colleagues through DBR studies driven by theory may lead to more meaningful outcomes. We may want to provide an incentive for doing so as part of the proposal evaluation criteria.

Students could be part of the research team to ensure that studies are relevant to classroom contexts. Encouraging collaborative teams may also make it more likely that this internal grant can serve as seed money for landing external funding. As resources dwindle, it seems reasonable to view internal grants in this way. Beyond a final report submitted to the university, specific outcomes have not been expected from the awardees. Making peer reviewed journal articles and/or external grant proposal submissions a required outcome may also encourage a focus on reviewing the literature, setting goals related to classroom change, and collaborating with colleagues.

Limitations and conclusions

This study was conducted on one large research-intensive campus, and a logical next step would be to replicate it on other campuses where similar initiatives are in place. Wolff (2008) argued that administrators of internal grant programs should recognize the “potential danger such programs can pose to the development of a community of learners” (p. 1196), arguing that setting up a competition for resources to implement innovative ideas actively works against the possibility of ensuring that “all interested members of the community are engaged in conversations about teaching and learning with technology regardless of their individual outcomes in a competitive process” (p. 1196). They suggested instead that structures be put into place to “reinforce locally conceived models of innovation processes” (p. 1196). This is also something for us to consider as we work to improve our grant program, and we have already instituted informal monthly brown bag lunches for faculty across campus who consider themselves part of the “instructional technology community of practice.”

While our findings are limited to one institution, they may be of use to faculty development professionals interested in how to better support researchers in designing effective studies around the use of instructional technology. As we learned from analyzing the instructional technology projects within our own institution, having a systematic process for faculty to utilize, such as DBR, may prove important for both research and practice. DBR could promote strong studies, grounded in previous literature and guided by specific goals, that enhance the research in and use of technology in higher education across all disciplines.

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Trena M. Paulus (1)
  • Gina Phipps (2)
  • John Harrison (3)
  • Mary Alice Varga (4)

  1. Department of Educational Psychology and Counseling, University of Tennessee, Knoxville, USA
  2. Teaching and Technology Center, Aiken Technical College, Aiken, USA
  3. Carter and Moyers School of Education, Lincoln Memorial University, Harrogate, USA
  4. Educational Psychology and Counseling, University of Tennessee, Knoxville, USA
