
Journal for STEM Education Research, Volume 1, Issue 1–2, pp 85–102

A Comparative Analysis of the Use of Student Response Devices (“Clickers”) in University Learning Environments at a Large Southeastern University

  • Grant E. Gardner
  • Subodh Dutta
  • Karen Mulcahy
  • Vera Tabakova
  • Diane Majewski
  • Joshua W. Reid
  • Zhigang Jia

Abstract

Reforms in STEM education call for the broad implementation of student-centered strategies, and instructional technologies can support this integration. An instructional technology that has received much attention in the last decade is the student response system, or "clicker". Little literature has examined the widespread diffusion of clickers and their appropriate use. This cross-sectional survey study explores the use of clickers in postsecondary settings across multiple disciplines, how clickers are used, and why faculty chose to use (or not use) clickers. Survey responses revealed that clicker use differed by discipline. Logistic regression identified class size as the only significant predictor of clicker use. Implications are discussed in terms of providing professional development that addresses instructor knowledge of, and beliefs about, instructional technology.

Keywords

Higher education · Automated response devices · Student-centered instructional strategies

Introduction to the Problem

Calls for educational reform in undergraduate science, technology, engineering, and mathematics (STEM) learning environments are grounded in the notion that classroom instruction should be built on the same fundamental principles as authentic scientific practices (i.e., scientific teaching; Handelsman et al. 2004). The implication of scientific teaching is that instruction should "involve active learning strategies to engage students in the process of science and teaching methods that have been systematically tested and shown to reach diverse students" (Handelsman et al. 2004, p. 521). Effective STEM instruction should be active, student-centered, based on systematic evidence, and result in iterative cycles of refining practice based on data. Despite these calls, the diffusion and implementation of many evidence-based instructional practices remains limited, especially in large-enrollment STEM courses (Milner-Bolotin et al. 2010; Sevian and Robinson 2011).

One way to facilitate active, student-centered instruction in large-enrollment STEM classrooms is through the integration of instructional technologies. Although not the only way to promote active STEM classrooms, instructional technologies can improve the efficiency with which such instruction is facilitated for large student populations. The most prolific of these instructional technologies in large STEM classrooms over the last decade has been the student automated response system (referred to colloquially, and in the remainder of this manuscript, as "clickers"; Auerbach and Schussler 2016; Burstein and Lederman 2006; Caldwell 2007; Jin and Bierma 2013; Kay and LeSage 2009; Lincoln 2009). Clickers are not new. They began as hardware devices that connected to instructor computers through radio frequency (RF) receivers or other means, but have since evolved such that most clicker-based systems have moved to cloud-based software accessible from students' wi-fi-enabled devices (Dunn et al. 2012; Stowell 2015). However, at our current institutions, we have seen little of this transition, with only a small percentage of students opting to use their own wi-fi-enabled devices in lieu of clicker hardware when given the choice. We acknowledge that clickers are not the only instructional technology that facilitates student learning, but they have been at the forefront of many classroom reform discussions.

Clickers in the STEM classroom were originally developed as a technological tool to increase student active learning, especially in larger undergraduate classrooms (Bruff 2009; Caldwell 2007; Fies and Marshall 2006). The most basic pedagogical use of clickers in STEM classrooms is to poll students with multiple-choice questions and use the results, typically a frequency distribution of the distractors selected, to enable both the instructor and the students to formatively assess understanding of the content (Bruff 2009). Clickers are also used with more formalized, and perhaps more cognitively demanding, active learning strategies such as Peer Instruction (Mazur 1997; Vickery et al. 2015). Easton (2009) provided one of the most comprehensive lists of how clickers can be used in classrooms to support learning: assessing student prior understanding, providing formative feedback during class, "breaking up" lecture for student engagement purposes, administering summative assessments, promoting student collaboration, taking attendance, and creating a sense of community in the classroom. Research instruments have also been developed to assess usability, impact on student engagement, and impact on learning (Judson and Sawada 2002; Richardson et al. 2015).

Although not the primary focus of this manuscript, faculty have found clickers useful for engaging students in the content, increasing student classroom interactions, enhancing student motivation and interest, and increasing attendance (Crossgrove and Curran 2008; Dunn et al. 2013; Fies and Marshall 2006; Fitch 2004; Hansen 2007; Kay and LeSage 2009; MacArthur and Jones 2008; Preszler et al. 2007; Sharma et al. 2005). However, as with any instructional technology, it is how the instructor chooses to use it, and not merely that it is used, that determines whether student learning is directly supported (Gray and Steer 2012; MacArthur 2013). For example, MacArthur and Jones (2008) found that in all published reports where students showed learning gains after using clickers, some form of collaborative learning was also used. In other words, collaborative interactions supported by the instructional technology seem particularly effective.

Emenike and Holme (2012) argued that, broadly, the pedagogical advantage of clickers can be embedded in the theory of meaningful learning (Bretz 2001). This theory posits that the most powerful forms of learning occur when the cognitive (thoughts and ideas), affective (attitudes and emotions), and psychomotor (physical behaviors and interactions) domains are all activated during the learning process. Under this theory, learning with clicker devices can be effective because students answer content-based questions (cognitive domain) while engaged with and motivated by the technology (affective domain), which takes physical form as a clicker or student device (psychomotor domain).

When used in a manner aligned with valid learning theory, clickers have been shown to promote student learning in active learning classrooms across numerous content disciplines and are a utilitarian way for many faculty to begin exploring student-centered instruction (Kay and LeSage 2009; Mayer et al. 2009; Shaw et al. 2015). However, some studies note that the use of clickers in the classroom varies greatly among faculty and between disciplines (Freeman et al. 2007). In addition, despite evidence for their effectiveness, many faculty choose to discontinue use of the technology due to perceived barriers to implementation (Freeman et al. 2007; Henderson 2005). As with any instructional technology or pedagogical technique, understanding the factors that relate to faculty adoption of this particular pedagogical support is critical to exploring how to increase the use, and efficacy of use, of this technology in particular learning environments. Again, we are not claiming that clickers are the only way to increase student-centered instruction, but they may serve as a unique case study for understanding adoption of student-centered pedagogies more broadly. This is because clickers are often one of the earliest "gateway" pedagogies that faculty interested in student-centered instruction might try. Understanding these factors is particularly critical because most STEM instructors rely on instructor-centered lecture, with clickers remaining the most common instructional technology utilized for engagement (Stains et al. 2018). The broad goal of this study was to better understand the diffusion of this particular technological innovation in a single university context. To situate our work, we briefly review the literature on faculty adoption of clicker technology and embed it in a diffusion of innovations framework. Despite many descriptive studies of clicker implementation, and many manuscripts relying on anecdotal or intuitive claims, there remains very little empirical work related to the efficacy and adoption of this instructional technology by STEM faculty.

Prevalence and Diffusion of Clicker Use in Higher Education

Rogers' (2003) diffusion of innovations framework has been adapted to education settings and used to understand how instructional innovations are developed, disseminated, and diffused through instructor populations in postsecondary STEM settings (for examples see Froyd et al. 2013; Henderson 2005). This theory examines relationships between awareness, adoption, characteristics of particular innovations, relevant adopters, and how adoption might progress over time. Rogers (2003) claimed that adoption of innovations typically follows an S-shaped growth curve, with early adoption occurring at a slow rate until a "tipping point" is reached and the innovation becomes more widely accepted and available. Following the tipping point, the innovation might experience exponential growth in adoption before either: 1) the market becomes saturated and adoption again levels out, or 2) the innovation fails to be widely integrated such that use steadily decreases over time. Although we acknowledge the published limitations of the efficacy of the develop-and-disseminate model for actually impacting instructional change in STEM learning environments (Henderson et al. 2010), Rogers' (2003) diffusion model still provides a useful lens through which to view data related to the dissemination of specific educational innovations in practice. With this framework in mind, we discuss the research on the dissemination and use of clickers in higher education.
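As a rough quantitative illustration (ours, not Rogers'), the S-shaped adoption pattern can be sketched with a logistic function, where adoption grows slowly, accelerates past a tipping point, and then saturates. The parameter values below are hypothetical.

```python
# A toy logistic adoption curve: A(t) = K / (1 + exp(-r * (t - t0))),
# where K is the saturation level, r the growth rate, and t0 the
# "tipping point". All parameter values here are hypothetical.
import numpy as np

K, r, t0 = 1.0, 0.8, 10.0        # saturation, growth rate, tipping point
t = np.arange(0, 21)             # years since the innovation appeared

adoption = K / (1.0 + np.exp(-r * (t - t0)))
for year, frac in zip(t, adoption):
    print(f"year {year:2d}: {frac:.0%} of potential adopters")
```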

Are there certain disciplinary fields that are more likely to adopt clicker technology than others? The few empirical studies that have examined this question in undergraduate settings indicate that few fields besides STEM have adopted clickers. For example, Farag et al. (2015) noted the "plethora of articles regarding how clickers can be used" (p. 208), but highlighted the lack of literature regarding faculty perceptions and frequency of use of clickers. Farag et al. (2015) reported that 66.0% of a sample of n = 104 business faculty had never taught with clickers, and a further 24.6% had taught with clickers once or only a few times. In a similar study with marketing education faculty, Lincoln (2009) noted that 84.55% of a national survey sample of marketing faculty (n = 356) were non-adopters of clickers, and within that sub-sample of non-adopters, 201 were either only somewhat aware or completely unaware of clickers. These studies appear to confirm the lack of widespread use of clicker technology among faculty in non-STEM fields.

In a critical review of n = 17 manuscripts on clicker use published from 2004 to 2009, Good (2013) noted that most classroom applications of the technology were in STEM or in social sciences focused on human behavior (82.35% of the research manuscripts in this sample). This finding is consistent with Lincoln's (2009) earlier claim that, "It currently appears that most faculty members using clickers are housed within the hard sciences disciplines, not the social sciences disciplines" (p. 27). A more recent review of clicker use in higher education by Han (2014) found that in a sample of n = 84 research studies, 56.10% were in STEM disciplines and 4.88% were in mixed STEM/non-STEM disciplinary contexts. What is clear from this literature is that research on clicker use has primarily been embedded a priori in STEM learning environments.

In STEM fields specifically, very few studies have examined the adoption and sustainability of clicker use, and those that exist are often situated at a single institution. Vincens and Caspersen (2014) noted a persistent increase in the use of clicker technology in STEM classrooms at their institution over a four-year period. However, Vincens and Caspersen's (2014) findings appeared to follow a campus-wide initiative to support faculty adoption and use of clickers in the classroom. Lewin et al. (2016) noted that in 264 STEM classroom observations at their institution, 80 observations showed faculty utilizing clickers in some form. In their study, Lewin et al. (2016) were primarily interested in how clickers were being used to support Peer Instruction specifically, and it is unclear whether institutional supports were in place to encourage faculty use of the technology. Finally, Gibbons et al. (2017) noted that in a sample of chemists at 37 institutions of higher education in the United States (n = 1282), only 21% were using clickers regularly in their classroom instruction.

The studies noted previously have a specific focus on clicker technology. A few other studies have examined adoption of educational innovations more broadly with clickers as a sub-component. For example, Froyd et al. (2013) noted that in a sample of 121 engineering faculty, about 70% were familiar with Peer Instruction (which often utilizes, and is sometimes treated as synonymous with, clicker use), but only around 20% were currently implementing it. The only study of which the authors are aware that examines faculty use of clicker technology specifically was conducted by Emenike and Holme (2012) in chemistry. Their findings from a sample of over 1500 chemistry faculty indicate that 18.6% used clickers, but only 12.8% both used clickers and found them useful in their instruction. In a recent study by the Authors (unpublished data) at two large southeastern universities in the United States, 97.98% of 101 STEM faculty were at least aware of clickers. Of those aware of clickers, 72% had chosen not to use them and 10% had discontinued use, with only 15% currently using clickers (3% of the sample did not respond to the item). Collectively, it appears that across STEM and non-STEM fields somewhere around a quarter of faculty implement the technology in the classroom.

Explaining Why Faculty Use Clickers in Higher Education

It is important to uncover the affordances and barriers to faculty use of clickers, as they are a common and utilitarian means of facilitating active learning in classrooms (Dunn et al. 2012). Very limited research specifically sets out to uncover affordances and barriers to faculty use of clickers as a research question, although some studies make suggestions as part of their implications (see Crossgrove and Curran 2008; Dunn et al. 2012; Shaw et al. 2015; Stowell 2015 as examples). Research in non-STEM contexts relies primarily on data related to faculty "perceptions" of clicker usage, but some studies do address the "types" of demographic or learning environment characteristics that promote the adoption of clicker technologies. Lincoln (2009) found that faculty who used clickers were more likely to be female. Farag et al. (2015), using a regression model, found that gender, academic rank, and highest degree offered at an institution were not significant predictors of clicker use in their particular sample of legal studies faculty. Good (2013) noted that sustained use of clickers was most prevalent where there was a community of support for technology integration, including technology follow-up and learning communities of practice. Primary reasons for faculty not adopting clickers included: 1) high costs to students, 2) perceptions that clickers would not work well in their particular course, 3) clickers being too much of a hassle to use, 4) faculty not believing clickers help students learn, and 5) no time to learn about the technology. Lai et al. (2015) conclude that a complicated milieu of issues contributes to adoption of clickers, including faculty personality, attitude towards technology, perceived benefits of clickers, and perceived ease of use. Kay and LeSage (2009) suggest that the biggest challenges for faculty adoption of clicker technology include time to learn the technology, using it effectively, and concerns about time to "cover content". Lai et al. (2015) noted that technological, administrative, and pedagogical challenges were the most frequent barriers to implementation of clickers for their case study participant.

Emenike and Holme (2012) conducted a logistic regression on a sample of chemistry faculty to examine barriers and affordances to clicker use. The authors found no relationship between use of clickers and gender, years of teaching chemistry, or specific sub-discipline of chemistry. Emenike and Holme (2012) also found that the number of years teaching chemistry decreased the probability that faculty found clickers useful; that faculty from doctoral institutions were 2.2 times more likely to use clickers than faculty from 2-year institutions and 1.6 times more likely than faculty from 4-year institutions; and that faculty who identified as chemical education researchers were 1.8 times more likely to report clicker usage than organic chemists. In another study with chemistry faculty, researchers found that clickers were more common in larger classes and in introductory sections of courses (Gibbons et al. 2017).

Study Goals and Research Questions

In light of the dearth of literature on faculty clicker use and the affordances and barriers to that use, this study reports on the differential use of clickers by faculty members at a large university in the Southeastern United States. We compared the use of clickers in STEM fields to other departments in the university. In addition, within the STEM fields we explored the personal and learning environment characteristics that might predict whether STEM faculty adopt clicker technology, which extends previous work in other disciplines (Good 2013; Lincoln 2009). Finally, the literature demonstrates the importance of considering how instructional innovations are used and not just that they are used. Therefore, for STEM faculty who adopted clickers, we explored how clickers are used in their classrooms. This led to the following research questions.
  • Research Question 1: What is the prevalence of the use of clickers in multiple disciplines across one university? Are there significant differences between disciplines in use of clickers at this university?

  • Research Question 2: What personal (teaching experience, race, gender, age) and learning environment (class size, rank, academic department) characteristics predict STEM faculty use of clickers?

  • Research Question 3: What are the most common affordances and barriers that STEM faculty cite for adopting or not adopting clickers?

  • Research Question 4: For STEM faculty who adopted clickers, how are they using the technology in their classrooms?

Methods

A cross-sectional survey research design was used to answer the research questions and was approved by the participating institution's Institutional Review Board. The survey was constructed to understand whether faculty were using clicker technology and how and why faculty had chosen to use (or not use) the technology. Example items included, "please explain the primary reason you began using clickers in your course" and "what would have to change for you to adopt clickers in your course." The full survey can be provided upon request. Content validity of the items was confirmed by the research team as well as by an external faculty member with experience in educational psychometrics and survey research design. The survey asked faculty about their use of clickers to engage students, their satisfaction levels with the technology, affordances and barriers associated with the technology, typical course sections taught, typical class sizes taught, number of years teaching, degree level, and academic rank (i.e., part-time lecturer, assistant professor, professor, etc.). A request to complete the survey (created utilizing Qualtrics survey software) was sent in an email to the entire university faculty by an upper administrator during the Spring 2013 academic semester. A full list of survey items can be found in the supplemental materials. Descriptive statistics, point biserial correlations, and logistic regression were performed using IBM SPSS Statistics Version 21.0 (IBM Corp 2012).

Context and Respondents

The context of the study was a large southeastern university that began as a teacher's college in 1907. The university currently offers more than 100 undergraduate degrees and more than 75 master's programs and is associated with a large regional medical school. The university maintains a high institutional focus on undergraduate instruction and has an undergraduate enrollment of around 23,000 students per year. Faculty responses (N = 201) were logged out of the 1166 full-time instructional faculty in the surveyed groups at the time the survey was distributed (a 17.2% response rate). Although this might seem a low response rate, it is not atypical for such designs in this context. Respondents' personal and professional demographics are summarized in Tables 1 and 2. Some percentages in Tables 1 and 2 do not add up to 100% due to incomplete responses on some items. Demographics of respondents were largely representative of the university as a whole. Thirty respondents did not indicate a home department and were therefore not included in analyses in which department was a relevant variable.
Table 1

Personal demographic characteristics of survey sample; cells show n (%)

| Characteristic     | STEM (n = 32) | Humanities (n = 22) | Social sciences (n = 38) | Health sciences (n = 17) | Medical school (n = 13) |
| Sex                |               |                     |                          |                          |                         |
|   Female           | 7 (22)        | 11 (50)             | 20 (53)                  | 16 (94)                  | 5 (38)                  |
|   Male             | 22 (69)       | 10 (45)             | 16 (42)                  | 1 (6)                    | 7 (54)                  |
| Age range          |               |                     |                          |                          |                         |
|   20–30 years      | 0 (0)         | 0 (0)               | 0 (0)                    | 0 (0)                    | 1 (8)                   |
|   31–40 years      | 6 (19)        | 4 (18)              | 11 (29)                  | 2 (12)                   | 2 (15)                  |
|   41–50 years      | 11 (34)       | 4 (18)              | 8 (21)                   | 2 (12)                   | 2 (15)                  |
|   51–60 years      | 10 (31)       | 9 (41)              | 8 (21)                   | 10 (59)                  | 4 (31)                  |
|   > 61 years       | 2 (6)         | 5 (23)              | 8 (21)                   | 3 (18)                   | 4 (31)                  |
| Race               |               |                     |                          |                          |                         |
|   Caucasian        | 18 (56)       | 18 (82)             | 31 (82)                  | 13 (73)                  | 9 (69)                  |
|   African American | 1 (3)         | 2 (9)               | 3 (8)                    | 1 (6)                    | 0 (0)                   |
|   Hispanic         | 1 (3)         | 0 (0)               | 0 (0)                    | 0 (0)                    | 0 (0)                   |
|   Asian            | 0 (0)         | 0 (0)               | 0 (0)                    | 1 (6)                    | 0 (0)                   |
|   Multi-racial     | 2 (6)         | 0 (0)               | 0 (0)                    | 1 (6)                    | 1 (8)                   |
|   Declined         | 8 (25)        | 2 (9)               | 1 (3)                    | 1 (6)                    | 2 (15)                  |

Table 2

Professional characteristics of survey sample; cells show n (%)

| Characteristic                    | STEM (n = 32) | Humanities (n = 22) | Social sciences (n = 38) | Health sciences (n = 17) | Medical school (n = 13) |
| Years teaching                    |               |                     |                          |                          |                         |
|   < 5 years                       | 6 (19)        | 3 (14)              | 6 (16)                   | 3 (18)                   | 5 (38)                  |
|   5–10 years                      | 11 (34)       | 3 (14)              | 10 (26)                  | 5 (29)                   | 0 (0)                   |
|   10–15 years                     | 6 (19)        | 4 (18)              | 7 (18)                   | 3 (18)                   | 1 (8)                   |
|   15–20 years                     | 2 (6)         | 5 (23)              | 4 (11)                   | 1 (6)                    | 1 (8)                   |
|   > 20 years                      | 5 (16)        | 7 (32)              | 11 (29)                  | 5 (29)                   | 3 (23)                  |
| Position/rank                     |               |                     |                          |                          |                         |
|   Part-time instructional         | 0 (0)         | 4 (18)              | 6 (16)                   | 1 (6)                    | 0 (0)                   |
|   Full-time instructional         | 10 (31)       | 4 (18)              | 10 (26)                  | 4 (24)                   | 0 (0)                   |
|   Assistant professor             | 4 (13)        | 3 (14)              | 6 (16)                   | 6 (35)                   | 4 (31)                  |
|   Associate professor             | 13 (41)       | 7 (32)              | 11 (29)                  | 4 (24)                   | 4 (31)                  |
|   Full professor                  | 5 (16)        | 3 (14)              | 5 (13)                   | 2 (12)                   | 5 (38)                  |
| Ave. courses taught per semester  |               |                     |                          |                          |                         |
|   One                             | 9 (28)        | 3 (14)              | 4 (11)                   | 1 (6)                    | 5 (38)                  |
|   Two                             | 8 (25)        | 8 (36)              | 12 (32)                  | 9 (53)                   | 4 (31)                  |
|   Three                           | 8 (25)        | 5 (23)              | 10 (26)                  | 4 (24)                   | 0 (0)                   |
|   Four                            | 5 (16)        | 6 (27)              | 11 (29)                  | 2 (12)                   | 0 (0)                   |
|   Five                            | 2 (6)         | 0 (0)               | 1 (3)                    | 1 (6)                    | 2 (15)                  |
| Typical class size                |               |                     |                          |                          |                         |
|   < 10 students                   | 2 (6)         | 0 (0)               | 1 (3)                    | 0 (0)                    | 1 (8)                   |
|   10–50 students                  | 9 (28)        | 15 (68)             | 22 (58)                  | 11 (65)                  | 2 (15)                  |
|   50–100 students                 | 6 (19)        | 4 (18)              | 11 (29)                  | 2 (12)                   | 7 (54)                  |
|   100–200 students                | 10 (31)       | 2 (9)               | 2 (5)                    | 4 (24)                   | 1 (8)                   |
|   > 200 students                  | 5 (16)        | 1 (5)               | 2 (5)                    | 0 (0)                    | 1 (8)                   |

Respondents' academic departments were coded as follows, primarily aligned with the formal organization of university colleges: 26.2% were from basic or applied STEM fields, which included the departments of computer science, biology, engineering, chemistry, physics, and mathematics. Non-STEM fields included 18.0% from the Humanities (theater and dance, politics, history, foreign language, English, and business); 31.1% from the social sciences (sociology, psychology, accounting and management, geography, education, economics, criminal justice); 13.9% from the basic health sciences (nursing, health management, physical therapy); and 10.7% from the school of medicine.

Limitations

We note two limitations to our work related to survey research designs. Both are likely to lead to a slight overestimation of this sample's awareness and use of clicker technology. The first is response bias. With a response rate of 17.2%, it is possible that this sample of survey responses is not representative of clicker use in the population of faculty at this university. In general, it is difficult to estimate how response bias might impact the results; however, we attempted to reduce this bias by following established protocols of survey dissemination and by comparing our sample to the university population demographics (Sudman 1985). We think it is reasonable to assume that faculty who responded to a survey specifically referring to clicker use in the classroom were either familiar with or had used the technology before.

Second, we recognize that self-report data have limitations in that what faculty report about their classroom practices may not align with their actual classroom practices as far as the use of clickers is concerned. Previous studies in undergraduate STEM settings have noted that faculty often overestimate their use of innovative instructional methods, and it is reasonable to assume similar results in this study (Ebert-May 2011). Because of this, our data should be considered somewhat of an "upper bound" of clicker usage at this particular university and interpreted through that lens. However, we contend that the within-sample comparisons allow useful trends to emerge.

Results

Research Question 1: What Is the Prevalence of the Use of Clickers in Multiple Disciplines across One University? Are There Significant Differences between Disciplines in Use of Clickers?

Results demonstrated that the majority of survey participants were not utilizing clickers in their classrooms, with 57.2% reporting never having used clickers, 28.3% reporting currently using clickers, and 14.5% having discontinued use after trying them. When the data are broken down by faculty disciplinary category, a visible trend arises (Fig. 1). Clicker use is most prevalent in the STEM fields (50.0% reported current use), the Medical School (46.2%), and the Basic Health Sciences (41.2%). Many faculty in the Humanities (72.7%) and Social Sciences (65.8%) have never used clickers. In addition, a notable number of individuals in the Social Sciences adopted and then discontinued use of clickers (26.3%).
Fig. 1

Comparison of clicker adoption by disciplinary groups. The dark gray bars indicate faculty who are currently using clicker technology, the light gray bars indicate faculty who have never used the clicker technology, and the medium gray bars indicate faculty who adopted clicker technology but then chose to discontinue use

Frequency counts and summary statistics were calculated across all survey items to determine the prevalence of clicker use across the disciplines. To examine whether there was a statistically significant association between discipline and adoption of clicker technology, a chi-square test for association was used. The first categorical variable in the chi-square analysis was a dichotomous variable comprising use ("current use") and non-use (including "never used" and "discontinued use"). The second categorical variable was the five coded disciplines (STEM, Humanities, Social Sciences, Basic Health Sciences, and Medicine). The chi-square test for independence revealed a statistically significant association between the variables with a moderate effect size (Χ2 = 19.20, p = .001; Cramér's V = 0.40, p = .001), indicating that use of clickers is related to discipline (as delineated here). Post-hoc analyses revealed that, for the use of clickers, the adjusted residual was statistically significant for STEM (z = 2.96, p = .003) and Social Sciences (z = −3.52, p < .001).
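As a minimal sketch of this analysis, a 2 × 5 contingency table of use versus non-use by discipline can be tested with a chi-square test, and Cramér's V computed from the resulting statistic. The cell counts below are illustrative placeholders, not the study's actual counts (which are not reported).

```python
# Chi-square test for association between clicker use and discipline,
# plus Cramér's V. The cell counts are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: current use, non-use (never used + discontinued)
# Columns: STEM, Humanities, Social sciences, Health sciences, Medicine
table = np.array([
    [16,  4,  6,  7,  6],   # hypothetical "use" counts
    [16, 18, 32, 10,  7],   # hypothetical "non-use" counts
])

chi2, p, dof, expected = chi2_contingency(table)

# Cramér's V for an r x c table: sqrt(chi2 / (n * min(r - 1, c - 1)))
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, Cramér's V = {v:.2f}")
```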

Research Question 2: What Personal (Teaching Experience, Race, Gender, Age) and Learning Environment (Class Size, Rank, Academic Department) Characteristics Predict Undergraduate STEM Faculty Use of Clickers?

To explore the relationship between personal and learning environment characteristics and use of clickers, point biserial correlations were calculated between clicker use (a dichotomous yes/no variable) and all other variables (Table 3). The only statistically significant relationship was between use of clickers and typical class size (r = .551, p < .01), indicating that clickers were more commonly used in larger classes. STEM faculty use of clickers also demonstrated small to medium effect size relationships with faculty designation (r = .201) and gender (r = −.190), but because neither value was statistically significantly different from r = 0, caution should be used in interpreting these effect sizes.
Table 3

Summary of point biserial correlations for clicker use with STEM professional and personal demographics

| Variable                             | 1.     | 2.     | 3.     | 4.    | 5.    | 6.    | 7.    | 8. |
| 1. Use of Clickers                   | 1      |        |        |       |       |       |       |    |
| 2. Teaching Experience               | .007   | 1      |        |       |       |       |       |    |
| 3. Faculty Designation               | .201   | −.460* | 1      |       |       |       |       |    |
| 4. Number of Sections                | .026   | −.081  | .425*  | 1     |       |       |       |    |
| 5. Class Size                        | .551** | .093   | .383*  | .037  | 1     |       |       |    |
| 6. Gender                            | −.190  | −.156  | .295   | −.045 | −.037 | 1     |       |    |
| 7. Age                               | −.011  | .437*  | −.373* | −.049 | −.093 | −.335 | 1     |    |
| 8. Familiarity with Clicker Policies | −.038  | −.072  | −.093  | −.210 | −.094 | −.031 | −.049 | 1  |

*p < .05; **p < .01

Although not directly relevant to Research Question 2, some of the statistically significant correlations that arose support the construct validity of our data. Participants with more teaching experience had higher faculty designations (r = −.460, p < .05; faculty designation was reverse coded with 1 = full professor) and tended to be older (r = .437, p < .05). Faculty with lower designations tended to teach more course sections (r = .425, p < .05), have bigger class sizes (r = .383, p < .05), and be younger (r = −.373, p < .05).
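As a minimal sketch of how such a coefficient is computed (with hypothetical data, not the study's), the point biserial correlation relates a dichotomous variable to an ordinal or continuous one and is equivalent to the Pearson correlation with the dichotomous variable coded 0/1.

```python
# Point biserial correlation between clicker use (0/1) and an ordinal
# class-size code. All data values are hypothetical.
from scipy.stats import pointbiserialr

use_clickers    = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]   # 1 = uses clickers
class_size_code = [1, 2, 4, 5, 2, 3, 4, 1, 5, 2]   # 1 = <10 ... 5 = >200 students

r, p = pointbiserialr(use_clickers, class_size_code)
print(f"r_pb = {r:.3f}, p = {p:.3f}")
```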

A logistic regression was performed to determine the effect of class size on the likelihood that STEM faculty will or will not use clickers. The outcome variable in the regression model was clicker use, and the predictor variables were those listed in Table 3. The logistic regression model was statistically significant, χ2(1) = 10.85, p = 0.001 (Table 4). The model explained 38.3% (Nagelkerke R2) of the variance in clicker use and correctly classified 71.9% of cases. The regression revealed that class size was the only statistically significant predictor of whether a faculty member used clickers. Each one-unit increase in the class size code increased the log-odds of clicker use by B = 1.17; equivalently, the odds of using clickers increase by a factor of exp(1.17) ≈ 3.23 (Table 5). This means that a faculty member with a typical class size of 100–200 students has roughly three times the odds of using clickers compared with one whose typical class size is 50–100 students.
Table 4

Clicker adoption logistic regression

| Variable   | B    | S.E. | χ2    | df | p    |
| Class size | 1.17 | 0.42 | 10.85 | 1  | .006 |

Table 5

Logistic regression and odds ratio for clicker use and class size

| Variable   | Wald χ2 | Odds ratio | 95% LCL | 95% UCL |
| Class size | 7.67    | 3.23       | 1.41    | 7.40    |
| Constant   | 6.91    | 0.02       |         |         |

LCL = lower confidence limit; UCL = upper confidence limit
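As a minimal sketch of this model (hypothetical data, not the study's), a logistic regression of clicker use on the class-size code yields a coefficient B whose exponential is the odds ratio; with the reported B = 1.17, exp(1.17) ≈ 3.23, matching Table 5.

```python
# Logistic regression of clicker use on class-size code, recovering the
# odds ratio as exp(B). All data values are hypothetical.
import numpy as np
import statsmodels.api as sm

class_size_code = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 1, 2, 3, 4, 5])
use_clickers    = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1])

X = sm.add_constant(class_size_code)          # intercept + predictor
model = sm.Logit(use_clickers, X).fit(disp=0)

b = model.params[1]                            # log-odds change per unit of class-size code
odds_ratio = np.exp(b)                         # multiplicative change in odds per unit
ci_low, ci_high = np.exp(model.conf_int()[1])  # 95% CI for the odds ratio
print(f"B = {b:.2f}, OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```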

Research Question 3: What Are the Most Common Affordances and Barriers that Undergraduate STEM Faculty Cite for Adopting or Not Adopting Clickers?

Of those individuals using clickers, most adopted them because they believe clickers help students achieve critical learning objectives (40.22%), because they enjoy using innovative technologies (22.83%), or for the pragmatic reason of needing to take attendance in large lecture courses (19.57%). Faculty who did not utilize clickers or who discontinued use were also asked their reasons (Fig. 2), as well as what would have to change for them to reconsider utilizing clicker technologies. The most common reasons were a classroom environment that was not conducive to clicker use (34.4%), followed by a concern that the technology was not an effective teaching tool (31.3%), and a lack of time to learn a new technology (15.6%). Only a few faculty mentioned a lack of professional development (9.4%) or a lack of comfort with the technology (9.4%). When asked what would have to change for them to consider adoption of clickers, the most frequently cited condition was that faculty would need to be convinced of classroom efficacy through the research literature (26.3%).
Fig. 2

Faculty members’ perceived constraints of using clickers

Research Question 4: For STEM Faculty Who Adopted Clickers, How Are They Using the Technology in Their Classrooms?

STEM faculty are primarily using clickers for keeping students engaged (87.5%), for real-time formative feedback on student comprehension (87.5%), and for taking attendance (62.5%). Less common uses of clickers include summative assessments (37.5%), anonymously polling students on potentially sensitive issues (18.8%), and recording class participation (31.3%; Fig. 3).
Fig. 3

Description of faculty use of clickers

Discussion

Many studies on the adoption of educational innovations in STEM classrooms in higher education have taken a broad approach to assessing instructors' use of research-based instructional strategies (RBIS) by either: 1) not specifying a particular instructional strategy in their instrument, instead using all-encompassing terms such as "active learning" or "evidence-based practices"; or 2) providing many options of RBIS, but averaging the results across all of these independent strategies in their analysis (see Borrego et al. 2010; Froyd et al. 2013; Henderson et al. 2012). In this manuscript we have focused on a single, widely used educational innovation, such that clicker use might stand in as a case study of educational technology innovation adoption more broadly. This is not because we believe that clickers are somehow superior; other pedagogical methods might be similarly supportive of student learning. We chose clickers because this instructional technology often has high visibility in discussions of education reform in large-enrollment STEM classrooms and also often serves as a "gateway" tool that instructors use when trying student-centered instructional methods for the first time.

Awareness of clickers was high, with only 9% of our respondents reporting unawareness of this educational technology. Our survey revealed that clickers are primarily used by faculty in STEM departments (50%), followed closely by the academic units most closely related to STEM degrees (the Medical School at 46% and Basic Health Sciences at 41%). These data add credence to previous claims, supported by limited evidence, that clickers are primarily being used in the STEM fields (Good 2013; Lincoln 2009). The other unique finding from these data was that just over a quarter of the Social Science faculty participant group had discontinued use of clicker devices.

Future research questions naturally extend from these results. Is there something inherent in the STEM disciplines, in contrast to the Social Sciences or Humanities, that lends itself to more prevalent clicker use? Two aspects of STEM instruction that might explain this disciplinary disparity are: 1) the typical nature of content and assessments in STEM disciplines; and 2) the learning environments in which many early STEM classes take place. Anecdotally, STEM disciplines often rely heavily on factual content easily assessed through multiple-choice items, whereas the Social Sciences and Humanities tend to assess student knowledge through written text. Although clicker technology has the capacity to let students enter longer text strings, the majority of its assessment uses rely on items structured as multiple choice (Bruff 2009). Even Peer Instruction, a method traditionally harnessing clickers as a supportive educational technology while assessing higher order conceptual knowledge in STEM, relies on multiple-choice-based ConcepTests (Mazur 1997).

Our data provide some insight into the second potential cause of disciplinary disparities. Our regression analysis revealed class size as the only statistically significant predictor of clicker use among the demographic and learning environment variables we examined (see also Gibbons et al. 2017). It could be that clickers are a tool for student-centered instruction that scales up particularly well and allows for large-scale analytics of student data. In addition, the most commonly cited barrier to clicker use was the faculty member's perception that clickers were "not appropriate for their classroom environment." At the institution at which this study was conducted, class sizes in STEM courses can reach upwards of 250 students per section, whereas such class sizes are rare in the Social Sciences and Humanities. Because clickers were originally developed as a technological tool to increase student active learning in these large classes (Bruff 2009; Caldwell 2007; Fies and Marshall 2006), it is not unusual that the learning environment (i.e., large class sizes) might be predictive of the types of environments in which faculty would readily adopt clickers. Emenike and Holme (2012) noted that chemistry faculty at doctoral institutions were over two times more likely to adopt clickers than faculty at smaller institutions, and it is possible that larger class sizes at doctoral institutions might explain these results. These two hypotheses will need further testing in future studies.

What is perhaps more telling is which variables were not predictive of faculty use of clickers. Faculty rank, teaching experience, age, and gender were not significant predictors of clicker use in our study. This finding largely aligns with previous research that has demonstrated few statistical relationships between faculty demographics and clicker use (Emenike and Holme 2012; Farag et al. 2015; Gibbons et al. 2017; Lincoln 2009). Only Lincoln's (2009) study noted a relationship between gender and clicker use, with females using the technology more often than males. There is often a perception that innovative technologies are for young faculty and that older faculty might be more "stuck in their ways," with inertia preventing adoption of new educational innovations. Our data suggest that professional development geared towards implementing technological innovations in postsecondary classrooms must be careful not to harbor such bias by targeting only populations of assumed interest and motivation.

Rogers' (2003) diffusion of innovations framework argues that there will always be individuals who do not adopt a specific technology. Because demographic factors seem to be least predictive of STEM faculty use of clickers and awareness of clicker technology is high in our sample, it is likely that significant barriers to adoption occur at later stages in the innovation diffusion process (Rogers 2003). Faculty are likely weighing whether to try, or to discontinue using, clickers based on their perceptions of the affordances of and barriers to the educational technology. Beliefs about the affordances and constraints of technologies, and how these beliefs relate to instructor beliefs about teaching and learning, have been discussed as contributing intrinsic factors in technology integration (Chen, Looi, and Chen 2009; Ertmer 1999). The top uses for clickers that participants cited in this study were gaining formative feedback on student comprehension (88%), keeping students engaged (88%), and taking attendance (63%). These three uses are particularly strong affordances for large-enrollment courses, in which gaining collective feedback, keeping students engaged, and maintaining attendance records can be particularly onerous. This finding adds further indirect support for class size best predicting clicker usage.

Most faculty adopted clickers for evidence-based reasons, utilizing clickers as a tool to formatively assess and engage students. However, a large portion of the faculty sample still primarily used clickers as a means to take attendance. There are numerous issues surrounding this type of use, including the need for professional development that helps faculty realize the potential of the technology, as well as the question of the value of clickers to students in an increasingly tight economy. Non-pedagogical uses of clickers still require students to purchase the technology without any benefit to their learning. Therefore, faculty need opportunities to be exposed to, and practice, the various pedagogical strategies associated with clickers (e.g., Peer Instruction; Mazur 1997) and the support networks needed to implement them effectively (MacArthur 2013).

The second most cited barrier to faculty adopting clicker technology was the perception that there was not enough evidence to support clickers as an effective instructional tool. This is a common problem with some evidence-based practices: disciplinary faculty do not have the time or the inclination to engage with the educational research literature (Froyd et al. 2013). Faculty who are familiar with the evidence-based use of clickers in undergraduate STEM classrooms might be more likely to use them. In fact, evidence from Emenike and Holme's (2012) study with chemistry faculty indicated that faculty who identified as chemistry education researchers were 1.8 times more likely to adopt clickers than faculty in other sub-disciplines. Additionally, others have discussed knowledge, goals, and beliefs of the instructor as major factors that dictate technology integration (Chen et al. 2009). Effective technology integration that mirrors evidence-based pedagogy requires a specialized knowledge base (i.e., TPACK; Mishra and Koehler 2006).

Conclusions

Faculty have various reasons for choosing not to adopt innovative technologies; time and administrative support are frequently cited, consistent with our study. For example, several faculty perceived that the technology was not conducive to their learning environment. Faculty beliefs about, and knowledge of, the technology thus impacted clicker integration practices. Exploring these beliefs and knowledge bases in more depth with qualitative data sources could prove productive for progressing this research area and for understanding how various instructional technologies can support student-centered instruction in STEM classrooms.

Faculty report that they would need to be convinced of the efficacy of clicker technologies in order to (re)adopt them. Considering the large volume of educational research available to faculty, there needs to be a more concerted effort on the part of administrators and education faculty to provide the resources and data that demonstrate the value of various instructional technologies in the classroom. In addition, faculty perceptions of teacher-centered instruction often rest on assumptions of a "one-size-fits-all" approach to education. Helping university faculty see the importance of utilizing various instructional technologies, as well as various modes of instruction, to reach a diversity of students is an important and unrealized goal of teaching in higher education. As noted, clickers are not a universal solution to integrating student-centered instruction in STEM classrooms. They must be utilized with efficacy and alongside other methods appropriate for the particular instructor and context. Examining clicker technology as a support for student-centered classroom integration can serve as a case study for both instructors and researchers to consider when attempting to adopt innovative methods in their own classrooms or to support adoption of these methods by colleagues.

Acknowledgements

This work was funded primarily by the Oak Foundation USA and the North Carolina GlaxoSmithKline Foundation, with support from the U.S. Department of Education and the University of North Carolina General Assembly, as part of the College STAR (Supporting Transition, Access, and Retention) collaborative project in association with East Carolina University, Appalachian State University, and Fayetteville State University.

Compliance with Ethical Standards

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

References

  1. Auerbach, A. J., & Schussler, E. E. (2016). Instructor use of group active learning in an introductory biology sequence. Journal of College Science Teaching, 45(5), 67–74.
  2. Borrego, M., Froyd, J. E., & Hall, T. S. (2010). Diffusion of engineering education innovations: A survey of awareness and adoption rates in U.S. engineering departments. Journal of Engineering Education, 99(3), 185–207.
  3. Bretz, S. L. (2001). Novak's theory of education: Human constructivism and meaningful learning. Journal of Chemical Education, 78(8), 1107–1117.
  4. Bruff, D. (2009). Teaching with classroom response systems: Creating active learning environments. San Francisco, CA: Jossey-Bass.
  5. Burstein, R. A., & Lederman, L. M. (2006). The use and evolution of an audience response system. In D. A. Banks (Ed.), Audience response systems in higher education: Applications and cases (pp. 40–52). Hershey, PA: Information Science.
  6. Caldwell, J. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE Life Sciences Education, 6, 9–20.
  7. Chen, F.-H., Looi, C.-K., & Chen, W. (2009). Integrating technology in the classroom: A visual conceptualization of teachers' knowledge, goals, and beliefs. Journal of Computer Assisted Learning, 25(5), 470–488.
  8. Crossgrove, K., & Curran, K. L. (2008). Using clickers in nonmajors- and majors-level biology courses: Student opinion, learning, and long-term retention of course material. CBE Life Sciences Education, 7, 146–154.
  9. Dunn, P. K., Richardson, A., McDonald, C., & Oprescu, F. (2012). Instructor perceptions of using a mobile-phone-based free classroom response system in first-year statistics undergraduate courses. International Journal of Mathematical Education in Science and Technology, 43(8), 1041–1056.
  10. Dunn, P. K., Richardson, A., McDonald, C., & Oprescu, F. (2013). Mobile-phone-based classroom response systems: Students' perceptions of engagement and learning in a large undergraduate course. International Journal of Mathematical Education in Science and Technology, 44(8), 1160–1174.
  11. Easton, C. (2009). An examination of clicker technology use in legal education. Journal of Information, Law and Technology, 3. Academic OneFile, retrieved 10 June, 2015.
  12. Ebert-May, D. (2011). What we say is not what we do: Effective evaluation of faculty professional development programs. BioScience, 61, 550–558.
  13. Emenike, M. E., & Holme, T. A. (2012). Classroom response systems have not "crossed the chasm": Estimating the numbers of chemistry faculty who use clickers. Journal of Chemical Education, 89, 465–469.
  14. Ertmer, P. A. (1999). Addressing first- and second-order barriers to change: Strategies for technology integration. Educational Technology Research and Development, 47(4), 47–61.
  15. Farag, S. P., Park, S., & Kaupins, G. (2015). Faculty perceptions of the adoption and use of clickers in the legal studies in business classroom. Journal of Education for Business, 90(4), 209–216.
  16. Fies, C., & Marshall, J. (2006). Classroom response systems: A review of the literature. Journal of Science Education and Technology, 15(1), 101–109.
  17. Fitch, J. L. (2004). Student feedback in the college classroom: A technology solution. Educational Technology Research and Development, 52, 71–81.
  18. Freeman, M., Bell, A., Comerton-Forde, C., Pickering, J., & Blayney, P. (2007). Factors affecting educational innovation with in class electronic response systems. Australasian Journal of Educational Technology, 23(2), 149–170.
  19. Froyd, J. E., Borrego, M., Cutler, S., Henderson, C., & Prince, M. J. (2013). Estimates of use of research-based instructional strategies in core electrical or computer engineering courses. IEEE Transactions on Education, 56(4).
  20. Gibbons, R. E., Laga, E. E., Leon, J., Villafane, S. M., Stains, M., Murphy, K., & Raker, J. R. (2017). Chasm crossed? Clicker use in postsecondary chemistry education. Journal of Chemical Education, 94(5), 549–557.
  21. Good, K. C. (2013). Audience response systems in higher education courses: A critical review of the literature. International Journal of Instructional Technology and Distance Learning, 10(5), 19–34.
  22. Gray, K., & Steer, D. N. (2012). Personal response systems and learning: It is the pedagogy that matters, not the technology. Journal of College Science Teaching, 41(5), 80–88.
  23. Han, J. H. (2014). Closing the missing links and opening the relationships among the factors: A literature review on the use of clicker technology using the 3P model. Educational Technology & Society, 17(4), 150–168.
  24. Handelsman, J., Ebert-May, D., Beichner, R., Bruns, P., Chang, A., DeHaan, R., Gentile, J., Lauffer, S., Stewart, J., Tilghman, S. M., & Wood, W. B. (2004). Scientific teaching. Science, 304, 521–522.
  25. Hansen, C. R. (2007). An evaluation of a student response system used at Brigham Young University (Master's thesis). Retrieved from http://contentdm.lib.by.edu/ETD/image/etd2127.pdf.
  26. Henderson, C. (2005). The challenges of instructional change under the best of circumstances: A case study of one college physics instructor. American Journal of Physics, 73(8), 778–786.
  27. Henderson, C., Finkelstein, N., & Beach, A. (2010). Beyond dissemination in college science teaching: An introduction to four core change strategies. Journal of College Science Teaching, 39, 18–25.
  28. Henderson, C., Dancy, M., & Niewiadomska-Bugaj, M. (2012). The use of research-based instructional strategies in introductory physics: Where do faculty leave the innovation-decision process? Physical Review Special Topics – Physics Education Research, 8, 020104.
  29. IBM Corp. (2012). IBM SPSS Statistics for Windows, version 21.0. Armonk, NY: IBM Corp.
  30. Jin, G., & Bierma, T. (2013). STEM for non-STEM majors: Enhancing science literacy in large classes. Journal of College Science Teaching, 42(6), 20–26.
  31. Judson, E., & Sawada, D. (2002). Learning from past and present: Electronic response systems in college lecture halls. Journal of Computers in Mathematics and Science Teaching, 21(2), 235–249.
  32. Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education, 53(3), 819–827.
  33. Lai, G., Hill, V., & Ma, Y. (2015). Clickers in the classroom: A business professor's adoption of a classroom response system. International Journal of Innovation and Learning, 18(4), 451–470.
  34. Lewin, J. D., Vinson, E. L., Stetzer, M. R., & Smith, M. K. (2016). A campus-wide investigation of clicker implementation: The status of peer discussion in STEM classes. CBE Life Sciences Education, 15(1), ar6.
  35. Lincoln, D. J. (2009). Student response systems adoption and use in marketing education: A status report. Marketing Education Review, 19(3), 25–40.
  36. MacArthur, J. R. (2013). How will classroom response systems "cross the chasm"? Journal of Chemical Education, 90(3), 273–275.
  37. MacArthur, J. R., & Jones, L. L. (2008). A review of literature reports of clickers applicable to college chemistry classrooms. Chemistry Education Research and Practice, 9(3), 187–195.
  38. Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, B., Bimber, D., Chun, M., Bulger, J., Campbell, A. K., & Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34, 51–57.
  39. Mazur, E. (1997). Peer instruction. New York, NY: Pearson Higher Education.
  40. Milner-Bolotin, M., Antimirova, T., & Petrov, A. (2010). Clickers beyond the first year science classroom. Journal of College Science Teaching, 40(2), 14–18.
  41. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
  42. Preszler, R. W., Dawe, A., Shuster, C. B., & Shuster, M. (2007). Assessment of the effects of student response systems on student learning and attitudes over a broad range of biology courses. CBE Life Sciences Education, 6(1), 29–41.
  43. Richardson, A. M., Dunn, P. K., McDonald, C., & Oprescu, F. (2015). CRiSP: An instrument for assessing student perceptions of classroom response systems. Journal of Science Education and Technology, 24(4), 432–447.
  44. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: Free Press.
  45. Sevian, H., & Robinson, W. E. (2011). Clickers promote learning in all kinds of classes – small and large, graduate and undergraduate, lecture and lab. Journal of College Science Teaching, 40(3), 14–18.
  46. Sharma, M. D., Khachan, J., Chan, B., & O'Byrne, J. (2005). An investigation of the effectiveness of electronic classroom communication systems in large lecture classes. Australasian Journal of Educational Technology, 21(2), 137–154.
  47. Shaw, A. M., Mendonca, A. F., & Daraba, A. (2015). "Clickers" and HACCP: Educating a diverse food industry audience with technology. Journal of Extension, 53(6), 6TOT6.
  48. Stains, M., Harshman, J., Barker, M. K., Chasteen, S. V., Cole, R., DeChenne-Peters, S. E., … Young, A. M. (2018). Anatomy of STEM teaching in North American universities. Science, 359(6383), 1468–1470.
  49. Stowell, J. R. (2015). Use of clickers vs. mobile devices for classroom polling. Computers & Education, 82, 329–334.
  50. Sudman, S. (1985). Mail surveys of reluctant professionals. Evaluation Review, 9, 349–360.
  51. Vickery, T., Rosploch, K., Rahmanian, R., Pilarz, M., & Stains, M. (2015). Research-based implementation of peer instruction: A literature review. CBE Life Sciences Education, 14, es3.
  52. Vincens, Q., & Caspersen, M. E. (2014). Getting more scientists to revamp teaching. Journal of College Science Teaching, 43(5), 22–27.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Middle Tennessee State University, Murfreesboro, USA
  2. East Carolina University, Greenville, USA
