Introduction

Despite the abundance of evidence demonstrating the positive impact of active learning on student outcomes in undergraduate science, technology, engineering, and mathematics (STEM) courses (e.g., Freeman, Eddy, McDonough, Smith, Okoroafor, Jordt, & Wenderoth, 2014; Prince, 2004), instruction has, overall, shifted only to a limited extent toward evidence-based instructional practices (EBIPs) (Stains et al., 2018). Prior research suggests that these limited changes in instruction may be shaped by tensions between professional identities related to teaching and research (Brownell & Tanner, 2012) and by instructor-perceived barriers (e.g., Henderson & Dancy, 2007; Foster, 2014; Shadle, Marker, & Earl, 2017). Further, instructional practices may differ based on disciplinary contexts and course level (e.g., Drinkwater, Matthews, & Seiler, 2017), and researchers argue that these contextual factors should be more thoroughly explored in order for instructional change efforts to be effective (Lund & Stains, 2015). However, no validated comprehensive instrument exists to systematically and efficiently assess the factors that may promote or impede instructors’ implementation of EBIPs. In the present study, we report on the development and initial validation of the Faculty Instructional Barriers and Identity Survey (FIBIS), share preliminary results from the instrument, and discuss the implications of using FIBIS to support departmental and institutional change efforts in undergraduate STEM education.

Literature review

EBIPs can be defined as “instructional practices that have been empirically demonstrated to promote students’ conceptual understanding and attitudes” (Lund & Stains, 2015, p. 2). There is a complex relationship between faculty beliefs about student learning, contextual barriers and affordances to teaching, and instructional practice as they relate to EBIPs; those seeking to change practice need to consider how all of these interact with one another (Hora, 2012; Hora, 2014).

Synthesizing change research studies across faculty development, physics education, and higher education, Henderson, Beach, and Finkelstein (2011) concluded that there are four categories of change strategies for undergraduate instruction: disseminating curriculum and pedagogy, developing reflective teachers, developing policy, and developing shared vision. They also concluded that making tested “best practice” materials available to faculty and having “top-down” policies meant to influence the use of EBIPs do not work to bring about change. Instead, what works are approaches that seek to change faculty beliefs, utilize long-term interventions, and recognize that institutions are complex. After a review of the literature using a systems approach, Austin (2011) explained that faculty decisions are shaped by both individual characteristics (i.e., prior experience, doctoral socialization, the discipline, career stage, appointment type, and faculty motivation) and their current contexts (i.e., institution, department, and external contexts). From this systems perspective, there are four “levers” that can serve as barriers or promoters of faculty choices: reward systems, work allocation, professional development (PD), and leadership (Austin, 2011). Similarly, in their model of faculty curriculum decision-making, Lattuca and Pollard (2016) pulled from the literature to conclude that influences external to the institution affect both individual (i.e., faculty identity, experiences, and beliefs/knowledge) and internal (e.g., institutional, departmental) influences. Their model suggests that external influences affect individual influences, which in turn affect motivation for change and, finally, the decision to engage in change. Below, we review the literature on these factors and the ways in which they impact instructional practice and demonstrate the gaps in the previous work that the present study seeks to address.

Barriers to implementing EBIPs

There is a body of mostly qualitative work published on faculty perceptions of barriers to implementing EBIPs (e.g., Michael, 2007; Henderson & Dancy, 2007; Dancy & Henderson, 2008; Walder, 2015). This literature suggests that barrier foci may differ across universities; however, there are some common themes across these studies that characterize instructional barriers. Based on our review and synthesis of the literature, we categorized these barriers into professor barriers, student barriers, technical/pedagogical barriers, and institutional/departmental barriers, similar to how Walder (2015) categorized barriers (Table 1).

Table 1 Overview of research focused on faculty perceptions of barriers to implementing EBIPs

Though there is general agreement that time is the greatest issue for faculty, studies performed at different universities have found different barrier foci. For example, Michael (2007) explored barriers to faculty use of active learning at a workshop with 29 faculty participants, some of whom were in STEM; the results fell into three categories: student characteristics, teacher characteristics/problems, and pedagogical issues that affect student learning. The most cited barriers were time, financial obstacles, teaching resistance, student commitment, and technical complexity. Walder (2016) interviewed 32 professors who had won teaching awards at a Canadian university and divided their barriers to innovation into six categories: professors, technical aspects, students, institution, assessment, and discipline. Professor-related obstacles were by far the most frequently mentioned barriers for this population. Turpen, Dancy, and Henderson (2016) interviewed 35 physics faculty across several institutions to examine barriers to the use of peer instruction (PI) by users and non-users. Non-users of PI were most concerned with time, while PI users were most concerned with implementation difficulties, suggesting that faculty barriers differ between those who have and have not used EBIPs. In a study of six tenured physics faculty from four institutions, Henderson and Dancy (2007) found the largest barriers to the use of EBIPs to be the need to cover content, students’ attitudes toward school, lack of time due to heavy teaching and research loads, departmental norms of lecture, student resistance, class size, room layout, and fixed or limited time structure. The differences observed in barriers to the use of EBIPs across studies could be a result of institutional context. However, without a common instrument to measure barriers, it is challenging to draw conclusions across studies about the most common or significant barriers for faculty. The development of FIBIS seeks to address this gap in the literature.

Several studies have examined barriers at the same institution across departments (e.g., Austin, 2011; Borrego, Froyd, & Hall, 2010; Lund & Stains, 2015; Shadle et al., 2017; Stieha, Shadle, & Paterson, 2016) and have found that they vary; the implication is that each environment is unique and must be understood before effective reform efforts can move forward. In particular, Lund and Stains (2015) found that the physics faculty at their university possessed primarily student-centered views of teaching and viewed contextual influences positively, while the chemistry faculty had the most teacher-centered views and viewed contextual influences as barriers to the adoption of EBIPs; biology faculty fell in between. Conversely, Landrum et al. (2017) found that the chemistry faculty at their institution had the lowest rates of EBIP adoption while computer science faculty had the highest. These studies demonstrate that not only does EBIP implementation vary across departments within the same university, but EBIP implementation within the same department also varies across universities. Underscoring this point, Reinholz and Apkarian (2018) reviewed the literature extensively and concluded that departments are key to systemic change in STEM fields. Therefore, it is important that barriers are understood within the context of both departments and institutions.

Despite the evidence supporting the use of EBIPs, many faculty members choose not to utilize such practices due to their beliefs (Addy & Blanchard, 2010); in particular, deeply held beliefs about teaching and learning (Pajares, 1992). Andrews and Lemons (2015) showed that faculty will continue to think and teach as they always have until a personal experience influences them to think otherwise. A number of studies have shown that dissatisfaction with current practice is necessary for instructors’ beliefs to change (Bauer et al., 2013; Gess-Newsome, Southerland, Johnston, & Woodbury, 2003; Gibbons, Villafane, Stains, Murphy, & Raker, 2018; Windschitl & Sahl, 2002). Reflection on teaching has been found to be vital in creating the dissatisfaction that can lead faculty to change their practices (e.g., Kane, Sandretto, & Heath, 2004). Among graduate students and postdocs, reflecting upon novel teaching experiences helped these instructors experience conceptual change in their teaching beliefs and practices (Sandi-Urena, Cooper, & Gatlin, 2011). The discipline-based education research (DBER) report reviewed the belief literature and found that teacher beliefs are usually the most common barrier to implementing EBIPs (National Research Council [NRC], 2012). When there is a mismatch between faculty beliefs and their practice, faculty are more likely to become dissatisfied with their teaching and be open to EBIPs. Madson, David, and Tara (2017) found that the best predictor of faculty use of interactive methods was faculty beliefs about the positive results of using these methods, for both themselves and their students, more so than the extra time needed to implement them or the characteristics of the course or instructor. However, no comprehensive survey exists in the literature that assesses barriers to instruction, including faculty beliefs, and provides researchers a systematic way to compare and contrast how these factors affect one another. The closest survey in scope and focus is the Survey of Climate for Instructional Improvement (Walter, Beach, Henderson, & Williams, 2014). However, this instrument investigates departmental climate for instructional improvement rather than barriers to instruction directly. A full listing of instruments and their foci is included in Table 2; most of these instruments are focused on teaching practices in a specific discipline. None of these instruments contain a comprehensive, quantitative measure of barriers to using EBIPs or identity. Further, one of the two instruments that focus on barriers (Shell, 2001) explores faculty barriers to implementing critical thinking in nursing courses, not barriers to STEM EBIPs more broadly. The other recently published work using a barriers-related survey (Bathgate et al., 2019) explores only the presence or absence (checked or unchecked boxes) of 30 paired barriers and supports (60 items total) for faculty using evidence-based teaching methods. Unlike these other instruments, the FIBIS instrument connects a host of demographic and background information with use of EBIPs, barriers to using EBIPs (and the extent to which these are barriers), and issues related to teaching and research identity that may affect faculty’s use of EBIPs.

Table 2 List of survey instruments related to faculty use of EBIPs including measures of faculty teaching practices, beliefs, and institutional climate; barriers; and supports to using EBIPs. (Note: list is not intended to be exhaustive but to demonstrate what is being done in these areas)

Professional identity

In addition to barriers, researchers suggest that faculty professional identity can impact instructional practices (e.g., Abu-Alruz & Khasawneh, 2013; Brownell & Tanner, 2012). Professional identity can be defined as the core beliefs, values, and assumptions about one’s career that differentiate it from others (Abu-Alruz & Khasawneh, 2013) and includes professional values, professional location, and professional role (Briggs, 2007). Identity also includes a sense of professional belonging (Davey, 2013) and develops continually over the course of a lifetime (e.g., Henkel, 2000). Professional identity in academia most generally relates to research and teaching that are discipline-specific (Deem, 2006). While there is no agreed upon definition of professional identity in the higher education literature, based on the results of their meta-analysis, Trede, Macklin, and Bridges (2012) suggest three main characteristics of professional identity: (1) individuals develop “knowledge, a set of skills, ways of being and values” that are common among individuals of that profession, (2) these professional commonalities differentiate the individual from others in different professions, and (3) individuals identify themselves with the profession (p. 380). The authors also argue that professional identity begins to develop when individuals are students, and much of the research on professional identity focuses on identity development among undergraduate students (e.g., Nadelson et al., 2017; Hazari, Sadler, & Sonnert, 2013; Ryan & Carmichael, 2016), graduate students (e.g., Schulze, 2015; Gilmore, Lewis, Maher, Feldon, & Timmerman, 2015; Hancock & Walsh, 2016), and faculty (e.g., Abu-Alruz & Khasawneh, 2013; Barbarà-i-Molinero, Cascón-Pereira, & Hernández-Lara, 2017; Sabancıogullari & Dogan, 2015).

Overall, the research suggests that there are both internal (e.g., beliefs, prior experiences) and external (e.g., social expectations, departmental contexts) factors that may influence faculty members’ professional identity (e.g., Abu-Alruz & Khasawneh, 2013; Samuel & Stephens, 2000; Starr et al., 2006). Faculty, particularly those at research-intensive universities, can identify with both the teacher/educator profession and the research profession. However, these identities can often be in tension, which may be due to the institutional and individual value placed on these two responsibilities (e.g., Brownell & Tanner, 2012; Fairweather, 2008; Robert & Carlsen, 2017).

First, the institutional value placed on research and teaching may have an impact on faculty’s professional identity. In a study of the research productivity of 25,780 faculty across 871 institutions, Fairweather (2002) found that the more time faculty spend on teaching, the lower their average salary, regardless of institution type. Furthermore, Fairweather (2002) found that only 22% of the faculty studied were able to attain high productivity in both teaching and research during the study’s two 2-year periods, with untenured faculty being the least likely to be highly productive in both. Exploring faculty perceptions of institutional support for teaching in Louisiana (n = 235), Walczyk, Ramsey, and Zha (2007) determined that faculty tended to include teaching as part of their professional identity when they perceived their institution as valuing teaching in the tenure and promotion process. These studies suggest that the ways in which institutions value teaching, or not, may play an important role in faculty’s professional identity.

Second, the value an individual faculty member places on teaching and research, as well as how others (e.g., colleagues, advisors) perceive the value of teaching and research, may impact faculty professional identity. For example, Fairweather (2005) describes faculty perceptions that time spent improving teaching leads to less time for research, which leads to less faculty involvement in reform, even when faculty are committed to student learning (Leslie, 2002). This may suggest that, despite a strong teaching identity, faculty members’ own perceptions may negatively impact this identity. Robert and Carlsen (2017) explored the tension between teaching and research through a case study of four professors and concluded that personal career goals and aspirations, likely formed during graduate school, may lead faculty to devalue teaching. Further, Brownell and Tanner (2012) hypothesized that tensions between “scientists’ professional identities—how they view themselves and their work in the context of their discipline and how they define their professional status—may be an invisible and underappreciated barrier to undergraduate science teaching reform, one that is not often discussed, because very few of us reflect upon our professional identity and the factors that influence it” (p. 339). Despite the perceived importance of both teaching and research identities for faculty in implementing EBIPs, no instrument to our knowledge measures both. Thus, FIBIS incorporates both teaching and research identity components into the survey to capture these potential tensions.

Conceptual framework for EBIP implementation

The research described above suggests that there are a variety of factors that can promote or impede faculty implementation of EBIPs. These factors include:

  • Barriers related to internal and external factors

  • Teaching and research identities

  • Faculty beliefs

  • Institutional supports

Further, these factors can be influenced by:

  • Departmental contexts

  • Mentors and colleagues

  • The tension between teaching and research identities

Table 3 lists and describes several models related to instructor decision-making and use of EBIPs in the education literature. We are not attempting to provide a new model for interpreting faculty decisions to use EBIPs; instead, we have adapted Lattuca and Pollard’s (2016) model of faculty decision-making about curricular change. This model describes a number of important factors while maintaining simplicity and a specific focus on instructional barriers in higher education. Into Lattuca and Pollard’s original model, we incorporated other elements from the literature, such as dissatisfaction with practice (Andrews & Lemons, 2015) and the role of prior experiences, as opposed to institutional environment, in shaping beliefs about teaching and the decision to engage in EBIPs (Oleson & Hora, 2014). As seen in Fig. 1, there are extra-institutional influences (on faculty and the institution/department), external influences (from the institution and department on the faculty), and internal influences (from the faculty themselves in the form of beliefs, identity, prior experiences, and knowledge). This aligns with Henderson et al.’s (2015) bridge model for sustained innovation, which shows individual, department, institution, and extra-institution factors affecting teaching practices. In Lattuca and Pollard’s model, however, the extra-institutional and external influences are interpreted through faculty members’ internal lens, informing their motivation to change and, ultimately, their decision to engage in the use of EBIPs. In this study, we examine identity, which includes components of extra-institutional, external, and internal influences, as well as barriers to the use of EBIPs, which include external and internal influences. We also examine the dissatisfaction component of motivation to change.

Table 3 List of frameworks related to faculty decision-making and use of EBIPs
Fig. 1

Model for faculty’s decision-making process on using EBIPs. Underlined terms were included in the FIBIS instrument and measured in this study. (Note: arrows indicate theoretically based relationships, not empirically tested ones.)

Purpose and rationale

The prior research paints a complex picture around faculty instruction and the choices made to implement or not implement EBIPs. As reviewed in detail by Williams et al. (2015), there are a variety of self-report instruments that measure faculty teaching practices in specific STEM disciplines (e.g., Borrego, Cutler, Prince, Henderson, & Froyd, 2013; Henderson & Dancy, 2009; Marbach-Ad, Schaefer-Zimmer, Orgler, Benson, & Thompson, 2012) and a few that measure practices in STEM higher education broadly (e.g., Hurtado, Eagan, Pryor, Whang, & Tran, 2011). Currently, only one instrument we know of quantitatively measures faculty professional identity in higher education (Abu-Alruz & Khasawneh, 2013); however, no instrument includes research identity alongside teaching identity. A few previously developed instruments include small sections on barriers to EBIPs (e.g., Lund & Stains, 2015; Prince, Borrego, Henderson, Cutler, & Froyd, 2013); however, these sections do not comprehensively cover the potential barriers elucidated in the qualitative literature. To our knowledge, only two surveys quantitatively measure barriers to teaching in significant depth. A survey used by Shell (2001) focused on barriers to teaching critical thinking in general (no other EBIPs) and was developed for a nursing context only. More recently, Bathgate et al. (2019) developed a survey that included sections on use of evidence-based teaching and 30 questions assessing only the presence or absence of barriers and their associated supports (60 total questions). While an important contribution to the field, this work is limited in its nuance (presence or absence of barriers rather than magnitude of the barriers), does not address faculty identity or extensive faculty background details, and does not provide details on the validity of the instrument or publish the instrument itself. Despite the plethora of work on faculty use of EBIPs, faculty identity, and barriers to using EBIPs, no studies have sought to validate an instrument that possesses both breadth and nuance in order to systematically understand the relationships between these particular variables. The present study aims to fill this gap by developing and initially validating an instrument, the Faculty Instructional Barriers and Identity Survey (FIBIS), to examine relationships between (a) use of EBIPs, (b) barriers to using EBIPs, and (c) tensions in teaching and research identity. From our initial pilot of the FIBIS, we sought to answer the following research questions:

  1. What are faculty members’ reported use of and satisfaction with EBIPs, perceived instructional barriers, and professional identity?

  2. What relationship exists between these variables?

  3. Where do differences in barriers, identity, and reported use of EBIPs exist?

Methodology

Below, we describe the steps in the development and initial validation of the FIBIS instrument. We then detail the pilot study data collection and analysis methods. The final instrument can be viewed in the supplementary material accompanying the online article (Additional file 1: FIBIS Instrument Supplement). Note that all research was conducted with permission from our institution’s Institutional Review Board with informed consent of all voluntary participants whose data were used in this study.

Survey development and initial validation

The FIBIS instrument was developed based on best practices in instrument development derived from the literature (e.g., American Educational Research Association [AERA], 2014; Sawada et al., 2002; Walter et al., 2016), which helped us identify the steps in our process, detailed below. The FIBIS includes both Likert-scale and open-ended questions to quickly, yet descriptively, capture information on factors that the literature suggests influence the use of EBIPs but that have not all been quantified. Our survey development and validation steps included (1) reviewing the literature to construct the initial FIBIS sections and questions, (2) obtaining face and content validity of FIBIS from an expert panel, and (3) revising FIBIS based on panel feedback. The subsequent section overviews the FIBIS pilot study. The supplementary material accompanying the online article includes additional details about the survey development and initial validation.

Construction of initial survey

We began construction of the instrument by searching the literature to find questions that could be modified and adapted for FIBIS. This search resulted in the three main components of the FIBIS: (1) faculty use of and satisfaction with EBIPs (modified from Lund & Stains, 2015), (2) barriers to faculty implementation of EBIPs (developed from the qualitative literature), and (3) professional identity of academics who conduct research and teach (modified from Abu-Alruz & Khasawneh, 2013). Informed by the literature on changing instructional practice, we also included a section on contextual factors, which included the types and sizes of classes taught, department, level of engagement with the university’s teaching center, prior experiences, and satisfaction with current practice. The specific details follow in the next few paragraphs.

To identify instruments focused on understanding which EBIPs faculty use, we searched the terms “evidence-based practices,” “research-based practices,” or “active learning” combined with “higher education” and “STEM.” We found a survey instrument that included a portion in which instructors reported their familiarity and use of defined EBIPs (Lund & Stains, 2015), which included 11 total EBIPs: Think-pair-share, just-in-time-teaching, case studies, Process Oriented Guided Inquiry Learning (POGIL), SCALE-UP, interactive lecture, collaborative learning, cooperative learning, teaching with simulations, and peer instruction. After reading the EBIP name and definition, participants indicated their use of these EBIPs on a scale from 1 = never heard of it to 6 = I currently use all or part of it. In the initial stages of development, we used these 11 EBIPs in their original form and added an additional open-ended question for those who indicated they had used EBIPs to understand the extent to which participants were satisfied or dissatisfied with their practice.

An initial search of the literature using the search terms “barriers,” “higher education,” and “STEM” returned no instruments that measured the variety of barriers to EBIPs. Lund and Stains included a few barrier questions on their survey but not an extensive section. Walter et al.’s (2014) SCII survey focused on climate for instructional improvement rather than barriers themselves, serving a different, but related, purpose than ours. One instrument from nursing education addressed barriers to the use of critical thinking (Shell, 2001), and we used it as a point of comparison for the barriers questions in the FIBIS. With no other instruments to build on at that time, we developed our own barrier questions based upon a review of the barriers literature described above. For this part of the instrument, the list of barriers was compiled from the most relevant and extensive barrier articles (Brownell & Tanner, 2012; Dancy & Henderson, 2008; Elrod & Kezar, 2017; Henderson & Dancy, 2007; Michael, 2007; Porter, Graham, Bodily, & Sandberg, 2016; Turpen et al., 2016; Walder, 2015). The questions developed were organized around the four barrier types found in Table 1, which were similar to categories developed by Walder (2015).

Finally, we searched the literature for questions on professional identity with search terms “professional identity” or “teaching and research” combined with “higher education” and “STEM.” We found one instrument involving faculty identity (Abu-Alruz & Khasawneh, 2013), which organized faculty professional identity around four dimensions:

  1. Work-based (e.g., connection with the university community)

  2. Student-based (e.g., commitment to supporting students in the classroom)

  3. Self-based teaching (e.g., connection to the teaching community)

  4. Skill-based teaching (e.g., commitment to improving teaching)

However, there were no questions related to faculty research identity. We used this instrument as the basis to build the identity portion of our survey. We added two sections to their original survey to create a self-based research identity component and skill-based research identity component that mirrored the already present teaching identity sections. We also modified the overall work section of this survey to include analogous research identity statements. For example, for the statement “I am committed to the university mission, vision, and goals for teaching,” we added the analogous statement “I am committed to the university mission, vision, and goals for research.”

Face and content validity

Next, we sought to obtain face and content validity (AERA, 2014; Trochim, Donnelly, & Arora, 2016) of the survey through review by a panel of four experts. Both face and content validity seek to determine the degree to which a construct is accurately translated into its operationalization. Face validity examines the operationalization at face value to determine whether or not it appears to be a good translation of the construct, while content validity examines the operationalization against the construct’s relevant content area(s). To ensure the EBIP terms and definitions from Lund and Stains (2015), descriptions of barriers, and characterization of identity were valid, we included a science educator, two STEM faculty members, and the director of the institution’s teaching center in our expert panel. The two faculty members served as face validity experts and gave feedback regarding survey question clarity and readability. The director and the science educator served as content and face validity experts, providing feedback on the extent to which the questions captured the ideas of the constructs.

Survey revision

Based on the panel feedback, we made small modifications to the barriers to implementation and professional identity sections. The barrier statements were revised for clarity and accuracy based on the panel’s comments: a few relatively redundant statements were removed and others were reworded. For example, the statement “Educational support faculty/staff (e.g., educational developers, instructional designers, educational researchers) do not value my experience” was changed based on panel feedback to “Faculty/staff who support instructional development (e.g., educational developers, instructional designers, educational researchers) do not value my experience.” The identity portion was modified slightly for clarity but was mostly left untouched.

Descriptions of the various EBIPs were more extensively modified based on a glossary developed by our university’s teaching center, comments by the review panel, and a search of the extant literature. We also went back to the definitions used by the authors who branded and/or researched each EBIP. References included the SCALE-UP website (http://scaleup.ncsu.edu/), cooperative learning’s seminal article (Slavin, 1980), a review of active learning (Prince, 2004), POGIL.org, and peer instruction’s designer (Crouch & Mazur, 2001). Finally, the panel agreed that the Likert scale for reported use of EBIPs should be collapsed to 1 = never heard of it, 2 = heard of it, 3 = familiar but have not used it, 4 = familiar and plan to implement, and 5 = used it in the past or currently use it.

The modified version of FIBIS was then sent to the director of the university teaching center for final review. After receiving the director’s comments on the modified version, a final meeting between the director and the researchers was held, and consensus was reached on the final version of the instrument with a limited number of final modifications. The final version of the survey was entered into Qualtrics and emailed to the two faculty reviewers and the two researchers who designed the survey, who took it to verify that the Qualtrics version was working properly and had no discernible problems or errors in the questions. A few comments were made regarding flow, and appropriate changes were made. The final survey sent to faculty for this study consisted of four sections: awareness and adoption of EBIPs (11 Likert questions with two scales, five free response), barriers to implementation (46 Likert questions, two free response), professional identity (31 Likert questions, one multiple choice, four free response, two fill-in-the-blank), and demographics/background (ten multiple choice questions, two free response).

Pilot study

Data collection

The FIBIS survey was sent via Qualtrics in November of 2017 to 150 STEM faculty members who were part of a larger research project at our institution. Of the 150 faculty emailed, 86 (57%) completed the survey. Of those, 69 respondents indicated they both taught and conducted research within their discipline or around teaching and learning (as opposed to only teaching courses). Since the survey included questions on both teaching and research identities, only those 69 respondents were used for the survey analysis. Table 4 overviews the demographics of these participants.

Table 4 Overview of participant demographics

Data coding

Responses to the 11 EBIP questions were first coded to indicate whether the instructor used or did not use each practice (i.e., whether they chose 5 = I have used it in the past or I currently use it). Given previous qualitative research identifying time/effort as an important barrier to implementing EBIPs (e.g., Michael, 2007; Walder, 2016; Shadle et al., 2017), we chose to group the EBIPs, with feedback from our expert panel, based on the perceived effort needed to implement the practice (Table 5). This helped us organize the types of EBIPs, better understand the data on barriers to using EBIPs, and elucidate why faculty might or might not be using particular practices. Each participant received a percent score for how many EBIPs in each effort category they indicated they were implementing. For example, if a participant indicated that they used POGIL, case studies, simulations, and peer instruction, they would receive a 33% for high-effort EBIPs, 25% for moderate-effort EBIPs, and 25% for low-effort EBIPs.

Table 5 EBIP effort-to-implement groups

We also calculated the amount of dissatisfaction participants had with implementing EBIPs. Based on participants’ responses about their dissatisfaction with the EBIPs they had implemented, each participant received a sum score of the number of EBIPs with which they were dissatisfied. The percent dissatisfaction was calculated by dividing the number of EBIPs they were dissatisfied with by the total number of EBIPs they reported implementing. For example, if a participant reported implementing six EBIPs and was dissatisfied with three of them, they would receive a score of 50%.
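To illustrate the two scoring procedures above, the following is a minimal sketch of how the percent scores could be computed. The effort groupings and participant responses in the sketch are hypothetical placeholders (the actual groupings are defined in Table 5); it is not the authors’ analysis code.

```python
# Minimal sketch of the two scoring procedures described above.
# The effort groupings below are hypothetical placeholders (the actual
# groups are listed in Table 5), and the example data are illustrative.

EFFORT_GROUPS = {
    "low": ["think-pair-share", "interactive lecture", "peer instruction"],
    "moderate": ["case studies", "collaborative learning",
                 "cooperative learning", "simulations"],
    "high": ["POGIL", "SCALE-UP", "just-in-time teaching"],
}

def effort_scores(used_ebips):
    """Percent of the EBIPs in each effort category that a participant uses."""
    return {
        group: 100 * len(set(used_ebips) & set(members)) / len(members)
        for group, members in EFFORT_GROUPS.items()
    }

def percent_dissatisfaction(used_ebips, dissatisfied_ebips):
    """Share of a participant's implemented EBIPs they are dissatisfied with."""
    if not used_ebips:
        return None  # undefined when no EBIP use is reported
    return 100 * len(set(dissatisfied_ebips) & set(used_ebips)) / len(used_ebips)

# A participant who implements six EBIPs and is dissatisfied with three of them
used = ["POGIL", "case studies", "simulations", "peer instruction",
        "collaborative learning", "interactive lecture"]
dissatisfied = ["POGIL", "case studies", "simulations"]
print(effort_scores(used))                          # roughly {'low': 66.7, 'moderate': 75.0, 'high': 33.3}
print(percent_dissatisfaction(used, dissatisfied))  # 50.0
```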

Identifying constructs

After coding the data, we then calculated reliability for the constructs/dimensions under instructional barriers and professional identity. Since the professional identity questions were modified from a prior instrument (Abu-Alruz & Khasawneh, 2013), the constructs were based upon the prior instrument’s dimensions of work-based, self-based, and skill-based identity. Based on the framework in Fig. 1, these dimensions included extra-institutional (e.g., how connected they feel to the greater professional community), external (e.g., how connected they feel to the university), and internal (e.g., their passion for research and teaching) influences. Barriers to instruction questions were developed for this survey and were not previously organized by construct. Due to the small sample size, the data set was not appropriate for exploratory factor analysis (EFA) to identify barrier groupings (Costello & Osborne, 2005); thus, we developed initial groupings from the literature as well as feedback from our expert panel.

Grouping the barrier questions into meaningful constructs was an iterative process of (1) reviewing correlations between questions, (2) calculating a Cronbach’s alpha reliability score for the group of questions (Cronbach, 1951), and (3) identifying questions to remove based on low/high correlations and improvement of reliability upon removal. Initially, we used the four categories of barriers that were formed from the literature review. However, when analyzed, these groupings were not reliable. Through iterative grouping and checking of reliability, we formed seven groupings, five of which were reliable (α > .68), divided into external and internal categories based on our framework. Thus, there were a total of five professional identity dimensions and five instructional barrier constructs (all αs > .68) (Table 6). These ten constructs were used in the analysis described in the next section. We acknowledge the limitations of our methods in validating the barriers section of the instrument and are conducting a follow-up study in order to psychometrically evaluate the validity of these barriers questions.
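As an illustration of this kind of iterative reliability check, the sketch below computes Cronbach’s alpha for a candidate item set and drops items one at a time while doing so improves reliability. Only the α > .68 threshold is taken from the text; the DataFrame, item names, and the simple pruning rule are assumptions for illustration rather than the authors’ exact workflow.

```python
# Sketch of an iterative reliability check for one candidate barrier
# construct, assuming a pandas DataFrame of Likert responses (rows =
# participants, columns = items). The .68 threshold comes from the text;
# the "drop the item that most improves alpha" rule is an illustrative
# simplification, not the authors' exact procedure.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

def prune_items(items: pd.DataFrame, threshold: float = 0.68) -> pd.DataFrame:
    """Drop items one at a time while doing so raises alpha and alpha is below the threshold."""
    current = items.copy()
    while current.shape[1] > 2:
        alpha = cronbach_alpha(current)
        if alpha >= threshold:
            break
        # alpha of the construct if each item were removed
        drop_alphas = {col: cronbach_alpha(current.drop(columns=col))
                       for col in current.columns}
        best_item, best_alpha = max(drop_alphas.items(), key=lambda kv: kv[1])
        if best_alpha <= alpha:
            break  # no single removal improves reliability
        current = current.drop(columns=best_item)
    return current

# Example with synthetic Likert responses for six hypothetical barrier items
rng = np.random.default_rng(0)
fake = pd.DataFrame(rng.integers(1, 6, size=(69, 6)),
                    columns=[f"barrier_item_{i}" for i in range(1, 7)])
print(cronbach_alpha(fake), prune_items(fake).columns.tolist())
```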

Table 6 Overview of instructional barriers and professional identity constructs

Quantitative data analysis

To answer research question 1, we descriptively explored the data by calculating means, standard deviations, and frequencies for different constructs and sub-constructs. For example, we calculated the frequency of participants who reported using and were dissatisfied with each of the EBIPs. As another example, we calculated means and standard deviations for each of the five professional identity dimensions. To understand participants’ professional identity, we ran four paired t tests to test the hypothesis that participants at our research-intensive university would have higher research identities than teaching identities. We used a more conservative p value to account for additional statistical tests conducted (p < .05/4).
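A minimal sketch of these paired comparisons with a Bonferroni-adjusted alpha is shown below. The data and the specific pairs of identity dimensions compared are synthetic placeholders, since the article does not publish the pilot data or the exact pairings.

```python
# Sketch of the four paired comparisons with a Bonferroni-adjusted alpha
# (.05 / 4). The DataFrame and its column names are synthetic placeholders,
# not the FIBIS identity dimension labels or the pilot data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
identity = pd.DataFrame({                      # synthetic stand-in for n = 69 participants
    "self_teaching": rng.integers(3, 6, 69),
    "skill_teaching": rng.integers(3, 6, 69),
    "self_research": rng.integers(3, 6, 69),
    "skill_research": rng.integers(3, 6, 69),
})

PAIRS = [                                      # illustrative pairs of dimensions to compare
    ("self_teaching", "self_research"),
    ("skill_teaching", "skill_research"),
    ("self_teaching", "skill_teaching"),
    ("self_research", "skill_research"),
]
ALPHA = 0.05 / len(PAIRS)                      # Bonferroni-style adjustment (p < .05/4)

for a, b in PAIRS:
    t, p = stats.ttest_rel(identity[a], identity[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.3f}, significant = {p < ALPHA}")
```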

For research question 2, we ran correlations to understand relationships between use of EBIPs, instructional barriers, and identity dimensions. For research question 3, we calculated descriptive statistics for subgroups of participants (e.g., by participant race, prior experience, instructor type). For example, we calculated means and standard deviations for under-represented minority (URM) and non-URM participants. A Mann-Whitney U non-parametric test was used to explore differences in reported use of EBIPs, barriers, and professional identities between male and female faculty.
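The sketch below illustrates the correlation and Mann-Whitney U analyses under the same caveat: the construct names, scores, and groupings are synthetic placeholders rather than the FIBIS data.

```python
# Sketch of the RQ2 correlation analysis and the RQ3 Mann-Whitney U
# comparison. Column names and groupings are synthetic placeholders,
# not the FIBIS construct labels or the pilot data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
scores = pd.DataFrame({
    "low_effort_ebip_pct": rng.uniform(0, 100, 69),   # % of low-effort EBIPs used
    "student_barriers": rng.uniform(1, 5, 69),        # barrier construct mean
    "self_teaching_identity": rng.uniform(1, 5, 69),  # identity dimension mean
})
gender = rng.choice(["female", "male"], 69)

# RQ2: pairwise correlations between construct scores (Pearson by default)
print(scores.corr())

# RQ3: non-parametric comparison of two groups on a construct
female = scores.loc[gender == "female", "low_effort_ebip_pct"]
male = scores.loc[gender == "male", "low_effort_ebip_pct"]
u, p = stats.mannwhitneyu(female, male, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```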

Qualitative data analysis

The survey contained several open-ended questions regarding participants’ satisfaction/dissatisfaction with use of EBIPs, most significant perceived barriers, and experiences in graduate school related to teaching and research. These data were used to inform portions of the quantitative results and revisions to the instrument itself. The first author coded the free response questions in the general inductive content analysis tradition (Patton, 2002; Creswell, 2003) using a cyclical process of open coding and analysis of emergent themes. Responses were read over initially for common themes, tentative nodes were created, and the responses were then coded into those nodes. Nodes were revised as the analysis proceeded. Once responses were coded, nodes were reviewed for overall themes, and some were combined and modified as appropriate until a final list of categories was created. The final list of codes was discussed between the two researchers to ensure the data were represented by the coding.

Pilot data results

The intent of developing the FIBIS was to provide STEM stakeholders a quick, yet descriptive, method to capture information on factors that impact instructors’ use of and satisfaction with EBIPs. Our sample is small and specific to a set of STEM instructors at our institution, so these results are meant to demonstrate the ways in which FIBIS can be used rather than to make definitive claims. Our pilot data results, described below, suggest that FIBIS has the potential to identify faculty-reported use of and satisfaction with EBIPs, perceived instructional barriers, and professional identity. When possible, we demonstrate how FIBIS can be used to identify significant differences between groups and constructs. When small subgroup sample sizes limit the ability to make inferential claims, we present descriptive data that, with a larger sample, could support inferential claims. Further work on FIBIS is being conducted with a larger sample to more rigorously validate the instrument and these findings.

RQ1: Characterizing use of EBIPs, instructional barriers, and professional identity

Use of EBIPs

Overall, participants were most familiar with and most often used interactive lecture and collaborative learning in their STEM courses (Fig. 2). Among participants who did not use a given EBIP, the largest percentage appeared to be familiar with the EBIP but did not intend to use it (i.e., 3-familiar but not used in Fig. 2). The least known EBIP was SCALE-UP, with over a third of participants never having heard of the practice.

Fig. 2

Participant familiarity and use of EBIPs. Dotted lines emphasize differences in percentages for each EBIP

When exploring faculty-reported use of and dissatisfaction with these practices, we observed that the largest percentage of faculty was dissatisfied with the least often used practices (Table 7). The third largest percentage of dissatisfaction in practice, however, was for faculty who implemented collaborative learning.

Table 7 Participant use and dissatisfaction with EBIPs

Professional identity

Participants overall held both strong teaching and research professional identities (Table 8). There were no significant differences between participants’ teaching and research identities. Fifteen participants (21.7%) indicated that they conducted education research outside their disciplinary research.

Table 8 Participant professional identity descriptives

However, there were significant differences between participants’ self-based (i.e., connection to the community) and skill-based (i.e., commitment to improving) identities. For both the teaching and research identities, participants’ skill-based dimension was significantly higher, suggesting that these participants are willing to, and interested in, improving in both domains.

Instructional barriers

Both the Likert data and the inductive coding of the open-ended responses point to similar top barriers cited by participants as impacting their implementation of EBIPs (Table 9). Typical issues of time, institutional value of teaching, and appropriate classrooms were most cited by faculty when asked about barriers to using EBIPs, but when they were asked what they were dissatisfied with in their teaching, internal influences relating to beliefs, confidence, and knowledge were by far the most reported. For example, one faculty member explained, “I don’t feel that what I've done is enough to take advantage of the benefits of active learning strategies.” Another acknowledged, “I am overly reliant on lecture and I aspire to make the student’s in-class experience more interactive. There is some interactiveness built into my courses (more than many of my peers), but I can do a better job and plan to do a better job.” These types of comments were more frequent than comments of dissatisfaction related to external factors.

Table 9 Overlap in top reported barriers to instruction from qualitative and quantitative data

RQ2: Relationship between EBIPs, barriers, and professional identity

There appeared to be significant moderate and strong relationships between reported implementation of EBIPs, professional identity, and perceived barriers (Table 10). There were a few significant relationships observed between EBIPs when organized by effort level. Participants who reported implementing more low-effort EBIPs (e.g., think-pair-share) also tended to report implementing moderate-effort EBIPs (e.g., case studies) and tended to have a stronger teaching identity in both the self and skill dimensions. Participants who reported implementing more moderate-effort EBIPs tended to have significantly lower barriers related to their own beliefs (e.g., negative student evaluations).

Table 10 Correlations between implementation of EBIPs, perceived barriers to implementation, and professional identity

There was no significant relationship between participants’ implementation of low-, moderate-, or high-effort EBIPs and their percent dissatisfaction with implementing EBIPs. In other words, participants who reported using more high-effort EBIPs did not have a higher level of dissatisfaction with their EBIP practice than participants who reported using fewer high-effort EBIPs. Participants’ level of dissatisfaction with the use of EBIPs was significantly and positively correlated with half of the instructional barrier categories. Participants who had a higher percent dissatisfaction tended to perceive students as greater barriers to instruction, hold negative beliefs about EBIPs that more often barred implementation, and report more barriers related to negative prior experiences with EBIPs.

Of the five barrier constructs, barriers related to faculty beliefs appeared to be correlated most often with other constructs. Faculty who held more negative teaching beliefs about EBIPs tended to also have perceptions of students as barriers as well as perceived departmental barriers. There was a strong positive correlation between perceived departmental barriers and perceived limited supports. Participants who had negative prior experiences with EBIPs as either an instructor or student also tended to perceive students as barriers.

Finally, of the five professional identity dimensions, work identity appeared to be most often correlated with other constructs. Faculty who perceived higher departmental barriers tended to have lower work identities (i.e., feel less connected with the university and the larger academic community). Work identity was also significantly and positively correlated with self-teaching, self-research, and skill-research dimensions of professional identity.

Not surprising was the strong, positive correlation between the two teaching identity dimensions. In other words, participants who felt connected to the teaching community at the university also felt committed to improving their teaching. The same relationship was true for the two research dimensions. While no EBIP effort groupings were correlated with the professional identity constructs, there were significant relationships between identity and perceived departmental barriers. Participants who perceived higher departmental barriers had a lower work identity. Similarly, participants who felt more committed to improving their research skills had lower perceived departmental barriers related to implementing EBIPs.

RQ3: Differences in EBIP implementation, identity, and barriers

Demographic differences

Differences existed between faculty of different status, gender, and ethnicity. First, there were significant differences for faculty of different status in their beliefs about implementing EBIPs, F(3, 65) = 3.76, p = .015. Post-hoc comparisons between instructor types, using a Bonferroni adjustment, identified tenure-track professors as holding teaching beliefs that were significantly more of a barrier to the use of EBIPs (M = 2.44, SD = .54) than teaching faculty (M = 1.83, SD = .47). Second, when exploring participant gender, female participants reported significantly greater use of think-pair-share (M = 4.48, SD = 1.12) than their male counterparts (M = 3.39, SD = 1.68) (U = 421, n1 = 22, n2 = 45, p < .01). Males appeared to be significantly more familiar with and more likely to use simulations in teaching (M = 3.65, SD = 1.37) than females (M = 2.83, SD = 1.37), F(1, 67) = 5.58, p = .021. Third, descriptively, there appeared to be differences in URM professional identity and perceived instructional barriers (Table 11). While the sample is small, these data may suggest that URM participants have a higher self-teaching identity (M = 4.52, SD = .71) and self-research identity (M = 4.43, SD = .68) than non-URM participants (M = 4.39, SD = .43 and M = 4.18, SD = .99, respectively). Further, URM participants’ beliefs about implementing EBIPs were lower (M = 1.78, SD = .73) than those of non-URM participants (M = 2.06, SD = .61), and their percent implementation of high-effort EBIPs was higher (URM: M = 38.09%, SD = 35.63; non-URM: M = 32.80%, SD = 29.87).

Table 11 Comparison of URM professional identity with perceived instructional barriers

Dissatisfaction with practice

Looking at just the participants who reported implementing a particular EBIP, there were descriptive differences in perceived barriers and self-based teaching identities. For participants who reported implementing collaborative learning (n = 54), there were differences in perceived barriers between those who were and were not satisfied with using collaborative learning (Fig. 3). While the number of dissatisfied participants was low (< 6) for the remaining EBIPs, the trends appeared similar. Thus, there may be a relationship between participants’ perceived barriers and their satisfaction with implementation.

Fig. 3

Descriptive differences in perceived barriers for participants who were satisfied and dissatisfied with their use of collaborative learning

Similarly, participants who implemented collaborative learning but were dissatisfied with their practice held self-based teaching identities that were lower than those of participants who were satisfied with their practice of implementing collaborative learning (Fig. 4). There were virtually no differences between the two groups on their work identity, skill-based teaching dimension, or either of the research-based dimensions. These data represent a small number of participants; however, they may suggest that participants’ connection to the teaching community differs when they are implementing EBIPs that do not go well for them.

Fig. 4

Descriptive differences in professional identity dimensions for participants who were satisfied and dissatisfied with their use of collaborative learning

Departmental contexts

Participants from different departments appeared to have different levels of implementation of EBIPs (Table 12). While the numbers are small for each department and may not be representative of the department itself, there did appear to be differences in the frequency of faculty who were using particular EBIPs. The majority of social science and physics/astronomy department participants reported implementing very few of these EBIPs. Conversely, the majority of participants from the chemistry and computer science departments reported implementing many of the practices.

Table 12 Percentage of faculty who implement EBIPs in each department

Interestingly, the relationships between participants’ knowledge and use of EBIPs, identity, and barriers differed across departments. For the chemistry department, participants who implemented a larger percentage of the EBIPs tended to have a stronger work identity (r = .756, p = .049) and skill-based research identity (r = .898, p = .006) along with lower barriers related to their own beliefs about EBIPs (r = − .762, p = .046). No other significant relationships existed between the percent of familiar EBIPs used and other variables for participants in other departments.

Discussion

The present study sought to develop, initially validate, and pilot the Faculty Instructional Barriers and Identity Survey (FIBIS). Our pilot data provided feedback that was used to make small, final revisions to the instrument. Although this was a pilot study with a small sample, we were able to demonstrate how FIBIS can be used to characterize STEM faculty members’ use of and satisfaction with EBIPs, barriers to implementing EBIPs, and professional identity. Below, we discuss how our FIBIS results align with previous work exploring these different factors to provide evidence for construct validity, or the degree to which the factors measure what they were intended to measure (AERA, 2014). We also use our pilot data to suggest a modification to the decision-making framework, which will be tested in future work. Lastly, we demonstrate how FIBIS could be used at other institutions for meaningful and practical STEM education transformation.

EBIPs, barriers, and identity

Prior research suggests that science faculty may have tensions between their research and teaching responsibilities (e.g., Brownell & Tanner, 2012), which drove the development of the identity section of FIBIS with both teaching and research identity questions. Given our own context, we hypothesized that there would be significant differences between the teaching and research identity components of FIBIS in our sample of faculty at our research-intensive university. However, our data did not show any relationship between research and teaching identities. Such a finding aligns with a larger body of literature that shows a lack of relationship (e.g., Hattie & Marsh, 1996; Jenkins, 2004) or a small, positive correlation (uz Zaman, 2004) between research and teaching. Our data may suggest that FIBIS can identify when faculty have both strong teaching and research professional identities. However, further FIBIS testing and follow-up interviews with faculty would help confirm the validity of FIBIS in identifying tensions, or lack thereof, between teaching and research identities for STEM faculty.

Despite the small pilot sample, FIBIS was able to elucidate descriptive differences between URM and non-URM faculty identity dimensions. Given the concern with individuals’ persistence in STEM (Waldrop, 2015) and the importance of peer and mentor relationships for faculty of color (Ong, Smith, & Ko, 2018), understanding differences between STEM URM and non-URM faculty identity is important. Thus, being able to connect demographics and identity within FIBIS could potentially be a powerful, yet practical, way to identify how connected faculty are to their fields, teaching, and the institution. In future larger-scale studies, the FIBIS may be used to explicate whether significant differences in professional identity exist for different subgroups of faculty.

We also found that FIBIS was able to identify how STEM faculty internal characteristics (e.g., teaching identity, beliefs) played an important role in implementation of EBIPs at different levels of effort. These findings from FIBIS align with prior research suggesting that the background and beliefs faculty bring to the university play a key role in how resistant they are to implementing EBIPs (White, 2016; Robert & Carlsen, 2017; Oleson & Hora, 2014). Further, when asked what they were dissatisfied with in their teaching using EBIPs, faculty in this pilot study most often reported internal influences relating to beliefs, confidence, and knowledge. These results also align with research suggesting that implementation of EBIPs can be challenging (e.g., Stains & Vickrey, 2017) and demonstrate that FIBIS has the potential to identify not only faculty members’ decisions to use EBIPs but also whether, and why, they might be dissatisfied with their implementation. Unfortunately, these internal barriers cannot be mitigated by the university making more active learning classrooms available or giving faculty rewards; they point to deeply held beliefs founded on prior experiences with teaching and learning. Further exploration into FIBIS results for faculty with differing prior experiences with teaching and learning would be important to further understand these relationships.

A suggested modification to the framework

Prior studies have captured and conceptualized the factors that may impact whether or not faculty choose to implement EBIPs (e.g., Andrews & Lemons, 2015; Henderson, Dancy, & Niewiadomska-Bugaj, 2012; Gess-Newsome et al., 2003). In our study, we found that there appears to be an additional layer to understanding STEM faculty barriers: faculty who choose to implement EBIPs may be satisfied or dissatisfied with their practice. Models of implementation and decision-making (e.g., Lattuca & Pollard, 2016) should account for faculty satisfaction with newly attempted EBIPs; therefore, we have updated the conceptual framework presented in the literature review (Fig. 1) to a slightly revised model based on the initial results of this study (Fig. 5). Note that we have shown arrows as uni-directional, but these arrows could also be bi-directional; this would need to be tested in future quantitative work.

Fig. 5

Suggested revised model for faculty’s decision-making process on using EBIPs. Additions to the framework based on results of the FIBIS pilot data are in blue/white. The bolded arrow indicates greater influence in the relationship. Note that arrows indicate theoretically based, as opposed to empirically tested, relationships. This model will need to be confirmed in future studies

The revised model presented in Fig. 5 aligns with the fifth and final step of Rogers’ (2003) innovation-decision model, in which decision-making regarding use of an innovation occurs in five stages: (1) knowledge about the innovation, (2) persuasion about the benefits of the innovation, (3) decision to use the innovation, (4) implementation of the innovation, and (5) confirmation of continued implementation of the innovation. Our work highlights the importance of this last step and its connection to faculty dissatisfaction and beliefs, as well as the need to further study it and to provide the support needed to sustain the use of an innovation once faculty decide to try an EBIP. Indeed, Henderson and Dancy (2007) concluded in their qualitative study of barriers that physics faculty needed support to know what situational barriers they would likely face before implementing an innovation. In their study, when faculty faced situational barriers while implementing, they often quit using the innovation. Our work suggests this extends to STEM faculty more broadly and supports the previously cited work regarding the importance of supporting faculty both before and during implementation of EBIPs in order to sustain the use of said practices.

Our qualitative data demonstrated that internal factors may be most important to STEM faculty who implement EBIPs but are dissatisfied with their practice. While our results are preliminary and need confirmation from a larger study, they suggest that external factors might matter most in the initial adoption of EBIPs, whereas internal factors may matter most for STEM faculty satisfaction with their implementation of EBIPs. Figure 5 reflects this with a bolded arrow indicating a more influential relationship between internal influences and the decision to quit or persist when faced with dissatisfaction with EBIPs. Future research should explore the impact of internal and external influences with a larger STEM faculty sample across institutions and confirm this suggested revised framework.

Importance of context

Descriptively, we found differences in EBIP implementation across STEM departments, which aligns with prior studies demonstrating differences in EBIPs across departments (e.g., Landrum et al., 2017; Lund & Stains, 2015). Considering the FIBIS pilot results alongside these two previous studies, EBIP implementation appears to vary not only across departments within the same university but also within the same department across universities. In addition, the types of faculty (e.g., non-tenure teaching-track faculty vs. physics researchers) seen engaging most in the use of EBIPs across the three studies may indicate that another factor, such as faculty PD utilization (not reported in any of the studies), may be influencing faculty's use of EBIPs. Indeed, one successful reform program attributed faculty beginning and continuing to use EBIPs to extensive support from disciplinary experts trained in pedagogy and education research (Wieman, Deslauriers, & Gilley, 2013). Thus, FIBIS could be used not only to identify differences in faculty across departments but also to examine the impact of faculty PD. Further research on reported EBIP use, ways to reduce barriers, and how to shift professional identity is warranted.

Like Henderson et al. (2012), we did not find that faculty rank, years of teaching, or research identity influenced our results. However, these results contrast with two other studies of STEM faculty decision-making: Hora (2012) showed that faculty-identified moderators of decision-making included tenure/social status, and Landrum et al. (2017) found a difference in adoption of EBIPs at their university between tenure-track and non-tenure-track faculty. Nevertheless, Hora also found that faculty decision-making was moderated by individual factors such as personal initiative, faculty autonomy that encourages individual approaches to instruction, and doctoral training, findings with which our study's initial qualitative results appear to agree. With further psychometric testing of FIBIS to validate the barriers section of the survey, we aim to use the instrument in a national study across institutions to determine which factors matter, when, and where.

Limitations

While FIBIS has been developed and initially validated through the present study, there are still limitations to consider. First, due to the limited sample size (n = 69), we were unable to conduct a factor analysis to further establish the validity of the barriers section of the instrument. Our current work with the revised FIBIS will allow for psychometric testing to further validate the instrument. Second, the sample of faculty with whom we piloted FIBIS is likely not representative of the entire STEM faculty population at the university, nor is it representative of all faculty. This may limit our ability to understand how reliably and accurately FIBIS captures faculty use of EBIPs, barriers, and identity. Third, the EBIP questions used a scale that asked whether faculty implemented the practices but not how frequently, which limits the nuance of the data presented. The revised FIBIS, described below, includes a refined EBIP scale to better capture both the presence and frequency of EBIP use. Adding this nuance will allow for further exploration of the relationship between EBIP use, barriers, and identity.
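To make concrete the kind of psychometric testing planned for the barriers section, the sketch below illustrates how an exploratory factor analysis might be run once a sufficiently large sample is available. This is only an illustrative sketch, not an analysis from the present study: the choice of the factor_analyzer Python package, the file name, the item-name prefix, and the four-factor solution are all assumptions made for the example.

```python
# Minimal sketch of an exploratory factor analysis (EFA) for the FIBIS
# barriers items; the file name, item prefix, and factor count are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical data: one row per respondent, one Likert column per barrier item
items = pd.read_csv("fibis_barriers.csv").filter(like="barrier_")

# Check factorability of the item correlations before extracting factors
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett p = {p_value:.3f}, overall KMO = {kmo_overall:.2f}")

# Oblique rotation, since barrier factors would be expected to correlate;
# the number of factors would normally be chosen from a scree plot or parallel analysis
efa = FactorAnalyzer(n_factors=4, rotation="oblimin")
efa.fit(items)

# Inspect item loadings and the proportion of variance explained per factor
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))
print("Proportion of variance per factor:", efa.get_factor_variance()[1].round(2))
```

In practice, items with weak or cross-loading patterns would be flagged for revision or removal before the barriers section is used for research purposes.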

Final FIBIS revision

Based on the pilot data presented in the results section, we made small changes to FIBIS to create the final version provided in the supplementary online material of this paper. For example, the qualitative coding of open-ended questions focused on participant satisfaction with using EBIPs and barriers to instruction was used to inform and confirm the Likert barrier statements: we compared the barriers elicited from the qualitative data with the Likert barrier questions to identify any additional items that should be added to the quantitative barriers questions. For a complete description of what was changed and why, see the supplementary material accompanying the online article (Additional file 2: Survey Development Supplement). Note that these final changes were not tested in this study.

Potential uses of FIBIS

The intent of developing FIBIS was to provide a tool that can be used to (1) inform faculty development programs, (2) support institutional efforts to reduce faculty barriers to implementing EBIPs, and (3) systematically study STEM faculty and teaching in higher education. While we feel confident in our ability to achieve our first two goals, researchers should use caution when using the barriers section of the FIBIS for research until further analysis has been done to validate this section using exploratory factor analysis and a larger STEM faculty sample.

Based on the evidence we collected in our pilot study, we have developed a list of ways in which we will use the information at our own institution to address both external and internal barriers for our STEM faculty. This list is not intended to be used by other institutions as-is but rather to provide an example of how FIBIS data could be used to shift external influences:

  • There may be a need for the university to address external issues of institutional value and rewards, funding for building appropriate classrooms, and a climate in which active learning is normalized among students. Faculty developers at this institution could focus efforts on finding ways to reduce these barriers for STEM faculty.

  • PD programs at this institution could be created to concentrate on external supports: developing a strong teaching community for STEM faculty and helping faculty feel connected to the university as a whole (i.e., improving faculty work identity), since our data indicated that faculty who perceived higher departmental barriers tended to have lower work identities. Further, encouraging faculty cohorts within departments to engage in center programs and working directly with STEM department chairs may also promote implementation of EBIPs.

  • To support STEM URM faculty, STEM educators and faculty developers could partner with URM faculty to build diverse communities of practice across the university.

The FIBIS can also help identify internal influences that can be leveraged to promote change:

  • Faculty developers could identify STEM faculty who score highly on the skill-based teaching dimensions of FIBIS as potentially receptive to PD.

  • Faculty developers should consider demographics, prior experiences, and contexts as they work to support faculty in using EBIPs in undergraduate STEM classrooms.

  • Faculty developers could give focused support to STEM faculty during implementation of EBIPs to help address dissatisfactions that arise.

These approaches for addressing internal barriers align with aspects of effective faculty PD programs, such as sustained support for faculty during PD (e.g., Gehrke & Kezar, 2017; Rathbun, Leatherman, & Jensen, 2017), particularly support from disciplinary experts trained in pedagogy (Wieman et al., 2013). Recent work has also highlighted the importance of forming communities of practice to help faculty make the shift to using EBIPs, especially in large-enrollment courses (Tomkin, Beilstein, Morphew, & Herman, 2019), and the need to focus on improving supports, such as social support networks for faculty implementing EBIPs, rather than on addressing barriers (Bathgate et al., 2019). Further research is warranted to understand the ways in which faculty PD can improve reported EBIP use, reduce barriers, and shift professional identity.

Conclusion

This study sought to develop and initially validate the FIBIS instrument. Many of the exploratory findings from our FIBIS pilot align with previous work, suggesting that FIBIS can be used to capture faculty identity, use of and satisfaction with EBIPs, and barriers to instruction. While we cannot generalize our claims, the following suggestions for our institution demonstrate how results from FIBIS can inform efforts to reduce STEM faculty barriers to implementing EBIPs: (1) developing a strong teaching community (especially needed for persistence of URM faculty), (2) helping faculty connect to the university as a whole, and (3) working with departments to better support implementation of EBIPs. The results presented and the implications of these findings demonstrate the potential of FIBIS as a tool for examining factors that influence faculty instructional practice. Future work includes further validating the barriers component of the survey so it can be used to support efforts to bring about change in institutions of higher education. Beyond the development of the FIBIS instrument itself and the ways in which it can be used, we hope this article gives readers a glimpse of the work that has already been done across a variety of fields. By connecting these disparate bodies of literature, we hope researchers will build on previous work conducted across fields in order to meet our shared goal of bringing about change in higher education.