
Journal for STEM Education Research, Volume 2, Issue 1, pp 14–34

Expanding the Pipeline: the Effect of Participating in Project Lead the Way on Majoring in a STEM Discipline

Gary R. Pike and Kirsten Robbins

Abstract

Meeting the current demand for STEM graduates requires significantly increasing the number of students majoring in STEM fields. One program designed to increase the number of STEM majors is Project Lead The Way (PLTW). Using statewide data from Indiana, this research examined the effects of PLTW participation in high school on the likelihood of majoring in STEM during college. Propensity score matching and weighting were used to provide a rigorous evaluation of PLTW that would allow causal inferences to be made about program effectiveness. Results indicated that PLTW participation significantly increased the likelihood that students who attend college will major in a STEM discipline. The results also indicated a dosage effect for PLTW participation. Specifically, completing one PLTW course increased the likelihood of majoring in STEM by 0.16, and completing two PLTW courses increased the likelihood of majoring in STEM by 0.27. Completing three or more PLTW courses increased the likelihood of majoring in STEM by 0.38. Tests of the conditional independence assumption also revealed that it was unlikely that these results were the product of external, unmeasured variables. Thus, it appears likely that PLTW participation has a direct, causal effect on majoring in a STEM discipline during college.

Keywords

Learning by design · Program evaluation · Project Lead The Way · Propensity score matching · STEM

Competing effectively in the global economy of the twenty-first century requires that the United States substantially increase the educational attainment of its working-age adults (Carnevale and Rose 2011; Carnevale et al. 2010). The need is particularly acute in STEM fields, where the U.S. is being challenged for supremacy in the production of postsecondary degrees (National Academy of Sciences, National Academy of Engineering, and Institute of Medicine 2007, 2010). Carnevale et al. (2010) forecast there will be 2.8 million new STEM jobs in the coming decade, and 2 million will require at least a bachelor's degree. Although some of the demand can be met by students now enrolled in college, satisfying the need for STEM graduates requires significantly increasing the number of students majoring in STEM (President's Council of Advisors on Science and Technology 2010).

In response, President Obama created “Educate to Innovate” to improve math and science education in middle and high school, thereby increasing the number of STEM majors and graduates in college (Olitsky 2014). One initiative is Project Lead The Way (PLTW). Beginning with 12 high schools in New York in 1997, PLTW has grown into a national program in 10,500 schools in 50 states (Project Lead The Way 2018a). The present research examined the effects of PLTW participation on the likelihood of majoring in STEM during college. Propensity score matching was used to provide a rigorous evaluation of PLTW that may allow causal inferences to be made about program effectiveness.

Background

The Business-Higher Education Forum (2011) identified two factors influencing the decision to pursue a STEM major—interest in STEM and proficiency in math and science. The fact that choosing a STEM major is related to both interest and proficiency is significant given the mismatch between STEM interest and math/science proficiency among high school graduates. Almost two-thirds of the high school graduates who have the math/science proficiency needed to be successful are not interested in STEM careers (Business-Higher Education Forum 2012). Conversely, 58% of high school graduates who are interested in STEM careers meet the ACT College Readiness Benchmark for mathematics and 51% meet the Readiness Benchmark for science (Business-Higher Education Forum and American College Testing 2014).

Research has confirmed that interest and proficiency are key factors in pursuing a STEM degree. Almost two-thirds of the STEM graduate students and scientists interviewed by Maltese and Tai (2010) reported developing an interest in math and science before or during middle school. Studies also show that proficiency in math and science is essential for success in STEM during college (Crisp et al. 2009; Wang 2013a, b). Research has found that females are significantly less likely to major in a STEM discipline, and lack of interest in STEM appears to be a key factor in their decisions not to pursue a STEM degree, whereas math and science achievement appear to play a minor role (Heilbronner 2012; Sax et al. 2015). The underrepresentation of racial/ethnic minorities in STEM is well documented, and both interest and proficiency appear to be issues (National Science Foundation 2013). Less than 10% of Black and 12% of Latinx high school graduates are math/science proficient and interested in STEM (Business-Higher Education Forum 2011).

Studies show that interest in STEM is encouraged by educational experiences that emphasize problem solving and higher-order thinking (Brophy et al. 2008; President’s Council of Advisors on Science and Technology 2010). Developing an interest in math and science also allows high school students to begin taking rigorous courses that prepare them to be successful in college (Archer et al. 2010; Business-Higher Education Forum and American College Testing 2014). Not surprisingly, math/science proficiency is directly linked to the quality of math and science preparation in high school (Heilbronner 2011; Riegle-Crumb et al. 2012). Robinson (2003) also found that high school characteristics may indirectly affect choosing a STEM major in college. He reported that size of high school and the socioeconomic status of students attending the school were positively related to taking Advanced Placement (AP) courses, which was positively related to majoring in a STEM discipline during college.

PLTW is an activity- and problem-based curriculum designed to increase interest and proficiency in math and science (Project Lead The Way 2018b). The curriculum is based on the “Understanding by Design” model developed by Wiggins and McTighe (2005). Their approach encourages students to productively use content knowledge, rather than focusing on the acquisition of content (Wiggins and McTighe 2008). The PLTW curriculum provides a rigorous program of study aligned with both the Next Generation Science and Common Core standards (Achieve 2015; Common Core State Standards Initiative 2015; Project Lead The Way 2018b). PLTW offers year-long elective high school science classes in engineering, computer science, and biomedical science (Project Lead The Way 2018c). Introductory courses are designed to develop an understanding of and enthusiasm for the field. Schools may also choose from a variety of more specialized courses for their students. All courses include the introduction of science knowledge and skills using hands-on activities, and students use acquired knowledge to create projects that offer solutions to problems they are studying (Project Lead The Way 2018c).

Evaluations of PLTW suggest that the curriculum is effective in overcoming interest and proficiency deficits that discourage students from majoring in STEM. Studies found that PLTW participants took significantly more math and science courses and were more likely to complete a college preparatory curriculum in high school (Bottoms and Anthony 2005; Bottoms and Uhn 2007; Starobin et al. 2013). Research also found that PLTW participation was associated with higher scores on standardized tests (Bottoms and Anthony 2005; Bottoms and Uhn 2007; Schenk Jr. et al. 2012; Starobin et al. 2013; Van Overschelde 2013). Researchers at Iowa State University also reported that students who participated in PLTW were more likely to major in a STEM discipline (Schenk Jr. et al. 2012; Starobin et al. 2013).

Although encouraging, previous research is limited in two respects. First, it is not clear that previous research adequately accounted for students (or their parents) choosing to take PLTW courses. This selection effect can confound rigorous evaluation of program outcomes (Murnane and Willett 2011; What Works Clearinghouse 2014). Researchers have generally used matching in an effort to account for self-selection. Bottoms and Anthony (2005) and Bottoms and Uhn (2007) relied on stratified random sampling using a limited set of student demographics. The evaluations of PLTW in Iowa selected students for comparison groups 'in order to conduct a fair analysis' (Schenk Jr. et al. 2012, p. 4), but the selection procedures were not identified. Van Overschelde (2013) relied on propensity score matching to select a comparison group, but did not indicate whether the assumptions required to make rigorous inferences about program effectiveness were satisfied.

A second limitation of previous research is that the treatment has not been defined consistently. The majority of studies have defined PLTW participation as having taken at least one course (Schenk Jr. et al. 2012; Starobin et al. 2013), but Van Overschelde (2013) and Bottoms and Anthony (2005) defined PLTW participation as taking at least two courses. Bottoms and Uhn (2007) defined participation as three or more courses, whereas Dixon and Brown (2012) used the total number of courses taken as an indicator of PLTW participation. Inconsistencies in how PLTW participation has been defined make it impossible to determine what level of participation is needed to produce positive outcomes, or whether there is a dosage effect in which taking more PLTW courses produces more positive outcomes.

Using propensity score matching, the present research examined the effects of PLTW participation on the likelihood of majoring in a STEM discipline during the first year of college. Unlike previous studies, the current research assessed whether the assumptions of common support, covariate balance, and conditional independence were met, allowing causal inferences about PLTW effectiveness to be made with appropriate caution (Imbens and Wooldridge 2009; What Works Clearinghouse 2014). Multivalued treatment effect models were used to test whether there was a dosage effect for PLTW participation. Three levels of PLTW participation were evaluated: completing one, two, or three or more courses.

Research Methods

Conceptual Framework

The conceptual framework for the present research is based on Rubin’s (1974) model of causal inference. Using Rubin’s model, Angrist and Pischke (2009) explained that an outcome of an individual (e.g., majoring in a STEM discipline) can be represented by the equation:
$$ {Y}_i={Y}_{0i}+\left({Y}_{1i}-{Y}_{0i}\right){D}_i $$
(1)

where Yi is the outcome for student i, Y0i is the potential outcome if the student did not participate in PLTW, Y1i is the potential outcome if the student did participate in PLTW, and Di is an indicator of whether the student did (Di = 1), or did not (Di = 0), participate in PLTW.

In Rubin's formulation, (Y1i − Y0i) represents the causal effect of participating in PLTW. That is, the causal effect of PLTW participation is the difference in the potential outcomes of participating and not participating in PLTW for a student. Obviously, a student cannot have outcomes for both participating and not participating in PLTW. Students who participated in PLTW can only have an outcome for participation, and students who did not participate in PLTW can only have an outcome for nonparticipation. Outcomes that are not possible are referred to as counterfactuals—they are counter to the facts (Murnane and Willett 2011).

Educational researchers are usually concerned with the effects of an intervention or program on groups of students. When the focus is on group outcomes, expectations (i.e., arithmetic means) are used to represent causal effects. For students who did not participate in PLTW, the causal effect of PLTW participation is
$$ E\left({Y}_{1i}|{D}_i=0\right)-E\left({Y}_{0i}|{D}_i=0\right)\kern0.5em \mathrm{or}\kern0.5em E\left(\left[{Y}_{1i}-{Y}_{0i}\right]|{D}_i=0\right) $$
(2)
Conversely, the causal effect of PLTW participation for students who participated in PLTW is
$$ E\left({Y}_{1i}|{D}_i=1\right)-E\left({Y}_{0i}|{D}_i=1\right)\ \mathrm{or}\kern0.5em E\left(\left[{Y}_{1i}-{Y}_{0i}\right]|{D}_i=1\right) $$
(3)

An important advantage of examining mean effects is that the effects of an intervention or program do not have to be the same for all students (Imbens and Wooldridge 2009). The combined causal effect for participants and nonparticipants (eqs. [2] and [3]) is referred to as the average treatment effect (ATE), and the causal effect for participants is the average treatment effect on the treated (ATET). The average treatment effect on the treated is appropriate when evaluating the effect of a program on its participants, and the average treatment effect is appropriate when the objective is to determine the impact of extending a program to all students (Imbens and Wooldridge 2009).

Many evaluation studies collect outcome data for groups of students who participated or did not participate in an intervention or program and then compare the outcomes of the two groups. Angrist and Pischke (2009) observed that this comparison represents a naïve view of causal inference because the comparison is confounded by selection effects:
$$ E\left(\left[{Y}_{1i}-{Y}_{0i}\right]|{D}_i=1\right)+E\left({Y}_{0i}|{D}_i=1\right)-E\left({Y}_{0i}|{D}_i=0\right) $$
(4)

In equation [4], E([Y1i − Y0i]| Di = 1) represents the mean causal effect for participants (i.e., the average treatment effect on the treated), and E(Y0i| Di = 1) − E(Y0i| Di = 0) represents the selection effect. Random assignment of students to participant and nonparticipant groups solves the selection problem because E(Y0i| Di = 1) = E(Y0i| Di = 0). However, random assignment is not possible in many situations. In those instances, quasi-experimental methods must be used to account for selection effects (Murnane and Willett 2011). In the present research, propensity score matching was used to account for selection effects (Rosenbaum and Rubin 1983, 1985a). When properly conducted, propensity score matching can produce results comparable to randomized experiments (Dong and Lipsey 2018; Shadish et al. 2008).
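The decomposition in equation [4] can be illustrated with a small simulation (all values here are hypothetical stand-ins, not the Indiana data): when participation depends on the untreated potential outcome, the naïve group comparison overstates the ATET by exactly the selection effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical potential outcomes: Y0 without PLTW, Y1 with PLTW.
y0 = rng.binomial(1, 0.10, n)
y1 = np.minimum(y0 + rng.binomial(1, 0.20, n), 1)

# Selection: students who would major in STEM anyway are more likely to enroll.
d = rng.binomial(1, np.where(y0 == 1, 0.30, 0.05))

# Only one potential outcome is ever observed for each student.
y_obs = np.where(d == 1, y1, y0)

naive = y_obs[d == 1].mean() - y_obs[d == 0].mean()   # naive group comparison
atet = (y1 - y0)[d == 1].mean()                       # E[Y1 - Y0 | D = 1]
selection = y0[d == 1].mean() - y0[d == 0].mean()     # E[Y0 | D = 1] - E[Y0 | D = 0]

# Equation [4]: the naive comparison equals ATET plus the selection effect.
print(naive, atet + selection)
```

Because the selection effect here is positive, the naïve comparison exceeds the true effect on the treated, which is why a matching or weighting adjustment is needed.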

Multivalued treatment models can be used to examine whether there is a dosage effect for PLTW course completion. In multivalued treatment models, individuals can receive more than one type of treatment, or receive no treatment at all (Imbens 2000). The potential outcome means for multivalued treatments are represented by
$$ E\left({Y}_{ti}\right),\kern0.5em t=0,1,\dots, T $$
(5)

where t represents the treatment received (including no treatment, t = 0). The average treatment effect for a multivalued treatment is the difference between the potential outcome mean for a given treatment and the potential outcome mean for no treatment (or for some other level of the treatment). Similarly, the average treatment effect on the treated is the difference between the outcome mean for those receiving the treatment and the potential outcome mean for those individuals if they had not received the treatment, or if they received a different treatment.
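These definitions can be illustrated with a toy simulation (hypothetical values, not the study's data): a potential outcome mean is computed for each dosage level, and each level's treatment effect is its difference from the no-treatment mean. Assignment is made uniform here so no reweighting is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40_000

# Hypothetical dosage simulation: t = 0 (no PLTW) through 3 (three-plus
# courses), assigned uniformly; the outcome probability rises with dosage.
t = rng.integers(0, 4, n)
y = rng.binomial(1, 0.10 + 0.09 * t)

# Potential outcome mean (POM) for each treatment level.
poms = [y[t == k].mean() for k in range(4)]

# Average treatment effect of each dosage level versus no treatment.
effects = [poms[k] - poms[0] for k in range(1, 4)]
```

Comparing `effects` across levels is the logic behind the dosage analysis: if each additional course adds benefit, the effects increase with the treatment level.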

Data Source

The setting for the present research was public high schools in Indiana. Public education in Indiana generally ranks in the top quartile in the nation in mathematics achievement (National Assessment of Educational Progress 2017) and is considered to be one of the leading states for STEM education (Watt 2015). At the time of the study, PLTW programs were in more high schools in Indiana than in any other state in the country (C. Feldhouse, personal communication, December 18, 2014). Data for the research were obtained from the Indiana Department of Education (IDOE) and the National Student Clearinghouse (NSC). The IDOE provided data on student characteristics, test scores, types of diplomas earned, and PLTW participation for students who graduated in 2010. IDOE also provided data on high school characteristics. The NSC provided data on college enrollment and academic major. The NSC “Student Tracker” program maintains data on student enrollment at more than 3600 colleges and universities, representing 98% of all students enrolled at public or private postsecondary institutions (National Student Clearinghouse 2018).

Of the 70,317 students in the graduating cohort, 14,758 were excluded due to missing IDOE data; 8329 students were excluded because they did not enroll in college; and 28,813 students were excluded because colleges and universities did not report major field of study to the Clearinghouse. Notably, more than one-third of the colleges and universities in the study did not report major field for any students. An examination of the data obtained from NSC revealed that non-reporting of major field was a national phenomenon and was not limited to any one region or state.

Table 1 displays the characteristics of the students and high schools included in the study. Means and standard deviations are presented for all 18,417 students, the 1320 students who completed at least one PLTW course, and the 17,097 students who did not participate in PLTW. Cluster (i.e., aggregated) means and standard deviations are presented for high school characteristics.
Table 1
Means and standard deviations for the total sample, PLTW, and non-PLTW groups

Student/High School Characteristic              Total Sample    PLTW Students   Non-PLTW Students
                                                (n = 18,417)    (n = 1320)      (n = 17,097)
Sex (Male)                                      0.44 (0.50)     0.84 (0.36)     0.41 (0.49)
Race/Ethnicity (Underrepresented Minority)      0.14 (0.34)     0.13 (0.33)     0.14 (0.34)
Socio-Economic Status (Low SES)                 0.18 (0.39)     0.17 (0.37)     0.18 (0.39)
Received Honors Diploma                         0.53 (0.50)     0.59 (0.49)     0.53 (0.50)
Received Core40 Diploma                         0.44 (0.50)     0.38 (0.49)     0.44 (0.50)
Received General Diploma                        0.03 (0.17)     0.03 (0.16)     0.03 (0.17)
ISTEP+ Mathematics Score (Divided by 100)       6.05 (0.60)     6.28 (0.59)     6.04 (0.60)
ISTEP+ ELA Score (Divided by 100)               5.74 (0.43)     5.76 (0.42)     5.74 (0.43)
High School Enrollment (Divided by 1000)*       0.86 (0.67)     1.04 (0.70)     0.86 (0.67)
Percent of Underrepresented Minority Students*  14.75 (22.34)   14.04 (18.86)   14.75 (22.34)
Percent of Low-SES Students*                    34.55 (19.41)   34.36 (15.43)   34.55 (19.41)

*Means are aggregated (clustered) by high school (n = 418 for the total and non-PLTW samples, n = 210 for the PLTW sample)

An examination of the data in Table 1 revealed that the students in the study tended to be female and to have graduated with an Academic Honors diploma. Participants also tended to be white and relatively affluent: only 14% of the participants were underrepresented students of color, and 18% were low-SES (i.e., eligible for free or reduced lunch) students. Students who completed one or more PLTW courses tended to be male and to have graduated with an Academic Honors diploma. These students had relatively high eighth-grade standardized (ISTEP+) mathematics scores. The average enrollment per high school was 860 students, but students who completed PLTW courses graduated from high schools with an average enrollment of more than 1000 students. Of the 1320 students who completed PLTW courses, 669 (50.7%) completed one course, 365 (27.7%) completed two courses, and 286 (21.7%) completed three or more courses.

Measures

Three measures of PLTW participation were used in the study. The first was a dichotomous variable indicating whether a student had, or had not, completed one PLTW course. (Students completing more than one PLTW course were excluded.) The second measure indicated whether a student had completed two PLTW courses. Students completing one or three or more PLTW courses were excluded. The third measure indicated whether a student had completed three or more PLTW courses. Students completing one or two PLTW courses were excluded. Classification of Instructional Program (CIP) codes, developed by the National Center for Education Statistics (n.d.), were used to identify students majoring in STEM. Students with CIP codes linked to STEM disciplines by the National Science Foundation (2011) were assigned a value of 1, and students with any other CIP code were assigned a value of 0. Undecided students were classified as not majoring in STEM.
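The STEM coding rule can be sketched as follows. The two-digit CIP families shown are illustrative examples only; the NSF crosswalk used in the study is more extensive.

```python
# Illustrative two-digit CIP families treated as STEM (the actual NSF list is
# longer): computer science, engineering, biology, mathematics, physical sciences.
STEM_FAMILIES = {"11", "14", "26", "27", "40"}

def stem_major(cip_code):
    """Return 1 if the CIP code falls in a STEM family, else 0.
    Undecided students (no CIP code) are coded 0, as in the study."""
    if not cip_code:
        return 0
    return int(cip_code.split(".")[0].zfill(2) in STEM_FAMILIES)

print(stem_major("14.0101"), stem_major("23.0101"), stem_major(None))  # → 1 0 0
```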

A variety of student and high school characteristics were included in the study as covariates. These measures were selected because they were similar to the covariates used in previous evaluations of PLTW. Student demographics included whether the student was male, a member of an underrepresented minority group, or low-SES. Rigor of high school coursework was represented by whether students received an Academic Honors or General diploma. The Academic Honors diploma represents the most rigorous course of study in Indiana high schools and is recommended for students planning to attend a 4-year college or university. The Core40 diploma represents the next most rigorous course of study, and at least a Core40 diploma is required for admission to a public 4-year college or university in Indiana. Students graduating with a General diploma are not eligible for admission to an Indiana 4-year institution, but may be eligible to attend a 4-year institution in another state (Indiana Department of Education 2018). The General diploma represents the least rigorous course of study for students participating in the study. Students receiving a Core40 diploma served as the reference group because this is the diploma required for enrolling in a 4-year college or university in Indiana.

The eighth-grade standardized test scores used in the statewide assessment program (i.e., ISTEP+ mathematics and English/language arts [ELA]) scores were also included as covariates. The three high school characteristics included in the study were total enrollment (divided by 1000), the percent of underrepresented minority students in the high school, and the percent of low-SES students in the high school. Preliminary data screening revealed substantial positive skewness for the percentage of minority student enrollment. For the final data analyses, this covariate was transformed by first adding a constant and then calculating the natural logarithms of the resulting values (Afifi et al. 2012).

Data Analysis

The data were analyzed using propensity score matching. The Stata 15 Treatment Effects program was used for all of the analyses in this study (StataCorp 2017). The analyses were conducted in three phases: (1) logistic regression analyses were used to calculate propensities (i.e., probabilities) of participating in PLTW for all students in the sample; (2) treatment and comparison groups were matched based on their propensity scores; and (3) average treatment effects on the treated (ATET) were calculated to determine whether students who participated in PLTW were more likely to have majored in a STEM discipline than if they had not participated in PLTW. Separate analyses were conducted for students who completed one, two, or three or more PLTW courses.
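The three phases can be sketched in a few lines of code. Simulated data and a hand-rolled logistic fit stand in for the Stata Treatment Effects program here; all variable names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated covariates, PLTW participation, and STEM-major outcome.
x = rng.normal(size=(n, 3))
d = rng.binomial(1, 1 / (1 + np.exp(-(x @ np.array([0.8, 0.4, 0.2]) - 2))))
y = rng.binomial(1, np.clip(0.10 + 0.15 * d + 0.05 * x[:, 0], 0, 1))

# Phase 1: logistic regression (Newton-Raphson) gives each student a propensity score.
xb = np.column_stack([np.ones(n), x])
beta = np.zeros(xb.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-xb @ beta))
    beta += np.linalg.solve(xb.T * (p * (1 - p)) @ xb, xb.T @ (d - p))
ps = 1 / (1 + np.exp(-xb @ beta))

# Phase 2: match each participant to the nearest nonparticipant on the score.
treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
matches = control[np.abs(ps[control][None, :] - ps[treated][:, None]).argmin(axis=1)]

# Phase 3: ATET = mean outcome gap between participants and their matched comparisons.
atet = y[treated].mean() - y[matches].mean()
```

The simulated treatment effect is 0.15, so the matched estimate should land near that value; a naïve unmatched comparison would not, because participation depends on the covariates.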

Different matching methods can create different comparison groups and produce different treatment effects (D’Agostino Jr. 1998; Rosenbaum and Rubin 1985a). To test the robustness of results, four different matching methods were compared using the standard errors and confidence intervals for ATET estimates. For the first method, students were matched on a 1:1 basis using the propensity scores. Second, a 10:1 match based on propensity scores was utilized. The third and fourth methods used nearest-neighbor matching based on Mahalanobis distance measures derived from the covariates in the logistic regression models. For the third method, a 1:1 match was utilized, and a 10:1 match was used for the fourth method.
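The Mahalanobis-based methods (the third and fourth) can be sketched with hypothetical covariate matrices. The inverse covariance of the pooled covariates defines the distance metric, so correlated or high-variance covariates do not dominate the match.

```python
import numpy as np

rng = np.random.default_rng(0)
x_treated = rng.normal(size=(50, 3))    # hypothetical covariates, PLTW group
x_control = rng.normal(size=(500, 3))   # hypothetical covariates, comparison pool

# Inverse covariance of the pooled covariates defines the Mahalanobis metric.
vi = np.linalg.inv(np.cov(np.vstack([x_treated, x_control]).T))

# Squared Mahalanobis distance between every treated/control pair.
diff = x_treated[:, None, :] - x_control[None, :, :]
dist = np.einsum('tcj,jk,tck->tc', diff, vi, diff)

nn_1to1 = dist.argmin(axis=1)             # 1:1 nearest-neighbor match
nn_10to1 = dist.argsort(axis=1)[:, :10]   # 10:1 match: the ten nearest controls
```

Propensity score matching (the first two methods) follows the same pattern but measures distance on the one-dimensional propensity score instead of the full covariate vector.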

At each step in the data analysis, the assumptions of propensity score matching were tested. When propensity scores were calculated, overlap charts were produced to demonstrate that sufficient numbers of students who did not participate in PLTW could be matched with the PLTW participants. Adequate overlap between the treatment and comparison groups is referred to as the common support assumption (Imbens and Wooldridge 2009). Once matching was completed, the groups were compared to determine if the matched groups had similar means on the covariates used to calculate propensity scores. This covariate balance assumption was tested using standardized differences to avoid the confounding effects of sample size on t-tests (Rosenbaum and Rubin 1985b). According to Rosenbaum and Rubin (1985b), standardized differences of less than an absolute value of 0.10 indicate good covariate balance.
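The balance statistic is simple to compute: the mean gap between groups divided by the pooled standard deviation. The example values below are hypothetical.

```python
import numpy as np

def standardized_difference(x_t, x_c):
    # Rosenbaum and Rubin's (1985b) balance statistic: the mean gap scaled by
    # the pooled standard deviation, so it does not grow with sample size the
    # way a t-statistic does.
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    return (x_t.mean() - x_c.mean()) / pooled_sd

# Hypothetical matched groups; values below |0.10| indicate good balance.
rng = np.random.default_rng(0)
math_t = rng.normal(6.28, 0.59, 600)   # e.g., ISTEP+ math scores, PLTW group
math_c = rng.normal(6.26, 0.60, 600)   # matched comparison group
balance = standardized_difference(math_t, math_c)
```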

The final assumption, conditional independence, was evaluated after final ATET estimates were calculated. The assumption holds that there are no unmeasured variables that are related to the selection and outcome variables (Imbens and Wooldridge 2009; Murnane and Willett 2011). Although it was not possible to directly test this assumption, it was possible to assess the sensitivity of the ATET estimates to the effects of unobserved variables (Rosenbaum 2002). Because majoring in a STEM discipline was a dichotomous variable, Mantel-Haenszel bounds (mhbounds) were used to assess the sensitivity of the results (Becker and Caliendo 2007). Researchers note that most bounded effects range from 1.0 to 2.0 in the social sciences (Caliendo and Kopeinig 2005; DiPrete and Gangl 2004). Conditional independence (i.e., mhbounds) estimates were calculated using the program developed by Becker and Caliendo (2007) for Stata. Gamma values were allowed to range from 1.0 to 5.0 in increments of 0.1.

Given significant treatment effects for completing PLTW courses, multivalued treatment effects were analyzed using inverse weighting on the propensity score (Imbens 2000). Wooldridge (2007) has recommended using inverse probability weighted regression adjustment (IPWRA) to estimate treatment effects. The IPWRA estimates are "doubly robust" because they model the relationship between a set of covariates and both the outcome and the probability of treatment. If either the outcome model or the treatment model is correctly specified, IPWRA will provide estimates of causal effects (Kang and Schafer 2007).
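The weighting half of the doubly robust estimator can be sketched as follows, using simulated data with a known propensity (IPWRA additionally fits a weighted outcome regression, which is omitted here; all values are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical data: participation probability rises with a covariate x,
# and the outcome depends on both x and treatment.
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(x - 1)))      # propensity (known here; estimated in practice)
d = rng.binomial(1, p)
y = rng.binomial(1, np.clip(0.20 + 0.15 * d + 0.10 * x, 0, 1))

# Inverse-probability weights reweight each group to the full population,
# removing the dependence of treatment on x.
pom1 = np.sum(d * y / p) / np.sum(d / p)                          # E[Y(1)]
pom0 = np.sum((1 - d) * y / (1 - p)) / np.sum((1 - d) / (1 - p))  # E[Y(0)]
ate = pom1 - pom0              # simulated true effect is 0.15

naive = y[d == 1].mean() - y[d == 0].mean()   # confounded comparison, for contrast
```

The weighted estimate recovers the simulated effect, whereas the unweighted comparison is inflated because participants have higher values of x.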

Results

Calculating Propensity Scores

The logistic regression analyses indicated that the covariates used in the study were significantly related to completing one, two, or three or more PLTW courses (Wald χ2 = 474.77, df = 10, p < 0.001; Wald χ2 = 358.11, df = 10, p < 0.001; Wald χ2 = 283.03, df = 10, p < 0.001). Table 2 presents the odds ratios for the covariates used to calculate propensity scores. Results show that being male significantly increased the odds of completing one, two, or three or more PLTW courses. Likewise, students’ ISTEP+ mathematics scores were positively related to completing PLTW courses across all three treatment levels, and the percentage of low-SES students in a high school was also positively related to all levels of PLTW course completion. Receiving an Academic Honors diploma was positively related to completing two or three or more PLTW courses, whereas receiving a General diploma was positively related to completing three or more PLTW courses. High school size (i.e., total enrollment) increased the odds of completing one or two PLTW courses, whereas increases in the percentage of underrepresented minority students decreased the odds of completing one or two PLTW courses.
Table 2
Odds ratios for the covariates used to calculate propensity scores

Student/High School Characteristic              1 PLTW Course  2 PLTW Courses  3+ PLTW Courses
Sex (Male)                                      6.84*          7.05*           7.96*
Race/Ethnicity (Underrepresented Minority)      0.97           1.36            1.22
Socio-Economic Status (Low SES)                 0.92           0.95            0.82
Received Academic Honors Diploma                0.99           1.22*           1.43*
Received General Diploma                        0.99           1.02            2.02*
ISTEP+ Mathematics Score (Divided by 100)       1.71*          1.51*           1.87*
ISTEP+ ELA Score (Divided by 100)               0.78           1.24            0.68
High School Enrollment (Divided by 1000)        1.21*          1.20*           1.06
Percent of Underrepresented Minority Students   0.98*          0.99*           1.00
Percent of Low-SES Students                     1.02*          1.03*           1.02*
Constant                                        0.00*          0.00*           0.00*

*p < 0.05

Fig. 1 displays the overlap charts for the three sets of analyses. In all three graphs, there is substantial overlap between the propensity scores of students who completed PLTW courses and those who did not. These results clearly indicate that the propensity scores for the groups were sufficiently similar to allow for the matching of students in treatment and comparison groups.
Fig. 1

Overlap charts for the common support assumption

Matching

Table 3 presents the ATET coefficients, standard errors, and 95% confidence intervals for the four matching methods across the three PLTW treatments. An examination of the results in the table reveals that all of the ATET estimates were within the 95% confidence intervals for all four matching methods. For example, the ATET estimates for completing one PLTW course ranged from 0.14 to 0.18. These estimates were within even the smallest confidence interval (0.13 to 0.20) for 10:1 propensity score matching. For all analyses, 10:1 propensity score matching was utilized because it consistently provided the smallest standard errors for the ATET estimates. In the Stata propensity score matching program, students with identical propensity scores (i.e., ties) were included in the analyses, even if the number of matched students in the comparison group exceeded 10. Thus, the minimum number of matched students was 10 for all analyses, but the maximum number of matched students was 11 for the analysis of students taking one PLTW course.
Table 3
Comparison of results for the four matching methods

Matching Method                ATET   Standard Error  Lower 95% C.I.  Upper 95% C.I.
1 PLTW Course
  1:1 Propensity Score Match   0.14   0.03            0.09            0.19
  10:1 Propensity Score Match  0.16   0.02            0.13            0.20
  1:1 Nearest-Neighbor Match   0.18   0.03            0.13            0.23
  10:1 Nearest-Neighbor Match  0.17   0.02            0.13            0.21
2 PLTW Courses
  1:1 Propensity Score Match   0.28   0.03            0.21            0.34
  10:1 Propensity Score Match  0.27   0.03            0.22            0.32
  1:1 Nearest-Neighbor Match   0.30   0.03            0.23            0.36
  10:1 Nearest-Neighbor Match  0.28   0.03            0.23            0.33
3+ PLTW Courses
  1:1 Propensity Score Match   0.41   0.04            0.33            0.49
  10:1 Propensity Score Match  0.38   0.03            0.33            0.44
  1:1 Nearest-Neighbor Match   0.38   0.04            0.31            0.46
  10:1 Nearest-Neighbor Match  0.39   0.03            0.33            0.44

Standardized differences were calculated as a test of the appropriateness of the matching process. The standardized differences for 10:1 propensity score matching are presented in Table 4. Examination of the results indicated close matches between the treatment and comparison groups. The largest absolute value of any standardized difference was well below the 0.10 threshold for a satisfactory match.
Table 4
Standardized differences for the covariates used to calculate propensity scores

Student/High School Characteristic              1 PLTW Course  2 PLTW Courses  3+ PLTW Courses
Sex (Male)                                      −0.011         −0.043          −0.004
Race/Ethnicity (Underrepresented Minority)      −0.007         0.007           −0.031
Socio-Economic Status (Low SES)                 −0.005         0.001           −0.028
Received Honors Diploma                         0.019          0.021           −0.002
Received General Diploma                        −0.002         −0.011          0.010
ISTEP+ Mathematics Score (Divided by 100)       0.037          0.029           0.023
ISTEP+ ELA Score (Divided by 100)               0.032          0.042           0.019
High School Enrollment (Divided by 1000)        −0.028         −0.020          −0.048
Percent of Underrepresented Minority Students   0.006          0.016           −0.001
Percent of Low-SES Students                     −0.017         0.043           −0.003

Effects of PLTW Course Completion

Table 5 presents the average treatment effects on the treated (ATET) coefficients, 95% confidence intervals, and mhbounds estimates for the three levels of PLTW course completion. The results show that students who completed one, two, or three or more PLTW courses were significantly more likely to major in a STEM discipline during their first year in college. ATET estimates indicated that, compared to students who did not participate in PLTW, the proportion of students majoring in STEM was 0.16 greater for students who completed one PLTW course, 0.27 greater for students who completed two PLTW courses, and 0.38 greater for students who completed three or more PLTW courses. An examination of the mhbounds statistics reveals moderately strong support for the conditional independence assumption for the effect of completing one PLTW course. Strong support for the conditional independence assumption was found for the effect of completing two courses, and support for conditional independence was extremely strong for completing three or more PLTW courses.
Table 5

ATET results for the three levels of PLTW course completion

| Number of Courses Completed | ATET | Lower 95% C.I. | Upper 95% C.I. | mhbounds |
|---|---|---|---|---|
| 1 PLTW Course | 0.16* | 0.13 | 0.20 | 1.6 |
| 2 PLTW Courses | 0.27* | 0.22 | 0.32 | 2.5 |
| 3+ PLTW Courses | 0.38* | 0.33 | 0.44 | 4.9 |

*p < 0.001
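The mhbounds column reports Rosenbaum-style sensitivity thresholds: the factor Γ by which an unmeasured covariate would have to multiply the odds of PLTW enrollment before the significance of the effect could be overturned. Under Rosenbaum's model, if two matched students can differ in their odds of treatment by at most Γ, the probability that a given member of a matched pair is the treated one is bounded as below. This is a sketch of the standard bound, not the mhbounds implementation itself:

```python
def assignment_probability_bounds(gamma):
    """Rosenbaum bounds: if matched units' odds of treatment differ by at
    most a factor gamma, the probability that a given unit of a matched
    pair is the treated one lies in [1/(1+gamma), gamma/(1+gamma)]."""
    return 1 / (1 + gamma), gamma / (1 + gamma)

# gamma = 1 corresponds to a randomized experiment; the other values are
# the thresholds reported for one, two, and three or more PLTW courses
for gamma in (1.0, 1.6, 2.5, 4.9):
    lo, hi = assignment_probability_bounds(gamma)
    print(f"gamma = {gamma}: assignment probability in [{lo:.2f}, {hi:.2f}]")
```

At Γ = 1 the interval collapses to 0.50, the coin-flip assignment of a randomized trial; larger thresholds mean a hidden covariate would need an implausibly strong influence on PLTW enrollment to explain away the estimated effect.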

Dosage Effects for PLTW Course Completion

Table 6 presents the ATET multivalued treatment effects for comparisons among levels of PLTW course completion. Results for completing one, two, or three or more PLTW courses, as compared to completing no PLTW courses, were virtually identical to the ATET coefficients in Table 5. This consistency is noteworthy, given that different estimation methods were used in the two sets of analyses. In addition, significantly greater effects on majoring in a STEM discipline were found for completing two or three or more PLTW courses, as compared to completing one PLTW course. Likewise, the effect of completing three or more PLTW courses was significantly greater than the effect of completing two PLTW courses.
Table 6

Multivalued treatment effects for the number of PLTW courses completed

| Average Treatment Effect on the Treated | ATET Coefficient | Lower 95% C.I. | Upper 95% C.I. |
|---|---|---|---|
| 1 Course v. No Courses | 0.16* | 0.12 | 0.20 |
| 2 Courses v. No Courses | 0.27* | 0.22 | 0.32 |
| 3 Courses v. No Courses | 0.38* | 0.32 | 0.44 |
| 2 Courses v. 1 Course | 0.11* | 0.04 | 0.17 |
| 3 Courses v. 1 Course | 0.20* | 0.13 | 0.27 |
| 3 Courses v. 2 Courses | 0.10* | 0.03 | 0.17 |

*p < 0.001

Limitations

The generalizability of the results of the present research is limited in several ways. First, the results are specific to one high school graduating class in a single state. Even though more schools offered PLTW courses in Indiana than in any other state, it is possible that including students who graduated in a different year, or who were from a different state with lower levels of performance on the National Assessment of Educational Progress, might have influenced the results in unknown ways. The generalizability of the findings may also be limited by the covariates included in the analyses; including different covariates might have produced different results. However, the strength of the mhbounds estimates suggests that including different variables would be unlikely to affect the statistical significance of the treatment effects observed in this study, unless those variables more than doubled the likelihood of participating in PLTW.

The large number of college enrollees for whom no major field of study was available represents a potential limitation. Major field was not available for many college enrollees because reporting this information is voluntary, and more than one-third of the colleges did not report students’ majors to the National Student Clearinghouse. No information is available about why institutions chose not to report these data. Thus, it is not possible to determine whether the data were systematically missing in ways that could have influenced the results of the study.

Discussion

Despite these limitations, the results of this study have important implications for expanding the STEM pipeline. First and foremost, the findings provide strong evidence of the effectiveness of Project Lead The Way in increasing the likelihood that students who attend college will major in a STEM discipline.1 This finding is consistent with the evaluation of PLTW outcomes in Iowa (Schenk Jr. et al. 2012; Starobin et al. 2013). Unlike the Iowa studies, however, the present research was designed to evaluate the causal effects of PLTW. In that regard, the results of the mhbounds tests revealed that only large selection effects could have altered the ATET estimates reported in the study. An unmeasured variable would have to increase the odds of a student enrolling in PLTW by a factor of 1.6 to counteract the effect of completing one course, by a factor of 2.5 to counter the effect of completing two courses, and by a factor of almost five to counteract the effect of completing three or more courses. Only the sex of a student produced odds ratios of the magnitude required to affect the observed ATET estimates.

The results of the present research also clearly indicated that there is a dosage effect for PLTW participation. Completing a single course in high school increased the likelihood of majoring in a STEM field by 0.16, completing two courses increased it by 0.27, and completing three or more PLTW courses increased it by 0.38. Differences among the ATET effects were all statistically significant. Previous research on choosing a STEM major did not examine the differential effects of the number of PLTW courses completed. However, studies of the effects of PLTW on subsequent standardized test scores in high school did find evidence of differential effects (Bottoms and Anthony 2005; Bottoms and Uhn 2007; Dixon and Brown 2012; Schenk Jr. et al. 2012; Van Overschelde 2013).

Future research should examine the effects of other differences in PLTW course taking. For example, it would be important to know whether completing PLTW computer science or biomedical science courses also increases the probability of majoring in STEM. It would also be important to know if effects differ across specific courses. Likewise, it may be that effects are different depending on when PLTW courses are taken in students’ high school careers and if the sequencing of PLTW courses makes a difference.

PLTW is not a panacea. A previous analysis of PLTW data from Indiana revealed that completing PLTW courses in high school was not related to increased college enrollment in either 2- or 4-year institutions (Pike and Robbins 2014). This finding should be explored in greater detail in future research. It appears that completing PLTW courses increases the number of students majoring in STEM disciplines without increasing the number of students in college. If policy makers wish to meet the twin goals of improving college access and increasing the number of STEM majors, PLTW will need to be paired with other programs designed to improve readiness and access to postsecondary education.

The results of the logistic regression analyses also identified several challenges associated with relying on PLTW to increase the number of female, minoritized, and lower-SES students majoring in STEM. In the current study, females were much less likely than males to participate in PLTW. Although minoritized and lower-SES students were not underrepresented in PLTW, neither were they overrepresented. Thus, the demographic profile of PLTW participants paralleled the demographic characteristics of persons in STEM fields. In order for PLTW to contribute to diversity in STEM, special attention must be paid to attracting female, minoritized, and economically disadvantaged students to problem-based math and science courses and to increasing these students' interest in STEM careers.

Logistic regression results also point to challenges associated with overcoming deficits in math and science proficiency. Students with higher eighth-grade (i.e., pre-PLTW) scores on standardized mathematics tests were significantly more likely to participate in PLTW. This finding is consistent with research by Morgan et al. (2016), which found that gaps in math and science achievement begin as early as elementary school and persist through high school. On a positive note, students who graduated from high school with a rigorous Academic Honors diploma were more likely to have completed two, or three or more, PLTW courses, the levels of participation with the greatest effects on majoring in a STEM discipline. The fact that students who graduated with a General diploma were more likely to have completed three or more PLTW courses was surprising and should be examined in future research. Whether PLTW participation leads to more rigorous coursework, or whether rigorous coursework is a proxy for selection effects, is also an important question for future research.

The characteristics of students' high schools were also related to the likelihood of completing PLTW courses. Students who attended high schools with larger total enrollments were significantly more likely to complete one or two PLTW courses. Students who attended high schools with greater percentages of underrepresented minority students were less likely to complete one or two PLTW courses, whereas students who attended high schools with greater percentages of low-SES students (i.e., students who were eligible for free or reduced lunch) had significantly greater odds of completing one, two, or three or more PLTW courses. The reasons for the latter finding are unclear and may reflect administrative decisions on the part of PLTW or school administrators. Future research should examine whether PLTW programs have been targeted toward lower-SES high schools and/or whether lower-SES high schools have been attracted to PLTW as a mechanism for improving the educational and economic outcomes of their students. The effects for high school characteristics suggest a need to broaden PLTW programs to include smaller high schools, as well as high schools enrolling large numbers of minoritized students.

Attracting more students to STEM fields is a necessary, but not sufficient, condition for improving STEM degree attainment. It does little good to encourage more students to major in STEM, only to have them not be successful (National Academy of Sciences, National Academy of Engineering, and Institute of Medicine 2010). Programs designed to attract students to STEM fields are unlikely, by themselves, to improve student success in STEM in college. Research has shown that the key factors related to the success of students in STEM fields are instructional practices in STEM, interpersonal relationships among STEM students, and the characteristics of the postsecondary institutions themselves (Cole et al. 2013; Crisp et al. 2009; Griffith 2010; Ost 2010; Rask 2010). The (in)ability of postsecondary institutions to alter students' college experiences may well limit the ability to meet future demand for qualified STEM graduates.

The results of the current research also illustrate the important role that propensity score matching can play in evaluation studies. Significantly improving postsecondary degree attainment, both generally and in STEM disciplines, will likely need to be accomplished with existing resources (National Academy of Sciences, National Academy of Engineering, and Institute of Medicine 2010; Rhoades 2012). Needing to do more with the same, or fewer, resources requires that administrators and policy makers carefully evaluate the quality and effectiveness of student access and success programs, including pipeline-to-college programs such as PLTW. Effective programs will need to be expanded and institutionalized; marginal programs must be improved when possible; and ineffective programs should be terminated and their resources shifted to other efforts.

Appropriate evaluation of student access and success programs requires that evidence of program effectiveness not be confounded by external factors, such as selection effects. The gold standard for evaluating program effectiveness is randomized control trials (Schneider et al. 2007). Random assignment of participants to treatment and control groups ensures baseline equivalence so that post-program differences can be linked to participation in the program (Murnane and Willett 2011). As a practical matter, however, random assignment of students to treatment and control groups is seldom feasible and may not be ethical (Reynolds and DesJardins 2009). Evaluating the effectiveness of most access and success programs will require quasi-experimental designs that include matching (Heinrich et al. 2010). Because propensity score matching can subsume a wide range of confounding covariates in a single score, it provides an efficient and effective method of creating an appropriate comparison group (Reynolds and DesJardins 2009). As Rosenbaum and Rubin (1983, 1985a) observed, differences in outcomes for treatment and comparison groups matched on the basis of propensity scores may allow causal inferences to be made about program effects.

The ability to draw causal inferences does not flow from the use of propensity scores or matching. Instead, it flows from the ability to satisfy three key assumptions: common support, covariate balance, and conditional independence (Imbens and Wooldridge 2009; Reynolds and DesJardins 2009). The first two assumptions address the adequacy of the matching itself. Satisfying the common support assumption ensures that it is possible to create matched groups using propensity scores, and satisfying the covariate balance assumption ensures that the matched groups are not substantively different.

Satisfying the conditional independence assumption is the key to making causal inferences about program effects. Conditional independence holds that, conditional on the covariates in the model, there is no external, unmeasured variable related to both selection into a group and program outcomes (Imbens and Wooldridge 2009). It is a virtual certainty that there are unobserved variables related to selection into a program; the key question is whether these unobserved variables can confound the estimation of treatment effects. Because potential confounding variables cannot be measured, researchers must assume that an unmeasured variable is present and then determine whether that variable is likely to affect the significance of treatment effects. If the unobserved variable is likely to affect treatment effects, causal inferences should not be made. It is for this reason that studies employing propensity score matching are said to satisfy What Works Clearinghouse (2014) standards "with reservations." Even when it is not possible to make causal inferences about program effectiveness, the analysis of program outcomes based on groups formed using propensity score matching is likely to be superior to the comparison of outcomes for unmatched groups (Reynolds and DesJardins 2009). It is also important to note that virtually identical results were obtained for propensity score matching and inverse weighting on the propensity score. It seems likely that this was at least partly attributable to the fact that the three assumptions of propensity score matching were so clearly satisfied. Thus, multivalued treatment effect models may provide a viable follow-up procedure for detecting dosage effects when the assumptions of propensity score matching are met.
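The inverse-weighting estimator mentioned above can be sketched briefly. In this illustration (invented data and effect size; the true propensity score is used for clarity where an applied analysis would substitute an estimate), comparison students are weighted by the odds ps/(1 − ps), which reweights them to resemble the treated group:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)

# Selection: the probability of treatment rises with the covariate
ps = 1 / (1 + np.exp(-(0.8 * x - 0.5)))
t = rng.binomial(1, ps)

# Binary outcome with a built-in treatment effect of 0.15
y = rng.binomial(1, np.clip(0.20 + 0.15 * t + 0.05 * x, 0, 1))

# ATET by inverse probability weighting: treated students get weight 1;
# comparison students get weight ps / (1 - ps)
w_control = ps[t == 0] / (1 - ps[t == 0])
atet = y[t == 1].mean() - np.average(y[t == 0], weights=w_control)
print(f"IPW ATET estimate: {atet:.2f}")
```

Because the weights depend only on the propensity score, this estimator and the matching estimator should agree closely whenever the common support and balance assumptions hold, which is consistent with the near-identical results reported here.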

Conclusion

The use of propensity score analyses to evaluate the outcomes of Project Lead The Way in Indiana indicated that completing PLTW courses increased the likelihood that students would select STEM disciplines as their major field of study in college. Results also indicated that there is a dosage effect associated with PLTW participation, with the greatest effects found for completing three or more courses. The challenge for PLTW is that less than 22% of all the PLTW participants completed as many as three courses. Another challenge is that the characteristics of students drawn to PLTW reflect the characteristics of students already majoring in STEM. Increasing the diversity in STEM fields will require new efforts to attract women, minoritized, and lower-SES students to STEM pipeline programs. Perhaps the greatest challenge to meeting the demand for a skilled workforce is the fact that completing PLTW courses increases the likelihood of majoring in a STEM discipline for those students who already attend college, but completing PLTW courses does not increase the number of students attending college.

Footnotes

  1. Separate analyses were conducted using only students from high schools that offered PLTW courses, and the results replicated the findings of the current study.

Notes

Compliance with Ethical Standards

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

References

  1. Achieve, Inc. (2015). Next generation science standards. Retrieved 30 May 2016, from http://www.achieve.org/next-generation-science-standards
  2. Afifi, A., May, S., & Clark, V. A. (2012). Practical multivariate analysis (5th ed.). Boca Raton: CRC Press.
  3. Angrist, J. D., & Pischke, J. (2009). Mostly harmless econometrics: An empiricist’s companion. Princeton: Princeton University Press.
  4. Archer, L., Dewitt, J., Osborne, J., Dillon, J., Willis, B., & Wong, B. (2010). “Doing” science versus “being” a scientist: Examining 10/11-year-old school children’s construction of science through the lens of identity. Science Education, 94, 617–639.
  5. Becker, S. O., & Caliendo, M. (2007). Mhbounds—Sensitivity analysis for average treatment effects. The Stata Journal, 7, 71–83.
  6. Bottoms & Anthony. (2005, May). Project Lead The Way: A pre-engineering curriculum that works. Atlanta: Southern Regional Education Board.
  7. Bottoms & Uhn. (2007, September). Project Lead The Way works: A new type of career and technical program. Atlanta: Southern Regional Education Board.
  8. Brophy, S., Klein, S., Portsmore, M., & Rogers, C. (2008). Advancing engineering education in P-12 classrooms. Journal of Engineering Education, 97, 369–387.
  9. Business-Higher Education Forum (2011, August). Creating the workforce of the future: The STEM interest and proficiency challenge. Washington, DC: Author. Retrieved 22 May 2014, from http://www.bhef.com/sites/g/files/g829556/f/brief_2011_stem_inerest_proficiency.pdf
  10. Business-Higher Education Forum (2012, May). STEM interest among college students: Where they enroll. Washington, DC: Author. Retrieved 22 May 2014, from http://www.bhef.com/sites/g/files/g829556/f/brief_2012_stem_interest_enrollment.pdf
  11. Business-Higher Education Forum, & American College Testing (2014, May). Building the talent pipeline: Policy recommendations for ‘The Condition of STEM 2013.’ Washington, DC: Business-Higher Education Forum. Retrieved 22 May 2014, from http://www.bhef.com/sites/g/files/g829556/f/201406/2014_brief_BHEF_ACT_0.pdf
  12. Caliendo, M., & Kopeinig, S. (2005, May). Some practical guidance for the implementation of propensity score matching (IZA Discussion Paper No. 1588). Bonn, Germany: Institute for the Study of Labor. Retrieved 21 Aug 2014, from http://ftp.iza.org/dp1588.pdf
  13. Carnevale, A. P., & Rose, S. J. (2011). The undereducated American. Washington, DC: Georgetown University Center on Education and the Workforce. Retrieved 14 Nov 2012, from http://www9.georgetown.edu/grad/gppi/hpi/cew/pdfs/undereducatedamerican.pdf
  14. Carnevale, A. P., Smith, N., & Strohl, J. (2010, June). Help wanted: Projections of jobs and education requirements through 2018. Washington, DC: Georgetown University Center on Education and the Workforce. Retrieved 14 Nov 2012, from http://cew.georgetown.edu/JOBS2018/
  15. Cole, B., High, K., & Weinland, K. (2013). High school pre-engineering programs: Do they contribute to college retention? American Journal of Engineering Education, 4, 85–98.
  16. Common Core State Standards Initiative (2015). Preparing America’s students for success. Retrieved 30 May 2016, from http://www.corestandards.org/
  17. Crisp, G., Nora, A., & Taggart, A. (2009). Student characteristics, pre-college, college, and environmental factors as predictors of majoring in and earning a STEM degree: An analysis of students attending a Hispanic serving institution. American Educational Research Journal, 46, 924–942.
  18. D’Agostino, R. B., Jr. (1998). Tutorial in biostatistics: Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine, 17, 2265–2281.
  19. DiPrete, T. A., & Gangl, M. (2004). Assessing bias in the estimation of causal effects: Rosenbaum bounds on matching estimators and instrumental variables estimation with imperfect instruments. Sociological Methodology, 34, 271–310.
  20. Dixon, R. A., & Brown, R. A. (2012). Transfer of learning: Connecting concepts during problem solving. Journal of Technology Education, 24, 2–16.
  21. Dong, N., & Lipsey, M. W. (2018). Can propensity score analysis approximate randomized experiments using pretest and demographic information in pre-K intervention research? Evaluation Review, 42, 34–70.
  22. Griffith, A. L. (2010). Persistence of women and minorities in STEM field majors: Is it the school that matters? Economics of Education Review, 29, 911–922.
  23. Heilbronner, N. N. (2011). Stepping onto the STEM pathway: Factors affecting talented students’ declaration of STEM majors in college. Journal for the Education of the Gifted, 34, 876–899.
  24. Heilbronner, N. N. (2012). The STEM pathway for women: What has changed? Gifted Child Quarterly, 57, 39–55.
  25. Heinrich, C., Maffioli, A., & Vásquez, G. (2010, August). A primer for applying propensity-score matching (Impact-Evaluation Guidelines Technical Notes No. IDB-TN-161). Washington, DC: Inter-American Development Bank.
  26. Imbens, G. W. (2000). The role of the propensity score in estimating dose-response functions. Biometrika, 87, 706–710.
  27. Imbens, G. W., & Wooldridge, J. M. (2009). Recent developments in the econometrics of program evaluation. Journal of Economic Literature, 47, 5–86.
  28. Indiana Department of Education (2018). Learn more Indiana: High school diploma options. Indianapolis, IN: Author. Retrieved 25 Nov 2018, from https://learnmoreindiana.org/college/preparing-for-college/high-school-diploma-options/
  29. Kang, J. D. Y., & Schafer, J. L. (2007). Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical Science, 22, 523–539.
  30. Maltese, A. V., & Tai, R. H. (2010). Eyeballs in the fridge: Sources of early interest in science. International Journal of Science Education, 32, 669–685.
  31. Morgan, P. L., Farkas, G., Hillemeier, M. M., & Maczuga, S. (2016). Science achievement gaps begin very early, persist, and are largely explained by modifiable factors. Educational Researcher, 45(1), 18–35.
  32. Murnane, R. J., & Willett, J. B. (2011). Methods matter: Improving causal inference in educational and social science research. New York: Oxford University Press.
  33. National Academy of Sciences, National Academy of Engineering, & Institute of Medicine. (2007). Rising above the gathering storm: Energizing and employing America for a brighter economic future. Washington, DC: National Academies Press.
  34. National Academy of Sciences, National Academy of Engineering, & Institute of Medicine. (2010). Rising above the gathering storm, revisited: Rapidly approaching category 5. Washington, DC: National Academies Press.
  35. National Assessment of Educational Progress (2017). The nation’s report card. Washington, DC: U.S. Department of Education. Retrieved 25 Nov 2018, from https://www.nationsreportcard.gov/math_2017/states/scores/?grade=8
  36. National Center for Education Statistics (n.d.). CIP 2010: What is CIP? Retrieved 21 Sept 2013, from https://nces.ed.gov/ipeds/cipcode/default.aspx?v=55
  37. National Science Foundation (2011, Fall). GSS-CIP crosswalk. Washington, DC: Author. Retrieved 30 May 2014, from http://www.nsf.gov/statistics/nsf13331/pdf/2011_GSS_CIP_Crosswalk.pdf
  38. National Science Foundation (2013). Women, minorities, and persons with disabilities in science and engineering: 2013. Washington, DC: Author. Retrieved 30 May 2014, from http://www.nsf.gov/statistics/wmpd/2013/pdf/nsf13304_digest.pdf
  39. National Student Clearinghouse (2018). Student tracker. Herndon, VA: Author. Retrieved 24 Nov 2018, from http://www.studentclearinghouse.org/colleges/studenttracker/
  40. Olitsky, N. H. (2014). How do academic achievement and gender affect the earnings of STEM majors? A propensity score matching approach. Research in Higher Education, 55, 245–271.
  41. Ost, B. (2010). The role of peers and grades in determining major persistence in the sciences. Economics of Education Review, 29, 923–934.
  42. Pike, G. R., & Robbins, K. (2014, March). Using propensity scores to evaluate education programs. Paper presented at the annual meeting of the Indiana Association for Institutional Research, Indianapolis.
  43. President’s Council of Advisors on Science and Technology. (2010). Prepare and inspire: K-12 science, technology, engineering, and math (STEM) education for America’s future. Washington, DC: Executive Office of the President. Retrieved 22 May 2014, from http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-stem-ed-final.pdf
  44. Project Lead The Way (2018a). About PLTW. Indianapolis, IN: Author. Retrieved 25 Nov 2018, from https://www.pltw.org/about-us
  45. Project Lead The Way (2018b). Our approach. Indianapolis, IN: Author. Retrieved 25 Nov 2018, from https://www.pltw.org/about-us/our-approach
  46. Project Lead The Way (2018c). Our programs. Indianapolis, IN: Author. Retrieved 25 Nov 2018, from https://www.pltw.org/our-programs
  47. Rask, K. (2010). Attrition in STEM fields at a liberal arts college: The importance of grades and pre-college preferences. Economics of Education Review, 29, 892–900.
  48. Reynolds, C. L., & DesJardins, S. L. (2009). The use of matching methods in higher education research: Answering whether attendance at a 2-year institution results in differences in educational attainment. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. XXIV, pp. 47–104). Dordrecht: Springer.
  49. Rhoades, G. (2012). The incomplete completion agenda: Implications for academe and the academy. Liberal Education, 98(1), 18–25.
  50. Riegle-Crumb, C., King, B., Grodsky, E., & Muller, C. (2012). The more things change, the more they stay the same? Prior achievement fails to explain gender inequality in entry into STEM college majors over time. American Educational Research Journal, 49, 1048–1073.
  51. Robinson, M. (2003). Student enrollment in high school AP sciences and calculus: How does it correlate with STEM careers? Bulletin of Science, Technology, & Society, 23, 265–273.
  52. Rosenbaum, P. R. (2002). Observational studies (Springer Series in Statistics, 2nd ed.). New York: Springer.
  53. Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41–55.
  54. Rosenbaum, P. R., & Rubin, D. B. (1985a). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician, 39, 33–38.
  55. Rosenbaum, P. R., & Rubin, D. B. (1985b). The bias due to incomplete matching. Biometrics, 41, 103–116.
  56. Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701.
  57. Sax, L. J., Kanny, M. A., Riggers-Piehl, T. A., Whang, H., & Paulson, L. N. (2015). “But I’m not good at math”: The changing salience of mathematical self-concept in shaping women’s and men’s STEM aspirations. Research in Higher Education, 56, 813–842.
  58. Schenk Jr., T., Laanan, F. S., Starobin, S. S., & Rethwisch, D. (2012). An evaluation of Iowa Project Lead The Way on student outcomes: Summary report. Ames, IA: Iowa State University Community College Leadership Program. Retrieved 22 May 2014, from http://www.cclp.hs.iastate.edu/research/pltw.php
  59. Schneider, B., Carnoy, M., Kilpatrick, J., Schmidt, W. H., & Shavelson, R. J. (2007). Estimating causal effects using experimental and observational designs. Washington, DC: American Educational Research Association.
  60. Shadish, W. R., Clark, M. H., & Steiner, P. M. (2008). Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. Journal of the American Statistical Association, 103, 1334–1343.
  61. Starobin, S. S., Schenk Jr., T., Laanan, F. S., & Rethwisch, D. (2013, June). Evaluation research of the Iowa Project Lead The Way: Final project report. Ames, IA: Iowa State University Community College Leadership Program. Retrieved 22 May 2014, from http://www.cclp.hs.iastate.edu/research/pltw.php
  62. StataCorp. (2017). Treatment effects. College Station: Author.
  63. Van Overschelde, J. P. (2013). Project Lead The Way students more prepared for higher education. American Journal of Engineering Education, 4, 1–11.
  64. Wang, X. (2013a). Modeling entrance into STEM fields of study among students beginning at community colleges and four-year institutions. Research in Higher Education, 54, 664–692.
  65. Wang, X. (2013b). Why students choose STEM majors: Motivation, high school learning, and postsecondary context of support. American Educational Research Journal, 50, 1081–1121.
  66. Watt, W. (2015, March 23). Indiana leads the way in STEM education. Retrieved 25 Nov 2018, from https://www.alec.org/article/indiana-leads-the-way-in-stem-education/
  67. What Works Clearinghouse (2014, March). What Works Clearinghouse procedures and standards manual version 3.0. Washington, DC: U.S. Department of Education. Retrieved 6 May 2018, from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v3_0_standards_handbook.pdf
  68. Wiggins, G., & McTighe, J. (2005). Understanding by design. Alexandria: Association for Supervision and Curriculum Development.
  69. Wiggins, G., & McTighe, J. (2008). Put understanding first. Educational Leadership, 65(1), 36–41.
  70. Wooldridge, J. M. (2007). Inverse probability weighted estimation for general missing data problems. Journal of Econometrics, 141, 1281–1301.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Higher Education & Student Affairs, IU School of Education, Bloomington, USA
  2. Ball State University, Muncie, USA
