Introduction

As conversations about the Reauthorization of the Higher Education Act heated up on Capitol Hill in 2003, legislators and lobbyists discussed, among other issues, dissatisfaction with the higher education accreditation process. The concerns included the secrecy and perceived lack of rigor of the peer review process, variable institutional acceptance of transfer credit for courses taken at other institutions, and consumers’ inability to make informed choices about colleges and universities based on information from accreditation reviews (Bollag, 2005, 2006). In spring 2006, a discussion paper prepared for the national Commission on the Future of Higher Education proposed replacing regional accreditors with a national accreditation body—a suggestion that sparked considerable debate, but found little support from institutions and accreditors (Bollag, 2006). At the heart of such discussions are serious questions about the effectiveness of accreditation as a quality assurance mechanism.

Challenges to accreditation are not new, but they have rarely been so visible to the general public. Throughout the history of the accreditation process, accreditors have responded to changing contexts and pressures from inside and outside the academy by modifying their processes. For example, in response to the increasing cost burden associated with regional and professional accreditation reviews, agencies have encouraged institutions to embed these reviews in ongoing institutional processes such as strategic planning or program review. Acknowledging the growing consensus that student learning outcomes are the ultimate test of the quality of academic programs, accreditors have also refocused their criteria, reducing the emphasis on quantitative measures of inputs and resources and requiring that judgments of educational effectiveness rest on measurable outcomes (Volkwein, Lattuca, Caffrey, and Reindl, 2003). The Council for Higher Education Accreditation (CHEA), which recognizes individual accreditation agencies, endorses assessment of student learning outcomes as one dimension of accreditation: “Students, parents, and the public ... want to know what the learning gained in these [academic] programs will mean in the marketplace of employment and in their lives as citizens and community members” (2003, p. 4).

Assessment of student outcomes may help higher education accreditors and institutions answer the increasingly fervent calls for accountability from state and federal legislators. Among accreditors, the trend toward assessment of student outcomes as a criterion for accreditation has gained considerable momentum, but requirements vary by agency. Regional accreditors generally require that institutions or programs conduct assessment as a condition for accreditation,1 but some professional accreditation agencies have taken outcomes assessment a step further by identifying specific learning outcomes to be achieved by accredited programs. For instance, ABET, the Accreditation Board for Engineering and Technology, specified 11 undergraduate learning outcomes for all baccalaureate engineers, regardless of engineering specialty. Similarly, the Accreditation Council for Graduate Medical Education identified six general competencies (e.g., patient care, medical knowledge, interpersonal skills, and communication) in its accreditation criteria (Batalden, Leach, Swing, Dreyfus, and Dreyfus, 2002). In response to these changes, engineering and medical programs have begun to align their curricula with the outcomes stipulated by their respective criteria (see Batalden et al., 2002; Lattuca, Terenzini, and Volkwein, 2006).

Discussions about the effectiveness of accreditation as a quality assurance tool might be less contentious if there were clear evidence regarding the impact of the process on institutions, academic programs, and graduates. Surprisingly, despite the centrality of the process in higher education, there is little systematic research on the influence of accreditation on programs or learning. Anecdotal accounts of institutional and program responses to new accreditation standards are abundant (see, for example, the proceedings of the American Society for Engineering Education), but there are only a handful of studies that examine the impact of accreditation across institutions or programs. Moreover, these studies typically focus simply on documenting institutional responses. For example, in a study of the impact of changes in accreditation standards for accounting programs, Sinning and Dykxhoorn (2001) found programs working to identify the skills and knowledge base required for employment in the field and developing educational objectives reflecting these skills. Similarly, a survey of 21 mechanical engineering programs conducted by the American Society of Mechanical Engineers (ASME) found that the implementation of EC2000 in these programs “created an environment in which the entire program faculty was involved in the process of establishing program educational objectives and student outcomes and assessment processes” (Laurenson, 2001, p. 20). Systematic studies of the impact of accreditation processes on both changes in educational programs and student learning are, to our knowledge, non-existent.

The opportunity to study the impact of an outcomes-based accreditation model on educational processes and student learning arose in 2002, when ABET engaged Penn State’s Center for the Study of Higher Education to assess the impact of its new accreditation criteria for undergraduate engineering programs. The new criteria were expected to stimulate significant restructuring of curricula, instructional practices, and assessment activities in engineering programs (ABET, 1997). The EC2000 impact study, entitled Engineering Change, assesses the extent to which the expected restructuring has occurred and its influence on the 11 student learning outcomes specified in EC2000.

ABET’s reform efforts began in the 1990s, as the agency responded to criticisms from two key stakeholder groups. Employers voiced concerns regarding the mismatch between industry needs and the skill sets of graduates of engineering programs. Engineering faculty and administrators countered that ABET’s prescriptive accreditation criteria were barriers to curricular and pedagogical innovation (Prados, Peterson, and Aberle, 2001). With funding support from the National Science Foundation and industries represented on the ABET advisory council, the agency conducted a series of workshops to generate ideas about needed change and to build consensus among different constituencies. Recommendations from the workshops, published in A Vision for Change (ABET, 1995), became catalysts for the development of new criteria, which were circulated to the engineering community a few months later. Following a period of public comment, the ABET Board of Directors approved Engineering Criteria 2000 (EC2000) in 1996.

The new criteria radically altered the evaluation of undergraduate engineering programs, shifting the emphasis from curricular specifications to student learning outcomes and accountability. Under EC2000, engineering programs must define program objectives to meet their constituents’ needs. To ensure accountability, each program is required to implement a structured, documented system for continuous improvement that actively and formally engages all of its constituents in the development, assessment, and improvement of academic offerings. Programs must also publish specific goals for student learning and measure their achievement to demonstrate how well these objectives are being met (Prados, 1995). ABET was one of the first accrediting bodies to adopt a philosophy of continuous improvement for accreditation and the first to submit that process to scholarly evaluation.

ABET piloted the EC2000 criteria in 1996–1997. After a 3-year transition period (1998–2000), during which programs could choose to undergo review using either the old or new criteria, EC2000 became mandatory in 2001. In this paper, we use data from the Engineering Change study to answer two research questions regarding the influence of EC2000 on engineering programs. The first research question focuses on the overall impact of EC2000, asking

  • Did the new EC2000 accreditation standards have the desired impact on engineering programs, student experiences, and student outcomes?

The second research question explores the potential influence of the phased implementation of the EC2000 standards over a 5-year period, assuming that this phasing may have affected rates of change in programs and student learning. As noted, institutions undergoing an accreditation review between 1998 and 2000 had the option of meeting the new EC2000 standards “early,” that is, before they became mandatory, or of deferring an EC2000 review until the next cycle. The decision to undergo EC2000 review early or to defer to the next cycle may have reflected program readiness to meet the requirements outlined in the new criteria. Early adopters of EC2000 accreditation, for instance, may have already introduced outcomes assessment or continuous improvement principles into their programs. Institutions that deferred EC2000 and stood for review under the old standards may have had one or more programs that were not well positioned to respond to these new requirements. “On-time” programs, those that came up for review after EC2000 became mandatory in 2001, may have been more like early programs in terms of their readiness to provide evidence of assessment and continuous improvement practices. The second research question for this study therefore asks,

  • Do engineering programs in the three review cycle groups (pilot/early, on-time/required, and deferred) differ in program changes, student experiences, and/or student outcomes?

In other words, are programs that deferred adoption of EC2000 significantly different from those programs that did not?

Conceptual Framework of the Engineering Change Study

The conceptual model guiding the study (see Fig. 1) summarizes the logic of the study’s design, assuming that if implementation of the EC2000 evaluation criteria were having the desired effect, several changes in engineering programs would be evident:

  • Engineering programs would make changes to align their curricula and instructional practices with the 11 learning outcomes specified by EC2000.

  • Changes in program faculty culture would be evident as faculty members engaged at a higher rate than before EC2000 in activities such as outcomes assessment and curriculum revision.

  • Faculty and program administrators would adjust program practices and policies regarding salary merit increases, tenure, and promotion criteria to give greater recognition to the kinds of teaching and learning required by EC2000.

  • Changes to the quality and character of student educational experiences inside and outside the classroom would be visible.

  • All these changes in curricula, instructional practices, faculty culture, and student experiences would influence student learning (defined as improved student performance on measures of the 11 EC2000 learning outcomes).

  • Employers would report improvements in the knowledge and competencies of the engineering graduates they have hired since implementation of EC2000.

Data Sources and Instruments

The Engineering Change study examined accredited engineering programs in selected engineering fields within a representative sample of institutions. The population of programs for the EC2000 study was defined as those programs accredited since 1990 in seven targeted disciplines.2 Of the 1,241 ABET-accredited engineering programs in the targeted disciplines, 1,024 met this specification. The project team selected programs for participation in the study using a two-stage, disproportionate,3 stratified random sample with a 7 × 3 × 2 design. The sample was stratified on three criteria: (1) the seven targeted disciplines, (2) the three accreditation review cycles (pilot/early, on-time, deferred), and (3) whether the programs and institutions participated in a National Science Foundation Engineering Education Coalition during the 1990s.4 To round out the sample, four EC2000 pilot institutions (first reviewed in 1996 and 1997) and several Historically Black Colleges and Universities (HBCUs) and Hispanic-Serving Institutions (HSIs) were added. The final sample included 203 engineering programs at 40 institutions. Table 1 lists the participating institutions.
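To make the two-stage, disproportionate stratified design concrete, the Python sketch below draws programs within each discipline × cycle × coalition cell of a hypothetical sampling frame. The frame, cell size, and seed are illustrative assumptions; the study’s actual selection procedure is not reproduced here.

    import numpy as np
    import pandas as pd

    # Hypothetical sampling frame: 1,024 eligible programs tagged with the
    # three stratification criteria (7 disciplines x 3 cycles x 2 coalition flags).
    rng = np.random.default_rng(0)
    frame = pd.DataFrame({
        "discipline": rng.choice(["aerospace", "chemical", "civil", "computer",
                                  "electrical", "industrial", "mechanical"], size=1024),
        "cycle": rng.choice(["pilot/early", "on-time", "deferred"], size=1024),
        "coalition": rng.choice([True, False], size=1024),
    })

    def draw_stratified_sample(frame: pd.DataFrame, n_per_cell: int = 5) -> pd.DataFrame:
        """Draw up to n_per_cell programs from each stratum; cells smaller than
        n_per_cell contribute everything they have, one simple way a design
        becomes 'disproportionate' (small disciplines are over-represented)."""
        cells = frame.groupby(["discipline", "cycle", "coalition"], group_keys=False)
        return cells.apply(lambda c: c.sample(min(len(c), n_per_cell), random_state=0))

    sample = draw_stratified_sample(frame)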

Fig. 1. Conceptual framework.

Table 1. Institutions Participating in Engineering Change: A Study of the Impact of EC2000

The participating schools include 23 public and 17 independently controlled institutions. Thirty schools award doctoral degrees, six are master’s degree institutions, and four are baccalaureate or specialized institutions (based on the 2000 Carnegie Classifications). The programs at these institutions closely resemble those in the defined population: both the number of undergraduate degrees awarded by discipline and the number of faculty in each discipline are within 3 percentage points of their representation in the population. Although private institutions make up a larger share of the sample than of the population, the percentages of undergraduate degrees awarded by public and private institutions align with those in the population from which the sample is drawn. Finally, the profiles of small, medium, and large programs in the sample approximate the actual program-size profiles in each of the seven disciplines.

Data Collection

Engineering Change collected data in 2003–2004 from several sources, including 2004 graduating seniors, 1994 engineering graduates, faculty members and program chairs from these same programs, deans of the participating institutions, and employers (data from deans and employers were not used in the analyses reported here). Institutions participating in the study provided contact information for their faculty, graduating seniors, and alumni. One institution did not provide student data, reducing the institutional sample for student analyses to 39. Four of the five survey instruments developed for the study and used in the current analysis are described below.

The Survey of Seniors in Engineering Programs solicited basic demographic information, level of participation in out-of-class activities related to engineering education, self-assessments of the student learning outcomes associated with the 11 EC2000 outcomes criteria, classroom experiences, and plans for the future. The companion instrument, the Survey of Engineering Alumni, asked 1994 graduates to report on these same educational experiences and their learning outcomes at the time of their graduation.

The Survey of Engineering Program Changes collected program-level information from program chairs, including changes over time in curricular emphases associated with the EC2000 learning outcomes, levels of faculty support for assessment and continuous improvement efforts, and changes in institutional reward policies and practices. The companion instrument, the Survey of Faculty Teaching and Student Learning in Engineering, collected complementary information on changes in curricula and instructional practices at the course-level, participation in professional development activities, assessments of student learning, and perceptions of changes in the faculty reward system.

In the fall of 2003, survey instruments were sent to 2,971 faculty members and 203 program chairs. In spring 2004, surveys were sent to the population of 12,144 seniors nearing graduation as well as to the 15,734 1994 graduates of the targeted 40 campuses (instruments are available at http://www.ed.psu.edu/cshe/abet/instruments.html). Before data collection began, the dean of the college of engineering on each campus sent e-mail requests soliciting student and faculty participation. Follow-up waves included a postcard reminder sent two weeks after the initial mailing and a complete follow-up (similar to the initial mailing) sent two weeks after the postcard.

Survey responses were received from 4,543 graduating seniors, 5,578 alumni, 1,272 faculty members, and the chairs of 147 engineering programs on 39 campuses. The student and alumni surveys yielded 4,330 (36% response rate) and 5,336 (34%) usable cases, respectively. The faculty and program chair surveys yielded usable cases for 1,243 faculty members (42%) and 147 program chairs (representing 72% of the targeted programs).5

After conducting missing-data analyses on the student, alumni, and faculty databases, we eliminated respondents who submitted surveys for which more than 20% of the variables were missing. We then imputed values (using expectation-maximization estimation in SPSS) for missing data in the usable cases.6 We did not impute missing data for the program chairs because of the smaller number of cases in that database.
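As a rough sketch of this screening-and-imputation step, assuming the survey responses sit in a pandas DataFrame of numeric items: the study used SPSS’s expectation-maximization routine, for which scikit-learn’s IterativeImputer serves here only as an illustrative stand-in, not the original procedure.

    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    def screen_and_impute(df: pd.DataFrame, max_missing: float = 0.20) -> pd.DataFrame:
        """Drop respondents missing more than 20% of variables, then impute
        the remaining gaps from the other (numeric) variables."""
        usable = df[df.isna().mean(axis=1) <= max_missing]
        imputed = IterativeImputer(random_state=0).fit_transform(usable)
        return pd.DataFrame(imputed, columns=usable.columns, index=usable.index)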

Variables

Three sets of control variables are used in our multivariate analyses: (1) precollege characteristics of graduating seniors and alumni; (2) institutional characteristics; and (3) engineering program characteristics. Precollege characteristics include age, male/female, SAT/ACT scores, transfer status, race/ethnicity, family income, parents’ education, high school GPA and citizenship. Institutional and program characteristics are type of control, institutional type (based on Carnegie Classification, 2001), institutional size and wealth, participation in an NSF Coalition, and engineering discipline. Two sets of dichotomously coded independent variables are used in these analyses—student cohort (1994 or 2004) and EC2000 review cycle (pilot/early, on-time, or deferred). See Table 2 for the breakdown of sample programs by review cycle. We are especially interested in comparing the performance indicators of the 48 “deferred” programs with the other groups.

Table 2. Sample Programs by Accreditation Cycle

The dependent variables in our multivariate analyses are indicators of program changes, student experiences, and learning outcomes. Table 3 summarizes these variables. Fourteen scales and one single-item measure assess changes in program characteristics before and after EC2000. Eight of these scales (four each from the program chair and faculty datasets) measure changes in curricular emphasis on topics associated with the EC2000 learning outcomes (for example, changes in curricular emphasis on Foundational Knowledge and Skills). Two scales measure changes in emphasis on Active Learning or Traditional Pedagogies. Changes in Faculty Culture were assessed by two scales tapping faculty participation in professional development activities related to instructional improvement and engagement in projects to improve undergraduate engineering education. An additional item asked program chairs to assess changes in faculty support for continuous improvement efforts. Finally, two scales (one for faculty and one for program chairs) measured changes in perceptions of the degree to which the faculty reward system emphasizes teaching. As seen in Table 3, the alpha reliabilities for these 14 scales range from .49 to .90, with only three falling below .72.
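The reliability coefficient reported for these scales is Cronbach’s alpha. A minimal sketch of the standard computation, assuming a DataFrame whose columns are the items of a single scale (the data and column layout are hypothetical):

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """alpha = k/(k-1) * (1 - sum(item variances) / variance(scale total))."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)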

Table 3. Variables Used in ANCOVA Analyses

Three scales measure students’ in-class experiences: Clarity and Organization, Collaborative Learning, and Instructor Interaction and Feedback. Two other scales represent students’ perceptions of Program Openness to Ideas and People and Program Climate. Additional out-of-class experiences are measured by single items that assess (1) the degree to which students or alumni were active in a student chapter of a Professional Society, and (2) the number of months students and alumni spent in each of the following: Internship/Cooperative Education, International Travel, Study Abroad, and Student Design Competition.

The learning outcomes scales are derived from a series of principal components analyses of 36 survey items. The research team developed these items through an iterative process designed to operationalize the 11 learning outcomes specified in EC2000. For each item, respondents indicated their level of achievement with regard to a particular skill on a 5-point scale (1 = “No Ability” and 5 = “High Ability”). The principal components analysis produced a nine-factor solution that retained more than 75% of the variance among the original 36 survey items. (See Table 3 for sample items and scale reliabilities, and see Strauss and Terenzini (2005) for the full factor structure and a description of the item development process.)
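A sketch of this scale-derivation step under stated assumptions: the 36 items are standardized and components are retained up to the variance threshold. Any rotation or item-to-scale assignment used in the published nine-factor solution is not reproduced here.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def extract_components(responses: np.ndarray, variance_target: float = 0.75):
        """responses: (n_respondents, 36) matrix of 1-5 self-ratings.
        Returns the number of components needed to retain variance_target
        of the total variance, plus their loadings."""
        z = StandardScaler().fit_transform(responses)
        pca = PCA().fit(z)
        cumulative = np.cumsum(pca.explained_variance_ratio_)
        n_components = int(np.searchsorted(cumulative, variance_target)) + 1
        return n_components, pca.components_[:n_components]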

Only four of the 28 multi-item scales developed for this study have alpha reliabilities below the conventional .70 standard (i.e., applied engineering skills, traditional pedagogies, engagement in projects to improve undergraduate engineering education, and perceived program climate). Such scales can be retained for analyses if they have substantive or theoretical value, which these do.

Analytical Procedures

Data were analyzed using analysis of covariance (ANCOVA) with multiple covariates to control for graduates’ precollege characteristics and program and institutional traits. In the initial analysis, means for the pilot/early, on-time, and deferred EC2000 cycle groups were compared across eight student in- and out-of-class experience scales, nine student outcome scales, 14 program change scales, and one single-item program change variable. The Bonferroni correction was applied to control the overall Type I error rate across these multiple comparisons. In the second phase of the analysis, the 1994 and 2004 engineering graduates’ mean scores were compared on the student experience and outcome variables, and effect sizes were calculated to determine the magnitude of the differences between the student cohorts. In the third and final phase, a series of pairwise multiple comparisons was used to locate mean differences in program changes, student experiences, and outcomes among the three EC2000 cycle groups.
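A minimal sketch of this analytic logic in Python with statsmodels; the outcome, covariate, and group variable names are hypothetical, and the original analyses were run in SPSS, so this illustrates the structure of the tests rather than the study’s code.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from statsmodels.stats.multitest import multipletests

    OUTCOMES = ["design_skills", "group_skills", "communication"]    # illustrative subset
    COVARIATES = "sat_score + hs_gpa + C(discipline) + C(inst_type)"  # hypothetical names

    def ancova_group_tests(df: pd.DataFrame, group: str = "cycle_group") -> pd.DataFrame:
        """For each outcome, fit an ANCOVA (group factor plus covariates), pull
        the covariate-adjusted F-test for the group factor, and Bonferroni-adjust
        the resulting p-values across the family of outcomes."""
        pvals = []
        for outcome in OUTCOMES:
            fit = smf.ols(f"{outcome} ~ C({group}) + {COVARIATES}", data=df).fit()
            anova = sm.stats.anova_lm(fit, typ=2)          # Type II ANOVA table
            pvals.append(anova.loc[f"C({group})", "PR(>F)"])
        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
        return pd.DataFrame({"outcome": OUTCOMES, "p_raw": pvals,
                             "p_bonferroni": p_adj, "significant": reject})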

Results of the EC2000 Study

We first examined the data to determine whether the new EC2000 accreditation standards were having the desired impact on engineering programs, student experiences, and student outcomes. We summarize here the major findings consistent with the logic of the conceptual model in Fig. 1.

Changes in Engineering Programs between 1994 and 2004

According to program chairs and faculty members, engineering program curricula changed considerably following implementation of the EC2000 criteria. Both program chairs and faculty members report increased emphasis on nearly all of the professional skills and knowledge sets associated with EC2000 Criterion 3.a-k. Three-quarters or more of the chairs report moderate or significant increases in their program’s emphases on communication, teamwork, use of modern engineering tools, technical writing, lifelong learning, and engineering design. Similarly, more than half of the faculty respondents report a moderate to significant increase in their emphasis on the use of modern engineering tools, teamwork, and engineering design in a course they taught regularly (these results are not shown here but are available from the first author).

EC2000’s focus on professional skills might be expected to lead to changes in teaching methods as faculty members seek to provide students with opportunities to learn and practice their teamwork, design, and communication skills. Consistent with that expectation, half to two-thirds of the faculty report that they have increased their use of active learning methods, such as group work, design projects, case studies, and application exercises, in a course they teach regularly (see Fig. 2).

Fig. 2. Faculty reports of changes in teaching methods since first teaching their focal course.

EC2000 also requires that engineering programs assess student performance on the a-k learning outcomes and use the findings for program improvement. Program chairs report high levels of faculty support for these practices (see Fig. 3). About three-quarters of chairs estimate that either more than half or almost all of their faculty supported curricular development and revision efforts (76%) and systematic efforts to improve their programs (73%). Seventy percent report moderate to strong support for the assessment of student learning, and about two-thirds report similar levels of support for data-based decision-making.

Fig. 3. Program chairs’ reports of faculty support for EC2000 initiatives.

Faculty generally corroborate this finding: As shown in Fig. 4, nearly 90% of the faculty respondents report some personal effort in assessment, and more than half report moderate to significant levels of personal effort in this area. For the most part, moreover, faculty members do not perceive their assessment efforts to be overly burdensome: Nearly 70% think their level of effort was “about right.”

Fig. 4. Faculty level of effort in assessment.

Learning how to conduct assessment or incorporate active learning methods into courses may influence faculty members’ engagement in professional development opportunities focused on teaching and learning. This study finds that more than two-thirds of the faculty members report reading about teaching in the past year, and about half engage in formal professional development activities, such as attending seminars or workshops on teaching, learning, and assessment, or participating in a project to improve engineering education. Depending on the activity, one-fifth to one-quarter of the faculty members report that in the past 5 years they have increased their teaching-and-learning-related professional development efforts.

One of the most important influences on faculty work in colleges and universities is the institutional reward system, which can encourage or discourage attention to teaching. The EC2000 accreditation criteria require that engineering programs be responsible for the quality of teaching, learning, and assessment, but do faculty members believe that their institutions value their contributions in these areas when making decisions about promotion, tenure, and merit-based salary increases? About half of the program chairs and faculty surveyed see no change in their institution’s reward system over the past decade. About one-third of the program chairs, however, report an increase over the past decade in the emphasis given to teaching in promotion, tenure, and salary and merit decisions. In contrast, roughly one-quarter of faculty respondents believe the emphasis on teaching in their reward systems decreased over the same period. Senior faculty members tend to report increased emphasis on teaching in promotion and tenure decisions, whereas untenured faculty are more likely to report decreased emphasis.

Changes in Student Experiences

Have the curricular and pedagogical changes reported by program chairs and faculty had a measurable impact on the educational experiences of engineering undergraduates? The evidence suggests they have. Indeed, the experiences of the 2004 graduates differ in a number of ways from those of their counterparts of a decade earlier. The direction of the changes, moreover, is consistent with what one would expect if EC2000 were putting down roots. As shown in Figs. 5 and 6, compared to their counterparts of a decade earlier and controlling for an array of individual and institutional characteristics, 2004 graduates reported:

  • More collaborative work and active engagement in their own learning;

  • More interaction with instructors and instructor feedback on their work;

  • More study abroad and international travel experiences;

  • More involvement in engineering design competitions; and

  • More involvement in professional society chapters.

Although they tend to be small, six of these eight differences between pre- and post-EC2000 graduates are statistically significant.

Fig. 5. Adjusted means for 1994 and 2004 graduates’ in-class experiences during their engineering programs.

Fig. 6. Adjusted means for 1994 and 2004 graduates’ out-of-class experiences during their engineering programs.

The 2004 graduates, however, are slightly less positive than their predecessors about the diversity climate in their programs. The generally high scores on the “program diversity climate” scale reflect student perceptions that faculty in their programs are treating all students fairly, as well as the general absence of personal harassment, offensive words, behaviors, or gestures directed at students because of their identity. Recent graduates perceive a significantly cooler climate for women and students of color than 1994 graduates, although the responses in both years are near the positive end of the five-point climate scale.

Because some survey items were ambiguous with regard to whether they described in- or out-of-class experiences, we treated the “program openness” scale separately from those with more certain settings (and thus it is not shown in Fig. 5). Items in this scale tapped graduates’ perceptions of how much their engineering program encouraged them to examine their beliefs and values, emphasized tolerance and respect for differences, made them aware of the importance of diversity in the engineering workplace, and how frequently they discussed diversity issues with their engineering friends. In contrast to their reports about the diversity climate in their programs, more recent graduates report their programs to be marginally more open to new ideas and people than do their predecessors (scale means are 2.80 and 2.74, respectively; effect size = .09, or 4 percentile points). The magnitude of this difference, however, suggests relatively little substantive change over the past decade in the extent to which engineering programs encourage their students to be open to new ideas and people.

Changes in Learning Outcomes

As noted in the methodology section, a factor analysis on the battery of items reflecting EC2000 Criterion 3.a-k learning outcomes yielded nine scales. These nine scales became the dependent variables in a series of multivariate ANCOVA analyses that allowed us to examine the differences between 1994 and 2004 graduates. These analyses controlled for the differences in student pre-college traits, as well as for the characteristics of the institutions and engineering programs they attended.

Each of the nine outcomes scales is based on self-reported ability levels at the time of graduation. A growing body of research over the past 30 years has examined the adequacy of self-reported measures of learning and skill development as proxies for objective measures of the same traits or skills. Although results vary depending on the traits and instruments examined, these studies report correlations of .50 to .90 between self-reports and actual student grades, SAT scores, the ACT Comprehensive Test, the College Basic Academic Subjects Examination, and the Graduate Record Examination.7 Moreover, most social scientists place a great deal of confidence in aggregated scores as indications of real differences between groups under the conditions prevailing in this study (Kuh, 2005).

Figures 7–9 show the results of the multivariate ANCOVA analyses. We see significant gains between 1994 and 2004 in graduates’ ability to

  • Apply knowledge of mathematics, science, and engineering

  • Use modern engineering tools necessary for engineering practice

  • Use experimental skills to analyze and interpret data

  • Design solutions to engineering problems

  • Function in groups and engage in teamwork

  • Communicate effectively

  • Understand professional and ethical obligations

  • Understand the societal and global context of engineering solutions

  • Recognize the need for, and engage in life-long learning.

In all cases, the differences are consistent with what one would expect under the assumption that EC2000 is having an impact on engineering education. All differences, moreover, are statistically significant (p < .001), with effect sizes ranging from +.07 to +.80 of a standard deviation (mean = +.36). Five of the nine effect sizes exceeded .3 of a standard deviation, an effect size that might be characterized as “moderate.”
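For reference, the effect sizes quoted throughout are standardized mean differences: the covariate-adjusted 2004-minus-1994 gap expressed in standard-deviation units (a Cohen’s-d-style index). A trivial illustration with made-up numbers:

    def effect_size(adj_mean_2004: float, adj_mean_1994: float, pooled_sd: float) -> float:
        """Adjusted-mean difference expressed in standard-deviation units."""
        return (adj_mean_2004 - adj_mean_1994) / pooled_sd

    # Hypothetical example: a 0.28-point gain on a 5-point scale whose pooled
    # standard deviation is 0.80 corresponds to an effect size of +.35,
    # close to the study's reported average of +.36.
    print(round(effect_size(3.98, 3.70, 0.80), 2))  # 0.35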

Fig. 7. Adjusted means for 1994 and 2004 graduates’ reports of their competence in mathematics, science, and engineering science skills.

Fig. 8. Adjusted means for 1994 and 2004 graduates’ reports of their competence in project-related skill areas.

Fig. 9. Adjusted means for 1994 and 2004 graduates’ reports of their competence in the contexts and professionalism cluster.

The largest differences between 1994 and 2004 graduates are in awareness of societal and global issues that can affect (or be affected by) engineering decisions (effect size = +.80 of a standard deviation), applying engineering skills (+.47 sd), group skills (+.47 sd), and awareness of issues relating to ethics and professionalism (+.46 sd). The smallest difference is in graduates’ abilities to apply mathematics and science (+.07 sd). Despite its small magnitude, this statistically significant difference is particularly noteworthy because some faculty members and others have expressed concerns that developing the professional skills specified in EC2000 might require devoting less attention to teaching the science, math, and engineering science skills that are the foundations of engineering. This finding indicates not only that there has been no decline in graduates’ knowledge and skills in these areas, but that more recent graduates report slightly better preparation than their counterparts a decade earlier. The evidence thus suggests that implementation of EC2000 is having a positive impact on engineering education, and that gains are being made at no expense to the teaching of basic science, math, and engineering science skills.

Differential Impact by Year of Accreditation Review

Having documented the EC2000-driven changes between 1994 and 2004, we next examined engineering programs reviewed earlier and later in the accreditation cycle. Are programs that deferred the adoption of EC2000 significantly different from those that did not? More specifically, do they differ in program changes, student experiences, and/or student outcomes?

As we note in the methodology discussion above, the ABET 6-year accreditation review cycle means that the 203 programs in our study stood for review under the new EC2000 standards at different points in time. After the 18 pilot programs (at four institutions) were reviewed in 1996–1997, those engineering programs scheduled for re-accreditation in 1998, 1999, and 2000 were given a choice to undergo review either under the new EC2000 standards or the old standards, thus deferring the EC2000 review for 6 years. Beginning in 2001, all programs were required to adhere to the EC2000 standards in order to be reaccredited. At the time of data collection in 2004, 18 programs had been reviewed twice under EC2000 (the pilots), 69 programs had opted to be reviewed early (in 1999–2000), 48 programs had elected to postpone their EC2000 review for 6 years and instead be reviewed under the old standards, and 68 programs underwent EC2000 review as required (shown in Table 2). Hence a key question: Do the engineering student experiences and outcomes in the 48 “deferred” programs differ significantly from the 155 others in 2004?

Congruent with our conceptual model, we first examined the 15 program measures (described in Table 3) and found relatively uniform changes in program curricula, in pedagogical practices, and in the general faculty culture (see Table 4). Changes in the faculty and program emphasis on communications skills, project skills, and assessment for improvement were especially strong but relatively uniform across all program groups, even those that had not yet experienced an EC2000 re-accreditation review. Only one of the 15 variables resulted in statistically significant differences: faculty in the Deferred group reported greater emphasis on teaching in the rewards structure than faculty in the other two groups.

Table 4. Adjusted Program Change Means, Standard Deviations, and Significant Differences among the Three Accreditation Groups

Next, we examined the 2004 in-class and out-of-class student experiences shown in Table 5 and found almost the same uniformity among the three groups. For each of the eight indicators of EC2000-relevant experiences and nine indicators of student learning, we tested for significant differences among the three groups, and only six of the 51 tests proved statistically significant at p < .05. As shown in Table 5, students in the Deferred programs report significantly greater engagement in collaborative learning than those in the Early and On-time programs, and they also report greater gains in ethics and professionalism than students in the On-time group. On the other hand, those in the Deferred group report less involvement in internships and cooperative education experiences than both the Early and On-time groups, and lower gains in experimental skills than the On-time group. There were no significant differences among the three cycle groups in the 13 other learning experiences and outcomes (e.g., instructor clarity, interaction and feedback, design competition, program openness and diversity, and participation in a student chapter of a professional society). We assumed that the students in the Early and On-time groups would report more EC2000-like experiences and greater learning gains than those in the Deferred group. Instead, we find that 45 of the 51 differences in means among the three groups on the 17 indicators are not significant, and that of the six significant tests, three favored students in the Deferred programs and three favored students in the other two groups.

Table 5. 2004 Adjusted Mean Differences in Student Experiences and Outcomes among the Three Accreditation Groups

Somewhat puzzled by these results, we next looked back at the 1994 data to compare the 1994 and 2004 results. We already knew from the faculty and program chairs that the changes in programs and curricula across the three groups were surprisingly uniform (as shown in Table 4), so we then examined the 1994 ANCOVA-adjusted means on the eight student experiences and nine outcomes for the three groups. To our surprise, we found that (a) the 1994 Deferred group had the lowest mean on 14 of the 17 variables; and (b) the gains between 1994 and 2004 and the associated effect sizes are greatest for the Deferred group on 13 of the 17 variables (the table is not included here but is available from the first author). Thus, we appear to have a homogenization, or catch-up, effect: the three groups were much more disparate in 1994 but much more alike in 2004.

This conclusion is supported by Table 6, which shows the 51 ANCOVA-adjusted mean differences among the three groups on the 17 indicators of student experiences and outcomes in 1994. The means for the Early and/or On-time groups exceed the means of the Deferred group on 14 of the 17 measures, and the lag by the Deferred group is especially significant in instructor clarity, interaction and feedback, design competition, diversity climate, design and problem solving, experimental skills, engineering skills, and life-long learning. Only in Professional Society Chapter participation did the Deferred group exceed the On-time (but not the Early) group. In summary, the Deferred group means in 1994 are significantly lower than those of the other two groups in eight of the 17 student experiences and outcomes, but by 2004 the Deferred group means are lower in only two of the 17 indicators and higher in two.

Table 6. 1994 Adjusted Mean Differences in Student Experiences and Outcomes among the Three Accreditation Groups

Conclusions

The 2004 evidence suggests a rather uniform level of post-EC2000 performance across most of the measures and most of the groups in our study. We suspect that, for one reason or another, the programs and institutions in the deferred group were lagging significantly behind the others in 1998–2000, and they appeared to know it. Thus, they elected to stand for re-accreditation under the old ABET standards and to defer the EC2000 review for 6 years so they could catch up.

This EC2000 Study provides the education research community with evidence of a connection between changes in accreditation and the subsequent improvement of programs, curricula, teaching, and learning in undergraduate programs. Our findings reveal that engineering programs have changed substantially since the implementation of EC2000. In general, programs increased their emphasis on curriculum topics associated with EC2000 as well as the emphasis on active learning strategies. Engineering faculty are more engaged in assessment activities and in professional development activities related to teaching and assessment. The nature of the student educational experience has also changed. Compared to alumni of a decade ago, today’s engineering graduates are engaged in more collaborative learning activities in the classroom, interact more with faculty, and are more actively involved in co-curricular activities such as engineering design competitions and student chapters of professional organizations. Today’s graduates also report significantly higher levels in all of the student learning outcomes specified in the EC2000 criteria, and in some cases, the differences between 1994 and 2004 graduates are substantial.

All of this evidence suggests that EC2000 is working as expected. We stress, however, that these changes are not attributable solely to EC2000. Other analyses have demonstrated that programmatic changes are due in part to EC2000 but that additional external and internal influences also shape engineering programs (for analyses and discussion, see Lattuca et al., 2006; Lattuca, Strauss, and Sukbaatar, 2004). Rather than the sole influence, EC2000 accreditation is an important driver in a set of convergent factors (including faculty initiatives, external funding for projects to improve teaching and learning, and employer feedback) that influence educational activities and learning in engineering programs.

The current study, however, provides additional convincing evidence of the important role that accreditation has played in engineering education. When we began this study, we expected to find that engineering programs varied in their responsiveness to EC2000. We assumed that programs that deferred their EC2000 accreditation review would demonstrate less change than those that adopted EC2000 early. We expected to find lower levels of curricular and instructional change, and different kinds of in-class and out-of-class student experiences, in the deferred programs. We also expected that graduates of deferred programs would report lower levels of preparation on the 11 EC2000 learning outcomes. Our analyses, however, indicate that, although the deferred programs were indeed significantly behind the EC2000 early adopters in 1994, by 2004 these differences among Early, On-time, and Deferred programs had largely disappeared.

Our findings suggest that EC2000 was indeed a catalyst for change. By 1996, all engineering programs knew that an EC2000 review lay ahead of them, and all began to respond. The deferred programs bought time by using the old criteria for reviews between 1998 and 2000, but they did so, apparently, knowing that they lagged behind the others. As these analyses indicate, deferred programs made considerable progress toward the goals expressed in EC2000 by 2004, actually catching up to programs that were, in 1994, better prepared for an EC2000 review.

A remaining question concerns the generalizability of these findings to regional accreditation and to professional accreditation in other fields. As noted earlier, regional, or institutional, accreditation often requires the use of outcomes assessment but does not specify particular learning outcomes for all institutions. The diversity of institutional missions presumably precludes the identification of a single set of learning outcomes for all colleges and universities. Yet the Association of American Colleges and Universities (AAC&U) recently published the results of its decade-long national effort to identify liberal learning outcomes for all college graduates. Through studies of high school and college students, faculty, employers, civic leaders, and accrediting bodies, AAC&U has galvanized a national consensus around a set of college student outcomes needed by today’s citizens. Reported in Taking Responsibility for the Quality of the Baccalaureate Degree (2004), these outcomes, such as written and verbal communication, inquiry and critical thinking, quantitative literacy, teamwork, and information literacy, are highly consistent with the professional outcomes specified in EC2000. Moreover, the AAC&U effort also discovered goal congruence among the regional and specialized accrediting agencies. The specialized accrediting agencies that participated in the AAC&U Project on Accreditation and Assessment (PAA) were unanimous in their belief that a strong liberal education is essential to success in each of their professions (AAC&U, 2004). For example, seven of the eleven EC2000 learning outcomes for engineering and technology graduates appear in almost any list of desired general education outcomes at leading institutions, regardless of major. Hence, academic autonomy and differential missions may not be the barriers they have been presumed to be, and the identification of some shared learning outcomes for all graduates does not negate the need to specify unique outcomes that respond to additional important institutional values.

In the case of professional accreditation, it is reasonable to wonder whether accreditors’ specification of learning outcomes will produce the kinds of consistent changes in programs and student outcomes we see in engineering. Like the regional accreditors, ABET enjoys nearly universal compliance: 96% of all engineering programs in the United States are ABET-accredited (due in part, but not wholly, to licensure requirements). The number of programs seeking accreditation, however, varies by professional field. The National Council for Accreditation of Teacher Education, for example, accredits about 70% of the schools of education in the US (Wise, 2005). Likewise, just over 70% of newly licensed nurses graduated in 2004 from programs accredited by the National League for Nursing Accrediting Commission (2005). In contrast, the Association to Advance Collegiate Schools of Business (2006) accredits less than a third of the estimated 1,500 schools of business in the United States. Widespread change may therefore depend on the perceived importance of accreditation; some institutions may simply decide that the return on investment from specialized accreditation does not justify overhauling their educational programs.

Despite variations in the types of accreditation and their application in particular fields, the study of the impact of EC2000 provides useful evidence that can inform discussions of accountability and quality assurance in higher education. Additional analyses of these data will provide further information on the role that accreditation plays in program improvement, but the current study at least answers the nagging question about what influence, if any, accreditation has on educational quality and student learning in fields like engineering. As importantly, the study provides a model for evaluations in other professional fields and might be replicated by other agencies as they seek to understand—and establish—their impact on undergraduate and graduate education.

Acknowledgements

This study was sponsored by the Accreditation Board for Engineering and Technology (ABET) and a grant from the National Science Foundation. The opinions expressed here do not necessarily reflect the opinions or policies of ABET, and no official endorsement should be inferred. The authors acknowledge and thank the other members of the research team who participated in the development of the design, instruments, and databases for this project: Dr. Patrick T. Terenzini, co-PI; Dr. Linda C. Strauss, senior project associate; Suzanne Bienart, project assistant; and graduate research assistants Vicki L. Baker, Amber D. Lambert, and Javzan Sukhbaatar.

Endnotes

  1.

    See standards of the Middle States Commission on Higher Education (2002), New England Association of Schools and Colleges (2005), North Central Association of Colleges and Schools (2003), Northwest Commission on Colleges and Universities (2006), Southern Association of Colleges and Schools (2004), and the Western Association of Schools and Colleges (2005).

  2.

    The seven disciplines targeted for the Engineering Change study are aerospace, chemical, civil, computer, electrical, industrial, and mechanical engineering. For the past 5 years, these seven fields have annually accounted for about 85% of all undergraduate engineering degrees awarded in the U.S.

  3.

    The sample is “disproportionate” because the team over-sampled the smaller disciplines (e.g., aerospace and industrial) to ensure an adequate number of responses for discipline-specific analyses.

  4.

    During the 1990s, the National Science Foundation funded 10 multi-institution Engineering Education Coalitions to design and implement educational innovations and to disseminate findings and best practices in an effort to encourage curricular and instructional reform. Wankat, Felder, Smith, and Oreovicz (2002) assert that NSF support has “probably done more to raise awareness of the scholarship of teaching and learning in engineering than any other single factor” (p. 225).

  5.

    Several chairs in the study administered more than one engineering program. We contacted these chairs by phone to ask whether they preferred to complete a single survey for both programs or to respond for each program separately. Most completed one survey for two programs, producing fewer program chair surveys than programs studied.

  6.

    We did not impute personal characteristics such as male/female, race/ethnicity, and discipline.

  7.

    For a review of this literature, see Volkwein (2005).