Innovative Higher Education, 31:115

Oral Communication Skills in Higher Education: Using a Performance-Based Evaluation Rubric to Assess Communication Skills

  • Norah E. Dunbar
  • Catherine F. Brooks
  • Tara Kubicka-Miller

DOI: 10.1007/s10755-006-9012-x

Cite this article as:
Dunbar, N.E., Brooks, C.F. & Kubicka-Miller, T. Innov High Educ (2006) 31: 115. doi:10.1007/s10755-006-9012-x


This study used The Competent Speaker, a rubric developed by the National Communication Association (S. P. Morreale, M. R. Moore, K. P. Taylor, D. Surges-Tatum, & R. Hulbert-Johnson, 1993), to evaluate student performance in general education public speaking courses as a case study of student skills and programmatic assessment. Results indicate that students taking the general education public speaking course are below satisfactory standards on five of the eight competencies defined by the National Communication Association and are above satisfactory standards on two of the eight competencies. Implications for this particular program, other communication departments, and communication across the curriculum in general education are discussed. We also offer suggestions for those in other disciplines or educational settings in the use of performance evaluation rubrics for assessing other student skills/knowledge and for training new teachers.


Keywords: assessment, general education, evaluation rubrics, faculty development

In 1993, the National Education Goals Panel, established as part of the “Goals 2000: Educate America Act” (Newburger, 1996, p. 70), resolved to develop an assessment system revealing the ability of college graduates to demonstrate certain skills. Students’ ability to communicate effectively was among those skills prioritized (Newburger, 1996). At the same time, the United States Department of Labor developed a commission that identified competencies students must develop in preparation for the global work environment. One of those competencies was the ability to listen and speak well in order to carry out certain work-related tasks (Newburger, 1996). The goal of teaching essential skills, such as oral communication, in higher education is to prepare students to be more effective employees and responsible citizens. For colleges and universities, one of the challenges becomes assessing their curricula so that proof of return can be provided to policy makers, taxpayers, academic officials, and parents (Erwin & Sebrell, 2003).

In order to meet the demands of external audiences, higher education institutions have sought to restructure their curricular offerings to bring them in line with current societal needs, to attract and retain students, and to help students progress toward graduation with certain skills well developed. The goals of many restructured general education programs reaffirm learning as the center of the educational enterprise with an emphasis on the acquisition of certain fundamental skills (Rockman, 2002). Historically, general education has sought to provide students with academic competencies essential to all academic majors (Allen, 2002).

One critically important skill is the ability to communicate, hence, the recent movement on campuses for “communication across the curriculum.” That is, communication skills are now taught in a wide range of general education courses, not just those offered by the communication department. The standardized assessment of general education is critical in higher education, and programs in the discipline of communication are being asked to lead institutions in efforts to improve communication skills across the curriculum (Allen, 2002; Dannels, 2001). Thus, the assessment of communication skills provides a good case study for those interested in specific assessments of other skills acquired in general education.

This study supports the current move toward skills-specific assessment methods; we argue that the collection of observational data is paramount in the assessment of any academic program. The purpose of this study was to document one department's assessment of instructional effectiveness in the area of oral communication skills in a foundational general education course. This study provides a model of performance-based assessment using a standardized rubric for practitioners and scholars in a variety of disciplines and may be used by any institution wishing to undertake a large-scale assessment of general education competencies.

Assessment of General Education Skills

The Need for Communication Competency

Communication education is positively linked to academic and professional success for students (Rubin & Morreale, 1996). Thus, students need speaking and listening skills that will help them succeed in future courses and in the workplace. A basic communication course can offer students knowledge of effective communication techniques and provide a safe arena for developing and practicing skills, which can create positive feelings about communicating in the future. The assessment of these oral communication skills can benefit students and their academic programs in many ways because skill deficiencies can be addressed using solutions at the classroom, department, or institutional levels. Walvoord, Bardes, and Denton (1998) argued that “closing the feedback loop” when assessing skills is important at all three levels because it means that teachers get data on their students’ progress early, their initiatives for change can be supported, problems that require institution-wide solutions can be addressed, and assessment activities can be documented for accreditors and other external audiences.

Establishing communication-based competencies is not just vital for educators in communication departments. It has become increasingly important for a variety of disciplines such as Business, Engineering, Health, and Biology. Technical disciplines have begun to recognize and explore the role of oral performance in their curricula (Dannels, 2002). Oral communication is laden with contextual motivations, purposes, audiences, and strategies specific to each field of inquiry. Thus, each discipline must be able to assess communication to ensure that appropriate skills are being developed in the classroom. Assessment practices must be developed that evaluate the extent to which students achieve the communication outcomes each discipline determines to be valued, salient, and relevant (Dannels, 2001). Although this study assessed skills appropriate for the field of communication as taught in a general education setting, it can be replicated in any discipline once an appropriate rubric is selected.

Assessment of Learning Outcomes in Communication

The National Communication Association (NCA)[1] has long been actively involved in assessment within the discipline of Communication. For example, in 1970, the Committee on Assessment and Testing (CAT) was formed to focus on the testing of speech communication skills. In response to the broad need for assessment strategies and tools, CAT produced several publications in the decades that followed on assessing oral communication skills for students of all ages. In July of 1990, a conference on communication assessment was held that produced recommended criteria for the assessment of oral communication skills (Rubin, Welch, & Buerkel, 1995; Speech Communication Association, 1993).

Through these academic meetings, the NCA has identified several communication skills that are vital for students to learn at both basic and advanced levels (Morreale, Rubin, & Jones, 1998). These skills include, among others, the ability to recognize when it is appropriate to speak, to speak clearly and expressively, to present ideas in an organizational pattern that allows others to understand them, to listen attentively, to select and use the most appropriate and effective medium for communication, to structure a message appropriately, to identify others’ level of receptivity to their message, and to give information and support it with illustrations and examples. One result of the summer 1990 conference on communication assessment was the development of an evaluation instrument called The Competent Speaker, an evaluation form that identifies standards for evaluating students’ eight basic speaking competencies: (1) being able to choose an appropriate topic and restrict it according to the purpose and the audience; (2) communicating the purpose of the speech in a manner appropriate for the audience and the occasion; (3) using appropriate supporting material to fulfill the purpose of the oral discourse; (4) using an organizational pattern appropriate to the topic, audience, and occasion; (5) employing language appropriate to the designated audience; (6) employing vocal variety in rate, pitch, and intensity; (7) articulating clearly and using correct grammar and pronunciation; and (8) demonstrating nonverbal behavior that supports the verbal message (Morreale, Moore, Taylor, Surges-Tatum, & Hulbert-Johnson, 1993; Morreale et al., 1998).

In addition to speaking competencies for general education courses at the college level, the NCA has also identified competencies for other areas of communication for both basic and advanced levels including skills in persuading, informing, speaking, listening, and relating interpersonally. Advanced skills are blends of knowledge, skill, and attitude; they require greater levels of behavioral flexibility/adaptability. For instance, a basic skill such as “identify communication goals” at an advanced level becomes “manage multiple communication goals.” This advanced skill requires both identification of the goals and the behavioral component of managing the goals, both of which require adaptability (Morreale et al., 1998). Advanced skills also require reasoning and audience analysis. Examples of advanced skills include being able to understand people from other cultures, organizations, or groups; being able to identify important issues or problems; the ability to draw conclusions; and being able to understand others to manage conflict better. These skills involve adapting messages to the demands of the situation or context (Jones, 1994) and require greater emphasis on creating appropriate and effective messages, two main components of competence (Spitzberg, 1983).

Instructors and administrators can use some or all of the expected student outcomes to inform the design of a basic communication course. Academic institutions can use some or all of the outcomes identified by NCA to describe campus expectations for students in regard to the general education curriculum (Morreale et al., 1998; Rosenbaum, 1994). Many scholars have delineated important communication skills and have suggested measurements that can be used to assess communication competence. Research on communication competency is extremely varied, and no one definition prevails (Jones, 1994; McCroskey, 1982; Rubin & Graham, 1988; Rubin, Graham, & Mignerey, 1990; Wiemann & Backlund, 1980). Thus, in assessing student learning in a basic speaking course, this project utilized guidelines set forth by the NCA.

Using Rubrics in Assessment

In assessing communication competencies among students, it is especially important to gauge student ability in general education courses where large numbers of students need to learn fundamental skills. The assessment of student learning has historically focused on standardized practices such as exams and the use of survey data (Rosenbaum, 1994). Rosenbaum (1994) separated traditional assessment means such as tests and surveys from nontraditional means such as portfolios, capstone courses, oral assessments, and exit interviews. Multiple approaches are needed in the assessment of learning (Erwin, 1991; Rosenbaum, 1994), many of which require the use of a rubric. Nontraditional performance-based assessment is on the rise, and assessment such as the analysis of portfolios or performance events is gaining in popularity at all educational levels. Many disciplines are moving toward new means for assessment which are authentic and discipline-specific (Borko, 1997; Dougan, 1996; Goldberg, Roswell, & Michaels, 1996; Hambleton & Plake, 1995). Prioritizing the effective design, understanding, and competent use of rubrics for these various types of assessment methods is paramount.

Distinctions can be made in order to clarify the ways in which a rubric can and should be used (Mager, 1997). Criterion-referenced evaluation occurs when student performance is rated according to standards set by the discipline or department whereas norm-referenced evaluation occurs when students are evaluated on the basis of comparisons made to the performance of other students (Mager, 1997). In addition, all components of the rubric must measure accurately the objective of that particular component. When communication assignments are given in a course, such as a speech or presentation, rubrics are often utilized in grading student oral presentations; thus they are appropriate for the assessment of basic oral communication skill level. Moreover, for the assessment of basic oral communication skill level across an entire program, criterion-referenced evaluation based on standards set by the discipline or department is appropriate. The Competent Speaker is one such rubric. It contains competencies agreed upon by the communication discipline's national organization, the NCA, and has been tested for reliability and validity (Morreale et al., 1993).

When utilizing a rubric, evaluators either use an analytic rating system, whereby each component is scored individually, or rate performance holistically on the basis of an overall impression (Pomplun, Capps, & Sundbye, 1998). Evaluators of student presentations typically utilize an analytic rating system so that students can understand which communication competency area to work on for future public speaking opportunities. For each competency on a rubric, evaluators must understand its conditions and terms, and they must make inferences and approximations of those standards (Mager, 1997). To ensure that evaluators infer similar ratings for each competency on a rubric, adequate training should be conducted. When training evaluators to use The Competent Speaker, for example, time must be spent watching speeches and then discussing the evaluation of each speech with regard to each competency, so that insights can be shared between the evaluators, which, in turn, promotes more consistent individual scoring across evaluators. Thus, rubrics can be an influential tool in faculty development efforts in terms of developing and maintaining consistency among teachers.

The intention of this study was to assess the instructional effectiveness of one particular program in terms of its general education public speaking courses. This study also explored the utility of The Competent Speaker for use in individual classrooms, in programmatic assessment, and in the training of new communication teachers. Finally, we offer suggestions for instructors from all disciplines to utilize The Competent Speaker or a similar rubric in their own attempts to assess students’ oral communication or student performances in other academic areas. That is, a similar methodology can be adopted in assessing any general education skill with an appropriate discipline-relevant rubric.

The Study


Students giving speeches (N=100) in seven different sections of the general education lower-level public speaking class were videotaped in their classrooms (Kubicka, 2003). The sample included 36 men and 64 women, which, according to the Office of Institutional Research at our institution (2005), reflects the campus population in general. The speeches were recorded near the end of the semester so as to mirror the students’ skill level upon completion of a basic public speaking course. The students had also filled out questionnaires earlier in the semester that included demographic information and other data. Participants represented a wide variety of ethnic backgrounds: Euro-American (41.2%), Hispanic (26.5%), African American (5.9%), Asian American (7.4%), and other (19.1%). The average age of respondents was 19.79 years. Most students reported they had never taken a communication class before (76.6%), but 15.6% reported they had taken at least one communication class, and 7.9% reported they had taken two or more communication classes before. These data were collected from students on a voluntary basis with approval from the appropriate review board for the protection of human subjects.


Although several instruments exist to evaluate speaking abilities, The Competent Speaker has been tested for reliability and validity; it arose from communication competence research and from published guidelines for public speaking assessment (Morreale et al., 1993). The rubric includes eight speaking competencies derived from the NCA's standards: (1) choosing a topic; (2) communicating the purpose of the speech; (3) using appropriate supporting materials; (4) organization; (5) language use; (6) vocal variety; (7) correct articulation, grammar, and pronunciation; and (8) using appropriate nonverbal behaviors (Morreale et al., 1993, 1998). The first and second authors served as coders for this study. In using The Competent Speaker, coders utilized an analytic rating system, rating student performance on each of the eight competencies individually. Each competency can receive a score of “excellent” (3 points), “satisfactory” (2 points), or “unsatisfactory” (1 point). Thus, the maximum score a speech can receive is 24, and the minimum score is 8.
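The analytic scoring scheme just described can be sketched in a few lines. The competency labels below follow the rubric; the scoring function itself is our illustration, not code from the study:

```python
# A minimal sketch of The Competent Speaker's analytic scoring scheme:
# eight competencies, each rated excellent (3), satisfactory (2), or
# unsatisfactory (1), summed to an overall score between 8 and 24.

COMPETENCIES = [
    "chooses and narrows a topic",
    "communicates thesis/specific purpose",
    "provides appropriate supporting material",
    "uses an appropriate organizational pattern",
    "uses appropriate language",
    "uses vocal variety in rate, pitch, and intensity",
    "uses appropriate pronunciation, grammar, and articulation",
    "uses physical behaviors that support the verbal message",
]

SCORES = {"excellent": 3, "satisfactory": 2, "unsatisfactory": 1}

def total_score(ratings):
    """Sum the eight per-competency ratings into one overall score (8-24)."""
    if set(ratings) != set(COMPETENCIES):
        raise ValueError("a rating is required for every competency")
    return sum(SCORES[label] for label in ratings.values())
```

A speech rated “satisfactory” on all eight competencies thus receives a total of 16, the scale midpoint used in the overall analysis.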

Coders worked together over the course of two days to code the speeches included in this study. They began each of those two days by training with one another in order to gain consistency in perceptions of possible scores for each of the speaking competencies. During these sessions, coders watched each speech, rated the speech using the evaluation rubric, and then discussed their ratings until agreement was reached. Reliability was checked once official coding began and again periodically during the coding sessions to assure it remained within acceptable levels. Reliability was checked for 25% of the sample, and the intercoder reliability for the entire scale was .96 using Ebel's intraclass correlation. The intercoder reliabilities for the eight individual competency areas ranged from .82 to .97.
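As a rough illustration of this reliability check, a one-way ANOVA-based intraclass correlation for two coders can be computed as below. This is a standard form of the statistic (reliability of the mean of k ratings); the exact variant Ebel describes may differ in detail, and the function is our sketch rather than the study's own procedure:

```python
# Illustrative one-way intraclass correlation for k coders rating n
# speeches. Computed from one-way ANOVA mean squares; this is one
# common ANOVA-based form of the statistic, offered as a sketch only.

def icc_oneway(ratings):
    """ratings: list of per-speech score lists, one score per coder."""
    n = len(ratings)       # number of speeches (targets)
    k = len(ratings[0])    # number of coders
    grand = sum(sum(r) for r in ratings) / (n * k)
    target_means = [sum(r) / k for r in ratings]
    # Between-target and within-target mean squares.
    ms_between = k * sum((m - grand) ** 2 for m in target_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for r, m in zip(ratings, target_means) for x in r
    ) / (n * (k - 1))
    # Reliability of the average of the k coders' ratings.
    return (ms_between - ms_within) / ms_between
```

Perfect agreement between coders yields 1.0; disagreement inflates the within-target mean square and pulls the coefficient down.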


The results for each individual competency area on The Competent Speaker (see Table I) indicate that students performed at a higher level on some competencies (choosing a topic, communicating the purpose, and using an appropriate organizational pattern) than on others (providing supporting materials, using appropriate language, using vocal variety, using proper pronunciation, articulation, and grammar, and using nonverbal behaviors that support the verbal message). We conducted one-sample t-tests to determine whether the mean for each competency differed significantly from the mid-point (“satisfactory”) on the scale. The results (see Table I) indicate that students did not differ significantly from the middle “satisfactory” score on the “choosing a topic” competency and were significantly better than “satisfactory” on “communicating the purpose” and “using an appropriate organizational pattern.” However, the students examined were significantly lower than “satisfactory” on the remaining five competency areas.
Table I

Means and Standard Deviations for Coded Speaking Competencies

Competency area                                              One-sample t-test results
Chooses and narrows a topic                                  t(99) = .54, ns
Communicates thesis/specific purpose                         t(99) = 16.37, p < .001
Provides appropriate supporting material                     t(99) = −3.08, p < .05
Uses an appropriate organizational pattern                   t(99) = 2.89, p < .05
Uses appropriate language                                    t(99) = −2.91, p < .05
Uses vocal variety in rate, pitch, and intensity             t(99) = −8.87, p < .001
Uses appropriate pronunciation, grammar, and articulation    t(99) = −9.61, p < .001
Uses physical behaviors that support the verbal message      t(99) = −9.64, p < .001

The students’ overall competency was calculated by adding together their scores from each individual competency. The mean for this sample was 15.01 (SD=2.42). We conducted a one-sample t-test to determine whether or not this differed significantly from the mid-point on the scale (a score of 16). The mean for the group was significantly lower than the possible midpoint, t(99)=−4.10, p < .001.
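The overall test can be reproduced directly from the summary statistics reported above (M = 15.01, SD = 2.42, n = 100, test value 16); the small sketch below recomputes the t statistic only, with the p-value coming from the t distribution with n − 1 = 99 degrees of freedom:

```python
# One-sample t statistic from summary statistics:
#   t = (mean - mu) / (sd / sqrt(n))
# Applied to the overall competency scores reported in the study.

import math

def one_sample_t(mean, sd, n, mu):
    """t statistic for a one-sample t-test computed from summary stats."""
    return (mean - mu) / (sd / math.sqrt(n))

t = one_sample_t(mean=15.01, sd=2.42, n=100, mu=16.0)
# t is about -4.09, matching the reported t(99) = -4.10 up to rounding
# of the mean and standard deviation.
```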


The results of this study reflect the recognized need for assessment by academic departments. An institution should not simply assume that it is meeting students’ needs without conducting an objective, overall assessment of the main competencies among students. This study suggests that this particular department can learn a great deal from the assessment of student performance in basic, general education skills courses. The evaluation of student speakers from general education courses indicates that this department's success in teaching even the most basic skills to non-majors is limited. Although the students excelled at some competencies, such as selecting a topic, communicating the purpose of their speech, and organizing the speech, some of the most fundamental skills have not been attained. That is, a variety of competencies relative to students’ delivery, one of the most fundamental communication skills, has not been achieved to a satisfactory degree. More attention must be paid to teaching students the importance of the language they use and the verbal and nonverbal methods of delivery. Overall, the fact that students were rated “unsatisfactory” in five out of eight competency areas delineated on The Competent Speaker rubric is cause for alarm. It must result in changes in the way this department teaches its basic public speaking course, through a re-examination of both the content of the course and the way in which it is delivered. It is possible that the lack of student skills is a product of poor course design rather than inadequate instructional delivery; the specific causes need to be explored further in this particular case.

One of the most important aspects of assessment should be “closing the loop” by providing feedback to the instructional agency or department so that improvements can be made where warranted and strengths and weaknesses can be identified. Clearly, this communication studies program still has work to do in its basic general education course. Although this is disappointing for faculty members, the result highlights the importance of conducting program-wide assessments. Without large-scale assessments at the programmatic level, deficiencies will not be recognized or addressed, and academic departments may continue with a status quo that may or may not be instructionally effective. We hope that other academic programs in higher education will learn from our experience and conduct similar assessments of their general education course offerings using whatever criterion-referenced rubric is appropriate for their discipline.

This study illuminates the practical applications of using The Competent Speaker scoring rubric or a similar rubric, two of which are discussed here. First, evaluating student skills or knowledge using a rubric such as this can be useful in training new teachers or for comparing perceptions of seasoned teachers. In this research, the training process itself proved to be a useful exercise in determining standards for the evaluation of oral speeches. The coders in this study are experienced instructors who have taught public speaking numerous times but rarely get the chance to compare their evaluations with those of another. To achieve high intercoder reliability, much discussion was necessary to bring the coders to similar standards and to keep their evaluations in line with the training manual descriptions. Most disciplines have their own widely acknowledged standards, perhaps as articulated by their national organization, which could be used to create a rubric and training activity similar to those utilized in this study. Second, students can benefit from the use of a standardized rubric based on discipline-specific criteria because it clearly identifies the competencies expected and, once they have been assessed, allows them to see which areas are in the greatest need of improvement.

This study also highlights the challenges in using a rubric for assessment. First, The Competent Speaker is an extremely general rubric. Specific assignments may warrant that additional competencies be added to it or that competencies be defined differently in keeping with that particular assignment. Our study aimed to evaluate a variety of speech types, so keeping with the general rubric helped ensure that speeches were evaluated similarly regardless of content or type. Second, the use of only three standards (excellent, satisfactory, unsatisfactory), with the clear definitions of each provided by The Competent Speaker training manual (Morreale et al., 1993), reduced the amount of inference required of the raters and thus allowed coders to achieve a high level of intercoder reliability. However, using the rubric in the classroom setting may require finer gradations of these standards to make clearer distinctions between superior, adequate, and poor performances. Third, the time and expense involved in adopting this type of assessment are not trivial. In this study, the student speeches were recorded for later examination, and considerable time was spent establishing acceptable intercoder reliability prior to the coding of the speeches. Departments often institutionalize the assessment of students at the end of their academic program in the form of portfolio review or exit exams and interviews. Perhaps oral communication assessment can be implemented in a way that coincides with these other forms of assessment and does not create an additional burden for faculty.

Finally, the use of this particular instrument provides insight into the encouragement of teaching communication across the curriculum. Dannels (2001) correctly argued that oral communication is not something that occurs exclusively in our public speaking courses. Learning to communicate is context-driven and takes place across the curriculum. Instructors in a wide variety of disciplines may find our experience with The Competent Speaker informative and may wish to adapt the rubric to serve their own purposes. Teaching communication skills to suit the specific needs of students in Business, Biology, English, or Engineering can be accomplished while still adhering to the basic skills that students need as described by the NCA.


The proliferation of standards described by our discipline makes communication departments integral in the assessment of communication skills across the curriculum. When it comes to assessment, communication departments can and should be at the forefront and should be teaching those in other fields about our standards for basic skills in general education curricula. We hope that this project will inspire other departments to institute program-wide assessment of basic oral communication skills as a gateway to the assessment of other skills across the curriculum.

We also hope that departments will develop and utilize evaluation rubrics for faculty development purposes. Our study suggests that generating dialogue among teachers may increase consistency in teachers’ evaluations of student competency. This finding can be applied to a multitude of skill areas in addition to communication, such as writing or technology use. Indeed, discussing the use of rubrics designed to assess a variety of student skills can be a helpful component of faculty development efforts both within and across academic disciplines and departments. Taking the time and working together to agree upon the ways in which student skill levels are defined and assessed may assist universities and their faculty in structuring the pedagogical training of faculty who are new to teaching foundational skills. This, we believe, may very well enhance the overall educational experience of students and prepare them better for the twenty-first century marketplace.


[1] National Communication Association was previously known as the Speech Communication Association.



This research was funded by a California State University Long Beach Assessment Grant received by the first two authors.

Copyright information

© Springer Science+Business Media, Inc. 2006

Authors and Affiliations

  • Norah E. Dunbar (1)
  • Catherine F. Brooks (1)
  • Tara Kubicka-Miller (1)

  1. Department of Communication Studies, California State University, Long Beach, USA
