The influence of cognitive domain content levels and gender on designer judgments regarding useful instructional methods

Development Article


Instructional theory is intended to guide instructional designers in selecting the best instructional methods for a given situation. There have been numerous qualitative investigations into how instructional designers make decisions and the alignment of those decisions with theoretical influences. The purpose of this research is to explore more quantitatively how instructional designers actually use instructional planning theory to judge the usefulness of instructional methods. We asked 56 instructional designers to rate the usefulness of 31 instructional methods for six different cognitive domain content level conditions. The results show that content level has a statistically significant influence on a designer’s judgments regarding the usefulness of an instructional method. A designer’s gender also has a statistically significant influence on those judgments, although the weak effect size limits this result. Overall, the results provide evidence that supports the core principles of instructional planning theory, specifically method generality. The results also provide instructional designers further guidance in selecting the most useful instructional methods for cognitive domain content levels.


Keywords: Design theory · Methods · Practice · Applications · Cognitive domain

To achieve worthwhile instructional outcomes, instructional designers must make good decisions regarding the methods they use in their learning experiences. For instance, according to Weston and Cranton’s (1986) prescriptions, an instructional designer who specifies using programmed instruction to teach learners facts about a cell is likely making a good decision. Alternatively, an instructional designer who specifies the same method to teach learners to design computer user interfaces is likely making a poor decision. Good decisions positively impact outcomes related to effectiveness, efficiency, and appeal, whereas poor decisions negatively impact those outcomes (Reigeluth 1983).

To make good decisions about the design of instructional solutions, Reigeluth and Carr-Chellman (2009) suggest that instructional designers use instructional theory as the basis for those decisions. Instructional theory as a concept is quite broad, in that it covers everything from what the instruction should be like to how the instruction should be evaluated. But at instructional theory’s core are the basic principles of design theory, which involves solving problems by knowing good methods and when to use those methods.

Instructional theory is well-established, and it has been updated and enhanced as the field learned more about how designers make decisions. However, as suggested by Yanchar et al. (2010), some designers question the relevance of instructional theory in their design work, especially in decision making. Yanchar et al. (2010) go on to suggest that, “there is clearly an uneasiness about the applicability of theories and other conceptual tools in everyday design work” (p. 41). To address this issue, Yanchar et al. (2010) believe that it is important to resolve this concern through several approaches, namely by doing a better job connecting theory with practice.

There have been numerous investigations into how instructional designers make decisions and the alignment of those decisions with theoretical influences. Rowland (1992) examined the question of whether designers actually do what theories suggest, finding that designers had little use for instructional design principles and formal plans. Similarly, Wedman and Tessmer (1993) found that instructional designers frequently omit a variety of instructional design activities due to such constraints as lack of time or decisions being made by others. Christensen and Osguthorpe’s (2004) study on instructional designer decision making found that only 50 % of instructional designers surveyed reported regularly using theory when making instructional strategy decisions. Jonassen (2008) argues that an instructional designer’s decisions, “are driven less by accepted principles than they are by constraint satisfaction and beliefs” (p. 21). Thus, the better one can address the constraints, the more successful the design.

The perspectives illustrated in the paragraph above come from qualitative interviews and surveys that ask designers to reflect on their past design decisions. Our interest is more direct: we look specifically at how instructional designers assess the usefulness of an instructional method for a given instructional situation. We argue that instructional theory is relevant to designers and that designers make judgments in a way that is consistent with its core principles. Our premise is that instructional designers use theory to guide their design, but their usage of theory is likely tacit—that is, not apparent, even to them. Thus, the purpose of this research was to explore the question of how instructional designers use instructional theory to assess the usefulness of instructional methods. To answer this question, we collected data over six years from nearly 60 instructional designers regarding judgments of method usefulness in different conditions. This is the first of several articles that will explore this question across the dimensions of content, context, and learning outcomes. This article focuses on judgments associated with cognitive domain content. We start with background on instructional theory, from which we derive our hypotheses. Then, we describe our method and the results we collected. Finally, we discuss the results and explore the implications and limitations of this research.


Given the broad nature of instructional theory, we must clarify what we mean by instructional theory for this research. To us, instructional theory is an approach for how instructional designers solve problems. The approach has two parts, the situation and the methods (Reigeluth and Carr-Chellman 2009). The situation, which designers assess through analysis, identifies the empirical facts (conditions) and the matters of opinion (values) associated with the problem to be solved. The methods are the possible solutions for the problem.

As we mentioned earlier, instructional theory is broad, comprising six unique design theories (Reigeluth and Carr-Chellman 2009). One of those design theories, instructional planning theory, is the primary context for our study. Instructional planning theory represents various processes for planning instruction. One process governed by instructional planning theory is choosing instructional methods, such as lecture, game, demonstration, and so on. Thus, when we use the term instructional theory in this paper, we are referring to the process of choosing instructional methods like lecture, game, demonstration, and so on, using the conditions and values associated with the situation as the choice criteria.

When one compares our definition with the one Christensen and Osguthorpe (2004) used, one can better understand why only 50 % of their survey respondents said they use instructional theory. The researchers had participants respond to a single item that asked about their use of instructional theory: “I use specific instructional design (prescriptive) theories or research.” The phrase instructional design theories in this item is ill-defined since, as we have shown, it likely means different things to different participants. Had the researchers instead asked, “I choose instructional methods after assessing the situation,” Christensen and Osguthorpe’s results regarding whether designers use instructional theory would likely have been much different.

To illustrate how our specific definition of instructional theory works, let’s look more closely at a condition, in this case, content, and how it guides the selection of instructional methods. Reigeluth and Carr-Chellman (2009) define content as, “The nature of what is to be learned, defined comprehensively to include not only knowledge, skills, and understandings, but also higher-order thinking skills, metacognitive skills, attitudes, values, and so forth” (p. 24). Using this definition, instructional designers operationalize content as a learning objective, and they write the learning objective to relate to a specific learning domain and level within that domain (Morrison et al. 2011). Therefore, when a designer assesses content, they are essentially assessing a level in a learning domain. Bloom’s (1956) learning domains model provides designers the taxonomy that describes the nature of what a learner might learn. For example, the levels for cognitive domain content are knowledge, comprehension, application, analysis, synthesis, and evaluation.

When designing instruction, classifying content by level within a learning domain helps a designer choose a useful instructional method. For example, if the content is related to the comprehension of a specific concept, and the desired outcome is recognition of that concept, then a designer might choose to use a micro-level concept classification method to teach that concept (Reigeluth 1999b). Or, at a more macro level, a designer might choose lecture, modularized instruction, or programmed instruction as a method most compatible with this type of content (Weston and Cranton 1986).
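The level-to-method mapping described above is essentially a lookup: given a classified content level, a designer consults a set of prescribed methods. The sketch below encodes Weston and Cranton’s (1986) prescriptions as they are reproduced later in this article (Table 5); the function name and data structure are our own illustration, not part of any published tool.

```python
# Hypothetical sketch: condition-based method selection for the cognitive
# domain, using Weston and Cranton's (1986) prescriptions (see Table 5).
WESTON_CRANTON_PRESCRIPTIONS = {
    "knowledge": ["lecture", "programmed instruction", "drill and practice"],
    "comprehension": ["lecture", "modularized instruction",
                      "programmed instruction"],
    "application": ["discussion", "simulations and games", "CAI",
                    "modularized instruction", "field experience",
                    "laboratory"],
    "analysis": ["discussion", "independent/group projects", "simulations",
                 "field experience", "role-playing", "laboratory"],
    "synthesis": ["independent/group projects", "field experience",
                  "role-playing", "laboratory"],
    "evaluation": ["independent/group projects", "field experience",
                   "laboratory"],
}

def candidate_methods(content_level: str) -> list[str]:
    """Return the methods prescribed for a cognitive domain content level."""
    return WESTON_CRANTON_PRESCRIPTIONS[content_level.lower()]
```

For example, `candidate_methods("knowledge")` includes lecture, while `candidate_methods("evaluation")` does not, mirroring the shift from instructor-centered to experiential methods as content level rises.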

The principles of instructional theory and research related to selecting instructional methods suggest that instructional methods have varying levels of power (Reigeluth and Carr-Chellman 2009). Power describes how strongly a method contributes to attaining a learning goal. The stronger the power, the more likely a designer is to choose that method. In the context of our study, we use the term useful as a means for designers to describe a method’s level of power in a given situation. For example, following the prescriptions in Weston and Cranton (1986), if content is at the knowledge level, then most designers should rate lecture as a more useful (powerful) instructional method and group project as less useful. Conversely, if the content is at the evaluation level, then most designers should rate group project as more useful and lecture as less useful. We expect that designers, as rational actors in applying instructional theory, will act consistently with the core premise of instructional theory and its related prescriptive research. Hence, we propose the following hypothesis:


Cognitive domain content levels will influence a designer’s judgment regarding the usefulness of an instructional method.

A designer who follows Weston and Cranton’s (1986) prescriptions regarding the selection of instructional methods will no doubt design good instruction. However, will it be great instruction? Reigeluth (1999a) suggests that values influence the criteria designers use to judge methods:

Traditionally, instructional-design process models have relied primarily on research data about which methods work best. But which methods work best depends on what criteria you use to judge those methods. Those criteria reflect your values. (p. 12).

Whereas conditions reflect design variables that are external to the designer, values are design variables held internally by the designer. Because values influence a designer’s design criteria, systematic differences in what males and females value could translate into systematic differences in the value they place on various instructional methods. According to Rokeach (1973), values are the beliefs a person holds regarding desired goals or end states (terminal values) and modes of conduct (instrumental values). This model of values has clear parallels when applied to learning contexts. For example, Beatty (2002) has demonstrated the connection between instrumental values, such as interactive dialog, and terminal values related to an educational goal. Similarly, a designer who labels themselves a constructivist, cognitivist, or behaviorist is, in essence, exposing an instrumental value, and that value will lead to an instructional solution (a terminal value) that is consistent with that label.

While a person’s terminal and instrumental values can be determined by scales such as the Rokeach Values Survey (Rokeach 1973), research suggests that gender may serve as a proxy for understanding the influence of values on design. A study by Di Dio et al. (1996) investigated how values manifested themselves differently between men and women. They found that men were aligned with agentic values related to freedom and accomplishment. Women, on the other hand, were aligned with communal values of friendship and equality. Cross and Madson (1997) come to a similar conclusion in their meta-analysis of self-construals and gender. Their research suggests that, “Women tend to develop an interdependent self-construal, whereas men tend to develop an independent self-construal” (p. 22). This means that for women, relationships are a strong element of their self. Men, on the other hand, have a self that is more independent.

Given Reigeluth and Carr-Chellman’s (2009) construct of values about methods, it is likely that the opinions of male designers would be more favorable to agentic instructional methods while female designers’ opinions would be more favorable to communal instructional methods. This suggests our second hypothesis:


For cognitive domain content, male instructional designers will judge the usefulness of instructional methods differently than female instructional designers.



The participants for this research were 56 people who enrolled in a yearly online, capstone graduate-level instructional strategies course in an instructional design program at a large Midwestern university between 2007 and 2012. Participants included practicing instructional designers employed at corporations, consulting firms, and colleges; in-service K-12 teachers; people in related fields (for example, journalism) seeking a career change to instructional design; and full-time students. The participants were seeking an instructional design certificate, a Master’s degree, or a Ph.D. Geographically, participants were predominantly located throughout North America, with some joining the course from Asia and Europe. Thirty-nine of the participants were female and 17 were male. Table 1 shows the number of participants in each year.
Table 1

Number of research participants between 2007 and 2012

(The yearly participant counts did not survive extraction.)

The materials used by participants included a Most Useful Instructional Strategy (MUIS) ranking template, descriptions of instructional methods from Reigeluth (1999a), and a link to an online resource at the University of Washington that contained information about learning domains and the content levels within the cognitive domain (this resource has since been taken down by its author). The MUIS template was created in Microsoft Excel® and represented a matrix of conditions and methods. The rows of the MUIS template listed the 31 instructional methods, which represented the union of the instructional methods listed in Reigeluth (1999a). The columns listed the six content levels (knowledge, comprehension, application, analysis, synthesis, and evaluation) associated with Bloom’s (1956) cognitive learning domain. These levels are consistent with the structure used in Weston and Cranton’s (1986) prescriptions regarding the selection of instructional methods for the cognitive domain. Participants used the blank cells at the intersection of each method and condition combination to indicate their rating of the usefulness of the method for that condition.


As part of a class activity in which participants received points equal to 5 % of their grade for completing the MUIS template and participating in a subsequent discussion of the aggregated results, each participant’s task was to judge the usefulness of each instructional method with respect to each content level using a scale of 1 (least useful) to 5 (most useful). Participants completed the 186 unique ratings of method and content level combinations (31 methods × six content levels) within one week. Participants were able to use any other information source they desired to better understand the instructional methods or the content levels. Participants emailed completed templates to the researcher. No qualitative data were collected regarding participants’ decision-making processes. The researcher then analyzed the data and provided the aggregated results to the participants for subsequent discussion.
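The structure of the rating task can be sketched as a matrix of method-by-level cells, one sheet per participant. The method names below are generic placeholders (the actual template was an Excel file listing the 31 named methods); only the 31 × 6 layout reflects the source.

```python
from itertools import product

# Illustrative stand-ins for the 31 methods and six content levels rated in
# the MUIS template; the real template names each method (lecture, game, ...).
METHODS = [f"method_{i}" for i in range(1, 32)]  # 31 methods
LEVELS = ["knowledge", "comprehension", "application",
          "analysis", "synthesis", "evaluation"]

def blank_template():
    """One participant's empty rating sheet: every method-by-level cell."""
    return {(m, lvl): None for m, lvl in product(METHODS, LEVELS)}

template = blank_template()
print(len(template))  # 31 methods x 6 levels = 186 ratings per participant
```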

Data analysis

We analyzed the MUIS data from all six years together. We did this to (1) increase the sample size, specifically for males who were underrepresented in some years, (2) increase the diversity of participants to enhance generalizability, and (3) smooth out the effects of macroeconomic conditions on participants’ judgments that arose during the data collection time period. Additionally, our year-by-year analysis of the data showed generally the same results as our collective analysis, with differences in the Gender variable during years with smaller sample sizes.

To analyze the MUIS data, we used a General Linear Model to analyze the interaction of the Gender, Content Level, and Method factors on the Usefulness response. We also calculated effect size using the partial Eta-squared method, where 0.01 is interpreted as a small effect, 0.06 as a medium effect, and 0.14 as a large effect (Fritz et al. 2012). This analysis was followed by two post hoc analyses. The first post hoc analysis involved generating 31 method-specific subsets of the data and then performing a one-way ANOVA on each subset to assess the effects of Content Level on Usefulness ratings for a given Method. The second post hoc analysis involved performing a one-way ANOVA on each of the 31 data subsets to assess how Gender influenced judgments regarding the Usefulness of each Method. For both post hoc analyses, Benjamini and Hochberg’s (1995) false discovery rate (FDR) procedure (p < 0.05) was used to adjust the significance tests for multiple comparisons. For the Gender post hoc analysis, Cohen’s d was used to determine effect size, where 0.2 is interpreted as a small effect, 0.5 as a medium effect, and 0.8 as a large effect (Cohen 1988; Cohen 2008; Fritz et al. 2012).
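The post hoc machinery described above can be sketched in a few lines of pure Python. The authors presumably used a statistical package; the function names and implementations below are our own minimal versions of the one-way ANOVA F statistic, the Benjamini-Hochberg step-up procedure, and Cohen’s d with pooled standard deviation.

```python
from statistics import mean, variance

def one_way_f(groups):
    """One-way ANOVA F statistic for a list of rating groups (one per level)."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

def benjamini_hochberg(pvals, q=0.05):
    """Boolean rejection mask for the BH step-up procedure at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0  # largest rank k with p_(k) <= (k/m) * q
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

def cohens_d(a, b):
    """Cohen's d for two groups, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * variance(a) + (nb - 1) * variance(b))
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd
```

For instance, `benjamini_hochberg([0.01, 0.02, 0.03, 0.04, 0.5])` rejects the first four hypotheses but not the fifth, because every sorted p value up to rank 4 falls at or below its stepped threshold (k/m) × 0.05.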


The General Linear Model for the Gender, Content Level, and Method interactions was run using a 95 % confidence level. As shown in Table 2, the Content Level and Method interaction was statistically significant, F(150,10383) = 7.17, p = 0.000, ηp2 = 0.125, as was Content Level on its own, F(5,10383) = 23.73, p = 0.000, ηp2 = 0.014, and Method on its own, F(30,10383) = 57.75, p = 0.000, ηp2 = 0.164. These results support hypothesis H1.
Table 2

General linear model summary table of the gender, content level, and method interaction

Columns: degrees of freedom, sequential sum of squares, adjusted sum of squares, adjusted mean squares, F, and p value. Rows: the main effects and interactions of gender, content level, and method (gender*content level, content level*method, gender*content level*method). (The cell values did not survive extraction; the F statistics, p values, and effect sizes for each term are reported in the text.)

To further understand the Method and Content Level interaction, a series of post hoc, one-way ANOVAs for Method based upon Content Level were run using a 95 % confidence level. An ANOVA was run for each of the 31 Methods with Content Level as the factor. The results are shown in Table 3, with the Methods ordered from the highest F-values (top) to the lowest (bottom). Twenty-four methods (77 %) showed significant results, from drill and practice down to group discussion guided. The difference in the mean ratings for Content Level was highest in the drill and practice method. It had a maximum mean rating of 4.41 for the knowledge level and a minimum mean rating of 1.52 for the evaluation level, F(5,335) = 69.07, p = 0.000. The most uniform mean ratings for the Content Level were found in the quiet meeting method. Quiet meeting had a maximum mean rating of 2.70 for the analysis level and a minimum mean rating of 2.32 for the knowledge level, F(5,335) = 0.78, p = 0.566.
Table 3

One-way ANOVA summary table of methods and cognitive domain content levels

Cells are mean (SD) usefulness ratings for the knowledge, comprehension, application, analysis, synthesis, and evaluation levels, in that order, followed by the ANOVA result. Rows marked “(unnamed)” are methods whose names did not survive extraction.

Drill and practice: 4.49 (0.84), 3.42 (1.36), 3.02 (1.39), 1.89 (0.88), 1.71 (0.83), 1.55 (0.74); F(5,329) = 69.07, p = 0.000
(unnamed): 4.16 (0.89), 4.13 (0.96), 3.77 (1.10), 2.79 (1.00), 2.68 (1.13), 2.46 (1.08); F(5,335) = 31.58, p = 0.000
Lecture speech: 3.59 (1.16), 3.34 (1.01), 2.23 (0.95), 2.20 (0.94), 1.98 (0.88), 1.98 (0.92); F(5,335) = 29.83, p = 0.000
Tutorial, programmed: 4.36 (0.77), 3.88 (0.94), 3.61 (0.80), 3.04 (1.08), 2.75 (1.03), 2.63 (1.17); F(5,335) = 27.51, p = 0.000
Tutorial, conversational: 4.16 (0.87), 4.20 (0.70), 3.77 (0.81), 3.30 (1.04), 3.09 (1.08), 2.88 (1.18); F(5,335) = 18.98, p = 0.000
(unnamed): 2.93 (1.44), 3.48 (1.31), 3.41 (1.16), 4.23 (0.95), 4.07 (0.99), 4.46 (0.74); F(5,335) = 15.20, p = 0.000
Team project: 3.36 (1.15), 3.91 (0.94), 4.29 (0.78), 4.27 (0.77), 4.48 (0.71), 4.16 (0.83); F(5,335) = 11.63, p = 0.000
Lecture guided discovery: 3.84 (0.95), 4.02 (0.80), 3.16 (0.99), 3.27 (0.94), 3.09 (1.01), 2.95 (1.07); F(5,335) = 11.47, p = 0.000
Case study: 3.34 (1.30), 3.68 (1.10), 3.96 (1.01), 4.50 (0.66), 4.14 (0.86), 4.25 (0.82); F(5,335) = 10.19, p = 0.000
Field trip: 3.50 (1.13), 3.68 (0.90), 3.18 (1.15), 2.84 (1.19), 2.75 (1.28), 2.70 (1.20); F(5,334) = 9.54, p = 0.000
(unnamed): 3.61 (1.06), 3.96 (0.93), 4.43 (0.78), 4.41 (0.65), 4.43 (0.76), 3.98 (0.80); F(5,335) = 9.18, p = 0.000
Problem solving/lab: 3.61 (1.16), 3.95 (1.00), 4.45 (0.78), 4.52 (0.66), 4.41 (0.83), 4.13 (0.83); F(5,335) = 8.81, p = 0.000
Think tank/brainstorm: 2.91 (1.31), 3.16 (1.16), 3.34 (1.15), 4.07 (1.11), 4.06 (1.13), 3.82 (1.25); F(5,335) = 8.76, p = 0.000
(unnamed): 3.98 (1.10), 4.09 (1.07), 4.86 (0.40), 4.36 (0.80), 4.16 (1.01), 3.75 (1.21); F(5,335) = 8.61, p = 0.000
Guided laboratory: 3.89 (1.02), 4.18 (0.92), 4.29 (0.80), 3.91 (0.93), 3.73 (0.94), 3.32 (1.01); F(5,334) = 7.46, p = 0.000
(unnamed): 3.33 (1.20), 3.80 (0.98), 4.17 (0.86), 3.96 (0.85), 3.65 (0.96), 3.46 (1.04); F(5,323) = 5.37, p = 0.000
Role play: 3.04 (1.24), 3.68 (0.97), 4.00 (0.93), 3.46 (1.03), 3.43 (1.06), 3.39 (1.07); F(5,335) = 5.19, p = 0.000
Cooperative group learning: 3.41 (1.09), 3.79 (0.85), 3.95 (0.96), 4.20 (0.75), 4.04 (0.89), 4.02 (0.94); F(5,335) = 4.95, p = 0.000
(unnamed): 3.61 (1.02), 4.00 (0.99), 4.27 (0.88), 3.89 (1.02), 3.63 (1.09), 3.43 (1.19); F(5,335) = 4.93, p = 0.000
Independent/learner control: 3.66 (0.92), 3.77 (0.97), 3.64 (0.94), 3.50 (0.92), 3.34 (0.96), 3.13 (0.96); F(5,335) = 3.57, p = 0.004
Socratic dialog: 3.41 (1.11), 3.73 (1.02), 3.13 (1.01), 3.89 (1.00), 3.68 (1.15), 3.70 (1.33); F(5,335) = 3.45, p = 0.005
Ancient symposium: 3.06 (1.33), 3.27 (1.13), 2.42 (1.05), 3.16 (1.37), 2.93 (1.43), 3.24 (1.40); F(5,329) = 3.32, p = 0.006
(unnamed): 3.40 (1.15), 3.56 (1.01), 2.91 (1.06), 3.16 (1.05), 3.02 (1.10), 3.06 (1.08); F(5,329) = 2.96, p = 0.013
(unnamed): 3.73 (1.05), 3.84 (0.95), 4.05 (0.88), 3.80 (0.96), 3.57 (1.06), 3.41 (1.30); F(5,335) = 2.57, p = 0.027
Group discussion guided: 3.43 (1.02), 3.93 (0.93), 3.46 (0.87), 3.80 (0.88), 3.66 (0.92), 3.71 (1.11); F(5,335) = 2.28, p = 0.047
Discovery, group: 3.55 (1.17), 3.84 (0.99), 3.88 (0.85), 4.02 (0.73), 3.98 (0.77), 3.84 (0.85); F(5,335) = 1.83, p = 0.106
Panel discussion: 3.13 (0.97), 3.27 (0.98), 2.79 (1.10), 3.04 (1.25), 2.77 (1.35), 3.05 (1.43); F(5,335) = 1.51, p = 0.188
Group discussion open: 2.95 (1.02), 3.34 (0.94), 3.16 (0.89), 3.36 (0.88), 3.13 (0.90), 3.27 (1.12); F(5,335) = 1.46, p = 0.204
(unnamed): 3.57 (1.13), 3.59 (1.13), 3.18 (1.10), 3.36 (1.14), 3.25 (1.24), 3.30 (1.20); F(5,335) = 1.21, p = 0.302
Discovery individual: 3.59 (1.06), 3.66 (0.94), 3.59 (0.85), 3.86 (0.77), 3.73 (0.80), 3.55 (0.97); F(5,335) = 0.90, p = 0.484
Quiet meeting: 2.32 (1.21), 2.48 (1.13), 2.36 (1.03), 2.70 (1.13), 2.57 (1.22), 2.54 (1.36); F(5,335) = 0.78, p = 0.566

* Indicates false discovery rate (FDR) significance at p < 0.05

The Gender and Method interaction shown in Table 2 was statistically significant, F(30,10383) = 2.43, p = 0.000, ηp2 = 0.007, as was Gender on its own, F(1,10383) = 54.08, p = 0.000, ηp2 = 0.005. While these results are significant, the weak effect sizes limit their contribution in supporting the hypothesis. The Gender, Content Level, and Method interaction was not significant, F(150,10383) = 0.48, p > 0.999, ηp2 = 0.007, nor was the Gender and Content Level interaction, F(5,10383) = 0.35, p = 0.884, ηp2 = 0.000. Based on these results, hypothesis H2 is not supported.

Table 4 shows the results for the series of post hoc, one-way ANOVAs that compared all 31 Methods with Gender at a 95 % confidence level. The table is ordered from highest F-value to lowest. Nine methods (28 %), from group discussion guided, F(1,335) = 18.22, p = 0.000, down to lecture speech, F(1,335) = 6.19, p = 0.013, showed significant results with medium to small effect sizes.
Table 4

One-way ANOVA summary table of methods and gender

Cells are mean (SD) usefulness ratings for each gender, followed by the ANOVA result. The gender column labels and the Cohen’s d values did not survive extraction. Rows marked “(unnamed)” are methods whose names did not survive extraction.

Group discussion guided: 3.81 (0.98), 3.33 (0.85); F(1,335) = 18.22, p = 0.000
Discovery, group: 3.97 (0.84), 3.58 (1.01); F(1,335) = 13.60, p = 0.000
Cooperative group learning: 4.01 (0.95), 3.65 (0.90); F(1,335) = 10.64, p = 0.001
Group discussion open: 3.30 (0.97), 2.96 (0.92); F(1,335) = 9.18, p = 0.003
(unnamed): 3.83 (1.03), 3.46 (0.95); F(1,323) = 9.14, p = 0.003
Panel discussion: 3.13 (1.23), 2.72 (1.08); F(1,335) = 8.81, p = 0.003
Role play: 3.39 (1.12), 3.75 (0.96); F(1,335) = 7.59, p = 0.006
Field trip: 3.22 (1.25), 2.85 (1.02); F(1,335) = 6.72, p = 0.010
Lecture speech: 2.66 (1.25), 2.31 (0.95); F(1,335) = 6.19, p = 0.013
Tutorial, conversational: 3.66 (1.05), 3.35 (1.14); F(1,335) = 5.73, p = 0.017
(unnamed): 3.27 (1.10), 2.98 (1.06); F(1,329) = 4.87, p = 0.028
Problem solving/lab: 4.24 (0.97), 4.02 (0.87); F(1,335) = 4.06, p = 0.045
(unnamed): 3.85 (1.22), 3.57 (1.25); F(1,335) = 3.73, p = 0.054
Discovery individual: 3.71 (0.85), 3.55 (1.00); F(1,335) = 2.37, p = 0.124
Lecture guided discovery: 3.44 (1.07), 3.27 (0.88); F(1,335) = 2.04, p = 0.154
Ancient symposium: 3.08 (1.38), 2.85 (1.13); F(1,329) = 1.96, p = 0.162
Independent/learner control: 3.55 (1.01), 3.40 (0.84); F(1,335) = 1.72, p = 0.191
(unnamed): 3.39 (1.25), 3.20 (1.22); F(1,335) = 1.72, p = 0.191
(unnamed): 4.16 (1.05), 4.29 (0.96); F(1,335) = 1.26, p = 0.262
Quiet meeting: 2.54 (1.21), 2.39 (1.12); F(1,335) = 1.09, p = 0.298
Case study: 3.94 (1.10), 4.07 (0.92); F(1,335) = 1.08, p = 0.300
(unnamed): 3.77 (1.07), 3.66 (1.02); F(1,335) = 0.81, p = 0.369
Guided laboratory: 3.91 (1.02), 3.83 (0.91); F(1,334) = 0.43, p = 0.513
(unnamed): 3.83 (1.13), 3.76 (0.90); F(1,335) = 0.30, p = 0.582
Tutorial, programmed: 3.40 (1.14), 3.32 (1.20); F(1,335) = 0.29, p = 0.590
(unnamed): 3.36 (1.22), 3.40 (0.99); F(1,335) = 0.08, p = 0.778
Think tank/brainstorm: 3.57 (1.27), 3.53 (1.24); F(1,334) = 0.08, p = 0.782
(unnamed): 4.15 (0.90), 4.12 (0.88); F(1,335) = 0.07, p = 0.794
Team project: 4.07 (0.98), 4.09 (0.87); F(1,335) = 0.02, p = 0.889
Drill and practice: 2.68 (1.47), 2.67 (1.53); F(1,329) = 0.01, p = 0.924
Socratic dialog: 3.59 (1.18), 3.58 (1.01); F(1,335) = 0.01, p = 0.908

* Indicates false discovery rate (FDR) significance at p < 0.05


In the discussion section of this article, we will first discuss our thoughts about the interaction between content levels and instructional methods, as well as the usefulness of the various methods. We will then discuss what our results suggest regarding gender and its influence on judgments regarding the usefulness of instructional methods.

Content levels and methods

The results of our analysis support the core premise of instructional theory and hypothesis H1, that designers’ judgments regarding the usefulness of instructional methods are influenced by cognitive domain content levels. The results show that participants judged the usefulness of instructional methods differently for each of the six content levels. The results also provide us a glimpse into the minds of instructional designers, in terms of the instructional methods they see as being most useful for different cognitive domain content levels.

What we find most interesting in these results is that the methods our participants judged as most useful for each of the six levels are very similar to what experts suggest. Table 5 presents Weston and Cranton’s (1986) suggested methods for each of the six content levels alongside the methods from this study that had a usefulness rating greater than or equal to four. What we see in this comparison is that methods for lower-level content tend to be more instructor-centered (lecture and demonstration) and individualized (programmed instruction). Methods for higher-level content tend to be more interactive (discussions and group projects) and experiential (laboratory and field experience). Even though the studies we reviewed at the beginning of this paper suggest that designers don’t use instructional theory, this comparison shows that our participants made judgments in a way that was consistent with instructional theory, and that their judgments about methods were consistent with expert practice.
Table 5

Weston and Cranton’s (1986) prescriptions for methods associated with specific cognitive domain content levels

Knowledge
  Weston and Cranton’s suggested methods: Lecture, programmed instruction, drill and practice
  This study’s top methods (usefulness ≥ 4): Drill and practice, tutorial (programmed), demonstration, tutorial (conversational)

Comprehension
  Weston and Cranton’s suggested methods: Lecture, modularized instruction, programmed instruction
  This study’s top methods (usefulness ≥ 4): Tutorial (conversational), guided laboratory, demonstration, apprenticeship, lecture (guided discovery), simulation

Application
  Weston and Cranton’s suggested methods: Discussion, simulations and games, CAI, modularized instruction, field experience, laboratory
  This study’s top methods (usefulness ≥ 4): Apprenticeship, problem solving lab, project, guided laboratory, team project, simulation, laboratory, game, role play

Analysis
  Weston and Cranton’s suggested methods: Discussion, independent/group projects, simulations, field experience, role-playing, laboratory
  This study’s top methods (usefulness ≥ 4): Problem solving lab, case study, project, apprenticeship, team project, debate, cooperative group learning, think tank/brainstorm, discovery (group)

Synthesis
  Weston and Cranton’s suggested methods: Independent/group projects, field experience, role-playing, laboratory
  This study’s top methods (usefulness ≥ 4): Team project, project, problem solving lab, apprenticeship, case study, debate, think tank/brainstorm, cooperative group learning

Evaluation
  Weston and Cranton’s suggested methods: Independent/group projects, field experience, laboratory
  This study’s top methods (usefulness ≥ 4): Debate, case study, team project, problem solving lab, cooperative group learning
When we look at the results more closely, by plotting the Table 3 means for each method and condition combination in a line graph, we see five patterns emerge in terms of how designers judge and classify instructional methods. Figure 1 illustrates these patterns, which we name and describe as follows:
Fig. 1

Five different patterns illustrating the usefulness of instructional methods

  • Low-level methods. This is illustrated by the drill and practice line. Low level means that a method is judged more useful for lower-level cognitive content, but less useful for higher-level cognitive content.

  • High-level methods. This is illustrated by the debate line. The pattern is the opposite of the low-level one: the method is more useful for higher-level cognitive content than for lower-level content.

  • Concave methods. This is illustrated by the apprenticeship line. A concave method is one where the method’s usefulness rating for application is greater than all the other content levels. The pattern suggests greater usefulness for application-level content, but generally the same usefulness for lower-level and higher-level cognitive content.

  • Convex methods. This is illustrated by the ancient symposium line. A convex method is one where the method’s usefulness rating for application is lower than all the other content levels. The pattern suggests limited usefulness for application-level content, but generally the same usefulness for lower-level and higher-level cognitive content.

  • Flat methods. This is illustrated by the quiet meeting line, where the standard deviation of the means is less than 0.25. The pattern suggests generally equal usefulness across all cognitive content.
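The five patterns above amount to a simple classification rule over a method's six mean usefulness ratings. The following is a minimal sketch, not the authors' analysis code: the 0.25 flat threshold comes from the text, but the profile numbers and the lower-half/upper-half split used to separate low-level from high-level patterns are our own illustrative assumptions.

```python
# Sketch of the five-pattern classification described above.
# Input: mean usefulness ratings ordered from knowledge to evaluation.
from statistics import stdev

LEVELS = ["knowledge", "comprehension", "application",
          "analysis", "synthesis", "evaluation"]

def classify(means, flat_sd=0.25):
    """Classify a method's rating profile into one of the five patterns."""
    # Flat: ratings barely vary across content levels (threshold from the text).
    if stdev(means) < flat_sd:
        return "flat"
    app = means[LEVELS.index("application")]
    others = [m for i, m in enumerate(means) if LEVELS[i] != "application"]
    if app > max(others):
        return "concave"   # peaks at application
    if app < min(others):
        return "convex"    # dips at application
    # Otherwise compare lower-level vs higher-level averages (our assumption).
    low = sum(means[:2]) / 2
    high = sum(means[3:]) / 3
    return "low-level" if low > high else "high-level"

# Hypothetical profiles resembling drill-and-practice and debate:
print(classify([4.6, 4.4, 3.5, 2.8, 2.6, 2.5]))
print(classify([2.4, 2.7, 3.2, 4.3, 4.5, 4.6]))
```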

What is interesting about these five patterns is that they illustrate Reigeluth and Carr-Chellman’s (2009) theory of method generality. For example, flat methods are (using Reigeluth and Carr-Chellman’s terms) wide, meaning that their power to achieve a learning goal is generally the same for varying conditions. The high-level and low-level methods are limited, meaning that their power to achieve a learning goal is very different when conditions vary. The concave and convex methods are moderate, meaning that their power is strong or weak for one type of condition, but generally the same for the other conditions.

Figures 2 and 3 further illustrate the limited generality of the low-level and high-level methods. Figure 2 gathers all the low-level methods in one plot. Participants judged each to have strong usefulness for knowledge- and comprehension-level content. However, that usefulness drops off significantly as the method is judged against the higher-level content associated with analysis, synthesis, and evaluation.
Fig. 2

The top ten instructional methods that were judged more useful for lower-level content

Fig. 3

The top seven instructional methods that were judged more useful for higher-level content

Figure 3 presents the high-level methods in one plot. As shown, the usefulness of these methods is opposite of those shown in Fig. 2. These methods are judged to be more useful for content that is more complex.

The practical application of these results is to reinforce instructional designers’ analysis of cognitive domain content levels when choosing instructional methods. Additionally, these results provide instructional designers an additional source of guidance for choosing the most useful instructional method for cognitive domain content levels.


In this research we used Gender as a variable to represent the differing values and self-construals that Di Dio et al. (1996) and Cross and Madson (1997) suggest may be present in males and females. Our results showed that there were significant differences between males and females in terms of how they judge the usefulness of instructional methods for cognitive domain content levels. However, our results also showed that effect sizes were weak and other factors and interactions were not significant. Collectively, these results do not support hypothesis H2.

Yet, we were intrigued enough by the significant results for the Gender and Method interaction to investigate it further. Our deep dive into the post hoc, per-method analysis results shows some very interesting patterns. First, for the overall set of methods that showed a significant difference between males and females (see both Fig. 4; Table 4 above), females’ judgments of usefulness were higher than males’ for eight of the methods, whereas males’ judgments of usefulness were higher for only one method (role play). The effect sizes for these results are in the small to medium range.
Fig. 4

The nine instructional methods for which the female usefulness rating was significantly different than the male usefulness rating

Second, the methods judged more useful by females are consistent with the female values identified by Di Dio et al. (1996) and Cross and Madson’s (1997) interdependent self-construal. For instance, a strong value for females is communion—a focus on others and the formation of relationships. Nearly all of the instructional methods judged more useful by females involve groups and/or some form of discussion, which is consistent with the feminine communion values. As Cross and Madson (1997) point out, “For the person with an interdependent self-construal, relationships are viewed as integral parts of the person’s very being” (p. 7).

For males, the instructional method judged more useful was role play, which is a method more consistent with male values related to focus on the self and accomplishment, and the independent self-construal. After all, in a role play, the learner plays a role (focus on the self) that is tasked with accomplishing something: handling an angry customer, successfully completing a negotiation, and so on.
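The per-method male/female comparisons above are characterized by their effect sizes. As a reminder of what "small to medium" means here, the following sketch computes Cohen's d with the pooled-standard-deviation formula (Cohen 1988). This is not the authors' analysis code, and all the rating values are hypothetical.

```python
# Sketch: standardized mean difference (Cohen's d) between two groups'
# usefulness ratings, using the pooled standard deviation.
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Cohen's d: (mean_a - mean_b) / pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a) +
                  (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical 1-5 usefulness ratings for one method:
female = [5, 4, 5, 4, 4, 5, 3, 4]
male   = [4, 3, 4, 4, 3, 4, 3, 4]
d = cohens_d(female, male)
# Cohen's (1988) rough benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large.
```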

These insights suggest a potential designer bias, which connects with Reigeluth and Carr-Chellman’s (2009) ideas regarding values about power and values about method. The bias is that a designer’s values may lead them to make a poor design decision regarding instructional methods. For example, based upon her communion values, a female designer might favor cooperative group learning (a usefulness rating of 4.20) over a more accomplishment-oriented case study method (4.50) for analysis-level content. Such a decision would be consistent with her values. But if the audience for her training is primarily male (a learner condition), then the method she finds more useful may not deliver the results she expects with her audience. This suggests that designers must be cautious about how their values influence decisions about methods.

Overall, this research suggests that the principles of instructional theory guide, and provide a sound basis for, instructional design decisions. We have shown on a small scale, using one type of condition (cognitive domain content levels), that designers assess situations and then use that assessment to judge the usefulness of instructional methods in a way that is consistent with what experts suggest. Contrary to the theory-use literature discussed earlier in this article, which suggests designers don’t use instructional theory because they qualitatively say they don’t, our quantitative results suggest that designers made judgments in ways consistent with the principles of instructional theory. Furthermore, this research provides some directional evidence that a designer’s gender, which may reflect their values about methods, influences how designers judge the usefulness of instructional methods within the context of cognitive domain content levels.

Limitations and implications

Our use of cognitive domain content levels as the only variable was consistent with Weston and Cranton’s (1986) work on prescribing instructional methods. Ideally, to enhance the generalizability of this research, one should show how other situational factors influence usefulness judgments. For conditions, this includes other types of content (affective, psychomotor, and interpersonal), learner characteristics, context factors, and resource constraints. For values, this includes values about learning goals, priority, and methods. Since this report is the first in a series, we will explore some of these other factors in subsequent articles. However, we caution readers that the outcomes of this paper should not be generalized to complex educational processes that include multiple cognitive content level conditions and methods interacting and conditioning each other.

That all participants in the study are affiliated with one institution and the same instructional strategies course introduces a bias that potentially limits the generalizability of the results; future research should therefore aim to recruit participants who received their instructional design training from a greater variety of educational institutions. The participants in this research were a convenience sample and their level of expertise was not quantified. However, as previously discussed, the online nature of the course enhanced the geographic and vocational diversity of the participants. Most of the participants were instructional design (or related field) practitioners in various organizations such as colleges, computing technology firms, defense contractors, financial institutions, K-12 schools, pharmaceutical manufacturers, software-as-a-service providers, textbook publishers, and utilities. This kind of diversity partially mitigates the single-institution bias and enhances the generalizability of the results.

Our use of Gender as an independent variable representing a designer’s values is limited in that we did not formally measure our participants’ values and self-construals. Consequently, this investigation of values and their influence on usefulness judgments is limited. Furthermore, our study does not include values that are more in the spirit of Reigeluth and Carr-Chellman’s (2009) ideas, such as a subject’s values toward a particular learning theory (e.g., behaviorist, cognitivist, or constructivist). Future research should include measures that assess the strengths of these various values, a designer’s perception of self (Harter 2012), and their effects on usefulness ratings.

For instructional designers, this research suggests that using instructional theory as an approach for instructional design has benefits. One benefit is that it helps designers effectively judge the usefulness of methods for a given situation, which leads to better instructional design decisions. Another benefit is that it can help designers defend their design decisions, which leads to a more efficient and enjoyable design process. Managers of instructional designers should use instructional theory’s core principles to assess the quality of their designers’ decisions. A designer who can defend their decisions using the principles of instructional theory is more likely to produce a great learning experience rather than a poor one.


  1. Beatty, B. J. (2002). Social interaction in online learning: A situationalities framework for choosing instructional methods. Doctoral dissertation, Indiana University. Dissertation Abstracts International, DAI-A 63/05, p. 1795.
  2. Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1), 289–300.
  3. Bloom, B. S. (1956). Taxonomy of educational objectives, handbook I: The cognitive domain. New York: David McKay Co Inc.
  4. Christensen, T. K., & Osguthorpe, R. T. (2004). How do instructional-design practitioners make instructional-strategy decisions? Performance Improvement Quarterly, 17(3), 45–65.
  5. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
  6. Cohen, B. H. (2008). Explaining psychological statistics (3rd ed.). Hoboken, NJ: Wiley.
  7. Cross, S. E., & Madson, L. (1997). Models of the self: Self-construals and gender. Psychological Bulletin, 122(1), 5–37.
  8. Di Dio, L., Saragovi, C., Koestner, R., & Aubé, J. (1996). Linking personal values to gender. Sex Roles, 34(9/10), 621–636.
  9. Fritz, C. O., Morris, P. E., & Richler, J. J. (2012). Effect size estimates: Current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2–18. doi:10.1037/a0024338.
  10. Harter, S. (2012). The construction of the self: A developmental perspective (2nd ed.). New York: Guilford Press.
  11. Jonassen, D. H. (2008). Instructional design as design problem solving: An iterative process. Educational Technology, 48(3), 21–26.
  12. Morrison, G., Ross, S., Kalman, H. K., & Kemp, J. (2011). Designing effective instruction. Hoboken, NJ: Wiley.
  13. Reigeluth, C. M. (1983). Instructional design: What is it and why is it? In C. M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status (pp. 3–36). Hillsdale, NJ: Lawrence Erlbaum Associates.
  14. Reigeluth, C. M. (1999a). What is instructional-design theory and how is it changing? In C. M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory (Vol. II, pp. 5–29). Hillsdale, NJ: Lawrence Erlbaum Associates.
  15. Reigeluth, C. M. (1999b). Module 3: Concept classification. (Online training program). Retrieved from
  16. Reigeluth, C. M., & Carr-Chellman, A. (2009). Understanding instructional theory. In C. M. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Building a common knowledge base (Vol. III, pp. 3–26). Hillsdale, NJ: Lawrence Erlbaum Associates.
  17. Rokeach, M. (1973). The nature of human values. New York: The Free Press, Collier Macmillan.
  18. Rowland, G. (1992). What do instructional designers actually do? An initial investigation of expert practice. Performance Improvement Quarterly, 5(2), 65–86.
  19. Wedman, J., & Tessmer, M. (1993). Instructional designers’ decisions and priorities: A survey of design practice. Performance Improvement Quarterly, 6(2), 43–57.
  20. Weston, C., & Cranton, P. A. (1986). Selecting instructional strategies. The Journal of Higher Education, 57(3), 259–288.
  21. Yanchar, S. C., South, J. B., Williams, D. B., Allen, S., & Wilson, B. G. (2010). Struggling with theory? A qualitative investigation of conceptual tool use in instructional design. Educational Technology Research and Development, 58(1), 39–60. doi:10.1007/s11423-009-9129-6.

Copyright information

© Association for Educational Communications and Technology 2013

Authors and Affiliations

  1. Customer Performance Group, Reno, USA
