Journal of Computing in Higher Education, Volume 30, Issue 1, pp. 34–54

Design tools in practice: instructional designers report which tools they use and why

  • Ahmed Lachheb
  • Elizabeth Boling

Abstract

Minimal attention has been paid by scholars to practitioners’ views of and experiences with instructional design tools. Instructional design practitioners working in diverse settings were surveyed regarding the tools they use in their practice, and interviewed regarding how they explain their choices to use the tools that they do. A survey completed by 100 instructional designers shows that they use a wide array of both digital and analog tools, many of them not specifically focused on, or limited to, the design and development of instruction. Analysis of interview narratives with 10 instructional designers surfaced themes in two categories, rationalist and situational explanations for the use of certain tools, with appropriateness (a rationalist explanation) and individual preference (a situational explanation) offered most frequently. These findings, and the statements of the designers, highlight the role of instrumental judgment in instructional design practice and point to implications for the education of instructional designers.

Keywords

Instructional design · Instructional design tools · Designerly tools · Instructional designers

Introduction

Instructional design (ID) tools—tangible and theoretical/methodological tools—play an important role in ID practice. As Gibbons, Boling and Smith observed in 2014, ID tools, mainly models and principles, aim to cover all design activities. Despite the importance of ID tools implied by this, minimal attention has been paid to the subject in instructional design and technology (IDT) scholarship, particularly to practitioners’ views of and experiences with design tools. A few exceptions found in the IDT literature shed light on ID tools from the practitioner’s perspective (e.g., Dicks and Ives 2008; Sugar et al. 2011, 2012; Yanchar et al. 2010).

Research on ID practice has been witnessing what Gray et al. (2015) called a gradual shift. Studies adopting a descriptive approach to ID practice began to appear in the 1990s. Rowland’s (1992) rigorous descriptive study investigating what instructional designers do marks an early point in this shift. Similar studies that focused on ID practice and adopted a descriptive approach have been widely cited in the IDT literature. Drawing on a citation network analysis conducted by Cho et al. (2011), Cho and Park (2012) found that among the top 20 most influential papers published in Performance Improvement Quarterly were four studies on ID practice (Perez and Emery 1995; Rowland 1992; Wedman and Tessmer 1993; Winer and Vázquez-Abad 1995). The value of such studies originates from their common goal: to generate a rich description of ID practice that can inform ID education and practice.

Problem statement

An integrative review of 20 publications between 1990 and the present that were identified as discussing instructional design tools reveals two knowledge gaps. First, studies that examined ID tools in practice did not take a comprehensive approach to studying them. Studies on ID tools focused either on computer-based tools or on theoretical/methodological tools, not both combined. Most notably, Sugar’s (2014) synthesis and critique of literature on ID practices highlights a similar idea—a lack of “holistic perspective of current ID practices from the viewpoint of an ID professional and an ID student” (p. 106).

Second, articles describing ID practitioners’ views and experiences with ID tools (i.e., descriptive work) were found to be minimal. With the exception of six studies (Dicks and Ives 2008; Roytek 2010; Sugar et al. 2011, 2012; Svihla et al. 2015; Yanchar et al. 2010) that include the practitioners’ voice, the other studies reviewed may have focused on tools (i.e., prescriptively) but ignored the views of ID practitioners.

Our perspective is that if prescriptive studies on ID tools (and on ID practice in general) continue to dominate the IDT literature, the ID field will not progress and flourish like other fields of design (e.g., architecture, graphic design, engineering design), where a richer understanding has been promoted through descriptive inquiry that captures design in situ (Smith and Boling 2009; Sugar 2014). Descriptive scholarship on ID practice, by building an understanding of the field and its complex nature, should help ID educators consider what ID tools to teach, how to teach them and how to use them. Without such study of this important topic, we risk seeing tools developed for ID practitioners go unused, wasting time and money, and we will “miss opportunities” (Boling et al. 2017, p. 200).

ID practice and tools

Morrison et al. (2011) define instructional design (ID) as “the process for designing instruction based on sound practices” (p. 6). The practice of a profession (ID in this case), as argued by Crouch and Pearce (2012), refers to activities that are standardized and approved by an agreed-upon professional authority (e.g., a professional association). However, Crouch and Pearce (2012) provide an alternative definition of practice that is most suitable for the purpose of this study—“practice is not necessarily indicative of how things ought to be” (p. 34). Under this alternative definition, ID practice is viewed as intertwined with ideas and theories originating from personal experiences and social norms, not only those prescribed by a given professional authority.

Boling and Gray (2015) have posited a comprehensive definition of ID tools as “methods, tools, techniques, and approaches, as well as design models, design and learning theories, and principles” (p. 112). Under this definition, the term tool is used in its broad sense to refer to actual tools (e.g., computers), methods (e.g., prototyping), techniques (e.g., negotiation), and approaches (e.g., client-oriented). This definition is partially inspired by the work of Stolterman et al. (2009), and it is underpinned by two assertions: (1) there are a variety of tools besides ID models; and (2) there is no established order that guides the usage of ID tools. Empirical and conceptual investigations of ID tools reviewed in the literature were found to share the elements of this comprehensive definition rather than to offer an alternative one.

Literature review

Studies reviewed were identified following Torraco’s (2016) method for conducting an integrative literature review. Three online databases were consulted to search the relevant literature (Google Scholar, Academic Search Premier, and ERIC). Keywords used were “instructional design tools”, “instructional designer tools”, “designerly tools”, and “instructional designer technology”. Four other publications were added to the review because of their significant value in providing foundational knowledge about the research topic (Sugar 2014; Sugar et al. 2011, 2012; Tripp 1991). Two selection criteria were adopted to narrow the search to more suitable results: (1) scholarly peer-reviewed journal articles, books and book chapters that discuss ID tools empirically and conceptually, and (2) conference proceedings that report empirical and conceptual research on ID tools.

Twenty publications that met the above criteria were selected for review; they were read and summarized in detail. Three broad ideas emerged from this review: significance of ID tools, computer-based ID tools, and theoretical/methodological ID tools. These three broad ideas were transformed into three themes, which were later used to report a summary of literature on ID tools.

Significance of ID tools

The significance of ID tools in ID practice was stressed consistently in all reviewed literature that discussed ID tools in practice. For example, Kearsley (1977) was among the first scholars to frame Computer Assisted Instruction (CAI) as a significant ID tool, owing to its ability to provide the practitioner with diverse options regarding media and methods. CAI, according to Kearsley (1977), is a “methodology for instructional design […] to provide individualized and interactive instruction” (p. 3). Among the reviewed literature, Kearsley’s paper was the oldest IDT publication that discussed ID tools. Despite framing CAI as important, Kearsley (1977) called on IDT scholars to attend to ID tools that included both computer-based and methodological tools, stating, “Without [an] adequate instructional design [method], the use of CAI can be like using a supersonic jet to spray farm crops” (p. 31). Merrill (2001) elaborated on the significance of ID tools in relation to their affordances for enhancing ID practice. According to Merrill (2001), many computer-based ID tools “require considerable time to learn and even more time to use” (p. 292); he proposed searching for ID tools that are not only efficient but also afford an enhanced quality of ID products (p. 292). Most recently, Ritzhaupt and Kumar (2015) reported that knowledge of and ability to work with different ID tools were among the top skills required of instructional designers working in higher education. Focusing on multimedia production knowledge and skills, Sugar et al. (2011, 2012) investigated essential multimedia knowledge and skills required of entry-level ID practitioners, by surveying working ID practitioners (Sugar et al. 2011) and by analyzing ID job postings from higher education institutions (2012). Both studies highlighted the importance of acquiring competency with ID tools in order for ID students to enter the job market.

Computer-based ID tools

Computer-based ID tools are actual software and hardware instruments that cannot be operated without the use of a computer, regardless of their inner features (van Merriënboer and Martens 2002). In total, six selected publications on computer-based ID tools included several examples of tools, described how they were being used and what impact they had on ID practice (see Table 1). A special issue of the Educational Technology Research and Development (ETR&D) journal (Volume 50, Issue 4, 2002) was dedicated to discussion of computer-based ID tools. In all four papers published in this special issue (de Croock et al. 2002; McKenney et al. 2002; Mooij 2002; Spector 2002), computer-based ID tools and the way they were being used were discussed with the aim of evaluating their impact on the outcome of ID.
Table 1 Summary of literature on computer-based ID tools

| Author | Tool | Function | Impact |
| --- | --- | --- | --- |
| Mooij (2002) | Digital planning boards | Learning activities; learner progress; academic record management | Individualized and optimized instruction |
| McKenney et al. (2002) | Computer Assisted Curriculum Analysis, Design and Evaluation (CASCADE) | Development, implementation and evaluation; lesson plans | Improves quality of ID (mainly instructional materials) |
| Spector (2002) | Knowledge Management Systems (KMS) | Collaborative design and development; document sharing and communication | Improves collaborative ID work among teams and allows for reusing materials |
| de Croock et al. (2002) | Core and Eval tools (ADAPT IT project) | Support design, analysis and evaluation ID tasks, based on the 4C/ID method | Better ID performance; higher quality of training programs |
| Chapman (1995) | Designer’s Edge | Supports development, analysis and design stages of the Instructional System Design (ISD) model | Potentially saves time, accelerates development stages, supports and guides inexperienced instructional designers |
| Uduma and Morrison (2007) | Designer’s Edge 3.0 | Guides instructional designers throughout the stages of the ADDIE model | Used for guidance by novice instructional designers; was not used correctly by expert designers |

Mooij (2002), in a special issue of ETR&D, reported the findings of a pilot study on Digital Planning Boards (DPB)—a software package similar to what is known today as a Learning Management System (LMS). Individualization and optimization of a piloted DPB were reported by Mooij (2002) to be important features for successful implementation of DPBs among instructional designers, teachers and students. The pilot study’s findings called for future research to expand the “development of a pedagogical-didactic kernel structure, integrating instructional management software in new instructional practices” (p. 11).

In the same special issue McKenney et al. (2002) discussed the findings of research studies on Computer Assisted Curriculum Analysis, Design and Evaluation (CASCADE) tools—a software package purposefully designed to help accomplish complex ID tasks. When compared to the work of instructional designers who did not use CASCADE tools, the authors made a case for CASCADE tools, as they were found to positively impact instructional designers’ performance—saving the practitioners time and allowing them to generate quality ID outputs. The authors also warned about the limitations of CASCADE tools in regard to their limited ability “to provide support for all curriculum perspectives” (p. 32).

A third article, authored by Spector (2002), explored Knowledge Management Systems (KMSs) and how they have been used in ID practice. DocuShare software was an example of a KMS ID tool that helped instructional designers collaborate, communicate, coordinate and control their ID work. Based on preliminary data of KMS use, Spector (2002) argued that the quality of ID practice would be improved due to the innate features of DocuShare KMS and its abilities to solve many ID issues related to collaboration and communication.

The fourth article, written by de Croock et al. (2002) shared a description of computer-based tools aimed at supporting design and evaluation of competency-based instruction. The context of the study was the Advanced Design Approach for Personalized Training-Interactive Tools (ADAPT IT) project in Europe. The examples given were a training design tool called Core, used for analysis and for designing competency-based training programs, and an evaluation tool called Eval, used for evaluation. The context where these tools were being used was air traffic control training implementing the competency-based training approach. de Croock et al. (2002) argued that “Core and Eval tools support the creation of an effective training blueprint by focusing primarily on easing the information management and decision-making aspects of applying the 4C/ID* methodology” (p. 56).

Theoretical/methodological ID tools

The goal of theoretical/methodological ID tools is to serve as “aids for explaining the world and taking appropriate action to solve problems and design solutions” (Yanchar et al. 2010, p. 41). Four of the eight publications selected for review are empirical studies that include the practitioners’ views and experiences when describing theoretical/methodological ID tools used in practice (Dicks and Ives 2008; Roytek 2010; Svihla et al. 2015; Yanchar et al. 2010). The other four publications reviewed discuss theoretical/methodological ID tools from a conceptual perspective (Boling and Gray 2015; Boling and Smith 2008; Sözcü and Ipek 2014; Tripp 1991). Table 2 describes the tools and their functions as described in these studies.
Table 2 Summary of literature on theoretical/methodological ID tools

| Paper | Tool | Function |
| --- | --- | --- |
| Dicks and Ives (2008) | Social and cognitive skills | Building relationships, building sense, and instructional conscience |
| Yanchar et al. (2010) | Conceptual (learning and design theories) | Decision making, idea generating, making arguments, struggling with theory |
| Roytek (2010) | Design models, ID team member roles, ID processes, ID tools | Structure the ID process; enhance work dynamics |
| Svihla et al. (2015) | Modeling, dialogues, scaffolding of design processes, asking to design learning for real-world use | Support teachers to design technology-enhanced instruction |
| Boling and Smith (2008) | Design artifacts and precedent knowledge | Provide examples and precedent design features to designers |
| Boling and Gray (2015) | Sketching | Support open-ended action of visual brainstorming |
| Sözcü and Ipek (2014) | Rapid e-learning development model | Design new modalities of learning that cannot be done with the ADDIE model |
| Tripp (1991) | Prescriptive design methodologies | Support real-world and preferred methods for ID practitioners |

ID tools in practice

In his (2014) review of studies on instructional design practices, Sugar strongly recommends an “all-inclusive” approach to such studies, exploring everything that designers do rather than framing studies through a limited number of pre-established and prescriptive process steps or concepts (p. 106). He further stresses the importance of the practitioners’ views and experiences included in the studies available at the time of his review. Several recent empirical studies on ID practice that focus on the practitioner voice are premised on the idea that the practitioner is the guarantor of design (Nelson and Stolterman 2012)—the one responsible for the success and/or the failure of a design, rather than the model or theory or process being used by the designer (Boling and Gray 2015; Gray et al. 2015). This concept characterizes the contemporary philosophy of ID (Carr-Chellman and Rowland 2016), which stands in contrast to the classical view of ID that focuses on models, theory, data and systematic approaches (Carr-Chellman and Rowland 2016; Smith and Boling 2009).

Design tools in HCI (which may be considered a neighboring field of practice) have been studied from the practitioners’ perspective since the early 2000s. Stolterman et al. (2009) and Stolterman and Pierce (2012) studied the relationship between interaction designers and their tools as used in situ. Relying on a qualitative, grounded theory approach to data collection and analysis, they proposed a framework, the Tools-in-Use Model (Stolterman et al. 2009), for studying design tools in practice, and called on all fields of design to investigate their own practices and the tools used in them. They pose serious questions like, “How are tools taught in our design education? [and] Do designers feel as if they are using the ‘wrong’ tools in our field, and if so, why?” (Stolterman et al. 2009, p. 11).

A heavy emphasis on the evaluation of computer-based ID tools is also notable in the IDT literature. Several factors may contribute to this emphasis: computer-based ID tools are generally costly to buy and to maintain (Merrill 2001) and therefore warrant attention; many tools of this type are available (Chapman 1995); and many “technology effect” studies emerged after passage of the No Child Left Behind Act in 2001 (Ross and Morrison 2014), which included an increased focus on ID tools. The constructs underlying computer-based tools can be seen, as in the four publications in the ETR&D 2002 special issue, to rely on a classical view of ID centered on process and guided by design models, and on argumentation rather than empirical study in the field. Gustafson’s (2002) critique of these publications calls for “systematically evaluating the effectiveness and appeal of the education and training that result from using ID (tools)” (p. 65) [emphasis added].

In summary, ID tools have been discussed since the 1970s (Kearsley 1977). Every publication selected for review indicated the significance of ID tools in ID practice. This significance is exhibited through an emphasis on evaluating computer-based ID tools and on studying theoretical/methodological ID tools in ID practice. Despite the rich knowledge currently published on ID tools, two major knowledge gaps are noted: (1) there is a lack of inclusive research on ID tools, and (2) there is minimal focus on practitioners’ views and experiences regarding ID tools. Both knowledge gaps are supported by Sugar’s (2014) comprehensive synthesis and critique of studies on ID practice.

Purpose of study

The purpose of this study is to investigate ID tool use from the perspective of ID practitioners. This study adopts a broad definition of ID tools that includes methods, tools, techniques, and approaches (Boling and Gray 2015; Stolterman et al. 2009) and uses the lens of designerly tools (Stolterman et al. 2009). This perspective suggests that tools open to flexible use by designers will see more use and enjoy more acceptance than those constraining the design process. Authentic and rich descriptions of ID practice in its own terms (versus those of prescriptive scholarship or theory) can inform educators to better prepare ID practitioners; the focus on tool use is one part of investigating ID practice.

Tracey and Boling (2014) made the case that IDT scholarship predominantly aims to measure the work of ID practitioners “against what academics think they should be doing” (p. 658), which has resulted in a limited understanding of how instructional design actually occurs in situ, and which limits preparation of practitioners in the field. Studies have begun to appear in the field which approach ID practice on its own terms, descriptively versus prescriptively, by focusing on the practitioners’ views and experiences (Christensen and Osguthorpe 2004; Cox and Osguthorpe 2003; Gray et al. 2015; Kenny et al. 2005; Rowland 1992; Visscher-Voerman and Gustafson 2004), but little attention has been paid to instructional design tools from the perspective of ID practitioners (e.g., Dicks and Ives 2008; Sugar et al. 2011, 2012; Sugar 2014; Yanchar et al. 2010). When tools are addressed in the IDT literature, emphasis is placed on evaluating their impact on the outcome of ID, assuming that ID is defined as the application of theory grounded in and guided by design models. Such studies are exemplified in a special issue of Educational Technology Research and Development (2002). This view of ID tools narrows our understanding of practice in the field rather than expanding it.

This descriptive study poses two questions:
  • (RQ1) What ID tools do ID practitioners report that they recognize as tools, and use in their practice?

  • (RQ2) What explanations do ID practitioners give for their choices to use some ID tools and not others?

Method

Research design

To answer these two questions, a “basic” qualitative research design (Merriam and Tisdell 2015, p. 23) was used. This design was deemed the most appropriate since the focus of the study is placed on participants’ actual lived experience of their design practice and the research questions explore “how people make sense of their lives and their experiences” (Merriam and Tisdell 2015, p. 24).

Data collection

Participants

A convenience sampling method was used for this study. ID practitioners with different job titles (e.g., instructional designers, instructional consultants) and professional settings were identified through an online search of target organizations’ websites. They were recruited to participate in the study via a recruitment message posted in two major ID groups on social media (LinkedIn and Facebook) and on two ID listservs to which the first author is subscribed, and via direct contact by the first author. One selection criterion was used: participants must hold a full-time or part-time position as an instructional designer, developer, or consultant.

Instruments

An online pilot of the survey was conducted with a group of senior instructional design graduate students; their knowledge of and experience with ID tools were expected to make them viable proxies for this study. The pilot confirmed that the survey elicited the types of responses we hoped to find and that the survey items were understandable, and it identified some wording changes needed for clarity.

Study participants were asked five questions: two establishing job title and education, two eliciting their report of ID tools that they use, and one inquiring about their willingness to participate in a follow-up interview (see “Appendix 1: Survey: tools in instructional design practice” section). The survey addressed the first research question: what ID tools do ID practitioners report that they recognize as tools, and use in their practice? The tools listed on the survey were drawn from reviewed articles and the experience of the researchers, and were intended to stimulate responses rather than to be comprehensive. Participants checked off those that they use and, in question #4, added as many additional tools as they desired. A total of 100 survey responses were completed, and participants who took part in the follow-up interview (n = 10) received a small gift card in appreciation for their time.

The semi-structured interview protocol (see “Appendix 2: Interview protocol” section) was used to guide interviews that lasted an average of 40 min. Forty-five survey respondents expressed a desire to participate in the follow-up interviews, and 10 were ultimately scheduled for completed sessions included in the analysis. The interview data were analyzed to answer the second research question: what explanations do ID practitioners give for their choices to use particular tools? Participants’ backgrounds and experiences varied in terms of job titles, education and years of experience (see Tables 3, 4, 5 below).
Table 3 Summary of participants’ job titles by number holding each title

| Job title | Number holding title |
| --- | --- |
| Instructional designer | 46 |
| Instructional developer | 8 |
| Instructional systems designer and/or developer | 17 |
| Instructional consultant | 6 |
| Other | 23 |
| Total | 100 |

Table 4 Summary of participants’ educational background by number holding each qualification

| Degree | Number |
| --- | --- |
| Master’s degree in instructional design and/or technology | 40 |
| Bachelor’s degree in instructional design and/or technology | 17 |
| Master’s degree in educational technology | 16 |
| Bachelor’s degree in educational technology | 5 |
| No formal degree in instructional design and/or technology but have certificate in the field | 3 |
| No degree or certificate in the field | 6 |
| Other degree(s)/certificate(s)—please specify | 28 |
| No degree | 0 |

Table 5 Summary of interviewed participants by title, credential and years of experience

| Participant | Job title | Education | Years of experience |
| --- | --- | --- | --- |
| 1A | Instructional designer | Master’s in Instructional Design/Technology | 6 |
| 2B | Instructional designer | Master’s in Instructional Design/Technology | 2.5 |
| 3C | Learning management systems coordinator | Master’s in Instructional Design/Technology | 15 |
| 4D | Associate online instructional designer | Completing Master’s in Instructional Design/Technology | 1 |
| 5E | Instructional designer | Educational Specialist Degree in Curriculum and Instruction | 4 |
| 6F | Instructional designer | Completing Certificate in Instructional Systems Technology | 5 |
| 7G | Learning designer | Master of Computer and Information Science | 1 |
| 8H | Associate director | Ed.D. in Leadership and Innovation | 20 |
| 9I | Instructional designer | Master’s in Instructional Design/Technology | 2 |
| 10J | Instructional designer | Master’s in Educational Technology | 9 |

Data analysis

Descriptive statistical techniques were used to analyze the survey data and produce simple frequency counts of the ID tools selected/entered in the survey. Counts of ID tools selected from the initial list were tabulated automatically via Qualtrics; ID tools entered in response to the open-ended question were tabulated in Excel through color coding and data sorting.
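
For illustration only, the sketch below shows how such frequency counts could be produced programmatically. The authors tabulated counts in Qualtrics and Excel, not in code; the file name, column names, and semicolon-delimited export format here are assumptions, not the study’s actual data format.

```python
# Hypothetical sketch of the frequency tabulation described above.
import csv
from collections import Counter

tool_counts = Counter()

with open("survey_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Q3: tools checked off from the researcher-provided list;
        # Q4: additional tools entered in the open-ended question.
        for field in ("q3_tools_selected", "q4_tools_added"):
            for tool in row[field].split(";"):
                name = tool.strip().lower()  # normalize before counting
                if name:
                    tool_counts[name] += 1

print(f"Distinct tools reported: {len(tool_counts)}")
print(f"Total selections across participants: {sum(tool_counts.values())}")
for name, count in tool_counts.most_common(10):
    print(f"{count:4d}  {name}")
```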

Merriam and Tisdell (2015) open coding process was used to analyze interview data. Each transcript was read multiple times and segments that were found to be relevant to the study were highlighted. Initial notes were taken directly on the transcripts and provisional themes noted. Themes were highlighted and revisited again while reading the interview transcripts; these themes were further grouped (axial coding) into more general themes with an emphasis on the framework of designerly tools developed by Stolterman et al. (2009). After consultation between the researchers, the transcripts were re-analyzed using the general themes thus derived. Both frequency counts of these themes across the transcripts and holistic appreciation of respondents’ meanings informed the final findings and discussion.
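
The per-theme, per-participant statement counts that this process yields (reported below in Table 7) amount to a simple tally over coded segments. A minimal sketch follows, assuming the hand-coded segments have been logged as (participant, theme) pairs; this data structure is a hypothetical illustration, not the authors’ instrument.

```python
# Hypothetical sketch: coding was done by hand on the transcripts.
from collections import defaultdict

coded_segments = [
    ("1A", "appropriateness"),
    ("1A", "process"),
    ("2B", "individual preference"),
    # ... one pair per coded segment across all ten transcripts
]

# theme -> participant -> number of coded statements
counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for participant, theme in coded_segments:
    counts[theme][participant] += 1

for theme, per_participant in sorted(counts.items()):
    total = sum(per_participant.values())
    print(f"{theme}: total = {total}, by participant = {dict(per_participant)}")
```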

Findings

ID tools reported in the survey

While 58 tools were listed on the survey, respondents added 95 more for a total of 153, almost tripling the number the researchers had listed (Table 6). The tools can be roughly grouped into computer-based ID tools, methodological/theoretical ID tools, and analog tools. Analog tools were reported less frequently than the other two types, and fewer individual analog tools were reported. Computer-based tools were reported in greater numbers (almost three times as many) and with greater frequency than methodological/theoretical tools. The total number of tools selected across all participants was 1681, an average of nearly 17 tools per designer.
Table 6 Summary of ID tools reported by practitioners

| Category of tool | Examples | Tools (number) | Frequency |
| --- | --- | --- | --- |
| Computer-based ID tools | Learning management systems, Adobe Suite | 107 | 795 |
| Methodological/theoretical ID tools | Brainstorming, meetings | 36 | 628 |
| Analog tools | Paper and pencil, pair of scissors | 10 | 258 |

Practitioner’s explanations of tool selection and use

Asked to discuss favorite tools and the reasons for selecting tools on specific projects they brought to mind, participants offered both rationalist and situational explanations for their choices. Within these thematic categories, rationalist explanations included appropriateness of tools for particular tasks, match of tools to process steps in design, and cost effectiveness of tools; situational explanations included individual preference for tools, social reasons including the influence of peer designers and clients, and cultural mandates from a design team or employer for the use of certain tools. Outside this framework, the researchers identified relevant statements in the transcripts that addressed instrumental judgment on the part of the designer, and comments indicating the frequency with which a designer used certain tools.

Simple frequency counts of the statements analyzed as contributing to or representing each theme demonstrate that some explanations exceeded others in frequency. One rationalist and one situational theme are mentioned most frequently, together with explanations that describe the designers’ use of instrumental judgment in choosing to use certain tools (Table 7).
Table 7 Simple frequencies of statements by practitioners reflecting reported themes

| Themes and subthemes | 1A | 2B | 3C | 4D | 5E | 6F | 7G | 8H | 9I | 10J | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rationalist |  |  |  |  |  |  |  |  |  |  |  |
| Appropriateness | 9 | 6 | 3 | 11 | 1 | 8 | 7 | 1 | 4 | 6 | 56 |
| Process | 8 | 8 | 5 | 3 | 1 | 5 | 4 | 1 | 2 | 3 | 32 |
| Cost | 5 | 3 | 1 | 7 | 1 | 1 | 1 | 0 | 2 | 2 | 23 |
| Situational |  |  |  |  |  |  |  |  |  |  |  |
| Individual pref | 5 | 6 | 6 | 1 | 4 | 9 | 5 | 7 | 8 | 8 | 44 |
| Social | 1 | 3 | 4 | 0 | 7 | 1 | 4 | 5 | 7 | 1 | 33 |
| Cultural | 0 | 0 | 0 | 2 | 1 | 2 | 0 | 3 | 1 | 1 | 10 |
| Instrumental judgment | 5 | 1 | 5 | 10 | 0 | 8 | 0 | 10 | 7 | 6 | 52 |

The themes were illustrated through several statements each participant made throughout the interviews. Rationalist explanations were expressed through appropriateness: “[…] that course is going to use VoiceThread for teaching so we will actually be using VoiceThread for our collaboration with this faculty member” (6F); process: “HTML is also where we make sure that we have everything coded for accessibility as much as possible…” (4D); and cost: “[…] we really don’t have a lot of time to sit down and ask a lot of questions” (2B).

Situational explanations were expressed through individual preference: “This is something more on the fun part of the job where I have to create cool stuff like interactive videos… it allows me to be creative and retreat to myself” (1A); social: “Our department follows a process, a course development process. It is not necessarily an instructional design model, we don’t… we infuse those types of things” (5E); and cultural: “The course map is like the foundation” (10J).

Instrumental judgment

Nelson and Stolterman (2012) define instrumental judgment as “the choice and mediation of means within the context of prescribed ends” (p. 152). In many instances, participants described instrumental judgments regarding not so much why a tool was selected but how the selection was made. For example, tools might be used in a complementary way: one tool is used with another tool, or one tool is used after, or because of, another. Participant 1A described how flipcharts were used for brainstorming and how conducting interviews was a necessary ID tool to use before another one (Captivate) to develop content. This participant explains, “I can’t use a tool as a tool itself. Without brainstorming with flipcharts with my team… is like one can’t exist without the other”.

Discussion

The survey data show that among the 100 ID practitioners surveyed, a large number and variety of ID tools are used, many of them computer-based. The emphasis on computer-based tools corresponds with Roytek’s (2010) study, which highlights wide use of computer-based technologies among instructional designers. Roytek (2010) also identifies an important ID competency that requires knowledge of authoring tools (i.e., computer-based software). Current modalities of learning, including online and blended learning, may contribute to this frequent use. Conceptual and methodological tools are reported frequently as well, although the discrete number of these tools is only about a third that of computer-based tools.

Interview participants clearly do not reserve or confine certain tools to particular design activities. While de Croock et al. (2002) report distinct ID tools being used for specific design activities, following a prescribed design model (4C/ID), this may have to do with the perspective brought to that study. Stolterman et al. (2009), who studied design tools in use among interaction designers, and Gray et al. (2015), who studied design judgments made by ID practitioners at work, both used an open perspective in which no presumption was made that design models determine the use of tools in a specific manner or particular order. The findings in this study conform to those, because—as practitioners report—their projects do not conform to a regular process and the tools they use may be applied to more than one part of a design process. For instance, participant 9I remarked, “There is no order to it. You jump-in and you see what needs to be done and you do what needs to be done and you keep your eye on the goal all the time. That’s as far as like how to do a project”. Further, asked to make their own classification of tools, some respondents used process categories and others did not, and multiple respondents placed the same tool into different categories. Participant 3C, for instance, classified his tools around his own job duties as an ID, mentioning “communication”, “planning and mapping”, “content development”, “assembly”, and “training” as categories, whereas participant 8H classified his ID tools (non-exclusively) into “Design”, “Develop”, “Deliver” and “Assess”. Participant 1A took a different approach, reporting specific design activities as categories for her ID tools, such as “Analysis and Scoping”, “Design”, “Support Learning”, “Development”, “Graphics production”, “Evaluation”, “Project management” and “Knowledge base”. The implication here is that designers are placing these tools at their service rather than that tools guide their design processes. It is telling that while designers listed tools that can be used step by step (like ADDIE as a conceptual process model), they did not list the kinds of automated design tools or expert systems that direct their activities. Stolterman et al. (2009) describe the major characteristics of what they call designerly tools as those that support the practitioners’ work without guiding it, are appreciated by practitioners each for different reasons, and are not used exclusively for specific, or individual, design activities. Our conclusion is that, regardless of what scholars may develop or recommend, practicing instructional designers choose and use designerly tools.

This evidence of a designerly approach to tools and their use is bolstered by the high frequency of interview statements describing instrumental judgment (Nelson and Stolterman 2012), a form of design judgment exercised to match processes and tools appropriately with the demands, global and specific, of a given design situation. While Dicks and Ives (2008) do not use the term design judgment when they report on the use of social and cognitive skills by ID practitioners in support of their daily ID practice, they are pointing to basic elements of the same construct. They point out that such tools are not prescribed in well-known design models (e.g., ADDIE), and are not being taught in ID classes.

Implications

While larger studies, preferably observational studies in the field, are called for if our field is to build a true understanding of how instructional design is practiced, we posit, based on this study, that several ideas should be entertained by instructional design educators and scholars. First, educators need to focus on developing instrumental judgment in their students; this might entail an open-ended approach to selecting tools in their classes rather than specifying which tools (digital, analog and conceptual) their students must use, and promoting deep reflection in students regarding the efficacy of the tools they have selected and used. Scholars need to appreciate the skill required for instructional designers to do what they are already doing in selecting tools, matching them to the variable demands of the design situations they encounter, and using them flexibly for the properties best suited to their work, instead of seeking to replace their designerly judgment with external guidance that ignores their legitimate skills. To do so, commitment to descriptive, versus evaluative, study of instructional design practice must increase among instructional design scholars.

Study limitations

Self-reported data may or may not reflect an accurate description of practitioners’ use of, or views on, ID tools, although the exploratory nature of this study made self-reporting an efficient means to obtain preliminary insights. The possibility of social desirability bias inherent in interviews (Grimm 2010), in this case the possibility that respondents may have wanted the interviewer to see them in the best professional light possible, was addressed as much as possible by taking a non-evaluative focus in the recruitment message and emphasizing the same message at the beginning of, and throughout, each interview.

Conclusion

As highlighted earlier in the problem statement section, IDT scholarship predominantly aims to measure the work of ID practitioners “against what academics think they should be doing” (Tracey and Boling 2014, p. 658). While this approach might have some value, it has limited our understanding of how instructional design actually occurs in situ. We believe this study adds to the evidence that ID practice, when studied closely, reveals a complex picture that bears further examination and deeper understanding if scholars are to attempt to support or improve it and educators are to prepare the most effective designers possible. Our descriptive study contributes to an understanding of ID practice from the view of the practitioners themselves, who talked freely about how they actually work and reported their tool use to us in an open-ended format. Their use of tools certainly shows some constraints and influences, often from the workplace and from their personal, professional preferences. It does not show that they are seeking or using tools created to scaffold or direct their designing in a predetermined way. Their ability to discuss the instrumental judgments they make regarding tools implies that this proclivity on their part is a valid feature of their practice, not ignorance of such tools or a willful refusal to use what is good for them. What good does it do scholars and educators in the field to appreciate this feature of ID practice? First, tool design for these practitioners can factor in their judgment and the flexibility with which they use tools, making the work of tool design both easier (less need to consider many fail-safe, or “idiot-proof” features), and more difficult (more need to consider how the tool might serve the designer instead of guiding the designer). Scholarly tools offered to the field might therefore represent wasted effort less often, but might—even better—contribute to advancement in the field robustly and affect positively the types of instructional design problems we can take on and the ability of our designers to flex with the nature of those problems.

Educators preparing instructional designers may also use the findings from studies of practice to refine both design briefs (assignments for project work) and the conditions they create for students to practice ID to reflect, and demand, the kind of designer-led activity that practice in the field requires. The result could be that ID students develop a strong sense of their instrumental judgment and confidence in their ability to use it, becoming practitioners who are not stymied or dismayed when a new design problem does not yield readily to the tools they have been taught to use. While we see that practicing designers do learn to exercise flexible use of tools, as educators it is a worthy goal to prepare students as ready to work effectively as we can make them. Every practice relies on the workplace to complete the education of new practitioners to some extent, but narrowing the gap would be a positive outcome from a realistic understanding of what practice in that workplace demands of our students when they enter it. For educators teaching practitioners already in the workplace, our instruction of them is enhanced in credibility and applicability when we recognize the realities of their practice.

Finally, an empirically established view of practitioners in our field behaving consistently with design theory allows us to draw on design theory when we hope, as we must, to improve our understanding of practice and contribution to its improvement.

Acknowledgements

We would like to thank Dr. Yonjoo Cho, Professor of Instructional Systems Technology at Indiana University School of Education, for her support and help with this study, as she encouraged the first author to turn a literature review study into an extended research study.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Human and animal rights

The study has been approved by the Indiana University Office of Research Compliance, Institutional Review Board (Protocol #1703628139) and has been performed in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

References

  1. Boling, E., Alangari, H., Hajdu, I. M., Guo, M., Gyabak, K., Khlaif, Z., et al. (2017). Core judgments of instructional designers in practice. Performance Improvement Quarterly, 30(3), 199–219. https://doi.org/10.1002/piq.
  2. Boling, E., & Gray, C. M. (2015). Designerly tools, sketching, and instructional designers and the guarantors of design. In B. Hokanson, G. Clinton, & M. W. Tracey (Eds.), The design of learning experience (pp. 109–126). Cham: Springer. Retrieved from http://link.springer.com/10.1007/978-3-319-16504-2_8.
  3. Boling, E., & Smith, K. M. (2008). Artifacts as tools in the design process. In J. Spector, D. M. Merrill, J. van Merrienboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (pp. 685–690). New York: Taylor and Francis.
  4. Carr-Chellman, A. A., & Rowland, G. (Eds.). (2016). Issues in technology, learning, and instructional design: Classic and contemporary dialogues. New York, NY: Routledge.
  5. Chapman, B. L. (1995). Accelerating the design process: A tool for instructional designers. Journal of Interactive Instruction Development, 8(2), 8–15.
  6. Cho, Y., Jo, S. J., Park, S., Kang, I., & Chen, Z. (2011). The current state of human performance technology: A citation network analysis of Performance Improvement Quarterly, 1988–2010. Performance Improvement Quarterly, 24(1), 69–95. https://doi.org/10.1002/piq.20103.
  7. Cho, Y., & Park, S. (2012). Content analysis of the 20 most influential articles in PIQ. Performance Improvement Quarterly, 25(3), 7–22. https://doi.org/10.1002/piq.21126.
  8. Christensen, T. K., & Osguthorpe, R. T. (2004). How do instructional-design practitioners make instructional-strategy decisions? Performance Improvement Quarterly, 17(3), 45–65. https://doi.org/10.1111/j.1937-8327.2004.tb00313.x.
  9. Cox, S., & Osguthorpe, R. T. (2003). How do instructional design professionals spend their time? TechTrends, 47(3), 45–47.
  10. Crouch, C., & Pearce, J. (2012). Doing research in design. Oxford: Berg.
  11. de Croock, M. B., Paas, F., Schlandbusch, H., & van Merriënboer, J. J. (2002). ADAPTIT: Tools for training design and evaluation. Educational Technology Research and Development, 50(4), 47–58. https://doi.org/10.1007/BF02504984.
  12. Dicks, D., & Ives, C. (2008). Instructional designers at work: A study of how designers design. Canadian Journal of Learning and Technology/La revue canadienne de l’apprentissage et de la technologie, 34(2). Retrieved from https://www.cjlt.ca/index.php/cjlt/article/view/26421/19603.
  13. Gibbons, A. S., Boling, E., & Smith, K. M. (2014). Instructional design models. In J. Spector, M. Merrill, J. Elen, & M. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 607–615). New York: Springer.
  14. Gray, C. M., Dagli, C., Demiral Uzan, M., Ergulec, F., Tan, V., Altuwaijri, A. A., et al. (2015). Judgment and instructional design: How ID practitioners work in practice. Performance Improvement Quarterly, 28(3), 25–49. https://doi.org/10.1002/piq.21198.
  15. Grimm, P. (2010). Social desirability bias. In W. Kamakura (Ed.), Part 2 marketing research, Wiley international encyclopedia of marketing (pp. 258–259). Hoboken, NJ: Wiley-Blackwell.
  16. Gustafson, K. (2002). Instructional design tools: A critique and projections for the future. Educational Technology Research and Development, 50(4), 59–66. https://doi.org/10.1007/BF02504985.
  17. Kearsley, G. P. (1977). Instructional design considerations of CAI for the deaf. Alberta, Canada: University of Alberta (ERIC Document Reproduction No. ED 160 084). Retrieved from https://files.eric.ed.gov/fulltext/ED152046.pdf.
  18. Kenny, R. F., Zhang, Z., Schwier, R. A., & Campbell, K. (2005). A review of what instructional designers do: Questions answered and questions not asked. Canadian Journal of Learning and Technology, 31(1). Retrieved from https://www.cjlt.ca/index.php/cjlt/article/view/26504/19686.
  19. McKenney, S., Nieveen, N., & Van den Akker, J. (2002). Computer support for curriculum developers: CASCADE. Educational Technology Research and Development, 50(4), 25–35. https://doi.org/10.1007/BF02504982.
  20. Merriam, S. B., & Tisdell, E. J. (2015). Qualitative research: A guide to design and implementation. San Francisco, CA: Wiley.
  21. Merrill, M. D. (2001). Components of instruction toward a theoretical tool for instructional design. Instructional Science, 29(4–5), 291–310. https://doi.org/10.1023/A:1011943808888.
  22. Mooij, T. (2002). Designing a digital instructional management system to optimize early education. Educational Technology Research and Development, 50(4), 11–23. https://doi.org/10.1007/BF02504981.
  23. Morrison, G. R., Ross, S. M., & Kemp, J. E. (2011). Designing effective instruction. Hoboken, NJ: Wiley.
  24. Nelson, H. G., & Stolterman, E. (2012). The design way: Intentional change in an unpredictable world (2nd ed.). Cambridge, MA: The MIT Press.
  25. Perez, R. S., & Emery, C. D. (1995). Designer thinking: How novices and experts think about instructional design. Performance Improvement Quarterly, 8(3), 80–95. https://doi.org/10.1111/j.1937-8327.1995.tb00688.x.
  26. Ritzhaupt, A. D., & Kumar, S. (2015). Knowledge and skills needed by instructional designers in higher education. Performance Improvement Quarterly, 28(3), 51–69. https://doi.org/10.1002/piq.21196.
  27. Ross, S. M., & Morrison, J. R. (2014). Measuring meaningful outcomes in consequential contexts: Searching for a happy medium in educational technology research (Phase II). Journal of Computing in Higher Education, 26(1), 4–21. https://doi.org/10.1007/s12528-013-9074-6.
  28. Rowland, G. (1992). What do instructional designers actually do? An initial investigation of expert practice. Performance Improvement Quarterly, 5(2), 65–86. https://doi.org/10.1111/j.1937-8327.1992.tb00546.x.
  29. Roytek, M. A. (2010). Enhancing instructional design efficiency: Methodologies employed by instructional designers. British Journal of Educational Technology, 41(2), 170–180. https://doi.org/10.1111/j.1467-8535.2008.00902.x.
  30. Smith, K. M., & Boling, E. (2009). What do we make of design? Design as a concept in educational technology. Educational Technology, 49(4), 3–17.
  31. Sözcü, Ö. F., & İpek, İ. (2014). Rapid e-learning development strategies and a multimedia project design model. European Journal of Contemporary Education, 7(1), 46–53.
  32. Spector, J. M. (2002). Knowledge management tools for instructional design. Educational Technology Research and Development, 50(4), 37–46. https://doi.org/10.1007/BF02504983.
  33. Stolterman, E., McAtee, J., Royer, D., & Thandapani, S. (2009). Designerly tools. Retrieved from http://shura.shu.ac.uk/491/.
  34. Stolterman, E., & Pierce, J. (2012). Design tools in practice: Studying the designer-tool relationship in interaction design. In Proceedings of the designing interactive systems conference (pp. 25–28). ACM.
  35. Sugar, W. (2014). Studies of ID practices: A review and synthesis of research on current ID practices. New York, NY: Springer.
  36. Sugar, W., Brown, A., Daniels, L., & Hoard, B. (2011). Instructional design and technology professionals in higher education: Multimedia production knowledge and skills identified from a Delphi study. Journal of Applied Instructional Design, 1(2), 30–46.
  37. Sugar, W., Hoard, B., Brown, A., & Daniels, L. (2012). Identifying multimedia production competencies and skills of instructional design and technology professionals: An analysis of recent job postings. Journal of Educational Technology Systems, 40(3), 227–249.
  38. Svihla, V., Reeve, R., Sagy, O., & Kali, Y. (2015). A fingerprint pattern of supports for teachers’ designing of technology-enhanced learning. Instructional Science, 43(2), 283–307. https://doi.org/10.1007/s11251-014-9342-5.
  39. Torraco, R. J. (2016). Writing integrative literature reviews: Using the past and present to explore the future. Human Resource Development Review, 15(4), 404–428. https://doi.org/10.1177/1534484316671606.
  40. Tracey, M. W., & Boling, E. (2014). Preparing instructional designers: Traditional and emerging perspectives. In J. Spector, M. Merrill, J. Elen, & M. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 653–660). New York: Springer.
  41. Tripp, S. D. (1991). Two theories of design and instructional design. Paper presented at the Annual Meeting of AECT, Orlando, FL.
  42. Uduma, L., & Morrison, G. R. (2007). How do instructional designers use automated instructional design tool? Computers in Human Behavior, 23(1), 536–553. https://doi.org/10.1016/j.chb.2004.10.040.
  43. van Merriënboer, J. J., & Martens, R. (2002). Computer-based tools for instructional design: An introduction to the special issue. Educational Technology Research and Development, 50(4), 5–9. https://doi.org/10.1007/BF02504980.
  44. Visscher-Voerman, I., & Gustafson, K. L. (2004). Paradigms in the theory and practice of education and training design. Educational Technology Research and Development, 52(2), 69–89.
  45. Wedman, J., & Tessmer, M. (1993). Instructional designers’ decisions and priorities: A survey of design practice. Performance Improvement Quarterly, 6(2), 43–57. https://doi.org/10.1111/j.1937-8327.1993.tb00583.x.
  46. Winer, L. R., & Vázquez-Abad, J. (1995). The present and future of ID practice. Performance Improvement Quarterly, 8(3), 55–67. https://doi.org/10.1111/j.1937-8327.1995.tb00686.x.
  47. Yanchar, S. C., South, J. B., Williams, D. D., Allen, S., & Wilson, B. G. (2010). Struggling with theory? A qualitative investigation of conceptual tool use in instructional design. Educational Technology Research and Development, 58(1), 39–60. https://doi.org/10.1007/s11423-009-9129.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2017

Authors and Affiliations

  1. IST Department, School of Education, Indiana University Bloomington, Bloomington, USA
  2. School of Education, Indiana University Bloomington, Bloomington, USA
