Introduction

Many, if not most, scholars argue that their fields are evolving rapidly to stay relevant in the 21st century. For example, the humanities are experiencing a dramatic increase in digital programs (Kirschenbaum 2012), mathematics teaching has shifted toward making the math relevant by emphasizing statistics and computational thinking (The National Academies 2010; Sengupta et al. 2013), and art now emphasizes digital art, photography, film, and animation (Black and Browning 2011). Similarly, the field of instructional design and technology (IDT) has evolved substantially during the past 30 years since the high point of Gagne, systems design, and computer-assisted learning. During this time we have seen the rise of the learning sciences, the expansion of IDT into many other fields, and the explosion of the Internet and online learning. This evolution has also brought unique challenges to the field. Wilson (in Merrill and Wilson 2006) stated, “In the midst of ongoing change, it can be difficult to gauge where we are now and where we are headed as a professional community” (p. 341).

Many IDT departments have evolved in response to these rapid changes in our field. For example, in 2009 the Instructional Technology Department at Utah State University, a long-standing bulwark in the field with well-known IDT faculty, changed the department name to Instructional Technology and Learning Sciences—a change so dramatic that it warranted an article explaining the rationale (USU ITLS Faculty 2009). Other departments have gone through similar shifts in branding, focus, or in some cases division, including the University of Georgia, Georgia State, Penn State, Indiana, Purdue, Florida State, Arizona State University, and the University of Minnesota. Ku et al. (2011) found that these efforts have resulted in 29 different degree titles across 55 IDT programs. Persichitte (2007) explained that the changes of program titles “represent both a response to changes in the field and an influence on changes in the field” (p. 328).

Because of these evolutions and shifts, we need to understand where we now stand as a field, and a field can be defined in part by its academic departments and the scholarship they produce. In this paper we identify some of the departments influencing the field according to the (admittedly) limited data available from publications and professional organization awards. We then analyze the courses offered in a few of these departments to better understand trends in how they are preparing the instructional technologists of tomorrow.

Literature review

In this section we first review researchers’ previous efforts to identify publication topics and trends in the field of instructional design and technology (IDT). In choosing this terminology, we relied on the definition from Reiser (2012), who defined IDT as encompassing “the analysis of learning and performance problems, and the design, development, implementation, evaluation, and management of instructional and non-instructional processes and resources intended to improve learning and performance” (p. 5). We then discuss attempts to categorize programs and scholars in the field while noting some of the limitations of that research, limitations that ultimately point to the need for the current investigation.

Research on IDT publication topics and trends

Most studies attempting to answer the question of where we are as a field have sought to identify key topics and trends in published articles. As one example, Hsu et al. (2012) conducted a content analysis of 2976 articles (excluding book reviews, letters, and editorials) published in five journals (BJET, CE, ETRD, ETS, and JCAL) from 2000 to 2009. They found “pedagogical design and theories” to be the most popular topic and that research participants were most commonly sampled from higher education. The authors also compared trends in the first five years to those in the second five years and found that articles regarding “motivation, perceptions, and attitudes” and “digital game and intelligent toy enhanced learning” became more prevalent in the second five-year period.

Similarly, West and Borup (2014) analyzed article keywords extracted mostly from library databases in ten major IDT journals over 10 years (2001–2010). The most frequent keywords varied across journals, confirming the complexity and breadth of the field. However, these authors identified four patterns. First, keywords related to distance education were the most frequently used in eight of the ten journals. Second, the authors identified a possible bias towards researching and understanding higher and adult education over PK-12 populations. Third, the authors found a balance between keywords related to teaching and those concerned with cognition. Fourth, the keywords indicated that the field has maintained a focus on teacher education and preparation within the field.

Other researchers have narrowed their focus to publication trends in a specific subdomain in the field such as online and blended learning. For example, Drysdale et al. (2013) analyzed 205 doctoral dissertations and master’s theses in the domain of blended learning and presented a summary of trends in topics and methodology. These same authors plus a colleague did a thematic analysis of the most highly cited scholarship in journals over the previous decade, again focusing only on blended learning scholarship (Halverson et al. 2014). Lee et al. (2007) extended beyond blended learning to study the topics and methodologies used in 553 articles published in four distance education journals between 1997 and 2005, and Zawacki-Richter et al. (2009) included 695 articles published between 2000 and 2008 in five distance education journals.

While research into online and blended learning trends is more common, other scholars have studied trends in other subdisciplines of the field. Ozcinar (2009) conducted a citation analysis for 1980–2008 on the topic of instructional design in the Web of Science database; this study found an emphasis on cognitive load theory and effective examples in subjects such as education and educational psychology. Ozcinar also found that 87% of authors came from only seven countries. Finally, Martin et al. (2011) conducted a metatrends analysis of the most promising technologies published in the Horizon Report between 2004 and 2011 and how they affected publication trends in our field, finding that the social web and mobile devices had a large impact on publication trends, with games and immersive environments also emerging as important.

Still other scholars have examined publication trends in a specific geographical location. For example, Neto and Santos (2010) compared trends in Brazilian distance education journals with articles published in American Journal of Distance Education, finding interesting differences in methodologies used and topics discussed. However, this study was limited by use of only one journal to represent American (and even English-language) scholarship on distance education.

Kucuk et al. (2013) published an analysis of scholarship from Turkey over 21 years in 94 different journals, considering a total of 1151 articles. They found popular topics to be distance education, multimedia, and educational environments and technology. However, this research was limited to scholarship only from Turkey. Spring et al. (2016) compared trends from different geographic regions and found that North America led other regions in total citations and average citations per year, followed by Europe, Oceania, and Asia, respectively. Given the research activity in North America, Ritzhaupt et al. (2010) narrowed their analysis to two predominant North American distance education journals—American Journal of Distance Education and the Journal of Distance Education—between 1987 and 2005. Because of the extended time period of their analysis, they identified distinct differences across three time periods: pre-web, emerging web, and maturing web. While this study provides good insight, like many others it was limited by reporting on only a small pool of journals (in this case, two).

Research on IDT journals and programs

In addition to topical and citation analyses, authors seeking to clarify the nature and direction of the IDT field have identified influential journals, programs, and scholars. Many different strategies have been used, particularly in evaluating journals. Carr-Chellman (2006) surveyed recently tenured scholars in the field, according to departments listed in the Educational Media and Technology Yearbook, and asked 17 scholars to submit their vitae. The author then listed the most common journals in which work by these scholars was being published; the highest listed were Educational Technology Research and Development, TechTrends, Journal of Educational Computing Research, Journal of Computing in Higher Education, and Journal of Research on Computing in Education. While useful, Carr-Chellman’s study is now dated and limited by focusing on only emerging scholars.

More recently, Ritzhaupt et al. (2012) surveyed 79 educational technology professionals of various academic ranks and positions about the prestige and visibility of peer-reviewed journals in educational technology. While they found a high level of variance between journal visibility, prestige, and citation metrics, they learned that Educational Technology Research and Development, British Journal of Educational Technology, and Computers and Education had the highest overall visibility and prestige ratings.

Gall et al. (2008) narrowed their focus to the influence of a particular journal by examining references from and citations to articles published in Educational Technology Research and Development from 1990 to 2004. They found the top nine symbiotic journals with ETRD were Contemporary Educational Psychology, Educational Psychologist, Instructional Science, Journal of Computer-based Instruction, Journal of Educational Computing Research, Journal of Educational Psychology, Journal of Educational Research, Journal of Research in Science Teaching, and the Review of Educational Research. While Gall et al.’s (2008) study was useful for understanding the breadth of influence of IDT scholars in various educational journals, it was limited by considering influence connected to only one journal, in this case ETRD. Many other studies published in the last 15 years have similarly been limited to examining one or two journals (Bozkaya et al. 2012; Masood 2004; Ritzhaupt et al. 2010; West 2011), providing interesting but limited findings.

IDT program evaluation

In contrast to these more common efforts to analyze trends in the topics being published in the IDT field and to identify the journals publishing these articles, only a few attempts have been made to analyze the IDT programs that train and prepare the next generation of IDT professionals. To begin to address this gap, in 2009 the editors of the Educational Media and Technology Yearbook (Orey et al. 2010) identified and alphabetically listed the top 30 IDT programs. They based their decisions on the number of publications by each department in Educational Technology Research and Development for the previous two years (2007–2008) and on “received opinions from about 5–10 other faculty members from around the country” (p. vii). Recognizing the limitations of their initial attempt, the editors of the 2010 yearbook based their program rankings on broader and more objective data, albeit largely self-reported. They provided six top-20 department lists based on:

  1. Number of publications in ETRD and Journal of the Learning Sciences (JLS),
  2. Amount of grant and contract monies,
  3. Number of full-time faculty,
  4. Number of PhD graduates,
  5. Number of master’s graduates, and
  6. Overall total number of graduates.

The Yearbook editors explained that the first two lists were measures of research productivity, and the other four were measures related to teaching. ETRD and JLS were selected as journals to ensure that the ranking was based on high quality publications. The Yearbook editors, however, acknowledged that their selection of ETRD and JLS was based on a “general sense that they are the leading journals” and was “somewhat arbitrary” (Orey and Branch 2015, p. vii). They also acknowledged that by focusing on only two journals they had excluded programs focused on different journals more aligned with their departments’ missions. When listing programs based on the amount of grant and contract monies received, the editors relied solely on self-report data and admitted “there is no real way of verifying that the data is accurate” (Orey et al. 2015, p. vii). Meanwhile, the measures of teaching also relied on self-report data and focused only on quantity—not quality. Because the data (outside of the publication counts) were based on one-year periods, these annual lists provide snapshots but not trends across time.

Additionally, Ku (2009) attempted to rank IDT institutions and authors, also based only on ETRD publications but including 20 years of data (1989–2008). Using “Olympics-type” scoring (three points for first authorship, two points for second, and one point for third), Ku ranked Arizona State, Florida State, Pennsylvania State, Indiana, and the University of Georgia as the top five programs, respectively. Ku also found “several relationships between the most productive institutions and the most productive authors” (p. 805). For instance, four of the five top-ranked authors had current or former affiliations with the top-ranked institution, Arizona State (two as faculty and two as doctoral students). While these studies have been useful, as Spector (2015) noted, “What is less well documented is the changing nature of programs that prepare individuals for careers in the broad multidisciplinary field of educational technology” (p. 19).

In conclusion, attempts thus far to define the field of IDT have been theoretical/philosophical or have been based on analyses of publications, journals, programs, or individual scholars. However, we have been unable to locate studies that have examined department curricula or programmatic efforts to teach and train IDT professionals.

In this study we attempted to fill this gap by first examining which universities across the world appear to be productive in the field of IDT. Understanding that any single data source would be limited and biased, we analyzed multiple publicly available data sources while acknowledging the limitations inherent in this information. We then present findings from an analysis of the program and course descriptions from five of these most influential universities in order to better understand what kinds of instruction these leaders are providing to students.

Methods

Because the field of instructional design and technology (IDT) is expansive and has been integrated into many other disciplines (Gall et al. 2008), no single definition or sampling method will be universally accepted (Ku et al. 2011) or provide a comprehensive understanding of the field. Below we describe the methods we selected and the rationale for our data collection and analysis strategies.

Journal selection

From 2011 to 2015 we published a series of articles in Educational Technology analyzing various important journals in the IDT field. These journals were chosen by carefully analyzing lists of important journals in our field, especially by consulting the data collected by Ritzhaupt et al. (2012) on the most prestigious IDT journals. Ritzhaupt et al. (2012) recruited 79 professionals from three prominent educational technology listservs: the American Educational Research Association’s (AERA) special interest group on Instructional Technology, the ITFORUM listserv, and the Association for Educational Communications and Technology (AECT) members’ listserv. These respondents averaged 13.44 years of experience and included participants from the United States, Finland, Australia, Greece, Portugal, and Oman. The professionals were surveyed about their perceptions of the academic prestige (on a scale of 1–10) of journals in the field, resulting in a list of 59 journals ranked according to their prestige.

In addition to consulting this list from Ritzhaupt et al. (2012), we also consulted Google Metrics and the Thomson Reuters Impact Factor for information on the most cited journals in the field of instructional design and technology. We also sought journals that would provide a balance of cognitive/learning science and theory, technology, and instructional design. Following are the 20 selected journals, listed alphabetically:

  • American Journal of Distance Education

  • Australasian Journal of Educational Technology

  • British Journal of Educational Technology

  • Cognition and Instruction

  • Computers and Education

  • Distance Education

  • Educational Technology Research and Development

  • Educational Technology and Society

  • Instructional Science

  • Interactive Learning Environments

  • International Journal of Computer-Supported Collaborative Learning

  • International Journal of Technology and Design Education

  • International Review of Research in Open and Distance Learning

  • Internet and Higher Education

  • Journal of Computer-Assisted Learning

  • Journal of Computing in Higher Education

  • Journal of Distance Education

  • Journal of Educational Computing Research

  • Journal of the Learning Sciences

  • Performance Improvement Quarterly

Because hundreds of journals publish work related to IDT, we have undoubtedly neglected to include some journals that could have been considered. However, the selected 20 journals represent much of the top scholarship in the field of instructional design and technology during the span of 2005–2014. In examining authorship in these journals, we considered only first authorship, which may bias the data by ignoring multi-institutional collaborations, but which we felt limited the bias introduced when one mentor publishes with several advisees from the same institution.

Professional awards selection

We considered awards given by professional organizations to be a potential data source, as these awards recognize the best of a variety of scholarly pursuits. For our study we chose the Association for Educational Communications and Technology (AECT) as the main professional organization for IDT. Lowenthal and Wilson (2009) argued that “AECT historically has been uniquely influential in shaping and guiding theory and practice of instructional design and technology” (p. 39), and Lowenthal (2012), who surveyed 140 instructional design and technology professionals recruited through the ITFORUM listserv about which conferences they attend, found that AECT was the most frequently attended conference (attended by 50 of the respondents, with Educause second at 33 and AERA third at 32).

Once we identified AECT as the organization most representative of the field of IDT, we emailed leaders of the various AECT divisions as well as the AECT main office to request data on awards and scholarships given over the last 10 years (2005–2014)—including the Educational Communications and Technology internship program, which traditionally has recognized top graduate students in the field who show potential for leadership in the professional community. The following are the 25 AECT awards for which we received data:

  1. Strohbehn, Cochran, and Johnson Internships
  2. Young Scholar and Young Researcher Awards
  3. Distinguished, Special, and Outstanding Service Awards
  4. Presidential Awards
  5. Dale Award (given in 2004)
  6. Qualitative Inquiry Award
  7. Manke Multimedia Award
  8. Annual Achievement Award
  9. School Library Media Specialist of the Year (given in 2006)
  10. Theory into Practice
  11. ECT Mentor Scholarship
  12. Brown Publication Award
  13. Gagne Award
  14. McJulien Minority Graduate Scholarship
  15. deKieffer International Award
  16. Distinguished Development Award
  17. McClusky Research Award
  18. Outstanding Master’s Student Scholarship
  19. Outstanding Student Practice Award
  20. Outstanding Journal Article
  21. Brunikse Award (given in 2009)
  22. Division awards given by Distance Learning, Change, Multimedia, Research and Theory, and Teacher Education
  23. Legacy Scholarship
  24. ECT Diamond Mentor
  25. Pacificorp Design and Development

Handbook selection

The number of edited books in our field is immense. Accordingly, we limited our analysis to the third and fourth editions of the AECT-sponsored handbook, the Handbook of Research on Educational Communications and Technology (Spector et al. 2014), both of which were published during our 10-year window. The Handbook, like other books of its kind, includes chapters by decision of the editors. However, the Handbook relies on blind peer reviews and has multiple editors rather than just one, reducing potential bias. In addition, sessions are frequently held at AECT to discuss the proposed makeup of future Handbook editions and solicit feedback, and invitations to submit chapter proposals are widely distributed within the AECT organization. Thus, while any edited book will by definition not be as egalitarian as a journal, the Handbook does seek wider representation than most edited books.

Data collection

We developed a web scraper (using the Python scripting language) that extracted journal article data from EBSCOhost, which includes educational databases such as ERIC. We had planned to use this as the main source of our journal article data until we discovered we could find almost all the information we needed in Scopus, a bibliographic database of journals. Thus we used the scraper data only for AJDE and used Scopus to extract information for ETS, ETRD, DE, IS, BJET, CE, ILE, IHE, IRRODL, JLS, and PIQ. We (assisted by the authors of the various articles in the Educational Technology series, see West 2011) entered data manually for JECR, AJET, JCHE, CI, JCAL, IJTDE, JDE, and IJCSCL, checking the data for accuracy and filling in missing data by reviewing past issues of the journals. Similarly, we collected data on first authors and institutional affiliations for the Handbook chapters. We also organized the data collected from AECT leaders to ensure we had names and institutional affiliations for all award winners, as far as that information could be obtained.
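
To illustrate the kind of aggregation this involved, the following minimal Python sketch (not our actual scraper; the column name and semicolon-delimited format are assumptions) shows how first-author institutional affiliations might be tallied from a Scopus-style CSV export.

```python
# Illustrative sketch only: tally first-author institutional affiliations
# from a Scopus-style CSV export. The "Affiliations" column name and the
# semicolon-delimited format are assumptions, not a description of the
# exact export we used.
import csv
from collections import Counter

def tally_first_author_institutions(csv_path):
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Keep only the first listed (first-author) affiliation.
            first_affiliation = row.get("Affiliations", "").split(";")[0].strip()
            if first_affiliation:
                counts[first_affiliation] += 1
    return counts

if __name__ == "__main__":
    for institution, n in tally_first_author_institutions("scopus_export.csv").most_common(20):
        print(f"{n:4d}  {institution}")
```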

Trend analysis

Because it was not possible to analyze trends within all IDT departments, the second phase of this research examined the trends and attributes within five of the departments. We sorted each individual data set to determine the top 10 institutions for each data source and assigned points in reverse order (10 points for the top institution, 9 for the second, and so on) to create an overall list. From this pool of most frequently appearing institutions, we selected the top five institutions to address our second question, which involved qualitatively examining the trends within departments.
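
The reverse-order scoring just described can be illustrated with the short Python sketch below; the institution names are placeholders rather than our actual rankings.

```python
# Illustrative sketch of the reverse-order point scheme: within each data
# source, the top-ranked institution receives 10 points, the second 9, and
# so on; points are then summed across sources. Institution names below are
# placeholders, not our actual data.
from collections import defaultdict

def pool_rankings(ranked_lists, top_n=10):
    scores = defaultdict(int)
    for ranking in ranked_lists:                      # one ordered top-10 list per source
        for rank, institution in enumerate(ranking[:top_n]):
            scores[institution] += top_n - rank       # 10, 9, 8, ...
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

journal_top = ["University A", "University B", "University C"]
handbook_top = ["University B", "University A", "University D"]
awards_top = ["University C", "University B", "University A"]

for institution, points in pool_rankings([journal_top, handbook_top, awards_top]):
    print(institution, points)
```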

For each of these five institutions, we identified the department most likely to be associated with the field of IDT and retrieved course lists and programs of study from their websites. Employing thematic analysis, we coded for themes in emphasis, topics, and curriculum within these program and course descriptions. We began by coding for themes according to the traditional ADDIE framework (analysis, design, development, implementation, evaluation). While ADDIE is often considered inadequate as a design process (Bichelmeyer 2004; Hokanson et al. 2008), it provides a good framework for understanding basic categories in any IDT process. Additionally we applied inductive coding to discern other emergent themes in the courses (e.g., courses related to psychology, types of technologies taught, etc.). At least two scholars completed this coding for each department, comparing notes to ensure a high level of coding agreement.
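
As a minimal illustration of how the resulting codes were tallied into the categories reported below (the coding itself was done by hand, and the course titles and code assignments here are hypothetical examples only), counts feeding a table such as Table 6 could be aggregated as follows.

```python
# Illustrative sketch only: aggregate manually assigned course codes into
# ADDIE categories plus emergent themes. Course titles and codes below are
# hypothetical examples, not our actual data.
from collections import Counter

ADDIE = ["analysis", "design", "development", "implementation", "evaluation"]

# A course may carry two codes if it focused nearly equally on both
# (e.g., "measurement and evaluation").
coded_courses = {
    "Instructional Design Foundations": ["design"],
    "Measurement and Evaluation": ["evaluation", "research methods"],
    "Learning Theory and Instruction": ["psychology/learning theory"],
    "Multimedia Production": ["development"],
}

tally = Counter(code for codes in coded_courses.values() for code in codes)
print({category: tally.get(category, 0) for category in ADDIE})      # ADDIE counts
print({code: n for code, n in tally.items() if code not in ADDIE})   # emergent themes
```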

Data audit

While the analysis is our own, we asked an outside scholar to conduct a data analysis audit in order to verify the reasonableness and accuracy of our methods. This scholar is a well-known and respected tenured professor of instructional design and technology at a major university in the United States who has been awarded prestigious grants and is well published in the field. The scholar was also not affiliated with any university on our list and thus brought a higher degree of objectivity. The scholar was given access to our data files, a description of our data collection and analysis methods, and encouragement to ask questions until confident in the trustworthiness of our approach. This scholar verified our approach and felt the methodology was appropriate and strong. The insights and critiques provided were then incorporated into the final version of this paper.

Findings

Influential IDT institutions

A few universities emerged across multiple data sources, while many more universities showed strength in one area but not others. We first share the top publishing institutions within the 20 identified journals and the Handbook, followed by the top award-receiving institutions. We then conclude this section by sharing the results from our analysis of five of these institutions.

Top-publishing institutions

First, we examined the data to determine the top publishing institutions in these 20 journals over the past 10 years. We acknowledge that because the data report institutional but not departmental affiliations, some institutions might have scholars from multiple departments contributing to these counts. In addition, some departments are much larger than others, as are some universities (e.g., Open University enrolls 160,000 students). Finally, while this list of 20 journals represents a larger pool than other studies have assembled, we consider it likely that scholars in all departments sometimes publish in worthwhile journals not on this list. Thus, caution should be used in interpreting the data, as many departments produce excellent scholarship and provide outstanding mentoring to students that may not have surfaced in our calculations.

Table 1 reports the top 20 institutions in the world publishing in the 20 selected journals, according to the institutional affiliation of the first author only. The table also reports the percentage of the total articles in a particular journal that were produced by the 20 institutions, demonstrating patterns in institutions favoring particular outlets. For example, Nanyang University published 57 times in Australasian Journal of Educational Technology and Computers and Education. National Taiwan and National Central (also in Taiwan) Universities frequently published in Educational Technology & Society, British Journal of Educational Technology, and Computers and Education. Open University (in the United Kingdom) favored CE, BJET, and Journal of Computer-Assisted Learning. Athabasca scholars published 58 times in the International Review of Research in Open and Distance Learning. Since IRRODL is owned and hosted by Athabasca, more frequent participation in this journal from Athabasca scholars is not surprising.

Table 1 Top Publishing Universities by Journals (2005–2014)

Publishing in major handbooks is another way to exert influence in a field. Table 2 shows the top institutions (according to the affiliation of the first author) in the last two editions of the Handbook of Research on Educational Communications and Technology (published in 2008 and 2014). This list is markedly less international than the records of journal publication. The University of Georgia, Florida State University, Brigham Young University, Indiana University, and the Open University of the Netherlands appear in both 10-year lists of journal and handbook authorship.

Table 2 Top authorship in handbook (3rd and 4th Eds.)

Top award-receiving institutions

In analyzing the data on AECT awards, we divided the results into research awards and design/practice awards (i.e., awards given explicitly for instructional design or educational technology practices). Only three institutions received more than two design/practice awards during this decade: Indiana University (9), the University of Georgia (6), and Emporia State University (4). Research awards were more common. Table 3 shows the institutions receiving two or more prestigious research awards during this past decade, defined as awards providing a cash payout of $500 or greater. The University of Georgia, Purdue University, and Indiana University were clearly the leaders in receiving these awards. Table 4 shows the institutions receiving five or more awards of any kind (prestigious or not; research, design, or other) during this decade.

Table 3 Universities receiving prestigious AECT Research and Scholarship Awards (2005–2014)
Table 4 Universities receiving any AECT Awards (2005–2014)

Program offerings

For the second phase of our study, we sought to understand what several of these institutions are teaching students in the field of IDT. To answer this question we assigned points to universities in the top 10 in journal authorship, Handbook chapters, and prestigious AECT research awards. From this pooled list, we identified the five universities with the most points (in alphabetical order): Brigham Young University, Florida State University, Indiana University, the University of Georgia, and Utah State University. We list them alphabetically to de-emphasize any prioritizing of the programs, which we did not feel was necessary for the second stage of this study.

For these five schools, we accessed program descriptions from the website of the department at each institution most representative of the field of instructional design and technology (see Table 5). We observed a few interesting things about these programs. None of the departments offered an undergraduate degree. Most offered an online master’s degree, but all doctoral programs remained residential. Master’s degrees typically required a portfolio in lieu of a thesis, although USU offered a thesis option and BYU required either a research thesis or a written evaluation/design project.

Table 5 Program information

We then accessed and analyzed course descriptions for trends in how these schools train the next generation of instructional designers. We also provided each department the opportunity to verify the accuracy of the coding and data. It is important to note that it is very difficult to sort often interwoven concepts into discrete categories. We sometimes placed a single course into two categories if it appeared to focus nearly equally on both (e.g., a course on “measurement and evaluation”). However, it is probably often true that a course touches on many of these categories, including learning theory and psychology, learner analysis, design, and development together. In addition, it is sometimes not clear which courses are required at the PhD level, because some students may enter with previous master’s degrees in the field, while those who do not may be asked by their chairs to take several master’s-level courses. Often this decision is based on individual needs and chair advisement. Thus, we do not intend to suggest that these data are definitive for every student’s experience at these schools. For this reason, we elected not to report the data by school, but instead combined the data to look for patterns across all the schools, indicating areas where the field as a whole may be focusing or neglecting its instruction and training of students.

Courses in the ADDIE framework

First, we coded the courses offered to students according to the basic ADDIE (analysis, design, development, implementation, and evaluation) framework (see Table 6). We found that these schools placed the strongest emphasis on design and development courses and gave modest attention to evaluation, implementation, and analysis.

Table 6 Number of courses offered directly through the Instructional Technology Departments

Spector (2015) argued that, in his view, “there is a need for the programs… to transform their curricula.” He specified, “There is an increasing emphasis on evaluation. That emphasis could and should be reflected in educational technology programs” (p. 23). The focus we found on teaching some evaluation courses is a positive indication that these leading departments recognize the value of evaluation for the field. However, Spector argued that few programs actually require courses in program evaluation specifically, despite evidence that there may be more careers in evaluation than in actual design. Our data provided mixed support for this assertion, as all five schools required an evaluation course at the master’s level, but only two required one at the PhD level. However, Spector’s observation that these courses typically focus on summative evaluation rather than formative or program evaluation could be true even of these master’s-level courses.

Analysis was not commonly taught as a separate course (except at Indiana and Florida State) but was often included in the design courses. We identified few courses explicitly designed to teach implementation, despite common recognition that strong and innovative instruction can be rejected if implementation strategies or critical understanding of implementation environments is neglected (Ely 1990; Tessmer 1990). This could be an area in which the field could improve its training of new designers.

Courses outside the ADDIE framework

In addition, we looked for patterns in the courses taught outside of the ADDIE framework across all the departments. Common courses included foundations courses that provided introductory readings in the field, advanced readings courses and seminars, psychology and human development courses, courses that taught writing skills, basic research overview courses, and courses teaching educational technologies (see Tables 6, 7). For most of the programs, it appears that methodology courses are not frequently taught by IDT faculty but are instead offered through a collaboration with another program within the college. However, it was positive to note the wide variety of methodology courses offered to students, including a strong balance between qualitative and quantitative methods. We found that at the master’s level all schools require a foundations course. Other than this course and the ADDIE courses mentioned above, there were few common patterns in what was required at the master’s level. Instead, the pattern was one of wide variety among the programs.

Table 7 Number of courses required for master’s degree

At the PhD level, it was very difficult to discern major patterns except for one: each program emphasized flexibility and student choice in determining the program of study. We found that most of the programs required courses that could be selected from multiple options, making discrete categorization nearly impossible. However, we did identify a few smaller patterns. Every program required a general research methodology course and typically multiple qualitative and quantitative methods courses. It was common for programs to offer, and three required, courses focused explicitly on learning how to write more effectively, including literature review courses. Most programs required at least one design course at the doctoral level, but rarely development courses. No required courses directly focused on implementation were observed, and only one school explicitly required an evaluation course, while at one other it was an option for fulfilling a requirement. It could be that these schools require students to earn a master’s degree in the discipline prior to engaging in doctoral work and consider the master’s courses in these areas sufficient.

Perhaps surprisingly, no programs were observed to require any courses in educational technologies at either the master’s or PhD level, even though many such courses were offered as electives. In addition, for a field with a rich history in psychology/learning theory (Driscoll 2004) and instructional theory (Reigeluth and Carr-Chellman 2009), we were surprised that so few courses are required in this area. As far as we could determine, only two programs required courses in psychology, human development, or learning theory at the doctoral level, and only two required courses in instructional methods/pedagogy.

Discussion

Any analysis of universities and departments will be inadequate, as much of a program’s strength is in its uniqueness, particularly for graduate education. However, in this study we attempted to answer Spector’s (2015) appeal for more documentation on the “changing nature of programs that prepare individuals for careers in the broad multi-disciplinary field of educational technology” (p. 19). Rather than rely on self-reported data, we first attempted to objectively report findings on at least three publicly available indicators of institutional strength in the field of instructional design and technology: authorship in journals, authorship in the Handbook of Research on Educational Communications and Technology, and receipt of professional awards from the Association for Educational Communications and Technology.

We found strong departments in the field scattered throughout North America, Asia, and Europe, particularly when considering journal authorship. However, there was a marked difference in the international diversity when we considered Handbook authorship and AECT awards, perhaps indicating a bias in those areas towards North American scholars, which would have biased the remainder of the study towards that continent.

In addition, we analyzed programs of study in five of these departments in order to understand how they are teaching and developing the next generation of instructional designers. In doing so, we relied on the accuracy of their online materials, which we know may not always be up to date. Regardless, we found strong emphasis on design, development, and research courses, with moderate emphasis on principles of evaluation and foundational understanding of the field. We did not find very much emphasis on teaching instructional or learning theory or implementation strategies for instructional designs.

Professors in the field will not be surprised to learn that we found a great deal of flexibility for students to choose what they wanted to learn in their programs. Thus students might graduate with a degree in the field having learned and done very little concerning the actual practice of instructional design, evaluation, or research. We found a common tendency for programs to provide foundational understanding to all students and allow flexibility for students to go deeper into an area of interest. This approach could create confusion for the field, as not all graduates from our departments will have similar backgrounds and skill sets. Thus departments might stress development of portfolios by which students may distinguish and clarify the training they have received.

A second concern emerged as we realized how few teaching/learning theory courses are required or offered to students. In a field steeped in theoretical traditions and that expects graduates to understand how to teach, this omission may warrant departmental reflection.

Limitations and conclusion

While we believe the insights from this study are useful, we acknowledge several limitations. First, certain departments prefer to publish in particular journals, and future research could uncover what motivates these preferences for a particular research outlet. For some departments these preferences might be connected to a university’s sponsorship of a journal (such as Athabasca’s preference for IRRODL). Editorial involvement with a journal may also play a role: for example, the University of Georgia historically published frequently in ETRD and at one point had two ETRD editors on faculty, although there is no evidence that its publication in the journal increased during this period of editorship. It is more likely that certain departments develop a culture and mission around particular topics in the field that fit within the community of a particular journal, and thus involvement with that journal is a natural progression. In addition, various universities have different expectations for faculty scholarship, with some expecting more output (and providing more resources) than others. Some universities expect more international publication, others prefer national publication, some prefer print-based journals, and others prefer online ones. Thus, readers should remember that much of the output from faculty within a department or university may be influenced by these expectations. It would be interesting to conduct further research on the reasons for these departmental and university preferences.

Second, our study identified journal authors by institution but not by department, and future research could investigate more closely which faculty and which departments are contributing to these journals. For example, some institutions may have benefited by having multiple departments and programs contributing scholarship to the journals we selected. Another limitation is that our data do not indicate whether these authors are part of the instructional design and technology community. It could be that some of these authors identify with other fields such as general education, psychology, or the learning sciences. Future research using social networking technologies may help to determine which of these authors are bound together into research communities and which conferences and organizations they attend. Also, our study did not account for scholars who may move from one institution to another, senior scholars who may be mentoring junior scholars in significant ways and thus do not appear as first authors, scholars working under different publication requirements dictated by their individual universities, or authors who may be students. In the end, it is very difficult and perhaps impossible to account for all these factors, so we instead opted for a clean cross-section of the data, looking at the lead authors of the articles and where those authors were employed at the moment of publication. Future researchers may consider methods that would allow a more detailed look at the data, including which departments have established records of publishing with students and mentoring junior colleagues.

Third, universities in the United States clearly had strong relationships with the Handbook and the AECT organization, and this determined the final selection of five universities for the curriculum analysis. Selecting other international organizations that give awards, such as the United Kingdom’s Association for Learning Technology, would have produced different results. In addition, publishing in the Handbook occurs by decision of the editors, after blind peer reviews, and thus may reflect a network bias. We debated whether to include the Handbook data at all because of these limitations, but ultimately felt that because it represents the major volume in the field of instructional design and technology, it was an important piece of data to include.

Fourth, this study emphasizes the publication of articles in establishing the potential quality of a department. We attempted to overcome this bias somewhat by including the AECT awards, which are sometimes given for leadership and design. However, the results still reflect research scholarship as the predominant indicator of quality. We acknowledge this bias as a weakness; we believe that faculty and students in our field should be recognized for the quality of their instructional designs as much as for the number of articles they publish. However, this problem may be inherent in academia in general, increasing the difficulty of finding reliable indicators of quality design. Perhaps international organizations could assist in this area by finding ways to rate, evaluate, and reward design output that could be included in future studies of quality.

In conclusion, we do not believe there is necessarily one “proper” (Rieber 1998) way for departments to train students in instructional design and technology, nor is there one “proper” department or program. However, we do believe that learning from each other can bring useful introspection to each department. Such interchange may also stimulate important conversations in the field concerning our core competencies and paths into the future (Merrill and Wilson 2006). The data used for this study showcased some institutions, but different data sets would showcase others. In the end, any ordering or ranking is not as important as reflection about what we value as institutions and as a field.