Rapidly implemented during the COVID-19 pandemic, the Great Online Transition (GOT; Howard et al., 2022) represents a total transformation of higher education, demanding that faculty quickly assimilate new skills and considerably adapt their courses and pedagogy. An international study profiled higher education faculty readiness and found just over 90% of respondents perceived their skills and self-efficacy to teach online at levels of “inconsistent” or “low” readiness (Scherer et al., 2021). Research suggests that the beliefs, intentions, and abilities of faculty, as well as the institutional support they receive, will exert considerable influence on the aspects of online delivery that are sustained and contribute to positive impact (Atman Uslu & Usluel, 2019; Benson & Ward, 2013; Knezek et al., 2016; Lohr et al., 2021). Yet it remains to be seen which of the changes to universities, programs, and courses demanded by the GOT will persist as face-to-face teaching resumes in many parts of the world.

This concept paper argues that a set of principles can serve as guidance to organize faculty members’ design decisions when integrating EdTech (i.e., education technology used to enhance teaching and learning) into higher education courses, including their online delivery. As discussed next, the literature suggests that expertise and decision making are each contextually sensitive and organized around schema. Thus, it is possible that principles designed to direct attention to context and organize schema could support building faculty expertise. To consider the soundness of this argument, and the affordances and limits of such a process-oriented approach to faculty planning for the integration of EdTech, a dataset is reviewed that comprises two previous studies of faculty who participated in formal but varied levels of professional learning regarding digital cases (Dexter & Tucker, 2010; Tucker & Dexter, 2011) and simulations (Clement et al., 2021). In this paper, the phrase integrating EdTech is used inclusively: it covers incorporating technology such as LMSs, video conferencing, and other web-based software used to teach online, as well as specialty instructional tools used within an online course, like the interactive tools reported upon in these studies.

Literature review

Future faculty success for online teaching can be described analytically in terms of competencies, the knowledge, skills, and motivation (KSMs) that as component parts undergird successful performance: for example, knowledge of EdTech for pedagogy, skills to operate various EdTechs, and the motivation to design instruction that draws upon that knowledge and those skills. Future faculty success can also be framed holistically in terms of competence, the behaviors of successful performance, such as integrating EdTech in an online setting. Either framing—disaggregated KSMs or coherent, context-sensitive behaviors—could aid higher education in fostering online teaching excellence. Yet, considering this dichotomy instead as a continuum (Blömeke et al., 2015) emphasizing the connection between competencies and competence could prove very generative if made the focus of faculty professional learning. Blömeke et al. (2015) argue that this linking, middle view emphasizes the processes faculty use to recognize what KSMs the context demands and to make decisions about what to do, thereby operationalizing competencies into competence. That is, knowledge, skills, and attitudes are selectively drawn upon as individuals perceive, interpret, and make decisions to effect desirable performance in real-world situations (Blömeke et al., 2015).

The literature contains a limited number of studies regarding characteristics of faculty design of online learning, namely, how autonomous faculty are; orientations to planning that are either student/learning- or teacher/content-focused; and differences in course goals by content area (Bennett et al., 2017; Postareff & Lindblom-Ylänne, 2008; Stark, 2000). In that literature, the overall desirable competence of TPACK and how its component parts (i.e., TK, PK, CK, etc.) vary in amounts (cf. Benson & Ward, 2013) are more often discussed than are the processes of perceiving, interpreting, and decision making that draw upon these component parts and lead to overall competence with EdTech. Few studies investigate the processes of how faculty design courses in general (Bennett et al., 2017), let alone for online learning. Two recent, notable exceptions provide some insight into faculty processes. Bennett et al. (2017) interviewed a purposive sample of 30 faculty in Australian universities with a range of teaching experience (described as less than 5 to more than 10 years; all but one had some online teaching experience) to ask about their course planning approaches. Martin et al. (2019) interviewed eight award-winning online instructors within the U.S. about their roles and responsibilities as designers and facilitators of online courses.

The instructors in Bennett et al.’s (2017) study described this work as iterative, not systematic; voluntary; generally proceeding from big-picture considerations of the course to class meeting details; and idiosyncratic to the individual faculty member and his or her contexts, yet with shared general characteristics that make work with colleagues possible and even desirable. Considerations started out broad (“learning outcomes, the scope of the content and assessments, and general ideas about learning activities”, p. 135), became more granular (“content topics, learning activities and assessment”, p. 135), and then specific (readings, content resources, learning activities, and timing and requirements of assessment tasks, p. 136). Much of their design work was done prior to instruction, to the point of perceived coherence, with additional design work done during teaching as needed. After teaching a unit, faculty often recorded ideas for the next time they planned to teach it, and the redesign process proceeded much like the original design process.

Comparatively, Martin et al. (2019) learned that these expert faculty were systematic in their methods: they worked from objectives to identify topics and then resources, taking a big-picture-first, then-details approach. They then designed learning activities that would support achieving the objectives while being mindful of overall alignment, and also of the availability at their institutions of any selected EdTechs. Attention was given to how to meaningfully 'chunk' material, given the learners' prior knowledge, and to how to leverage online tools to provide sufficient interactions among learners, content, and the instructor. Beyond assigning papers and tests, they also considered how features of EdTech might allow learners to demonstrate their understanding more uniquely or creatively. Particular attention was given to course facilitation, through timely responses and announcements, feedback, and creating online presence through engagement. These faculty pointed to the value of the institutional resources available to them for course design, evaluation, and online teaching. These expert instructors characterized novices as needing support for course design, as novices lack comfort or experience with it (Kumar et al., 2019).

Thus, we see some differences between the two groups: the processes used by the expert group of award-winning faculty were described as systematic, while the other group’s were described as iterative. These differences are consistent with the literature on experts and novices, which describes experts as having different perceptual patterns (Tanaka & Philibert, 2022) and as using specialty knowledge and representative understandings (Cokely et al., 2018), or mental models (Mosier et al., 2018), that organize their approaches to their work.

The literature on how to characterize expertise typically uses terms such as novice, advanced beginner, competent, proficient, and expert to capture behaviors consistent with individuals in each phase. The progression is considered sequential and developed through deliberate practice (Fadde & Jalaeian, 2019). Yet expertise is viewed not only as a characteristic of the individual (Flach & Voorhorst, 2019) but also, in a more holistic fashion, as dependent upon how individuals interpret the symbols in their environment and then construct meaning from them, which emphasizes perception more than cognition (Baber, 2019).

Considering the competence and expertise literature together identifies some research-based similarities, most importantly the premise that both can be developed (Blömeke et al., 2015; Mosier et al., 2018). Yet while experts are certainly competent, not all individuals with competence are experts, in that their performances are not reliably superior (Ericsson, 2018). The competence literature emphasizes measurement, and somewhat recently Blömeke et al. (2015) proposed the process-oriented view of competence described above. The expertise literature emphasizes identifying and decomposing expert performance, with subsequent attention on how to develop expertise.

Following the GOT, considerable attention is focused on developing higher education expertise for EdTech use in online learning, as the articles in this special issue attest. Here, I argue that doing so requires attending to how faculty assemble competencies and draw upon them in context-sensitive ways in real-world settings. To this end, we can learn from both the competence and expertise literature, which emphasize the importance of perceiving, interpreting, and decision making guided by schema and symbols that heighten attention to context and guide iterative use. This need for a concept-driven approach is consistent with Bennett et al.’s (2017) conclusions and, extrapolating their findings, with the recommendations for emergency online instruction at the start of the COVID-19 pandemic (Connolly & Hall, 2020; Lee, 2021; Lohman, 2021).

A commonly used approach to characterizing knowledge competencies regarding EdTech use is technological pedagogical content knowledge, or TPACK (Mishra & Koehler, 2006). Studies of TPACK and teaching focus more on K-12 schoolteachers than on post-secondary faculty (Major & McDonald, 2021; Raduan & Na, 2020), but the considerable extant literature on teachers’ TPACK (Petko, 2020) offers many insights for post-secondary faculty. There is consensus that TPACK is meant to capture several bases of knowledge, decomposed into technology knowledge (TK), content knowledge (CK), and pedagogical knowledge (PK). This literature is replete with descriptions and examples of how these combine pairwise (i.e., TPK, TCK, and PCK) to represent new knowledge, with the primary interest in how they all combine to represent TPACK, the knowledge of how to use technology to aid the teaching of subject matter (Voogt et al., 2013). Some researchers also name context knowledge (XK), the situated nature of instruction, as an eighth dimension of this model. Brianza et al. (2022) reviewed that research and concluded that XK should represent teachers’ agency in using their TPACK at immediate, proximal, and distal levels of context. However, there is little consensus on how the three pairwise combinations, or overall TPACK, come into being. Petko (2020) characterizes the core argument about TPACK’s development as integrative (i.e., knowledge types can add up once learned) versus transformative (i.e., active learning is required to combine separate types of knowledge) (cf. Angeli et al., 2016). More recently, Brianza et al.’s (2022) characterization of context knowledge (XK) describes it as a competence that allows contextualizing TPACK [knowledge].

The teaching competence and expertise literature also focuses more on K-12 schoolteachers, but its emphasis on the skillful combining of knowledge, skills, and motivations (sometimes called attitudes, and sometimes including beliefs and self-regulatory capabilities) (Kunter et al., 2013; Raduan & Na, 2020) has a deeper empirical basis and provides more guidance into how the eight domains of knowledge in the TPACK model might lead to competent teaching with EdTech.

To conclude, the research on learning to teach with EdTech has been predominantly grounded in the TPACK model. But, as noted, the field does not agree on how TPACK forms (Petko, 2020), nor even on whether TPACK is a theory (Saubern, 2020). Considerably more guidance for creating a theory of change for developing faculty expertise in EdTech is available from the extant literature on expertise and competence. That research suggests that through learning and then practice, with explicit support for perceiving, interpreting, and making decisions, faculty knowledge, skill, and motivation can be combined to effect desirable performance.

Conceptual framework

The conceptual framework comprises a set of principles developed to guide planning and decision making for classroom EdTech integration, while considering the degree of school-wide support as context. The six Education Technology Integration Principles (ETIPs; Dexter, 2005) were originally developed for use in teacher education to provide K-12 pre-service teachers with principles to consider when designing instruction and weighing the role EdTech might play. They are updated here for higher education faculty and online contexts.

The first three ETIPs are learning-level principles. They assume ICT does not possess inherent instructional value; rather, instructors must design into instruction the value the EdTech adds to teaching and learning. This emphasizes reasoning and decision making, which differs from the emphasis on knowledge underpinning the TPACK model.

  • The first principle is that learning outcomes drive the selection of EdTech. This consideration focuses attention on the EdTech and its affordances, as weighed against the course learning outcomes and their cognitive demands.

  • The second principle is that EdTech provides added value to teaching and learning. Here, added value means the EdTech makes possible something desirable that teachers or learners otherwise could do less viably, or perhaps not at all. This principle discourages gratuitous or faddish uses of EdTech.

  • The third principle is that EdTech assists in the assessment of the learning outcomes. This principle builds upon the previous one and directs attention to any artifacts or traces the EdTech might produce as a potential source of data regarding the learner’s achievement of learning outcomes.

The next three ETIPs are organization-level principles that direct an instructor’s consideration of the support they might expect or anticipate needing when integrating EdTech, to gauge whether, given their amount of planning time and knowledge, a particular integration goal is realistic.

  • The fourth principle is that ready access to supported, managed EdTech resources is provided. This principle directs attention to how feasible the planned EdTech integration is, considering the organizational context. Faculty may ask: Is the EdTech readily available for use, via licensing or scheduling? When in use, is technical support available in a workable time frame if a problem arises?

  • The fifth principle is that faculty professional learning is targeted at successful EdTech integration. This principle suggests faculty should expect opportunities to learn to select EdTech purposefully, and then to use it in ways that add value and assist in assessment. Thus, this principle is generative for the first three. Faculty may ask: Do I know enough to use this EdTech to positively impact teaching and learning? If not, the principle directs their search for further opportunities to learn.

  • The sixth principle is that professional community enhances EdTech integration. This principle also directs a faculty member to consider the available support, this time from colleagues. Faculty may ask: Who do I know who can share their experience? Is there anyone to work with on this EdTech integration effort?

Altogether, these six principles could guide faculty members in designing for EdTech use in online instruction so that EdTech enhances teaching, learning, and assessment, and could help them consider how, or whether, the context provides appropriate support for the demands of effective EdTech use. Orienting toward these principles could guide faculty professional learning toward coordinating various competencies of technological, pedagogical, and content knowledge and skill into coherent competence for using EdTech in higher education courses. A shift in emphasis for faculty professional learning, from knowledge and skill competencies to instructional design decision making, represents higher-order thinking that includes contextual consideration. In this paper, the principles serve as a lens to investigate whether they characterize the decisions faculty made, or the decision-making processes they used, in a dataset of two previously completed studies. The goal of applying this lens is to consider the soundness of the argument that a competence-based theory of action could aid developing faculty competence for EdTech.

Dataset and approach

The six principles are illustrated through two existing datasets comprising interviews with educational leadership faculty about their instructional decision making. The two completed studies were both of novice faculty learning to integrate an online interactive EdTech (Dexter et al., 2020) into a post-secondary educational leadership course. In the first study the EdTech was digital cases (Dexter & Tucker, 2010; Tucker & Dexter, 2011). The digital cases portray an educational leadership dilemma that requires identifying and analyzing pertinent information presented as a school website or its intranet. A structured text response asks learners to address four decision steps: identify the problem, specify criteria to guide the decision, generate two alternative courses of action, and then decide upon a plan of action. In the second study (Clement et al., 2021) the interactive EdTech was digital simulations (sims). The digital sims are branching-type, or “choose your own adventure” style, simulations that introduce a scenario requiring a decision. Three to four answer options are offered, and the learner’s choice among them determines what they see next. Both tools offer opportunities to apply a variety of information that may be taught in a course, and it could be argued there are no wrong answers, just better or worse ones.
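
To make the branching structure concrete, the minimal Python sketch below models such a simulation as linked decision points. It is an illustration only: the scenario text, answer options, and endings are hypothetical, invented here, and do not come from the sims studied.

```python
from dataclasses import dataclass


@dataclass
class DecisionPoint:
    """One scene in a branching ("choose your own adventure") simulation."""
    scenario: str
    # Maps each answer option to the scene shown next; an empty dict marks
    # an ending, at which point the learner's path could be summarized
    # (akin to the sims' PDF artifact of choices made).
    options: dict


# Hypothetical two-step leadership scenario, invented for illustration.
ending_a = DecisionPoint("The parent appreciates the call; tension eases.", {})
ending_b = DecisionPoint("The dispute escalates to the district office.", {})
start = DecisionPoint(
    "A parent emails to dispute a student's suspension. What do you do?",
    {
        "Phone the parent to discuss the decision": ending_a,
        "Forward the email to the district office": ending_b,
    },
)

# Walk one path: the learner's choice determines what they see next.
scene, transcript = start, []
while scene.options:
    choice = next(iter(scene.options))  # stand-in for a learner's selection
    transcript.append(choice)
    scene = scene.options[choice]
print(scene.scenario, transcript)
```

Under such a structure, the recorded sequence of choices is the main artifact a completed run produces, which anticipates the assessment questions faculty raised about these tools, discussed below.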

In both studies a convenience sample of educational leadership faculty participated. In study one, nine faculty at seven distinct public or private institutions in one mid-Atlantic state used a set of digital cases over two to four semesters as part of a project that provided three professional learning events about their optimal use. Sample two also comprised nine faculty, at seven additional public and private universities from across the United States, who participated in one professional learning event about the use of the simulations, which were then used for one semester. In both samples instructors taught in a combination of online, hybrid, and face-to-face settings. In both studies the interactive EdTechs were integrated into a variety of educational leadership courses, each selected by the instructor.

The combined dataset comprises the participant interviews from, and the reported findings of, the two studies. After each semester of use, faculty were interviewed about their instructional planning for the EdTech, any barriers to use they encountered, their actual implementation of it, and their impressions of students’ experiences and learning with it. To follow along during a faculty member’s interview, his or her syllabus was also reviewed, for example, to track the dates the EdTech was used and the framing of the case or sim activity in the context of the course.

Rendering of the conceptual framework through faculty experiences

Learning-level perceiving, interpreting and decision making regarding digital case and sim EdTechs

Learning outcomes drive the selection of EdTech

In this dataset, this principle seems to have captured well the perceiving, interpreting, and decision making necessary to marshal the knowledge, skills, and affect and motivation needed to integrate EdTech. The participating faculty sought access to the EdTechs through the respective studies for purposes that matched their intentions for their courses and the outcomes they wanted their students to achieve. They were remarkably open to trying the digital cases and the sims, even though in both studies no one had used the EdTech before, although all but one had tried traditional, linear-narrative (and often paper-based) cases. Thus, their matching of their learning outcomes to the EdTech in these studies was guided by the information in the call for faculty volunteers to participate, and likely by projecting their expectations from using cases onto these new EdTechs. In their pre-surveys, they marked many purposes for using the EdTechs that matched what they indicated they had used cases for in prior semesters. This suggests that their perceiving and interpreting of the match between the EdTech and their intentions were scaffolded by the descriptive information in the call to participate and by their prior experiences with cases.

Multiple faculty members described their process for reviewing and selecting the one or more cases or sims they used, as each tool offered nine or more options. The standards in their program provided guidance, as did their course objectives and topics. In describing their selection process, faculty described reviewing multiple options that, based on their titles, seemed to match their course. While all started at the more general level of the title of the digital case or sim, a smaller number of faculty described then actually doing the case or sim, to experience it as a learner would, before finalizing their selection. In the study of digital cases, where faculty members used the cases for two to four semesters, their implementation over time added steps through which they made the connections between the course learning outcomes and the selected tool increasingly explicit to students, for example, by relating it to the course outcomes or standards, or by talking about the grading criteria or what made a quality answer.

The experience of using the tool in their course determined these faculty members’ inclination to incorporate it again in the future. One instructor preferred her own program’s decision-making model to the one inherent in the digital cases and so would modify any future use of the tool to incorporate her program’s model. Another instructor used an alternative framework for family involvement to the one inherent in a sim, and indicated that if he used it again, he would address that discrepancy. In one instance, an instructor developed more complex learning outcomes after students asked the principals supervising their internships to do the simulation with them.

The above illustrates how faculty drew upon their content knowledge, rather naturally perceiving the potential match of the well-described EdTechs to their outcomes of interest and, in some cases, seeing how they could reach new outcomes or make modifications for better alignment. It appears the course learning outcomes were the key consideration, which of course drew upon their content knowledge but also extended it. In turn, that match of EdTech to outcomes motivated their decisions to adopt these interactive EdTechs for use in courses. What the review of this dataset cannot address are the faculty who did not elect to participate after seeing the calls for participation. It is unknown whether non-participants considered and rejected the opportunity, or whether, even after reviewing the descriptions of the EdTechs and their match to educational leadership, they simply did not understand what the EdTech offered.

EdTech use provides added value to teaching and learning

This principle also seems to have captured very well in this dataset the perceiving, interpreting, and decision making necessary for faculty to utilize knowledge, skills, and affect and motivation to integrate EdTech. A motivating reason for the studies’ faculty members to participate was their perception that the tools added the desirable capability of providing hypothetical but realistic opportunities for students to practice applying knowledge in leadership situations. Before use, they gauged that the varying simulated contexts for leadership decision making exceeded what they felt they could otherwise provide to students. After using the EdTech with students, they could more specifically describe the approximation of leadership practice settings, through the digital cases’ simulated school contexts and the sims’ scenarios, as the benefit. This specific perceived added value is what drew in all of the participating faculty.

In both studies, the faculty also described the value of reliably providing all students the same practice environment for applying leadership knowledge and making leadership decisions. In some instances, the hypothetical school or scenario provided specific contexts to which faculty members could not otherwise guarantee students access. For example, one faculty member recognized some of her students did internships in high performing schools with few discipline issues and she wanted to let them practice decision making to address low student achievement and high levels of discipline issues.

These faculty members’ experiences show how the valuing of an EdTech for use in a course draws upon technology knowledge (TK) to comprehend or ascertain what functionalities are inherent in tools like interactive EdTechs, imagine these functionalities in terms of teaching value (TPK), and filter them through pedagogical content knowledge (PCK). ETIP 2 encompasses these competencies, as well as how a match between teaching goals and an EdTech's functionalities can motivate its adoption. Combining such knowledge and motivation can then support the decision making involved in the specific uses of an EdTech within a course.

EdTech assists in the assessment of the learning outcomes

This learning-level principle can capture the perceiving, interpreting, and decision making of faculty, including how, in these studies, participating faculty largely had negative perceptions of, or did not feel efficacious about, the specific ways these interactive tools could support gaining insight into students’ performance. The faculty in these studies did not emphasize assessing learning outcomes with features inherent in the interactive EdTechs, and instead most often framed the assignments as participation exercises to fuel discussion. As described above, the match with learning outcomes and the unique ability to simulate practice were valued, but there was hesitation to assess student performances using the interactive tools. One reason offered by several faculty was that they were just learning the EdTech themselves and did not want to unfairly introduce accountability by misjudging what they might learn from student performances.

The faculty also offered that the demonstrations of learning inherent in these performances were not directly observed, and they felt uncertain about using the artifacts students generated by completing the digital case or sim. The students’ output from the digital cases was a series of short answers to questions, which were presented to faculty in a grade book with a rubric. Some faculty members in the study of digital cases either anticipated these would be time-consuming to grade or remarked that they were not comfortable judging a better or worse response. A couple indicated they did not even read the case answers, reasoning that the experience of doing the case and formulating the response was more important than receiving instructor feedback. The artifact generated when a student completed a sim was a PDF summarizing the choices made at decision points, with reference to how the simulation encompassed various national leadership preparation standards. Multiple faculty members expressed that they were uncertain what to do with those artifacts or how students’ choices would translate to a score for the assignment.

Instead, an important feedback mechanism for the faculty in both studies was the discussions they held during and after students’ participation with the EdTechs. During the digital case assignments, a few faculty held asynchronous or synchronous discussions about the meanings and interpretations students were making of case information. This was done purposefully before students submitted answers, to aid their case exploration and sense-making. Just one used the built-in faculty tool that summarized students’ answers while cases were in progress to anticipate how to lead those discussions. A few faculty members in the sims study assigned students to do the sims in groups of two or three and asked them to discuss each choice they were offered in the branching simulations. One faculty member did the sim in class as a whole group and incorporated discussion about choices and their implications throughout the sim.

All the faculty members used discussions after the students’ completion of the assignment. One had students use a discussion board, but the rest held their discussions synchronously. From these discussions faculty inferred whether their purposes for assigning the interactive EdTechs were being met, based on what students wrote or said.

Here the dataset illustrates how limited knowledge and skill about how to apprehend students’ learning outcomes from these two EdTechs decreased faculty motivation to use built-in features, or to grade performances at all, thus impacting how they integrated these EdTechs. Considering ETIP 3 emphasizes how technological pedagogical knowledge (TPK) and skill are needed to perceive how ICT features can be leveraged to judge student demonstrations of learning, which in turn appeared to motivate their use (or not). Like the considerations regarding match with learning outcomes and added value, these competencies impacted motivation and drove the decision making related to how an EdTech might be used for course assessment procedures.

Organization-level perceiving, interpreting and decision making regarding digital case and sim EdTechs

Ready access to supported, managed hardware/software EdTech is provided

This organization-level principle does align with the perceiving, interpreting, and decision making faculty use to determine the viability of EdTech use. A motivating reason the faculty members gave for their choice to participate in these studies was to gain free and supported access to these interactive EdTechs. Use of the tools required that faculty and learners access a computer, the internet, and a web browser. The digital cases study was carried out prior to students’ widespread access to laptops, and two faculty described access to university computer labs during class as a factor in planning their integration, whereas other participants had learners supply their own devices and complete the digital cases as homework. The sims study was carried out a decade later, and the faculty did not even question whether students would be able to use their own computers to complete them.

Despite widespread connectivity and access to hardware and software, a few minor technical issues were reported, all described as solvable. The digital cases required that browser pop-up blockers be turned off, which confused some users unfamiliar with how to adjust this setting. With the sims, one instructor wanted her students to form small groups online to work through the sim together. She described how not all students had sufficient bandwidth to serve as the host for screen sharing audio and video, but those students worked this out quickly, including which settings in the video conferencing software optimized video sharing. No one else reported that internet access or browser use was a problem, although one instructor described issues her students had accessing the supplementary information linked from the sim’s interface.

The faculty in this dataset experienced little need for additional EdTech access or support. However, because support and access were provided in these studies, there was little opportunity to learn about how specific knowledge, skills, and motivation were assembled during instructional decision making. In general, ETIP 4 accentuates how technology knowledge (TK) and skill are key competencies for gauging the technical access and support required for successful faculty and student use of an EdTech before deciding to adopt it. These competencies in turn can support the decision making to incorporate EdTech into a course, or not, including more specific decisions about the feasibility of synchronous or asynchronous uses.

Professional learning is targeted at successful EdTech integration

This fifth principle, too, can capture the perceiving and interpreting faculty use to make integration decisions. Both studies provided participating faculty with professional learning about the features and operation of the tools, as well as a walkthrough of a slide deck, for instructors’ use in class, that suggested implementation steps before, during, and after use of the digital case or sim. These suggested steps were discussed in at least one professional learning session, where an instructor experienced with the tool provided insights into its course uses. The digital cases project exceeded this and provided a total of three in-person learning sessions, timed at approximately six-month intervals over the two years of the study.

The before-use steps suggested that faculty introduce the tool, reviewing its interface and commands for operation, as well as relating the specific digital case or sim to the course objectives. For the digital cases, which required a text response from students, there were also slides that addressed the elements of a quality response and the rubric that would be used to score responses. The during-use steps gave faculty guidance on engaging learners in discussions about sense-making regarding the information they were encountering, and prompted faculty to ask students for rationales for the decisions they made. The after-use steps suggested debriefing students on their decisions: talking about the relevant stakeholders, how declarative knowledge from class was drawn upon and interpreted considering the context of the digital case or sim setting, and what constituted a better or worse response. All faculty participants reported that the professional learning sessions prepared them to use the interactive tool in classes. When one faculty member reported that his students wished for a checklist for registering, accessing, and completing a quality answer in the digital cases, he was able to create one himself.

In each study the faculty members were interviewed after each semester of use (just one semester in the sims study). A common remark was that hearing the experienced faculty member present approaches to teaching with the tool was helpful guidance, as were the provided materials. All the faculty felt their uses of the sims went quite well, and the majority reported the sims served as a powerful learning experience for students. In the digital cases study, faculty used the cases for two, three, or four semesters, and the semester-by-semester changes in their use of the recommended before, during, and after steps were tracked. Trends in the use of the before-case implementation steps were that faculty increased their discussion of a quality answer and the benefits of case-based learning, added a description of the inherent decision-making model, and added a demonstration of how to navigate within the EdTech. Trends in after-case implementation were increased discussion of the step-by-step decisions, identification of key stakeholders, and talk about the rubric criteria.

The faculty interviews in this dataset contained little explicit discussion of how the project-provided professional learning sessions built faculty knowledge, skills, and motivation for ICT integration. We might infer that the learning opportunities provided fulfilled these faculty members’ needs, because they did not request further opportunities to learn. The digital cases project gives insight that faculty learned from each implementation cycle, in that they filled in additional recommended implementation steps as they saw the need. In general, ETIP 5 may guide faculty members in seeking how to bolster and assemble their technology knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK) and skills into the overall competence of integrating the ICT. In these studies, the knowledge and skill developed through professional learning for the respective ICT allowed detailed decision making for incorporating it into a course session, including more specific decisions about how to allocate class time before, during, and after its use for maximum benefit.

Professional community enhances EdTech integration and implementation

As with the other organization-level principles, ETIP 6, too, captures something about what was provided to faculty in these studies, but little data were generated about just how perceiving, interpreting, and decision making came together. The two studies varied in their emphasis on building a professional community for participating faculty. The sims study followed faculty tool use for just one semester, and besides meeting one another at the professional learning session described earlier, these faculty members were not prompted to interact further, nor did they report doing so. As noted above, in the digital cases project the three in-person, interactive professional learning sessions allowed faculty to share their experiences and ask questions of one another and the project staff. Multiple faculty members reported in the post-semester interviews that they had interacted directly with other participants in the study about uses of the digital cases. While the data do not elaborate upon the nature and frequency of those interactions, it can be noted that they arose without prompting by project staff.

Experiences of faculty in this dataset illustrate that some participants had intrinsic motivation to reach out to other faculty about their uses of this tool, perhaps tapping other participants’ knowledge and skill. However, this did not happen in the one-semester sims study. This is perhaps explained by two factors: first, the digital cases faculty were all from one state with a professional organization of educational leadership professors, so many already knew each other; second, they met face to face three times. Their increased knowledge of each other could have contributed to the decision to reach out to colleagues about uses of the tool. ETIP 6 may represent context knowledge (XK), extending ETIP 5 in a contextually sensitive way. In the digital cases study the faculty were not asked to interact about their experiences, but we see their agency in tapping a distal level of context.

Discussion

This paper has considered how the six principles can characterize the perceiving, interpreting, and decision making that faculty use to assemble EdTech competencies in context-sensitive ways. This consideration is illustrated through a dataset of two studies of faculty integrating interactive EdTech tools. Rendering through this dataset how the ETIPs might offer guidance for faculty instructional decision making about EdTech highlighted its contextual nature. The goals and settings in each project’s design placed different demands on the scope and nature of what faculty had to perceive, interpret, and make decisions about to achieve their instructional aims. This dataset provided greater detail on how the learning-level principles characterize the processes faculty used to assemble competencies.

At the learning level, we see how ETIP 1 can relate to content knowledge (CK); but more than just possessing content knowledge, it requires perceiving a match between EdTech and content, and valuing that match, as an early integration decision point. ETIP 2 relates mostly to pedagogical content knowledge (PCK) and ETIP 3 to technological pedagogical knowledge (TPK), but again these principles represent more than knowledge: they convey the processes of discerning and designing instruction.

The design of the original studies appears to have constrained the opportunity to learn as much about the potential role the organization-level principles might play in characterizing faculty planning for EdTech integration, by requiring minimal technical access or support, providing professional learning, and, in the digital cases study, providing some professional exchange. Yet we might infer that the organization-level ETIP 4 draws upon technology knowledge (TK), and that ETIPs 5 and 6 holistically address building and seeking out, respectively, technological pedagogical content knowledge (TPACK) and context knowledge (XK) to aid making instructional design decisions in courses.

The central issue following from this conceptual framework is how guidance from these principles might direct, or even accelerate, the building of key knowledge, skills, and affect or motivation, along with what Blömeke et al. (2015) describe as a skillful process of perceiving, interpreting, and decision making to appropriately assemble them into competence. The TPACK model identifies the knowledge to develop in faculty members (i.e., technological, pedagogical, and content) to draw upon as needed for teaching, without directly addressing skills and motivation. The literature on expertise points to training and rehearsal as necessary precursors to combine with an understanding of practice (Raduan & Na, 2020). The literature on experts and expertise has multiple schools of thought that conceive of expertise as residing not in the mind (schema) but rather, as Baber (2019) explains, in doing, and as being “about perception–action coupling”:

Advocates of the alternative approaches argue that cognition is constructed through the dynamic interaction of person and environment without the need for mental representation. Cognition thus involves events which arise from the experience of having a body, with a set of sensorimotor capacities and capabilities, in an environment. (p. 244)

Baber (2019) explains how we extend cognitive agency to objects, for example, an LMS with email capability that spares an instructor from learning students’ email addresses. Situated cognition, another school of thought, emphasizes knowledge as “constructed within and linked to the activity, context, and the culture in which it was learned” (Baber, 2019, p. 252); an example would be embedding best practices for LMS use through college-wide templates, training, and the showcasing of faculty exemplars. Baber explains that distributed cognition suggests that the very representation of an object in a system supports its members’ information processing, for example, how the descriptions of EdTechs provided in an organization can support integration decisions.

Whether schema, perception, or some combination guides the process, assembling competence depends upon possessing, at the least, the competencies of knowledge and skill. This raises the question of who is supposed to know which competencies are needed. Because EdTechs vary in the technical knowledge they require and may benefit from new pedagogy, for faculty to identify needed competencies and how to develop them could require iterative cycles of trial and error with EdTech. Instead, universities could accelerate the assembly process by embedding competencies into context and culture, so they are evoked in the perceiving and using of the EdTech. For example, the representations of available EdTechs could cue perceptions, such as by listing an app’s inherent content, if present, or the nature of its functions, and relating them to discipline-based pedagogical uses. Representing the EdTech together with its added value for pedagogy could address the finding that faculty often perceive technology and pedagogy separately and do not consider their combination (Brinkley-Etzkorn, 2018), and more often use ICT for passive or active learning rather than for the constructive or interactive learning activities associated with better learning outcomes (Lohr et al., 2021).

The principles themselves could distribute and situate cognition for instructional design at the higher education level within the typical faculty professional learning delivery models of individualized instructional consultations and formal learning opportunities (whether in-person workshops, online, or for credit) (Koh, 2020; Major & McDonald, 2021). Lohr et al. (2021) found that the university’s provision of technical and educational support was one of the more influential factors in bringing concepts of EdTech-supported learning into practice. A meta-aggregative review of teacher professional development for online and blended learning concurred, recommending a model of support for the full cycle from learning technical skills to developing ideas for EdTech uses into a coherent strategy for teaching and learning (Philipsen et al., 2019). Faculty are likely to vary in their levels of knowledge of how technology might support learning in their discipline (Benson & Ward, 2013), so assessing levels of motivation, skill, and pedagogical style, in the context of the organization’s provided access to tools (Knezek & Christensen, 2016), could further direct for whom the principles are most useful and how to position them to serve as guidance. Major and McDonald (2021) found just thirteen studies since 2012 examining TPACK in faculty learning to teach online. And while their review indicated which studies measured faculty outcomes, it did not indicate that any of the reported interventions described a general theory of change, or a specific theory of action, for their designs. The explicit teaching of these principles could be incorporated into such a theory of action.

While illustrating the six principles through an existing dataset allowed a holistic check of the conceptual argument, it is important to point out the limitations of this approach. The data were limited; they had been collected for other purposes. The inquiry addressed only two interactive instructional EdTechs, and faculty used the tools only within educational leadership courses. Based upon other research on faculty planning, there are disciplinary differences (e.g., Stark, 2000), and thus the principles may function differently with other EdTechs or disciplines. Further research is needed for a better understanding of these possible differences. Also, the context of a volunteer group of faculty members focusing on one EdTech, for which targeted support was provided, may not reflect the opportunities available to many higher education faculty. Therefore, the conclusions drawn here are best considered guideposts for designing a future study that would more robustly interrogate the role such principles might play in aiding faculty in perceiving, interpreting, and decision making about EdTech.

Conclusion

This special issue called for research on the specific EdTech competencies desirable in higher education faculty, which could also be considered more holistically in terms of competence for EdTech integration. Aligning integration approaches with the six principles presented here requires using sequences of perceiving, interpreting, and making decisions that might more accurately be described as the competence of instructional design with EdTech. Bennett et al. (2017) noted both that faculty are autonomous in their planning and teaching and that their participants did not indicate use of any frameworks to guide their teaching or EdTech uses. This suggests that universities may benefit from framing EdTech support differently. If the aim of EdTech support were to embed in faculty members’ instructional environments representations of excellent EdTech integration, including for online environments, it could build the “EdTech imagination” of faculty members and implicitly outline the needed technical and pedagogical knowledge and skill. Then, by also providing guiding principles, university EdTech support systems might motivate faculty to acquire and assemble those competencies in context-sensitive ways for instructional decision making. This reasoning shifts more responsibility for setting directions and crafting aligned professional learning to the organization, in a way that exceeds the “help desk” model, to also leverage and work with any university units concerned with quality in teaching and learning. It also suggests that, beyond simply joining forces, these units adopt a theory of change around developing competence and translate it into a context-sensitive (situated) theory of action for their faculty and institution.