
Prompting Deep Learning with Interactive Technologies: Theoretical Perspectives in Designing Interactive Learning Resources and Environments

  • Tiffany A. Koszalka
  • Mary K. Wilhelm-Chapin
  • Christopher D. Hromalik
  • Yuri Pavlov
  • Lili Zhang
Chapter
Part of the Smart Computing and Intelligence book series (SMCOMINT)

Abstract

Deep content learning requires learners to think about content. Interacting with digital resources and interactive technology-based instructional environments does not guarantee engagement in content thinking. Formal and informal instructional activities and environments are being inundated with opportunities for learners to interact in multiple ways with content through emerging interactive technologies. Questions are being raised as to whether these interactions are leading to critical thinking and deeper content learning. It is not enough to merely interact or “play with” technology resources; rather, learners must cognitively manipulate, think about, and reflect on content purposefully, in multiple and flexible ways, throughout these interactions to reach deeper knowledge. This chapter provides a conceptual description of learning and argues for a set of common guidelines to design learning resources and learning environments that integrate interactive technologies in ways that support learners in making meaningful content connections. This set of guidelines was drawn from a synthesis of overlapping tenets defined in generative learning, cognitive flexibility, and reflection theories and is supported by a multitude of research investigations. Examples of these guidelines in use, directly integrated into resources or through supporting instructional resources, show how learners can benefit from physical interactions that prompt thinking to achieve deeper content knowledge.

Keywords

Interactive technologies · Learning resources · Generative learning · Cognitive flexibility · Reflection · Deep learning · Instructional design · Design guidelines

1 Introduction

With the emergence of new types of interactive technologies, formal and informal educational resources and environments are being inundated with opportunities for learners to interact with content in multiple ways through a variety of digital materials and experiences. Newer technologies receiving much attention in recent educational literature include simulation-like environments, virtual reality, and augmented reality. Access to these experiences has opened up from the stand-alone computer to include networks (internet), tablets, mobile technologies, peripherals, and even robots. Each access device adds different affordances and challenges to engage learners in content thinking. Research on these interactive technologies spreads across multiple content domains (e.g., K-12 curriculum, engineering, medicine, sciences, humanities) and responds to a variety of questions about teaching and learning effectiveness; effects on learner perceptions, skills, emotions, and learning outcomes; and creation of instructional design principles. New questions ask whether the use of these technologies is prompting critical thinking and deeper content understanding or inhibiting learning (Garrison & Cleveland-Innes, 2005; Liu & Kaye, 2016; Murray, Pérez, Geist, & Hedrick, 2013). Results are varied and mixed.

A current research theme of particular interest is calling attention to learner–content interaction, specifically unpacking the relationships among interactive technology features and learner–content interaction design (Anderson, 2003; Dondlinger, 2007; Lamb, Annetta, Firestone, & Etopip, 2018). Cummings’ team (2015) suggested that learners must be engaged in interactive learning opportunities that allow them to feel connected to the instructional environment and not merely like key-pressing spectators. In this context, being connected suggests that learners interact with technologies in ways that help focus their attention on content. They are prompted to feel like part of the learning situation and feel comfortable in exploring content. They are encouraged to use, share, and display their developing knowledge, and they are engaged in reflecting on their own learning goals [self-regulation] and study practices. Such foci, emotions, interactions, collaborations, and reflections prompt learners to reach significantly higher achievement in content learning than those who interact with resources and environments that do not foster this sense of connectedness or provide focused content learning experiences and scaffolds (Bernard et al., 2009). Unpacking thinking and learning mechanisms is important in identifying how to design and incorporate interactive features to support learners in developing content knowledge (Gagné & Briggs, 1996; Reeves, Herrington, & Oliver, 2005).

This chapter begins by summarizing cognitive learning processes, differentiating between surface and deep levels of processing. The focus then turns to the integration and adaptation of technologies in learning environments, highlighting poor and good design features aimed at prompting learner interaction, engagement, and reflection. Existing research on generative learning theory, cognitive flexibility theory, and reflection theory provides the basis for proposed guidelines to prompt learner–content interaction resulting in deep levels of processing. Researchers are encouraged to design rigorous studies that contribute to our understanding of how instructional design around technology features affects learner–content interactions toward deep learning.

Connections between technology features and the degree to which they prompt deep learning processing are unclear from current research. As educational technologies advance in their ability to be interactive, research to explore features that prompt learners to engage with the content, particularly at deep levels of processing, is warranted to guide future design and development.

2 Thinking, Learning, and Deep Learning

Learners need to think and act to learn (Biggs, 1989; Wittrock, 1974). The assumption of this chapter is that learning is an ongoing cognitive and activity-based process. Learning processes involve learners actively experiencing, thinking about, and integrating representations (e.g., text, images, sounds, smells, tactile sensations, tastes) of content from resources into their developing knowledge structures (Jonassen, Campbell, & Davidson, 1994; Kolb & Kolb, 2012; Merriam, Caffarella, & Baumgartner, 2007). Resources are content containers (information providers) that can be a human, an object, or an environment—physical or virtual. Learning resources (and learning environments), however, bring a dimension to the learner–content interaction experience that moves content containers into resources that purposively prompt learner physical interaction and mental processing in support of knowledge development (Koszalka, 2016a).

Although learning is a rather complex behavioral and internal process, simply put, learners start to develop new knowledge, skills, or attitudes through a consciously or unconsciously triggered recall of what they already know, can do, and/or feel, and of related experiences they have already had (Gagné, 1985). This previously learned knowledge was stored, organized, and integrated within their schema, and the trigger identifies disparities or gaps in this current knowledge (Anderson & Pearson, 1984; Rumelhart, 1980).

Guided by their personal mechanisms of learning [like preferences, interests, goals, and abilities], learners then interact and engage with new information to begin generating connections from new information experiences into their existing knowledge structures [schema] to close these gaps (Rumelhart & Norman, 1978, 1981). More and richer sets of connections generated around content suggest a deeper knowledge (Anderson & Pearson, 1984; Grabowski, 2004).

Not all knowledge is in the same domain (e.g., cognitive, affective, psychomotor), nor does all learning occur at the same level of complexity (e.g., low to high) (Anderson et al., 2001; Bloom, 1956; Dave, 1971; Krathwohl, Bloom, & Masia, 1964). Learners take different approaches to learning, with the outcomes of learning closely associated with the chosen approach (Ramsden, 2003). For example, to learn a skill, learners may choose (or be prompted to choose) hands-on practice based on watching a demonstration, whereas when learning new information they may choose (or be prompted to choose) to work through multiple interactions with the information, models, and graphics, plus participate in focused thinking-based practices (Ormrod, 2016). The approach a learner takes can predict the level at which learning occurs.

2.1 Surface and Deep Processing

Learning can occur through surface level processing (low-level thinking) or deep level processing (high-level thinking) depending on learner goals (Biggs, 1989). Learners using surface level processing approaches focus (or are prompted to focus) on the substance of information and generally use low-level recall and memorization techniques (Biggs, 1989; Tagg, 2003). They perceive the learning goal is to study for a test to avoid failure rather than to grasp key concepts and determine how to apply new knowledge (Bowden & Marton, 1998). Thus, surface level learning likely results in few connections among schema. See Table 1.
Table 1

Characteristics of surface and deep thinking processing

Surface level processing
• Substance of content
• Recall and memorization
• Goal is to pass the test

Deep level processing
• Substance and meaning of content
• Self-monitor, correct own thinking
• Personal commitment to learning
• Reflect on understanding
• Pursue multiple perspectives and application of content
• Goal is deep understanding

Learners using deep level processing approaches focus (or are prompted to focus) on the substance and underlying meaning of content (Biggs, 1989) and actively self-monitor and correct their own thinking during their learning (Elder & Paul, 2009). Deep thinking is activated when learners engage in applying, analyzing, evaluating, and creating content (Anderson et al., 2001).

Deep level processing is represented by a personal commitment to understand the material that is echoed in activities like reflecting on how individual pieces of information relate to larger constructs or patterns and applying knowledge in real-world situations (Biggs, 1987, 2003; Entwistle, 1981; Ramsden, 2003; Tagg, 2003). Deep learning is also about developing habits to consistently think and reflect, approach new phenomena in thoughtful ways, and see new phenomena from different perspectives in everyday life (Dennis & Vander Wal, 2010; Elder & Paul, 2009; Ramsden, 2003; Tagg, 2003). Deeper comprehension results in the ability to articulate multiple perspectives and uses of content in new ways and different contexts (Jonassen et al., 1994; Spiro, Feltovich, Jacobson, & Coulson, 1992; Vos, van der Meijden, & Denessen, 2011).

The level of processing is also affected by the nature of learning tasks, e.g., content domain, time on-task, learner–content and learner–resource interactions, type and level of learning task, expected outcomes, prompts, and characteristics of resources (Laird, Shoup, & Kuh, 2005). Passey and Hobrecht (2001) found that higher levels of self-study and interaction with learning resources led to higher level performance outcomes, suggesting deeper content knowledge (Bloom, 1956). Deep learning is associated with abilities to retain, integrate, and transfer content knowledge, attitudes, and skills at higher rates (Biggs, 1989; Prosser & Millar, 1989; Ramsden, 2003; Van Rossum & Schenk, 1984).

Thus, if the goal of formal or informal instruction is to foster surface level learning, the design should prompt test-passing goals with the use of memorization and recall learner–content interactions. If the goal is deep content learning, the design of learning resources should prompt activation of prior knowledge and experiences and provide multiple types of learner–content interactions accentuating content-focused engagement [thinking]. “The only capacity we can use to learn is human thinking. If we think well while [actively] learning, we learn well. If we think poorly during learning, we learn poorly” (Paul & Elder, 2007, p. 8).

The characteristics of learning resources can influence a learner’s goals and learning (Dick & Carey, 1978; Gagné & Briggs, 1979) and play a role in learners’ choices for processing new information (Biggs, 2003; Garrison, Anderson, & Archer, 2001; Littlejohn, 2003). However, it is unclear which features of interactive learning resources and learning environments prompt connectedness and deep learning and which are distracting or detrimental.

3 A Brief Review of Educational Technology Uses and Research

Historically, technology uses in education began as supports for teacher-centered pedagogies, e.g., film, radio, projectors (Reiser, 2001a, b). As more sophisticated technologies like calculators and computers emerged, learners became the targeted users (Reiser, 2001a, b). These computer-based resources, however, generally held to teacher-centered approaches that primarily provided small chunks of content and tested for learner recall of that content (Gagné & Briggs, 1996; Ormrod, 2016).

As technology became more powerful with the advent of CD-ROMs, DVDs, and eventually the internet, features like graphics, sounds, multimedia, and new types of interactions were added, and more learner controls and choices were integrated to lend themselves to learner-centered pedagogies (Koszalka & Ganesan, 2004; Reiser, 2001b). These evolving resources offered rich learning environments with a multitude of information, visualizations, interactions, social connections, and prompts to support independent learning. New devices (e.g., tablets, smartphones) now enable learners to interact with these rich environments anytime, anywhere, with any connected resources—human or informational (Koszalka & Ntloedibe-Kuswani, 2010; Li & Wang, 2018).

Today simulations, virtual reality, and augmented reality are rapidly entering the educational realm. Many are being built upon cognitive models and artificial intelligence, offering human-like digital or physical components like avatars and robots to involve and engage learners (Alimisis, 2016; Curto & Moreno, 2016; Girvan, 2018). These technologies provide learners with authentic and immersive environments; context-sensitive triggers intended to provide help and formative feedback based on learner choices and decision-making interactions; and intellectual, social, physical, and emotional stimuli that immerse learners in rich sensory and oftentimes social learning experiences (Chattopadhyay, Gangadhar, Shankar, & Kasinathan, 2018; Lamb et al., 2018; Merchant, Goetz, Cifuentes, Keeney-Kennicutt, & Davis, 2014; Oigara, 2018; Sousa, 2005). Research questions continue to ask how well these rich and interactive technology resources actually prompt learner–content interaction that supports thinking and deep learning.

Guidelines like the Technological Pedagogical Content Knowledge (TPACK) framework were developed in response to haphazard practices using new technologies in the classroom, questions about the alignment of technologies with pedagogy and content, and questions about the value of technology in learning (Koehler & Mishra, 2009). This framework suggests that certain technologies are best used with specific pedagogies that align most effectively to specific content domains, e.g., science and reading. Integrating pedagogical approaches can foster cognitive presence [thinking about content], reflection, and discourse (Cook-Benjamin, 2018; Garrison et al., 2001) and influence the level of content exploration and interest (Axelson & Flick, 2010).

Other research has identified good design characteristics for specific types of technologies. Well-designed learner–technology–content interactions accentuate content, prompt learner–content interactions, and focus learners on content during learner–instructor and learner–learner interactions (Murray et al., 2013; Shea, Li, & Pickett, 2006). These interactions prompt thinking, self-regulating, and reflecting and support deep level processing of content (Bernard et al., 2009; Harvey, Coulson, & McMaugh, 2016; Lamb et al., 2018; Liu & Kaye, 2016; Merchant et al., 2014; Mislevy, 2013). They help learners develop confidence in their understanding of content (Duschl, 2003; Munene, Darby, & Doherty, 2015; Sato, 2003) and achieve learning gains (Kluger & DeNisi, 1996). Several studies found significant relationships among learners’ reported sense of connectedness with peers and instructors, cognitive thinking, and achievement in environments that scaffold attention to content and self-reflection (Liu & Kaye, 2016; Rovai, 2002). Deep learning occurs through interacting with, engaging in, and reflecting on content. These learning behaviors, as advocated in learning theories, are reflected in good design characteristics. See Table 2.
Table 2

Characteristics of poor versus good interactive technology design

Poor design
• Lacks pedagogical framework
• Inconsistent or haphazard use of features
• Too many features or learner decision points
• Content not apparent or weakly presented by interactions
• Lacks learner choices
• Static (one-way) content focus
• Lacks physical content interaction
• Lacks prompting for intellectual, social, and emotional thinking
• Lacks prompting for self-regulation
• Extraneous design features that interfere with bimodal processing
• Formative feedback missing or non-specific
• Mismatch between content interaction type and content learning expectations, e.g., domain (cognitive, psychomotor, attitude) and level of learning (surface or deep)
• Lacks reflection prompts

Good design*
• Informed by pedagogy (a, b)
• Strategic use of features to prompt learner–content interactions (a)
• Streamlined design to mitigate cognitive load (b)
• Content-rich interactions (a, b)
• Interactions align with and focus on content learning (a, b, c)
• Learner-directed choices (b, c)
• Dynamic interactions to read or act upon (a)
• Prompts physical interaction with content environment (a)
• Prompts for intellectual, social, and emotional content engagement (b, c)
• Prompts self-regulation (c)
• Multiple content representations—text, image, animation, sound (a, b)
• Context-sensitive prompts and formative and authentic feedback (a)
• Match between content interaction type and content learning expectation, e.g., domain and level of learning (b, c)
• Prompts reflection (c)

*Notes Primarily about—(a) interacting physically; (b) engaging mentally; (c) reflecting

Research also has identified characteristics of poor designs that tend to distract learners from content learning. Poorly designed resources haphazardly use features that confuse or overload learners, encourage involvement in activities that do not align with content learning outcomes, and distract learners from content, and thus do not support deep content learning (Gilbert & Moore, 1998; Oliver, 1999). Mismatching interaction features and learning expectations also impedes content learning (Akdemir & Koszalka, 2008; Kearsley, 1997; Keengwe, Onchwari, & Wachira, 2008; Koszalka & Ganesan, 2004). Thus, developing interactive technology-based learning resources requires design thinking that strategically combines and purposively guides learner–technology interactions and learner–content interactions.

4 Design Principles and Technology Affordances to Support Content Learning

An essential component in designing any interactive technology-based learning resource is drawing learner attention to the content through the technology features (Garrison & Cleveland-Innes, 2005; Rufai, Alebiosu, & Adeakin, 2015; Shea et al., 2006). Reeves et al. (2005) suggested that technology-enhanced resources and environments should be “integrating design principles with technology affordances” (p. 105) to help learners successfully achieve intended learning. Merrill (2002) synthesized a set of design principles suggesting instruction best promotes learning when (i) learners engage in solving real-world problems, (ii) existing knowledge is activated, (iii) new knowledge is demonstrated to the learner, (iv) new knowledge is applied by the learner, and (v) new knowledge is integrated into the learner’s world. Others argue that additional parts of this equation are using appropriate pedagogy, prompting self-assessment, and providing feedback, all aligned with content and key learning expectations (Cook-Benjamin, 2018; Lamb et al., 2018; Poll, Widen, & Weller, 2014).

Grabowski and Small (1997) suggested that there are three types of design elements in interactive technology-based (hypertext) applications. These elements include information, instruction, and learning. Information elements organize and provide content, instructional elements provide direction, and learning elements engage participants in active cognitive processing. Technology features may include multiple text (still, animated, captions), visual (live, animated, 3-D), and audio (voice, sounds) elements; hints and tips; noting, capturing, and highlighting functions; assessment tools; on-demand or context-sensitive feedback; social interaction tools; movement options (navigation using menus, joysticks, scrolling, keys, etc.); and other tools that allow learners to create, share, highlight, manipulate, and visualize content. These features range from static viewable resources (e.g., pdf documents) to highly interactive and responsive environments in virtual or real-world contexts. Each feature, however, can be designed to support informational, instructional, and learning needs (See more in Koszalka & Ganesan, 2004).
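As a loose illustration of how these element types might be applied during design, the following minimal Python sketch tags the planned features of a hypothetical learning resource by the element type(s) they serve and audits the plan for gaps. The class names, feature list, and audit rule are illustrative assumptions, not part of Grabowski and Small's framework.

```python
# Hypothetical sketch: tagging technology features of a learning resource by the
# three design element types described by Grabowski and Small (1997).
# Names and feature lists are illustrative assumptions, not prescribed by the chapter.
from dataclasses import dataclass, field
from enum import Enum


class ElementType(Enum):
    INFORMATION = "organizes and provides content"
    INSTRUCTION = "provides direction"
    LEARNING = "engages active cognitive processing"


@dataclass
class Feature:
    name: str
    element_types: set  # a single feature can serve more than one element type


@dataclass
class LearningResource:
    title: str
    features: list = field(default_factory=list)

    def audit(self) -> dict:
        """Count how many features serve each element type, to spot gaps
        (e.g., plenty of information elements but no learning elements)."""
        counts = {etype: 0 for etype in ElementType}
        for feature in self.features:
            for etype in feature.element_types:
                counts[etype] += 1
        return counts


if __name__ == "__main__":
    resource = LearningResource(
        title="Plant cell simulation (hypothetical)",
        features=[
            Feature("3-D organelle view", {ElementType.INFORMATION}),
            Feature("navigation menu and hints", {ElementType.INSTRUCTION}),
            Feature("context-sensitive feedback",
                    {ElementType.INSTRUCTION, ElementType.LEARNING}),
            Feature("note-taking and highlighting tools", {ElementType.LEARNING}),
        ],
    )
    for etype, count in resource.audit().items():
        print(f"{etype.name.title()}: {count} feature(s)")
```

A simple audit of this kind could be run during design reviews to check that learning elements, not just information elements, are represented.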

Matching design principles and technology features in learner–content interaction design can influence learner activity, engagement, satisfaction, and learning outcomes (Cho, 2011; Ekwunife-Orakwue & Teng, 2014) and scaffold learners in choosing strategies that lead to deeper content thinking (Koszalka & Ganesan, 2004). However, it is unclear which features are best at fostering deep learning.

5 Recent Research in Interactive Technologies and Learning

Recent literature analyzed learning outcomes in simulation environments including games and virtual worlds; virtual reality environments; augmented reality environments; and artificial intelligence devices.1 Patterns emerged across these papers that help unpack the relationships among learning and interactive technologies features. See Table 3.
Table 3

Summary of highly interactive technologies with definitions and key design factors

Simulations (see Sauve, Renaud, Kaufman, & Marquis, 2007): Digital environment that models the reality of a defined system; has fidelity, accuracy, and validity; designed to address learning objectives

Educational games (see Sauve et al., 2007): Artificial or fantasy environment; pre-determined goals; pedagogically based; for one or multiple players

Virtual worlds (VW) (see Girvan, 2018): Shared, simulated space inhabited by characters/avatars (learners) who mediate experiences; avatars share and construct knowledge together

Key design factors in learner–content interaction* for simulations, educational games, and virtual worlds:
• Use of pedagogical models
• 3-D environments (a)
• Behavior/action prompts (a)
• Highly interactive (a)
• Inviting interface (a)
• Visual and auditory features (a)
• Feedback with elaboration (b)
• Correct-response feedback (b)
• Minimal learner assessment (b)
• Experiential pedagogy (b)

Virtual reality (VR) (see van Krevelen & Poelman, 2010): Synthetic, digitally created dynamic view of the real world in which learners interact with content (higher technology demand than VW)

Augmented reality (AR) (see van Krevelen & Poelman, 2010; Turkan et al., 2017): Mix of digital VR and the real world; augments overlay real objects; learners prompt augments (higher technology demand than VR)

Key design factors in learner–content interaction* for virtual and augmented reality:
• 3-D environments (a)
• Exploratory paths (a)
• Intellectual, emotional, and social process prompts (b)
• Multiple perspectives (c)
• Authentic—real world (b)
• Context-sensitive feedback (b)
• Overlay information (b)
• Multisensory (visual, sound, tactile) (b)
• Experiential pedagogy (b)

Artificial intelligence technologies (see Hoppe, Verdejo, & Kay, 2003): Digital resources and environments (e.g., intelligent tutors, worlds, communities, robots) built on programs that mimic brain functions

Key design factors in learner–content interaction* for artificial intelligence technologies:
• Auto responses/triggers (b)
• Shares achievement level (b)
• Recognizes and adapts to learner input (b)
• Individualized feedback/cues (b)

*Note All technologies engage learners in problem solving and support cognitive processing; (a) primarily interaction (physical); (b) primarily engagement (thinking); (c) primarily reflecting

A meta-analysis of simulations, games, and virtual worlds found that the use of pedagogical models, 3-D environments, and various prompts promoted cognitive gains, higher order thinking, skill development, and affective attributes in learners (Lamb et al., 2018). Features like elaborate feedback, knowledge of correct response, and learner control mechanisms stimulated skill and cognitive development and prompted higher levels of motivation and interest, likely due to interface, interaction, and feedback design (Lamb et al., 2018; Merchant et al., 2014). Collaborative and competitive activities were perceived as providing an overabundance of interaction that may have slowed or impeded learning.

Virtual reality (VR) and augmented reality (AR) environments use experiential pedagogy to engage learners in intellectual, social, and emotional processes (Oigara, 2018). They are motivational, encourage exploration, prompt interaction through multiple visual and auditory channels, and provide multiple perspectives of content. AR provides context-sensitive overlaid information that enhanced the perceived value of learner–content interaction (van Krevelen & Poelman, 2010). VR and AR provide authentic real-life environments that lull learners into becoming part of a learning environment and give feedback in authentic ways (Oigara, 2018; Rossing, Miller, Cecil, & Stamper, 2012; van Krevelen & Poelman, 2010). Such features scaffold prior knowledge activation and help learners make sense of content (Sousa, 2005).

Artificial intelligence-based resources use pattern matching to trigger formative assessment through tutors, robots, or other means. Triggers (text, verbal, visual) help learners monitor their own achievement, identify errors in behaviors or thinking, and adapt toward greater content understanding. Pedagogical models, content domain models, and learner behavior data are merged to mimic human-like prompts, questions, alerts, and suggestions that engage learners in thinking while progressing toward deeper content understanding (Chattopadhyay et al., 2018).

5.1 Summary

Interactive technologies have features that are able to support learning when they are designed to be problem-centered, activate existing knowledge, demonstrate new knowledge, prompt learners to apply that knowledge, and engage learners in integrating new knowledge into their own context (Merrill, 2002). Resources that are pedagogically grounded, offer flexibility, provide feedback, prompt self-assessment, and encourage reflective practices can support deep thinking. Several studies confirm specific instances of rich, interactive technology resources and environments that have positively impacted learning.

However, learners’ abilities to reach a deeper understanding in these complex technology-based environments are called into question when learners are faced with too many paths or choices and a lack of scaffolding toward content learning (Munene et al., 2015; Sampson, Leonard, Ballenger, & Coleman, 2010). Most of the studies report on surface learning, perception, affect, or behavior patterns. This does not necessarily suggest that interactive technologies do not support deep learning; rather, there is simply a lack of robust research on these technologies that specifically explores deep content thinking. Through the lens of learning theory, relationships between technology features and deep processing of content can be thoroughly investigated.

6 Contributing Theories and Synthesis of Their Tenets

Unpacking relationships among technology features and deep learning processing can be aided by considering theory. Three established educational theories that speak to learner–content interactions and have empirical evidence suggesting deep learning mechanisms [cognitive development] in interactive technology applications include generative learning theory (GLT), cognitive flexibility theory (CFT), and reflection theory (RT). Each provides a theoretical view of deep learning with technologies. We posit that overlapping tenets across these theories provide validity to their principles. Synthesizing these principles with research findings across the interactive technology studies mentioned above provides a foundation upon which guidelines can be created to inform the design of interactive technologies that support deep learning. See Table 3.

6.1 Generative Learning Theory (GLT)

Generative Learning Theory (GLT) posits that learning occurs when learners are both physically and cognitively active in organizing and integrating new information into their existing knowledge structures (Grabowski, 2004; Wittrock, 1992). Comprehension and understanding result from generating relationships among existing concepts and previous experiences with new information and experiences (Wittrock, 1992). Cognitive processing starts with sensory arousal. Learners then actively (physically) and cognitively (mentally) begin to make sense of new information by organizing and integrating it into their existing knowledge schema or, if it is not perceived as important, dropping it from their thinking. Schema are further developed and modified through four processes—motivational, learning, knowledge creation, and generation. These processes couple physical interaction with thinking and, in turn, lead to the generation of new concepts and connections within the learner’s schema. See Table 4.
Table 4

Four process components of generative learning theory*

 

Motivational processes
• Brain function: Arousal and attention
• Example: Intention (interest)
• Learner action: Selectively acknowledge new content based on interest and sense of control
• Determines: Recognition of stimulus/new content and whether generation occurs

Learning processes
• Brain function: Arousal and attention
• Example: Attention and focus (sustaining)
• Learner action: Attention focus on and response to new content that has been acknowledged
• Determines: Decision to code and integrate new information

Knowledge creation processes
• Brain function: Sensory input and integration
• Example: Beliefs, pre-conception, conception, and meta-cognition
• Learner action: Iteratively combine and compare new content to existing knowledge
• Determines: Quality and type of connections generated, based on beliefs, values, and memory

Generation processes
• Brain function: Executive planning and organization
• Example: Coding, integration
• Learner action: Create new relationships by integrating, organizing, and reconceptualizing
• Determines: Comprehension level, recall success, and retrieval of new content

*Note Adapted from Wilhelm-Chapin and Koszalka (2016)

6.2 Cognitive Flexibility Theory (CFT)

Cognitive Flexibility Theory (CFT) suggests that deep learning requires learners to engage with new content from multiple perspectives and in flexible ways of thinking, thus prompting the development of higher order thinking skills (e.g., problem solving), richer cognitive connections, and changes in the learner’s affective domain (Spensley & Taylor, 1999; Spiro, Coulson, Feltovich, & Anderson, 1988). Cognitive flexibility is defined as “the ability to spontaneously restructure one’s knowledge, in many ways, in adaptive response to radically changing situational demands” (Spiro & Jehng, 1990, p. 165).

CFT maintains that advanced knowledge construction is more than a simple recollection of prepackaged information. Knowledge is formed by actively assembling different knowledge fragments from past experiences and applying them adaptively to solve new problems in a “situation-specific knowledge assembly” process (Spiro et al., 1988, p. 8). As a result, CFT promotes knowledge transfer. In order to be able to use knowledge flexibly, it needs to be learned flexibly. CFT offers several instructional principles that promote such flexibility during learning: ill-structuredness, interconnectedness, irregularity, nonlinearity, flexibility, conceptual variability, multiple representations, early introduction of complexity, multiple perspectives, and criss-crossing of knowledge landscapes (Spiro, Collins, Thota, & Feltovich, 2003; Spiro et al., 1988). These principles can inform the design of complex case studies that entice learners into deep learning by interacting with the content, visualizing and revisiting content, and engaging in analysis, evaluation, reflection, application, and synthesis during learning.
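To make the "criss-crossing" principle concrete, here is a minimal Python sketch of revisiting the same theme across different cases rather than following one fixed sequence. The case titles, themes, and the very simple traversal rule are invented for illustration only; they are not drawn from CFT sources.

```python
# Hypothetical sketch: "criss-crossing" a small case library along different
# conceptual themes, in the spirit of CFT's multiple perspectives and nonlinearity.
# Case titles, themes, and the traversal rule are illustrative assumptions.
CASES = {
    "Flooded coastal city": {"themes": {"hydrology", "policy", "economics"}},
    "Failed bridge retrofit": {"themes": {"materials", "policy", "ethics"}},
    "Crop loss after drought": {"themes": {"hydrology", "economics", "ethics"}},
}


def traverse_by_theme(theme: str) -> list:
    """Return the cases that revisit a given theme, so learners can re-examine
    the same concept in different contexts rather than in one linear order."""
    return [name for name, case in CASES.items() if theme in case["themes"]]


if __name__ == "__main__":
    for theme in ("hydrology", "policy", "ethics"):
        print(theme, "->", traverse_by_theme(theme))
```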

6.3 Reflection Theory (RT)

Reflection theory (RT) suggests that learners engage intellectually and affectively in situations, activities, or resources (Schön, 1983). This reflective engagement leads to deeper understanding, more cognitive connections, appreciation of one’s experiences, self-assessment practices, meaning making, and the ability to transfer newly learned concepts to new situations (Schön, 1983; Wells, 1999; Zimmerman, 1998). Reflection occurs in episodes of self-observation, self-judgment, and self-reaction in which learners evaluate their progress toward the goals they have set (Zimmerman, 1998). Zimmerman describes self-regulated learning as a cycle of three phases: forethought, performance, and self-reflection. Forethought is when learners plan for learning and evoke interest, motivation, and goal orientation toward learning. The performance phase is where learners employ learning strategies (e.g., time management, volition) to engage in meeting learning goals. Self-reflection is when learners evaluate their performance and achievement of learning goals. Facilitating learners through all three phases assists them in reaching deeper levels of learning (Bannert, 2006; Lin & Lehman, 1999; Moos & Bonde, 2016). See Table 5.
Table 5

Prompts for self-regulated learning leading to self-reflection

Resources management prompts
• Purpose: prompt learners to ensure optimal learning conditions
• Example prompts (prompt learners to…): gather necessary learning materials; coordinate groups; sustain motivation
• Corresponding self-regulation phase: Forethought

Cognitive prompts
• Purpose: support information processing and use of learning strategies
• Example prompts (prompt learners to…): stimulate recall; complete steps in a procedure/process; use cognitive learning strategies
• Corresponding self-regulation phase: Performance

Metacognitive prompts
• Purpose: prompt learners to self-monitor and control their own learning
• Example prompts (prompt learners to…): self-reflect; use metacognitive learning strategies
• Corresponding self-regulation phase: Self-reflection

A learner’s ability to self-regulate learning is both cognitive in nature and the result of interactions among personal, environmental, and behavioral influences (Zimmerman, 1998, 2001, 2011). If progress in learning does not match a learning goal, a highly self-regulated learner will use what has been gained from reflection to make changes in learning activities and strategy use (Zimmerman, 1998), potentially leading to gains in goal attainment and academic achievement (Schunk & Greene, 2018).
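As a rough illustration only, the short Python sketch below sequences prompts of the kinds listed in Table 5 across the forethought, performance, and self-reflection phases. The prompt wording and the progress-based selection rule are invented assumptions, not prescriptions from the theory; a real system would draw on richer learner signals.

```python
# Hypothetical sketch: sequencing self-regulation prompts by phase, following the
# forethought -> performance -> self-reflection cycle summarized in Table 5.
# The prompt wording and the simple selection logic are illustrative assumptions.
from enum import Enum, auto


class Phase(Enum):
    FORETHOUGHT = auto()      # plan learning, set goals, sustain motivation
    PERFORMANCE = auto()      # apply learning strategies, work with content
    SELF_REFLECTION = auto()  # evaluate progress against goals


# Resources-management, cognitive, and metacognitive prompts keyed to their phase.
PROMPTS = {
    Phase.FORETHOUGHT: [
        "What do you want to be able to do after this activity?",
        "Do you have the materials (and people) you need?",
    ],
    Phase.PERFORMANCE: [
        "What do you already know that relates to this step?",
        "Summarize what you just did in one sentence.",
    ],
    Phase.SELF_REFLECTION: [
        "How well did you meet the goal you set at the start?",
        "What would you change about your approach next time?",
    ],
}


def prompts_for(progress: float) -> list:
    """Return prompts for the phase implied by how far the learner has progressed
    (0.0 = not started, 1.0 = finished)."""
    if progress <= 0.0:
        return PROMPTS[Phase.FORETHOUGHT]
    if progress < 1.0:
        return PROMPTS[Phase.PERFORMANCE]
    return PROMPTS[Phase.SELF_REFLECTION]


if __name__ == "__main__":
    for point in (0.0, 0.5, 1.0):
        print(point, prompts_for(point))
```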

6.4 Summary

Each of the theories described above has empirical evidence suggesting that purposeful learner–content interactions can lead to deeper content learning through prompted content manipulation (acting), thinking, and reflecting (Koszalka, 2016b). Commonalities across these theories suggest that learner-centric design, activating previous knowledge, active participation through physical and cognitive engagement, demonstrating content in multiple formats, encouraging meaning making, and reflecting on content from multiple perspectives are important to deep content learning.

Technology features, when well designed and integrated, can support and facilitate all of these learner–content interactions and help learners achieve deeper learning. Thus, these commonalities can help define guidelines for integrating interactive technology features in deep learning resources and environments. See Table 6.
Table 6

Commonalities among theories

Generative learning (Wittrock, 1974)
• Definition: Learners actively generate new knowledge by mentally forming labeled relationships and connections between new information and prior knowledge/experiences
• Learning interactions*: learner centric—toward content understanding; simultaneous physical and cognitive engagement with content (active); connect existing knowledge to new information; encourage generating meaningful connections

Cognitive flexibility (Spiro et al., 1988, 1992)
• Definition: Learners develop, change, or adapt their content perspective based on engaging in new or complex situations that provide rich information in multiple formats and flexible choices during learning interactions
• Learning interactions*: learner centric—toward content understanding; provide multiple dimensions, perspectives, and rich representations of content; flexible interactions and interconnections across knowledge/content (active); prompt thinking of multiple perspectives on, and representations of, content

Reflection (Zimmerman, 2002)
• Definition: Learners transform experience into deep understanding by thinking continuously about connections they make with content across previous, current, and potential future interactions
• Learning interactions*: learner centric—toward content understanding; prompt self-reflection of observations/experiences to learning goals; encourage testing concepts in new situations and contexts (active)

*Note All theories suggest deeper levels of cognitive processing based on activating previous knowledge, prompting physical and cognitive interactions, and tapping into self-regulated learning mechanisms (e.g., personal goals, motivation)

7 Supportive Guidelines for Creating Interactive Technology-Based Learning Resources and Learning Environments

Literature both supports technology uses that enhance learner–content interactions and offers cautions on the overuse (too much), misaligned use (poorly designed), and even lack of use (missed opportunity) of appropriate interactive technology features. Learner–content interactions are important in supporting deep learning and technology-based features can prompt and possibly strengthen content manipulation (interacting), thinking (engagement), and reflecting.

This proposed set of guidelines brings together trends in interactive technology research, theories of deep learning, and design principles in an effort to guide design of digital learning resources and learning environments. The goal is to prompt learner–content interactions in ways that support deep content learning.

The guidelines are presented in four focus areas: general, interacting physically with content, engaging in thinking about content, and reflecting on content learning. Each of the four focus areas has 3–6 specific instructional design guidelines and offers several examples of features or activities that may support deep learning in technology-based resources or environments.

The guidelines are flexible in that they can be used to create or enhance static and dynamic resources. They can also be consulted when creating new interactive technology-based learning resources and learning environments and when transforming existing digital resources and environments into learning resources and learning environments. See Table 7.
Table 7

Guidelines for learner–content interactions to enhance deep learning

General
Guidelines*:
• Learner centric content focus (a, b, c)
• Define content and learning (cognitive, affective, psychomotor; levels of learning low-to-high) (a, b, c)
• Create interest in content (a, b, c)
Example features or activities:
• Interactive interface and features—show examples, stories, applications
• Inviting 2D or 3D interface—content focus
• Surprise, motion—elicit attention, action, thinking
• Multiple content views

Interact physically with content
Guidelines*:
• Create purposeful interaction (a)
• Provide varied interactions (a, b)
• Provide choices on how and when to interact, with whom (a, b)
• Encourage content exploration (a, b)
Example features or activities:
• Prompts to explore and act
• Various interaction types
• Multiple pathways and options
• Social networking

Engage in thinking about content
Guidelines*:
• Prompt summarizing, organizing (a) (low-level thinking)
• Prompt synthesizing, predicting (a) (high-level thinking)
• Provide multiple representations (b)
• Prompt thoughtful practice interactions with content (a, b, c)
• Prompt thinking about the what’s and why’s of physical interactions—what am I doing, what am I learning, what can I do with this knowledge, and why? (a, b, c)
Example features or activities:
• Multisensory content—image, sound, tactile
• Context-sensitive feedback, questioning
• Prompts to hypothesize, test, check—social
• Periodic hints, summaries
• Variety of summaries—graphics, charts, audio
• Multiple assessment types at multiple levels
• Auto responses, triggers to summarize content

Reflect on content learning
Guidelines*:
• Prompt self-assessment (a, b, c)
• Prompt reflection on how content can be used now and in the future, in the learner’s world (b, c)
• Prompt reflection on understanding (c)
• Prompt goal and expectation setting (c)
• Prompt reflection on feelings (c)
• Prompt reflection on meeting goals (c)
Example features or activities:
• Goal setting prompts
• Intermittent questions on content applications, learning, feelings
• Achievement level self-check, progress feedback
• Individualized and context-sensitive feedback, questioning

*Note (a) Generative learning; (b) Cognitive flexibility; (c) Reflection
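One way these focus areas might be operationalized during a design review is sketched below in Python. The focus-area labels follow Table 7, while the scoring rule and the example feature tags are purely illustrative assumptions rather than a prescribed procedure.

```python
# Hypothetical sketch: turning the Table 7 focus areas into a simple design-review
# checklist. Focus-area names come from the table; the flagging rule and the example
# feature tags are illustrative assumptions.
FOCUS_AREAS = (
    "general",   # learner-centric content focus, defined learning, interest
    "interact",  # purposeful, varied physical interactions with content
    "engage",    # prompts for summarizing, synthesizing, multiple representations
    "reflect",   # self-assessment, goal setting, reflection on understanding
)


def review(planned_features: dict) -> list:
    """Return the focus areas with no planned features, flagging likely gaps
    in the learner-content interaction design."""
    return [area for area in FOCUS_AREAS if not planned_features.get(area)]


if __name__ == "__main__":
    plan = {
        "general": ["inviting 3-D interface", "multiple content views"],
        "interact": ["multiple pathways", "explore-action prompts"],
        "engage": ["context-sensitive questioning"],
        "reflect": [],  # nothing planned yet -> flagged below
    }
    print("Focus areas needing attention:", review(plan))
```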

8 Transforming Static and Dynamic Resources into Learning Resources

These guidelines can be used to transform resources into learning resources through purposeful learner–content interactions. Ideally, these guidelines are used in the initial design and development phases when creating new experiences. Incorporating context-sensitive prompts, access to multiple views of content (e.g., text, graphics, motion, still), providing thinking prompts (e.g., what happened? Why is this important? What is next?), and offering help can instigate physical activity, thinking, and reflection.

These types of features can ensure that learner–content interactions stand out in the learning resource and are well and consistently designed from the start. It is more efficient to design learner–content interactions early in the development process than to make major adjustments later.

The initial steps of any complex development process are to define what the resource or environment will “do” and what the users will “accomplish.” The guidelines lend themselves to thinking about how to use available technology features to prompt deep learning (what the learner will accomplish). Revisiting them during development reviews can help maintain consistency in learner–content interactions that will support deep learning.

For example, when creating a simulation or augmented reality world on identifying business problems or working with chemical reactions, identifying the features in the technology platform (e.g., content overlays, feedback, prompts) and deciding how they will be used to support interacting, thinking, and reflecting will help frame the learner–content interactions. Establishing these decisions early in the design process helps avoid major revisions later in the development process. These decisions also ensure that the focus on content interaction, thinking, and reflecting is maintained in the environment, throughout the development process, and when the resources are being used by learners. Establishing guidelines based on theory and supported by rigorous research studies can foster the creation of educational technologies that promote deep learning.

However, sometimes existing digital resources and environments are used that are not editable. This does not discount the possibility of transforming them into learning resources aimed at prompting deep learning. Existing digital resources, whether static (e.g., a pdf) or dynamic (e.g., an interactive simulation), can be transformed into content learning resources by adding supporting instructional materials, used side-by-side with the existing resource, that prompt the interacting, thinking, and reflecting the existing resource does not.

For example, learners may have access to digital information sources like pdf articles or web-based simulations to learn content. These content sources already exist. Reviewing these types of resources may lead to deep learning; however, their use often results in surface level learning (Laird et al., 2005). Transforming these types of resources into learning resources may be accomplished by adding prompting activities that focus learner–content interactions on the features of the original digital resource.

Two examples are provided to demonstrate how existing resources and the development of new learning resources might incorporate the guidelines of good design for instructional technology and the proposed guidelines for enhancing learner–content interactions. The static and dynamic resource transformations also highlight links to generative learning theory (GLT), cognitive-flexibility theory (CFT), and reflection theories (RT) which have been shown to enhance deep learning.

8.1 Transforming Static Resources

Consider learners provided with a pdf article that describes the anatomy of a plant cell. They are prompted to read the article and be able to describe all the key parts of a cell on a test. With no further prompting, learners may take a variety of approaches to learning that may include printing and highlighting the file, taking notes, creating flash cards, or some other memorization-type activity. These interactions are helpful in achieving surface level learning but likely yield only short-term retention.

Considering the guidelines above, learners might be prompted to use pdf software features to highlight cell organelle names in blue (on the pdf), underline organelle descriptions in blue, underline descriptions of organelle functions in black, highlight functions of organelles that relate to other organelles by drawing highlighted lines between the organelles in yellow, and so on. These interactions require learners to manipulate and organize content, think about what the text is saying, and make decisions about what to highlight and underline (GLT/CF). These are deep thinking activities. Learners could be further prompted to use text editing software (e.g., Word) to create a table demonstrating their knowledge of the content and classifying each organelle according to its main function (GLT). They may be prompted in another way, perhaps to use concept-mapping software to draw a map of their understanding of the connections among organelles (GLT), incorporate graphics of the organelles (CF), and reflect and write about how well they understand the anatomy of the plant cell (RT). These types of learner–content interactions help learners generate new concepts and relationships in their existing schema, adding to their depth of knowledge.

Another option might be to prompt the learner to create a short instructional presentation on the anatomy of a plant cell. Provided guidelines may request that graphics, animations, narration, and progressive disclosure be added to the presentation and that each slide include some type of probing question about the slide content to prompt thinking in the audience. Thus, learners creating the presentation are engaged in higher level thinking, applying what they learned, analyzing the importance of information, and creating new and multiple representations of the content (GLT, CF, RT) (Anderson et al., 2001).

The goal is not to recreate the pdf content to make it more interactive; rather, the goal is to prompt learner–content interaction, thinking, and reflecting. The learner makes the choice of how to interact and study with the pdf file. The suggested uses of technology features beyond reading scaffold learners to move from surface level strategies to deep level thinking.

8.2 Transforming Dynamic Resources

The same type of transformation can occur with existing dynamic digital resources. For example, consider an interactive simulation about cells, where the learner can click on the cell organelles to get information on each and manipulate a 3-D view of the cell anatomy. Explanations are offered about how the organelles function, and the graphics show what they look like within the cell. This is similar to an augmented reality environment in that the text or audio features overlay the view of the actual cells and are revealed when activated by the learner. Since this simulation is already created, there may be no way to add prompting inside the simulation to support learning for specific purposes. However, an additional learning resource may be created to prompt deeper learning rather than just interacting (playing) with the simulation.

Similar to those described above, learning resources can be created to prompt learners to play with the simulation and simultaneously create a table, a concept map, or a presentation of the organelles, their characteristics, and their functions (GLT/CF). Reflection questions can be added to prompt learners to reflect on how they understand the content, how helpful it was for them to view the cell from multiple perspectives, and how they might use this information in the future (RT). Supporting learner–content interactions based on the proposed guidelines may enhance deeper learning.
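For instance, a simple companion resource of the kind just described might be generated as in the following Python sketch. The task list and reflection questions are invented for illustration and would be tailored to the actual simulation and learning goals.

```python
# Hypothetical sketch: a companion activity generator for an existing (non-editable)
# cell simulation, pairing interaction tasks with reflection questions as suggested
# above. The task and question wording are illustrative assumptions.
COMPANION_ACTIVITIES = [
    ("Explore each organelle in the 3-D view",
     "Which organelle surprised you, and why?"),
    ("Build a table of organelles, their characteristics, and their functions",
     "How do the functions you recorded depend on one another?"),
    ("Draw a concept map linking organelles to the processes they support",
     "How well do you understand the cell's anatomy now? What is still unclear?"),
]


def print_worksheet(title: str) -> None:
    """Print a simple side-by-side worksheet learners use while running the simulation."""
    print(title)
    for step, (task, reflection) in enumerate(COMPANION_ACTIVITIES, start=1):
        print(f"{step}. Do: {task}")
        print(f"   Reflect: {reflection}")


if __name__ == "__main__":
    print_worksheet("Companion worksheet: plant cell simulation (hypothetical)")
```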

The challenge with existing digital resources and environments is to create supporting learning activities and resources that help learners focus on content in ways that will activate deeper thinking. Simply suggesting ways to interact, think, and reflect while using provided static or dynamic resources may move learners from choosing surface level approaches to choosing deep learning approaches. Research to better understand the effect and extent to which learning resources support deep learning processing is ongoing.

8.3 Summary

Although the proposed design guidelines presented here are based in theories that have decades of supporting research behind them, much of the cited research focused on a specific technology (e.g., simulation, interactive video, concept mapping), content domain (e.g., mathematics, reading), or age level (e.g., young students, adults), and generally consisted of multiple short-term studies of a single technology to test theoretical ideas. Additional research is needed to validate the proposed design guidelines for using specified types of technology features across different technologies, with different audiences, and in a variety of content domains to promote deeper learning.

It is also important to identify and further understand the types of learner behaviors exhibited while interacting with different types of technologies, specifically looking for those behaviors that indicate deep content learning, ultimately validating the assumption that generative learning, cognitive flexibility, and reflection theories can indeed guide the use of technology features to enhance deep learning. There is a need for longitudinal research across content domains, technologies, and static and dynamic technology resources to extract common design principles that may provide valid ideas for enhancing technology feature use across different technologies (e.g., simulations, VR).

It will also be important to examine the long-term effects different technology features may have on promoting or inhibiting deep learning, and to examine when highly immersive (more expensive and resource-intensive) technologies are better or worse at supporting content learning than lower level technology resources (less expensive and resource-intensive). These types of research agendas may help unpack the complexities of learning through technology interactions (physical manipulation) and learning based on content engagement (cognitive manipulation).

9 Summary

Interactive technologies are inundating learning activities. The multiple features offered by technologies give learners options to interact with, engage in, and reflect on learning content. New technologies offer more exciting and contextualized resources and environments than ever before. The questions explored in this chapter concerned how to design effective learner–content interactions: how to transform resources and environments into learning resources and learning environments by assuring that they encourage interaction WITH engagement in thinking and reflection. There is no guarantee that highly immersive, or low technology-enhanced, experiences are going to be better or worse at supporting deep learning. Theory and research can inform characteristics of technology uses that are predictive of deep learning. By combining what is theorized about the mechanisms of deep learning, design principles, and appropriate technology features, a set of guidelines has been proposed to support the design of learner–content interactions. Further research is necessary to validate these guidelines and their application to a variety of emerging interactive technologies and content applications.

The goal is to take a strategic perspective in incorporating what is known about learning when creating or transforming resources to support deep learning. We likely have not yet fully explored what technologies can do to attract and lull learners into deep thinking and how to avoid designs and interactions that distract and inhibit learning. Learners are complex beings who must choose to think during the learning process. Helping learners make this choice is a complex idea; however, the process can be informed by research.

Regardless of the technology of choice, one of the most critical interactions in the learning process is the learner–content connection. Focusing on that interaction goes far in designing purposeful instruction.

Footnotes

  1. Note: This review is not intended to be a full analysis of all recent research. Rather, it is a starting point in unpacking relationships among technologies and learning.

References

  1. Akdemir, O., & Koszalka, T. (2008). Investigating the relationships among instructional strategies and learning styles in online environments. Computers & Education, 50(4), 1451–1461.
  2. Alimisis, D. (2016). Robotics in education & education in robotics: Shifting focus from technology to pedagogy. Paper presented at the 3rd International Conference on Robotics in Education, Prague (pp. 7–14).
  3. Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., …, & Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Pearson, Allyn & Bacon.
  4. Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes in reading comprehension. In P. D. Pearson (Ed.), Handbook of reading research (pp. 255–311). Mahwah, NJ: Lawrence Erlbaum Associates.
  5. Anderson, T. (2003). Getting the mix right again: An updated and theoretical rationale for interaction. International Review of Research in Open and Distance Learning, 4(2), 9–14.
  6. Axelson, R. D., & Flick, A. (2010). Defining student engagement. Change: The Magazine of Higher Learning, 43(1), 38–43. https://doi.org/10.1080/00091383.2011.533096
  7. Bannert, M. (2006). Effects of reflection prompts when learning with hypermedia. Journal of Educational Computing Research, 35(4), 359–375. https://doi.org/10.2190/94V6-R58H-3367-G388
  8. Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79(3), 1243–1289.
  9. Biggs, J. B. (1987). Student approaches to learning and studying. Melbourne: Australian Council for Educational Research.
  10. Biggs, J. B. (1989). Approaches to the enhancement of tertiary teaching. Higher Education Research and Development, 8, 7–25.
  11. Biggs, J. B. (2003). Teaching for quality learning at university: What the student does. Buckingham: Open University Press.
  12. Bloom, B. S. (1956). Taxonomy of educational objectives. Vol. 1: Cognitive domain (pp. 20–24). New York: McKay.
  13. Bowden, J., & Marton, F. (1998). The university of learning: Beyond quality and competence. London: Kogan Page.
  14. Chattopadhyay, S., Gangadhar, R. B., Shankar, S., & Kasinathan, K. (2018). Applications of artificial intelligence in assessment for learning in schools. In J. Keengwe (Ed.), Handbook of research on digital content, mobile learning, and technology integration models in teacher education (pp. 185–206). Hershey, PA: IGI Global.
  15. Cho, T. (2011). The impact of types of interaction on student satisfaction in online courses. International Journal on E-Learning, 10(2), 109–125.
  16. Cook-Benjamin, L. (2018). Best practices to support online student engagement. In J. Keengwe (Ed.), Handbook of research on digital content, mobile learning, and technology integration models in teacher education (pp. 287–299). Hershey, PA: IGI Global.
  17. Cummings, C., Shelton, K., Mason, D., & Baur, K. (2015). Active learning strategies for online and blended learning environments. In J. Keengwe & J. J. Agamba (Eds.), Models for improving and optimizing blended learning in higher education (pp. 58–82). Hershey, PA: IGI Global.
  18. Curton, B., & Moreno, V. (2016). Robotics in education. Journal of Intelligent Robot Systems, 81, 3–4.
  19. Dave, R. H. (1971). Developing and writing behavioral objectives. Tucson, AZ: Educational Innovators Press.
  20. Dennis, J. P., & Vander Wal, J. S. (2010). The cognitive flexibility inventory: Instrument development and estimates of reliability and validity. Cognitive Therapy and Research, 34(3), 241–253.
  21. Dick, W., & Carey, L. (1978). The systematic design of instruction. Glenview, IL: Scott, Foresman and Company.
  22. Dondlinger, M. J. (2007). Educational video game design: A review of the literature. Journal of Applied Educational Technology, 4(1), 21–31.
  23. Duschl, R. (2003). Assessment of inquiry. In M. Atkin & J. E. Coffey (Eds.), Everyday assessment in the science classroom (pp. 41–59). Arlington, VA: NSTA Press.
  24. Ekwunife-Orakwue, K., & Teng, T. (2014). The impact of transactional distance dialogic interactions on student learning outcomes in online and blended environments. Computers & Education, 78, 414–427. https://doi.org/10.1016/j.compedu.2014.06.011
  25. Elder, L., & Paul, R. (2009). Critical thinking: Concepts and tools (6th ed.). Foundation for Critical Thinking Press.
  26. Entwistle, N. J. (1981). Styles of learning and teaching: An integrated outline of educational psychology for students, teachers, and lecturers. New York: Wiley.
  27. Gagné, R. M. (1985). The conditions of learning and theory of instruction. New York: Holt, Rinehart and Winston.
  28. Gagné, R. M., & Briggs, L. (1979). Principles of instructional design. New York: Holt, Rinehart and Winston.
  29. Gagné, R. M., & Briggs, L. J. (1996). Principles of instructional design. New York: Holt, Rinehart and Winston.
  30. Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. The American Journal of Distance Education, 15(1), 7–23.
  31. Garrison, D. R., & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: Interaction is not enough. American Journal of Distance Education, 19(3), 133–148. https://doi.org/10.1207/s15389286ajde1903_2
  32. Gilbert, L., & Moore, D. (1998). Building interactivity in Web courses: Tools for social and instructional interaction. Educational Technology, 38, 29–35.
  33. Girvan, C. (2018). What is a virtual world? Definition and classification. Educational Technology Research and Development, 1–14. https://doi.org/10.1007/s11423-018-9577-y
  34. Grabowski, B. L. (2004). Generative learning contributions to the design of instruction and learning. In D. H. Jonassen & Association for Educational Communications and Technology (Eds.), Handbook of research on educational communications and technology (2nd ed., pp. 719–743). Mahwah, NJ: Lawrence Erlbaum.
  35. Grabowski, B., & Small, R. (1997). Information, instruction, and learning: A hypermedia perspective. Performance and Improvement Quarterly, 10, 156–166.
  36. Harvey, M., Coulson, D., & McMaugh, A. (2016). Toward a theory of the ecology of reflection: Reflective practice for experiential learning in higher education. Journal of University Teaching and Learning Practice, 13(2), 1–20.
  37. Hoppe, U., Verdejo, M. F., & Kay, J. (2003). Artificial intelligence in education: Shaping the future of learning through intelligent technologies. Amsterdam, Netherlands: IOS Press.
  38. Jonassen, D., Campbell, J., & Davidson, M. (1994). Learning with media: Restructuring the debate. Educational Technology Research and Development, 42(2), 31–39. Retrieved from http://www.jstor.org/stable/30218685.
  39. Kearsley, G. (1997). A guide to online education. Retrieved from http://gwis.circ.gwu.edu/~etl/online.html.
  40. Keengwe, J., Onchwari, G., & Wachira, P. (2008). Computer technology integration and student learning: Barriers and promise. Journal of Science Education and Technology, 17, 560–565. https://doi.org/10.1007/s10956-008-9123-5
  41. Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284. https://doi.org/10.1037/0033-2909.119.2.254
  42. Koehler, M., & Mishra, P. (2009). What is technological pedagogical content knowledge (TPACK)? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70. Waynesville, NC: Society for Information Technology & Teacher Education. Retrieved from https://www.learntechlib.org/primary/p/29544/.
  43. Kolb, A., & Kolb, D. (2012). Experiential learning theory. In N. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 1215–1219). U.S.: Springer. https://doi.org/10.1007/978-1-4419-1428-6_227
  44. Koszalka, T. A. (2016a). What is RIDLR [Research in designing learning resources] all about? [Narrated video]. Retrieved from http://ridlr.syr.edu/about/.
  45. Koszalka, T. A. (2016b). Reflection and its application to learning resources [Concept paper]. Retrieved from http://ridlr.syr.edu/publications/.
  46. Koszalka, T. A., & Ganesan, R. (2004). Designing online courses: A taxonomy to guide strategic uses of features available in distance education course management systems (CMS). Distance Education, 25(2), 243–256.
  47. Koszalka, T., & Ntloedibe-Kuswani, G. S. (2010). Literature on the safe and disruptive learning potential of mobile technologies. Distance Education, 31(2), 139–150.
  48. Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of educational objectives, handbook II: Affective domain. New York: David McKay Company.
  49. Laird, T. F., Shoup, R., & Kuh, G. (2005). Measuring deep approaches to learning using the National Survey of Student Engagement. Paper presented at the Annual Meeting of the Association for Institutional Research, Chicago, IL.
  50. Lamb, R. L., Annetta, L., Firestone, J., & Etopip, E. (2018). A meta-analysis with examinations of moderators of student cognition, affect, and learning outcomes while using serious educational games, serious games, and simulations. Computers in Human Behavior, 80, 158–167.
  51. Li, Y., & Wang, L. (2018). Using iPad-based mobile learning to teach creative engineering within a problem-based learning pedagogy. Education and Information Technologies, 23, 555–568.
  52. Lin, X., & Lehman, J. D. (1999). Supporting learning of variable control in a computer-based biology environment: Effects of prompting college students to reflect on their own thinking. Journal of Research in Science Teaching, 36(7), 837–858.
  53. Littlejohn, A. (2003). Using online resources: A sustainable approach to e-learning. Great Britain: Kogan Page Limited.
  54. Liu, J. C., & Kaye, E. R. (2016). Preparing online learning readiness with learner–content interactions: Design for scaffolding self-regulated learning. In L. Kyei-Blankson, J. Blankson, E. Ntuli, & C. Agyeman (Eds.), Handbook of research on strategic management of interaction, presence, and participation in online courses (pp. 216–244). Hershey, PA: IGI Global.
  55. Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney-Kennicutt, W., & Davis, T. J. (2014). Effectiveness of virtual reality-based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis. Computers & Education, 70, 29–40. https://doi.org/10.1016/j.compedu.2013.07.033
  56. Merriam, S. B., Caffarella, R. S., & Baumgartner, L. (2007). Learning in adulthood: A comprehensive guide (3rd ed.). San Francisco, CA: Jossey-Bass.
  57. Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and Development, 50(3), 43–59.
  58. Mislevy, R. J. (2013). Evidence-centered design for simulation-based assessment [Supplemental material]. Military Medicine, 178(10), 107–114. https://doi.org/10.7205/milmed-d-13-00213. PMID: 24084311.
  59. Moos, D. C., & Bonde, C. (2016). Flipping the classroom: Embedding self-regulated learning prompts in videos. Technology, Knowledge, and Learning, 21(2), 225–242. https://doi.org/10.1007/s10758-015-9269-1
  60. Munene, I. I., Darby, F., & Doherty, J. (2015). Blended for student engagement and retention: The case of cinema and visual culture and healthy lifestyle studies. In J. Keengwe & J. J. Agamba (Eds.), Models for improving and optimizing blended learning in higher education (pp. 129–146). Hershey, PA: IGI Global.
  61. Murray, M., Pérez, J., Geist, D., & Hedrick, A. (2013). Student interaction with content in online and hybrid courses: Leading horses to the proverbial water. Informing Science, 16, 99–115.
  62. Oigara, J. N. (2018). Integrating virtual reality tools into classroom instruction. In J. Keengwe (Ed.), Handbook of research on mobile technology, constructivism, and meaningful learning (pp. 147–159). Hershey, PA: IGI Global.
  63. Oliver, R. (1999). Exploring strategies for online teaching and learning. Distance Education, 20, 240–250.
  64. Ormrod, J. E. (2016). Human learning (7th ed.). Upper Saddle River, NJ: Pearson Education.
  65. Passey, D., & Hobrecht, P. (2001). Online resources and effective teaching and learning. Education, 29(1), 3–13. https://doi.org/10.1080/03004270185200021
  66. Paul, R., & Elder, L. (2007). A guide for educators to critical thinking competency standards. Foundation for Critical Thinking Press.
  67. Poll, K., Widen, J., & Weller, S. (2014). Six instructional best practices for online engagement and retention. Journal of Online Doctoral Education, 1(1), 1–17.
  68. Prosser, M., & Millar, R. (1989). The “how” and “why” of learning physics. European Journal of Psychology of Education, 4, 513–528.
  69. Ramsden, P. (2003). Learning to teach in higher education. London: Routledge Falmer.
  70. Reeves, T. C., Herrington, J., & Oliver, R. (2005). Design research: A socially responsible approach to instructional technology research in higher education. Journal of Computing in Higher Education, 16(2), 96–115. https://doi.org/10.1007/BF02961476
  71. Reiser, R. A. (2001a). A history of instructional design and technology: Part I: A history of instructional media. Educational Technology Research and Development, 49(1), 53–64. https://doi.org/10.1007/BF02504506
  72. Reiser, R. A. (2001b). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology Research and Development, 49(2), 57–67. https://doi.org/10.1007/BF02504928
  73. Rossing, J. P., Miller, W. M., Cecil, A. K., & Stamper, S. E. (2012). iLearning: The future of higher education? Student perceptions on learning with mobile tablets. Journal of the Scholarship of Teaching and Learning, 12(2), 1–26.
  74. Rovai, A. P. (2002). Sense of community, perceived cognitive learning, and persistence in asynchronous learning networks. The Internet and Higher Education, 5(4), 319–332.
  75. Rufai, M. M., Alebiosu, S. O., & Adeakin, O. A. S. (2015). A conceptual model for virtual classroom management. International Journal of Computer Science, Engineering and Information Technology, 5(1), 27–32.
  76. Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 33–58). Hillsdale, NJ: Erlbaum.
  77. Rumelhart, D. E., & Norman, D. A. (1978). Accretion, tuning, and restructuring: Three modes of learning. In J. W. Cotton & R. L. Klatzky (Eds.), Semantic factors in cognition (pp. 37–53). Hillsdale, NJ: Lawrence Erlbaum Associates.
  78. Rumelhart, D. E., & Norman, D. A. (1981). Analogical processes in learning. In J. R. Anderson (Ed.), Cognitive skills and their acquisition (pp. 335–359). Hillsdale, NJ: Lawrence Erlbaum Associates.
  79. Sampson, P., Leonard, J., Ballenger, J., & Coleman, J. (2010). Student satisfaction of online courses for educational leadership. Online Journal of Distance Learning Administration, 13(3). Retrieved from http://www.westga.edu/~distance/ojdla/Fall133/sampson_ballenger133.html.
  80. Sato, M. (2003). Working with teachers in assessment-related professional development. In J. M. Atkin & J. E. Coffey (Eds.), Everyday assessment in the science classroom. Arlington, VA: NSTA Press.
  81. Sauve, L., Renaud, L., Kaufman, D., & Marquis, S. (2007). Distinguishing between games and simulations: A systematic review. Journal of Educational Technology & Society, 10(3), 247–256.
  82. Schön, D. A. (1983). The reflective practitioner. USA: Basic Books.
  83. Schunk, D. H., & Greene, J. A. (2018). Historical, contemporary, and future perspectives on self-regulated learning and performance. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulated learning and performance (2nd ed., pp. 1–15). New York: Routledge.
  84. Shea, P., Li, C. S., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and web-enhanced college courses. The Internet and Higher Education, 9(3), 175–190. https://doi.org/10.1016/j.iheduc.2006.06.005
  85. Sousa, D. (2005). How the brain learns to read. Thousand Oaks, CA: Corwin Press.
  86. Spensley, F., & Taylor, J. (1999). The development of cognitive flexibility: Evidence from children’s drawings. Human Development, 42(6), 300–324.
  87. Spiro, R. J., Collins, B. P., Thota, J. J., & Feltovich, P. J. (2003). Cognitive flexibility theory: Hypermedia for complex learning, adaptive knowledge application, and experience acceleration. Educational Technology, 43(5), 5–10.
  88. Spiro, R. J., Coulson, R. L., Feltovich, P. J., & Anderson, D. K. (1988). Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains (Technical Report No. 441).
  89. Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1992). Cognitive flexibility, constructivism and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. In T. Duffy & D. Jonassen (Eds.), Constructivism and the technology of instruction. Hillsdale, NJ: Erlbaum.
  90. Spiro, R. J., & Jehng, J.-C. (1990). Cognitive flexibility and hypertext: Theory and technology for the nonlinear and multidimensional traversal of complex subject matter. In D. Nix & R. Spiro (Eds.), Cognition, education, and multimedia: Exploring ideas in high technology (pp. 163–205). Hillsdale, NJ: Lawrence Erlbaum Associates.
  91. Tagg, J. (2003). The learning paradigm college. Boston, MA: Anker.
  92. Turkan, Y., Radkowski, R., Karabulut-Ilgu, A., Behzadan, A. H., & Chen, A. (2017). Mobile augmented reality for teaching structural analysis. Advanced Engineering Informatics, 34, 90–100.
  93. van Krevelen, D. W. F., & Poelman, R. (2010). A survey of augmented reality technologies, applications and limitations. International Journal of Virtual Reality, 9(2), 1–10.
  94. Van Rossum, E. J., & Schenk, S. M. (1984). The relationship between learning conception, study strategy and learning outcome. British Journal of Educational Psychology, 54, 73–83.
  95. Vos, N., van der Meijden, H., & Denessen, E. (2011). Effects of constructing versus playing an educational game on student motivation and deep learning strategy use. Computers & Education, 56(1), 127–137. https://doi.org/10.1016/j.compedu.2010.08.013
  96. Wells, G. (1999). Dialogic inquiry. Cambridge: Cambridge University Press.
  97. Wilhelm-Chapin, M. K., & Koszalka, T. A. (2016). Generative learning theory and its application to learning resources [Concept paper]. Retrieved from http://ridlr.syr.edu/publications/.
  98. Wittrock, M. C. (1974). Learning as a generative process. Educational Psychologist, 11, 87–95.
  99. Wittrock, M. C. (1992). Generative learning processes of the brain. Educational Psychologist, 27(4), 531–541. https://doi.org/10.1207/s15326985ep2704_8
  100. Zimmerman, B. J. (1998). Developing self-fulfilling cycles of academic regulation: An analysis of exemplary instructional models. In D. H. Schunk & B. J. Zimmerman (Eds.), Self-regulated learning: From teaching to self-reflective practice (pp. 1–19). New York, NY: Guilford Publications.
  101. Zimmerman, B. J. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (pp. 1–37). Mahwah, NJ: Lawrence Erlbaum Associates.
  102. Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2
  103. Zimmerman, B. J. (2011). Motivational sources and outcomes of self-regulated learning and performance. In B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance. New York: Routledge.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Tiffany A. Koszalka (1) (Email author)
  • Mary K. Wilhelm-Chapin (2)
  • Christopher D. Hromalik (3)
  • Yuri Pavlov (1)
  • Lili Zhang (1)

  1. Instructional Design, Development and Evaluation, Syracuse University, Syracuse, USA
  2. State University of New York (SUNY) Cortland, Cortland, USA
  3. World Languages Department, Onondaga Community College, Syracuse, USA
