TechTrends


How Useful are our Models? Pre-Service and Practicing Teacher Evaluations of Technology Integration Models

Original Paper

Abstract

We report on a survey of K-12 teachers and teacher candidates wherein participants evaluated known models (e.g., TPACK, SAMR, RAT, TIP) and provided insight on what makes a model valuable for them in the classroom. Results indicated that: (1) technology integration should be coupled with good theory to be effective, (2) classroom experience did not generally influence teacher values and beliefs related to technology integration, (3) some models may be more useful to teachers than others, (4) the widespread use of a model does not necessarily reflect usefulness, (5) useful models for teachers should engender real-world, concrete application, and (6) visual appeal of a model is largely subjective, but some visual representations might convey notions of practicality. Conclusions should be used to help researchers and practitioners understand the practical application value of technology integration models in real-world settings.

Keywords

Technology integration · Theoretical models · TPACK · SAMR · RAT · TIP

Theoretical models are widely used in teacher preparation and educational research for guiding and understanding the process of technology integration. Though as a field we recognize the value of theoretical constructs, they are very diverse and seem to be adopted in an uncritical, tribalistic (Kimmons 2015; Kimmons and Hall 2016a, b), or anarchic (Feyerabend 1975) manner. Some theoretical models have seen greater diffusion than others and have become more prominent. For instance, TPACK has become very popular among educational researchers, and SAMR has become very popular among practitioners, but it is not clear what such diffusion is based upon, what characteristics of these theoretical models make them appealing to different groups, and how models should be adopted, adapted, and critically evaluated in relation to one another. The educational technology field, like the educational field generally, exhibits a high degree of theoretical pluralism, but informed pluralism requires common understandings of how to approach options in a manner that recognizes the value and limitations of each and that favors the development of focused “local or phenomenological theories” appropriate for specialized contexts, rather than generalist theories of technology in education (Burkhardt and Schoenfeld 2003). Without such shared understandings and dialogue, pluralism in the field will continue to reflect tribalistic tendencies of adoption and advocacy for more generalist theoretical constructs that we are familiar with, while ignoring those that we are not.

Various technology integration models may be found dispersed throughout the literature, professional development settings, and teacher practice. Some prominent models include TPACK, SAMR, TIP, TIM, RAT, and TAM, and though acronyms for these models may in some cases be very similar, the concepts, approaches, values, and assumptions inherent to them can be quite diverse. With plenty of models to choose from, the problem is not that a useful model cannot be found but that deciding upon a model can be a complex task. Some of these models have emerged out of similar theoretical constructs, and there are many commonalities between them, some intentional and some unintentional. For example, RAT (Hughes 2005) builds upon the earlier work of Cuban (1988) and Pea (1985) in its conceptualization of technology integration as including instances of redefinition and transformation, while SAMR has almost identical constructs (Puentedura 2003). Similarly, TPACK (Mishra and Koehler 2007) builds off of Shulman’s concept of PCK, while TIP adds onto this by utilizing TPACK as a foundation (Roblyer and Doering 2013).

To progress as a field, educational technology needs to grapple with some of the hard issues emerging from the existence of a plurality of theoretical models. The fact that many exist, for instance, suggests that no single model may be universally valuable, understandable, or useful to all stakeholders. This is expected, because the field represents a wide diversity of stakeholders, including researchers, practitioners, policymakers, advocates, and administrators. Each of these groups represents diverse individuals, differing on personal values, ways of seeing the world, subject areas of interest, grade and age levels of intended learners, and paradigms of practice. It seems unrealistic to expect that a single theoretical construct could meet all of these needs, because “[n]o [model] ever solves all the problems it defines,” and “no two [models] leave all the same problems unsolved” (Kuhn 1996, p. 110). Yet, as a field, we have not established a systematic method for evaluating models for adoption, and it appears that models are typically chosen without considering other options or critically evaluating their usefulness (Kimmons 2015). This suggests that our efforts are not sufficiently critical or thoughtful and that rather than gaining theoretical robustness, the conceptual development of our models remains stagnant, and their usefulness for particular groups remains unclear.

As with other groups, teachers’ beliefs and values greatly influence their technology usage. Since teacher beliefs may have more influence on integration than does their knowledge (Kagan 1992; Pajares 1992), teachers need to be able to define what makes technology integration meaningful to them on an individual level for integration to occur (Becker 2000). Though these beliefs and values may change over time, studies suggest that teacher experience and age do not impact their willingness to adopt new technologies (Bebell et al. 2004; Smarkola 2008). Thus, we need to accurately understand teacher beliefs and values if we are to guide them in adopting and using theoretical models effectively, and theoretical constructs like technology integration models need to be subjected to critical scrutiny in this regard if they are to be taken seriously (cf., Willingham 2012).

Though this issue has been introduced in previous literature, and the strengths and limitations of particular models have been considered in a fledgling manner (Archambault and Barnett 2010; Archambault and Crippen 2009; Brantley-Dias and Ertmer 2013; Graham 2011; Kimmons 2015; Kimmons and Hall 2016b), this study seeks to take the next step in this process by considering what values direct a particular stakeholder group’s adoption of theoretical models and which models align with their values. This study focused on a relatively small group of preservice and practicing teachers (n = 129) who were required to integrate technology into their planning and teaching and sought to understand (1) what foundational considerations guided their views about theoretical models of technology integration generally, (2) how they valued specific theoretical constructs when they were looking to adopt a theoretical model, and (3) how existing models aligned with these values and considerations. Value criteria were developed from previous theoretical work in this area (Kimmons and Hall 2016a) and borrow heavily from the literature on theory development (Kuhn 2013).

Methods

This study utilized a one-time survey distribution method with preservice and in-service teachers, and data was analyzed quantitatively to determine relationships between provided constructs. The guiding research question for this study was: What types of technology integration models are valuable for preservice and inservice teachers? To meaningfully answer this research question, a series of secondary questions were asked as follows:
  1. What are participants’ attitudes toward theory and technology, and are there any differences based upon teaching experience?
  2. Which theoretical values are important for participants when adopting a technology integration model, and are there any differences based upon teaching experience?
  3. How well do specific models (TPACK, TIP, SAMR, & RAT) align with the theoretical values that are important to each participant?
  4. How is the visual appeal of a model influenced by participants’ attitudes toward technology and theory?

Data Collection

Data for this study was collected via a one-time online survey. To ensure face validity, survey items were collaboratively designed by a technology integration specialist and a technology integration researcher who had both been active in research and outreach efforts with teachers in this area for many years. Participants in this study were recruited from two main sources. First, preservice teachers at the university were invited to complete the survey after completing focused coursework on technology integration. Second, in-service teachers participating in technology integration trainings and research projects pertaining to technology integration were also invited. Because participants were recruited at face-to-face technology integration training sessions and were given time to complete surveys at the end of trainings, the response rate was very high (approximately 90%). In total, 129 educators participated in the study, and classroom experience demographic data was collected on 103 participants, of which 43% were pre-service or first-year teachers and 57% were in-service teachers. As such, this study utilized focused sampling to gather the perspectives of teachers who had explicitly received training on technology integration, and both pre-service and in-service teachers were surveyed so that researchers could compare results, considering whether teacher beliefs and values changed as a result of classroom experience. To answer the research questions, four types of data were collected: general attitudes toward theory and technology, beliefs regarding theoretical model values, evaluations of specific technology integration models, and visual appeal of technology integration model graphics. Each type of data is now explained in more detail.

Attitudes toward Theory and Technology

To establish participants’ general attitudes toward theoretical models and the role that technology should play in the educational process, a series of nine questions was asked, rated on a 5-point Likert agreement scale (Strongly Disagree to Strongly Agree), and results were normalized to a range of −1 to 1. The questions are available in the appendix. These items yielded a highly reliable Cronbach’s alpha of .81, suggesting that they reliably measure the same construct: the participant’s attitude toward theory and technology.

Theoretical Model Values

Based upon previous theoretical work in this area (Kimmons and Hall 2016a), a series of six theoretical model values was provided, and participants were asked how important each value was to them when considering adoption of a technology integration model. These values included the following: clarity, compatibility, fruitfulness, outcomes, role of technology, and scope. Descriptions of each item are available in the appendix. Participants were asked to evaluate each theoretical value on a 3-point Likert importance scale (Not Important, Somewhat Important, or Very Important), and values were normalized to a numeric scale of 0, 0.5, and 1, which will be explained later. As expected, results yielded a low Cronbach’s alpha of .41. This suggests that the theoretical values likely represent different constructs that may have varying subjective appeal to participants, thereby making them useful for understanding reasons for adoption. Thus, each theoretical value was treated as a separate construct in the statistical analysis.
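For concreteness, the two normalizations described above amount to simple lookup tables. The following Python sketch is our own illustration; the numeric anchors are inferred from the scales and the worked example reported later in this paper, and the dictionary names are hypothetical, not taken from the authors’ instruments.

```python
# Hypothetical encoding of the two Likert scales described above
# (an illustration of the reported normalizations, not the authors' code).

# 5-point agreement scale, normalized to the range -1 to 1
AGREEMENT = {
    "Strongly Disagree": -1.0,
    "Disagree": -0.5,
    "Neither Agree nor Disagree": 0.0,
    "Agree": 0.5,
    "Strongly Agree": 1.0,
}

# 3-point importance scale, normalized to 0, 0.5, and 1
IMPORTANCE = {
    "Not Important": 0.0,
    "Somewhat Important": 0.5,
    "Very Important": 1.0,
}
```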

Specific Models

In addition to the general attitudes toward theory and technology, and the theoretical model values described above, researchers also collected concrete evaluations of specific models from participants and connected these evaluations to each participant’s values. That is, researchers sought to understand (1) how valuable specific models were to participants and (2) why they were valuable.

This required three main steps. First, researchers provided participants with a list of four models commonly referenced in the institution and geographic region: TPACK, TIP, SAMR, and RAT. Participants were asked which (if any) of the four they were familiar with. Second, for each model with which a participant was familiar, we asked the participant to evaluate the model in accordance with the six theoretical model values described above on a 5-point Likert agreement scale (Strongly Disagree to Strongly Agree). For instance, if a participant was familiar with SAMR only, then the participant was asked only to what extent SAMR exhibited Clarity, Compatibility, etc., while if a participant was familiar with multiple models, then each model was scored separately. Third, operating on the assumption that a model’s actual value should be determined by both its alignment with theoretical values and each participant’s view of the relative importance of those values, a weighted model evaluation was calculated for each model evaluated by a participant using the following equation:
$$ S_{wmv} = \frac{\sum \left( S_{mv} \times S_{tv} \right)}{\sum S_{tv}} $$

In this equation, each participant’s model value scores (Smv) are multiplied by that participant’s score for the relative importance of the corresponding theoretical value (Stv), resulting in a weighted model value score. All weighted model value scores for a particular evaluation are then added together and divided by the sum of the participant’s value scores (Stv) to produce the overall model score (Swmv). This approach allowed for each model to be scored based upon the values of the participant, checking for how well each model addressed the values that were individually important to each participant. Since theoretical model value scores ranged from 0 to 1, this meant that model value scores were weighted as a percent, wherein important values were weighted at 100%, somewhat important values at 50%, and unimportant values at 0%. As a result, overall model scores ranged from −1 to 1, wherein −1 meant that the model completely failed to address the participant’s theoretical values, 0 meant that the model was neutral in addressing the participant’s values, and 1 meant that the model completely addressed the participant’s values.
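A minimal sketch of this calculation in Python, assuming each evaluation is represented as two equal-length lists of normalized scores (the function name and the zero-weight fallback are our assumptions, not specified by the study), might look like this:

```python
def weighted_model_score(smv, stv):
    """Compute the overall weighted model score (Swmv).

    smv: the participant's model value scores, one per theoretical
         value, each in [-1, 1] (Strongly Disagree to Strongly Agree).
    stv: the participant's importance weights for the same theoretical
         values, each 0.0, 0.5, or 1.0.
    """
    if len(smv) != len(stv):
        raise ValueError("smv and stv must be the same length")
    total_weight = sum(stv)
    if total_weight == 0:
        # Assumption: treat a participant who rates every value as
        # unimportant as neutral; the paper does not address this case.
        return 0.0
    return sum(m * t for m, t in zip(smv, stv)) / total_weight
```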

To illustrate, suppose a specific teacher rated theoretical values as listed in Table 1, with role of technology being not important (Stv = 0.0), fruitfulness being somewhat important (0.5), and all other values being important (1.0). Suppose that this same teacher strongly agreed that a specific model (e.g., TIP) exhibited clarity (Smv = 1.0), agreed that it exhibited compatibility (0.5), neither agreed nor disagreed that it exhibited outcomes (0.0), disagreed that it exhibited role of technology and fruitfulness (−0.5), and strongly disagreed that it exhibited scope (−1.0). The weighted model value for each item, then, would be calculated by multiplying the model value by the theoretical value (Smv * Stv). In this case, since the teacher said that clarity was important and strongly agreed that the model exhibited clarity, a high score of 1.0 was produced. Alternatively, since the teacher said that fruitfulness was only somewhat important but disagreed that the model exhibited fruitfulness, a somewhat negative score of −0.25 was produced. And since the teacher said that role of technology was not important, a neutral weighted value of 0.0 was produced. When the sum of all of the weighted model values (Σ[Smv * Stv]) was divided by the sum of the theoretical values (Σ[Stv]), the overall model score was produced. In this case, the score was .06, which meant that the model was only slightly successful in addressing the participant’s values (see the code sketch following Table 1 for this calculation).
Table 1
Example teacher model evaluation that would result in an overall weighted model value of 0.06 (0.25/4.5 = 0.06)

| Theoretical value | Importance | Stv | Model evaluation (TIP) | Smv | Weighted model value |
|---|---|---|---|---|---|
| Clarity | Important | 1.0 | Strongly Agree | 1.0 | 1.0 |
| Outcomes | Important | 1.0 | Neutral | 0.0 | 0.0 |
| Role of Technology | Not Important | 0.0 | Disagree | −0.5 | 0.0 |
| Compatibility | Important | 1.0 | Agree | 0.5 | 0.5 |
| Scope | Important | 1.0 | Strongly Disagree | −1.0 | −1.0 |
| Fruitfulness | Somewhat Important | 0.5 | Disagree | −0.5 | −0.25 |
| Sum | | 4.5 | | | 0.25 |
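Feeding the ratings from Table 1 into the weighted_model_score sketch above reproduces the reported overall score:

```python
# Theoretical value order: clarity, outcomes, role of technology,
# compatibility, scope, fruitfulness (as in Table 1)
stv = [1.0, 1.0, 0.0, 1.0, 1.0, 0.5]     # importance weights
smv = [1.0, 0.0, -0.5, 0.5, -1.0, -0.5]  # evaluations of TIP

print(round(weighted_model_score(smv, stv), 2))  # 0.06 (i.e., 0.25 / 4.5)
```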

Visual Appeal

It was also considered that the visual appeal of a model might be related to theoretical values. Participants were provided with a graphical multiple-choice question that provided the common visual depictions of three models (TPACK, SAMR, and TIP) and were asked the following question: “Based purely on the visual representation of the models provided below, which model do you find most appealing, or which would you be most interested in learning more about?” It was anticipated that with these results, researchers could discern which models were more visually appealing to participants and determine whether this appeal reflected theoretical model values (e.g., Clarity).

Data Analysis

This study relied upon quantitative analysis of survey responses. Descriptive results are provided, and a series of analysis of variance tests (ANOVA), paired-samples t-tests, and Fisher’s least significant difference (LSD) post hoc tests were utilized to determine significant findings. Separate statistical tests were required to answer each secondary research question, and detailed information about each test is provided in the following section, organized by research question.
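For readers who want to replicate this style of analysis, the tests named above are available in standard statistical libraries. The following is a brief sketch using SciPy with fabricated example numbers and hypothetical column names (the authors’ analysis code and raw data are not reproduced here); Fisher’s LSD can be approximated as unadjusted pairwise t-tests following a significant omnibus ANOVA.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant, with an experience group
# and a normalized attitude score for a single survey item.
df = pd.DataFrame({
    "experience": ["<=1 yr", "2-5 yrs", ">5 yrs", "<=1 yr", ">5 yrs", "2-5 yrs"],
    "engagement": [0.5, 0.5, 0.0, 1.0, 0.5, 0.0],
})

# One-way ANOVA with experience group as the independent variable
groups = [g["engagement"].to_numpy() for _, g in df.groupby("experience")]
f_stat, p_value = stats.f_oneway(*groups)

# Paired-samples t-test comparing raw and weighted model scores
# (illustrative per-participant values, not the study's data)
raw = [0.40, 0.10, 0.55, 0.30, 0.25]
weighted = [0.45, 0.10, 0.60, 0.35, 0.25]
t_stat, p_paired = stats.ttest_rel(raw, weighted)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
```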

Results

This study sought to understand what types of technology integration models are valuable for preservice and in-service teachers. Toward this end, secondary research questions with hypothesis testing (as appropriate) were explored, and detailed results for each secondary question now follow.

RQ1. What are Participants’ Attitudes Toward Theory and Technology, and are there any Differences Based upon Teaching Experience?

Responses to attitudes toward theory and technology items ranged from −1 to 1 (Strongly Disagree to Strongly Agree), and descriptive results revealed that participants generally agreed with all items (M = .14 to M = .62; cf. Table 2). Participants agreed most strongly that it is important for technology use to be connected to theory (M = .62) and that theory should be clear (M = .58), practical (M = .54), and connected to outcomes (M = .54), while they only somewhat agreed that it was important for technology integration model adoption to be uniform (M = .14) and that developing technology skills alone constituted sufficient justification for integrating technology (M = .28). A one-way analysis of variance (ANOVA) with classroom experience as the independent variable and attitude factors as dependent variables revealed that a difference between groups might exist in participant attitudes toward technology’s role as a method of engagement, F(2) = 3.08, p = .05. No other factors yielded significant differences between groups. Given that the overall test approached significance on this factor, Fisher’s LSD post hoc testing was conducted and revealed that participants who had been teaching for 1 year or less believed that engagement was a greater benefit of technology than did those who had been teaching for more than 5 years, MD = .2, p < .05.
Table 2
Descriptive results of attitudes toward theory and technology

| Item | Description | Min | Max | Mean | SD |
|---|---|---|---|---|---|
| Theory | Technology needs to be coupled with good theory in order to improve teaching and learning. | −1.0 | 1.0 | .62 | .37 |
| Clarity | Approaches to technology integration should be clear and should easily translate into concrete practice. | −1.0 | 1.0 | .58 | .39 |
| Practicality | Educational theory can be of practical value for classroom practice. | −1.0 | 1.0 | .54 | .35 |
| Outcomes | Educational technologies should clearly improve student achievement through measurable outcomes. | −1.0 | 1.0 | .54 | .44 |
| Engagement | Technology’s greatest benefit in the classroom is that it can engage students. | −1.0 | 1.0 | .51 | .47 |
| Compatibility | Educational technologies need to be compatible with existing practices. | −1.0 | 1.0 | .46 | .45 |
| Rethink | Educational technologies should force us to rethink existing practices and norms. | −1.0 | 1.0 | .38 | .48 |
| Skill | Developing technological skills is a sufficient reason alone for integrating technology. | −1.0 | 1.0 | .28 | .54 |
| Uniformity | Educators should adopt uniform models for understanding and implementing technology. | −1.0 | 1.0 | .14 | .52 |

RQ2. Which Theoretical Values are Important for Participants when Adopting a Technology Integration Model, and are there any Differences Based upon Teaching Experience?

Responses to theoretical value items ranged from 0 to 1 (Not Important to Very Important), and descriptive results revealed that participants generally believed that all items were at least somewhat important (M = .57 to M = .89; cf. Table 3). However, participants felt that clarity (M = .89), outcomes (M = .86), and role of technology (M = .81) were very important, while fruitfulness (M = .57) and scope (M = .71) were only somewhat important. A one-way analysis of variance (ANOVA) with classroom experience as the independent variable and theoretical value factors as dependent variables revealed no significant differences between groups, suggesting that classroom experience did not shift participants’ theoretical values.
Table 3
Descriptive results of theoretical values (Stv)

| Value | Description | Min | Max | Mean | SD |
|---|---|---|---|---|---|
| Clarity | The model is easily understood, well-defined, difficult to misunderstand, and easy to translate into concrete practice. | 0 | 1 | 0.89 | 0.24 |
| Outcomes | The model easily aligns with goals for improved student academic achievement and yields results that can be readily assessed. | 0 | 1 | 0.86 | 0.25 |
| Role of Technology | The model treats technology as a means for achieving a meaningful goal rather than treating technology as an end in itself. | 0 | 1 | 0.81 | 0.30 |
| Compatibility | The model is compatible with existing pedagogical practice and can be easily incorporated into existing practice through concrete steps. | 0 | 1 | 0.75 | 0.31 |
| Scope | The model forces you to think deeply about the educational process and educational institutions, involving ethical and social issues (e.g., equal access to quality education). | 0 | 1 | 0.71 | 0.33 |
| Fruitfulness | Many people know about the model and incorporate it into trainings, professional development, papers, lessons, blog posts, and so forth. | 0 | 1 | 0.57 | 0.35 |

RQ3. How Well do Specific Models (TPACK, TIP, SAMR, & RAT) Align with the Theoretical Values that are Important to each Participant?

Calculated evaluations of overall model value ranged from −1 to 1 (Complete Lack of Alignment to Complete Alignment) and took into consideration participant evaluations of model theoretical value alignment as well as each participant’s own beliefs about the importance of the theoretical constructs. Descriptive results of specific model weighted evaluations revealed that participants only somewhat agreed that SAMR (M = .46), RAT (M = .39), and TPACK (M = .27) aligned with their theoretical values and that they were generally neutral on TIP (M = .07; cf. Table 4). More robust statistical analysis could not be performed on these results, however, because sample sizes for these items were much smaller than for other items (e.g., not every participant was familiar with every model).
Table 4
Descriptive results of specific model evaluations (raw and weighted)

| Model | N | Raw Mean | Raw SD | Raw Min | Raw Max | Weighted Mean | Weighted SD | Weighted Min | Weighted Max | Weight change |
|---|---|---|---|---|---|---|---|---|---|---|
| SAMR | 53 | .43 | .31 | −.33 | 1.00 | .46 | .31 | −.25 | 1.00 | .03** |
| RAT | 19 | .39 | .23 | .00 | .92 | .39 | .21 | .00 | .91 | .00 |
| TPACK | 13 | .27 | .42 | −1.00 | .58 | .25 | .41 | −1.00 | .56 | −.02 |
| TIP | 13 | .07 | .47 | −1.00 | .58 | .07 | .48 | −1.00 | .56 | .00 |

**Indicates significance at the p < .01 level

To consider whether participant value weightings significantly changed individual participants’ evaluations of specific models, a series of paired-samples t-tests were conducted, comparing raw and weighted scores. Results indicated that only the SAMR evaluations were significantly altered by the weighting, t(52) = 3.39, p < .01, with a small difference in means (MD = .03).

RQ4. How is Visual Appeal of a Model Influenced by Participants’ Attitudes toward Technology and Theory?

Responses to the visual appeal item indicated that participants favored the visual representation of SAMR (62.2%), followed by TPACK (29.1%) and TIP (8.8%). A one-way analysis of variance (ANOVA) with visual appeal as the independent variable and attitudes toward technology and theory factors as dependent variables revealed that a difference between groups existed for Practicality, F(2) = 3.36, p < .05. No other factors yielded significant differences between groups. Given the significance of the overall test, Fisher’s LSD post hoc testing was conducted and revealed that participants who preferred TPACK’s visual representation believed more strongly than others that educational theory can have practical value for classroom practice (MD = .15 relative to SAMR and MD = .22 relative to TIP; cf. Table 5).
Table 5
Fisher’s LSD post hoc results of practicality on visual appeal of specific models

| | SAMR MD | SAMR Std. Error | TIP MD | TIP Std. Error |
|---|---|---|---|---|
| TPACK | .15* | .06 | .22* | .11 |

*Denotes significance at the p < .05 level

Discussion

Results of this study have important implications for a variety of educational stakeholders. We will proceed by discussing teacher educator, educational researcher, and general implications.

Teacher Educator Implications

With regard to teacher educators, teacher attitudes toward theory and technology revealed that technology use needs to be coupled with good theory that is clear and practical. This means that technology integration efforts should always be guided by meaningful theories that practically address desired learning outcomes in a manner that is contextually valuable in the teacher’s unique classroom setting (Kimmons and Hall 2016a). When selecting a model, teacher educators should be less concerned about fruitfulness, which was the least important of identified criteria, because practicing teachers do not seem to care whether or not models are in use in the field generally but instead focus entirely upon whether a model is useful in their local contexts (Burkhardt and Schoenfeld 2003). This means that teacher educators should be careful not to adopt models in teacher education just because they are in wide use but should concretely consider a theoretical model’s value for their localized practice.

Also, differences in evaluations between models suggested that some models may be more useful to teachers than others, and that choosing a model should not merely be an aesthetic choice by the teacher educator. Teacher education programs should be thoughtful in considering which models to utilize in coursework and training and should recognize that teachers will be better served by selecting particular models as a basis for training over others. This requires a shift away from existing tribalistic tendencies in model selection toward informed pluralism that requires consideration of multiple theoretical options and their explicit alignment with values, beliefs, and desired outcomes (Kimmons and Hall 2016a). Along these lines, results indicated that considerations of clarity, outcomes, and the role of technology were of primary importance to teachers, which means that adopted models should engender real-world, concrete application. Teachers recognize the need for theoretical models insofar as these models have discernible bearing in their classrooms and readily help them to achieve valuable goals. Thus, though a theoretical construct may be conceptually rich and appealing to a researcher, it may provide limited utility and value for a classroom teacher; and teacher educators should only utilize those models in teaching that align with desirable teacher values and beliefs and provide practical guidance on classroom use.

Educational Researcher Implications

With regard to educational researchers, two findings are noteworthy. First, results corroborated some existing studies that classroom experience does not influence technology adoption (cf., Bebell et al. 2004; Smarkola 2008) by showing that it does not influence theoretical evaluations or beliefs in most cases. This runs contrary to prevalent myths in education related to more experienced teachers being less willing to innovate with technology. In this study, the only time that classroom experience seemed to be significant was when participants considered the value of technology as an engagement tool, wherein less experienced teachers expressed greater interest in technology for this purpose. This result suggests that emphasizing technology’s importance for engagement may reflect a certain level of naivety about the classroom and technology’s actual benefits for supporting student learning, thereby perpetuating novelty effects rather than sustained learning benefits (Clark 1983). Researchers should explore this issue in more depth to determine whether this differing emphasis on engagement has an impact on technology use for novelty effects.

And second, visual appeal of a model seems to be largely subjective, but it appears that a teacher’s attitude toward practicality might influence them to gravitate toward some visual depictions over others. This means that researchers and others who develop technology integration models should give attention to the visual illustrations of the models they utilize and recognize that visual cues suggest certain aspects of models to teachers (e.g., practicality) that may or may not be accurate (cf., the notion of deceptive simplicity in Graham 2011). As a field, we need to be careful to ensure that our visual models align with the theoretical components they are purportedly representing. Otherwise, educators may be drawn to a model for its visual appeal or aesthetics (Feyerabend 1975) only to later find that it poorly aligns with their values.

General Implications

In more general terms, the process we followed in this study generated some insights into the perspectives of our participants, and though many advocates of technology integration models may incorporate these models into research and training, rarely are practitioners asked to provide feedback on the models themselves. Based on these results, we find it dubious that a single model currently addresses all of the needs of participants or fully aligns with their theoretical values (cf., Kuhn 1996), but more importantly, we also submit that the development of a single monolithic model of technology integration that addresses all participant needs may not be possible (cf., Burkhardt and Schoenfeld 2003), given the diversity of individual perspectives.

We suggest that the importance of a study like this lies in its ability to draw out potential benefits and limitations of specific models, which can then be used to support informed plurality in model use. In short, we need meaningful methods for comparing models and determining their appropriateness for particular purposes, and by reporting on this study, we hope to provide other teacher educators and researchers with a groundwork for exploring this issue and determining how to best approach technology integration model adoption in their institutions.

Conclusion

Technology integration models provide structure for the complex task of integrating technology in educational environments. However, a problem lies in choosing the proper model to fit the contextual needs of practitioners. In this study, we report on a survey of K-12 teachers and teacher candidates wherein participants evaluated known models (e.g., TPACK, SAMR, RAT, TIP) and provided insight on what makes a model valuable for them in the classroom. Results indicate that: (1) technology integration should be coupled with good theory to be effective, (2) classroom experience did not generally influence teacher values and beliefs related to technology integration, (3) some models may be more useful to teachers than others, (4) the widespread use of a model does not necessarily reflect usefulness, (5) useful models for teachers should engender real-world, concrete application, and (6) visual appeal of a model is largely subjective, but some visual representations might convey notions of practicality. Conclusions should be used to help researchers and practitioners understand that (1) technology integration models are varied and may have differing value to various groups, (2) technology integration models can and should be evaluated for their usefulness in various contexts, and (3) we need to have a conversation in the field about how to evaluate models in a way that reflects the richness of the models and the contexts in which they are applied.

Notes

Funding

This study was funded by the J. A. and Kathryn Albertson Foundation.

Compliance with Ethical Standards

Conflict of Interest

Dr. Royce Kimmons declares that he has no conflict of interest. Cassidy Hall declares that she has no conflict of interest.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

References

  1. Archambault, L., & Barnett, J. (2010). Revisiting technological pedagogical content knowledge: Exploring the TPACK framework. Computers & Education, 55(4), 1656–1662.
  2. Archambault, L., & Crippen, K. (2009). Examining TPACK among K-12 online distance educators in the United States. Contemporary Issues in Technology and Teacher Education, 9(1), 71–88.
  3. Bebell, D., Russell, M., & O’Dwyer, L. (2004). Measuring teachers’ technology uses: Why multiple-measures are more revealing. Journal of Research on Technology in Education, 37(1), 45–63.
  4. Becker, H. J. (2000). Who’s wired and who’s not: Children’s access to and use of computer technology. Children and Computer Technology, 10(2), 44–75.
  5. Brantley-Dias, L., & Ertmer, P. A. (2013). Goldilocks and TPACK: Is the construct ‘just right?’. Journal of Research on Technology in Education, 46(2), 103–128.
  6. Burkhardt, H., & Schoenfeld, A. H. (2003). Improving educational research: Toward a more useful, more influential, and better-funded enterprise. Educational Researcher, 32(9), 3–14.
  7. Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445–459.
  8. Cuban, L. (1988). Constancy and change in schools (1880s to the present). In P. W. Jackson (Ed.), Contributing to educational change: Perspectives on research and practice (pp. 85–105). Berkeley: McCutchan.
  9. Feyerabend, P. K. (1975). Against method: Outline of an anarchistic theory of knowledge. London: New Left Books.
  10. Graham, C. R. (2011). Theoretical considerations for understanding technological pedagogical content knowledge (TPACK). Computers & Education, 57(3), 1953–1960.
  11. Hughes, J. (2005). The role of teacher knowledge and learning experiences in forming technology-integrated pedagogy. Journal of Technology and Teacher Education, 13(2), 277–302.
  12. Kagan, D. (1992). Professional growth among preservice and beginning teachers. Review of Educational Research, 62, 129–169.
  13. Kimmons, R. (2015). Examining TPACK’s theoretical future. Journal of Technology and Teacher Education, 23(1), 53–77.
  14. Kimmons, R., & Hall, C. (2016a). Emerging technology integration models. In G. Veletsianos (Ed.), Emergence and innovation in digital learning: Foundations and applications. Edmonton: Athabasca University Press.
  15. Kimmons, R., & Hall, C. (2016b). Toward a broader understanding of teacher technology integration beliefs and values. Journal of Technology and Teacher Education, 24(3), 309–335.
  16. Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago: The University of Chicago Press.
  17. Kuhn, T. (2013). Objectivity, value judgment, and theory choice. In A. Bird & J. Ladyman (Eds.), Arguing about science (pp. 74–86). New York: Routledge.
  18. Mishra, P., & Koehler, M. J. (2007). Technological pedagogical content knowledge (TPCK): Confronting the wicked problems of teaching with technology. In R. Carlsen et al. (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2007 (pp. 2214–2226). Chesapeake: AACE.
  19. Pajares, M. F. (1992). Teachers’ beliefs and educational research: Cleaning up a messy construct. Review of Educational Research, 62(3), 307–332.
  20. Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20(4), 167–182.
  21. Puentedura, R. R. (2003). A matrix model for designing and assessing network-enhanced courses. Hippasus. Retrieved from http://www.hippasus.com/resources/matrixmodel/
  22. Roblyer, M. D., & Doering, A. H. (2013). Integrating educational technology into teaching (6th ed.). Boston: Pearson.
  23. Smarkola, C. (2008). Efficacy of a planned behavior model: Beliefs that contribute to computer usage intentions of student teachers and experienced teachers. Computers in Human Behavior, 24(3), 1196–1215.
  24. Willingham, D. T. (2012). When can you trust the experts? How to tell good science from bad in education. San Francisco: Jossey-Bass.

Copyright information

© Association for Educational Communications & Technology 2017

Authors and Affiliations

  1. Brigham Young University, Provo, USA
  2. University of Idaho, Moscow, USA
