Introduction

This article discusses a scale developed to measure the extent to which teachers use the three dimensions and six sub-dimensions of a mobile pedagogical framework (Kearney et al. 2012), which became known as the iPAC framework (Burden and Kearney 2018). The original framework was developed to reflect the learning that occurs using mobile technologies and highlights the dimensions that are characteristic of mobile learning (m-learning). Mobile technologies are devices whose main characteristics are portability and connectedness; they include smartphones, tablets and laptop computers. The iPAC framework was developed to describe signature pedagogies underpinning mobile activities from a socio-cultural perspective (Kearney et al. 2012), as described in the “Conceptual Framework” section.

In this article, we investigate ways to measure use of the mobile pedagogical dimensions of personalisation, authenticity and collaboration, and their corresponding sub-dimensions of agency and customisation; context and task; and conversation and co-creation. These sub-dimensions have evolved since the original development of the framework (Burden and Kearney 2018; Kearney et al. in press), based on feedback from users and from analysis of earlier scales in which factor analysis indicated the strength or weakness of the dimensions’ internal consistency. The amendments are discussed in more detail below.

Existing Instruments Measuring Aspects of M-Learning

M-learning has been defined as learning that takes place with the help of portable electronic tools (Quinn 2000). Other scholars have noted the mobility enabled by the device, a characteristic enabling learning anytime, any place and any way (Cavus and Ibrahim 2009). Numerous instruments have already been developed to measure a variety of aspects of m-learning.

Based on the literature on technology readiness, online learning (or e-learning) readiness, and mobile computer anxiety, Lin et al. (2016) developed and validated a mobile learning readiness (MLR) scale which can be used to assess learners’ readiness to embrace m-learning in numerous contexts (e.g., school, university and workplace). They reported on the scale development of this 19-item instrument with three dimensions: m-learning self-efficacy, optimism, and self-directed learning. It was validated using responses from a diverse range of 319 participants, including students (from unspecified sectors) and workplace learners. The aim of this generic instrument is to assist researchers in measuring learners’ psychological readiness to embrace m-learning.

In an earlier study, Wang (2007) reported on the development and validation of another scale in the affective domain: the ‘mobile computer anxiety scale’. This 38-item instrument supplemented similar instruments that measure computer and internet anxiety. The author used responses from 287 participants to validate the instrument. Participants were adults aged 19 to 60 from a range of organisations in Taiwan.

Another instrument that has been used in higher education was developed by Knezek and Khaddage (2012). They developed and used a seven-item Likert-type survey, named the ‘mobile learning scale’, to elicit university students’ attitudes toward mobile learning with an emphasis on both informal and formal learning environments.

A variety of other studies have developed robust survey instruments to investigate m-learning in school contexts. For example, Domingo and Garganté (2016) developed a survey for use in schools. Their instrument was designed to collect data about teachers’ individual information, teachers’ perceptions of the impact of mobile technology on learning, and the use of a set of selected apps in the classroom. Their study gathered data from 102 teachers at 12 different primary schools in Spain.

Lai et al. (2016) developed a mobile learning environmental preference survey (MLEPS) consisting of eight factors: ease of use; continuity; relevance; adaptive content; multiple sources; timely guidance; student negotiation; and inquiry learning. They used this instrument to investigate differences between mobile learning environmental preferences of 429 high school teachers and 1239 students in Taiwan. They found that the teachers tended to focus more on technical issues, while the students cared more about the richness and usefulness of the learning content. They concluded that m-learning environments should consider both students’ and teachers’ preferences in the future.

Focusing on teachers, Uzunboylu and Ozdamli (2011) reported on the scale development of an instrument that assesses teachers’ perceptions of m-learning in three areas: mobile technologies’ fit, appropriateness, and forms of applications. This study had responses from 467 teachers from 32 schools in Northern Cyprus. In a later study focusing on university students, Uzunboylu et al. (2015) developed a reliable and valid scale to determine students’ attitudes towards mobile-enabled language learning. Their “English Language Learning via Mobile Technologies Attitude Scale (ELLMTAS)” contains 37 items and is composed of six sub-dimensions. It was prepared by incorporating the views of experts (n = 15) and reviewing the relevant literature. It was applied to 275 university students in an English language course at Cyprus International University.

The Need for a Scale to Measure iPAC Dimensions

As noted in the “Existing Instruments Measuring Aspects of M-Learning” section, much of the literature on m-learning provides examples of mobile practices and developments in the educational use of mobile technologies (e.g., Valk et al. 2010) and their measurement. However, the pedagogical characteristics of m-learning have been given less consideration in the literature than the technical aspects. Many instruments in m-learning studies have the goal of measuring the effectiveness of m-learning, particularly in higher education (Hung and Zhang 2012; Wu et al. 2012), or aim to measure attitudes to m-learning, or to identify affordances and barriers to adoption.

Unlike these, our iPAC scales aim to foreground teachers’ pedagogies, rather than specific technologies or perceptions of m-learning drivers and constraints. They do so by interrogating teachers’ adoption of signature m-learning practices. The instruments reported in this paper elicit data on these distinctive m-learning approaches as enacted by primary and secondary school teachers. Use of these instruments provides unique data by specifically targeting the three iPAC dimensions detailed in the “Conceptual Framework” section (personalisation, authenticity and collaboration) to investigate how teachers are orchestrating signature mobile pedagogies in their task designs.

Most studies rely on self-reported scales or performance measures (e.g., literacy tests) of those using a mobile technology in their learning and make comparisons to results obtained without use of the technology (e.g., Stockwell 2007). In some cases, the comparison is made with learning or enjoyment that occurred in a period of time in which the technology had not been used (e.g., Cavus and Ibrahim 2009; Wang et al. 2009). Other studies describe how researchers undertook a controlled experiment to evaluate outcomes across both time and technology dimensions (e.g., Basoglu and Akdemir 2010). Missing from such evaluations is the nature of the pedagogies that were employed to promote m-learning. In particular, there is little measurement of the presence or absence of certain socio-cultural characteristics of m-learning approaches, as indicated in the iPAC framework (see the “Conceptual Framework” section).

One problem is that most m-learning studies reveal insights into m-learning from a technical perspective (i.e., using m-technology) or an understanding of related outcomes (e.g., learning; enjoyment; performance, etc.). However, more research is needed to provide insights into mobile pedagogical elements, and robust instruments are needed to facilitate these studies. Measuring m-learning from a pedagogical perspective is often overlooked in the literature and a rigorous, formal mechanism to capture this perspective remains elusive.

While learning mediated by mobile devices is still emergent, mobile pedagogies are of interest to both researchers and practitioners. To our knowledge, however, an instrument to measure the nature of mobile pedagogies has not previously been developed, and this void may impede the development of m-learning theory and practice. This study therefore developed a generic measure of teachers’ mobile pedagogical practices, based on a socio-cultural mobile pedagogical framework (iPAC).

The current article provides an overview of the iPAC framework and then articulates a scale development process aimed at capturing and measuring the three dimensions described in this framework. An empirical study validates and establishes the measurement properties of the scale so that other scholars and practitioners may use it to investigate mobile pedagogies in a range of settings. The results showed that iPAC practice can be measured accurately with 20 viable items, structured around the three validated dimensions of personalisation, authenticity and collaboration.

Conceptual Framework

The iPAC framework originated as the Mobile Pedagogical framework (Kearney et al. 2012). It was the basis of a 2014–2017 Erasmus+ project, the Mobilising and Transforming Teacher Educators' Pedagogies (MTTEP) project, and subsequently became known to practitioners and users as the iPAC framework (Burden and Kearney 2018). Its key dimensions are personalisation, authenticity and collaboration, each with sub-dimensions. How learners experience these distinctive characteristics of m-learning is influenced by their exploitation of spatial and temporal boundaries (or ‘time-space’), as depicted in Fig. 1. For a full description of the origins of this framework, see Kearney et al. (2012).

Fig. 1 Original representation of the mobile-learning pedagogy framework. From Kearney et al. (2012), p. 8 (with permission)

Ensuing research using the framework resulted in amendments to the sub-dimensions, some of which were made to improve their comprehensibility. The dimensions and sub-dimensions are discussed below.

Collaboration

The term collaboration in the iPAC framework is taken to indicate that learners use their mobile devices to make rich connections to other people and resources. Exploiting the networking capability of mobile devices can potentially create shared, socially interactive environments.

Conversation

Collaboration is considered with respect to two sub-dimensions. The first is titled ‘conversation’. This sub-dimension is conceptualised as the extent to which learners hold conversations around or through the mobile device with peers, teachers and other experts.

Co-Creation

The other sub-dimension of collaboration is co-creation: the extent to which learners use a mobile device to co-create digital content and share information, data and artefacts. The label evolved from the original ‘data sharing’ to more accurately describe the survey items for this sub-dimension.

Personalisation

Personalisation includes pedagogical features such as learner choice, autonomy and customisation. The two sub-dimensions of personalisation are described below.

Agency

Within personalisation, one sub-dimension is the extent to which learners have control over the place (physical and/or virtual), pace and time they learn, and autonomy over their learning content. We refer to this as agency.

Customisation

Within personalisation, we consider the extent to which learners can customise their m-learning experience, both at the level of the tool (e.g., choice of apps, or the device itself) and the activity (e.g., adaptations of activity or challenge levels automatically provided by the app to suit the learner).

Authenticity

Authenticity means that the mobile learning experience provides real-world relevance and personal meaning to the learner. We consider two aspects of authenticity: the context of the authentic learning experience and the task associated with the authentic experience.

Context

Authenticity can be realised in the context surrounding the learning task. We consider the extent to which learners’ mobile learning experiences are enhanced by realistic, meaningful content, or through ‘in situ’ learning in relevant physical and/or virtual settings.

Task

Authenticity can further be considered with respect to the m-learning activity. Task refers to the extent to which m-learning activities are realistic and relevant to the real world, and the extent to which the tasks and associated processes require use of apps and tools that replicate those of real-world practitioners.

These authenticity sub-dimensions have evolved from the original ‘contextualisation’ and ‘situatedness’ labels, to more accurately describe the survey items for these sub-dimensions.

The updated iPAC model is depicted in Fig. 2 below. The evolution of this representation is fully described in Kearney et al. (in press).

Fig. 2 Current representation of the iPAC framework (formerly known as the mobile-learning pedagogy framework). Adapted from Kearney et al. (2012), p. 8

The Scale Development Process

The initial development of items was undertaken in a range of forums, including formal and informal discussions about mobile pedagogies with other researchers and experts in m-learning, as well as with pre-service and in-service teachers. A more formal first-round test of measurement items with in-service teachers was undertaken; its results are not reported here in the interest of brevity, but are available from the authors upon request. These initial results showed that item reliability and discriminant validity could be improved. For example, teachers were asked to report on m-learning activities involving communicating with experts, a description that captured both collaboration sub-dimensions as well as elements of authenticity. As such, there were some concerns regarding cross-factor loadings, to the point that an improved instrument was desirable, thereby motivating the current study.

To address issues of discriminant validity, the original items were re-written and then reviewed by a team of five experts to ensure that they referred to only one of the three dimensions of the m-learning pedagogical framework. Given the eventual task of measurement would rely on the comprehension and perceptions of teachers, a subsequent classification exercise involving 79 pre-service teachers was undertaken.

Classification Exercise

Pre-service teachers were first provided with a brief description of what collaboration, authenticity and personalisation mean in the iPAC context, and could refer to these descriptions at any time during the task. The classification exercise presented each proposed item to participants, who classified each survey item as describing authenticity, collaboration or personalisation. They could also indicate that an activity description matched none of these three dimensions by selecting ‘neither’, or that they were unsure by selecting ‘unsure’. As such, respondents used five response options (authenticity; collaboration; personalisation; neither; unsure) to classify 24 randomised statements. The statements were originally intended to refer to collaboration (seven statements), personalisation (six statements) or authenticity (seven statements). A further four statements were included as distractors to check the engagement of participants in the exercise.

The results demonstrated that 17 of the 20 PAC items were assigned to the intended dimension by at least 50% of pre-service teachers. For example, 93% of respondents classified the activity described as “Students talk about the work displayed on the screen with others around them” as reflective of collaborative m-learning activity. Overall, the average level of agreement in classification of these 17 items was 78%. With respect to misclassification, 40% of respondents indicated that “Students annotate existing digital content e.g. creating a media mash, tagging a photo, editing a video” referred to personalisation rather than online collaboration as intended. Similarly, only 39% of respondents agreed that activities in which “Students learn through a community activity/project e.g. Platypus census using platypusSPOT app; environment projects such as bush regeneration or water quality” represented an authentic activity, with another 38% indicating this item represented collaboration. Overall, the results indicated the typical statements and words that teachers would use to identify whether an activity was collaborative, authentic or involved personalisation. These results are reported in Table 1.

Table 1 Results of item-classification task
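For readers wishing to replicate this kind of classification audit, the agreement and misclassification percentages can be tallied directly from raw responses. The following is a minimal sketch assuming a hypothetical long-format data set; the item codes and column names are our own illustration, not those of the study instrument.

```python
import pandas as pd

# Hypothetical long-format classification responses: one row per
# respondent-item pair. Item codes and columns are illustrative only.
responses = pd.DataFrame({
    "item": ["C1", "C1", "C1", "A3", "A3", "A3"],
    "intended": ["collaboration"] * 3 + ["authenticity"] * 3,
    "classified": ["collaboration", "collaboration", "personalisation",
                   "authenticity", "collaboration", "authenticity"],
})

# Per-item agreement: the share of respondents who assigned the item
# to its intended dimension. Items below .50 are rewording candidates.
agreement = (
    responses.assign(hit=responses["classified"] == responses["intended"])
             .groupby("item")["hit"].mean()
)
print(agreement)

# Misclassification profile for one item: where did the votes go?
print(responses.loc[responses["item"] == "A3", "classified"]
               .value_counts(normalize=True))
```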

The final list of items was further developed based on another review of the statements by the team of experts. Where items had high rates of misclassification, they were reworded to improve reliability and discrimination from the other sub-dimensions. The survey that was tested consisted of 24 items measuring conversation (4 items), co-creation (3), agency (4), customisation (5), context (4) and task (4). A five-item measure was also developed to gauge overall m-learning outcomes.

Testing the iPAC Scale

Respondents were asked to complete a survey online. After reviewing a statement about ethics, those who agreed to undertake the survey were asked whether they had ever encouraged their students to use mobile technologies to support their learning. Mobile learning was defined as a “task or activity (or what we will call a ‘m-learning activity’) utilising mobile technologies to support learning. This can be in and/or outside of class (e.g., at home; on excursions).” To provide clarity on mobile technologies, respondents were also informed that: “A mobile technology is any portable, handheld technology that potentially supports learning. This includes any portable device such as a laptop, a tablet (e.g., an iPad), a two-in-one (e.g., a Surface), or a smart phone”.

Teachers were then asked to nominate their main discipline area, from a list of 21 areas (e.g., English, Mathematics, Science, Agriculture, Food Technology), in which they had used mobile technologies in activities. They were also asked to nominate the main class level, ranging from Year K to Year 12, in which they had implemented m-learning activities over the last year. As the larger project of which this study was part was funded to focus on Years 7 to 12 cohorts, only those high school teachers were retained for analysis. The information regarding subject area and cohort was then piped into each question to contextualise the items; that is, teachers were asked to consider certain questions about the behaviour of students with mobile devices in the cohort and subject area previously nominated. Specifically, for each set of items, they were asked to consider: “When my students in Year <7 to 12> used mobile devices to learn in <subject area> activity, …”. Participant teachers were then asked to nominate the option that best described their response to each statement on a scale of 1 to 5, where “1” means “Strongly disagree” and “5” means “Strongly agree”.

The survey instrument was further tested using four different versions. First, the stem of the item referred either to ‘typical usage’ with mobile devices over the past year with a given cohort and subject area, or to one recently implemented, ‘chosen’ activity with mobile devices for a given cohort and subject area. Second, each statement was presented either with or without examples. For instance, an item relating to the conversation sub-dimension was presented as “Discussed the work online with their friends/peers e.g. discussed ideas via email, SMS, Skype, Facebook etc.”, whereas in the other condition this was presented simply as “Discussed the work online with their friends/peers”. Respondents were randomly allocated to one of these four survey versions:

A: General use over past year, with examples
B: General use over past year, without examples
C: Specific task, with examples
D: Specific task, without examples
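A minimal sketch of how the random allocation and question piping described above might be implemented. The stem wording follows the text; the function and dictionary names are our own illustration, not the configuration of the survey platform actually used.

```python
import random

# The four survey versions: item framing crossed with example provision.
VERSIONS = {
    "A": {"frame": "general use over past year", "examples": True},
    "B": {"frame": "general use over past year", "examples": False},
    "C": {"frame": "specific task", "examples": True},
    "D": {"frame": "specific task", "examples": False},
}

def assign_version(rng: random.Random) -> str:
    """Randomly allocate a respondent to one of the four versions."""
    return rng.choice(list(VERSIONS))

def item_stem(year: int, subject: str) -> str:
    """Pipe the nominated cohort and subject area into the shared stem."""
    return (f"When my students in Year {year} used mobile devices "
            f"to learn in {subject} activity, ...")

rng = random.Random(2019)  # seeded for reproducibility
print(assign_version(rng))
print(item_stem(9, "Science"))
```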

Results and Discussion

The results of the scale development were based on 349 completed responses. Respondents were Australian teachers recruited via social media, from a database of those who had registered interest in being contacted for the study, and via a survey panel provider. Exploratory and confirmatory factor analysis was first undertaken to assess the measurement properties of the developed scale. Below we present the details of the final model that emerged from that analysis with respect to each of the main dimensions and sub-dimensions of m-learning pedagogical practice.
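As an illustration of the exploratory stage, the sketch below runs an exploratory factor analysis in Python using the third-party factor_analyzer package. This is not the authors' analysis code: the data file is a placeholder, and the choice of an oblique rotation is our own assumption (reasonable where sub-dimensions are theorised to correlate).

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Hypothetical wide-format data: one column per Likert item (1-5).
items = pd.read_csv("ipac_items.csv")  # placeholder path

# Six factors to mirror the six proposed sub-dimensions; an oblique
# (oblimin) rotation lets factors correlate, as the framework implies.
efa = FactorAnalyzer(n_factors=6, rotation="oblimin")
efa.fit(items)

# Inspect the pattern matrix for weak or cross-loading items before
# moving to the confirmatory stage.
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))
```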

Collaboration

The collaboration dimension was considered with respect to two underlying sub-dimensions: conversation and co-creation. With respect to conversation, the scale that included the first item (“Talked about the work displayed on the screen with others around them”) resulted in an average variance extracted (AVE) of less than .5 in all versions of the survey, and also in the analysis combining all sample data, so convergent validity could not be established (Fornell and Larcker 1981). When this item was dropped, however, the AVEs all rose above .5, an acceptable level of convergent validity. In addition, the composite reliability (CR) was above .7 in all cases, confirming the reliability of the items in relation to the individual construct (Raykov 1997). In retrospect, the first item referred to conversing with others around the mobile device, whilst the three other items referred to conversing online; as such, the finalised sub-dimension as measured by the three items in Table 2 is reflective of m-learning activities involving online conversations ‘through’ the device.
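The AVE and CR thresholds applied here follow the standard formulas (Fornell and Larcker 1981; Raykov 1997), which are straightforward to compute from standardised factor loadings. A minimal sketch with illustrative loadings, not the study's estimates:

```python
def ave(loadings):
    """Average variance extracted: mean of squared standardised loadings.
    Convergent validity is conventionally accepted when AVE >= .5."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability from standardised loadings, assuming
    uncorrelated errors; conventionally acceptable when CR >= .7."""
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return squared_sum / (squared_sum + error)

# Illustrative loadings for a three-item sub-dimension.
conversation = [0.78, 0.81, 0.74]
print(round(ave(conversation), 2))                    # 0.60 -> convergent validity OK
print(round(composite_reliability(conversation), 2))  # 0.82 -> reliability OK
```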

Table 2 Measurement properties of collaboration dimension

Co-creation was measured by three items and referred to learners’ use of the mobile device to collaboratively create digital content. The factor loadings and associated measures of reliability are presented in Table 2. The co-creation scale captures m-learning in terms of students working together to create, contribute to and share digital content. There was sufficient convergent validity and reliability at the aggregate level. In version C of the survey (specific task, with examples), however, item 3 in this sub-dimension (‘[students] contributed to existing digital content’) was a less satisfactory reflective measure of co-creation.

Personalisation

Personalisation captures the extent to which m-learning involves students choosing the parameters of their learning activities with respect to time, pace and location (i.e., agency), as well as the customisation of the m-learning activity based on the learning preferences and needs of the individual (i.e., customisation). With respect to agency, in all four survey versions the inclusion of all four items to measure this sub-dimension resulted in an AVE below .5. However, the removal of item 4 (‘[students] chose how to express their thinking’) resulted in an acceptable measure overall (see the agency section of Table 3).

Table 3 Measurement properties of personalisation dimension

It is noted that the version in which teachers reported on a specific m-learning activity without examples (i.e., version D) resulted in a measure of agency that did not meet the requirements for convergent validity (see Table 3). Overall, the agency items reflect the extent to which students choose the time, pace and place of their m-learning activities.

Personalisation was also considered in terms of the extent to which teachers agreed that their students experienced customised learning as guided by their mobile devices. A four-item measure was supported based on appropriate factor loadings and reliability statistics. The inclusion of item 5 (‘[students] customised the learning to their requirements’) slightly lowered the AVE and CR in all cases, but its effect was negligible except in version D (specific task, no examples). Taken together, these customisation items capture the extent to which students’ device use and app preferences resulted in learning experiences tailored to their individual learning needs.

Authenticity

The authenticity of m-learning was conceptualised with respect to two underlying sub-dimensions: context and task. With respect to context, a three-item measure was supported: including item 3 (“[Students] engaged in learning content that was relevant to them”) decreased reliability such that the AVE was unacceptable in three of the four versions and in the combined sample, so the item was excluded. Taken collectively, the context items capture the extent to which the time and place of the m-learning activities create meaning for learners. The factor loadings and reliability measures of the final three-item measure of context are presented in Table 4.

Table 4 Measurement properties of authenticity dimension

Authenticity with respect to task was captured by four proposed items (see also Table 4). The items reflect m-learning that involves working like an expert, participating in real-world activities and engaging in activities related to everyday life. The latent construct ‘task’ concerns the embedded learning that occurs in everyday life, with students as citizens.

Overall M-Learning Experiences

An overarching measure of m-learning activities was devised to reflect teachers’ views of students’ experiences with respect to learning, enjoyment and understanding of subject material. The original measure proposed an item capturing the difficulty of learning a subject using mobile devices, which, when reverse-coded, was expected to reflect the same overarching construct as the other items. However, the item did not work as intended, based on its lack of inter-item correlation with the four other items and the overall construct. In turn, overall m-learning experience was best captured using the four items in Table 5, reflecting the extent to which teachers agreed that mobile devices help students learn, practise and improve their understanding of the subject in which they had been employing mobile technologies. This forms the dependent variable later considered in the structural model of m-learning pedagogy. The factor scores and measurement properties of the finalised scale for m-learning experience are presented in Table 5.
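The screening that led to dropping the difficulty item can be reproduced with reverse-coding and corrected item-total correlations. A sketch under the assumption of a 1 to 5 response scale; the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical responses to the five proposed outcome items (1-5 scale),
# one column per item, including a negatively worded 'difficulty' item.
outcomes = pd.read_csv("mlearning_outcomes.csv")  # placeholder path

# Reverse-code the difficulty item so that a high score indicates a
# positive m-learning experience, matching the other four items.
outcomes["difficulty_r"] = 6 - outcomes["difficulty"]
outcomes = outcomes.drop(columns="difficulty")

# Corrected item-total correlation: each item against the sum of the
# remaining items. A weakly correlated item (as the reversed difficulty
# item was here) is a candidate for removal from the scale.
for col in outcomes.columns:
    rest = outcomes.drop(columns=col).sum(axis=1)
    print(col, round(outcomes[col].corr(rest), 2))
```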

Table 5 Overall M-learning experience

Discriminant Validity

Discriminant validity is the extent to which each construct is sufficiently different from the other constructs. Whilst this arose as a concern in earlier stages of the scale development process, discriminant validity of the items was established for the final measures: the squared correlation between any two constructs was shown to be less than their respective AVEs (Fornell and Larcker 1981), as reported in Table 6.
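The Fornell-Larcker criterion used in Table 6 can be checked mechanically: the squared correlation between each pair of constructs must fall below both constructs' AVEs. A sketch with hypothetical values, not the study's estimates:

```python
from itertools import combinations

# Hypothetical AVEs and latent correlations for three sub-dimensions.
ave = {"conversation": 0.61, "co_creation": 0.58, "agency": 0.55}
corr = {
    ("conversation", "co_creation"): 0.52,
    ("conversation", "agency"): 0.34,
    ("co_creation", "agency"): 0.41,
}

# Discriminant validity holds when r^2 < AVE for both constructs.
for a, b in combinations(ave, 2):
    r2 = corr[(a, b)] ** 2
    ok = r2 < ave[a] and r2 < ave[b]
    print(f"{a} vs {b}: r^2 = {r2:.2f} -> "
          f"{'discriminant' if ok else 'potential overlap'}")
```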

Table 6 Assessment of discriminant validity

Differences across Survey Versions

Overall, having established that construct validity and discriminant validity were acceptable for all six sub-dimensions, the final 20-item scale performed adequately in all cases when teachers were prompted to consider their general practices over the prior year (versions A and B), regardless of whether each item was presented with examples (version A) or without (version B). Further, the average AVE and CR were higher in these two versions than in the versions that asked teachers to consider practices for a specific task only (versions C and D). The measure of co-creation (collaboration dimension) failed for one item (item 3 in Table 2) using version C, based on the AVE being less than .5 (AVE = .39) and the CR being less than .7 (CR = .65). Version D, on the other hand, passed the benchmarks for CR on all six sub-dimension scales, but marginally failed with respect to AVE for agency (personalisation dimension), due to item 2 (AVE = .48), and for customisation, due to item 4 (AVE = .49), as shown in the personalisation results (Table 3).

Taken together, our recommendation is to use the scale with or without examples if asking teachers about their general pedagogical practices over the last year. If asking only about practices with reference to a specific m-learning task, the recommendation is to use either the short or long form for gauging conversation, context or task; to offer examples when measuring agency and customisation; and to offer no examples when measuring contributions to existing digital content to gauge co-creation. The final validated iPAC scales, including these recommendations, are presented in Appendix 1.

Structural Model of M-Learning Practice and Experiences

A structural model was developed to consider the relationships between the latent constructs within each overarching dimension and to develop a measure of overall m-learning pedagogical practice. The model was used to test whether teachers who agreed they made extensive use of these m-learning practices also reported more enjoyable and rewarding m-learning experiences for their students. The measure of pedagogy is also considered with respect to how practices may vary by subject area and by student year.

The model was estimated using a covariance-based approach with all 349 observations. The comparative fit index (CFI) was .935 (Bentler 1990; Bentler and Bonett 1980) and the Tucker-Lewis index (TLI) was .923 (Tucker and Lewis 1973), indicating acceptable levels of incremental fit (Hu and Bentler 1999). The root mean square error of approximation (RMSEA) was estimated to be below .05 (p < .01) (Steiger and Lind 1980), again indicating acceptable model fit (Browne and Cudeck 1993). All parameter estimates were significant at the .01 level.
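For readers who wish to estimate a comparable model, the sketch below specifies the hierarchical structure described here using the third-party semopy package in Python. This is our compressed illustration, not the authors' estimation code: the indicator names and item counts are placeholders (the validated items appear in Appendix 1), and support for higher-order factors should be checked against the package version used.

```python
import pandas as pd
import semopy  # pip install semopy

# Placeholder indicator names; the validated items are in Appendix 1.
desc = """
conversation =~ conv1 + conv2 + conv3
co_creation =~ cocr1 + cocr2 + cocr3
agency =~ ag1 + ag2 + ag3
customisation =~ cu1 + cu2 + cu3 + cu4
context =~ cx1 + cx2 + cx3
task =~ tk1 + tk2 + tk3 + tk4
collaboration =~ conversation + co_creation
personalisation =~ agency + customisation
authenticity =~ context + task
practice =~ collaboration + personalisation + authenticity
experience =~ ex1 + ex2 + ex3 + ex4
experience ~ practice
"""

data = pd.read_csv("ipac_survey.csv")  # placeholder path
model = semopy.Model(desc)
model.fit(data)

# Compare fit statistics against the benchmarks reported in the text
# (CFI/TLI above .9, RMSEA below .05).
stats = semopy.calc_stats(model)
print(stats[["CFI", "TLI", "RMSEA"]])
```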

The model results and standardised path coefficients are presented in Fig. 3. The results show that collaboration in m-learning activities is reflected by conversation and co-creation. Personalisation in m-learning is slightly more strongly reflected by customisation than by agency, although both sub-dimensions are significant. Authenticity is reflected by context and task, with context somewhat dominant in reflecting this overarching dimension. The main overarching construct, m-learning pedagogical practice, is most strongly reflected by personalised m-learning practice, followed by authentic and then collaborative practice. The model predicts that teachers who adopt such m-learning pedagogical practices are likely to report positive m-learning experiences for their students (β = .57; p < .001).

Fig. 3 Structural model and estimates

Differences in M-Learning Pedagogical Practice

With the basic construct validated and standardised estimates available for each individual, we examined whether m-learning pedagogical practice differed across subject areas or across years of schooling.

With respect to subject area, we considered whether pedagogical practice differed across those undertaking and reporting on m-learning activities in English (18% of teachers), Mathematics (21%), Science (17%) or another subject area (43%). Owing to small sample sizes, the base group, ‘other subject area’, combined teachers working across languages (5%), computing studies (4%), TAS (3%), history (3%) and a range of other areas. An ANOVA was performed to test for differences in means, and pairwise comparisons using Tukey-Kramer honest significant differences were performed to further examine the nature of these differences (Tukey 1949).
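A sketch of this testing sequence using standard Python statistics libraries; the per-teacher factor scores and the file name are hypothetical:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-teacher standardised practice scores with labels.
df = pd.read_csv("practice_scores.csv")  # columns: score, subject

# One-way ANOVA across the four subject-area groups.
groups = [g["score"].to_numpy() for _, g in df.groupby("subject")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# Tukey-Kramer honest significant differences (accommodating unequal
# group sizes) for all pairwise subject-area contrasts.
print(pairwise_tukeyhsd(df["score"], df["subject"], alpha=0.05))
```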

The results revealed that m-learning practice using the iPAC dimensions was more prevalent among English teachers (M = .13) than Mathematics teachers (M = -.13), but this difference was not significant at the .05 level. Science teachers fell between these two groups in their engagement with m-learning pedagogies (M = .04), and were slightly higher than, but not significantly different from, teachers working in other areas (M = -.01; p = .44).

The results revealed significant variation in m-learning practice with respect to the level of schooling at which teachers were utilising m-technology for teaching activities (F = 3.023; p < .05). The pattern of results showed that m-learning practice was highest among teachers engaging with Year 12 students (M = .31) and lowest among those teaching Year 9 (M = -.33). On average, teachers of Year 9 students engaged in m-learning pedagogical practice significantly less than those teaching Years 8 or 10 (p < .05), as well as those teaching Year 12 (p < .01). Those teaching Year 12 engaged in m-learning pedagogical practice significantly more than those engaging with students in Years 7, 9 or 11 (p < .01).

In Australia, stages of the curriculum comprise two-year periods (e.g., Years 7 and 8, Years 9 and 10, and so on). One explanation for the pattern of results is that use appears to be lowest in the introductory year of a subject within a typical two-year stage (i.e., Years 7, 9 and 11) relative to the year thereafter (i.e., Years 8, 10 and 12). Teachers may perceive the second year of each stage as a period when students are better equipped with foundational knowledge, or have better-developed abilities, to engage in a deeper level of learning. For example, a student may receive an introduction to legal studies in Year 11 and be more likely to seek authentic, collaborative and personalised learning experiences, which may be facilitated by m-technology, in Year 12. Figure 4 presents these results graphically, with a dotted line highlighting the two-year pattern. Further research is encouraged to consider whether this explanation fits the pattern of practice observed among teachers.

Fig. 4 Pedagogical practice by year

Similarly, it would be useful to explore a range of other explanatory variables that may moderate the outcomes captured by the m-learning pedagogical practice scale. For example, practice may vary with teachers’ familiarity with mobile technologies, which may shape their perceptions of the ease or difficulty of implementing richer forms of pedagogical practice (Aubusson et al. 2014; Burke 2013). Similarly, various barriers to technology adoption may exist among schools owing to variation in policy or resourcing (Ertmer et al. 2012).

Conclusion

The iPAC framework was designed to describe mobile learning from a pedagogical perspective, underpinned by socio-cultural theory (Kearney et al. 2012). The framework has been embraced by practitioners and adopted for use in numerous countries throughout the world (Burden and Kearney 2018). Stemming from earlier work on this framework, a survey was developed to measure teachers’ pedagogy on a specific task, based on the dimensions of the framework (Burden and Kearney 2018). The task survey was then adapted to measure teachers’ general mobile pedagogies across a year (Kearney et al. in press). However, survey analysis revealed some concerns with both discriminant validity and construct validity.

This article charted the development of a more robust survey instrument with improved validity. The surveys in their current form have been found to measure the key dimensions and sub-dimensions of the iPAC framework. They are likely to be helpful to practitioners who wish to develop and design mobile activities, or who are already using mobile activities and wish to evaluate where these activities sit within a socio-cultural learning paradigm. Similarly, this newly validated instrument can be used by researchers to confidently investigate m-learning pedagogy.

The findings of this article on scale development indicate that the survey has strong construct validity when the versions evaluating general use over a year are used (both the version with examples and the short version without examples). With the versions evaluating a specific task, some modifications are needed: providing examples for the sub-dimensions of agency and customisation, and removing examples from the sub-dimension of co-creation. These changes can easily be made, and the authors will provide validated versions of these scales on a publicly available website (https://www.ipacmobilepedagogy.com) for teachers and researchers to use. With the support of such measurement instruments, practitioners in schools and in teacher education programs will be better able to develop mobile pedagogies, informed by the iPAC framework, that emphasise collaboration, personalisation and authenticity.

Future work could collect further responses to the surveys to allow further analysis of teacher, student and school characteristics, and subject areas. We trust that teachers and teacher educators will benefit from the opportunity to evaluate their pedagogies when developing mobile learning tasks.