The results of the scale development were based on 349 completed responses. Respondents were Australian teachers recruited through three channels: invitations via social media, a database of teachers who had registered interest in being contacted for the study, and a survey panel provider. Exploratory and confirmatory factor analyses were first undertaken to assess the measurement properties of the developed scale. Below we present the details of the final model that emerged from that analysis with respect to each of the main dimensions and sub-dimensions of m-learning pedagogical practice.
The Collaboration dimension was considered with respect to two underlying sub-dimensions: conversation and co-creation. With respect to conversation, the scale that included the first item (“Talked about the work displayed on the screen with others around them”) resulted in an average variance extracted (AVE) of less than .5 in all versions of the survey and in the analysis combining all sample data. For this reason, convergent validity could not be established (Fornell and Larcker 1981). When this item was dropped, however, all AVEs increased above .5, indicating an acceptable level of convergent validity. In addition, the composite reliability was above .7 in all cases, confirming the reliability of the items in relation to the individual construct (Raykov 1997). In retrospect, the first item referred to conversing with others around the mobile device, whilst the three other items referred to conversing online: as such, the finalised sub-dimension as measured by the three items in Table 2 reflects m-learning activities involving online conversations ‘through’ the device.
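The convergent validity and reliability criteria applied here (AVE above .5 and composite reliability above .7) follow directly from the standardised factor loadings. The following is a minimal sketch of both computations; the loadings are hypothetical illustrations only, not the study's estimates.

```python
# Sketch of the AVE and composite reliability (CR) criteria, assuming
# standardised items so that each error variance is 1 - loading**2.
# The loadings below are hypothetical, not the study's actual estimates.

def ave(loadings):
    """Average variance extracted: mean of squared standardised loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability (in the sense of Raykov 1997):
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

# Hypothetical loadings for a three-item sub-dimension
loadings = [0.72, 0.78, 0.81]
print(round(ave(loadings), 3))                    # convergent validity requires > .5
print(round(composite_reliability(loadings), 3))  # reliability requires > .7
```

Under these hypothetical loadings both benchmarks are met; dropping a weakly loading item, as was done for the first conversation item, raises the AVE because the mean squared loading increases.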
Co-creation was measured by three items and referred to learners’ use of the mobile device to collaboratively create digital content. The factor loadings and associated measures of reliability are presented in Table 2. The co-creation scale captures m-learning in terms of students working together to create, contribute to and share digital content. There was sufficient convergent validity and reliability at the aggregate level. Version C of the survey (specific task; with examples), however, showed that item 3 in this sub-dimension (‘[students] contributed to existing digital content’) performed less satisfactorily as a reflective measure of co-creation.
Personalisation captures the extent to which m-learning involves students choosing the parameters of their learning activities with respect to time, pace and location (i.e., agency) as well as the customisation of the m-learning activity based on the learning preferences and needs of the individual (i.e., customisation). With respect to agency, in all four survey versions, the inclusion of the four items to measure this sub-dimension resulted in an AVE below .5. However, the removal of item 4 (‘[students] chose how to express their thinking’) resulted in an acceptable measure overall (see Agency section of Table 3).
It is noted that the version involving teachers reporting on a specific m-learning activity without any examples for the items (i.e., version D) resulted in a measure of agency that did not meet the requirements for convergent validity (see Table 3). Overall, the agency items reflect the extent to which students choose the time, pace and place to undertake their m-learning activities.
Personalisation was also considered in terms of the extent to which teachers agreed that their students experienced customised learning as guided by their mobile devices. A four-item measure was supported based on appropriate factor loadings and reliability statistics. The inclusion of item 5 (‘[students] customised the learning to their requirements’) was found to slightly lower the AVE and CR in all cases. However, its inclusion had negligible effect with the exception of version D (specific task, no examples). Taken together, these customisation items capture the extent to which students’ device use and app preferences resulted in learning experiences tailored to their individual learning needs.
The authenticity of m-learning was conceptualised with respect to two underlying dimensions: context and task. With respect to context, a three-item measure was supported; excluding item 3 (“[Students] engaged in learning content that was relevant to them”) decreased reliability such that the AVE was unacceptable in three of the four versions and in the combined sample. Taken collectively, the context items capture the extent to which the time and place of the m-learning activities create meaning for learners. The factor loadings and reliability measures of the final three-item measure of context are presented in Table 4.
Authenticity with respect to task was captured by four proposed items (also see Table 4). The items are reflective of m-learning that involves working like an expert, participating in real-world activities and engaging in activities related to everyday life. The latent construct ‘task’ concerns the embedded learning that occurs in everyday life, with students as citizens.
Overall M-Learning Experiences
An overarching measure of m-learning activities was devised to reflect teachers’ views of the experiences for students with respect to learning, enjoyment and understanding of subject material. The original measure proposed an item to capture difficulty in learning a subject using mobile devices which, when reverse coded, would reflect the same overarching construct as the other measures. However, the item did not work as intended, given its lack of inter-item correlation with the four other items and the overall construct. In turn, the measure of overall m-learning experiences was best captured using the four items in Table 5, reflecting the extent to which teachers agreed that mobile devices help students learn, practise and improve their understanding of the subject in which they had been employing mobile technologies. This forms the dependent variable later considered in the structural model of m-learning pedagogy. The factor scores and measurement properties of the finalised scale for m-learning experience are presented in Table 5.
Discriminant validity is the extent to which each construct is sufficiently distinct from the other constructs. Whilst this arose as a concern in earlier stages of the scale development process, discriminant validity of the items was established using the outlined measures: the squared correlation between any two constructs was shown to be less than their respective AVEs (Fornell and Larcker 1981), as reported in Table 6.
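The Fornell-Larcker criterion applied here reduces to a simple pairwise comparison. A minimal sketch, using hypothetical AVE and correlation values rather than the study's estimates:

```python
# Fornell-Larcker (1981) discriminant validity check: the squared
# correlation between two constructs must be less than both AVEs.
# All values below are hypothetical illustrations.

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    """True if discriminant validity holds for the pair of constructs."""
    shared = corr_ab ** 2
    return shared < ave_a and shared < ave_b

# Two sub-dimensions with AVEs of .59 and .63, correlated at .55:
print(fornell_larcker_ok(0.59, 0.63, 0.55))  # .55**2 = .30 < both AVEs

# A correlation of .80 would fail the criterion (.64 exceeds .59):
print(fornell_larcker_ok(0.59, 0.63, 0.80))
```

The intuition is that each construct should share more variance with its own items (the AVE) than with any other construct (the squared inter-construct correlation).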
Differences across Survey Versions
Overall, having established that construct validity and discriminant validity were acceptable for all six sub-dimensions, the final 20-item scale performed adequately in all cases when teachers were prompted to consider their general practices over the prior year (versions A and B), regardless of whether each item was presented with examples (version A) or without (version B). Further, the average AVE and CR were higher in these two versions than in the versions that asked teachers to consider practices for a specific task only (versions C and D). The measure of co-creation (collaboration dimension) failed for one item (item 3 in Table 2) using version C, based on the AVE being less than .5 (AVE = .39) and the CR being less than .70 (CR = .65). Version D, on the other hand, passed the benchmarks for CR on all six sub-dimension scales, but marginally failed with respect to AVE in relation to agency (personalisation dimension) due to item 2 (AVE = .48) and in relation to customisation due to item 4 (AVE = .49), as shown in the personalisation results (Table 3). Taken together, the recommendation would be to use the iPAC scales with or without examples if asking teachers about their general pedagogical practices over the last year. If asking only about practices with reference to a specific m-learning task, the recommendation is to use either the short or long forms for gauging m-learning in terms of conversation, context or task; to offer examples when measuring agency and customisation; and to offer no examples when measuring contributions to existing digital content to gauge levels of co-creation. The final validated iPAC scales, including these recommendations, are presented in Appendix 1.
Structural Model of M-Learning Practice and Experiences
A structural model was developed to consider the relationships between latent constructs with respect to each overarching dimension and to develop a measure of overall m-learning pedagogical practice. The model was used to confirm that teachers who agreed they made extensive use of pedagogically grounded m-learning practices would report more enjoyable and rewarding m-learning experiences for students. The measure of pedagogy is also considered with respect to how practices may vary by subject area and by student year.
The model was estimated using a covariance-based approach with all 349 observations. The comparative fit index (CFI) was .935 (Bentler 1990; Bentler and Bonett 1980) and the Tucker-Lewis index (TLI) was .923 (Tucker and Lewis 1973), indicating acceptable levels of incremental fit (Hu and Bentler 1999). The root mean square error of approximation (RMSEA) was estimated to be below .05 (p < .01) (Steiger and Lind 1980), again indicating acceptable model fit (Browne and Cudeck 1993). All parameter estimates were significant at the .01 level.
The model results and standardised path coefficients are presented in Fig. 3. The results show that collaboration in m-learning activities is reflected by conversation and co-creation. Personalisation in m-learning is slightly more strongly reflected by customisation than by agency, although both sub-dimensions are significant. Authenticity is reflected by context and task, with context somewhat dominant in reflecting this overarching dimension. The main overarching construct, m-learning pedagogical practice, is most strongly reflected by teachers encouraging students to undertake personalised m-learning practice, followed by authentic and then collaborative m-learning practice. The model predicts that those who adopt such m-learning pedagogical practices are likely to report positive m-learning experiences (β = .57; p < .001).
Differences in M-Learning Pedagogical Practice
With the basic construct validated and standardised estimates available for each individual, we examined whether m-learning pedagogical practice differed across teachers of different subject areas or different years of schooling.
With respect to subject area, we considered whether pedagogical practice differed across those undertaking and reporting on m-learning activities in English (18% of teachers), Mathematics (21%), Science (17%) or another subject area (43%). Owing to small sample sizes, the base group, ‘other subject area’, combined teachers working across languages (5%), computing studies (4%), TAS (3%), history (3%) and a range of other areas. An ANOVA was performed to test for differences in means, and pairwise comparisons using Tukey-Kramer honestly significant differences were performed to further consider the nature of these differences (Tukey 1949).
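The one-way ANOVA used here tests whether the group means of the standardised practice scores differ more than would be expected from within-group variation. A minimal sketch of the F statistic, computed from scratch and cross-checked against SciPy; the group scores below are hypothetical illustrations, not the study's data:

```python
# One-way ANOVA on (hypothetical) standardised practice scores by subject.
from scipy import stats

def one_way_anova(groups):
    """Return the F statistic: between-group mean square over
    within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical scores (real groups each contained dozens of teachers)
english = [0.35, -0.10, 0.22, 0.18, 0.05]
maths   = [-0.30, -0.05, -0.22, 0.02, -0.15]
science = [0.10, -0.02, 0.08, -0.06, 0.11]

f_manual = one_way_anova([english, maths, science])
f_scipy, p = stats.f_oneway(english, maths, science)
print(round(f_manual, 3), round(f_scipy, 3), round(p, 3))
```

When such an omnibus F is significant, Tukey-Kramer pairwise comparisons (which adjust for unequal group sizes) identify which specific groups differ, as applied in the results that follow.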
The results revealed that m-learning practice using the iPAC dimensions was more prevalent among English teachers (M = .13) than mathematics teachers (M = -.13), although these differences were not significant at the .05 level. Science teachers were intermediate between these two groups in their engagement with m-learning pedagogies (M = .04) and slightly higher than, but not significantly different from, teachers working in other areas (M = -.01; p = .44).
The results revealed significant variation in m-learning practice with respect to the level of schooling in which teachers were utilising m-technology for teaching activities (F = 3.023; p < .05). The pattern of results showed that m-learning practice was highest among teachers engaging with Year 12 students (M = .31) and lowest among those in Year 9 (M = -.33). On average, teachers undertaking m-learning activities with students in Year 9 were significantly less likely to do so relative to those in Years 8 or 10 (p < .05), as well as those in Year 12 (p < .01). Those teaching Year 12 engaged in m-learning pedagogical practice significantly more than those engaging with students in Years 7, 9 or 11 (p < .01).
In Australia, stages of curriculum comprise two-year periods (e.g. Years 7 and 8, Years 9 and 10, and so on). One explanation for the pattern of results is that use appears to be lowest in the introductory year of a subject within the usual two-year stage (i.e., Years 7, 9 and 11) relative to the year thereafter (i.e., Years 8, 10 and 12). The second year of each stage may be perceived by teachers as a suitable period when students are better equipped with foundational knowledge, or have better developed abilities, to engage in a deeper level of learning. For example, a student may receive an introduction to legal studies in Year 11 and be more likely to seek authentic, collaborative and personalised learning experiences, which may be facilitated by m-technology, in Year 12. Figure 4 presents these results graphically with a dotted line to highlight the two-year pattern of results. Of course, further research would be needed to consider whether this explanation accounts for the pattern of practice observed among teachers.
Similarly, it would be useful to explore a range of other explanatory variables that may moderate the outcomes captured by the m-learning pedagogical practice scale. For example, variation in practice may be higher among teachers with high levels of familiarity with mobile technologies, possibly leading to different perceptions about the ease or difficulty of implementing richer forms of pedagogical practice (Aubusson et al. 2014; Burke 2013). Similarly, various barriers to technology adoption may exist among schools owing to variation in policy or resourcing (Ertmer et al. 2012).