The study used a qualitative, exploratory methodology. Teacher logbooks and interviews, collected over an 8-week period in an undergraduate university course, were used to investigate how teachers made use of learning analytics.
The seven participating teachers formed the teaching team for the course Designing Educational Materials (DEM) during one semester (10 weeks) at a university in the Netherlands. Teachers 1 and 2 were the coordinators of the course. Teachers 1, 2, and 3 alternated in providing the weekly lectures in the course. Teachers 2, 3, 4, 5, 6, and 7 taught the weekly working groups. The mean age of the teachers was 35.9 years (SD = 9.7). On average, they had 6.9 years of teaching experience (SD = 5.9) in higher education and 3.4 years of teaching experience (SD = 2.3) in the DEM course. Five teachers were female. None of the teachers had experience with using learning analytics (LA) as part of their teaching. All teachers gave informed consent at the start of the course, indicating they agreed their data would be used for research purposes.
One hundred and fifty students were enrolled in the course and divided into seven working groups (average working group size was 21.4 students, SD = 2.0). One hundred and forty-six students signed informed consent for their data to be used for research purposes. The mean age of these students was 23.5 years (SD = 4.6); 114 students (78.1%) were female. The course involved both individual and collaborative activities. The collaborative activities were carried out in groups of four or five students; each working group therefore consisted of five or six small groups.
Course structure in terms of student activity
The DEM course had two learning objectives, namely for students to gain (1) knowledge about design models, and (2) skills concerning DEM. Students’ mastery of the first objective was assessed using an individual final exam, and the second objective was assessed using a collaborative assignment in which groups of four or five students designed course materials (e.g., a customer service training for bank employees).
The course used a flipped classroom model (Staker and Horn 2012). Students were expected to study the course materials before coming to the face-to-face meetings, during which time was spent on processing and deepening knowledge. Figure 1 outlines students’ weekly activities. To study the course materials, students could read the book in which DEM was explained or watch one of the 24 web lectures (or both), which were based on the content of the book. In addition, students were required to process the learning material by submitting at least one formative multiple-choice question about the materials for that specific week. The questions were handed in through PeerWise, an application that enables students to see and practice with each other’s questions (Hardy et al. 2014; PeerWise 2016).
Every week, two face-to-face meetings were scheduled that were aimed at processing and deepening knowledge of DEM. In the 90-min interactive lecture, students discussed with a guest speaker who presented a real-life example of DEM. In addition, background was provided about DEM that students could subsequently use in the group assignment. The weekly working groups, guided by one of the working group teachers, lasted 4 h. In the first hour of the working group, the formative questions submitted by the students were answered and discussed as a whole group. The remaining time was used to work on small group assignments, during which the teacher consulted with each small group and answered questions when needed.
In the first 8 weeks of the course, the course followed the structure outlined in Fig. 1. In week 9, no face-to-face meetings were scheduled, and students handed in the group assignment. In week 10, the individual exam took place. Students’ online activities (PeerWise and web lectures) were automatically logged. Attendance at the face-to-face meetings was registered on paper. A further source of information about students’ activities was a short weekly questionnaire that students filled in at the start of each working group (see “Content and design of the weekly LA reports” section). Students’ activities served as input for the weekly teacher LA reports (see “Content and design of the weekly LA reports” section).
Course structure in terms of teacher activity
Teachers met with the students during lectures and working groups (see Fig. 1) in weeks 1–8 of the course. Every Monday from week 1 to week 8, before the lectures and working groups took place, the teaching staff met to discuss the planning for that week. During these meetings, teachers were provided with weekly LA reports about students’ activities. Section “Content and design of the weekly LA reports” details the content and design of the LA reports. The first report was handed out in week 2, after the first data about students’ activities had been collected. The LA reports were printed on paper for each member of the teaching team separately (corresponding to the working group they taught). To investigate teachers’ use of the reports, two instruments were used. First, teachers were asked to fill in a weekly logbook. Starting in week 2, after each staff meeting on Monday, teachers were sent an invitation with a link to the online logbook and were requested to fill it in that same day. In week 9, after all face-to-face meetings had ended, the teachers were interviewed about their experiences with and opinions of the LA reports. The logbooks and interviews are described in “Teacher logbooks and interviews” section.
Content and design of the weekly LA reports
An important decision was which data to display on the weekly LA reports concerning students’ activities. A criticism of LA has been that the choice of indicators of student learning “has been controlled by and based on the data available in learning environments” (Dyckhoff et al. 2013, p. 220, see also Slade and Prinsloo 2013), even though these measures do not always represent the information teachers are interested in or adequately measure student learning (Tempelaar et al. 2015). As has been increasingly advocated in the field of LA, data should be collected and interpreted by means of educational theory (Wise and Shaffer 2015). An explicit goal of this study was to provide meaningful LA reports to the teachers. Instead of relying on data that were automatically logged by the online systems, the researchers adopted the stance from action research to start by “think[ing] about the questions that have to be answered before deciding about the methods and data sources” (Dyckhoff et al. 2013, p. 220). Thus, the leading questions to determine the content of the LA reports were: what data are available; what data are considered important in literature; and what data are deemed important by the teaching staff.
Based on literature, the assumption was made that more student online activity would indicate more task effort and predict better learning outcomes on the exam and the group assignment (e.g., this relation was found for creating formative questions, Hardy et al. 2014, for study time, Lim and Morris 2009, and for engaging in online activities, Lowes et al. 2015). It was therefore important to gather information about which topics of the course materials students studied, as well as for how long they did so. In addition to individual study of the course materials, a large part of the course concerned the group assignment. Literature indicates that within groups, variables such as communication, trust, and perception of reliability of group members are important for successful collaboration because these factors enhance the social cohesion that is needed for group members to contribute equally to the task (Janssen et al. 2010; Kyndt et al. 2013). Therefore, these indicators were also added to the list of initial data types that were to be included on the LA reports.
This initial list of indicators was presented to the teaching staff, after which they could suggest additional types of information they wanted to have about their students. Based on the teachers’ experiences in previous cohorts of the DEM course, they suggested including the following additional information: (1) a measure indicating whether students feel they are able to translate the theoretical DEM model into their practical group assignment. This suggestion was interpreted theoretically to mean that teachers would like information about students’ self-efficacy; (2) a measure indicating whether there was agreement within the small groups about how to realize this translation; (3) a measure indicating whether students feel like they are on schedule with their assignment or are falling behind; and (4) an open question in which students could comment on the group collaboration.
The indicators based on literature and teacher suggestions were not automatically logged like the online activities, and thus had to be collected purposefully. This was done by means of a weekly questionnaire that students filled in at the start of each working group. Filling in the questionnaire took about 5 min and was a fixed part of the routine of the working groups. The average number of completed questionnaires over the 8 weeks was 130.71 (SD = 3.5), indicating a high and steady completion rate out of the 146 consenting students.
The final types of student information included in the LA reports are displayed in Table 1, in which a distinction is made between information concerning activities students carry out individually, and information concerning the group assignment. The report also displayed what percentage of students had watched the web lectures. Given that each web lecture started with an outline of its content and ended with a short outro, we estimated that students had seen the core of a web lecture if they had watched at least 75% of it; only those students were counted in this percentage.
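The 75% inclusion rule can be sketched as follows. This is an illustrative sketch, not the study's actual processing pipeline; the function name and data layout are assumptions.

```python
def fraction_watched_core(watch_fractions, threshold=0.75):
    """Share of students counted as having watched a web lecture.

    watch_fractions: one value per student, giving the proportion
    (0.0-1.0) of the lecture that student watched. A student is
    counted only when they watched at least `threshold` of it.
    """
    if not watch_fractions:
        return 0.0
    watched = sum(1 for f in watch_fractions if f >= threshold)
    return watched / len(watch_fractions)

# Example: 3 of 4 students watched at least 75% of the lecture
print(fraction_watched_core([0.9, 0.8, 0.5, 1.0]))  # → 0.75
```

The threshold keeps students who only sampled the outline or outro from inflating the reported viewing percentage.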
The measures were aggregated to working group level, so that teachers could not identify individual students. While from a scaffolding perspective it might be more beneficial for teachers to obtain information about each individual student separately, there were two reasons why it was decided to aggregate the data. First, the information might be considered sensitive and should therefore be treated carefully (Slade and Prinsloo 2013). Second, the LA reports might have become overly long and complex if they detailed information about every student separately. To secure students’ privacy and to keep the LA reports manageable for teachers, the data were therefore aggregated to working group level.
The LA reports had a basic, straightforward design using tables to display the quantitative information, which included the mean, standard deviation, and minimum and maximum for the working group (see Fig. 2) as well as for the whole course, so that teachers could compare their group to the course average. The open comments the students submitted in the weekly questionnaire were copied onto the LA report in a bulleted list (see Fig. 2).
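The aggregation described above amounts to reducing each indicator to four summary statistics per working group (and for the course as a whole, for comparison). A minimal sketch, assuming each indicator arrives as a list of per-student values; the function and variable names are hypothetical:

```python
import statistics

def summarize(values):
    """Aggregate one indicator (e.g., minutes of web lectures
    watched) to group level: mean, SD, min, and max, so that no
    individual student is identifiable on the report."""
    return {
        "mean": statistics.mean(values),
        "sd": statistics.stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }

# One report row: a working group compared with the whole course
group_minutes = [30, 45, 60, 20]
course_minutes = group_minutes + [50, 10, 70, 40]
print(summarize(group_minutes))
print(summarize(course_minutes))
```

Reporting the course-wide summary alongside the group summary is what allowed teachers to compare their working group to the course average.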
Teacher logbooks and interviews
Teachers were asked to fill in a weekly logbook about how they had used the LA reports. The logbook first contained a question about how the teachers had used the LA reports in the previous week to make choices concerning their teaching activities in the DEM course. They were then asked about their intentions for the coming week, based on the LA report they had received that same day prior to filling in the logbook. In total, 37 logbook entries were collected (on average 5.3 per teacher; min = 3, max = 7), yielding a response rate of 75%. The logbooks recorded the teachers’ responses to the LA reports during the course, and were used as input for the main data source, the teacher interviews.
In the teacher interviews, general questions were asked about the functions the LA reports fulfilled for the teachers in terms of preparing for meetings with students, during meetings with students, and for reflecting on meetings with students. Teachers were also asked about their general perceptions and opinions of adding the LA reports as a part of the course. Teachers were furthermore asked to look at one LA report as an example and to think aloud about what they could infer from such a report and how they valued the information displayed on the report. The logbooks were analyzed prior to the interviews and served as prompts during the interview when asking teachers to reflect on the functions the LA report had served for them. Therefore, the interviews were a form of cued retrospective reporting (Van Gog et al. 2005), where the logbooks served as cues to elicit teachers’ thoughts and experiences concerning the LA reports. For example, teachers were asked to report in the interview what functions the LA reports had served for them. For each teacher, the answers from the logbooks were selected in which teachers had mentioned such functions. During the interview, the logbook answers could then be shown to the teacher to prompt further explanation and reflection on the LA reports.
The interviews were semi-structured and lasted on average 30 min. Teachers were asked for permission to record the interviews, so that the interviewer could focus on the conversation and transcribe and analyze the interviews later.
During analysis, bottom-up and top-down approaches were combined, using both the raw data and a theoretical lens to identify relevant codes and themes. The grounded theory approach was used to code the transcribed interviews in three phases: open coding, axial coding, and selective coding (Boeije 2010; Strauss and Corbin 1994). Open coding means the data are read, meaningful information is determined, and codes are created and assigned. Open codes were created for two dimensions, corresponding to the research questions: what functions of the LA reports the teachers mentioned, and which challenges they raised. After five interviews, saturation occurred and no new codes were needed. Interviews six and seven were also open coded and used to further refine the list of open codes.
During axial coding, the relationships between the codes were further examined and main and sub-codes were distinguished. The theoretical lens of scaffolding was used in this process. In particular, the distinction between diagnosing students’ needs and intervening in the learning process became relevant. After thematically sorting the open codes, reliability of the coding scheme was established by calculating interrater reliability. For the codes for functions and challenges, 20% of the fragments were coded by two researchers, resulting in 90% agreement. The resulting axial coding scheme consisted of 10 codes for functions and 9 for challenges. After establishing reliability, the whole set of fragments was recoded with the final axial code book.
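The reported interrater reliability is simple percent agreement: the share of double-coded fragments to which both researchers assigned the same code. A minimal sketch of that computation (the codes and fragments here are illustrative, not the study's data):

```python
def percent_agreement(coder_a, coder_b):
    """Percent agreement between two coders over the same fragments.

    coder_a, coder_b: equally long lists with the code each
    researcher assigned to fragment i.
    """
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same fragments")
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return 100 * matches / len(coder_a)

# 9 of 10 double-coded fragments received the same code
a = ["F1", "F2", "C3", "F1", "C1", "F4", "C2", "F1", "F3", "C3"]
b = ["F1", "F2", "C3", "F1", "C1", "F4", "C2", "F1", "F3", "C1"]
print(percent_agreement(a, b))  # → 90.0
```

Percent agreement does not correct for chance agreement (unlike, e.g., Cohen's kappa), but it is a common and transparent reliability measure for coding schemes of this size.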
In the final stage of selective coding, connections between codes were sought in order to determine the core themes within the data. Core themes not only appear frequently, but are also central in the sense that they are connected to many other codes (Boeije 2010). A Teacher × Code table was created. For each code, the corresponding fragments were read and compared to identify the relations between the functions and challenges of the LA reports. Thus, the result of this phase was the identification of the core themes and the relations between them.