4.1 Introduction and Scope

4.1.1 Scope

The goal of this chapter is to:

  • introduce the basics of methods and tools for analysing and interpreting educational data for facilitating educational decision making, including course and curricula design.

4.1.2 Chapter Learning Objectives

This chapter's learning objectives are mapped to the Learn2Analyse Educational Data Literacy Competence Profile:

  • Know how to identify data sources within the educational design process (Competence 1.1)
  • Be able to explain key concepts of data quality for data collected in the educational design process (Competence 1.2)
  • Be able to design automated and semi-automated interventions based on educational data (Competence 4.4)
  • Know and understand how to revise course tasks and contents based on educational data (Competence 5.1)
  • Be able to construct adequate criteria and indicators for evaluating the impact of a data-driven intervention in the educational design of online and blended courses (Competence 5.2)
  • Be able to demonstrate awareness of data privacy and distinguish between different levels of data protection in the educational design of online and blended courses (Competence 6.2)
  • Be able to explain the differences between the concepts of authorship, ownership, data access, renegotiation, and data-sharing in educational design (Competence 6.3)

4.1.3 Introduction

This chapter will introduce the basics of methods and tools for analysing and interpreting educational data for facilitating educational decision making, including course and curricula design. Teaching analytics use static and dynamic information about the design of learning environments for near real-time modelling, prediction, and optimisation of learning artefacts, learning designs, learning processes, curriculum designs, and educational decision making.

  • The first topic focuses on data sources for supporting teaching analytics. You will reflect on the instructional design process and locate data sources for optimising learning environments as well as understand limitations and requirements for data quality.

  • The second topic includes critical reflections on data ethics and privacy principles. You will build awareness toward data privacy, distinguish different levels of data protection and identify issues of authorship, ownership, data access and data-sharing.

  • The third topic addresses the application and communication of educational data and analytics findings to various stakeholders. You will design and revise automated and semi-automated interventions as well as apply methodologies for improving the design of learning environments, teaching processes as well as curricula.

To warm up, explore the “didactic triangle” in Fig. 4.1 and reflect on what data may stem from each of the key concepts and their related interactions.

Fig. 4.1 Didactic triangle: learner, teacher, and content at the corners of the triangle

4.2 Data Sources for Supporting Teaching Analytics

4.2.1 Learning and Teaching

According to Seel and Ifenthaler (2009), learning involves a stable and persisting change of what a person knows, requiring mental representations. The processes that result in learning (e.g., learning activities) can be and often are distinguished from the products of learning (e.g., learning outcomes), as discussed by Spector et al. (2014). Several theories of learning have been postulated over the 20th and 21st centuries: Behaviourism, Cognitivism, Constructivism, Connectivism. Figure 4.2 illustrates the theories of learning, how learning is conceptualised and what factors may influence learning.

Fig. 4.2 Overview of learning theories: behaviourism, cognitivism, constructivism, and connectivism. (Ifenthaler & Schumacher, 2016a, b)

Teaching is considered to be deliberate action undertaken with the intention of facilitating learning. Hence, when it comes to teaching, the relevant input and output characteristics for designing a learning environment need to be identified. The elementary parts of teaching include the matching of content elements, psychological operations, and didactic considerations (Scheerens et al., 2007). Doyle (1985) defines seven key criteria for effective teaching as follows:

  1. Teaching goals are clearly formulated;
  2. The course material to be followed is carefully split into learning tasks and is placed in sequence;
  3. The teacher explains clearly what the pupils must learn;
  4. The teacher regularly asks questions to gauge pupils’ progress and understanding;
  5. Pupils have ample time to practice what has been taught, with much use of “prompts” and feedback;
  6. Skills are taught until mastery is automatic;
  7. The teacher regularly tests the pupils and calls on them to be accountable for their work.

Table 4.1 provides an overview of phases in the structuring of teaching (Scheerens et al., 2007):

Table 4.1 Structuring of teaching

4.2.2 Design of Learning Environments

Learning environments are physical or virtual settings in which learning takes place. Learning theory provides the foundation for the design of learning environments. However, there is no simple recipe for designing learning environments (Ifenthaler, 2012). Generally, the design of learning environments addresses three simple questions: What is taught? How is it taught? How is it assessed? Yet designing a learning environment involves more than answering these three questions. Rather, it includes systematic analysis, planning, development, implementation, and evaluation phases (see Fig. 4.3).

Fig. 4.3 The ADDIE model: analysing, designing, developing, implementing, and evaluating, with revision cycles. (Gustafson & Branch, 2002)

The analysis phase includes needs analysis, subject matter content analysis, and job or task analysis. The design phase includes the planning for the arrangement of the content of the instruction. The development phase results in the tasks and materials that are ready for instruction. The implementation phase includes the scheduling of instruction, training of instructors, preparing time tables, and preparing evaluation parts. The evaluation phase includes various forms of formative and summative assessments.

4.2.3 Learning Design

Instructional design is rooted in behaviourist learning theories and tends to focus on learning products (such as learning objects and machine-readable representations) on the one hand and on delivery systems and the increasing automation of designs on the other. Learning design, in contrast, is rooted in constructivist learning theories and focuses on making the design process explicit and shareable. Table 4.2 lists definitions of learning design that exemplify the roots of this research field.

Table 4.2 Overview on definitions of learning design

4.2.4 TPACK Model

At the heart of good teaching with technology are three core components: content, pedagogy, and technology, plus the relationships among and between them (Mishra & Koehler, 2006). The TPACK model (i.e., Technological Pedagogical Content Knowledge) describes the core components of teaching where content (what you teach) and pedagogy (how you teach) must be the basis for any technology that is used in a learning environment in order to support and enhance learning (see Fig. 4.4).

Fig. 4.4 The TPACK model: technological, pedagogical, and content knowledge and their intersections (TPK, TCK, PCK). (Mishra & Koehler, 2006)

Pedagogical Content Knowledge (PCK) is the knowledge that teachers have about their content and about how to teach that specific content. Technological Pedagogical Knowledge (TPK) is the set of skills which teachers develop to identify the best technology to support a particular pedagogical approach. Technological Content Knowledge (TCK) is the set of skills which teachers acquire to help identify the best technologies to support their students as they learn content.

Questions and Teaching Materials

  1. For each theory of learning, influencing factors for learning can be distinguished. Which of the following factors can be related to Behaviourism?

     (a) Active participation and networking.
     (b) Building ties for social networks.
     (c) Providing rewards in relation to achievements.
     (d) Active engagement and stimuli for social collaboration.

Correct Answer: c

  2. Learning Design and Instructional Design have different origins and conceptual foundations. Still, the purpose of these disciplines can be summarised as follows:

     (a) They include seven procedural steps for reviewing learning quality.
     (b) They include a systematic perspective on the planning, implementation and evaluation of learning environments.
     (c) They include assessment criteria for competences.
     (d) They include two features of learning strategies.

Correct Answer: b

  3. The didactic triangle consists of …

     (a) Learner
     (b) Teacher
     (c) Content
     (d) Technology
     (e) Environment

Correct Answer: a, b, c

  4. Effective teaching includes …

     (a) Time pressure
     (b) Formative assessment
     (c) Pure exploration
     (d) Clearly formulated goals
     (e) Sequenced learning tasks

Correct Answer: b, d, e

  5. A key principle of learning design includes …

     (a) Limitation of learning time
     (b) Representation of learning activities
     (c) Real-time monitoring of performance
     (d) Governance of exam regulations
     (e) Exclusion of supportive technology

Correct Answer: b

  6. ACTIVITY/PRACTICE QUESTION (Reflect on)

     We encourage you to reflect on your teaching experience supported through data. You may reflect on:

     • Do you refer to different sources of data when designing your learning environments?
     • Do you analyse data to inform your teaching practice in (near) real-time, i.e., while teaching a class (online or face-to-face)?
     • Do you use specific tools to collect and analyse data to inform your teaching?
     • Do you strictly follow one theory of learning (e.g., Behaviourism, Cognitivism) when designing your learning environments?
     • Do you evaluate each phase of the instructional design process (i.e., analysis, design, development, implementation) before moving to the next phase?

4.3 Data Sources Within the Instructional Design Process

4.3.1 Broadening the Perspective for Data-Driven Education

The idea of grounding instructional design decisions on educational data has been around for some time. Traditionally, evidence-based instruction has used summative evaluation data to (re-)design instructional programs and systems. Immediate interventions based on formative evaluations have been conducted significantly less frequently. Research on learning and instruction brought attention to additional data sources, as summarized in the 3P-model of teaching and learning (Biggs et al., 2001): “presage” data focuses on student factors and the teaching context, “process” data on learning focused activities, and “product” data on learning outcomes. Historically, most of this data has been collected with social science research methods. Surveys and questionnaires have been used most often, at times supplemented by different forms of observations.

Online teaching and learning has created a wide range of opportunities for data-driven education. Many more data sources are now at hand, as well as new technologies for data handling and analysis. While it seems impossible to create a complete list of potential data sources, educational data and the respective data sources can be systematised along a number of attributes:

  • Educational data can be primary (direct) data, that is, data collected especially for the purpose of improving teaching and learning. Secondary (indirect) data, on the other hand, has initially been collected for other purposes but can also be used for teaching analytics.
  • Data can be collected candidly and transparently, meaning that the purpose of data collection is clear, as in a direct survey, interview, or eye-tracking study. Educational data can also be collected automatically and with little or no transparency, as is the case with user trails within a system or logging data.
  • Educational data can be oriented toward the learning outcome or the learning process.
  • Educational data can be static, that is, stable over a defined period of time (e.g., personality traits), or dynamic, that is, volatile over the course run (e.g., motivational and emotional states).
  • Educational data can be sourced on the individual or on a collective level, and it can be idiosyncratic or generalisable.
  • Educational data can refer to learner variables (person focus, e.g., personal learning goals), to contextual variables (environment focus, e.g., curricular learning objectives), or to learning behaviour (person-environment-interaction focus, e.g., course performance).
  • Finally, educational data can be open and accessible to anyone (e.g., curriculum data, syllabi), or it can be protected (e.g., discussion posts within a course environment) – a distinction which is not always as straightforward as it may sound (Greller & Drachsler, 2012).
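To make these dichotomies concrete, the following minimal sketch tags a few typical data sources along the attribute dimensions just described; all names and example classifications are illustrative, not taken from the chapter.

```python
# A sketch for profiling educational data sources along the attribute
# dichotomies described above (hypothetical examples).
from dataclasses import dataclass

@dataclass
class DataSourceProfile:
    name: str
    primary: bool           # collected specifically to improve teaching/learning?
    transparent: bool       # purpose of collection evident to the data subject?
    process_oriented: bool  # learning process (True) vs. learning outcome (False)
    dynamic: bool           # volatile (True) vs. stable over a period (False)
    individual: bool        # individual (True) vs. collective level (False)
    focus: str              # "person", "environment", or "person-environment"
    open_access: bool       # openly accessible vs. protected

sources = [
    DataSourceProfile("syllabus", False, True, False, False, False, "environment", True),
    DataSourceProfile("LMS log data", False, False, True, True, True, "person-environment", False),
    DataSourceProfile("motivation questionnaire", True, True, True, True, True, "person", False),
]

# e.g., list all protected sources collected without transparency
protected = [s.name for s in sources if not s.open_access and not s.transparent]
print(protected)  # ['LMS log data']
```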

4.3.2 Data Sources Within a Holistic Analytics Framework

Ifenthaler and Widanapathirana (2014) developed and empirically validated a holistic learning analytics framework that connects a number of different data sources (#1 to #5). A major aim of this model is to create a link between learner characteristics (e.g., prior learning), learning behaviour (e.g., access of materials), and curricular requirements (e.g., learning objectives, sequencing of learning) (see Fig. 4.5).

Fig. 4.5 Holistic learning analytics framework: individual characteristics, social web, physical data, curriculum, and online learning environment linked to a learning analytics and reporting engine and a personalisation and adaption engine, under institutional governance. (Ifenthaler & Widanapathirana, 2014)

4.3.3 Sources of Learner Data

Within the holistic learning analytics framework (see Fig. 4.5), three main areas of learner data and respective data sources have been differentiated. Characteristics of (1) individual learners include socio-demographic information, personal preferences and interests, responses to standardized inventories (e.g., learning strategies, achievement motivation, personality), demonstrated skills and competencies (e.g., computer literacy), acquired prior knowledge and proven academic performance, as well as institutional transcript data (e.g., pass rates, enrolment, dropout, special needs). Associated interactions with the (2) social web include preferences of social media tools (e.g., Twitter, Facebook, LinkedIn) and social network activities (e.g., linked resources, friendships, peer groups, web identity). Physical data (3) from outside the educational system is collected through various systems, for example through a library system (i.e., university library, public library). Other physical data may include sensor and location data from mobile devices (e.g., study location and time), or affective states collected through reactive tests (e.g., motivation, emotion, health, stress, commitments). Especially non-cognitive data (i.e., emotional and motivational data) can provide deep insights into individual learning processes (D’Mello, 2017).

4.3.4 Sources of Online Learning Data

Furthermore, there are two areas of data and respective data sources related to online learning behaviour (see Fig. 4.6). Rich information is available from learners’ activities in the online learning environment (4) (i.e., learning management system, personal learning environment, learning blog). These mostly numeric data refer to logging on and off, viewing or posting discussions, navigation patterns, learning paths, content retrieval (i.e., learner-produced data trails), results on assessment tasks, and responses to ratings and surveys. More importantly, rich semantic and context-specific information is available from discussion forums as well as from complex learning tasks (e.g., written essays, wikis, blogs). Additionally, interactions of facilitators with students and the online learning environment are tracked. Closely linked to the information available from the online learning environment is the curriculum information (5), which includes metadata of the online learning environment. These data reflect the learning design (e.g., sequencing of materials, tasks, and assessments), and learning objectives as well as expected learning outcomes (e.g., specific competencies). Ratings of materials, activities, and assessments as well as formative and summative evaluation data are directly linked to specific curricula, facilitators, or student cohorts (Ifenthaler & Widanapathirana, 2014).

Fig. 4.6 Profiles approach using static and dynamic data: student profiles with static and dynamic parameters, learning profiles with dynamic parameters, and curriculum profiles with static parameters. (Ifenthaler & Widanapathirana, 2014)

In summary, teaching analytics use static and dynamic data sources for informing learning and teaching processes as well as outcomes. Figure 4.6 summarises the profiles approach, which includes static and dynamic data from students (e.g., demographic information, academic performance), dynamic data of learning behaviour (e.g., navigation pathways), and static data defined in the curriculum (e.g., learning outcomes, learning artefacts).
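A minimal sketch of how the three profiles might be represented and linked for a teaching-analytics report follows; the structures and field names are assumptions for illustration, not the authors' implementation.

```python
# A sketch of the three profiles and a simple join for reporting.
from dataclasses import dataclass, field

@dataclass
class StudentProfile:              # static and dynamic learner parameters
    student_id: str
    demographics: dict                                  # static
    performance: list = field(default_factory=list)     # dynamic

@dataclass
class LearningProfile:             # dynamic behavioural parameters
    student_id: str
    navigation_events: list = field(default_factory=list)

@dataclass
class CurriculumProfile:           # static curricular parameters
    course_id: str
    learning_outcomes: list = field(default_factory=list)

def link_profiles(student, learning, curriculum):
    """Join the three profiles into one record for a teaching-analytics report."""
    return {
        "student": student.student_id,
        "course": curriculum.course_id,
        "events": len(learning.navigation_events),
        "outcomes": curriculum.learning_outcomes,
    }

report = link_profiles(
    StudentProfile("s1", {"cohort": "2024"}),
    LearningProfile("s1", ["login", "view_task"]),
    CurriculumProfile("c1", ["analyse educational data"]),
)
print(report)
```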

Questions and Teaching Materials

  1. Which learning data can be related to the learning profile?

     (a) Forum activity, interaction with learning materials, assessment attempts
     (b) Forum posts and historical grades
     (c) Forum visits and learning objectives
     (d) Forum activity, emotional states, place of living

Correct Answer: a.

  2. Why do teaching analytics require a reference to curricular statements, such as learning outcomes?

     (a) They help to understand the needs of a learner.
     (b) They function as a benchmark for adaptive feedback that a teacher can relate to.
     (c) They help the administrator to monitor the expertise of a teacher.
     (d) Active engagement and stimuli for social collaboration.

Correct Answer: b.

  3. What outcomes can be produced from a reporting engine?

     (a) Dashboard
     (b) Heatmap
     (c) Personalised help
     (d) Collaborative scaffolds
     (e) Automated report

Correct Answer: a, b, e.

  4. The profiles approach includes the following parameters:

     (a) Alpha-numeric parameters
     (b) Static parameters
     (c) Dynamic parameters
     (d) Component parameters
     (e) Change parameters

Correct Answer: b, c.

  5. ACTIVITY/PRACTICE QUESTION (Reflect on)

     We encourage you to reflect on your teaching experience supported through data. You may reflect on:

     • Are you able to access relevant data to inform your teaching anytime required?
     • Are your students aware of the data you are using for informing your teaching?
     • A major aim of the holistic analytics model is to create a link between learner characteristics, learning behaviour, and curricular requirements. Please name three or more data sources for which it might be worthwhile to establish such a connection. Where do you see logical relationships that might be helpful for analytics?
     • How would you try to collect emotional and motivational data? What could be feasible data sources?

4.4 Key Concepts of Data Quality and Limitations of Data Meaningfulness

4.4.1 Data Quality in Educational Contexts

As the amounts of educational data grow larger, the issue of data quality is becoming more and more important. ‘Big Data’ in education is characterized by the same attributes as in other domains: Volume, Velocity, Variety, and Value (Katal et al., 2013). Volume refers to the tremendous amount of data, usually measured in TB or above. Velocity means that data are being generated at an unprecedented speed and must be dealt with in a timely manner. Variety indicates that big data comprises all kinds of data types, a diversity commonly divided into structured and unstructured data. Finally, Value represents low value density: value density is inversely proportional to total data size, so the greater the scale of the data, the lower its relative value (Cai & Zhu, 2015).

Already on a smaller scale, data quality is of crucial importance for teaching and learning analytics, as ‘poor data’ can impede valid inferences and hamper subsequent educational interventions. However, there is no common definition of educational data quality to date. If the broad ISO 9000:2015 definition of quality is applied, data quality can be defined as the degree to which a set of characteristics of data fulfils pre-defined requirements. These requirements are usually described in quality dimensions, each with specific elements and indicators for measurement (Cai & Zhu, 2015).

Despite the complexity of the topic, the majority of the numerous frameworks on data quality share a common core of quality dimensions that can be transferred to education datasets (Akoka et al., 2007; Goasdoué et al., 2007; Laranjeiro et al., 2015): completeness, accuracy, consistency, freshness and relevancy.

4.4.2 Core Dimensions of Data Quality

Data Accuracy is defined as the correctness and precision used for representing real-world data in an information system. Data needs to be precise, valid and error-free. Three main accuracy definitions have been established in the current research literature: (i) Semantic correctness describes how well data represent states of the real world, i.e., the semantic distance between system-based data and real-world data. For instance: is the recorded address “99, Main Street” actually the address of Mary? (ii) Syntactic correctness relates to the degree to which data is free of syntactic errors, for example misspellings and format discordances, i.e., the syntactic distance between system-based data and the expected data representation. For example: is the address “99, Main Street” valid and well written? (iii) Precision refers to the level of detail of data representation, i.e., the gap between the level of detail of system-based data and its expected level of detail (Peralta, 2006). For instance, the amount “€ 98” is a more precise representation of the cost of a product than “€ 100”.

Data Completeness is defined as the degree to which all relevant data have been recorded in an information system. It is expected that all relevant facts of the real world are represented in the information system (Gertz et al., 2004). Two aspects of completeness are differentiated: (i) Coverage meaning whether all required entities for an entity class are included; (ii) Density describing whether all data values are present (not null) for required attributes (Peralta, 2006).

Data Consistency refers to the degree to which data satisfies a set of integrity requirements. Common consistency requirements include checks for null or missing values, key uniqueness, or functional dependencies (Peralta, 2006).

Data Freshness captures how old data is: Is it fresh enough with respect to user expectations? Does a given data source hold the most recent data? Is the extracted data stale? When was the data produced? There are two main freshness definitions in the literature: (i) Currency describes how stale data is with respect to the sources. It captures the gap between the extraction of data from the sources and its delivery to the users. For example, given an account balance, it may be important to know when it was obtained from the bank data source. (ii) Timeliness describes how old data is (since its creation/update at the sources). It captures the gap between data creation/update and data delivery. For example, given a top-ten book list, it may be important to know when the list was created, no matter when it was extracted from the sources (Akoka et al., 2007).

Data Relevancy corresponds to the usefulness of the data. Among huge volumes of data, it is often difficult to identify which data are useful. In addition, the available data is not always adapted to user requirements, which might lead to the impression of poor relevancy. Relevancy plays a crucial part in the acceptance of a data source. This dimension, usually evaluated by the rate of data usage, is determined by the user and is thus not directly measurable by quality tools.
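The following sketch illustrates how some of these dimensions could be checked automatically on a small educational dataset; it assumes a pandas DataFrame of enrolment records with hypothetical columns and thresholds.

```python
# A sketch of automated checks for completeness, consistency, and freshness.
import pandas as pd

df = pd.DataFrame({
    "student_id": ["s1", "s2", "s2", "s4"],                # duplicate key
    "grade": [1.3, None, 2.0, 5.5],                        # null + out of range
    "updated_at": pd.to_datetime(
        ["2024-01-10", "2024-01-12", "2024-01-12", "2022-06-01"]),
})

# Completeness (density): share of non-null values per attribute.
density = df.notna().mean()

# Consistency: key uniqueness and a simple integrity rule (grades 1.0-5.0);
# nulls are left to the completeness check above.
duplicate_keys = df["student_id"].duplicated().sum()
out_of_range = (~df["grade"].between(1.0, 5.0) & df["grade"].notna()).sum()

# Freshness (timeliness): age of each record relative to a reference date.
age_days = (pd.Timestamp("2024-01-15") - df["updated_at"]).dt.days
stale_records = (age_days > 180).sum()

print(density, duplicate_keys, out_of_range, stale_records, sep="\n")
```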

4.4.3 Dimensions of Educational Data Quality

Valid examples of the core dimensions of educational data quality could include the following (see Table 4.3):

Table 4.3 Dimensions of educational data quality

4.4.4 Data Quality Problems

Laranjeiro et al. (2015) classify data quality problems with respect to the source of information: single or multiple. Single-source problems are related to the (wrong or absent) definition of integrity constraints. Multi-source problems relate to the integration of data from multiple sources, which, for instance, might hold different representations of the same values, or contradictions. Each of these two classes of problems is further divided into schema-level problems, which concern defects in the definition of the data model and schema, and instance-level problems, which are not visible at the schema level and cannot be prevented by restrictions at the schema level (or by redesign).

In exchange for the user-determined ‘relevancy’ dimension, the authors added ‘Accessibility: the degree to which data can be accessed in a specific context of use’ to their synopsis of data quality problems (see Table 4.4).

Table 4.4 Data quality problems mapped into dimensions

Questions and Teaching Materials

  1. An example of data accuracy is

     (a) Academic performance record includes several data points of study progress
     (b) Event dates are stored in various formats
     (c) Student number in a campus management system matches the student number in the learning management system
     (d) Real-time user behaviour is stored for at least 10 days

Correct Answer: c.

  2. Volume refers to

     (a) The number of learners and teachers
     (b) The capacity of a human brain
     (c) The voice level related to data storage devices
     (d) The tremendous amount of data, usually measured in TB or above

Correct Answer: d.

  3. Missing data with reference to data quality can be mapped to

     (a) Accessibility
     (b) Accuracy
     (c) Freshness
     (d) Consistency
     (e) Completeness

Correct Answer: b, e.

  4. ACTIVITY/PRACTICE QUESTION (Reflect on)

     We encourage you to reflect on your teaching experience supported through data. You may reflect on:

     • Please think of one type of educational data as introduced in the previous section. How would this data have to be characterised on the different dimensions of data quality in order to be a good source of information? Please explain your indicators for the dimensions and your ratings according to those indicators.

4.5 Data Ethics and Privacy Principles for Teaching Analytics

4.5.1 Ethical and Privacy Challenges Associated with the Application of Educational Data Analytics

Educational institutions have always used a variety of data about students, teachers and the learning environment, such as socio-demographic information, grades on entrance qualifications, or pass and fail rates, to inform their curricular planning and academic decision-making as well as their resource allocation. Such data can help to successfully predict students’ dropout rates and to enable the implementation of strategies for supporting learning and instruction as well as for retaining students (Ifenthaler & Tracey, 2016). However, serious concerns and challenges are associated with the application of data analytics in educational settings:

  1. Not all educational data is relevant and equivalent. Therefore, the validity of data and its analyses is critical for generating useful summative, real-time, and predictive insights.

  2. Limited access to educational data generates disadvantages for involved stakeholders. For example, invalid forecasts may lead to inefficient decisions and unforeseen problems.

  3. Information from distributed networks and unstructured data cannot be directly linked to educational data collected within an institution’s environment.

  4. Ethical and privacy issues are associated with the use of educational data for learning analytics. This concerns how personal data is collected and stored as well as how it is analysed and presented to different stakeholders.

Consequently, educational institutions need to address ethics and privacy issues linked to educational data analytics: They need to define who has access to which data, where and how long the data will be stored, and which procedures and algorithms to implement for further use of the available educational data (Ifenthaler, 2015).
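Such institutional decisions could be recorded, for instance, in a machine-readable governance policy. The following minimal sketch uses hypothetical entries and field names; it only mirrors the questions of access, storage, and permitted procedures raised above.

```python
# A sketch of a machine-readable governance policy: who may access which
# data, where and how long it is stored, and for which procedures.
GOVERNANCE_POLICY = [
    {
        "data": "LMS navigation events",
        "access": ["course instructor", "learning analytics team"],
        "storage_location": "institutional data centre (EU)",
        "retention_days": 365,
        "permitted_procedures": ["dropout prediction", "dashboard reporting"],
    },
    {
        "data": "discussion forum posts",
        "access": ["course instructor"],
        "storage_location": "institutional data centre (EU)",
        "retention_days": 180,
        "permitted_procedures": ["formative feedback"],
    },
]

def may_access(role: str, data: str) -> bool:
    """Check a role's access right for a given data category."""
    entry = next(p for p in GOVERNANCE_POLICY if p["data"] == data)
    return role in entry["access"]

print(may_access("course instructor", "LMS navigation events"))  # True
```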

4.5.2 Privacy in the Digital World

Within the digital world, many individuals are willing to share personal information without being aware of who has access to the data, how and in what context the data will be used, or how to control ownership of the data. Accordingly, data are generated and provided automatically by online systems, which limits the control and ownership of personal information in the digital world (Slade & Prinsloo, 2013).

There are several reasons why learners would like to keep their information private: First, there are competitive reasons; for example, if a learner performs poorly, a fellow student should not know about it. Second, there are personal reasons; for example, a learner might not want to share information about himself or herself. There are also country-specific differences in who owns personal data: In the United States, collected data belongs to the collectors; in Europe, personal data belongs to the individual (e.g., the learner).

Table 4.5 provides an overview of privacy theories in the digital age. The first two concepts (1, 2) emphasize requirements for reaching privacy in a certain situation and focus on protection and normative or descriptive privacy. Early privacy theories (3) are based on control or limitation: Control refers to the influence of individuals on the flow of their personal data, whereas limitation means the possibility to prevent others from accessing personal data. Contemporary privacy theories (4) incorporate these earlier theories as well as normative and descriptive privacy concepts but go beyond them in being more holistic and applicable to different contexts (Ifenthaler & Schumacher, 2016).

Table 4.5 Overview of privacy concepts

4.5.3 Ethical Principles

Ethical principles for educational data analytics have been developed to underpin decision-making processes and provide guidance in the application of ethics (West et al., 2016). The key principles, as outlined and used in healthcare settings, are also relevant to the discussion of educational data analytics:

  1. Respect for Autonomy generally translates to the idea of self-determination and the right of people to make their own decisions.

  2. Non-maleficence essentially means that we should do no harm.

  3. Beneficence means that in addition to doing no harm, we should also pursue good outcomes for others.

  4. Justice translates into the concept of fairness and is often related to the distribution of resources based on equity, need, effort, merit and the market.

Figure 4.7 presents a four-step framework that views ethical decision making as an operational process. The aim of this framework is to concisely model how a complex issue can be mapped, refined, decided on, and documented within a fairly linear process suited to the busy operating environments of most institutions. There may be circumstances where reflection or new information means retracing earlier steps, and the framework does not oppose doing so (West et al., 2016).

Fig. 4.7 Ethical decision-making process for learning analytics: (1) explore the issue, (2) apply an institutional lens to the issue, (3) view the alternative actions in light of the ethical theoretical approaches, (4) document the decision made. (West et al., 2016)

Questions and Teaching Materials

  1. Ethical key principles for educational data analytics include …

     (a) Respect for autonomy
     (b) Building advantages over competitors
     (c) Pursuing good outcomes for all involved stakeholders
     (d) Doing no harm to any involved stakeholder

Correct Answer: a, c, d.

  2. Descriptive privacy is based on the assumption of natural means, e.g., physical barriers.

     (a) No
     (b) Yes

Correct Answer: b.

  3. Reasons for learners to keep data private include …

     (a) Environmental reasons
     (b) Competitive reasons
     (c) Technical reasons
     (d) Personal reasons

Correct Answer: b, d.

  4. ACTIVITY/PRACTICE QUESTION (Reflect on)

     We encourage you to reflect on your teaching experience supported through data. You may reflect on:

     • Do you include your learners when designing a data analytics survey?
     • Do you ask for consent to collect data from your learners?

4.6 Identify Issues of Authorship, Ownership, Data Access and Data-Sharing

4.6.1 Privacy Calculus

To enhance the acceptance of educational data analytics, it is relevant to involve all stakeholders as early as possible. Students need to be considered in particular, as they take on two roles in educational data analytics: (1) as producers of analytics data and (2) as recipients of the analyses derived from it (Slade & Prinsloo, 2013).

Figure 4.8 shows the deliberation process for disclosing information for educational data analytics. Students assess their concern over privacy on the basis of the specific information required for the learning analytics system (e.g., name, learning history, learning path, assessment results, etc.). This decision can be influenced by risk-minimizing factors (e.g., trust in the learning analytics systems and/or institution, control over data through self-administration) and risk-maximizing factors (e.g., non-transparency, negative reputation of the learning analytics system and/or institution). Concerns over privacy are then weighed against the expected benefits of the learning analytics system. The probability that the students will disclose required information is higher if they expect the benefits to be greater than the risk. Hence, the decision to divulge information on learning analytics systems is a cost–benefit analysis based on available information to the student.

Fig. 4.8 Deliberation process for sharing information for learning analytics systems: information required for learning analytics, concern over privacy with risk-maximizing and risk-minimizing factors, expected benefits, decision, and divulgence of information. (Ifenthaler & Schumacher, 2016a, b)
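The weighing of concerns against benefits can be illustrated with a toy model; the numeric scale and weights below are purely hypothetical and only mirror the structure of the deliberation process, not a published formula.

```python
# A toy model of the privacy calculus: disclosure is likely when expected
# benefits outweigh privacy concern, adjusted by risk factors
# (all values on an arbitrary 0-10 scale; weights are illustrative).
def privacy_calculus(base_concern, risk_minimizing, risk_maximizing, expected_benefit):
    concern = base_concern + sum(risk_maximizing) - sum(risk_minimizing)
    return expected_benefit > concern

will_disclose = privacy_calculus(
    base_concern=5,
    risk_minimizing=[2, 1],  # e.g., trust in the institution, control over data
    risk_maximizing=[3],     # e.g., non-transparency
    expected_benefit=6,      # e.g., personalised feedback expected
)
print(will_disclose)  # True: expected benefits exceed the adjusted concern
```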

4.6.2 Educational Data Analytics Benefits

Table 4.6 provides a matrix outlining the benefits of educational data analytics for stakeholders from three perspectives: (1) summative, (2) real-time/formative, and (3) predictive/prescriptive. The summative perspective provides detailed insights after completion of a learning phase (e.g., study period, semester, final degree), often compared against previously defined reference points or benchmarks. The real-time or formative perspective uses ongoing information for improving processes through direct interventions. The predictive or prescriptive perspective is applied for forecasting the probability of outcomes in order to plan for future strategies and actions (Ifenthaler, 2015).

Table 4.6 Educational data analytics benefits matrix

Each cell of the educational data analytics benefits matrix includes examples to be implemented at different phases of the learning process as well as for different purposes. When choosing a specific benefit of educational data analytics, the teacher, e-Tutor or instructional designer needs to understand:

  (a) who has access?
  (b) to what data?
  (c) to do what?
  (d) for what reason?

In sum, data ownership refers to the possession of, control of, and responsibility for information. Questions surrounding the ownership of data include considerations of who determines what data is collected, who has the right to claim possession over that data, who decides how any analytics applied to the data are created, used and shared, and who is responsible for the effective use of data. Ownership of data also relates to the outsourcing and transfer of data to third parties. A number of scholars point to the lack of legal clarity with respect to data ownership (Corrin et al., 2019). In the absence of legal systems addressing this issue, the default position has been that the “data belongs to the owner of the data collection tool [who is], typically also the data client and beneficiary” (Greller & Drachsler, 2012, p. 50).

4.6.3 Data for Instructional Support

Personalised learning is the notion of customising learning resources and activities to fit the interests and needs of individual learners. As with many educational technologies, personalised learning has a long history. However, with the growth of the Internet and ICTs and the advancement of intelligent systems, it is possible to use learning analytics as the basis for automated recommendation engines that drive individualised e-learning. This technology has been promised by several emerging LMSs but has not yet become a sustainable reality at any scale. Nevertheless, personalised learning technology can significantly change how instruction occurs and dramatically transform the notion of a learning place (Spector & Ren, 2015). Data is the critical resource that makes such personalised learning possible. When students, parents, and teachers are empowered with access to timely, useful, safeguarded data, there are many ways to support students on their path to success.

4.6.4 Consent and Anonymity

Corrin et al. (2019, p. 11) provide a well-informed overview on issues of educational data analytics focussing on (a) consent and (b) anonymity.

Consent refers to entering into a contract with data subjects in order to obtain their permission for their data to be gathered and analyzed. Consent must be informed in order to be valid; consequently, people should be given clear and transparent information about the purposes of data collection so that they may give informed consent. They should also have the ability to opt out of having their data gathered at any time. Consent is not always a simple matter because it is not always a legal requirement, such as when data gathering is judged necessary for an organization’s ‘legitimate interests’ (Corrin et al., 2019, p. 32). An example referring to the issue of students not being able to opt out of having their data collected is given in the JISC code of practice (http://repository.jisc.ac.uk/6985/1/Code_of_Practice_for_learning_analytics.pdf).

A more challenging ethical practice is informed consent in the context of learning analytics, which has been critically debated in recent learning analytics research. West et al. (2016) refer to the problematic relationship between ‘consent’ and ‘informed consent’, noting that these concepts are often conflated in higher education digital environments. For example, students are frequently asked to agree to their data being collected; however, the purposes for which the data will be used are hidden or not communicated clearly (West et al., 2016, p. 914). Cormack (2016) adds that it is not always clear prior to the collection and analysis of data what correlations will emerge or what the impact on individuals will be. This makes it difficult for educational organizations to communicate clear and transparent information about the use and purposes of the data being collected and to obtain informed consent.

Anonymity gives individuals the option of concealing or revealing their identity and any identifying information about themselves. In the field of learning analytics, individuals’ identities may be de-identified before data is shared or analyzed. Although it is widely recognized that institutions should make every attempt to anonymize data, experts have claimed that anonymity cannot always be guaranteed: “Anonymized data can relatively readily be de-anonymized when they are integrated with other information sources” (Drachsler & Greller, 2016, p. 94). Anonymity also limits the possible applications of learning analytics because it hinders or precludes meaningful bilateral communication, as well as the capacity for student intervention, feedback, and assistance.

4.6.5 Data Privacy in Productive Systems

One of the main concerns of educational data analytics is the handling of data privacy issues. As almost every learning analytics feature collects and processes user data by default, this topic must be considered, particularly with regard to the applicable national data privacy legislation. It is even more important when the decision is to work within the running, productive environment of the educational institution as soon as possible.

As shown in Fig. 4.9, the educational institution decided to use a two-step pseudonymisation. Wherever students’ activities are directly touched, a 32-bit hash value is used as an identifier. All tracking events and prompting requests use this hash value to communicate with the core application. The core API then takes this hash, enriches it with a secret phrase (a so-called pepper), and hashes it again. The double hash is then stored within the core’s database. As a result, newly generated student data can be matched to existing data without being directly traceable back to a specific student from the database alone.

Fig. 4.9 Concept of the encryption of a student’s identity: a client application (user account, hashing, user hash) and the LeAP core application (secret, hashing, double hash, LeAP database). (Klasen & Ifenthaler, 2019)
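A minimal sketch of this two-step pseudonymisation follows; SHA-256 stands in for the hash function, and the identifier and pepper values are placeholders, as the chapter does not specify them.

```python
# A sketch of the two-step pseudonymisation: the client application hashes
# the user account; the core application adds a pepper and hashes again.
import hashlib

PEPPER = "institution-secret-phrase"  # known only to the core application

def client_hash(user_account: str) -> str:
    """Step 1 (client): derive the user hash sent with tracking events."""
    return hashlib.sha256(user_account.encode()).hexdigest()

def core_double_hash(user_hash: str) -> str:
    """Step 2 (core): enrich with the pepper and hash again; only this
    double hash is stored in the core database."""
    return hashlib.sha256((user_hash + PEPPER).encode()).hexdigest()

# New events can be matched to existing records via the double hash without
# the database alone revealing the original user account.
stored_id = core_double_hash(client_hash("student-4711"))
print(stored_id)
```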

Another important issue for implementing educational data analytics in productive systems is the setting of data collection and data analytics functionalities. Figure 4.10 shows an example implemented in a productive learning management system that allows the student to change the settings for data collection and data analytics at any time. In addition, the student may request deletion of the stored data or download all stored data for self-inspection. Hence, compliance with the EU GDPR is supported in this case.

Fig. 4.10 Individual setting for data collection and analytics: an LMS screen with tabs for content, info, members, learning progress, and LA profile; the LA profile can be set to active, anonymous, or not active. (Klasen & Ifenthaler, 2019)
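A minimal sketch of how such per-student settings and GDPR-style rights (change the collection level anytime, export, delete) might be modelled; the names and levels follow the figure loosely and are otherwise assumptions.

```python
# A sketch of per-student analytics settings with GDPR-style rights.
from dataclasses import dataclass

LEVELS = ("active", "anonymous", "not active")  # as in the LA-profile setting

@dataclass
class AnalyticsSettings:
    student_id: str
    level: str = "not active"  # default: no collection without opt-in

    def set_level(self, level: str) -> None:
        if level not in LEVELS:
            raise ValueError(f"unknown level: {level}")
        self.level = level

events_store = {}  # stand-in for the analytics database

def export_data(student_id: str) -> list:
    """Right of access: return all stored data for self-inspection."""
    return events_store.get(student_id, [])

def delete_data(student_id: str) -> None:
    """Right to erasure: remove all stored analytics data on request."""
    events_store.pop(student_id, None)

settings = AnalyticsSettings("s1")
settings.set_level("anonymous")  # student changes the setting anytime
delete_data("s1")                # or requests deletion of stored data
```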

Given these examples of how to implement data privacy settings in productive systems, think about your own institution and how you might implement similar features in order to comply with the EU GDPR.

4.6.6 Case Study: Curtin Challenge I

This case study demonstrates how the analysis of navigation patterns and network graphs informs the learning design of self-guided digital learning experiences.

The Curtin Challenge digital learning platform (http://challenge.curtin.edu.au) supports individual and team-based learning via gamified, challenge-based, open-ended, inquiry-based learning experiences that integrate automated feedback and rubric-driven assessment capabilities. The Challenge platform is an integral component of Curtin University’s digital learning environment along with the Blackboard learning management system and the edX MOOCs platform. The Challenge development team at Curtin Learning and Teaching are working towards an integrated authoring system across all three digital learning environments with the view of creating reusable and extensible digital learning experiences (Ifenthaler et al., 2018).

Curtin Challenge includes several content modules, for example the Leadership, Careers, and English Language Challenges. Since 2015, over 2600 badges have been awarded for the completion of a challenge. Each module contains approximately five activities that might include one to three different learner interactions.

Educational analytics data for the presented case study includes 2,753,142 database rows. Overall, 3550 unique users registered and completed a total of 14,587 navigation events within a period of 17 months. Figure 4.11 provides an overview of modules started (M = 3427, SD = 2880) and completed (M = 2903, SD = 2303) for the Curtin Careers Challenge. The average completion rate for the Curtin Careers Challenge was 87%. The most frequently started module was “Who am I?” (10,461) followed by the module “Resumes” (7996). The module “Workplace Rights and Responsibilities” showed the highest completion rate of 96%, followed by the module “Interviews” (92%).

Fig. 4.11 Module completion of the Curtin Careers Challenge: started and completed counts per module (e.g., Who am I?, Resumes, Cover Letters); Workplace Rights and Responsibilities shows the highest completion rate at 96%. (Ifenthaler et al., 2018)
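Completion rates of this kind are straightforward to derive from started/completed counts. A short sketch follows; only the started count for "Who am I?" and the 96% rate for "Workplace Rights and Responsibilities" are reported in the case study, the remaining numbers are assumed for illustration.

```python
# Computing module completion rates from started/completed counts
# (completed counts and the second started count are assumed).
modules = {
    "Who am I?": {"started": 10461, "completed": 9101},
    "Workplace Rights and Responsibilities": {"started": 1200, "completed": 1152},
}

for name, counts in modules.items():
    rate = counts["completed"] / counts["started"]
    print(f"{name}: {rate:.0%} completion")
```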

4.6.7 Case Study: Curtin Challenge II

The network analysis identifies user paths within the learning environment and visualises them as a network graph on the fly. The dashboard visualisations help the learning designer to identify specific patterns of learners and may reveal problematic learning instances. The nodes of the network graph represent individual interactions. The edges of the network graph represent directed paths from one interaction to another. The indicators on the edges represent the frequency of users taking the path from one interaction to another and, in parentheses, the percentage of users who took that path. An aggregated network graph shows the overall navigation patterns of all users. A network graph can be created for each individual user, for selected groups of users (e.g., with specific characteristics), or for all users of the learning environment.

The aggregation of all individual network graphs provides detailed insights into the navigation patterns of all users. Figure 4.12 shows the aggregated network graph, including the paths taken by all 3550 users across 14,587 navigation events. The five modules are highlighted using different colours.

Fig. 4.12 Aggregated network graph with navigation patterns across interactions such as Selection Criteria, Cover Letter, Stop Googling, Interviews, and Who am I. (Ifenthaler et al., 2018)
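A sketch of how navigation events might be aggregated into such a graph; the event format and example events are hypothetical, not taken from the Curtin dataset.

```python
# Aggregating navigation events into a directed graph with edge frequencies
# and the percentage of users who took each path.
from collections import Counter

# Each event: (user_id, from_interaction, to_interaction).
events = [
    ("u1", "Who am I?", "Resumes"),
    ("u2", "Who am I?", "Resumes"),
    ("u2", "Resumes", "Cover Letters"),
    ("u3", "Who am I?", "Interviews"),
]

edge_frequency = Counter((src, dst) for _, src, dst in events)
total_users = len({user for user, _, _ in events})

for (src, dst), freq in edge_frequency.items():
    users_on_edge = len({u for u, s, d in events if (s, d) == (src, dst)})
    share = users_on_edge / total_users
    print(f"{src} -> {dst}: {freq} ({share:.0%})")
```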

Given the case study above, the following questions arise:

  • Who is the author of the data presented?

  • Who holds ownership of the data presented?

  • Who can access the data presented?

  • Who can share the data presented (and to what purpose)?

Questions and Teaching Materials

  1. The educational data analytics benefits matrix includes references to the location of the institution.

     (a) False
     (b) True

Correct Answer: a.

  2. Examples of analytics benefits for teaching purposes can be related to different perspectives of data processing. Which of the following benefits can be related to predictive analytics?

     (a) Conduct cross-institutional comparisons
     (b) Track enrolments
     (c) Allocate financial resources
     (d) Plan for interventions

Correct Answer: d.

  3. Within the deliberation process of sharing information, risk-maximizing factors include

     (a) Non-transparency
     (b) Positive reputation
     (c) Holistic marketing of data
     (d) Established data regulations

Correct Answer: a, c.

  4. ACTIVITY/PRACTICE QUESTION (Reflect on)

     We encourage you to reflect on your teaching experience supported through data. You may reflect on:

     • Are you able to provide your students with all the data collected about them whenever they request it?

4.7 Applying and Communicating Educational Data and Analytics Findings

4.7.1 Adaptive Learning Technologies

Adaptive learning and teaching are an alternative to the traditional “one-size-fits-all” approach in the development of digital learning environments. Adaptive learning systems build a model of the goals, preferences and knowledge of each individual learner and use this model throughout the interaction with the learner in order to adapt to that learner’s needs (Brusilovsky, 1996). Educational data analytics provides the key element for designing and implementing adaptive learning experiences. In sum, adaptive learning and teaching refer to customised learning experiences that address the just-in-time needs of an individual learner by providing meaningful interventions, feedback or support.

Learning management systems (LMSs), the systems most commonly used in technology-enhanced learning, typically present identical courses and content to every learner without consideration of the learner’s individual characteristics, situation, and needs (Graf & Kinshuk, 2014). As seen in Massive Open Online Courses (MOOCs), such a one-size-fits-all strategy frequently leads to frustration, learning challenges, and high dropout rates.

Adaptive learning technologies aim to solve this problem by allowing learning systems to automatically adapt the learning environment and/or learning activities to a learner’s unique situation, traits, and needs, resulting in individualised learning experiences. The system must represent the student and the learning setting in order to create adaptive interventions; this is where data and analytics are required. According to Graf and Kinshuk (2014), adaptive interventions can be based on the following areas:

  • Learning styles

  • Cognitive abilities

  • Affective states

  • Context and environment

Other common terms besides “adaptive learning system” include “personalised learning system”, which emphasises the system’s aim to consider a learner’s individual differences, and “intelligent learning (or tutoring) system”, which focuses on the use of techniques from the field of artificial intelligence to provide learning support.

The phrase “adaptive learning system,” on the other hand, emphasizes a learning system’s ability to provide different courses, learning materials, or learning activities for different learners automatically. Adaptive, personalized, and intelligent learning systems are those that use learning analytics to tailor instruction to learners’ traits and requirements. In their framework of personalization in technology enhanced learning, FitzGerald et al. (2018) characterized learning analytics systems as follows (see Table 4.7):

Table 4.7 Personalization dimensions and learning analytics
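A minimal rule-based sketch of an adaptive intervention drawing on the areas named by Graf and Kinshuk (2014); the learner-model keys and rules are illustrative assumptions, not a published algorithm.

```python
# A rule-based sketch choosing an intervention from a simple learner model
# covering learning styles, cognitive abilities, affective states, and context.
def choose_intervention(learner: dict) -> str:
    if learner.get("affective_state") == "frustrated":
        return "offer a hint and encouraging feedback"
    if learner.get("working_memory") == "low":
        return "split the task into smaller steps"
    if learner.get("learning_style") == "visual":
        return "present a concept map instead of text"
    if learner.get("context") == "mobile":
        return "deliver a short, self-contained activity"
    return "continue with the default sequence"

print(choose_intervention({"affective_state": "frustrated"}))
```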

4.7.2 Automated and Semi-Automated Interventions

Closely linked to the demand for new approaches to designing and developing up-to-date adaptive learning environments is the necessity of enhancing the design and delivery of assessment systems and automated computer-based diagnostics (Almond et al., 2002; Ifenthaler et al., 2010). These systems need to meet specific requirements, such as:

  (a) adaptability to different subject domains,
  (b) flexibility for experimental and instructional settings,
  (c) management of huge amounts of data,
  (d) rapid analysis of specific data,
  (e) immediate feedback for learners and educators, and
  (f) generation of automated reports of results (Pirnay-Dummer et al., 2012a, b).

Recently, promising methodologies have been developed that provide a strong basis for applications in learning and instruction and keep pace with the demands arising from a better theoretical understanding of the phenomena that precede, constitute, or accompany the learning process.

Several possible solutions to the assessment and analysis problems of knowledge representations have been discussed (Ifenthaler & Pirnay-Dummer, 2014). It is therefore worthwhile to compare the model-based assessment and analysis approaches in order to illustrate their advantages and disadvantages, strengths and limitations (see Table 4.8). Yet there is no ideal solution for the automated assessment of knowledge. Within the last five years, strong progress has been made in the development of model-based tools for knowledge assessment. Still, Table 4.8 highlights the necessary further development of the available tools, especially for everyday classroom application.

Table 4.8 Comparison of model-based assessment tools

4.7.3 Instructional Design Principles for Adaptivity

Leutner (2004) has summarized ten instructional design principles for fostering adaptivity in open learning environments. These principles highlight various instructional elements that can be designed for adaptivity and personalized learning. The principles are:

Adapting …

  • P 1: ... the amount of instruction

  • P 2: ... the sequence of instructional units

  • P 3: ... the content of information

  • P 4: ... the presentation format of information

  • P 5: ... task difficulty

  • P 6: ... concept definitions

  • P 7: ... the system response time

  • P 8: ... advice in exploratory learning

  • P 9: ... the menu structure of computer software in software training programs

  • P10: ... system control versus learner control.

Questions and Teaching Materials

  1. Based on which data features can adaptive interventions be implemented?

     (a) Features such as need for financial study support help to build adaptive interventions
     (b) Features related to the social environment can help to build adaptive interventions
     (c) Features related to cognitive processing can help to build adaptive interventions
     (d) Features such as need for social collaboration help to build adaptive interventions
     (e) Plan for interventions

Correct Answer: c.

  2. Design principles for adaptive learning environments include …

     (a) Adapting the speed of algorithms for data processing
     (b) Adapting the presentation format of learning artefacts
     (c) Adapting the task difficulty
     (d) Adapting the sequence of instructional units

Correct Answer: b, c, d.

  3. Informing teaching through data requires realistic technological and personal support.

     (a) False
     (b) True

Correct Answer: b.

  4. ACTIVITY/PRACTICE QUESTION (Reflect on)

     We encourage you to reflect on your teaching experience supported through data. You may reflect on:

     • When interacting with an adaptive learning system, do you trust the recommendations the system provides for your own learning?
     • Have you designed or developed an adaptive tool for implementation in your learning environments?

4.8 Methodologies for Improving Learning and Teaching Processes as Well as Curricula

4.8.1 Creating Interventions in Classroom Settings

Following Ann L. Brown’s (1992) article, an effective methodology for improving learning and teaching processes as well as curricula is the combination of creating innovative educational environments and conducting experimental studies of those innovations. The so-called design experiment is illustrated in Fig. 4.13. Brown (1992) explains that a functional classroom is central to the design experiment before an investigation can be implemented. Hence, classroom life is synergistic: aspects of it that are often treated independently, such as teacher training, curriculum selection, testing, and so forth, actually form part of a systemic whole. Just as it is impossible to change one aspect of the system without creating perturbations in others, so too it is difficult to study any one aspect independently from the whole operating system. Brown (1992) suggests that we must always operate under the constraint that an effective intervention should be able to migrate from the experimental classroom to average classrooms operated by and for average students and teachers, supported by realistic technological and personal support.

Fig. 4.13
A diagram illustrates the design experiment. It depicts the input and output of engineering a working environment, with contributions to learning theory and practical feasibility.

Features of design experiments. (Brown, 1992)

4.8.2 Educational Design Research at a Glance

Educational Design Research (EDR) or Design-Based Research (DBR) – the terms are mostly used synonymously – is a meta-methodology in educational research. It represents a genre of applied research in which the iterative development of solutions to practical and complex educational problems provides the setting for scientific inquiry. The solutions can be educational products, processes, programs, or policies. EDR not only targets solving significant problems educational practitioners face but at the same time seeks to discover new knowledge that can inform the work of others facing similar problems. EDR distinguishes itself from other forms of inquiry by attending both to solving problems by putting knowledge to use and, through that process, to generating new knowledge (McKenney & Reeves, 2014). EDR projects seek to establish collaborations among researchers and practitioners in real-world settings in order to avoid the widespread theory-versus-practice dilemma. EDR is closely related to research-based educational design as conducted with teaching and learning analytics, yet entails more. Both concepts are shaped by iterative, data-driven processes to reach successive approximations of a desired intervention. However, research-based educational design focuses solely on intervention development, whereas design research strives explicitly to make a ‘transferable’ scientific contribution in the form of design principles (McKenney & Reeves, 2014). Major characteristics of Educational Design Research are shown in Table 4.9:

Table 4.9 Characteristics of EDR/DBR

McKenney and Reeves (2014) described a process model for conducting educational design research. Figure 4.14 shows the model, which has three main features (Huang et al., 2019):

  • Three core phases in a flexible, interactive structure: analysis, design, and evaluation.

  • Dual focus on theory and practice; integrated research and design processes; theoretical and practical outcomes

  • Indications of being use-inspired: planning for implementation and spread; interaction with practice; contextually responsive

Fig. 4.14
A diagram of a generic model. It depicts exploration analysis, construction design, and reflection evaluation with the outcome of the maturing intervention and theoretical understanding for implementation and spread.

Generic model for conducting Educational Design Research. (McKenney & Reeves, 2014)

4.8.3 Designing Model-Based Learning Environments

In model-based and model-oriented learning environments, two kinds of models need to be considered: (1) the model of the learning goal, which represents the expertise, set of skills, or, in general, the things to be learned, and (2) the model within the learner, which is constructed and retained depending on the learning environment and on the learner’s currently active epistemic beliefs, i.e., whether and how the learner usually explains parts of the world. We will abbreviate the first as the LE model (model of the learning environment) and the second as the L model (model of the learner), always assuming that the two types are closely intertwined, especially in well-designed learning environments (Pirnay-Dummer et al., 2012b).

As shown in Fig. 4.15, the educational system (meso- and exo-system) and the learners influence the learning goals in different ways at different times. The learning goals constitute the constraints for the learning environment. The learning environment is a manifestation (a derivative) of the LE model. Possible and available learning environments (technology and/or best practices) influence the system by setting the boundaries for what is possible, and thus decidable, in educational planning. The learner influences the learning environment (as more or less pre-structured by its design). Learning takes place as soon as the LE model and the L model interact. During that time, the learning goal influences and guides the interaction between the two models. Model-oriented technologies usually focus on the L model, while model-centered technologies concentrate more on the LE model. It is our understanding that the two (very similar) approaches will always go hand in hand and influence each other (Pirnay-Dummer et al., 2012b).

Fig. 4.15
A diagram illustrates the interdependences of the system, learning goals, learner, and learning environment intertwined between the L E model and the L model.

Interdependences of system, learning goals, learner, and learning environment. (Pirnay-Dummer et al., 2012b)

Questions and Teaching Materials

  1. Educational Design Research (EDR) has several characteristics. Which of the following does not belong to EDR?

     (a) EDR is well grounded

     (b) EDR follows a single set of statistical procedures

     (c) EDR is related to contextual issues

     (d) EDR integrates various methods and approaches

Correct Answer: b.

  2. The generic model of Educational Design Research includes the following main features …

     (a) core phase management

     (b) core phase analysis

     (c) core phase design

     (d) core phase transformation

     (e) core phase evaluation

Correct Answer: b, c, e.

  3. Model-based and model-oriented learning environments consider five different models.

     (a) No

     (b) Yes

Correct Answer: a.

  4. ACTIVITY/PRACTICE QUESTION (Reflect on)

     We encourage you to reflect on your teaching experience supported through data. You may reflect on:

     • Do you always have sufficient information about the educational system before you design a learning environment?

     • Do you use evidence from different stakeholders when revising a curriculum?

4.9 Concluding Self-Assessed Assignment

4.9.1 Introduction

You are requested to complete a concluding self-assessed assignment. This self-assessed assignment is a real-life scenario activity (based on the use case of the instructional designer David), using a rubric across three proficiency levels and an exemplary solution rating. When you have completed this assignment, you will assess it yourself, following the rubric, which lists the required criteria and gives guidelines for the assessment.

This self-assessed assignment procedure consists of 5 steps:

  • Step 1. Real life scenario

  • Step 2. Prepare your answer

  • Step 3. Exemplary Sample Solution

  • Step 4. Rubrics for assessing your work

  • Step 5. Self-evaluate your answer

4.9.2 Step 1. Real Life Scenario

David is an instructional designer. He recently got involved in a newly funded European research project which focuses on the implementation of teaching analytics for a workplace learning environment. The workplace learning environment includes data collection capabilities for students and teachers. All relevant data are securely stored. Data protection rights have been recognised and are fully in place, following the EU-GDPR. In addition to the implementation part of the project, all project partners agreed to follow an educational design research approach.

David has started to better understand the key features of teaching analytics and how to conduct educational design research, and he knows that you have recently learned about these topics as well. Can you help David to create a strategy for implementing robust teaching analytics capabilities following the learning analytics profiles (student, learning, curriculum) approach?

Another challenge, for which David asks for your help, focuses on the benefits of learning analytics design, i.e., using available data from the workplace learning environment to provide dynamic perspectives, including design decisions during the course of learning. Can you point out three benefits David may use for his project?

4.9.3 Step 2. Prepare Your Answer

The implementation of robust teaching analytics capabilities is crucial for the design, implementation, and development of digitally enhanced learning environments. Think about your own educational institution and its current implementation strategy.

  1. Describe your implementation strategy and share available cases or evidence as well as guidelines in your educational institution.

  2. Provide tips for other learners when reflecting on their own experiences and institutional practice.

4.9.4 Step 3. Exemplary Sample Solution

Learning Analytics Profiles

The strategy for implementing robust teaching analytics capabilities in the workplace learning environment requires addressing at least the following key issues across the three profiles: (1) student profile, (2) learning profile, (3) curriculum profile.

The student profile includes static and dynamic indicators. Static indicators include gender, age, education level and history, work experience, current employment status, etc. Dynamic indicators include interest, motivation, response to reactive inventories (e.g., learning strategies, achievement motivation, emotions), computer and social media competencies, enrolments, drop-outs, pass/fail rate, academic performance, etc.

The learning profile includes indicators reflecting the current behaviour and performance within the learning environment (e.g., learning management system). Dynamic indicators include trace data such as time specific information (e.g., time spent on learning environment, time per session, time on task, time on assessment). Other indicators of the learning profile include login frequency, task completion rate, assessment activity, assessment outcome, learning material activity (upload/download), discussion activity, support access, ratings of learning material, assessment, support, effort, etc.

The curriculum profile includes indicators reflecting the expected and required performance defined by the learning designer and course creator. Static indicators include course information such as facilitator, title, level of study, and prerequisites. Individual learning outcomes are defined including information about knowledge type (e.g., content, procedural, causal, meta cognitive), sequencing of materials and assessments, as well as required and expected learning activities.
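As a minimal sketch, the three profiles could be represented as simple data structures that separate static from dynamic indicators. The class and field names below are hypothetical and stand in for the richer indicator sets described above:

```python
# Hypothetical sketch: the three analytics profiles as data structures,
# separating static from dynamic indicators. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class StudentProfile:
    # static indicators
    age: int = 0
    education_level: str = ""
    # dynamic indicators
    motivation_score: float = 0.0
    academic_performance: list = field(default_factory=list)


@dataclass
class LearningProfile:
    # dynamic trace-data indicators from the learning environment
    time_on_task_minutes: float = 0.0
    login_frequency_per_week: float = 0.0
    task_completion_rate: float = 0.0       # fraction in [0, 1]
    assessment_outcomes: list = field(default_factory=list)


@dataclass
class CurriculumProfile:
    # static indicators defined by the learning designer / course creator
    course_title: str = ""
    prerequisites: list = field(default_factory=list)
    expected_completion_rate: float = 0.8   # required/expected performance
```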

The available data from all data profiles are analysed using pre-defined analytic models allowing summative, real-time, and predictive comparisons. The results of the comparisons are used for specifically designed interventions, which are returned to the corresponding profiles. The (semi-)automated interventions include reports, dashboards, prompts, and scaffolds for teachers. Additionally, teachers can send customised messages for following up on critical incidents (e.g., students at risk, assessments not passed, satisfaction not acceptable, etc.).
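Building on the hypothetical profile sketch above, a (semi-)automated intervention could be triggered by a simple comparison rule; the rule, threshold, and message below are illustrative assumptions rather than a prescribed analytic model:

```python
# Hypothetical sketch: compare observed behaviour (learning profile)
# against expectations (curriculum profile) and return a teacher prompt.
from typing import Optional


def check_at_risk(learning: LearningProfile,
                  curriculum: CurriculumProfile) -> Optional[str]:
    """Return an intervention prompt if the learner falls behind, else None."""
    if learning.task_completion_rate < curriculum.expected_completion_rate:
        return (f"Completion rate {learning.task_completion_rate:.0%} is "
                f"below the expected "
                f"{curriculum.expected_completion_rate:.0%}: consider a "
                "customised follow-up message to the student.")
    return None  # no critical incident detected


# Usage with the profile sketch above:
alert = check_at_risk(LearningProfile(task_completion_rate=0.55),
                      CurriculumProfile(course_title="Workplace Analytics"))
print(alert)  # flags a student at risk for teacher follow-up
```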

Learning Analytics Design

The traditional perspective on learning design is rather static and does not accommodate changes to the learning environment within a short timeframe or during ongoing learning processes. In contrast, learning analytics design provides a dynamic perspective, including design decisions on the fly. Especially for learning environments with a large number of learners, the benefits of learning analytics design are evident (the first benefit is illustrated in the sketch following the list):

  • Teachers using navigation sequence analysis can identify areas of dropout and change the related materials and instructions accordingly.

  • Identifying alignment or misalignment of optimal learning design with actual behaviour of the learners enables the teacher to build adequate interventions when needed.

  • The teacher may provide assistance, scaffolds, or feedback to learners who are off track.

  • The teacher may identify learning materials and activities which need revisions to improve the overall quality of the learning environment.
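As a minimal, hypothetical illustration of the first benefit, the sketch below counts where learners' recorded navigation paths end; units where many paths stop are candidates for revising the related materials and instructions. The trace data and unit names are invented for the example:

```python
# Hypothetical sketch: simple navigation-sequence analysis to locate
# course units where learners tend to stop (possible dropout points).
from collections import Counter

# Invented trace data: each list is one learner's path through course units.
sequences = [
    ["intro", "unit1", "unit2"],
    ["intro", "unit1"],
    ["intro", "unit1", "unit2", "unit3"],
    ["intro", "unit1"],
]

reached = Counter(unit for path in sequences for unit in path)
stopped = Counter(path[-1] for path in sequences)  # last unit per learner

for unit, n_reached in reached.items():
    n_stopped = stopped.get(unit, 0)
    print(f"{unit}: {n_stopped}/{n_reached} learners stopped here "
          f"({n_stopped / n_reached:.0%})")
# Half of the learners stop at "unit1", making its materials the first
# candidates for revision (the last unit of a complete path, "unit3" here,
# may simply indicate course completion rather than dropout).
```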

4.9.5 Step 4. Rubrics for Assessing Your Work

 

Student profile data

  • Unacceptable (1): It is not clear what data is related to the student profile.

  • Good/solid (3): Data related to the student profile are clearly identified. Analytics perspectives are not fully developed.

  • Exemplary (5): Data related to the student profile are clearly identified and examples are provided. Analytics perspectives are linked with benefits for teaching.

Learning profile data

  • Unacceptable (1): It is not clear what data is related to the learning profile.

  • Good/solid (3): Data related to the learning profile are clearly identified. Analytics perspectives are not fully developed.

  • Exemplary (5): Data related to the learning profile are clearly identified and examples are provided. Analytics perspectives are linked with benefits for teaching.

Curriculum profile data

  • Unacceptable (1): It is not clear what data is related to the curriculum profile.

  • Good/solid (3): Data related to the curriculum profile are clearly identified. Analytics perspectives are not fully developed.

  • Exemplary (5): Data related to the curriculum profile are clearly identified and examples are provided. Analytics perspectives are linked with benefits for teaching.

Learning analytics design

  • Unacceptable (1): The examples do not relate to the basic assumptions of learning analytics design.

  • Good/solid (3): The examples are related to teaching practice.

  • Exemplary (5): The examples are clearly related to teaching practice and provide reasonable benefits for learning and teaching.

4.9.6 Step 5. Self-Evaluate Your Answer

Now that you have seen the exemplary solution, please rate your own work using the criteria in the rubrics for assessing your work.

Calculate your overall score based on the rubrics for assessing your work.

 

For each criterion, mark the level that applies to your solution: Unacceptable (1), Good/solid (3), or Exemplary (5).

  • Student profile data: ______

  • Learning profile data: ______

  • Curriculum profile data: ______

  • Learning analytics design: ______

For each of the criteria in the rubric, assign to your solution:

  • 1 point if the option “Unacceptable” applies,

  • 3 points if the option “Good / solid” applies,

  • 5 points if the option “Exemplary” applies.

Then add up the individual points to calculate your overall score.

My overall score is:

Please mark the applicable answer.

  • 0–4 points

  • 5–8 points

  • 9–11 points

  • 12–16 points

  • 17–20 points