Introduction

Open learner models (OLMs) are learner models that allow the user (learner, teacher, peers and/or other stakeholders in the education process) to view the content of the learner model of an intelligent tutoring system or other advanced learning environment in a human-understandable form. This focus on understandability is necessary if users are to be able to act appropriately on the learner model information. For example, rather than viewing the underlying system rules or complex knowledge representations, users can be presented with views of this learner model data in interfaces that have been designed to support learning. This has been described for learner models inferred using a variety of modelling approaches, e.g., Bayesian networks (Zapata-Rivera and Greer 2004a, b); concept mapping (Perez-Marin et al. 2007); constraint-based modelling (Mitrovic and Martin 2007); and simpler weighted algorithms (Johnson et al. 2013). In principle, any type of learner model can be opened to users, and the method of presenting the learner model may depend on:

  • the purpose of opening it,

  • the target users,

  • the learning context and

  • the learning tasks to be performed.

For example, a map-based visualisation, in which flags on each island fill as a learner’s understanding of a concept increases and a car travelling along a route across bridges between islands indicates progress, was designed for understandability by learners with Autism Spectrum Disorder (Grawemeyer et al. 2015); simple skill meters were used by a teacher to follow groups’ progress on the spot in the classroom (Martinez-Maldonado et al. 2015); and animations of a learner’s programming code execution, based on their learner model, were shown alongside animations of expert code to highlight learner misconceptions (Johan and Bull 2009).

In some cases multiple forms of learner model presentation may be available, where the system may adaptively deliver the most appropriate visualisation for the user (Mazzola and Mazza 2009), or the user can select the visualisation they wish to use (e.g., Bull et al. 2008; Conejo et al. 2011; Duan et al. 2010; Johnson et al. 2013; Perez-Marin and Pascual-Nieto 2010). Figure 1 illustrates this diversity with several visualisations from the Next-TELL OLM (Bull et al. 2015; Johnson et al. 2013), which can be selected according to the user’s (learner’s or teacher’s) preferred way of accessing the learner model data.

Fig. 1
figure 1

Five of the Next-TELL OLM visualisations (skill meters, table, treemap, competency network and word clouds) and manual data entry

The visualisations in this example cover Artificial Intelligence in Education topics: some of the participants at the AIED 2013 conference performed self and peer ‘assessments’ of their own and other participants’ areas of expertise, to familiarise themselves with the OLM. In this example there were no automated data sources, as there usually would be when the OLM is deployed: participants used the interface shown at the bottom of Fig. 1 to provide a numerical value for each area and sub-area they wished to assess, for incorporation into the learner modelling algorithm. They could also add optional text to give feedback or explain the learner model value that they provided.
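As a concrete illustration of this data-entry step, the sketch below shows one plausible way that numeric values from different sources (self, peer, instructor) could be combined into a single competency estimate by a simple weighted algorithm. It is a minimal sketch: the class, field names and source weights are assumptions for illustration, not the actual Next-TELL modelling algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Assessment:
    """One manually entered 'assessment' of a competency (hypothetical structure)."""
    competency: str   # e.g. an AIED topic area or sub-area
    value: float      # numeric judgement in [0, 1], as entered via the form
    source: str       # "self", "peer" or "instructor"
    note: str = ""    # optional free-text feedback or explanation

# Hypothetical source weights: one plausible way to weight self and peer input.
SOURCE_WEIGHTS = {"self": 0.5, "peer": 1.0, "instructor": 1.5}

def competency_estimate(assessments: List[Assessment]) -> float:
    """Weighted average of all the evidence provided for one competency."""
    weights = [SOURCE_WEIGHTS.get(a.source, 1.0) for a in assessments]
    total = sum(w * a.value for w, a in zip(weights, assessments))
    return total / sum(weights) if weights else 0.0

# Example: one self assessment and two peer assessments of the same area.
evidence = [
    Assessment("Open learner models", 0.8, "self", "have built one"),
    Assessment("Open learner models", 0.6, "peer"),
    Assessment("Open learner models", 0.7, "peer"),
]
print(round(competency_estimate(evidence), 2))  # 0.68
```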

While the Next-TELL OLM visualisations are based on the same underlying learner model, they can be used as best suits the purpose for viewing the learner model, as well as the user’s preference for visualisations. For example, the treemap is particularly useful for displaying large, hierarchically structured learner models, because clicking on a cell (the size of which indicates the level of competency) leads to the display of the next level of data (or subcompetencies). However, with this visualisation it is harder to compare competencies from different parts of the tree. The other examples allow users to see all parts of the learner model, though the table and skill meters require scrolling if the competencies do not all fit onto the screen, so are less usable with large learner models. In contrast, the network visualisation gives a clearer overview in a smaller space, with brightness and size of nodes indicating the level of competency; the tradeoff is that it can sometimes be hard to read. The word cloud is a useful compromise: strengths are indicated in larger blue text on the left hand side, while weaker competencies are shown in larger black text on the right hand side. This provides a clear and quick overview of the extremes (strengths and weaknesses), which can be useful for teachers’ on-the-spot classroom decision-making, but it is more difficult to determine the borderline competencies as these are in smaller text.

Since the Next-TELL OLM is domain-independent, and may be used with large or small competency sets which can be defined by teachers, offering a choice of learner model visualisations was considered important. Log data from use of this OLM in three schools has demonstrated that there are indeed differences in the relative usage levels of the various visualisations amongst learners and teachers (Bull et al. 2015). While the log data does not itself explain why particular visualisations were selected by individual users, it does highlight the utility of providing different options. The purposes for opening the learner model (see below) and the size of the competency sets may suggest the kinds of learner model visualisation that would be most useful in any particular system; however, individual preferences are still sometimes evident, even within the same domain and system (Mabbott and Bull 2004).

Much of the early work on OLMs was based upon learner models that were embedded in intelligent tutoring systems. An important trend away from this has seen OLMs built as interfaces onto independent reusable learner model services for use by multiple systems (Kay et al. 2002; Brusilovsky et al. 2005; Kay 2008; Conejo et al. 2011; Kay and Kummerfeld 2012). Similarly, OLMs may aggregate data from several external systems, and present the combined evidence from these systems, to the user (Kay and Lum 2005; Bull et al. 2012).

In addition to being able to view the learner model data (inspectable learner models), OLMs may permit some forms of interactive maintenance of the learner model between the system and the user. For example, the user might contribute additional data for the learner model. This includes situations where the user can directly edit (i.e., change) the content of the learner model (Ahmad et al. 2010; Czarkowski et al. 2005); add evidence to be considered by the system alongside other information in the learner model (Cook and Kay 1994; Kay 1997; Johnson et al. 2013); or jointly negotiate the content of the learner model—i.e., the user and system aim to agree on the learner model representations through some form of discussion (Bull and Pain 1995; Dimitrova 2003; Kerly and Bull 2008; Suleman et al. 2015). In these cases the learner (or other user) can help to ensure that the learner model is up to date, in contrast to purely inspectable learner models, where control of the model data lies fully with the system. In negotiated learner models, the negotiation moves available are the same for each party (student and system), while in editable models, the learner can simply update the contents without challenge from the system. Other open learner models allow the user to try to persuade the system that their challenge to the model is correct, but they are required to demonstrate their new level of knowledge by, for example, answering a small number of questions that assess their understanding—the system has control in this case; it will not update the model unless it is successfully ‘persuaded’ (e.g., Bull et al. 2007). Thus, there is a continuum of cases from system control of the learner model data, through joint control, to user control. Furthermore, the same set of learning data may be interpreted in multiple ways, for example using different standards of mastery (Kay and Kummerfeld 2012).
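To make this control continuum concrete, the sketch below contrasts an editable update with a ‘persuade me’ move of the kind just described, where the system only accepts a challenge if a short quiz demonstrates the claimed level. The function names, the dictionary model and the acceptance rule are illustrative assumptions, not any particular system’s implementation.

```python
def edit_model(model: dict, competency: str, new_value: float) -> None:
    """Editable model: the learner simply overwrites the value, without challenge."""
    model[competency] = new_value

def persuade(model: dict, competency: str, claimed_value: float,
             quiz_score: float) -> bool:
    """Hypothetical 'persuade me' move: the system keeps control and accepts the
    learner's claimed value only if a short quiz demonstrates at least that level."""
    if quiz_score >= claimed_value:
        model[competency] = claimed_value   # persuaded: the model is updated
        return True
    return False                            # not persuaded: the model is unchanged

model = {"fractions": 0.4}
edit_model(model, "fractions", 0.9)           # editable: no questions asked
print(persuade(model, "decimals", 0.8, 0.5))  # False: claim not demonstrated
```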

As indicated above, two important OLM issues are:

  (i) how the learner model is presented, and

  (ii) how the learner model is maintained.

The first is, of course, the realm of open learner models only—“closed” learner models do not display any information to the end user, though they may share data with other systems. However, when a learner model is open to the user, the question of whether they can propose changes to it, or directly correct it, in ways similar to those described above becomes inevitable.

OLMs have been identified as an important ongoing area of interest in learner modelling (Desmarais and Baker 2012), and OLMs of various kinds have demonstrated significant learning gains (Brusilovsky et al. 2015; Kerly and Bull 2008; Long and Aleven 2013; Mitrovic and Martin 2007; Shahrour and Bull 2009). This provides a basis for our ongoing interest in further developing the SMILI☺ Open Learner Modelling Framework (from Bull and Kay 2007).

As a foundation for revisiting SMILI☺, the next section provides a brief overview of the original framework. We then discuss important changes in learner modelling and OLMs, especially in terms of changes since we created SMILI☺. We then review the ways SMILI☺ has been used and, in light of all these aspects, we present an update to the SMILI☺ Framework. We conclude with an agenda for future research.

Overview of SMILI☺

The original SMILI☺ Open Learner Modelling Framework (Bull and Kay 2007) aimed to provide a systematic way to describe an OLM by answering the following questions:

  1. Context: How does the open learner model fit into the overall interaction? How central is it to the interface?

  2. How was it evaluated?

  3. WHAT is open?

  4. HOW is it presented?

  5. WHO controls access?

The SMILI☺ framework called for open responses to questions 1 and 2. For each of the last three, WHAT, HOW and WHO, SMILI☺ provided a template table to complete. The rows were the elements for that question. For example, in the case of WHAT, the first element specified the extent to which the model was accessible, and one aspect of this indicated the completeness—i.e., was the full model open, or only part of it?

The columns of the table were the purposes for making the model open:

  • Improving the accuracy of the model;

  • Supporting metacognitive processes of planning, monitoring and reflection;

  • Facilitating collaboration or competition;

  • Facilitating navigation;

  • Respecting the learner’s right to access and control their personal data, and their trust in the learner model;

  • Using the learner model as an assessment of the learner.

A SMILI☺ description of an OLM involved completing these tables. For example, for each purpose, the description indicated whether the OLM presented the complete set of learning data in a manner that supported that purpose. Tables 1, 2, 3 and 4 summarise key elements of SMILI☺. Table 1 summarises the first two questions. The elements of the framework can be seen in Tables 2, 3 and 4, which show how the purposes interact with each element: the aspects that are critical for a purpose are marked X, those that are more contentious are marked = and those that are not relevant are blank. So, for example, it was generally envisaged that learner access to their complete learner model was important in relation to their right to access data about themselves, to be able to control the content of their learner model (where this option is available), and to encourage trust in the learner model. It was not considered essential that the learner model be open if it was used as an assessment of the learner (though for formative assessment, this is often very relevant). For the other purposes of opening the learner model, partial access (selected data) was considered sufficient. It was anticipated that the purposes of opening specific OLMs would align with one or more of the purposes shown in Tables 2, 3, and 4, but that authors would also be able to clearly highlight any differences between purposes of opening their OLMs with reference to the various elements. Further details explaining Tables 2, 3, and 4 are given in the original SMILI☺ Framework description (Bull and Kay 2007).

Table 1 The original SMILI☺ open learner modelling framework overall: these two elements capture two overall aspects of the open learner modelling
Table 2 Original SMILI☺ framework: WHAT is available?
Table 3 Original SMILI☺ framework: HOW is the model presented?
Table 4 Original SMILI☺ framework: WHO controls access?

The same tables can be used to describe any system. Essentially, SMILI☺ supported the systematic description of an OLM in terms of these elements for each of the questions. This meant that one should be able to compare two OLMs by comparing these tables, or by overlaying them to show similarities and differences. It was also anticipated that as more OLMs were described, the relationships between elements and purposes in Tables 2, 3, and 4 could be revised accordingly.

Table 5 gives an excerpt from the SMILI☺ Table for comparing two OLMs: the Next-TELL OLM (Johnson and Bull 2015) and the LEA’s Box OLM (Bull and Al-Shanfari 2015). The LEA’s Box OLM builds on the Next-TELL OLM, taking many of the original features, but also making the OLM negotiable. This feature is currently being implemented, and SMILI☺ allows us to clearly illustrate the difference between the original Next-TELL OLM and the extensions for the LEA’s Box OLM. A cross (x) is placed against an element (e.g., ‘complete’ or ‘partial’ for ‘extent of model accessible’ and ‘access to uncertainty’) in each of the columns/purposes for which the opening of the learner model is intended. Where there are two crosses (xx), this indicates an especially strong purpose of opening a specific element in the OLM. Table 5 includes those elements that are most relevant to the distinction between the two OLMs selected for illustration here. We also provide an overview of some of the evaluations of the Next-TELL OLM as a means to summarise its use. The LEA’s Box OLM will undergo similar evaluation, but in the context of a negotiated learner model.

Table 5 SMILI☺ open learner modelling framework comparison excerpt

Table 5 shows that the complete learner model (of competencies) is accessible in both OLMs, but in the LEA’s Box OLM, the purpose of opening the full model is stronger with reference to maintaining the accuracy of the learner model, and for user control and trust of the model. This is relevant to the feature of learner model negotiation, which has two primary aims: to increase the accuracy of the learner model content through discussion with the user, and to promote reflection during the negotiation process. Other purposes for viewing the learner model are also relevant to these two OLMs, though there is no mechanism to navigate from the OLM to specific materials or exercises, etc.

In the Next-TELL OLM, the evidence calculations are available to be viewed by users, allowing them to infer the level of certainty in the model (for example, many pieces of similar evidence could be interpreted as high certainty), though the OLM does not itself state the level of certainty of its representations. On inspecting the evidence for learner model entries in the LEA’s Box OLM, the learner can make similar decisions. However, the negotiation of the LEA’s Box learner model will require the system to explicitly state if it has high or low certainty, and whether the learner’s stated confidence in their competencies matches the system’s model of their competencies (see Bull and Al-Shanfari 2015).

Both OLMs allow self, peer and instructor assessments to be given, in addition to automated data in the OLM. Because of the negotiation feature, all sources of evidence are important for maintaining the accuracy of the model—a student may potentially disagree with any source, and these sources need to be identified. The negotiation also offers an equal level of control to the learner, in contrast to the Next-TELL OLM, which retains greater control: it allows users to add evidence (as does the LEA’s Box OLM), but does not permit discussion or negotiation aimed at changing the learner model contents. Both OLMs are inspectable, but the LEA’s Box OLM can also be negotiated. This feature allows learner model inspection to be initiated by the system in LEA’s Box, as well as by the student.

We now discuss important changes since the original framework and then present a revised form of SMILI☺. It is worth highlighting at this stage that the second column of Table 4 indicates the stakeholders we explicitly identified: the system for which the learner model was designed, the user (learner), a peer learner, an instructor (parent, teacher, facilitator, mentor) and a catch-all other. We will return to these users later.

The Emerging Nature of a Learner Model: a New Definition

Before we address the nature of OLMs in greater depth, it is timely to revisit the definition and evolution of the notion of a learner model. This is important because we will argue that the emerging importance of OLMs is related to a radical shift in the nature of learner modelling, compared with the early foundations of AIED research. It has certainly influenced our current understanding of learner models, compared with the time we created SMILI☺. We now analyse the evolution of learner models especially in terms of their nature, but also their role and the shift in the perception of the learner and other stakeholders.

The Nature of the Learner Model

The foundation goal of the AIED and ITS research community was to create highly effective computer-based teaching systems. These systems were described in terms of four key components: domain expertise, teaching expertise, student model and interface. Notably, Self (1994, 1998) argued that the defining aspect of an intelligent teaching system was its student model (Self 1999). This was needed for the system to care, precisely because it is the learner model that drives personalisation of the teaching. All four components provided fertile ground for Artificial Intelligence research. In the early vision, this was intended to be a two-way street, with the challenges of creating intelligent tutoring driving Artificial Intelligence innovation. There was also a view that there should be a symbiotic relationship between Artificial Intelligence in Education and Cognitive Psychology, with the expectation that each could learn from the other (Anderson et al. 1990; Kay 2012). Learner models had a special role in this respect: research on the ways that human minds represent knowledge and how people reason could inform the design of student models.

Although there has been a wide diversity of learner model representations and modelling techniques, it is possible to identify three important classes of learner model. The earliest days saw a dichotomy between cognitive and pragmatic learner models. More recently, we have also seen the emergence of a third important class: data-intensive, automatically generated models that harness huge and still growing bodies of learning data to create learner models. This trend parallels a shift from deep AI to statistical techniques. However, despite the diversity of types of learner model representation, there is a consistent view of the learner model as a dynamic representation of the learner’s knowledge (lack of it, misconceptions and similar) as it evolves through the learning interactions. The representation of a learner model requires two key elements: the ontology for the aspects to be modelled, and a means to use learning data, as it becomes available, to infer what the learner knows. The AI part of AIED drove considerable work that used learning data as the evidence for learner modelling. Another approach is to create interfaces that enable the learner or others to volunteer their judgements of what the learner knows. (In the latter case, this is often to complement automatically-inferred data, rather than to replace it.)

One important foundational strand of research aimed to create cognitively based student models. Anderson et al. (1990) created the “cognitive tutors” which had detailed, high fidelity student models based on rules that were intended to represent the ways that a learner actually reasons. The process of student modelling was based on model-tracing (Corbett and Anderson 1994) where an individual student’s learning process was tracked in terms of their path in this model. The other major class of cognitively based student modelling is the constraint-based tutors (Mitrovic 2003). These model learners in terms of whether their input satisfies or violates constraints. Both of these forms of student modelling have been very successful, producing widely deployed teaching systems. For OLMs, this class of student model poses challenges because the models are complex and detailed. It is unclear whether it is useful to show the full extent of these to a student, or how to do so meaningfully. However, both the cognitive tutors and constraint-based tutors have provided OLMs (e.g., Long and Aleven (2013) and Mitrovic and Martin (2007), respectively). These are skill-o-meters or skill meters that give the student an overview of their progress, by summarising the state of large parts of the actual student model. Rigorous evaluations such as in the above references have demonstrated the value of such skill meters in improving learning, especially for the lower achieving students.
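As a rough illustration of these ideas, the sketch below pairs a toy constraint (in the spirit of constraint-based modelling, where learner input either satisfies or violates constraints) with a skill meter that summarises many detailed constraint results as a single progress value. The constraint itself, the history length and the summarisation rule are assumptions for illustration, not the representations used by the cited tutors.

```python
from typing import Callable, List

class Constraint:
    """Toy constraint: if the relevance condition holds for the learner's answer,
    the satisfaction condition should hold too (in the spirit of constraint-based
    modelling)."""
    def __init__(self, name: str,
                 relevant: Callable[[str], bool],
                 satisfied: Callable[[str], bool]):
        self.name, self.relevant, self.satisfied = name, relevant, satisfied

# Hypothetical constraint for SQL-like answers: a SELECT needs a FROM clause.
CONSTRAINTS = [
    Constraint("select-needs-from",
               relevant=lambda ans: "SELECT" in ans.upper(),
               satisfied=lambda ans: "FROM" in ans.upper()),
]

def record_attempt(model: dict, answer: str, history: int = 10) -> None:
    """For each relevant constraint, record whether this answer satisfied it."""
    for c in CONSTRAINTS:
        if c.relevant(answer):
            attempts = model.setdefault(c.name, [])
            attempts.append(c.satisfied(answer))
            del attempts[:-history]              # keep only recent attempts

def skill_meter(model: dict, constraint_names: List[str]) -> float:
    """Summarise many detailed constraint results as one skill-meter value."""
    results = [r for n in constraint_names for r in model.get(n, [])]
    return sum(results) / len(results) if results else 0.0

model: dict = {}
record_attempt(model, "SELECT name WHERE age > 21")   # violates the constraint
record_attempt(model, "SELECT name FROM people")      # satisfies it
print(skill_meter(model, ["select-needs-from"]))      # 0.5
```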

In contrast to these cognitively based models, most student modelling research took what we will call a pragmatic approach, with no claim to cognitive validity. For example, in the seminal early work by Carbonell (1970), the teaching domain was represented as a semantic network. The student’s progress was modelled as an overlay on this.

Given that the foundation research had such a strong focus on intelligent teaching and cognitive validity, there was considerable discussion about the cost of defining the model. Notably, Self (1990) argued that the “intractable” problem of student modelling could be bypassed if the student model represented just what the system actually needed to perform its teaching actions. This often meant that a very useful learner model could, in fact, be quite simple. Indeed, a simple array of mastery scores for a small set of learning outcomes could be useful for personalisation. If a learner model is simple, this also makes it easier to create an effective OLM interface that is closely matched to the underlying representation. Better yet, given the demonstrated benefits of OLMs, the designer of the learner model could consider the design of the OLM interface at the same time as they designed the representation and reasoning for personalised teaching (Kay 1994; Kay and Kummerfeld 2012).
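The following minimal sketch illustrates the point: a learner model that is nothing more than mastery scores over a small set of outcomes can still drive a useful personalisation decision, and an OLM view (here, text skill meters) that closely matches that representation is correspondingly easy to build. Outcome names, the threshold and the display format are invented for the example.

```python
# A deliberately simple learner model: one mastery score per learning outcome.
learner_model = {
    "loops": 0.9,
    "recursion": 0.35,
    "list comprehensions": 0.6,
}

def next_topic(model: dict, threshold: float = 0.7) -> str:
    """Personalisation from the simple model: suggest the least mastered outcome
    that is still below the mastery threshold."""
    below = {outcome: score for outcome, score in model.items() if score < threshold}
    return min(below, key=below.get) if below else "all outcomes mastered"

def skill_meters(model: dict) -> None:
    """A matching OLM view is equally simple: one text bar per outcome."""
    for outcome, score in sorted(model.items()):
        print(f"{outcome:20s} [{'#' * int(score * 10):<10}] {score:.0%}")

print(next_topic(learner_model))   # recursion
skill_meters(learner_model)
```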

Moore’s Law has transformed computing (see Moore 1965, 2006). Early AIED systems had to operate with computational resources and storage that were very modest compared to modern smart watches and other low cost devices. This influenced the design of those learner models. Importantly, it used to be impractical to keep all the long term data about a learner, so the model typically held a subset, in compressed or summarised form. This was typically limited to the current session. Multi-session learner models, where they existed, were often very simple, for example holding the average of a small set of the most recent data. However, cloud computing has become widely available since SMILI☺, which means that we can now rely on low cost, long term storage and so can design very large learner models that are kept over the very long term, potentially including a lifetime of learning data.

More recently, there has been an explosion of data rich approaches. One important class of these is based on educational data mining research (Baker and Yacef 2009; Beck and Woolf 2000). This is a green field area for OLM research. We are not aware of any work that has aimed to integrate the design of an OLM with the process of automatically creating student models from large amounts of learning data. Another data rich approach is to simply keep all the raw data about the learner in the learner model and interpret it at run time as needed. One early use of this approach was by Kay (1994). Current widely used learning technology, including LMSs, as well as the emerging MOOC platforms, keeps huge amounts of data about learners. Over 12 years ago, Mazza and Dimitrova (2003) created valuable interfaces for LMS data. This was a forerunner of current Learning Analytics. Notably, these interfaces were designed for teachers to see the level of activity around learning resources. This approach can also be seen in MOOCs, which currently store learning data in rather arbitrary formats and structures (Cook et al. 2015). This has been recognised as limiting the potential value of that data, especially for educational data mining. Some systematic database models are beginning to emerge (Veeramachaneni et al. 2013; Pardos and Kao 2015). However, we emphasise that these data stores have not yet been designed as learner models; they are framed around the learning resources, not a model of the knowledge or skills of the learner.

We see another recent trend that affects the nature of learner models. This is due to the emergence of sensors that can capture substantial amounts of data to model important aspects of a person. For example, eye-trackers can provide detailed data about the learner’s focus and gaze for learner modelling (Kardan and Conati 2012). Similarly, it has been possible to model emotions with the help of sensors (Arroyo et al. 2009; Woolf 2010). Taking a far broader view of learning, the many sensors used by the Quantified Self movement (Rivera-Pelayo et al. 2012) can be used to create models of a person’s progress on their most important goals, such as learning to regulate their behaviour to improve their health (Kennedy et al. 2012). This is also in line with our previous observation that OLMs may now need to be able to combine data from multiple sources for visualisation (Bull et al. 2012).

Role and Perceptions of the Learner Model

We now consider important shifts in the role of learner models. In early work, both cognitively based and pragmatic student models were deeply embedded within the teaching system, used by the system to personalise learning. This partly reflected the state of computing, where a program ran on a stand-alone computer. The growth of the web opened the possibility of learner model servers (Brusilovsky et al. 2005; Kay et al. 2002; Zapata-Rivera and Greer 2004b). This is reflected in the emergence of independent open learner models (Bull et al. 2008, 2012; Conejo et al. 2011; Kay 2008). These may support reuse of parts of the learner model by different applications, or allow the creation of OLMs that enable a learner to see their progress, potentially based on data from many sources, including various learning applications, over the long term (Bull and Gardner 2009; Gluga et al. 2010, 2013). This transforms the learner model from its role as one part of a teaching system into a first class citizen, with a valuable role outside any one teaching system. This shift in role has enabled the OLM to be an important source of information to prompt self-regulated learning and metacognitive skills (Bull and Kay 2013). This is important not only for learning in a specific domain, but also for developing deep learning and metacognitive approaches to learning more generally.

The final important shift that we discuss relates to perceptions of the learner model. One perspective can be seen in the use of the term student model in early work, whereas we now more frequently use the term learner model. The newer term highlights the active role of the learner, and is also applicable outside formal learning contexts, for example in the workplace. Self (1974) described the earlier view of a student model as enabling

“a human teacher or the teaching program itself … to determine how much [a student] knows at any time”.

It is notable that, more than 40 years ago, Self already set a foundation for opening the learner model for the human teacher. It is also notable that this description does not mention use of the model by the learner.

When we created the SMILI☺ framework, we considered the use of the learner model by a teaching system, the learner, their peers, an instructor and “others”. Our catch-all “other” reflected our view of the state of OLM work at the time. We had considered people with teacher-like or mentoring roles. This includes OLM work such as Lee and Bull (2008) and Zapata-Rivera et al. (2007) with OLMs for use by parents. We had not considered the full range of other stakeholders who have good reason to be interested in learning data, such as school leaders and policy-makers. By contrast, the origins of Learning or Academic Analytics had a very different starting point, with a strong focus on institutional use of learning data (Long and Siemens 2011). This, too, calls for useful information about an individual learner, for example to identify at-risk students. It also relies on effective interfaces for understanding the learning analytics data. Another role for learning data is to create new knowledge about learning. In this case the intended stakeholder is the educational researcher. This is a role that has emerged with research in Educational Data Mining (EDM), Learning Analytics and Learning@Scale. We will return later to the potential links between OLMs and these newer stakeholders.

Open Learner Models Since the SMILI☺ Open Learner Modelling Framework

As stated above, at the time we developed the SMILI☺ Framework, there was already increasing interest in opening the learner models of intelligent tutoring systems. This increase in their use has continued both in the more traditional adaptive teaching systems, such as constraint-based tutors (Duan et al. 2010) and cognitive tutors (Long and Aleven 2013); and in systems using newer technologies and displays, for example: open social learner models (Brusilovsky et al. 2011); using Facebook to discuss learner model contents (Alotaibi and Bull 2012); e-portfolio and independent OLMs (Raybourn and Regan 2011); OLMs in MOOCs (Cook et al. 2015); systems taking data for an independent OLM from a variety of applications (Bull et al. 2012). In addition to more traditional learner model visualisations such as skill meters (e.g., Corbett and Bhatnagar 1997; Mitrovic and Martin 2007) and concept maps (e.g., Mabbott and Bull 2004; Perez-Marin et al. 2007; Rueda et al. 2003), innovative visualisation methods have been deployed, such as treemaps (Brusilovsky et al. 2011; Johnson et al. 2013; Kump et al. 2012) and word or tag clouds (Johnson et al. 2013; Mathews et al. 2012). Therefore, interest in OLMs has been maintained as systems embrace the opportunities that new technologies offer for learning.

We now introduce some recent examples of OLMs, well beyond what we had considered when we created SMILI☺. These illustrate some of the drivers for the updated framework we present in the next section.

OLMs with Interactive Tabletops

Figure 2 shows a classroom with interactive tabletops (Martinez Maldonado et al. 2011, 2012). The students are doing a collaborative concept-mapping task. The teacher, standing on the left in the figure, wanted to make use of the affordances of interactive tabletops to gain a new way to tackle key challenges in small group teaching. During a class, she needs to maintain awareness of the progress of each group so that she can use this information to decide which group most requires attention. In this classroom, the OLM on her tablet (on the right of Fig. 2) helps by showing the progress of each group. It was designed in collaboration with the teacher, who defined the purposes she needed the learner model to serve. In this case, as well as showing the progress of each group, the OLM also displays the progress of each individual on the learning activity she designed. The same underlying system can also show complex models of the quality of group collaboration, based on each learner’s touches and speech. We introduce this example for several reasons. It is one example of emerging technology, in this case surface computing, augmented with a Kinect to provide identification of touch actions, combined with sophisticated directional microphones. This provides a completely new level of information about face-to-face small-group learning processes and progress, enabling the modelling of the collaboration within each group. Importantly, as stated above, the OLM was designed in collaboration with the teacher, based on user-centred design approaches. The work was evaluated in-the-wild, in authentic classes. This work is also notable because the early lab versions made use of sophisticated machine learning to build models of group collaboration. When we moved to an authentic setting, the time pressure of a 60-min scheduled tutorial class and the constraints of the actual curriculum meant that the learner model in that setting became a simpler representation of the concept map created by each group on a concept-mapping task (Martinez-Maldonado et al. 2015). Based on user-centred design processes, we came up with the display at the right of Fig. 2. Each colour block represents one group. The darker lines are an overview of the overlay learner model, showing that group’s propositions that match those in the teacher’s expert model, while the lighter extensions indicate other propositions.

Fig. 2
figure 2

Interactive tabletops, one new way to collect learning data, and transform it into a learner model (on the right), in this case for the teacher to use
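A small sketch of the overlay idea behind the display at the right of Fig. 2 is given below: each group’s concept-map propositions are compared against the teacher’s expert map, with matched propositions corresponding to the darker line and unmatched ones to the lighter extension. The proposition representation and the example maps are invented for illustration and are not the actual system’s data format.

```python
from typing import Dict, Set, Tuple

Proposition = Tuple[str, str, str]   # (concept, linking phrase, concept)

def group_overlay(group_map: Set[Proposition],
                  expert_map: Set[Proposition]) -> Dict[str, float]:
    """Overlay-style summary of one group's concept map against the teacher's
    expert map: matched propositions (darker line) vs. other propositions
    (lighter extension)."""
    matched = group_map & expert_map
    other = group_map - expert_map
    return {
        "matched": len(matched),                                   # darker line
        "other": len(other),                                       # lighter extension
        "coverage": len(matched) / len(expert_map) if expert_map else 0.0,
    }

expert = {("evaporation", "causes", "clouds"), ("clouds", "produce", "rain")}
group = {("evaporation", "causes", "clouds"), ("rain", "falls on", "land")}
print(group_overlay(group, expert))
# {'matched': 1, 'other': 1, 'coverage': 0.5}
```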

Another example of an OLM with an interactive tabletop is the use of skill meters together with a (physical) empathic robotic tutor (Jones et al. 2014), as illustrated in Fig. 3. The robot aims to scaffold the learner’s use of their skill meters, which are displayed on the tabletop (right of Fig. 3) together with the map-based geography task. The robot interactions are being designed following a human-centred design approach to identify how teachers scaffold students’ use of the skill meters in the classroom setting (Jones et al. 2015). Subsequent use of robotic tutors in this way should enable human teachers to spend more time with students who require it, while other learners remain engaged with the robot and the task of reflecting on their learning with an open learner model.

Fig. 3
figure 3

Robot and user at an interactive tabletop: robot scaffolding use of the OLM

OLMs with Large Scale Online Learning

We now consider a very different class of OLM, shown in Fig. 4. This comes from a maths course in the Khan Academy online platform, an example of emerging mainstream learning support. In this case, the learner has just completed a pre-test and is presented with an overview of their progress in relation to the complete set of 104 skills (right) and in terms of the level of achievement from the lowest level (practiced) up to the highest level (mastery). This also illustrates the use of badges, which may be seen as a form of OLM. This example is important because it has a range of visualisations that present three views of the learner’s knowledge, against a carefully defined “curriculum”. While we doubt that the learners or designers call these OLMs, this nevertheless shows that the ideas behind OLMs are becoming mainstream. Platforms like this offer many opportunities to conduct in-the-wild studies of the effectiveness of varied approaches to the design and use of OLMs and other learning visualisations.

Fig. 4
figure 4

Visualisation in Khan Academy, after a student has done a pre-test for a maths unit

OLMs, Big Data and Learning Analytics

An important general development in numerous fields is that of big data. Along with this, there has been substantial interest in learning analytics, which includes visualisation of educational data (Klerkx et al. 2014; Tervakari et al. 2014); and learning analytics dashboards are being developed to help users better understand the data (Brown 2012; Charleer et al. 2014; Duval 2011; Verbert et al. 2013). While visual learning analytics approaches have gained widespread interest, their attention is often on performance, activity completion, navigation or behaviour-focussed statistics. They are also more typically aimed at teachers or other stakeholders such as school or educational leaders rather than students, though the recognition of the importance of learning analytics visualisations for learners is becoming stronger (e.g., Corrin and de Barba 2014; Dawson et al. 2012; Durall and Gros 2014; Grann and Bushway 2014). Open learner model visualisations could be seen as a specific type of learning analytics, in that the visualisation is of the learner model. As visual learning analytics aims to make complex data available to help users interpret aspects of learning, so too do open learner models. A core difference, however, is that in open learner models, the inferences about learning have already taken place as part of the modelling process. This can, therefore, provide an additional step to help those (teachers, learners and others) who may wish to use data about learners’ current knowledge or competency states, rather than the activity or behaviour data that they still need to interpret. While there are exceptions (e.g., Bull et al. 2013; Durall and Gros 2014; Ferguson 2012; Kalz 2014; Kay and Bull 2015; Nussbaumer et al. 2015), there has been relatively little reference to both learning analytics visualisations and open learner models in single publications. The time has arrived for these two fields to engage more fully with each other’s work, at least in the overlapping goals of providing meaningful visualisations about what students can achieve, rather than the behaviour that has been demonstrated. Connecting learning analytics with open learner models can help learning analytics visualisations become more meaningful for classrooms, while the experiences with big data and visual analytics can facilitate the development of OLMs with today's data-rich and evidence-based online learning opportunities.

This section introduced carefully chosen examples of classes of post-SMILI☺ OLMs. They highlight several important differences from the systems we had in mind when we created the framework. First, these newer OLMs are characterised by far richer sources of data, such as the multi-modal aggregated data of the tabletop classroom. That example also illustrates the need for user-centred approaches to designing the OLMs that a teacher needs and wants. It shows the gap between research visions (such as supporting long term learning of group work skills) and pragmatic classroom OLMs (such as our teacher wanting to support her class orchestration, tracking and advice giving). Both this and the robot example illustrate the diverse emerging interfaces for learning, so different from the WIMP interaction that we assumed when creating SMILI☺. The tabletop classroom and the large-scale online learning and learning analytics examples illustrate cases where there is vastly more data than in the systems that informed the design of SMILI☺. Perhaps this is the biggest change since our earlier paper; the volume of learning data has grown tremendously to include large streams of data from learners’ digital footprints over the long term. We now need to consider how these changes should drive revisions to SMILI☺. It is somewhat surprising that the original SMILI☺ still addresses issues associated with most of these changes. This was partly because we had carefully considered most of these issues, albeit not necessarily appreciating which would increase in prominence.

In our view, in 2007 the increasing development and use of OLMs required the SMILI☺ Framework to help researchers and developers to consistently describe, analyse and compare their OLMs. It is perhaps now even more important to find a consistent approach not only to distinguish different OLMs, and highlight their core features and goals, but also to emphasise the differences between what can be achieved with OLMs compared to the most common approaches in visual learning analytics. Our revised SMILI☺ Framework aims to incorporate features that are likely to relate to, and be important in, both fields and perhaps also other related fields, including visualisation of preference data from user modelling more generally. The Framework can be readily adapted to incorporate elements and purposes that are more important to other fields (e.g., the purpose of identifying the most useful learning materials, or the most frequent contributors to a discussion, in learning analytics).

How Has the SMILI☺ Open Learner Modelling Framework Been Used?

When assessing whether SMILI☺ has been widely used to describe OLMs as we initially envisaged, the answer is clearly ‘no’. To see how it has been used, we reviewed the 154 papers listed in Google Scholar as citing SMILI☺ (on 18 Feb. 2015) and performed a thematic analysis based on the titles and abstracts. Some papers are coded for multiple themes. Table 6 shows the themes and uses of OLMs over this time. The most common uses were in conjunction with studies of interface innovations in OLMs and for design-informing aspects of OLM work. There has been a small, perhaps growing, body of papers linking OLMs to research on metacognition.

Table 6 Thematic analysis of uses of SMILI☺

Although the original SMILI☺ paper is cited as indicated in Table 6, this has mostly been as a reference to the existence of the framework (e.g., Brusilovsky et al. 2011; Paramythis et al. 2010), as a detailed review and/or as giving examples of open learner models (e.g., Hsiao and Brusilovsky 2012; Kump et al. 2012), or with reference to the elements and especially purposes of OLMs (e.g., Long and Aleven 2013; Martin et al. 2009; Verginis et al. 2011), rather than using the full framework to describe the OLMs presented by the authors. There could be several reasons for this, for example:

  1. The SMILI☺ Open Learner Modelling Framework may be too complicated to use. Defining an OLM using the SMILI☺ Framework is intensive, requiring reflection on, and explicit identification of, all purposes and elements of the OLM, and how they relate to each other. This must be done in sufficient detail to allow the OLM to be fully understood by others, and properly compared with other OLMs. The kind of information shown in Tables 1, 2, 3, 4 and 5 would take considerable time to define for all relevant elements and purposes of a complex OLM.

  2. Perhaps some features of the framework were not considered in the design of OLMs, and so people did not report on them. We would not expect any particular OLM to contain all or even most of the features incorporated in the framework, but we would nevertheless encourage researchers to highlight the core features of their OLMs.

  3. It may be more difficult to publish papers with strong descriptive components, as a paper fully using the SMILI☺ Framework would inevitably be. Perhaps the framework is sometimes consulted during OLM design, but this aspect of the design is not published because of page or word-count restrictions.

  4. It may be the case that the SMILI☺ Framework is not sufficiently powerful to define all aspects of the OLM that authors wish to present. In such cases we encourage researchers to extend the framework as meets their needs, as we do below.

It would have been good to be able to report that the SMILI☺ Open Learner Modelling Framework had been used extensively, and had supported OLM designers in the manner intended. The possibility raised in point 3, of the SMILI☺ Framework being consulted even if this is not subsequently published, is illustrated in the theses of research students (e.g., Girard 2011; Velez Ramos 2009). To some extent this allows us to conclude that the possibility of it being too complicated to use (point 1) is not the case. It is also suggestive of the possibility that standard papers do not easily allow the level of detail to be reported (point 3). This also applies to our own publications, where there has been insufficient space to use the full framework. For example, in describing the Next-TELL OLM design, we used a reduced version which displays the elements (rows) but not the columns (purposes). The purposes were referred to in the text, but were not broken down into detail (Johnson and Bull 2015). While, as stated above, the purposes of learner modelling and relevant elements in the SMILI☺ Framework have clearly been recognised, and the paper has been recognised as a state-of-the-art review (for the time), actual use of the SMILI☺ Framework to define OLMs may be more difficult to determine.

The Revised SMILI☺ Open Learner Modelling Framework

We created the original SMILI☺ Framework because there was no common or systematic way of describing and analysing OLMs, which made the comparison of OLMs and OLM research more difficult. We had hoped that OLM designers and learner modelling researchers who were looking for solutions would find that SMILI☺ helped them recognise and describe the crucial features of their OLMs, without having to first study many OLMs to discover the diverse ways in which OLMs have already been used. An early version of SMILI☺ was tested at a well-attended workshop (Bull and Kay 2005), where participants used the framework to describe their own work. This activity was met with enthusiasm and provided a compact overview of many OLMs. This indicated that SMILI☺ had a role as a descriptive tool. We therefore used the feedback and discussion from participants to help create SMILI☺ (Bull and Kay 2007). In light of the changes in learner modelling and OLMs as described in “The Emerging Nature of a Learner Model: a New Definition”, as well as the way that SMILI☺ has been used, described in “How Has the SMILI☺ Open Learner Modelling Framework Been Used?”, we now present an updated version.

Our revised framework has two versions. The simpler version is more lightweight: it calls only for the relevant purposes to be identified in the text, without requiring the breakdown in the SMILI☺ Table (as in Johnson and Bull 2015). The second version retains the full complexity of the earlier version, for situations where the reporting space allows for it. It also allows the full framework to be available for the design phase of OLMs.

Table 7 returns to the foundation questions. These are largely unchanged. The one new element is in bold, and we have reformatted the questions to make it clearer that this is the set of essential questions which designers should consider and which can be used to describe OLMs. The first two are tightly interlinked: the context is a key driver for the purposes, and both of these aspects define the ways that the OLM can be evaluated. Our earlier paper provided definitions and motivations for these first two questions. This was adequate for descriptive purposes. However, there is a need for more work to establish a set of recommended and standard approaches to evaluation, matched to the contexts of the OLM and its purposes.

Table 7 Core questions for designing and describing OLMs (bold indicates new elements)

We now consider the third question of Table 7, the purposes an OLM can serve. These were the columns of the earlier framework. We have slightly revised the text of our earlier purposes and reordered them to show important groupings (applicable for both versions of the framework). We have also added three new purposes, shown in bold font in Table 8.

Table 8 WHY create the OLM: what is the purpose of the OLM? (Q3 in Table 7) (bold indicates new elements, italic indicates metacognitive elements)

The first pair of purposes in Table 8 indicate that the OLM can give the learner access to, and control over, the model, as well as the ability to contribute to it. This is central to ensuring users have the control needed for their privacy (Pardo and Siemens 2014). This also incorporates the issue of user trust in the model. The next block of purposes, shown in italic, relates to metacognitive roles for the learner model, such as reflection, planning and self-monitoring. While this had been a purpose of earlier negotiated OLMs (e.g., Bull and Pain 1995; Dimitrova 2003), it is an aspect we came to appreciate more fully for OLMs in general since the original SMILI☺ paper (subsequently explored more extensively in: Bull and Kay 2008, 2013). These metacognitive purposes have been the most common drivers for creating many OLMs (e.g., Feyzi-Behnagh et al. 2013; Long and Aleven 2013; Verginis et al. 2011).

The new purposes take account of important new trends that have emerged since the original version of SMILI☺. One addition encompasses interface agents. These might operate in two ways: the agent can serve as an OLM, interacting and negotiating with the learner, for example discussing reflection, planning and monitoring (extending, for example, the suggestion of an interface agent providing graphs for feedback (Hu et al. 2013)); or the agent might help the learner make effective use of a more conventional form of OLM (as in the example of the physical robotic agent supporting the use of skill meters in Fig. 3 (Jones et al. 2015)). The new purpose on the potential to promote positive affective states is based on an extension to the framework made by Girard (2011).

The last three purposes are more pragmatic. The first, navigation, has long been a valuable role. For example, ELM-ART provided an OLM as a list of the course topics in a programming course; these were colour-coded to highlight learning topics completed, those recommended based on the learner’s current state of knowledge, as well as those not recommended because the learner had not demonstrated mastery of pre-requisites (Weber and Brusilovsky 2001). A more recent example operated in a very different context, semester-long group software projects; this OLM gave a unified view of data from learners’ activities in a complex information space including a wiki, version control system and issue tracker. The OLM helped learners and teachers see a high level view of each student’s activity, using this to navigate to the detailed evidence (Upton and Kay 2009). But we also see OLMs of this type in widespread interfaces, such as Google page visits, reviewed in Kay and Kummerfeld (2012). The final purpose of assessment has placed greater emphasis on formative assessment, and overlaps in this regard with metacognitive purposes such as reflection and self-monitoring. We have added the purpose of aggregation of data from multiple sources because this reflects the emerging value of an interface that enables a learner to see data about their learning that has been collected in various systems and/or settings (as in Bull et al. 2012). This is particularly important for blended learning (see e.g., Velez et al. 2009). It is also valuable for lifelong learning that makes use of the sensor data that the Quantified Self community is using for diverse and important lifelong goals, such as learning how to understand and change behaviour to become healthier and happier. Other emerging possibilities include the role of the OLM in supporting reflection in games. This is because a defining feature of games is to make the learning experience very engaging, even immersive; this makes it important to carefully consider the ways to integrate a break period for reflection when the learner reaches a suitable point in the game. In contrast to the other cases where new purposes are added, our existing purpose of promoting reflection can already encompass this. This also demonstrates the flexibility of the original framework—while additional purposes can be added as required, as described above, new trends in open learner modelling may also be accommodated into the framework without it needing to be modified.
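To make the navigation purpose concrete, the sketch below colour-codes topics in the style of the ELM-ART annotation described above: completed, recommended, or not recommended depending on the learner model and prerequisites. The topic names, prerequisite structure and mastery threshold are hypothetical.

```python
# Topics and their prerequisites, in the spirit of the traffic-light style
# navigation support described above (names and threshold are hypothetical).
TOPICS = {
    "variables": [],
    "loops": ["variables"],
    "recursion": ["loops"],
}

def annotate_for_navigation(model: dict, mastered: float = 0.7) -> dict:
    """Colour-code each topic from the learner model to support navigation."""
    annotations = {}
    for topic, prereqs in TOPICS.items():
        if model.get(topic, 0.0) >= mastered:
            annotations[topic] = "completed"
        elif all(model.get(p, 0.0) >= mastered for p in prereqs):
            annotations[topic] = "recommended"        # ready to learn now
        else:
            annotations[topic] = "not recommended"    # prerequisites not mastered
    return annotations

print(annotate_for_navigation({"variables": 0.9, "loops": 0.4}))
# {'variables': 'completed', 'loops': 'recommended', 'recursion': 'not recommended'}
```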

We now consider the fourth question in Table 7, the aspects or elements of the learner model that are made open. This remains as in the original (Table 9).

Table 9 WHAT aspects of the learner model are open?

The next two questions from Table 7 are shown below, in the same form as the initial paper.

  • HOW is the learner model presented?

    • presentation or visualisation of the learner model (e.g., text or graphical, with reference to specific details or the whole model)

    • method of accessing the learner model (inspectable, cooperative, negotiated, editable, etc.)

    • flexibility of access to the learner model (availability of different views, level of detail)

  • WHO controls access to the learner model?

    • learner model access initiative (from system, user)

    • control over learner model access (complete or partial, user or system)

These match the earlier goals of describing OLMs. Today, these need to be reconsidered as we discuss at the end of the paper.

We have also added another WHO question, WHO is the intended user: who may access the OLM? This reflects the changing nature of learner modelling, with models stored in the cloud, over the long term, and with many easy means to make them available to people other than the learner and those closely involved in a narrow learning context. This question considers the increasing access to OLMs by users who are not the learner, and encompasses the broader access to learner data in Learning Analytics (e.g., institutional or organisational use of data; see, for example, Siemens (2013)). This broadens the initial SMILI☺ analysis, also allowing description of peer learners and instructors as potentially controlling input to the learner model, access initiative and control over access.

We now illustrate the revised SMILI☺ Framework, first as a way to create a detailed description of an OLM. Table 5 did this for part of the framework, comparing the Next-TELL and LEA’s Box OLMs. Table 10 completes this for the LEA’s Box OLM, which uses the same, or very similar, visualisations to those shown for the Next-TELL OLM in Fig. 1 (Bull et al. 2015; Johnson et al. 2013). As in previous tables, Table 10 illustrates the cells available for researchers and system designers to indicate the elements that apply in their OLM against the various purposes for opening the learner model. Table 10 includes the new purposes identified above (underlined): OLM as interface agents, OLM to promote positive affective states, OLM comprising data from multiple sources.

Table 10 Example of using SMILI☺ to describe an OLM in detail (LEA’s Box OLM)

The LEA’s Box OLM can be visualised to both learners and teachers. Table 10 shows use of the OLM by students, and is interpreted as follows. Of the various purposes for opening the learner model from the original framework, the LEA’s Box OLM has all but supporting navigation amongst its main purposes. This column is completely empty. This is because, unlike OLMs such as in ELM-ART (Weber and Brusilovsky 2001) that provide links directly to materials that are considered appropriate for the learner’s current skills, the role of the LEA’s Box OLM is to combine data from potentially many systems. Therefore it does not itself include learning materials or quizzes, etc. However, one aspect of negotiation may be to point the learner back to further engage with a system that has provided data. Nevertheless, this is not considered a primary purpose for the LEA’s Box OLM. Of the new purposes (underlined), displaying data from multiple sources is a major aim. The other new purposes, while possibly relevant in future developments, are not central to the OLM in its current version, nor are there any immediate plans to incorporate these purposes.

Table 10 shows that the OLM is considered useful to support collaborative or competitive interaction amongst students, but this is a ‘lesser’ purpose. Similarly, while the user’s right to have access to data about themselves is important, this was not one of the main aims of opening the learner model in LEA’s Box. The primary aims were to visualise data from multiple sources (one of the new purposes); to make the learner model available as a means of formative assessment; and to increase the accuracy of the learner model by student-system negotiation of its contents, while also promoting metacognitive purposes such as reflection and planning during the discussion. Table 10 has also split the learner’s right to view their learner model from the purposes of allowing the learner greater control over their model, and fostering greater trust in the system. This is because, as mentioned above, the learner’s right to view the data is not a primary purpose of making the learner model available to them, but allowing them greater control over the learner model contents with the negotiation facility is central. Likewise, the aim to encourage trust in the learner model is a key purpose of the LEA’s Box OLM, since the data comes from many sources, which may have different levels of contribution to the data for any competency.

Figure 5 illustrates how the various data sources are communicated in the OLM: each colour represents a different data source. Thus, in this example, three activities/systems/data sources have contributed to the learner model, with one, two or three sources providing data for each area of expertise. Figure 5 shows the breakdown of the skill meters from Fig. 1, and also two of the other Next-TELL and LEA’s Box OLM visualisations: smiley faces, for younger learners, and a radar plot, which allows easy comparison of skills across different areas of a curriculum.

Fig. 5

OLM visualisations broken down by data source
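
To make the multiple-source representation concrete, the following minimal sketch (in Python) shows one way per-source evidence for a competency could be stored and combined into the single value shown in a skill meter, while retaining the per-source breakdown for displays such as those in Fig. 5. The class names, fields and the simple weighted mean are illustrative assumptions, not the LEA’s Box implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SourceEvidence:
    """One data source's contribution to a competency (illustrative)."""
    source_id: str       # e.g. "Quiz 1", "Tool 1"
    value: float         # normalised 0..1 competency estimate from this source
    weight: float = 1.0  # how strongly this source counts towards the overall value

@dataclass
class Competency:
    name: str
    evidence: list = field(default_factory=list)  # list of SourceEvidence

    def add_evidence(self, ev: SourceEvidence) -> None:
        self.evidence.append(ev)

    def overall(self) -> float:
        """Weighted combination shown in the skill meter (assumed weighted mean)."""
        total_weight = sum(ev.weight for ev in self.evidence)
        if total_weight == 0:
            return 0.0
        return sum(ev.value * ev.weight for ev in self.evidence) / total_weight

    def breakdown(self) -> dict:
        """Per-source values, e.g. for colour-coded segments as in Fig. 5."""
        return {ev.source_id: ev.value for ev in self.evidence}

# Example: three sources contribute to one area of expertise.
comp = Competency("Adaptive Hypermedia")
comp.add_evidence(SourceEvidence("Quiz 1", 0.8))
comp.add_evidence(SourceEvidence("Quiz 2", 0.5))
comp.add_evidence(SourceEvidence("Tool 1", 0.6, weight=0.5))
print(comp.overall())    # single value for the skill meter
print(comp.breakdown())  # per-source segments for the visualisation
```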

Table 10, through its empty rows, also makes it easy to see the elements of open learner models that are not relevant in the LEA’s Box OLM. For example, as it is an independent open learner model (not linked to a specific teaching system) and does not offer tutorial guidance, the subsequent learning interaction is not personalised beyond the display of the learner model and the moves made during negotiation of the model. This contrasts with the purpose of encouraging the learner to take greater control, which is typically a strong reason for opening many of the elements. Currently only the learner, the system and the teacher have access to an individual’s learner model. However, the learner has no control over when the system or the teacher can access it.

The important purposes of improving the accuracy of multiple-source open learner models, and of promoting metacognitive activities through learner model negotiation, are expressed in the SMILI☺ Framework by the double crosses (xx) against several of the elements. Of particular relevance to learner model negotiation are: the role of time (the current learner model is discussed; changes cannot be made to previous states); access to sources of input (since a learner may wish to challenge only a single source of data); access method (the learner model must be inspectable and, in particular, negotiable); and access initiative (negotiation must be able to be initiated by either the student or the system, as required, otherwise it would be one-sided and so not a true negotiation). Negotiation episodes can be very simple, as in the following example (Bull and Al-Shanfari 2015):

  • LEARNER: My value for [Competency 1] should be [higher].

  • SYSTEM: Your use of [Tool 1] showed [some difficulties].

  • LEARNER: In [Quiz 1] I did [well]. The value for [Quiz 2] is [too low].

  • SYSTEM: [Quiz 1] was [5 days] ago and you used [Tool 1] [4 days ago]. [Quiz 2] was [1 day] ago. The level of [Competency 1] in [Tool 1] was [easy].

In the LEA’s Box OLM, negotiation is implemented in a straightforward manner, using text templates; square brackets in the example above indicate variables. Here, the OLM accesses the timestamps of data from the learner’s use of an online tool that provides data to the OLM, and from two quizzes. It explains that some of the learner data is older, and that the more recent use of the tool was with a relatively simple task. If the learner does not accept the system’s reasoning, the system can further explain that easier activities may result in higher scores, and it may highlight the difference between a performance score and a competency. It could also point out that old data is less relevant to measuring the current competency level. Through negotiation such as this, in addition to determining an accurate representation for the learner model, the learner should come to better recognise their skills as they consider the evidence provided by the OLM, and as they formulate their own justifications to support their claims. Of course, if the learner’s claim about their learning is demonstrated (for example, by returning to part of a quiz), the system will update the learner model accordingly.
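
To illustrate how such template-based system moves might be generated, the sketch below fills the system’s second utterance from the example dialogue with timestamps and difficulty levels drawn from the evidence. The evidence records, dates and function names are hypothetical; this is a simplified reading of the approach, not the LEA’s Box code.

```python
from datetime import date

# Illustrative evidence records (names, dates and fields are assumptions).
evidence = {
    "Quiz 1": {"difficulty": "medium", "when": date(2016, 5, 2)},
    "Tool 1": {"difficulty": "easy",   "when": date(2016, 5, 3)},
    "Quiz 2": {"difficulty": "medium", "when": date(2016, 5, 6)},
}

def days_ago(source: str, today: date) -> int:
    """How many days ago the activity from this source took place."""
    return (today - evidence[source]["when"]).days

def system_move(competency: str, today: date) -> str:
    """Fill a text template for the system's justification, as in the dialogue above."""
    template = ("[Quiz 1] was [{q1} days] ago and you used [Tool 1] [{t1} days ago]. "
                "[Quiz 2] was [{q2} day] ago. The level of [{comp}] in [Tool 1] "
                "was [{diff}].")
    return template.format(q1=days_ago("Quiz 1", today),
                           t1=days_ago("Tool 1", today),
                           q2=days_ago("Quiz 2", today),
                           comp=competency,
                           diff=evidence["Tool 1"]["difficulty"])

print(system_move("Competency 1", date(2016, 5, 7)))
# -> [Quiz 1] was [5 days] ago and you used [Tool 1] [4 days ago]. [Quiz 2] was [1 day] ago. ...
```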

The example in Table 10 demonstrates how areas of importance in a particular OLM, as well as issues that are less relevant for the same OLM, can be distinguished. We have not provided a detailed explanation of the table here, but rather indicated some of the main points. In research theses there would be space for such explanation, and comparison with other, similar or very different, OLMs could be facilitated by use of the SMILI☺ Framework, as shown previously in Table 5. While Table 5 contained only the excerpts most relevant to our discussion, the full table could also be used. We have therefore retained the complete version of the Framework, with the addition of our three new purposes, as we believe it serves as a useful prompt for issues to consider in the design of OLMs, as well as highlighting differences between them.

In contrast to the detailed version of the SMILI☺ Framework, which remains close to the original, we also offer a lightweight version that lends itself better to shorter publications, thereby facilitating the description and comparison of OLMs as originally intended. For this we require only the elements (rows) to be included, and only those elements that are relevant. Tables 11 and 12 show an excerpt for the Next-TELL OLM (see Johnson and Bull 2015), in its minimal form, for the student and teacher user types. Like Table 5, these omit rows and columns that are not relevant, and so allow some reporting without the full space required for the detailed version of the framework together with extended explanations. This also illustrates how the SMILI☺ Framework can be used to compare important aspects of the design of different elements of an OLM for different target users. Another use of the lightweight version of the Framework could be to compare different elements against different visualisations in a multiple-view OLM, or against different data sources contributing to a multiple-source OLM. Many other ways of comparing aspects of OLMs may be found in future work. While the full SMILI☺ Framework could be used for this, in practice this may be less likely and, indeed, not always necessary.

Table 11 What is available in the OLM?
Table 12 How is the OLM presented?
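
For those who prefer to generate such lightweight tables rather than maintain them by hand, a SMILI☺ description could also be encoded as data and filtered to the relevant rows, as in Tables 11 and 12. The encoding below is purely illustrative (no such tool exists in the projects described here); the element names and notes are examples only.

```python
# Illustrative encoding of a lightweight SMILI☺ description: for each user type,
# map framework elements to a short note, omitting elements that do not apply.
olm_description = {
    "student": {
        "Access method": "inspectable and negotiable",
        "Role of time": "current model only",
        "Access to sources of input": "per-source breakdown shown",
    },
    "teacher": {
        "Access method": "inspectable",
        "Flexibility of presentation": "choice of visualisations",
    },
}

def lightweight_rows(description: dict, user_type: str) -> list:
    """Return only the relevant (element, note) rows for one user type."""
    return sorted(description.get(user_type, {}).items())

# Print the minimal-form rows for students, one row per relevant element.
for element, note in lightweight_rows(olm_description, "student"):
    print(f"{element}: {note}")
```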

Future Uses of SMILI☺ and Research Needed

Our current position is that we still consider SMILI☺ useful for its original purpose of describing OLMs, and even more so as a tool for designers. Essentially, it captures our understanding of the literature and our experience in creating and evaluating OLMs. If the SMILI☺ Open Learner Modelling Framework is used more at the design stage, as explained above, this may not be reflected fully in publications. We believe that our simpler version, based on just the core questions with detail for selected parts only, may be valuable for descriptive reports. We have done this in a book chapter to describe the purposes for opening the Next-TELL OLM to students and teachers, without differentiating the purposes separately for each (Johnson and Bull 2015, and Tables 11 and 12). This could still involve quite a long description, which could be a barrier to reporting in some publications, but it allows greater flexibility to accommodate different publication types while enabling more aspects of OLM design to be reported than currently occurs. Furthermore, the table could be presented including only the components relevant to the purpose of a publication, with only brief explanation. We believe that the full SMILI☺ Framework is still valuable for research theses and, perhaps, project deliverables, given the depth of reporting required for these.

Our updated SMILI☺ highlights areas where more OLM research is needed. Notably, we need tools and research to support the first and last pairs of questions in Table 7. The first, on the context of the OLM, seems ripe for work in MOOCs and other open, large-scale web-based learning software. These have the potential to support A/B studies comparing OLMs and the ways they best fit their learning contexts, including contexts with new interface elements, notably games and interface agents. The second question, about evaluation, relates both to the context of the OLM and its use, and to the purposes of opening the learner model. The gold standard of evaluation in AIED is to assess whether students learn more, perhaps with more nuanced studies assessing the value of OLMs for particular groups (for example, students who began with low initial performance). But the OLM is essentially an interface element, which suggests that specialised forms of classic HCI usability measures might also be valuable. This is another area where research is needed.

This paper has revised our set of questions in the light of our experience and of changes in the field, with the addition of new purposes to the framework. The fourth question, on the aspects of the learner model to open, remains as in the initial framework, though we might expect it to be answered differently given current trends in the use of educational data and the available techniques and technologies. We have concluded, however, that more research is needed to update the remaining questions. There is a pressing need for research that tackles the fifth question: HOW are these components of the learner model presented or visualised? This question is also being asked in the Learning Analytics community, where, for example, one approach built upon user-centred design methods (Martinez-Maldonado et al. 2015). Other publications illustrate a variety of learning analytics dashboards showing learning data (e.g., Brown 2012; Charleer et al. 2014; Duval 2011; Verbert et al. 2013). The Next-TELL OLM research has explored a diverse set of OLM interfaces (Bull et al. 2015; Johnson et al. 2013). It would be valuable to take this work to the next level with studies showing the effectiveness of OLM interfaces in supporting the range of purposes in Table 8. Similarly, the sixth question, WHO controls access to the learner model data, poses a challenge. It calls for policies about learner data, as discussed at the 2014 Asilomar Convention (see Footnote 2). We need to consider whether data is available to the individual learner only, to the learner and teacher, to parents, to peers (all peers or only friends, in aggregated form or as individual models), and to others. We also need to tackle interface challenges that empower learners, teachers or others to manage such control.
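
As a starting point for the interface and policy challenges raised by this question, the sketch below represents per-stakeholder access levels as an explicit, learner-adjustable policy. It is an illustrative assumption rather than a description of any cited system; the stakeholder names and access levels are examples only.

```python
from enum import Enum

class Access(Enum):
    NONE = "none"            # no access to this learner's model
    AGGREGATE = "aggregate"  # only anonymised / class-level aggregates
    FULL = "full"            # the individual open learner model

# Default policy for one learner (stakeholder types are illustrative).
policy = {
    "learner": Access.FULL,
    "teacher": Access.FULL,
    "parent": Access.AGGREGATE,
    "peers": Access.NONE,
}

def can_view(stakeholder: str, requested: Access) -> bool:
    """True if the stakeholder's granted level covers the requested level."""
    order = [Access.NONE, Access.AGGREGATE, Access.FULL]
    granted = policy.get(stakeholder, Access.NONE)
    return order.index(granted) >= order.index(requested)

# Whoever controls access (here, the learner) can adjust the policy, e.g. share
# aggregated data with peers for collaborative activities:
policy["peers"] = Access.AGGREGATE
assert can_view("peers", Access.AGGREGATE)
assert not can_view("peers", Access.FULL)
```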

The International Journal of Artificial Intelligence in Education was created 25 years ago. Our SMILI☺ framework was created almost 10 years ago, building on a substantial body of AIED research that was beginning to recognise the potential benefits of the small number of OLMs that had been produced at that time. This paper has highlighted some of the important changes in the nature of student/learner modelling and OLMs since the original SMILI☺. Academic children of the AIED community, including Learning Analytics, Learning at Scale, Computer Supported Collaborative Learning and the Learning Sciences, are now creating interfaces that we would call OLMs. Emerging forms of informal, lifelong and life-wide learning, such as the Quantified Self movement, are also creating OLMs. Our updated versions of SMILI☺ have the potential to inform design and to enable systematic analyses of the ever more diverse drivers for OLMs, and we have now offered SMILI☺ in both a lightweight and a detailed form.

Summary

We initially created SMILI☺ to provide a framework for describing OLMs. We did this to help make sense of the diverse work that had been done and to enable more rigorous comparisons of OLMs. We drew on our long experience and broad knowledge of OLM research to establish the elements of the framework. This paper has reflected on the actual use of the SMILI☺ OLM Framework, and the developments in Artificial Intelligence in Education and cognate fields since the publication of SMILI☺ in 2007. Notably, as technology has become pervasive and ubiquitous, AIED is moving into lifelong, life-wide learning, involving many devices.

The framework has been cited many times, but reporting of the details of its actual use has mostly been reserved for research theses or project reports. This reflects both the role of SMILI☺ in OLM design and the fact that these venues have space for such detailed descriptions. We now conclude that SMILI☺ has proved useful, partly for the reasons we intended, and also in informing design and thinking about OLMs, their potential purposes, and the issues to consider in designing them. In light of important changes in learning technologies since the initial paper, and of the ways SMILI☺ has been used, we now propose the retention of the extended form of the original framework for detailed reporting. We also recommend a more lightweight subset in which elements and purposes of the framework do not have to be cross-matched, and do not all have to be included if not relevant. We have presented an updated form of SMILI☺ for the core question, WHY create the OLM?, including three new purposes. We still recommend the elements of the question, WHAT aspects of the learner model are open? And we point to the need for more research to address the challenges of the other four core questions, as well as our new question on the intended user of the OLM.