Introduction

This paper is a contribution to the International Journal of Artificial Intelligence in Education (IJAIED) Festschrift in memory of Jim Greer: a scholar for whom I had the privilege to work over a two-year period (1999–2001), and with whom I communicated throughout my academic career both before and after joining his lab in time for the turn of the century. The ‘Advanced Research on Intelligent Educational Systems’ (ARIES) Lab at the University of Saskatchewan was the perfect place to be during that period, as the world looked forward with high expectations to future developments in the new century, not least in Artificial Intelligence in Education (AIED). The ARIES Lab offered an environment with both experienced, notable researchers and excellent graduate students and researchers who were earlier in their careers – a setting where ideas and discussion flowed eagerly in both (and all) directions.

Jim’s AIED research evolved from AI-focussed solutions, where he moved the field forward with applicable techniques, to practical applications deployed to large numbers of students. Ultimately, he achieved large-scale impact on learning and teaching at the University of Saskatchewan after becoming Director of the University Learning Centre and the Gwenna Moss Centre for Teaching and Learning in 2005.

This paper considers Jim’s substantial and influential work throughout his career, with a particular focus on his 25-year-long influence on my own open learner model (OLM) research. The paper is organised as follows: first, an introduction to many of Jim’s research interests over the years; then an outline of key issues relevant to open learner modelling including an introduction to some of Jim’s related research; and finally, specific examples of Jim’s influence across my OLM research programme and additional harmonies between our respective directions.

Jim Greer’s Continually Evolving and Timely Research

The purpose of this section is to provide context of the extensive variety in Jim’s research, before focusing on parts of his work that were more directly connected to OLMs. Much of the work introduced in this section is expanded later in the paper.

Insight into timely solutions was evident even in Jim’s early AI-focussed research, which included formalising granularity hierarchies for granularity-based recognition systems (Greer and McCalla 1988). This was an important development for early AIED, and the basis for discussion of granularity-based reasoning as central for belief revision in learner modelling throughout interaction with intelligent tutoring systems (ITS) (McCalla and Greer 1994). The granularity-based approach was exemplified in the distributed architecture of an advisor that incorporated various types of knowledge into a single model, taking into account student knowledge data as well as cognitive and domain knowledge, enabling it to perform dynamic instructional planning (McCalla and Greer 1990). The approach also supported: reverse engineering to help people recognise plans in code for software maintenance (Palthepu et al. 1996); a hybrid semantic clustering and graph-partitioning strategy in an object-oriented database (Ramanujapuram and Greer 1996); use of a Bayesian network to propagate knowledge through a granularity hierarchy for adaptive testing (Collins et al. 1996); and recognition and diagnosis of a learner’s developing solution plan (Koehn and Greer 1993).

Jim was part of the first attempt at applying belief revision to the revision of learner models, undertaken with a domain-independent student model maintenance system that comprised stereotypical and deductive knowledge (Huang et al. 1991). This later formed the basis of a hybrid decision-support system for agriculture that was developed to interpret a quantitative simulation output, and provide understandable, trustable recommendations personalised to the user’s own situation (Greer et al. 1994). Growing from this research was a general consideration of ethical issues surrounding user modelling when aiming to make arguments more persuasive, discussed in relation to designing an intelligent agent for online sales (Greer et al. 1996).

Alongside this, Jim was also pursuing interests in knowledge-based systems, including an investigation of knowledge-based chess in a chess tutor, where applying a granularity approach was more difficult since student moves had to be related to chess plans, and potential future moves that the system could make also had to be considered (Gadwal et al. 1993). Another example is a robust knowledge-based system with an expert diagnostician that used a case-based approach to tutoring the diagnosis of bronchial asthma, evaluated students’ attempts at diagnoses, and offered explanations of the expert’s diagnostic reasoning (Prasad et al. 1989).

Jim’s interests approaching the turn of the century were numerous, including the early stages of his and his students’/colleagues’ work on many of the emerging questions of the time. These included, for example: investigation into the relative merits of algorithms for discovering relationships in discrete data (Bowes et al. 2000); visualisation of probability, probability propagation, and cause-effect relationships in Bayesian belief networks (Zapata-Rivera et al. 1999); and hyperspace guided navigation assistance tools (Greer and Philip 1997). Examples that focussed specifically on education spanned software tools designed to allow organisation and presentation of educational materials using web resources (Thomson et al. 1996); domain and learner modelling to individualise hypermedia in education (Kettel et al. 2000); a shell for constructing pedagogically-focussed ITSs (Arruarte et al. 1997); a ‘newsgroup-like’ environment for students to post questions, comments and answers, and maintain a database of those who may be able to help (Bishop et al. 1997); and integration of a discussion forum and a one-to-one peer help facility to provide an intelligent help-desk to support students (Greer et al. 1998b) – the project I joined in 1999. The above were all important topics during that period, and remained so as the century unfolded.

Jim’s interests continued to evolve at the forefront of crucial themes that led to later practical advancements. For example: drawing on ITS research, defining principles for using learning objects in instructional planning for individualising learning content management systems (Mohan et al. 2003); a middleware platform with an ontology-based event mechanism to enable legacy applications to share information with software agents for integration into new e-learning environments (Zapata-Rivera et al. 2003); using e-learning standards and ontology languages for developing a portable, reusable learner modelling architecture suitable for large-scale use in the semantic web (Winter et al. 2005); a learner modelling server in distributed multi-agent tutoring environments (Zapata-Rivera and Greer 2001) that also used a Bayesian network visualiser to display the model to users (Zapata-Rivera and Greer 2000); a complex, multiple-user, multiple-agent-based fragmented user modelling approach for distributed environments (Vassileva et al. 2003); and recognition of the continuing inflexibility of learning management systems (Greer 2006).

Metacognitive benefits for students were pursued by bootstrapping learner models from e-portfolios: through an OLM, users reflectively specified their knowledge levels and linked to the corresponding e-portfolio artifacts as evidence, with the OLM potentially becoming a new e-portfolio artifact itself (Guo and Greer 2007). Group awareness was also enhanced to support collaboration when tracking users in a user model-based content management system (Brooks et al. 2006b). Consideration of collaboration and peer learning further yielded: knowledge-based inference in an assistant designed to offer personalised and context-specific support to peer helpers in a learning situation (Kumar et al. 2001); user query-based individual and cohort learner ranking and group visualisations for students and teachers for e-learning applications (Brooks et al. 2007); and a framework for inspectable learner models at the centre of e-learning, where learners could interact with other (human or artificial) users and learning materials, with the aim of supporting learner reflection, interactive diagnosis or assessment for improved model accuracy, and promoting learner and teacher acceptance and involvement in the process (Zapata-Rivera and Greer 2003).

Privacy was also an important theme, where research included privacy-enhanced personalisation for e-learning (Anwar et al. 2006); trust relationships amongst co-learners in privacy-preserving reputation management (Anwar and Greer 2012); and a general privacy filter approach with semantic streams in collaborative learning (Kettel et al. 2004). Other work encompassed a platform, language, and ontology-independent framework and architecture for the delivery of user model data amongst e-learning applications sharing that data (Brooks et al. 2004); and a research agenda for issues relating to learning resource metadata, for example, relevant to the identification of inadequate content sequencing, required content modification, content recommendation for learners, and matching learners for collaboration (Brooks et al. 2006a).

Most recently, and of particular note, is Jim’s highly opportune work on advancing the effective use of data mining and learning analytics in a whole-university setting. For practical use by instructors and instructional designers, lecture capture usage was visualised with reference to re-watching behaviour, viewing over time, and differences in usage between groups (Brooks et al. 2013), with clustering techniques identifying five interaction classifications of relevance to instructional designers and educational researchers (Brooks et al. 2014). Administrators have benefitted from data visualisation of flow through academic programmes, with the potential to drive programmatic change at university level (Greer et al. 2016). Accurate and explainable predictive models were developed to help institutional learning specialists or instructional designers tasked with applying large-scale interventions to better understand the student population by means of personas (Brooks and Greer 2014). Jim also remained focussed on the direct needs of students, including persuasive systems with socially-oriented strategies to increase student motivation to engage (Orji et al. 2019), an approach also developed for a mobile app (Orji et al. 2018); and a scalable personalised advice recommender based on predictions of success (Greer et al. 2015). In addition, Jim was centrally involved in recognising and encouraging the scholarship of teaching and learning throughout the University of Saskatchewan (Wuetherick et al. 2016). Similarly, for instructors, he accessibly promoted ideas surrounding the need to recognise different student motivations and to support the breadth of career goals, as well as to understand and facilitate appropriate and timely help-seeking behaviour (Greer 2013). Jim’s work is set to continue to underpin further advancements at the University of Saskatchewan, and to endure as a foundation well beyond his own institution.

Open Learner Models

Open Learner Models are learner models that are in some way open or accessible to users (or other systems) in an understandable manner. A common aim of OLMs is to promote metacognitive awareness and activities such as reflecting, planning, self-monitoring and self-assessment, and to afford learners greater control and responsibility over their learning. However, OLMs can also provide feedback on activities from a range of applications, support collaboration and peer interaction, promote positive affect, aid navigation to materials and exercises, maintain the accuracy of the learner model through allowing user input or interactive model maintenance, increase trust in a system since the reason for adaptations can be identified, and address the issue of a user’s right to access data held about them (Bull and Kay 2007, 2016). Although OLMs are usually aimed at the student being modelled, they can also be accessible to instructors, peers, parents, instructional designers, system designers, educational technologists, administrators, policy-makers, and so on (Kay 2016; Reimann et al. 2011). This section outlines some of the core issues and concludes with an overview of Jim’s related interests.

OLM Externalisation and Visualisation

As an illustration of the variety of externalisations that may be used in OLMs, Fig. 1 shows three of the (eight) Next-TELL OLM views for instructors and students (Bull et al. 2016c). The upper left example displays part of a set of hierarchical skill meters, a simple, easily understandable visualisation of the learner model data where the level of skill (or understanding, competency, mastery, etc.) is indicated by the amount of the meter that is filled. The upper right gives an excerpt from a more complex network visualisation where the size and brightness of nodes portray the level of understanding or skill, with the connections between layers in the hierarchy shown by the lines linking them. Nodes can be expanded and collapsed as required, to allow focus at an overview level or on details of a specific part of the hierarchy, whilst keeping any other desired areas of the network in view. This visualisation can be particularly useful if there are a lot of nodes, since it is not necessary to scroll, as would be the case for a large learner model with many skill meters. On the lower left is a treemap visualisation, where the colour and size of each area reflect the corresponding skill level, and clicking on any area takes the user to the corresponding next level in the hierarchy. This allows easier access to the various layers in a very large hierarchical domain than does the network visualisation, but has the disadvantage that different parts of the hierarchy cannot be viewed at the same time. (A minimal sketch of how a single competency value might drive such externalisations follows Fig. 1.)

In any given case, factors other than the size of the domain may also determine the kind of visualisation chosen for an OLM: for example, the domain structure (e.g. hierarchical, conceptual map or a series of independent units), the learner modelling technique (e.g. knowledge tracing, constraint-based or probabilistic), the type of detail modelled (e.g. general skill levels, or specific concepts and misconceptions), the age or information visualisation literacy of learners (e.g. familiarity with interpreting graphs or complex data visualisations), and the purpose of offering an inspectable model (e.g. navigation, highlighting knowledge gaps, obtaining model data from the user). Of course, simpler visualisations can be applied in more cases since the externalisation format does not have to match the complexity of the model itself. Multiple visualisations can also be made available in a single system, each designed according to a specific purpose of viewing; or alternative visualisation selections may be offered (e.g. Fig. 1) since individuals’ preferences and their own reasons for viewing their learner model at a particular time (e.g. identifying weaker knowledge, seeking pre-requisites, or planning revision) may influence the type of visualisation they choose where different options exist.

Fig. 1
figure 1

Next-TELL OLM visualisations and evidence (Bull et al. 2016c)
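As a purely illustrative aside, the sketch below shows how a single competency value in [0, 1] might drive the three externalisations described above. The function names and scaling factors are my own assumptions, not the Next-TELL implementation.

```python
# A minimal, purely illustrative sketch of mapping one competency value
# in [0, 1] onto the three externalisations described above.

def skill_meter(value: float, width: int = 20) -> str:
    """Skill meter: the filled portion of the meter indicates the level."""
    filled = round(value * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {value:.0%}"

def network_node_style(value: float) -> dict:
    """Network view: node size and brightness portray the level."""
    return {"radius": 10 + 30 * value,        # larger node = higher skill
            "brightness": 0.3 + 0.7 * value}  # brighter node = higher skill

def treemap_cell_style(value: float) -> dict:
    """Treemap view: the colour and size of an area reflect the level."""
    return {"relative_size": 0.5 + value,     # bigger cell = higher skill
            "colour_intensity": value}        # deeper colour = higher skill

print(skill_meter(0.65))  # [#############-------] 65%
```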

Some OLMs also display evidence for the values or attempt to explain the reasoning behind the learner model data. The lower right of Fig. 1 shows part of the evidence for a Next-TELL OLM competency value: the source of evidence (other computer-based activities or systems, or teacher/peer/self-assessments), and the weight of that evidence in the model. The provision of evidence and explanation is increasingly important as OLMs and OLM-like approaches become more prevalent and data becomes readily available, since it can be difficult for users to maintain an overview of their various activities and the potential contributing sources of information in their learner model. Where evidence and explanation have been offered, this has most often been to help students understand how their learner model information was aggregated or inferred and what this means in relation to their learning, but instructors are now also making use of this information as they aim to better understand their students’ needs; and learner model evidence and explanations may also be made available to other relevant users. Evidence may take various forms. For example, systems may offer: excerpts from the learner’s activity trace in an environment; descriptions of the outcomes of recent problem-solving attempts and how these are applied in the model; how understanding of a given concept implies the understanding of pre-requisite concepts; reference to the relative difficulty of tasks or the amount of evidence contributing to a value; an overview of teacher or peer assessments in the model; the relative weighting of different sources of data; explanation of the modelling mechanism; and so on (see Bull 2020). In the case of learning specialists viewing information on learners who are strategically important to their institution, it is vital that predictive models can be explained; however, there can be a trade-off between building an accurate predictive model and being able to explain it in an understandable manner (Brooks and Greer 2014). This observation similarly applies to many types of stakeholder needing to understand and act upon the learner model, either their own, or the models of others.
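To make the idea of weighted evidence concrete, the following is a minimal sketch, with invented source names and weights, of how evidence from several sources might be aggregated into a single competency value and then listed back to the user. It is not the actual Next-TELL mechanism.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str    # e.g. teacher, peer or self-assessment, or another system
    value: float   # assessed competency level, in [0, 1]
    weight: float  # relative weight of this evidence in the model

def aggregate(evidence: list[Evidence]) -> float:
    """Weighted average of all evidence contributing to one competency."""
    total = sum(e.weight for e in evidence)
    return sum(e.value * e.weight for e in evidence) / total

def show_evidence(evidence: list[Evidence]) -> None:
    """List each source with its value and weight, heaviest first."""
    for e in sorted(evidence, key=lambda ev: ev.weight, reverse=True):
        print(f"{e.source:>20}: value {e.value:.2f}, weight {e.weight:.2f}")

competency = [Evidence("teacher assessment", 0.80, 0.5),
              Evidence("quiz tool", 0.70, 0.3),
              Evidence("self-assessment", 0.60, 0.2)]
print(f"aggregated value: {aggregate(competency):.2f}")  # 0.73
show_evidence(competency)
```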

Interactive Maintenance of OLM

OLMs can be categorised according to the level or type of interactivity between the user and system concerning the content and accuracy of the learner model, and the relative levels of user versus system control over the data (Bull and Kay 2007, 2016). This includes learner models that are ‘inspectable-only’, where the system infers the model unaided and retains complete control over its contents. There are various intermediate methods by which learners may contribute complementary or additional information themselves. The most relevant for this paper are ‘co-operatively maintained’ (e.g. Fig. 2 upper, where the system seeks input for attributes it does not itself infer (Bull and Shurville 1999)) and ‘learner adds information’, where a user can opt to provide additional information for use alongside system-inferred data. Other intermediate methods, in which the user and system jointly maintain the model, allow discussion or challenging of the data therein, and aim for an agreed resolution if there are differences in viewpoints. Most relevant here are ‘persuadable’ models, where the system ultimately has control over the model data, and ‘negotiated’ models, where separate belief values are maintained in cases of unresolved disagreement (e.g. Fig. 2 centre left, showing negotiation options (Bull et al. 1995b)). In contrast, directly ‘editable’ learner models provide the user with complete control over the data, although evidence and/or explanation may still be offered by the system for consideration (e.g. Fig. 2 centre right, where the system shows previous responses as evidence (Mabbott and Bull 2006)). Some types of interactively maintained OLM necessarily involve the provision of evidence and explanation, since each party in the model maintenance process needs to justify their position to the other. In most cases of interactive learner modelling, interaction about the model takes place between the learner and the system, but learner-instructor interaction about the model attributes can also help to define its contents, and other users (e.g. peers, teachers, parents, administrators) may be able to provide information directly to an individual’s or group’s model (e.g. Fig. 2 lower, showing peer feedback (Bull et al. 2016c)). An extended breakdown of interactive learner model maintenance methods is available in Bull (2020). (An illustrative sketch of these levels of control follows Fig. 2.)

Fig. 2
figure 2

Interactive learner model maintenance: upper, SCRAWL (Bull and Shurville 1999); centre left, Mr Collins (Bull et al. 1995b); centre right, Flexi-OLM (Mabbott and Bull 2006); lower, Next-TELL OLM (Bull et al. 2016c). Note: Some text from old screen shots retyped to improve readability
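The spectrum of control just described can be summarised schematically. The sketch below is an illustrative simplification: the categories follow Bull and Kay (2007, 2016), but the resolution logic, names and values are assumptions rather than any particular system’s mechanism.

```python
from enum import Enum

class Maintenance(Enum):
    INSPECTABLE = "system retains complete control"
    COOPERATIVE = "system seeks input it does not itself infer"
    LEARNER_ADDS = "learner may volunteer additional information"
    PERSUADABLE = "learner may challenge; system has final say"
    NEGOTIATED = "separate beliefs kept if disagreement is unresolved"
    EDITABLE = "learner has complete control"

def resolve(kind: Maintenance, system_belief: float, learner_claim: float,
            learner_justified: bool) -> dict:
    """Illustrative resolution of a disagreement over one belief value."""
    if kind is Maintenance.EDITABLE:
        return {"model": learner_claim}            # learner's view prevails
    if kind is Maintenance.PERSUADABLE:
        # the system accepts the challenge only if the learner justifies it,
        # e.g. by succeeding on a follow-up test item
        return {"model": learner_claim if learner_justified else system_belief}
    if kind is Maintenance.NEGOTIATED:
        if learner_justified:
            return {"model": learner_claim}
        return {"system": system_belief, "learner": learner_claim}  # both kept
    return {"model": system_belief}                # system-controlled cases

print(resolve(Maintenance.NEGOTIATED, 0.4, 0.8, learner_justified=False))
```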

Independent Open Learner Models

Independent open learner models (IOLM) are OLMs that are independent of a full teaching system (see Bull and Kay 2013). They are constructed in the customary manner, usually (but not always) inferred from an individual’s learning interaction in an environment. However, instead of the system guiding or tutoring the student according to their inferred needs as indicated by their learner model, the responsibility for learning decisions rests with the user. The IOLM typically helps them identify their learning requirements for themselves, to follow up on as appropriate (inside or outside the specific environment), transferring more of the accountability for learning onto the learner and encouraging learner awareness and independence. The subsequent adaptation in IOLMs is therefore necessarily different from that in ITSs, which mostly use the model to enable adaptive tutoring: it is primarily concerned with issues relating to constructing and/or conveying the model to the user. The OLMs presented in Figs. 1 and 2 are IOLMs.

Overview of Jim Greer’s OLM and OLM-related Research

As seen in the previous section, Jim’s research interests were vast, and much of his work involved open learner modelling or closely related ideas to some extent. For example, the Bayesian belief network visualiser referred to previously (Zapata-Rivera et al. 1999) was incorporated into VisMod (Zapata-Rivera and Greer 2004), an inspectable Bayesian learner model that also supported learner-teacher negotiated assessment (Fig. 3, upper left: excerpt from visualisation of the student’s opinions and system-inferred/teacher values contributing to knowledge nodes). Community interaction was visualised for instructors (Brooks et al. 2007) using a nested sociogram approach (Fig. 3, upper right: red nodes represent instructors and assistants, and grey nodes denote students; node size indicates perceived importance in the community; edges show reply-to relationships; and proximity to the centre reflects participation category). EP-LM asked reflective questions to encourage students to link e-portfolio entries as interactive evidence for their knowledge, in order to initialise a learner model (Guo and Greer 2007). The learner models of the integrated Help-Desk peer-help system were open, to allow correction by students if they contained inaccurate data (Greer et al. 1998b). In addition, at the end of a help session both the helper and helpee provided explicit feedback on the knowledge of the other, to update the respective learner models: a form of interactive learner model maintenance involving other users that was applied after help sessions (with other forms of interactive model acquisition being teacher and self-assessment, and voting on the quality of postings; further data was acquired from assignment marks, testing, and browsing behaviour).

This work evolved into the I-Help system, which developed a complex agent-based modelling mechanism where users provided information on their needs, with personal agents finding electronic resources, locating discussion threads and negotiating help session partners on their specific user’s behalf, taking into account a large variety of fragmented user model information (Vassileva et al. 2003). Users also gave feedback on each other to contribute to peers’ models (e.g. Fig. 3, centre left: questions on the clarity of a helpee’s help request, whether they appropriately identified the help topic, their level of knowledge, and whether the user would wish to encounter the same helpee again). The PHelpS workplace peer help system relied heavily on users interactively maintaining the user models, with motivations being to enable help to be targeted according to users’ strengths and gaps in task knowledge, and to avoid users being contacted to give help on tasks to which they were not well-suited (Greer et al. 1998a). Peers could also consult each other’s profiles to aid in the provision of help and in selecting a helper from amongst those suggested (Fig. 3, centre right: stars - can help; crosses - cannot help).

Fig. 3
figure 3

Upper left, VisMod (Zapata-Rivera and Greer 2004); upper right, sociogram visualisation (Brooks et al. 2007); centre left, I-Help (Vassileva et al. 2003); centre right, PHelpS (Greer et al. 1998a); lower, Ribbon Tool (Greer et al. 2016)

Jim also focussed on clarifying the utility of technological approaches to non-technical stakeholders. For example, he helped develop a persona approach for creating visual narratives from statistical predictive models to explain student classifications, facilitating learning specialists’ understanding of information produced by data scientists (Brooks and Greer 2014); and he worked on identifying patterns of user behaviour for presentation to instructors and instructional designers to help them understand learners’ use of the technology (Brooks et al. 2013). Jim was active in urging instructors at his institution to look to learning analytics to support their understanding of their students’ needs and move closer towards individualised learning (Greer 2013). Furthermore, he was central in the design of the Ribbon Tool, displaying student progression through degree programmes (Fig. 3, lower), to provide administrators and decision-makers with persuasive arguments for change (Greer et al. 2016).

Even Jim’s earlier work showed traces of this type of OLM-related approach. The CPR newsgroup provided statistics on usage patterns to help instructors identify issues requiring further attention in class, and future goals included building individual student models (Bishop et al. 1997). The MicroWeb Toolkit made records of the pages accessed by students, and the time spent on them, available to teachers to assist with evaluation and planning, and to create lists of specific pages or paths for individuals (Thomson et al. 1996). There is thus a clear trail of ideas and solutions that are also important in open learner modelling throughout Jim’s career.

Jim was very much concerned with privacy in systems where information about users can be shared. An initial approach to addressing user concerns about who can see their data in a learning setting was a survey into the kind of information that students would be willing to declare to others, both named and anonymously, and to whom they would be prepared to release the various types of information (Kettel et al. 2004). Recommendations were offered for the construction of privacy-preserving, trustable personalised systems that also support collaboration (Anwar et al. 2006). In PHelpS, employees could opt to keep their user models hidden from others in the workplace help-seeking context (Greer et al. 1998a). The I-Help agent-based peer help system had at its core a fragmented representation of participant data, which made it very difficult for information to be seen other than through users’ respective agents’ release and interpretation of it (Vassileva et al. 2003). Privacy filters were implemented as one type of blocker (to modify or restrict events to be published) in the Massive User Modelling System, which was designed to integrate pedagogical and domain applications (Brooks et al. 2004). Later work explored how to facilitate trust with privacy protection using identity management, which allowed a level of anonymity and supported reputation transfer across the multiple identities that a learner might create (Anwar and Greer 2012).

Jim Greer’s Influence on a Programme of Research on Open Learner Models

Largely thanks to some excellent students, my research interests have included several exciting technologies used with OLMs: for example, interactive (negotiated) OLM maintenance using a chatbot (Kerly et al. 2008), an OLM that could be used and edited on a Pocket PC (Bull and McEvoy 2003), an OLM that gave haptic feedback (Lloyd and Bull 2006), and an empathic robot explaining the OLM displayed on a tabletop (Jones et al. 2017). In addition, Jim’s I-Help project offered an innovative multi-agent environment with fragmented learner models (Vassileva et al. 2003) in which to explore how personal agents interact to find help for their owners (Bull et al. 2003). In strong contrast to the above advanced technologies, simple learner-adjustable ‘physical OLMs’ were introduced as learning indicators for classroom orchestration and to encourage peer help in a school without electricity (see Bull 2020, for a short description). Jim’s influences on my (non-I-Help) research rested somewhere (and almost everywhere) between these extremes. His influence was primarily of a practical nature – practicality being one of his most remarkable and particular talents – with reference to many of the more instantly or attainably deployable possibilities, and in considerations of questions and issues that make for useful research projects more generally. Jim positively impacted my early PhD research at the University of Edinburgh, when my understanding of research and feasible research questions was still developing; projects that grew from that research after I moved to the University of Brighton; my time with him at the ARIES Lab at the University of Saskatchewan; and later, IOLMs deployed to accompany university courses at the University of Birmingham. Jim’s impact persists as I continue with OLM and learning technologies research, consulting on others’ projects. The main connections that have influenced and inspired my OLM research and practice are illustrated in Fig. 4. Its purpose is to indicate some of the intricacy of Jim’s influence on a programme of OLM research rather than to be comprehensive, and it is provided for orientation during the descriptions that follow. (See the main text for references.)

Fig. 4
figure 4

Jim Greer’s impact on an OLM research programme. Plain (red) text – Greer’s research; italic (blue) text – Bull’s research. Solid lines - influences from Jim Greer’s projects, or commonalities; dashed arrows - projects that build directly upon others in Canada or UK; shaded (red) rectangles - closely related projects (Canada); shaded (blue) triangles - closely related projects (UK). Underlined - learning analytics-focussed

One of the foremost influences on my early research development was the leading learner modelling book, “Student Modelling: The Key to Individualized Knowledge-Based Instruction”, edited by Jim Greer and Gord McCalla (1994). Although my PhD had by that time progressed beyond needing to comprehend the most relevant issues in student modelling for my specific research goals, this book served as a frequent more general reference. Of particular importance was one of Jim’s chapters, “The State of Student Modelling” (Holt et al. 1994). It gave a clear overview of the state of the art, making it easier to distinguish the most important innovations in my own PhD and to situate them in the existing body of research. For example, Holt et al. (1994) state: “Little work has been done on representing individual learner characteristics such as learning style, affective state, specific idiosyncratic knowledge, or various individual attributes”. Therefore, in addition to positioning the novelty of my research in the field of intelligent computer-assisted language learning – the starting point of my research (e.g. with papers on modelling various sources of language transfer (Bull 1995) and language learning strategy use (Bull 1997b)) – it also became clearer how to position the corresponding concepts in the AIED and user modelling literature: as “extending the scope of the student model” (Bull et al. 1995a).

The remainder of this section provides a chronological account of Jim’s continued extensive influence across a large proportion of my research on open learner modelling.

Meeting Jim

I initially met Jim when I was a PhD student at the University of Edinburgh, at my first conference: the 1993 AIED conference. I had the pleasure of speaking with him several times during the conference, about learner modelling in general, as later described in the chapter mentioned above (Holt et al. 1994); his recent input on knowledge-based tutoring, as exemplified in the UMRAO prototype chess tutor for bishop-pawn endgames referred to previously (Gadwal et al. 1993); and his timely consideration of appropriate methods for evaluating various aspects of ITSs (Mark and Greer 1993). We also spoke about my PhD research on a negotiated IOLM named Mr Collins. Such discussions were a very beneficial experience for me and, fortunately, our interactions recurred with increasing frequency at AIED and related conferences over subsequent years.

Specific Examples 1: Early Work (e.g. UMRAO, SMMS, KARE) and Mr Collins

The chess tutor UMRAO was unlike much of the previous work on computer chess because it took into account how humans play chess, and so represented both expert chess plans and those of novices, which could be compiled into well-formed and ill-formed endgame strategies (Gadwal et al. 1993). This allowed faulty reasoning and misconceptions to be represented in the plans, and the generation of plausible moves as applicable to a novice chess player (whereas only expert plans were used by the system to calculate its own moves). The primary educational aim was to offer a problem-solving partner (co-solver) to the student, facilitating their exploration of bishop-pawn endgame strategies, and including different, flexible types of feedback with varying levels of system control. UMRAO was built for this well-defined domain to enable investigation into a range of issues, and was designed to contribute findings in two fields. The first was knowledge-based chess: the requirement to also consider sub-optimal plans, and the separation of problem-independent plans from strategies (specific instantiations of the plans), to enable the compilation of a strategy graph. The second was intelligent tutoring: the increased flexibility of model-tracing tutoring where there was no sophisticated student model.
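A flavour of the plan library idea – holding both well-formed (expert) and ill-formed (novice) strategies against which a student’s developing solution can be matched – is sketched below. The plans and moves are invented for illustration and do not reproduce UMRAO’s actual representation.

```python
# A purely illustrative sketch of a plan library holding both well-formed
# (expert) and ill-formed (novice) endgame strategies, so that a student's
# developing solution can be matched against either kind of plan.

PLAN_LIBRARY = {
    "support_pawn_advance": {"well_formed": True,  "moves": ["Kd5", "c5", "c6"]},
    "premature_pawn_push":  {"well_formed": False, "moves": ["c5", "c6", "c7"]},
}

def recognise(student_moves: list[str]) -> list[str]:
    """Names of library strategies consistent with the moves made so far."""
    return [name for name, plan in PLAN_LIBRARY.items()
            if plan["moves"][:len(student_moves)] == student_moves]

print(recognise(["c5"]))  # ['premature_pawn_push'] - an ill-formed strategy
```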

This notion of contributing to two fields at once also fitted well with my own plans at that time, as indicated above: building an intelligent system that could further research in both artificial intelligence in education / user modelling (with a learner modelling approach aimed at promoting learner reflection whilst increasing the accuracy of the model through student-system negotiation of its contents), and intelligent computer-assisted language learning (incorporating theoretical and empirical insights from the field of second language acquisition). Although the Mr Collins learner model considered a range of factors relevant to learning languages, the language domain was very small: twelve rules of personal object pronoun placement in European Portuguese, with the learner model primarily constructed from parsing short sentences input by the learner, and their stated confidence in the correctness of each sentence. This small, well-defined domain was sufficient for, and facilitated, fulfilling the requirements for investigating the two areas of interest. Clearly, many projects contribute findings to multiple fields, but at the time I was still in awe of possibilities, and discussions with Jim were amongst the strongest influences and encouragements from outside my own department. Although much of the Mr Collins design had already been identified when I met Jim, conversation with him helped remarkably in focussing the options.

Other commonalities also aligned, even though they were rather differently grounded. The primary pedagogical aim underpinning UMRAO was to provide a co-solver of problems to encourage experimentation with strategies (Gadwal et al. 1993). Although, as identified above, Mr Collins was not a tutor, part of the early architecture design included an artificial collaborator (that could function as a co-learner, tutee or tutor), whose role included promoting reflection on learning and ways of learning in the context of negotiated learner modelling (Bull 1993). This aspect of Mr Collins had been designed drawing on, for example, Cumming and Self’s (1991) suggestion of replacing plan recognition with jointly agreed or negotiated plans, Chan and Baskin’s (1988) computer as learning companion, and Cumming and Self’s (1991) and Dillenbourg and Self’s (1990) notion of the intelligent educational system as a collaborator in learning (as opposed to expert tutor). Gadwal et al.’s (1993) work with UMRAO offered a further perspective, and additional confirmation for this research pursuit. Eventually, however, following consideration of the scope of the work, this aspect of Mr Collins was postponed (until the PeerISM project, see below).

An intention from that time that did persist, and that was reinforced through discussion with Jim, was to investigate the anticipated knowledge of learners as a source of design information. In UMRAO it had been crucial to understand the skills of novice chess players as a basis for tutoring in bishop-pawn endgames, since the system needed to recognise ill-formed endgame solution strategies. The plan library therefore held both expert and novice plans, and strategies constraining the likely moves to those that a novice chess player would tend to make. In UMRAO’s case, information was obtained through think-aloud problem-solving protocols (Gadwal et al. 1993). Mr Collins had a ‘student model continuum’ that, in addition to the current inferred knowledge (and various types of misconception), also kept previous models, and included predicted future stages in the form of stereotypical models (for modification as learning progressed) in the system’s version of the learner model (which could differ from the learner’s own version of the model). The purpose of the future models was both to aid diagnosis and to raise learner awareness of the typical learning progression, since these future models were open for inspection. The initial stereotypes design (Bull et al. 1993) was originally inspired by research on acquisition sequence in second language acquisition (Pienemann 1989), and specifically based on preliminary applied linguistics research on the acquisition of pronoun placement rules in European Portuguese (Benson 1989), with the expectation to revise the future model sequence as required (Benson’s research did not investigate the same range of rules used in Mr Collins). Therefore, the weekly homework (multiple choice, translation and sentence transformation exercises) of 47 learners of Portuguese over a five-week period was examined, to monitor changes in students’ ability to use the rules and further inform the future learner model sequence (Bull et al. 1995b). Although different, UMRAO’s plan libraries and strategies applicable to novice learners clearly resonated with Mr Collins’ stereotypical ‘student model continuum’ representing anticipated future learner model states.

Other work in which Jim was involved, SMMS, combined stereotypical and deductive knowledge in a domain-independent student model maintenance system (Huang et al. 1991). Mr Collins echoed the idea of combining different modelling techniques, with simple inferred beliefs recorded for the current and historical learner models, and (modifiable) stereotypes for the future models. However, SMMS and UMRAO were considerably more sophisticated than Mr Collins, with many of their respective technical features far beyond the scope of the Mr Collins research. SMMS later formed the basis of Explain, a hybrid decision-support system for agriculture that was developed to interpret quantitative simulation output and provide understandable, trustable recommendations personalised to the user’s own situation (Greer et al. 1994). Trust is also a central issue in open learner modelling, as users can see information the system holds about them, as well as how and why the system is adapting to them; it is a theme that extends through later OLM research (see Bull and Kay 2007, 2016). As in the above case of contributing findings to multiple fields, investigating learner knowledge or skills as a basis for learner model design was not an unobvious approach. Nevertheless, to discover associations with Jim’s work was encouraging at a stage when some aspects of my PhD research plan were still somewhat inscrutably furled in my mind.
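As a minimal data-structure sketch, the ‘student model continuum’ described above might be caricatured as follows. This is an assumption-laden simplification for illustration, not the original Mr Collins implementation.

```python
from dataclasses import dataclass, field

# A minimal sketch of the 'student model continuum': previous models,
# current inferred beliefs, and stereotypical future models, with separate
# system and learner versions that may differ.

@dataclass
class Continuum:
    past: list[dict] = field(default_factory=list)    # archived earlier models
    current: dict = field(default_factory=dict)       # rule -> inferred belief
    future: list[dict] = field(default_factory=list)  # stereotype sequence

    def advance(self, new_beliefs: dict) -> None:
        """Archive the current model and adopt newly inferred beliefs; the
        future stereotype sequence can be revised as learning progresses."""
        self.past.append(dict(self.current))
        self.current = dict(new_beliefs)

# The system's and the learner's versions of the model could differ:
system_view, learner_view = Continuum(), Continuum()
system_view.advance({"pronoun_rule_1": 0.7})
```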

Another harmony that advanced amidst our work was the role of the user in a task traditionally performed by the system. Jim worked with KARE, an artificial intelligence tool with a ‘human-in-the-loop’: the software engineer undertaking maintenance and the system supported each other in the task of reverse engineering to recognise programming plans (Palthepu et al. 1996). In negotiated learner modelling (at the time called ‘collaborative student modelling’ in Mr Collins (Bull et al. 1995b; Fig. 2, centre left)), the system and student work together towards an agreed learner model, rather than all the inferences being solely the responsibility of the system (see Bull 2016). In both cases, involving the user aims to ease the task of accurately identifying user information (in the case of KARE, the plans of a previous programmer; in Mr Collins, the learner’s own learner model).

Specific Examples 2: Evaluation of ITS, and Mr Collins

After the 1993 AIED conference, I followed up on Mark and Greer’s (1993) paper, published in IJAIED that year. One point – that ITSs could be viewed as whole systems or from the perspective of their constituent components or features – became particularly salient in my thoughts. Mark and Greer suggested that evaluation techniques may be differently suited to evaluating entire ITSs and specific components or features of them. Although the paper was focused on ITS evaluation methodologies, discussing these in relation to system architecture and behaviour as well as educational impact, the above led me to contemplate more purposefully the underlying theoretical educational goals of my system. Mr Collins was indeed a component of an ITS. The name stood for a ‘collaboratively maintained, inspectable learner model’ (with the ‘Mr’ intended to indicate a more human-like partner or collaborator in the student-system collaborative learner modelling enterprise). As a whole, Mr Collins comprised a very small domain model, and a broader (but nevertheless still small) learner model as the focus of the research which, in addition to the more common knowledge/misconceptions representations, encompassed the typical and predicted acquisition sequence of rules, likely sources of analogy for the particular learner, and the individual’s learning strategies (later described in Bull et al. 1995a). However, it had no teaching component, a characteristic core component of most ITSs. Instead, the Mr Collins architecture incorporated a learner-system negotiation mechanism to allow the learner to help maintain the accuracy of the learner model through menu-based discussion of their knowledge: discussion that was also designed to directly prompt students’ reflection on their learning, helping to facilitate planning and self-monitoring, etc. Initially it was intended that Mr Collins might be extended with teaching strategies (as indicated in an early paper: Bull 1993), or that some of the less standard learner model features might be integrated into the learner models of ‘more complete’ ITSs. However, it now became clearer that the learner model, the primary focus of the research, could be developed and evaluated independently. Indeed, much of my later interest focussed on the feasibility and practicality of such independent OLMs, which did not include tutoring and sometimes had no domain model.

Jim’s evaluation paper (Mark and Greer 1993) sparked abundant thoughts that continued throughout my later research. The paper also influenced many others, no doubt in a variety of ways, as it became one of the more influential IJAIED papers and was therefore invited for inclusion in a 25th anniversary special issue of the journal containing updated versions of the most cited papers (Greer and Mark 2016). It has recently been further extended with reference to new evaluation goals for teacher-orchestration systems (du Boulay 2020), which highlights the continuing importance of the original agenda.

Specific Examples 3: PHelpS and S/UM, diyM (and related IOLMs)

As I moved to the University of Brighton after the PhD research on Mr Collins, several new environments were built as extensions to the independent and collaborative open learner model theme. 2SM displayed the respective models of two students on the same screen to encourage rich face-to-face peer discussion (Bull and Broady 1997). PairISM calculated suitable types of interaction for a pair of students (collaboration, peer tutoring or individual learning), based on the comparative contents of their learner models (Bull and Smith 1997). PeerISM allowed two peers to provide feedback to each other, aiming to promote reflection through both receiving and giving feedback, and there was also an artificial peer for further input or to assume the partner role if a human peer was not available (Bull et al. 1999). See Yourself Write allowed expert tutors to deliver feedback on writing through an inspectable learner model, supplemented by system inference over time, aiming to entice students to use their feedback as well as enabling dialogue with the teacher (Bull 1997c). The latter two mirror another commonality with Jim’s OLM interests: VisMod allowed teachers to initiate dialogue with a student about their learner model (Zapata-Rivera and Greer 2004); and artificial learners were amongst the proposals for use with inspectable student modelling tools in combination with a student modelling server (Zapata-Rivera and Greer 2003).

S/UM, an IOLM with learner and user models designed to promote peer interaction and reflection amongst university students, grew from the above projects (Bull 1997a). Students could seek collaborative or cooperative partners for feedback or help. Each student had an inspectable learner model that was constructed from peer feedback and system inference based on quantitative input, and a self-maintained user model that indicated availability to accept help requests and topics on which they considered themselves able to offer feedback. The user models also held representations on areas in which the student desired help, and whether they preferred collaborative or co-operative interaction. This shared some aims with Jim’s work on PHelpS, which comprised multiple user-maintained, inspectable peer-accessible user models (Fig. 3, centre right) to facilitate peer help with specific tasks in the workplace (Greer et al. 1998a).

I was also working on diyM, a ‘do-it-yourself’ learner model where students could construct their own models to supplement system-built learner models in other environments, to help achieve greater accuracy in the model and to prompt reflection in so doing (Bull 1998). For example, used together with Mr Collins, diyM could help resolve inconsistencies within the student’s own version of the learner model; with See Yourself Write it could provide additional input from the student to the teacher who was giving feedback; with PeerISM and S/UM, additional input could help resolve any discrepancies between the learner’s beliefs and peer feedback. DiyM could also be used entirely as a reflection tool, independently of other environments. This level of direct user input to the learner model values can be compared to the user maintenance required for the PHelpS (Greer et al. 1998a) workplace user models.

The PHelpS paper (Greer et al. 1998a) became another of the International Journal of Artificial Intelligence in Education’s most cited papers, and its importance for future work was later reflected upon in the special issue mentioned above (see Vassileva et al. 2016).

The ARIES Lab: I-Help

The Seventh International User Modeling conference was held in Banff in 1999. Since I would be in Canada for the conference to present initial work on SCRAWL, a co-operatively maintained IOLM that modelled students’ knowledge of writing, their writing strategies and their target readership (Bull and Shurville 1999), I emailed Jim to ask whether I might visit the ARIES Lab for a couple of weeks before or afterwards. I was particularly interested in the PHelpS workplace peer help project (Greer et al. 1998a) because of the related aims of peer help in S/UM (Bull 1997a), and the user being required to provide information directly to their user model as was also necessary for some aspects of the modelling in SCRAWL (Bull and Shurville 1999; Fig. 2, upper). I was also still working on diyM, in which learners constructed their own learner models in collaborative learner modelling and peer interaction settings to initialise or provide additional information for the models (Bull 1998). PHelpS had also recently been extended to a university context (Greer et al. 1998b), and this later became the I-Help project (Vassileva et al. 1999b), where one component of the help system matched university students who had questions with potential peer helpers. I-Help was also described from the perspective of open learner modelling, for example, to allow learners to check the accuracy of the model, to assert or update their goals, to indicate their availability, or to provide information to assist peer helpers (Vassileva et al. 1999a). I was therefore confident that a visit to the ARIES Lab would help further refine ideas for subsequent projects arising from S/UM. This included SCRAWL, which I had been keen to extend to include the benefits of further interaction between students about their respective writing strategies, and additional ways of incorporating the diyM models. I was eager for an extended opportunity to discuss this with Jim away from the conference setting, where there would be many others also eager to profit from his vast knowledge and experience in user modelling and AIED.

Jim replied to my email enquiry about visiting the ARIES Lab with his customary swiftness. “Why not come for two years?”, he suggested. So I did. This, of course, put some of the other ongoing projects (including S/UM, diyM and SCRAWL) on hold, but it offered so much in terms of experiences and learning that helped enhance my understanding of how to undertake larger-scale projects; and as a university instructor, Jim also perfectly modelled how to encourage student learning. I subsequently benefitted substantially from each of these qualities demonstrated by him.

Although further practical work on S/UM ceased, moving to the ARIES Lab led to a paper where the modelling approaches of both I-Help and S/UM were used to illustrate the fragmented nature of learner models in the approach of ‘active learner modelling’, which describes “a virtual infinity of potential models, computed ‘just in time’ […] to the breadth and depth needed for a specific purpose” (McCalla et al. 2000). This active modelling process can bring together information from a variety of types of source, for example: “raw data recorded by a web application, partially computed learner models inferred by an ITS, opinions about the learner recorded by a teacher or peers, or a history of learner actions” (McCalla et al. 2000). This illustration and contrast with I-Help was far beyond any expectations I had previously had for S/UM!
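To convey the flavour of ‘active learner modelling’, here is a minimal sketch in which fragments from different kinds of source are assembled just in time, only to the breadth and depth a given purpose requires. The sources and purposes are invented for illustration and are not from McCalla et al. (2000).

```python
# A minimal sketch of the 'active learner modelling' idea: model fragments
# are combined on demand rather than maintained as one monolithic model.

FRAGMENTS = [
    {"source": "web log",      "topic": "recursion", "data": {"visits": 12}},
    {"source": "peer opinion", "topic": "recursion", "data": {"knows": 0.6}},
    {"source": "ITS",          "topic": "loops",     "data": {"mastery": 0.8}},
]

RELEVANT_SOURCES = {           # which sources each purpose draws upon
    "select a peer helper":  {"peer opinion", "ITS"},
    "review study activity": {"web log"},
}

def active_model(topic: str, purpose: str) -> list[dict]:
    """Compute a model 'just in time' from the fragments this purpose needs."""
    wanted = RELEVANT_SOURCES[purpose]
    return [f for f in FRAGMENTS
            if f["topic"] == topic and f["source"] in wanted]

print(active_model("recursion", "select a peer helper"))
```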

I-Help was already well underway when I joined the project. Julita Vassileva was designing the multi-agent architecture underpinning peer matching for help sessions (Vassileva et al. 1999b). Gord McCalla was defining the active modelling approach that was at the core of the system (McCalla et al. 2000). Jim Greer was overseeing the continuing work on the Co-operative Peer Response (CPR) facility where students could post questions and answers (Bishop et al. 1997) and the PHelpS one-to-one peer help system (Greer et al. 1998a), and their integration into a single help environment for students (Greer et al. 1998b). Alongside this, he was always highly active in responding to student queries and help requests in the various deployed versions.

Specific Examples 4: I-Help

During my PhD research with Mr Collins, which was implemented for a language domain, the generalisation of the approach was discussed with reference to learning about electrical circuits (Bull et al. 1995a). When I arrived at the ARIES Lab, the threaded public discussion forum (extended from CPR) and private one-to-one discussions (based on PHelpS) were being deployed in computing courses at the University of Saskatchewan, building on a previous deployment (see Greer et al. 1998b). One of my early contributions to the project was to work with Jim in considering the potential for use in medical education, where collaborative learning was common, highlighting the benefits of the agent-based negotiation mechanism that did not require a detailed domain model to be constructed (Greer and Bull 2000). We also pursued this more generally for small group problem-based learning settings where I-Help could be applied to support students within and across groups (Bull and Greer 2000). Figure 5 shows parts of the interfaces of the private (upper) and public (lower) discussions from around that time. Note how learners could adjust their knowledge levels and availability, and could also contribute information for their personal agent to use in selecting a partner – for example, when offering help, people or topics that they wished to avoid.

Fig. 5
figure 5

The private and public discussion components in the I-Help project (anonymised). Note: Old screen shots edited to form a consistent interface representative of a single user in private discussions (matching agent name, banner colour), and reconstituting the approximate colour of the CPR public forum from a black and white image

My involvement in the I-Help project also included assisting in describing the modelled attributes and data sources in the private discussions (Bull et al. 2001b). As shown in Fig. 6, these included the more characteristic learner model attributes of knowledge, interests and cognitive style, where the model data was obtained from the learners themselves, and also from peer feedback after help sessions in the case of knowledge (as shown previously in Fig. 3, centre left). Data was additionally harvested from the public and private discussions: interests from the former, and cognitive style from the latter. Attributes relating to participation as helper were also modelled: readiness (whether the learner was online or was likely to be online soon), eagerness to help, and helpfulness. Data was again sourced variously from both learners and peers, and the public and private discussions. Finally, preference attributes relating to other aspects of the help sessions were modelled, with data coming from direct learner input (see Fig. 5, upper), and data on help load also from the private discussions. The outcome of a help request was that the requester received a ranked list of the top users recommended as helpers, according to the relevant contents of their respective learner models (a simplified ranking sketch follows Fig. 6). The requester could then select the person they chose to contact and, if that user accepted, a text-based exchange was initiated. This work was later elaborated for the I-Help private discussions with further detail on the modelling approaches and helper recommendation from the perspective of the ‘caring’ personal agents of users (Bull et al. 2003). Alongside this was an examination of usage of the I-Help public discussions, where the forum was found useful by all participant types: those with questions, those offering help, and those preferring to only read postings (Bull et al. 2001a).

Fig. 6
figure 6

Learner model attributes and sources of data in I-Help private discussions
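As promised above, the following is a simplified sketch of ranking potential helpers from attributes like those in Fig. 6. The weights and linear scoring are my own assumptions: the real I-Help matchmaking operated through negotiation amongst personal agents over fragmented user model information.

```python
from dataclasses import dataclass

@dataclass
class HelperModel:
    name: str
    knowledge: float    # of the help topic, in [0, 1]
    readiness: float    # online now, or likely to be online soon
    eagerness: float    # willingness to help
    helpfulness: float  # from peer feedback after previous help sessions
    help_load: float    # current load; a higher load counts against selection

WEIGHTS = {"knowledge": 0.4, "readiness": 0.2, "eagerness": 0.15,
           "helpfulness": 0.15, "help_load": -0.1}

def score(h: HelperModel) -> float:
    """Illustrative linear combination of the modelled attributes."""
    return sum(getattr(h, attr) * w for attr, w in WEIGHTS.items())

def recommend(candidates: list[HelperModel], top_n: int = 5) -> list[str]:
    """A ranked list of recommended helpers for a help request."""
    return [h.name for h in sorted(candidates, key=score, reverse=True)[:top_n]]

pool = [HelperModel("A", 0.9, 0.2, 0.5, 0.8, 0.6),
        HelperModel("B", 0.7, 0.9, 0.8, 0.7, 0.1)]
print(recommend(pool))  # ['B', 'A']
```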

Jim was always extraordinarily active in I-Help, supporting his students’ learning immensely. This was especially the case at the beginning of courses until students recognised the benefits of engaging, and I-Help use became more self-sustaining. Nevertheless, Jim continued to monitor I-Help, and provided input frequently. His continual support and encouragement of students was one of his (many) commendable qualities, and something I hoped to go some way towards emulating in my own teaching after leaving the ARIES Lab. Fortunately, I was able to incorporate I-Help into my own courses at the University of Birmingham. As can be inferred from Fig. 7, the interfaces of the two components of I-Help (upper, private discussions; lower, public discussions) had been fully integrated by that point. The upper left screen shows how learners provided information to their agent about their preferences, with part of the bottom area of the screen concerning desired helper characteristics enlarged for readability; the upper right shows how learners made help requests. As well as the topic of help and relevant course or group, the question type is selected; options have been expanded and enlarged for readability (see Bull and McCalla 2002, for details). The lower area of the screen is where help requests are specified. The bottom part of Fig. 7 shows the threaded public discussion forum. Amongst the threads in an undergraduate ‘Personalisation and Adaptive Systems’ course is a response from Jim (see arrow), who kindly visited our discussions about the user modelling in I-Help, in I-Help itself! My response, to prompt student interaction in case this was necessary, is shown in the post below. However, the students were incredibly excited to have such an eminent personalisation researcher as Jim amongst them – indeed, one of the I-Help originators – and they engaged avidly.

Fig. 7

The integrated I-Help private and public discussions (anonymised)

The work on I-Help continued at the ARIES Lab after the original project ended, with: (i) the addition of the capability to offer packages of standards-based learning objects; (ii) the incorporation of a shared tool for document annotation to extend collaboration possibilities; and (iii) a new name – iHelp Courses (Brooks et al. 2006b). Thus, the previous I-Help components became linked with instructional content in iHelp Courses.

Specific Examples 5: I-Help and Mobile OLMs

When I moved to the University of Birmingham, there was a strong focus on the innovative promises of mobile learning, especially with handheld computers (e.g. a tool for concept mapping on small-screen devices (Chan and Sharples 2002) and a mobile learning organiser (Holme and Sharples 2002)). Students taking the ‘Human-Centred Systems’ MSc were loaned handheld computers for the duration of their study, and these seemed an ideal technology with which to explore OLMs that could also encompass features of learning of particular relevance to the mobile context. For example, to what extent does the user’s location affect their learning: what could be usefully modelled, and how would this information be used in adaptation to facilitate mobile learning? How might mobile learner models support planned or ad hoc collaboration? How might learning with desktop and handheld computers be integrated to enable seamless interaction across devices? An initial logbook and questionnaire study of students’ handheld computer use was undertaken as a starting point for the design of mobile OLMs for use with handheld devices, finding that students naturally used their handheld computers in a variety of locations, and that activities varied across locations (Bull 2003). Following this, multiple mobile OLMs were developed (e.g. Bull et al. 2004), including: (i) a system for a handheld device that took into account contextual information on the stated amount of time the learner had available and the likelihood of them being interrupted in their current location; (ii) an ITS for interaction on a desktop or Pocket PC, with the OLM being editable for cases where learners did not have the opportunity to synchronise their learner model between sessions; (iii) an approach where tutoring occurred on the PC, and additional individualised revision materials were recommended for later use on the handheld computer; (iv) a system where the main interaction was desktop-based, with the possibility of obtaining step-by-step explanations, experimentation with explanations or independent reading according to preferred study style. Follow-up interaction was on the handheld computer, where students could choose to share a high-level summary OLM with others to gauge their relative understanding, and to facilitate collaboration and peer tutoring, especially when they came together away from the lab setting. Central to this was the aim of prompting reflective interaction between peers as they tried to resolve any inconsistencies between their respective models.

Much of this mobile OLM work still reflected clear traces of I-Help: in particular, (ii), where learners may need to maintain some of the learner model information themselves, and (iv), which took into account the individual’s study and explanation preferences, and where learner models could be used in the context of collaboration and peer help. However, even (i) had some similarity in its modelling of attributes beyond the learner’s knowledge or skills, as did (iii) in recommending help, albeit in the form of revision materials rather than peer helpers.

Specific Examples 6: I-Help (and Related Projects) and IOLMs

As indicated previously, unlike PHelpS (Greer et al. 1998a), where employees were helping each other on clearly defined, structured tasks in the workplace, I-Help (Vassileva et al. 2003) could be deployed flexibly in many courses. Although not specifically described as such at the time, I-Help could be seen as an IOLM: one that was co-operatively maintained by the learner, the system, and other users (see Bull et al. 2001b). Alongside the mobile OLMs, work at Birmingham on (non-mobile) IOLMs was also unfolding. Some of these were associated with a specific domain, whereas others were domain-independent. All domain-independent IOLMs required instructor input to set them up (as did I-Help) but, once this was completed, instructors could continue to use the IOLM themselves as much or as little as they wished. Many followed learning closely, using the IOLM information to adjust their teaching, whereas some largely withdrew, welcoming the IOLM as a resource primarily for students. In some of the IOLMs students could opt to share their learner models with others, so much of the peer interaction and help that took place in those cases resulted from the availability of peer models.

An early prototype implemented for Japanese particles was jple, which utilised multiple-choice questions to build a small learner model, and offered simple OLM visualisations: a table contrasting positive and lacking skill levels, and a graphical view where positives were displayed above the axis and difficulties below, as shown on the top left of Fig. 8 (Bull and Nghiem 2002). jple was open to the student the model represented, and peer models could also be accessed anonymously (or by name if individuals had opted to release their name). Students could choose which types of model to view, such as: stronger peers, to help them appreciate the level for which they could aim; or less advanced peers, for example, to help them recognise that they were accomplishing more than they had realised, and increase their confidence. This research aimed to incorporate the benefits of some of the previous projects on peer collaboration at the University of Brighton, and planned to extend the work to include peer recommendation according to users’ respective knowledge of the task, inspired by PHelpS (Greer et al. 1998a). jple models were also available for the instructor to view.

Fig. 8

Upper left, jple (Bull and Nghiem 2002); middle left, OLMlets (Bull et al. 2006); lower left, OLMLA (Xu and Bull 2010); centre, UK-SpecIAL (Bull and Gardner 2010); right, Flexi-OLM (Mabbott and Bull 2006)

In the meantime, the domain-independent OLMlets was being developed to allow instructors to define multiple-choice questions for their university courses (Bull et al. 2006). OLMlets had a strong focus on helping students to identify misconceptions, and many misconceptions were revealed through the instructor-defined multiple-choice response options, with over 94% and 97% of students, respectively, found to be holding at least one misconception in two courses later examined in greater detail (Bull et al. 2010). In one of those courses (mathematics), there were more learner models than there were students registered on the course, suggesting that the utility of OLMlets had been recognised from other courses in which it was available. These additional students may have wished to identify specific difficulties in their mathematical skills required for other courses that already assumed those skills (and therefore did not cover them in OLMlets). OLMlets offered five simple learner model visualisations. The graph view is shown middle left of Fig. 8. Like jple, the graph view displays positive and negative information on different sides of an axis but, in this case, the existence of misconceptions is also portrayed (yellow). Clicking on the red ‘misconceptions’ link in each of the OLMlets views leads to short statements of any identified misconceptions for the corresponding topic. Links are also available to the questions and any additional materials or web links provided by the instructor. Following widespread use of OLMlets, UK-SpecIAL (Bull and Gardner 2010) was developed to amalgamate the data from the (initially 10 first year) OLMlets courses for which an individual was registered, and display their progress towards the UK-SPEC Standard for Professional Engineering Competence (Engineering Council 2005) with reference to the various courses contributing to each UK-SPEC Learning Outcome. This was displayed in a ‘boxes’ format (Fig. 8, centre), similar to one of the OLMlets visualisations, showing levels of competency by the colour of the boxes for each of the courses contributing to a specific UK-SPEC Learning Outcome, with further information available for a module by clicking on its title.

At around the time of the first OLMlets deployments, Flexi-OLM was being used in a specific course: C programming. Flexi-OLM constructed a more detailed learner model from a combination of short pieces of code entered and multiple-choice responses, and offered a choice of seven simple and structured learner model views, with an excerpt from the pre-requisites structure illustrated on the right of Fig. 8 (Mabbott and Bull 2006). The colour of the nodes indicates the skill level (with red used in cases where misconceptions are present), and clicking on a node leads to further breakdown. Users could also try to persuade Flexi-OLM to change the learner model data by requesting additional testing (which would require correspondingly correct or incorrect responses to change a value upwards or downwards); or edit the model directly, permitting the learner full control to update the data (Fig. 2, centre right). Like the umrao chess tutor that could be used to investigate different tutoring styles (Gadwal et al. 1993), Flexi-OLM was also designed as a vehicle to explore issues, in this case visualisation acceptance and use, and different types of interactively maintained learner model.

Other IOLM examples included Flight Club, which also offered a longer-term planning view alongside the OLM, for trainee private pilots, who were highly motivated independent learners (Gakhal and Bull 2008). Notice, for language learners, included a comparison to the expert knowledge (Shahrour and Bull 2008); and olmla, with multiple views for advanced second language users (Xu and Bull 2010), similarly incorporated an expert comparison (Fig. 8, lower left: ‘sentences’ view contrasting sentences based on expert and learner rules). CALMsystem, for school science topics, was also designed to be available for integration into an ITS, to allow user-system learner model negotiation using a chatbot (Kerly et al. 2008). In addition, the Next-TELL OLM (Bull and Wasson 2016) and LEA’s Box OLM (Ginon et al. 2016) can each receive learner model data from a range of external sources, without themselves performing any tutoring. The Next-TELL OLM is additionally able to support student-teacher discussion of the model contents if a learner wishes to challenge the data, using a separate tool to negotiate the value of activity or competency nodes (Bull and Vatrapu 2012). This has some similarity with Jim’s interest in students and teachers building and visualising conceptual maps (Zapata-Rivera et al. 2000).

Independence of learner models from a complete ITS reflected both the early work with Mr Collins (Bull et al. 1995a) and resultant projects, and I-Help’s (e.g. Vassileva et al. 2003) independence from a teaching system. In addition, the Next-TELL and LEA’s Box OLMs approximate the active learner modelling approach used in I-Help, in which learner models are fragmented, drawn from different sources, and computed according to purpose, as necessary (McCalla et al. 2000).

The previous work on jple led to the development of umpteen (Bull et al. 2007b), a persuadable IOLM deployed in several courses. umpteen presented the individual learner model as a ranked list of skill meters, and had a table summary of the group’s knowledge. It permitted students to release their individual models to instructors and specific peers, named or anonymously. Findings from an experimental study across three groups of different sizes (Bull et al. 2007b) resulted in the integration of umpteen’s approach of sharing models into OLMlets (see Bull and Britland 2007). This then allowed instructors and peers to access individual learner models that had been released to them (in named or anonymous form), alongside the student’s own model, in addition to a (previously available) combined model. Topics in the model could be released in different ways, to the same or different users, with students also able to define groups to enable them to easily share their model in certain ways with a specific set of users. This embodies a smaller set of privacy settings than the prototype pest (Privacy Event Stream Traffic) server, designed for multi-agent environments that share their data, such as I-Help, where a greater range of information types was available (Kettel et al. 2004). A later approach in the Next-TELL OLM allows students to provide quantitative and qualitative feedback to each other (e.g. after group work or examination of a peer artifact), with quantitative values contributing to the learner model of the user receiving feedback, alongside teacher and self-assessments, and automated data from external systems (Bull et al. 2016c). Instructors can adjust weightings from the default equal weighting granted to all data sources. Peer input aims to further motivate discussions amongst learners, as well as to provide helpful information to the individual receiving feedback, and to prompt additional reflection in the giver of feedback as they formulate their comments for another learner. Some of the various IOLM aims, therefore, were related to the objective of promoting peer interaction in I-Help.
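The weighted combination of data sources described above can be pictured with a minimal sketch. The source names and the default equal weighting follow the text, but the function itself is an illustrative assumption rather than the Next-TELL implementation:

```python
# Illustrative sketch: combining self, peer, teacher and automated data
# into a single competency value (an assumption of the general scheme,
# not the actual Next-TELL code).

def combine_sources(values: dict, weights: dict = None) -> float:
    """values maps source name to a score in 0..1; sources not present
    in values are simply ignored."""
    if weights is None:
        weights = {src: 1.0 for src in values}  # equal weighting by default
    total = sum(weights[src] for src in values)
    return sum(weights[src] * v for src, v in values.items()) / total

# An instructor down-weights self-assessment relative to other sources:
competency = combine_sources(
    {"self": 0.6, "peer": 0.7, "teacher": 0.8, "automated": 0.65},
    weights={"self": 0.5, "peer": 1.0, "teacher": 2.0, "automated": 1.0})
```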

Jim led extensive investigations into the reasons I-Help was more successful as a support tool in some courses than others (Greer et al. 2001). This laid the foundation for future deployments of I-Help, as well as offering useful findings for others aiming to deploy peer help-based systems into courses. He took usage level as an indicator of perceived usefulness, since students will not use an optional system if they do not consider it helpful; questionnaires were distributed to gather further opinions. The timing of introduction of the I-Help private discussions into a course was considered important with reference to usage levels, with some students stating that they would likely have engaged more, had it been available from the start of their course. The I-Help public forum was used heavily in some (notably the more technical) courses, but rarely in others. Students considered it useful to be able to confirm that they were on track, to compare their own progress to that of others, and to be able to recognise that others were experiencing similar problems to themselves. Also important was the initial input from tutors to support interaction until students were able to experience the benefits of participation, and usage became more self-sustaining. This additional instructor effort at the start was easily outweighed by the later reduction in individual requests for help that instructors would otherwise receive. Smaller, cohesive groups had less demand for I-Help, since students were already supportive of each other. Similarly, in larger courses that incorporated groupwork, there was less need for I-Help since students had already established knowledge networks. Where students shared lab space, they could communicate directly, face-to-face. Although not generally the case, for some students the increased visibility of, and recognition for, participation from peers and, in some cases, instructors underpinned their motivation. The I-Help private discussions used a virtual currency originally designed to motivate participation; however, the I-Help economy was not a major factor in motivating most of the participating students, perhaps because there were no tangible rewards associated with students’ I-Help currency. Therefore, student participation clearly indicated the perceived utility of the peer help environment for those taking part.

Participation in the above work influenced my later research surrounding the deployment of IOLMs. Log data was commonly used to identify usage levels and thereby infer perceived utility, and the learner models were often taken as an indication of level of understanding at different points in time. Questionnaires were employed to provide additional detail on preferences and perspectives. The kinds of questionnaire items to include owed much to my experiences with I-Help evaluations. Several features were identified in questionnaire responses by 18 students from two small courses as likely positive influences on uptake of IOLMs to promote formative assessment and independent learning, based on their decisions about whether to use OLMlets in current and previous courses (Bull 2010). At that time, OLMlets was deployed in 20 courses. The following were very strongly recommended for IOLM deployments: that students understand the reason for using the IOLM in a particular course; that the IOLM is available close to the beginning of the course; and that an indication of students’ misconceptions is available, where applicable. Strongly recommended were: that learners may compare their learner model to the expected knowledge for the current stage of the course (i.e. not only against the target knowledge); and that peers have the option to release their learner models to each other. Also recommended were: introducing the environment during a lab session, where possible; and including a variety of topics, concepts or skills rather than only a high-level overview. Whether the questions for a particular course were straightforward or required more thinking was claimed to influence uptake less; and whether the learner models were summatively assessed did not generally affect students’ perceptions of their utility as a learning support.

From the above, it can be seen that there is a clear parallel between the I-Help findings and the results from OLMlets users: differences in usage levels across courses, the importance of the timing of introduction, and the availability of peer comparison. Also similar was the possibility to check that progress was appropriate: in I-Help this was obtained through responses from peers or instructors; in OLMlets, from looking at peer models or from comparing one’s own model to expectations set by the instructor for the present stage of the course (see left of Fig. 9, shown for the OLMlets skill meters view). As in the case of I-Help, in a study of umpteen (on which the peer model component of OLMlets was based), some participants found it reassuring to discover that others were facing similar difficulties (Bull et al. 2007b). In an OLMlets deployment, students also commented on the usefulness of peer models as a comparison as well as to prompt collaboration and peer help and, in a few cases, competition and recognition (Bull and Britland 2007). In the MAgAdI blended learning OLM context (a project in which I was only marginally involved), students requested the addition of peer performance information (Martin et al. 2012). Shared lab space was a strong factor in promoting face-to-face discussion where OLMlets was available but, unlike in I-Help, where this led to less need for online peer help, releasing OLMlets models to peers was itself what often led to discussion of students’ respective understanding in the shared lab setting and beyond (Bull and Britland 2007). Use of a class Facebook group to discuss OLMlets led to students asking for help from peers or confirmation from the instructor even in a small group of fifteen, as well as discussions about the questions in OLMlets (Alotaibi and Bull 2012). Following from these findings with OLMlets, a discussion component was incorporated into the Next-TELL OLM (Bull et al. 2014). An initial study tracked eleven volunteers from two small group courses using the Next-TELL OLM for formative assessment purposes. Most viewed and contributed to discussions (10 and 9 respectively). However, viewing peer assessment or feedback was rated as useful by relatively fewer participants in this context, which also included the peer discussion component, than had previously been the case without the discussion feature. As with I-Help, there were no tangible benefits to engaging with the above IOLMs in most cases. (In a small minority of courses the OLMlets learner models contributed a small amount to the final assessment; but even when learners were given the opportunity to edit (i.e. change) their learner model values, to enable them to easily update the model if they had learned something away from OLMlets, in a course where the final model state formed 5% of the course mark, log data revealed that students largely did not misuse this facility (Bull 2010).)

Fig. 9

Excerpts from: left, OLMlets comparison; centre and right, Next-TELL OLM comparison

Specific Examples 7: Learning Analytics, Dashboards and Visualising (I)OLMs

As the amount of available student data rose, Jim was increasingly interested in the externalisation of learner models to students and instructors, as indicated by the following statement: “We see the presentation of learner models, both individual student models and models describing cohorts of students, to be of increasing importance as more e-learning tools are included in course curricula” (Brooks et al. 2007).

In the IOLM situation, where there is typically limited or no tutoring provision, and where promoting metacognitive behaviours using the IOLM is usually a central goal, it is particularly important to externalise the model in a manner that is both understandable and able to support metacognition and learning on an individual basis. Many of the above IOLMs were applied to investigate ways in which to present the learner model to learners, offering a choice of visualisations within a single system to allow users to select the view that best suited their preferences or purpose of viewing at the time. Most had separate visualisations for the same knowledge, though the pre-defined structure of some of the views exhibited different relationships amongst the various concepts or competencies. Flexi-OLM in particular reflects this, incorporating pre-requisites (Fig. 8, right), concept map, hierarchical structure and lecture structure views (Mabbott and Bull 2006). An eye-tracking study with Flexi-OLM found that, in addition to some differences depending on whether an OLM view was one of the user’s stated preferred formats, some visualisations were more likely to be scanned, with less time spent on knowledge information, whereas other views encouraged inspection of information on knowledge levels (Bull et al. 2007a). Thus, some visualisations may be better suited than others to particular purposes of viewing the information (e.g. gaining a quick overview before navigating to a suitable task or materials, versus planning deeper study given the current understanding of pre-requisites). Log data from lab-based studies and OLMs deployed in practice, with children and adults, in taught courses and independent learning, in different domains and with diverse OLM visualisations, revealed that although some views were more popular, each of the offered visualisations was accessed, and some students used multiple views (Bull et al. 2010). Thus, unless there is a clear pedagogical reason to provide a specific visualisation in a particular case, it may be effective to provide multiple views of the same OLM data. Furthermore, in the Next-TELL OLM, which comprises data from multiple sources, the visualisation can be filtered by topic or competency and by activity/data source (see centre and right of Fig. 9); log data revealed that these filters were sometimes applied (Bull et al. 2013a). Follow-on work discussed displaying inconsistencies in learner model data using OLMlets, Next-TELL and LEA’s Box OLM views as examples, suggesting opacity, blur, arrangement, colour, solid/dashed/width of lines, and distinct comparison visualisations (Al-Shanfari et al. 2016).

Jim’s approach took a complementary direction. He was instrumental in bringing the benefits of OLMs to learning analytics applications at the University of Saskatchewan. However, rather than investigating multiple visualisations for a single model, he was seeking understandable, appropriate visualisations to facilitate specific tasks. In one strand of work he constructed fine-grained user models from large-scale usage data, for student and instructor reflection. For example, a community visualisation tool for instructors aimed to convey explanatory detail in a pedagogically informative manner by visualising social interactions in relation to activity type: discussion participants, lurkers, and those enrolled on a course but not participating in discussions (Brooks et al. 2007; Fig. 3, upper right). Connections between individuals were shown: the distance of nodes from the centre of a circle indicated participation level (activity type), and the size of nodes reflected an individual’s influence or importance in the community. In visualisation of lecture capture usage, graphs were provided for watching and re-watching behaviour: three-dimensional heatmaps indicated patterns of viewing over time, such as when watching occurred and the amount of time viewed, as well as consistency of viewing; and a histogram revealed differences between groups, such as higher versus lower achievers (Brooks et al. 2013). In a query tool, instructors could seek answers to questions such as “who is falling behind?”, with results displayed in a table (Brooks et al. 2007). Instructors were able to build and run their own queries; thus, they could ensure that the information delivered matched their specific purposes. They could also permit students to run the queries, with comparative data anonymised and their own data highlighted. Both instructors and students communicated an appreciation of the information available, and student behaviour changed as they could judge what was expected for increased participation marks. In other work, building on Sankey diagram functionality, the Ribbon Tool (Fig. 3, lower) highlights flow through academic programmes to enable administrators to readily interpret data that could be used to drive programmatic change (Greer et al. 2016). Learner personas based on decision-tree leaf nodes were created in the form of narrative descriptions of typical learners according to historical data of learner activity; these can be applied to new students (Brooks and Greer 2014). The personas outline the outcome of predictive modelling to help identify which students might be at-risk and hence may benefit from intervention, and are designed to be easily recognised and remembered by those who need to use them: instructional designers, institutional learning specialists or academic advisors. Jim also provided useful information directly to students with Sara, the Student Advice Recommender Agent (Greer et al. 2015). Sara is comparable to an early alert system, based on learner activity combined with predictive modelling and demographic information drawn from canonical personas. The advice strings are presented weekly and comprise a few lines of text that are easily understandable and actionable; the approach is scalable for embedding in large courses.
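To give a flavour of the kind of instructor-built query described above, the following minimal sketch runs a ‘who is falling behind?’ check over activity records; the record fields and thresholds are hypothetical illustrations, not Jim’s implementation:

```python
# Hypothetical 'who is falling behind?' query over activity records.
# Field names and thresholds are illustrative only; the actual query tool
# let instructors compose and run their own queries.

students = [
    {"name": "A", "logins_last_week": 0, "posts": 1, "avg_quiz": 0.42},
    {"name": "B", "logins_last_week": 5, "posts": 4, "avg_quiz": 0.78},
    {"name": "C", "logins_last_week": 1, "posts": 0, "avg_quiz": 0.55},
]

def falling_behind(records, min_logins=2, min_quiz=0.5):
    """Flag students with low recent activity or low quiz performance."""
    return [r["name"] for r in records
            if r["logins_last_week"] < min_logins or r["avg_quiz"] < min_quiz]

print(falling_behind(students))  # -> ['A', 'C']
```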
Other work focussing on students’ needs includes a persuasive system with socially focussed strategies to heighten engagement: ‘upward social comparison’ in a table display listing the student’s own grades together with those of five anonymised, stronger students; ‘social learning’ using a graph showing grade ranges for all students; and ‘competition’ in a ranked table of results (Orji et al. 2019). This was integrated into a Learning Management System. Although most learners were motivated by competition (followed by social comparison and, finally, social learning), there were differences in strategy preferences, leading to the subsequent development of student persuasion profiles. In addition, a mobile app displays the social comparison and social learning visualisations to enable viewing when students are on the move (Orji et al. 2018). A subset of the Next-TELL OLM visualisations was also adapted for smartphones (Bull and Wasson 2016); and Orji et al.’s (2018) aim of easily supporting educational interaction in various locations was shared by the mobile OLM environments introduced previously (Bull et al. 2004). On the theme of learning analytics visualisations, the Next-TELL project offers visualisations of learning analytics data alongside the OLM visualisations. Whilst the OLM is focussed on visualising competencies inferred from activities with external systems (e.g. Bull and Wasson 2016), as well as self, peer and instructor feedback (e.g. Bull et al. 2016c), the learning analytics visualisations are predominantly reflective of activity data (see Bull et al. 2013b).

Specific Examples 8: Security, Privacy and Trust

The above work on social strategies (Orji et al. 2018, 2019) required dealing with issues of security and privacy, since information relating to other users was shown. Pseudonymised IDs were employed to mask true identities. In the cases of I-Help (Vassileva et al. 2003) and pest (Kettel et al. 2004) in Canada, and umpteen (Bull et al. 2007b) and OLMlets (Bull and Britland 2007) in the UK, users can control many of the privacy parameters, as described earlier. I-Help also used highly fragmented models. A further approach developed for I-Help was to offer users programmable agents to find out what was happening in the I-Help community, a move which itself brings new privacy questions (Cao and Greer 2004). In a Norwegian IOLM context, privacy and security were especially important in the design of the avt service to recommend activities from a variety of vendors to learners, where it is proposed that ranked recommendations of activity-vendor pairings be accessed via an IOLM structured around a ‘subject map’ (Morlandstø et al. 2019). Critical questions include where the learner data to allow effective personalisation is stored, which providers and other stakeholders may access it and when, and the integration and sharing of data between them. Other work in Canada investigated issues surrounding privacy in e-learning contexts in general, but also with specific reference to the development of iHelp (Anwar et al. 2006). Further work extended to concerns of trust and privacy-preserving reputation management and transfer, exemplified in the iHelp discussion forum where trust amongst co-learners is vital (Anwar and Greer 2012).

Jim’s earlier work was also concerned with user trust in the systems themselves: for example, Explain, the decision-support system for agriculture, aimed to offer understandable, applicable and trustable recommendations through explanation (Greer et al. 1994); and wacsa, the intelligent online sales agent, sought to raise awareness of the possibly persuasive and even coercive outcomes of constructing user models for sales contexts and questioned, amongst other things, whether user models might best be deleted between sessions, thereby losing the potential benefits to the customer (and business) of a long-term consumer model, or whether the model ought to be made available to the customer before it is used (Greer et al. 1996). Opening the learner model is one of the ways in which AIED systems can increase user trust (assuming an accurate or amendable model), and is a basis of much of my own research on trust in relation to IOLMs, where the work has primarily focussed on trust in the IOLM itself. For example, it was found that first year university students generally claimed to trust the OLMlets and UK-SpecIAL information based on their experience of the deployments (Bull et al. 2009). An experimental study additionally revealed that different features of an OLM may be more important for developing trust for different users, such as: complexity of visualisation, level of control over the data, and the possibility to compare to peers or instructor expectations – where trust was defined as “the individual user’s belief in, and acceptance of the system’s inferences; their feelings of attachment to their model; and their confidence to act appropriately according to the model inferences” (Ahmad and Bull 2008).

Specific Examples 9: Interactively Maintaining the Learner Model

Jim was also involved in efforts to bootstrap learner models based on e-portfolios, where users underwent a reflective practice of specifying their knowledge levels whilst linking to e-portfolio artifacts that demonstrated, or provided evidence for, this knowledge (Guo and Greer 2007). The bootstrapping process has similarities to ‘(mostly) learner-maintained learner models’, also used to initialise the model in diyM (Bull 1998). This is one of the forms of interactive learner modelling where students directly provide information to their model (see Bull 2020). The idea of self-specification of knowledge levels, as above (Guo and Greer 2007), also left traces in the facility by which Next-TELL OLM users may provide self-assessments of their artifacts to contribute (additional) competency data to their learner model (Bull et al. 2016c). PHelpS similarly required employees to maintain their user models themselves in order that suitable colleagues could be identified for help sessions, with pairings only suggested as appropriate to the two users’ respective knowledge and availability (Greer et al. 1998a), though here peers also provided user model information following a help session. I-Help correspondingly involved self-maintenance of many aspects of the learner model, although this was a substantially more complex environment, with peers and personal agents also contributing much of the data, as indicated in Figs. 3 and 6 (see Bull et al. 2001b). S/UM likewise required students to update some of the information for modelling: knowledge, availability to interact, and areas in which they would like to receive and offer help, as well as the type of interaction they prefer, i.e. collaboratively working on a solution or co-operatively helping each other (Bull 1997a). Other information was drawn from peer feedback and system inference. PeerISM also used self and peer evaluation alongside system inference (Bull et al. 1999), and See Yourself Write built the model from teacher feedback, incorporating system inference over time and allowing further discussion with the instructor about the model (Bull 1997c). SCRAWL likewise required user maintenance of some aspects of modelling for the writing process, and comprised three models (Bull and Shurville 1999). SMsystem is a system-inferred and system-maintained component of the student model, based on help topics viewed, and SMstudent holds the learner’s direct contributions to supplement the SMsystem evidence with their explicitly claimed knowledge. A reader model, RM, is formed by the student in response to system questioning about the target readership. A writer model, WM, is jointly constructed by the student and system to represent orientation to writing, inferred from answers to questions about how they write, and subsequent student amendments to the profile that is set up according to their responses.
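As a rough illustration of how SCRAWL’s components fit together, the following sketch uses the component names from the text, but every field and type is invented for illustration only:

```python
# Rough sketch of SCRAWL's model components; component names follow the
# text, but all fields are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class SMsystem:                     # system-inferred: help topics viewed
    topics_viewed: list = field(default_factory=list)

@dataclass
class SMstudent:                    # learner's explicitly claimed knowledge
    claimed_knowledge: dict = field(default_factory=dict)  # topic -> level

@dataclass
class ReaderModel:                  # RM: built from questions about readership
    target_readership: str = ""

@dataclass
class WriterModel:                  # WM: orientation to writing, inferred and
    orientation: str = ""           # then amendable by the student

@dataclass
class ScrawlModels:                 # the student, reader and writer models
    sm_system: SMsystem = field(default_factory=SMsystem)
    sm_student: SMstudent = field(default_factory=SMstudent)
    reader: ReaderModel = field(default_factory=ReaderModel)
    writer: WriterModel = field(default_factory=WriterModel)
```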

Editable models are another form of interactively maintained model, as exemplified in Flexi-OLM (Mabbott and Bull 2006); and in C-POLMILE, one of the mobile learner modelling systems, which allowed editing if a learner had not been able to synchronise their desktop and handheld computers between sessions, with editing also serving as a reflection activity (Bull and McEvoy 2003). Editing the model is also part of many of the approaches in which learners can contribute data to their learner model, since they may change some aspects of it as updates are required. For example, in PHelpS, knowledge gaps in the model can be removed following a help session (Greer et al. 1998a), and the I-Help private discussions permit students to alter their own perceptions of their knowledge level following interaction with a peer (Bull et al. 2001b). Of course, information from peer evaluations (and personal agents in the case of I-Help) may conflict with any user-given data.

Persuasion of the learner model allows the learner to challenge the system if they disagree with some aspect of their model. It is available as an additional option in Flexi-OLM (Mabbott and Bull 2006), in umpteen (Bull et al. 2007b), and in the LEA’s Box OLM, where persuasion is parameterised by the instructor (Bull et al. 2016b). This is similar to negotiation of the learner model as in Mr Collins (Bull et al. 1995b) and CALMsystem (Kerly et al. 2008) but, rather than separate learner and system values being retained in the learner model if there is no agreed outcome, the system’s viewpoint prevails in a persuadable learner model if it is not convinced by the learner’s attempt to demonstrate their knowledge (or lack thereof). These approaches are less directly reflected in Jim’s research, although the complex negotiation mechanisms between the I-Help agents (Vassileva et al. 2003) contribute to a vast learner modelling enterprise of interdependent artificial and human participants. The approaches differ, however, in that I-Help personal agents are seeking the best help deals for their respective owners, whether they be helper or helpee, whereas in negotiated learner modelling the learner and the system are together aiming to resolve any inconsistencies in the learner model to help maintain its accuracy, whilst also prompting reflection (see Bull (2016); or, for comparative discussion of interactive learner model maintenance approaches, Bull (2020)).
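The contrast between the two approaches can be sketched in a few lines; the function names and the pass/fail test below are illustrative assumptions, not the mechanisms of any of the systems above:

```python
# Illustrative contrast between persuadable and negotiated model updates.
# Names and the test mechanism are assumptions, not the actual systems.

def persuade(system_value, learner_claim, passed_extra_test: bool):
    """Persuadable model: the learner's claim is adopted only if they
    demonstrate it through further testing; otherwise the system's
    viewpoint stands."""
    return learner_claim if passed_extra_test else system_value

def negotiate(system_value, learner_claim, agreement_reached: bool):
    """Negotiated model: if no agreement is reached, both viewpoints are
    retained separately in the learner model."""
    if agreement_reached:
        return {"agreed": learner_claim}
    return {"system": system_value, "learner": learner_claim}
```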

Specific Examples 10: Stakeholders

As mentioned previously, OLMs have most commonly been aimed at the student that the model represents, with increasing numbers of OLMs also available to instructors or peers; and they can additionally be opened to other stakeholders such as parents, system designers, educational technologists, instructional designers, administrators and policy-makers (Kay 2016; Reimann et al. 2011). Both Jim and I focussed heavily on students, instructors and peers, but Jim’s work has also been especially successful with some of the less-often included stakeholders. For example, the lecture capture usage information mentioned above is relevant to educational researchers and instructional designers (Brooks et al. 2014); the Ribbon Tool (Fig. 3, lower) for tracking flow through academic programmes is aimed at administrators (Greer et al. 2016); and the persona descriptions drawn from predictive modelling aim for better understanding between data scientists and learning specialists (Brooks and Greer 2014).

The above scale of impact contrasts with my own work, although this did aim for more widespread use than within the originating university. For instance, the ease of setting up OLMlets questions and the ‘expected knowledge’ comparison was described for instructors in university engineering departments (Bull et al. 2006); and support was provided for teacher use of the Next-TELL OLM in schools, including the application of existing competency frameworks or setting up and sharing their own frameworks (see e.g. Johnson et al. 2013). The Next-TELL project also involved the proposal of a teaching analytics model based around an OLM, which required the collaboration of teaching experts, design-based research experts and visual analytics experts (Vatrapu et al. 2011). Another approach aiming to reach broader stakeholder groups is the pada dashboard reflecting activity-based learner models (a project in which my role was minor), which is principally targeted at adults with dyslexia, with information available for psychologists, pedagogical experts and counsellors in addition to the instructors (Mejia et al. 2017).

A study of university students’ perceptions of the LEA’s Box OLM – its visualisations, visualisation types, purposes of inspecting the model, and use of the learner model discussion feature for interactive model maintenance – was conducted before they used the OLM, in order to gauge the extent to which they would consider it beneficial (as this can be important in the initial motivation to try an OLM when it is available on an optional basis); the findings serve as recommendations for consideration by OLM developers (Bull et al. 2016b). Continuous and quantised skill meter visualisations, and a table view, were the most popular amongst the ten options available, but all 25 participants expected to use at least one of the presented structured visualisations. Later research investigated anticipated use of visualisation examples from a variety of (I)OLMs, with findings applicable to both OLM and learning analytics dashboard designers (Bull et al. 2018). Skill meters were judged the most likely to be used amongst the 38 participants, followed by graph, grid, table, network, pre-requisites map, hierarchical tree, and concept map views. However, each of the 17 visualisations studied had some participants who would expect to use it, whereas other participants would not. Combinations of visualisations were therefore recommended according to the overall context and purpose of creating an OLM.

Additional work sought information about the probable acceptance of OLMs in UK schools from the perspective of ‘assessment for learning’, providing information about six OLMs as examples (Kerly and Bull 2007). Fifteen primary and secondary level teachers, headteachers and Local Authority or government (OfSTED) inspectors were surveyed. They considered individual learner models and comparisons to teacher expectations likely to be useful for both children and teachers, with a lower level of support for the availability of information on peers. Nevertheless, a small-scale study with the Subtraction Master (student-persuadable and teacher-editable) OLM revealed interest from a small majority of the eleven 8–9-year-old children for viewing their OLM smilies and being able to compare these to those of the ‘average peer’, with no representation shown if the child was performing less well (Bull and McKay 2004). Positive outcomes were particularly reported by teachers in a short study of Wandies, an OLM for 7–8-year-old children, presented as different coloured magic wands, that supported pair work with a pair model and prompted spontaneous peer interaction away from the task and the specific pairs (Bull et al. 2005). Children were also motivated to attain gold wands.

Another stakeholder that is relevant in the child context is the parent. With the VisMod Bayesian student model for children and teachers, it was suggested that parents might also become involved in discussion and understanding of their child’s learning and the teacher’s teaching (Zapata-Rivera and Greer 2004). This idea of an OLM for parents was also addressed in Fraction Helper (which displayed the OLM as growing and dying trees), designed to support parents in helping their (9–10-year-old) children overcome misconceptions about fractions (Lee and Bull 2008). However, a small-scale study found that some of the parents also held misconceptions, suggesting the utility of interactions specifically for parents to resolve their own erroneous beliefs (not necessarily the same as those of their children) before they pass misconceptions on to their children. As in Wandies, where children found the attainment of gold wands motivating, in Fraction Helper children strove to grow their trees.

Recent Research

Leaving the University of Birmingham provided the opportunity to focus more fully on some of the particularly interesting problems and issues surrounding OLMs and learning analytics, some of which align well with Jim’s interests; some inspired by him. Alongside some other (as yet unpublished) consulting and collaborative work, the following have especially extended several of the previous OLM themes.

A chief direction of continuing research is illuminating the relative utility of different (I)OLM views for specific purposes (alongside recommendations to retain options for the user to select amongst). Most previous research on (I)OLM visualisation has considered visualisation use within a particular system, meaning that specific visualisations have often not been contrasted with one another. In addition to the study described above (Bull et al. 2018), in work undertaken at the University of PittsburghFootnote 7, representative edited visualisations from a range of our respective previous (I)OLMs were combined in a pen-and-paper study, to discover which individual and peer comparative views students would anticipate using to help them determine what to work on next (Bull et al. 2016a). More structured visualisations were perceived as useful for viewing an individual model, whereas skill meters and similar visualisation types were considered easier for comparison visualisations. Other research at the University of Pittsburgh compared design alternatives for additional details on OLM data when interacting with the visualisation, where knowledge in relation to topics, concepts and activities can be explored (Guerra et al. 2018). The further information appears on mouseover, allowing the surrounding context to be retained. Results suggested especially that a summary feature may be helpful when presenting finer-grained information.

Particularly relevant to Jim’s work with I-Help and related projects (e.g. Greer et al. 1998b; Vassileva et al. 2003) is recent work at the University of BergenFootnote 8 on the avt project to recommend activities from a variety of educational technology providers as appropriate to the individual learner at a specific time, according to a subject map and their overlay learner model (Morlandstø et al. 2019), as described above. Many of the same issues are critical, for example: privacy, security, trust, identifying appropriate resources to address knowledge gaps, and appropriate ranking of recommendations.

Finally, I had long wanted to write a general paper on OLMs to contribute towards clarification of some of the issues that have been inconsistently reported in the literature over time, and approaches that have received less attention. Jim’s paper on ‘The State of Student Modelling’ (Holt et al. 1994) was immensely inspirational in this regard, having helped illuminate many themes. I did not aim to emulate the tremendous achievement of that work, but instead focussed on describing some of the core OLM ideas that have been pursued by a variety of researchers. Seeing how Jim always strove to communicate key concepts to other researchers fortified my motivation to try to write something more manageable, and on a much smaller scale (see Bull 2020), but that will hopefully provide a usable entry point for some of those embarking on OLM research. This is particularly relevant now, since the OLM literature is expanding rapidly, and OLM ideas are also being encompassed in some of the more recent learning analytics directions – as exemplified in much of Jim’s learning analytics work.

Summary

This paper has described some aspects of a programme of research on (independent) open learner models, focussing on areas that were particularly inspired by Jim, or that had strong commonalities with his interests, over a 25-year period. Some of these influences came directly from Jim’s generous suggestions and advice; other influences seeped in through exposure to his incomparable thinking and capacity to put ideas into practice. Much of this work would not have progressed so far without his inspiration, encouragement, imagination and creativity.

Particularly impressive was Jim’s ability and foresight not only to identify the most relevant questions throughout the emergence of new technologies, changes in the educational landscape, and the accumulation of empirical results, but also to lead the field by both building upon previous developments and feeding in new and timely ideas. His early work often combined a very AI-centred approach, taking what AI was capable of, with application of those techniques to better understand student needs, placing the individual at the very centre of an approach catering specifically to their own educational requirements. He continued to work to incorporate AI into educational technologies in a meaningful way, to learn more about the user and their relationship with the domain and other learners, and to directly assist learning in a range of practical situations. By the end of his career, Jim was bringing insights from research and putting into practice some exceptional methods of supporting teaching and learning for a range of stakeholders across the University of Saskatchewan. His legacy to the University is clear, as stated on their Teaching and Learning web pagesFootnote 9: “In the last few years, Jim found new ways to lift up the University of Saskatchewan as a leader in teaching and learning through his work as our Senior Strategist in Learning Analytics. Jim’s work in the areas of big data and early alert is outstanding and leaves a path for us to continue.”