Introduction

Over the past decade, higher education institutions have accumulated large amounts of educational data due to students’ interaction with online learning environments and Learning Management Systems (LMS) (Bodily & Verbert, 2017; Jivet et al., 2018). Within educational data, Ifenthaler and Yau (2020) distinguish three data profiles: the student profile (e.g., demographics, student attitudes), the learning profile (e.g., student performance within the LMS), and the curriculum profile (e.g., graduation competencies, course learning outcomes, and course sequences within a study plan). The vast amount of different types of data opens up the opportunity to use a wide range of Learning Analytics (LA) tools to transform these data into insights that can benefit a wide variety of higher education stakeholders (Baker, 2015; Romero & Ventura, 2020). By higher education stakeholders, we refer not only to teaching staff and students, but also to institutional leadership and managers as defined by Branson et al. (2016), including deans, department chairs, and academic planning staff.

In higher education, one of the processes that could benefit from the incorporation of LA tools is continuous curriculum improvement. According to Hilliger, Aguirre, et al. (2020) and Hilliger, Celis, et al. (2020), this process consists of systematically assessing student outcomes in specific courses of an academic program in order to evaluate the effectiveness of curriculum practices in terms of learning results. Over the past decade, most program chairs and teaching staff members have lacked analytical tools for monitoring the effectiveness of curriculum elements, such as content sequences, instructional resources, and learning activities (Bouwma-Gearhart & Hora, 2016; Gottipati & Shankararaman, 2017). The need for tools to inform curriculum decision-making became more urgent during the outbreak of the COVID-19 pandemic, considering that two-thirds of higher education institutions rapidly shifted from traditional education to digital learning (IAU, 2020) without having necessarily discussed and studied the effectiveness of curriculum changes (Crawford et al., 2020). Therefore, more research is required to understand how the incorporation of LA tools can help practitioners reflect in a timely manner on which curriculum elements best support student learning and competency development (Brown et al., 2018; Sclater, 2018).

Along these lines, Curriculum Analytics (CA) emerged as a subfield of LA, narrowing the collection and analysis of educational data to specifically support curriculum decision-making and program quality improvement (Greer et al., 2016). By CA, we mean the collection, analysis, and visualization of program- and course-level data, such as program structure and course grading, aiming to inform curriculum renewal strategies at a program level (Hilliger, Aguirre, et al., 2020; Hilliger, Celis, et al., 2020). So far, some promising tools have been developed to provide managerial and teaching staff with program-level information, such as graphical representations of course sequence patterns (Brown et al., 2018; Heileman et al., 2017). Concerning competency attainment, recent studies have focused their efforts on mapping skill development across curricula, analyzing course descriptions in terms of skill taxonomies and progression levels (Gottipati & Shankararaman, 2017; Kitto et al., 2020). Still, little is known about how these tools support actual processes for continuous curriculum improvement in existing university settings (Brown et al., 2018).

To advance this subfield of LA, this paper aims to understand how a CA tool could support teaching staff and managers in reflecting on students’ academic progress and competency attainment, driving curriculum improvement at a course and program level. To meet this research objective, a design-based research approach was adopted to design and implement a CA tool in different university settings. Specifically, we used the phases of the Integrative Learning Design Framework (Bannan-Ritland, 2003), aiming to evaluate the tool’s perceived usability and usefulness in two consecutive building-testing cycles (Rees Lewis et al., 2020). The first cycle consisted of a design phase and an instrumental case study to evaluate its use in supporting 5 managers and 124 teaching staff members in a 3-year continuous improvement process at one Latin American university. The second cycle consisted of a redesign phase and a field study based on two workshops with 16 managers and 9 teaching staff members in two Latin American universities (including the university involved in the first cycle). Based on the findings of both cycles, a broad evaluation was conducted to capture lessons learned, extrapolating implications for future research and practice.

Background

In 2016, Greer, Molinaro, Ochoa and McKay proposed the first workshop about LA for curriculum improvement to the program committee of the sixth international conference on Learning Analytics and Knowledge (LAK) (Greer et al., 2016). This workshop proposal was accepted, and as a result, seven papers were presented about analytical tools and metrics to support curriculum decision-making. Along with describing relevant indicators and tool features, these papers also discussed future work to create approaches to engage managers and teaching staff in using data to improve student learning at a program-level (Greer et al., 2016). This is how CA emerged as a subfield of LA, narrowing the collection, analysis, and visualization of different types of educational data to inform curriculum decision-making (Hilliger, Aguirre, et al., 2020; Hilliger, Celis, et al., 2020). This implies integrating different data sources to drive improvements in both course instructional design and academic program delivery, aiming to increase competency attainment in the long term (Greer et al., 2016).

Since then, the number of people who are starting and returning to higher education has increased, as well as the need for monitoring and verifying competency attainment among a larger number of students (Kitto et al., 2020). To meet this need, recent studies suggest implementing processes for continuous curriculum improvement, aiming to understand how competencies are developed at a program level (Dicker et al., 2019; Riad Shams & Belyaeva, 2019). It is expected that continuous curriculum improvement would not only help teaching staff to introduce relevant teaching practices and assessment methods into current learning environments (Dicker et al., 2019; Riad Shams & Belyaeva, 2019), but also clarify to employers how students achieve competencies throughout an academic degree (Kitto et al., 2020). So far, considerable efforts have been devoted to verifying competency development and attainment, including testing students through high-stakes exams, or employing a workforce to manually map competencies across courses that are part of a study plan (Kitto et al., 2020). However, information regarding graduation competencies and their attainment is rarely readily available, so staff perceptions and preferences are still the main sources of information to be used for the revision of higher education curricula (Kitto et al., 2020).

Considering that higher education programs require constant updating and revision, further CA tools are needed for different purposes. In order to map competency development at a program level, prior work has proposed text analysis techniques to map curriculum documentation (e.g., course syllabi) onto educational constructs (e.g., the Bloom taxonomy) (Gottipati & Shankararaman, 2017; Kitto et al., 2020). With respect to course sequences, studies have analyzed the impact of students’ co-enrolment in different courses in order to build recommendation systems that suggest a path that allows students to complete a study plan successfully (Brown et al., 2018; Heileman et al., 2017). Regarding competency attainment, researchers and vendors have proposed tools targeting different stakeholders, providing them with different types of indicators. For example, Chou et al. (2015) designed a tool for students, providing them with radar charts about their competency attainment in terms of grades, credit hours, and peer performance, while vendors have developed web-based applications, such as eLumen (https://elumenconnect.com/), to plan the assessment of competencies at a course level.

Despite the efforts already made, research on CA is still at an early stage. So far, only small-scale evidence is available regarding the effectiveness of analytical tools for supporting student success and continuous curriculum improvement (Ifenthaler & Yau, 2020). Consequently, managers and teaching staff have not been widely exposed to the use of CA tools, so they do not necessarily understand the capabilities and limitations of using them to reflect on students’ academic progress and competency attainment (Greer et al., 2016). In this context, it is still an open question whether this type of tool contributes to student learning and program quality from the perspective of different higher education stakeholders (Brown et al., 2018). Given the need for further studies, researchers agree that more robust design-based research is needed to develop and evaluate the use of CA tools at a program level, aiming to understand how these tools encourage evidence-based practices for continuous curriculum improvement (Greer et al., 2016).

Methods

This study addressed the following research question: How can a CA tool support continuous curriculum improvement in higher education settings from the perspective of the stakeholders therein (e.g., managers and teaching staff members)? To answer this research question, we followed a design-based research (DBR) approach. According to Barab (2014), design-based research consists of a series of interventions in which different research methods are used to capture lessons learned, aiming to derive a theory or a tool in a real-world setting. From this DBR perspective, we applied the Integrative Learning Design (ILD) Framework developed by Bannan-Ritland (2003). This framework has already been used in prior studies to evaluate mobile technologies and LA tools (Pérez-Álvarez et al., 2018; Pérez-Sanagustín et al., 2016). This framework is organized in four phases: (1) informed exploration, which studies the needs of intended users for a particular tool or intervention to support their learning goals; (2) enactment, which consists of the design of a technological tool to collect feedback from unsolicited users; (3) evaluation of local impact, which aims to evaluate the impact of the intervention from the perspective of the intended users at a local level for an extended period of time; and (4) evaluation of broader impact, which evaluates the consequences of adoption of the technological intervention to a wider audience.

In order to ensure that we had at least two iterations to capture lessons learned, we organized the four phases suggested by the ILD framework into a two-cycle structure (see Fig. 1), with each cycle following a building-testing logic (Rees Lewis et al., 2020). First, we conducted an informed exploration phase using the persona technique. This phase led to a first cycle, in which a first version of the CA tool was developed jointly by a Latin American university and a vendor, leading to the local evaluation of its implementation in the same university involved in its development. Then, a second cycle was implemented to redesign the CA tool developed during the first cycle and to evaluate its perceived usability and usefulness in two contrasting university settings (including the university involved in the first cycle). Finally, a broad evaluation phase was conducted by integrating the findings from both cycles. For this integration, we followed the approach proposed by O’Cathain et al. (2010), aiming to discuss the implications of using the tool to support continuous curriculum improvement in different higher education institutions. The following sections describe each of the phases and cycles and present their main results.

Fig. 1
figure 1

Study design structure based on the Integrative Learning Design Framework

Informed exploration

This first phase consisted of identifying users’ needs for a CA tool by using the persona technique. This technique was introduced into software development by Alan Cooper in the early 1980s, aiming to create user archetypes based on shared goals and behavioral patterns (Calabria, 2004). By identifying user archetypes and their needs, designers seek to provide better user experience, aiming to ensure adoption of the designed tools (Calabria, 2004). Thus, we decided to use this technique during the informed exploration phase, so archetypes of users could inform developers’ choices in further tool development phases.

To identify personas and their needs for a CA tool, we used a convenience sampling method to collect qualitative information from different higher education stakeholders. First, we conducted three group interviews in three different universities with eight managers (program/assessment coordinators and associate directors for curriculum and instruction or related areas): two in Chile, three in México, and three in Panamá. Then, we applied a paper-based survey to 25 teaching staff members and 51 students affiliated with a large university in Chile. Both the interviews and the survey included the following questions: (1) How would you use a CA tool? (2) What kind of information and functionalities would you expect from this type of tool?

Despite the limitations of using convenience samples, the reliability and validity of the persona technique rely mainly on how well end users are represented by the data collected from a real-world setting, rather than on the opinion of the individual writing the personas (Calabria, 2004). Along these lines, two researchers analyzed the qualitative information collected from interviews and surveys by means of inductive coding (one researcher has a background in design and the other has experience in educational technologies), so the personas were created according to the needs for a CA tool that emerged from the qualitative data. By categorizing the main needs, we found that managers and teaching staff emerged as primary personas for a CA tool (see Fig. 2), whereas current and future students emerged as secondary personas. This means that the design of a CA tool should prioritize interfaces for department/program chairs and assessment/academic planning staff as managers, and faculty and part-time instructors as teaching staff, considering that their needs would not be met if the tool were designed for someone else (Calabria, 2004). However, by adding a few specific features to personalize it for learners, the needs of current and future students could also be met for course and program enrolment, respectively.

Fig. 2
figure 2

Primary Intended Users of a CA tool

First cycle (Cycle 1)

First cycle: enactment

This phase consisted of developing a first version of a CA web-based tool based on the results of the informed exploration phase. This tool was developed jointly with uPlanner, a Latin American vendor that offers technological solutions and research services to higher education institutions. The first version was based on a prior CA tool of the University of Sydney (Gluga et al., 2010). Its functionalities allow managers and teaching staff members to collect and store program information (e.g., graduation competencies matrix and curriculum maps) and course-level evidence about student competency attainment (e.g., course syllabi and grading). To gather all this evidence and make it readily available for curriculum discussions and staff reflection, this tool is organized in four entities: (1) Administration, (2) Competencies, (3) Study Plans (Programs), and (4) Course Syllabus. Figure 3 presents these four entities and the main tasks undertaken by intended users.

Fig. 3
figure 3

CA Tool and Main Functionalities

In addition to the functionalities described in Fig. 3, this tool generates an automated report on competency attainment at a course level based on students’ partial grades in specific assessment methods (see Fig. 4), aiming to meet managers’ and teaching staff’s needs for information to ensure the quality of educational programs and the courses therein. The generation of this report requires teaching staff to undertake the following tasks at the beginning of the academic period:

  • Indicate the relationship between competencies at a program-level and learning outcomes at a course-level.

  • Choose one or more assessment methods to measure the learning results associated with the program-level competency.

Fig. 4
figure 4

Sample screenshot of the automated report regarding percentages of competency attainment at a course level (see details at the following link: https://youtu.be/6o9qilI1Gjg)

Once the teaching staff has chosen an assessment method, the CA tool imports the corresponding grades from the institutional LMS through an Extract, Transform, and Load (ETL) process. The CA tool then transforms these grades into a percentage of competency attainment according to the following equation:

$$\%=\frac{\text{Assessment Method Grade}\times 100}{\text{Maximum Grade}}$$
(1)
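
To make this transformation concrete, the snippet below sketches how grades extracted from an LMS could be converted into attainment percentages following Eq. (1) and aggregated per competency for one course. It is a minimal illustration only, assuming a simplified record layout; the field names, function names, and aggregation by average are assumptions rather than the tool’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AssessmentRecord:
    # Hypothetical fields; the actual ETL schema of the CA tool is not documented here.
    student_id: str
    course_id: str
    competency: str    # program-level competency linked to the chosen assessment method
    grade: float       # grade obtained in the assessment method
    max_grade: float   # maximum possible grade for that assessment method

def attainment_percentage(record: AssessmentRecord) -> float:
    """Apply Eq. (1): percentage = (assessment method grade x 100) / maximum grade."""
    return record.grade * 100 / record.max_grade

def course_attainment_report(records: list[AssessmentRecord]) -> dict[str, float]:
    """Average attainment percentage per competency for one course (illustrative aggregation)."""
    totals: dict[str, list[float]] = {}
    for record in records:
        totals.setdefault(record.competency, []).append(attainment_percentage(record))
    return {competency: sum(values) / len(values) for competency, values in totals.items()}

# Example: two students assessed on the same competency through a graded assessment method.
records = [
    AssessmentRecord("s1", "ENG101", "problem-solving", grade=5.5, max_grade=7.0),
    AssessmentRecord("s2", "ENG101", "problem-solving", grade=4.2, max_grade=7.0),
]
print(course_attainment_report(records))  # ≈ {'problem-solving': 69.29}
```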

First cycle: local evaluation

First cycle evaluation: study design and objective

In this phase, the objective was to evaluate how the use of the first version of the CA tool could support continuous curriculum improvement in a real-world university setting. To meet this objective, we conducted an instrumental case study. According to Zelkowitz and Wallace (1998), an instrumental case study is an observational method that allows researchers to collect data about specific attributes of a tool as the project develops. Under a building-testing logic (Rees Lewis et al., 2020), we decided to conduct this type of study to evaluate the perceived usefulness and usability of the first version of the CA tool in at least one university setting. In this case study, we specifically evaluated whether the CA tool supported a continuous improvement process that was implemented between the first semester of 2015 and the second semester of 2017 in a Latin American university (Hilliger et al., 2019).

First cycle evaluation: study context

This case study took place at an engineering school in a Latin American university (U1). In 2014, this school had to renew the accreditation conferred by the Accreditation Board for Engineering and Technology (ABET) to five engineering programs: (1) Civil Engineering, (2) Electrical Engineering, (3) Software Engineering, (4) Mechanical Engineering, and (5) Chemical Engineering. Considering that continuous improvement has been a criterion of ABET accreditation since the 2000s (Lattuca et al., 2006), this school decided to implement a continuous improvement process between 2015 and 2017. This process was organized into six semesters; every semester, one or two of the 11 student outcomes proposed by ABET Criterion 3 were assessed at a course level (http://bit.ly/2SeVzRj), collecting evidence in at least two courses per program (see Fig. 5). At the beginning of each semester, teaching staff had to develop an assessment plan reporting which assessment methods would be used to account for competency attainment at a course level (http://bit.ly/2ACZUXF). Once the semester finished, they had to report documentary evidence of competency assessment in their courses, such as competency assessment results and samples of the assessment methods used to measure competency attainment. The documentary evidence and the percentages of competency attainment were presented at curriculum discussions held at the end of each semester. In these meetings, program chairs and teaching staff of the same engineering program discussed whether improvement actions were needed at a course or program level.

Fig. 5
figure 5

Semester tasks that were part of the continuous improvement process implemented at U1

Between the first semester of 2015 and the first semester of 2016, the documentary evidence was uploaded to a Dropbox folder, and the percentages of competency attainment were calculated in spreadsheets. From the second semester of 2016 onwards, the CA tool was incorporated into the continuous improvement process (see Fig. 6), in order to facilitate the following activities:

  • Filling in a course description form to broadly describe the teaching and assessment methods.

  • Indicating the relationship between program core competencies and course learning outcomes.

  • Listing performance indicators for program-level competencies that could be assessed at a course level.

  • Aligning performance indicators with graded assessment methods at a course level.

  • Generating automated reports of competency attainment (this functionality is integrated with the institutional LMS to automatically capture the students’ grades).

  • Uploading documentary evidence as attachments, including course syllabi, assessment plans, competency assessment results, and samples of assessment methods.

Fig. 6
figure 6

Evaluation of the CA tool to support a 3-year continuous improvement process. Light grey dots indicate the semesters where the teaching staff tasks were not supported by the CA tool, and the dark grey dots the periods where the tool was implemented as part of the process

First cycle evaluation: participants and data gathering techniques

Between 2015 and 2017, five managers and 124 teaching staff members participated in the continuous improvement process implemented at U1. The five managers were program chairs, and the 124 teaching staff members were 61 teachers (44 faculty members and 17 part-time instructors) and 63 teaching assistants. The managers were responsible for organizing program meetings for curriculum discussions, the teachers were responsible for planning competency assessment and selecting documentary evidence of competency attainment, and the teaching assistants were responsible for helping teachers to upload evidence in the CA tool once implemented (see Table 1).

Table 1 Managers and teaching staff involved throughout the continuous improvement process implemented at U1 (before and after the CA tool was implemented)

To develop the instrumental case study, data was collected and analyzed in three steps. The first step consisted of analyzing how the CA tool was used to facilitate the collection of documentary evidence, and how this evidence accounted for teachers’ reflections regarding the attainment of student competencies. To compare the number and type of documentary evidence generated before and after the CA tool was implemented, three researchers used a coding scheme to classify the evidence reported by each teaching staff member for each course section involved in the continuous improvement process. This scheme was developed using a bottom-up coding approach, and each category was defined by examining the files uploaded to Dropbox and the CA tool. Six categories emerged from this approach: (1) reported assessment plans, (2) reported a sample of assessment methods, (3) reported competency attainment results (based on grades), (4) reported course syllabus, (5) included a course description, and (6) reported the percentages of competency attainment. The researchers assigned a score from 0 to 1 for each category to account for the type of documentary evidence reported every semester (see coding scheme in Appendix A). Each course section was then assigned a total score ranging from 0 to 6, in which a score of 0 indicates a minimum amount and variety of evidence and a score of 6 indicates a maximum amount and variety of evidence.
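
As an illustration of this scoring scheme, the sketch below computes a per-course-section evidence score by summing per-category indicators, assuming binary (0/1) presence scores. The category identifiers and data layout are assumptions for the example rather than the exact coding instrument in Appendix A.

```python
# Hypothetical identifiers for the six evidence categories described above.
EVIDENCE_CATEGORIES = [
    "assessment_plan",
    "sample_of_assessment_methods",
    "competency_attainment_results",
    "course_syllabus",
    "course_description",
    "attainment_percentages",
]

def evidence_score(reported: set[str]) -> int:
    """Sum of per-category indicators (0 or 1 each); total ranges from 0 (minimum) to 6 (maximum)."""
    return sum(1 for category in EVIDENCE_CATEGORIES if category in reported)

# Example: a course section that reported a syllabus, an assessment plan, and attainment percentages.
print(evidence_score({"course_syllabus", "assessment_plan", "attainment_percentages"}))  # 3
```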

The second step consisted of analyzing the 27 meeting minutes that documented the reflections of teaching staff members during curriculum discussions (http://bit.ly/2DOS8Nu). Specifically, these meeting minutes were analyzed according to a coding scheme developed by the three researchers (see coding scheme in Appendix B). The authors classified the notes in the meeting minutes into three categories: (1) reflections on teaching, (2) reflections on assessment, and (3) reflections on curriculum. Although Kappa coefficients were not estimated, two rounds of cross-checking were made to ensure coding reliability.

Finally, the third step consisted of measuring the perceived usability and usefulness of the CA tool from the viewpoint of its users. For this purpose, we developed a paper-based questionnaire based on the prior work of Ali et al. (2013), considering that their objective was also to explore teaching staff’s perspectives in a real-life context. Our questionnaire consisted of a closed-ended and an open-ended section (see questionnaire in Appendix C). The closed-ended section used a 5-point Likert scale to determine the level of staff agreement with different items related to perceived usefulness and perceived ease of use, while the open-ended section included questions to explore usability and ease-of-use implications from an exploratory perspective. For the 5-point Likert scale, we estimated the percentage of respondents who marked a high level of agreement with each of the items. By high-level agreement, we considered respondent scores equal to or higher than 4, considering that the scale ranged from 1 (strongly disagree) to 5 (strongly agree).

First cycle evaluation: results

The results of the document analysis show that the number and variety of evidence items reported per course section increased from two to five after the CA tool was implemented (see Fig. 7). In most cases, the three additional items included course syllabi, course descriptions, and reports of student competency attainment. This effort to collect a greater number and variety of evidence was not driven by greater administrative pressure, since U1 managers had already submitted evidence for accreditation in the first semester of 2016. Thus, all subsequent work was done solely with the motivation of sustaining the effort to continuously improve the curriculum.

Fig. 7
figure 7

Average number of evidence items submitted per course section each semester

However, the results of the analysis of meeting minutes show that managers and teaching staff exhibited varying levels of reflection during the curriculum discussions held throughout the continuous improvement process (see Fig. 8), without revealing a noticeable pattern before and after the CA tool implementation. In most cases, these reflections alluded to the need to revise competency matrices and curriculum maps in order to improve the alignment between program-level competencies and course learning outcomes. Throughout the process, teaching staff demonstrated further interest in making assessment methods more meaningful, along with finding new ways to deliver feedback. Still, it is not clear whether these reflections could be directly attributed to their use of the tool or to their involvement in the continuous improvement process.

Fig. 8
figure 8

Results of coding meeting minutes throughout the continuous improvement process (see coding scheme in Appendix B)

With respect to the use of the CA tool to support staff reflection, respondents of the paper-based questionnaire indicated that the tool views had too many tabs and fields, so the automated report of percentages of competency attainment was hard to find. The responses to the closed-ended questions revealed that only 56% of respondents considered that the CA tool allowed them to obtain more information than other tools (see Fig. 9). According to respondents’ comments, the CA tool facilitated the use of evidence to account for the implementation of a competency-based curriculum throughout the ABET accreditation process, but this information was not necessarily used to inform course redesign and assurance of learning. Still, the results of the questionnaire show that 92% of respondents agreed with the item ‘In general the CA tool seems useful for curriculum management’, and they mentioned that this tool could potentially be used to provide staff and students with information about course methods and their alignment with the competencies of the graduate profile.

Fig. 9
figure 9

Percentage of teaching assistants (N = 25) who agreed with the questionnaire items related to the perceived usability and usefulness of the CA tool (see questionnaire in Appendix C)

First cycle evaluation: lessons learned and limitations

In this first cycle, we showed that a CA tool could be easily incorporated into a continuous improvement process implemented in a real-world university setting. According to the results of this study, the first version of the CA tool was perceived to be useful for curriculum management, allowing teaching staff to not only plan competency assessment, but also collect evidence of competency attainment. So far, vendors have offered web-based applications to plan the assessment of competencies at a course level (such as eLumen), while researchers have separately proposed visualizations of competency attainment in terms of course grading (Chou et al., 2015). In this first cycle, both managers and teaching staff were exposed to the capabilities of a CA tool that integrates both features, and findings show that this tool helped teaching staff to collect a greater number and variety of documentary evidence (see Fig. 7). Considering that faculty often lack evidence to reflect on their practice (Bouwma-Gearhart & Hora, 2016), this CA tool has the potential to provide staff with timely information regarding student competency attainment, so they could analyze students’ progress against the graduate profile.

However, the results of this first cycle also show that there are usability and functionality issues that prevent teaching staff from using the tool to inform course design and from further reflecting on students’ performance in terms of competency attainment. In that sense, the larger amount and variety of information collected through the CA tool was not necessarily readily available or actionable for teaching staff members, so it was not necessarily used for curriculum decision-making (Pistilli & Heileman, 2017). Although the analysis of meeting minutes indicated that managers and teaching staff reflected on actions that could eventually lead to better student learning and program quality, such as improving the alignment between program-level competencies and course learning outcomes or making assessment methods more meaningful, the questionnaire results did not provide further evidence to connect these reflections with the incorporation of the CA tool into the continuous improvement process. Along these lines, the lessons learned from the first cycle revealed the need to improve the tool design (reducing tabs and text) and to include features to easily monitor competency development and attainment at different levels (student level, course level, and program level).

Additionally, we detected some limitations that could have influenced the results of this first cycle. First, data was collected in one Latin American university, so it might not be representative of all educational systems. Although this provided a convenient context for exploring the incorporation of a tool into a continuous improvement process, the second cycle should include at least one additional institution to contrast findings. Second, the questionnaire results only represented a small sample of teaching staff members who interacted with the CA tool, without incorporating the perspectives of other higher education stakeholders, such as program chairs and coordinators. In order to address these limitations, we decided to explore the perspectives of managers and teaching staff in two contrasting settings in the second cycle, choosing more comprehensive frameworks to collect information on how these tools could be used in different institutions.

Second cycle (Cycle 2)

Second cycle: enactment

In this phase, the CA tool was redesigned based on the lessons learned from the first cycle. Figure 10 presents the entities and functionalities of the resulting tool, including: (1) Competencies, (2) Study Plan, (3) Course Syllabus, and (4) Reports (personalized for managers and teaching staff). In this new version, we improved the ‘look and feel’, reducing the number of tabs and the amount of text. In addition, we included a ‘drag and drop’ functionality to align program competencies with the courses that are part of a study plan. As a result, the competencies assigned to courses are easily available to teaching staff in the syllabus module, so they can reflect on their alignment with course learning outcomes. Finally, we included a reports module. This module not only shows the percentages of attainment at a course level (like the previous version), but also reports on competency attainment at a program level and at a student level. More details about the functionalities of the new version are presented in Fig. 10 and at the following link: https://youtu.be/Vx_BuYhmYac.

Fig. 10
figure 10

Redesign of the CA Tool Functionalities (see details at: https://youtu.be/Vx_BuYhmYac)

Second cycle: local evaluation

Second cycle evaluation: study design and objective

Considering that the lessons learned from the first cycle revealed the need to evaluate the tool in contrasting higher education settings, we decided to conduct a field study in two different universities during the second cycle. According to Zelkowitz and Wallace (1998), field studies are useful for determining the effectiveness of a tool for undertaking certain actions, so they are often used to determine the effectiveness of a new tool from the perspective of a subject group. This implies that an outside group monitors the actions undertaken by a subject group in the evaluated tool, emulating the actual usage given to the tool in a real-world setting (Zelkowitz, 2009; Zelkowitz & Wallace, 1998). In the second cycle, our objective was to evaluate the effectiveness of the second version of the CA tool in supporting the reflection of managers and teaching staff regarding students’ competency attainment and program quality. To meet this objective, three researchers worked for a year on the design of a protocol to evaluate the perceived usability and usefulness of this second version. Then, they organized and conducted workshops over a period of six months, emulating curriculum discussions that could be held at a program level with a convenience sample of managers and teaching staff members from two contrasting higher education institutions (including the higher education institution involved in the first cycle).

Second cycle evaluation: study context

In order to emulate real curriculum discussions, the researchers in charge of the field study developed a protocol to collect data during workshops held with managers and teaching staff at each of the institutions involved in this study. These workshops were carried out in person and lasted approximately an hour and a half, establishing a common methodological framework for all subject groups involved in this study. This framework consisted of (see Fig. 11): (1) an overview of the new version of the CA tool, (2) a list of predefined tasks to evaluate the new functionalities of this tool, (3) the application of an online questionnaire with open- and closed-ended questions, and (4) a guided curriculum discussion to collect qualitative information on the participants’ perceptions regarding the usefulness of the new functionalities of the CA tool.

Fig. 11
figure 11

Activities Carried Out during the Workshops to Evaluate the Redesign of the CA Tool

Second cycle evaluation: participants and data gathering techniques

Aiming to capture lessons learned from contrasting settings, the participants of the field study were affiliated with two Latin American universities. These two universities differed in terms of student body (selective versus broad access), ownership and administration (private versus public), and staff’s level of exposure to the CA tool (experienced versus novice). In workshop 1, we involved 5 managers and 5 teaching staff members affiliated with U1. In workshop 2, we involved 11 managers and 4 teaching staff members affiliated with U2. Table 2 summarizes the information about the workshop participants, in addition to presenting demographic statistics of the managers and teaching staff members involved in this study.

Table 2 Participants of workshops to evaluate the perceived usefulness of the CA tool

Considering that the first cycle revealed the need for more robust workshops to evaluate the potential impact of the CA tool, we collected data by using an online questionnaire based on the Evaluation Framework for Learning Analytics (EFLA). This framework, developed by Scheffel (2017), was proposed to evaluate LA services in terms of data, awareness and reflection, and impact, providing researchers with a standardized way to compare different LA strategies. Considering that it has already been used to evaluate LA tools (Broos et al., 2018), we decided to use this framework to evaluate whether the new version of the CA tool supported reflection about students’ academic progress and competency attainment from the perspective of managers and teaching staff. As in the first cycle, our questionnaire consisted of a closed-ended and an open-ended section, but the closed-ended section consisted of a 10-point Likert scale based on EFLA (see Appendix D). Specifically, we translated the eight items of the scale for teachers into Spanish, and then we adapted this scale for managers. Regarding the EFLA score, we estimated the score for each subject group (teaching staff and managers) following the steps suggested by LACE (n.d.); a computational sketch of this scoring is provided after the list. These steps are:

  • Calculating the average value for each item based on the answers given for that item (scores between 1 and 10).

  • Calculating the average for each dimension based on the average of its items (items 1 and 2 correspond to the ‘data’ dimension, items 3–6 to the ‘awareness and reflection’ dimension, and items 7 and 8 to the ‘impact’ dimension).

  • Calculating the dimensional scores with the following formula:

    $${\text{Dimensional}}\,{\text{Scores}} = \frac{{(x - 1)}}{9} \times 100$$
    (2)
  • Calculating the overall EFLA score as the average of the three dimensional scores.
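
The snippet below sketches this scoring procedure under the assumptions stated above (eight items rated from 1 to 10, grouped into three dimensions, and rescaled with Eq. (2)). The item-to-dimension mapping, variable names, and example ratings are illustrative; the authoritative description is the EFLA documentation cited in the text.

```python
from statistics import mean

# Assumed item-to-dimension mapping for the eight-item EFLA scale (1-indexed item numbers).
DIMENSIONS = {
    "data": [1, 2],
    "awareness_and_reflection": [3, 4, 5, 6],
    "impact": [7, 8],
}

def efla_score(responses: list[dict[int, int]]) -> float:
    """Compute the overall EFLA score for one subject group.

    `responses` is a list of per-respondent answers mapping item number -> rating (1 to 10).
    """
    # Step 1: average value for each item across respondents.
    item_means = {item: mean(r[item] for r in responses) for item in range(1, 9)}
    # Steps 2 and 3: average the items of each dimension, then rescale to 0-100 via (x - 1) / 9 * 100.
    dimensional_scores = [
        (mean(item_means[i] for i in items) - 1) / 9 * 100 for items in DIMENSIONS.values()
    ]
    # Step 4: the overall EFLA score is the average of the three dimensional scores.
    return mean(dimensional_scores)

# Example with two hypothetical respondents answering all eight items.
group = [
    {1: 8, 2: 9, 3: 7, 4: 8, 5: 9, 6: 8, 7: 9, 8: 10},
    {1: 7, 2: 8, 3: 8, 4: 9, 5: 8, 6: 7, 7: 9, 8: 9},
]
print(round(efla_score(group), 1))  # overall score on the 0-100 scale
```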

With respect to the discussion held at the end of the workshops, two researchers analyzed the transcripts recorded from both workshops. These researchers then conducted an open coding process, generating initial categories of information about the perceived usefulness of the tool (Creswell, 2012). To validate these categories, the researchers contrasted them with participants’ answers to the open-ended questions in the online questionnaire. Five categories emerged from this open coding process (see coding scheme in Appendix E): (1) quality assurance of higher education programs, (2) assessment and assurance of student learning, (3) graduate skill attainment, (4) support for international accreditation, and (5) support for final accreditation. To identify the prevalent uses that managers and teaching staff would give to this new version of the CA tool, the researchers analyzed the frequency of mentions of each category from the perspective of managers and teaching staff. In addition, quotes from workshop participants were extracted to complement the analysis.

Second cycle evaluation: results

According to the results of the online questionnaire, managers and teaching staff members assessed the new version of the CA tool with EFLA scores of 76/100 and 85/100, respectively (see Table 3). Considering that an EFLA score over 70 could be considered acceptable for a first iteration (Broos et al., 2018), the new version of the CA tool outperformed the older version, particularly from the perspective of teaching staff members. Teachers rated the impact dimension with a score over 90, and the data and awareness and reflection dimensions with scores over 80. Considering the highest-scoring items of the scale, these results indicate that teaching staff perceived that the CA tool stimulated them to plan their courses more efficiently (average score of 9.22/10) and helped them to be more aware of the current learning situation of their students (average score of 8.56/10).

Table 3 Results of the questionnaire scale based on the evaluation framework for learning analytics (see questionnaire in https://bit.ly/2WIkr9o)

To complement the findings obtained from the questionnaire, Fig. 12 shows the results of analyzing the workshop participants’ discussions about the perceived usefulness of the new version of the CA tool. According to these results, seven managers perceived the tool as useful for quality assurance of higher education programs, while six teachers and six managers perceived it as useful for the assessment and assurance of student learning.

Fig. 12
figure 12

Frequency of Mentions regarding the Discussions about the Perceived Usefulness of the CA Tool (see related coding scheme: https://bit.ly/2LDvHxk)

During these discussions, both managers and teaching staff indicated that they valued the information of the reports generated by the new version of the CA tool:

  • (The CA tool) allows managing information in high detail at different aggregate levels, for example, monitoring of course-level data and student academic progress from the perspective of curriculum administration. (Teaching staff, U1)

  • (The CA tool) allows monitoring student academic progress, in order to be able to provide them with guidance regarding their professional future, in addition to promoting reflection regarding the pedagogical practices. (Manager, U2)

Second cycle evaluation: lessons learned and limitations

According to the results of this second cycle, managers and teaching staff perceived that the new version of the CA tool was useful for assuring program quality and competency attainment in terms of student learning. Not only did participants assess this tool with EFLA scores over 70, which prior studies have already considered acceptable (Broos et al., 2018), but they also mentioned that it was useful for monitoring student academic progress at different levels. By having different types of reports about competency attainment, managers and teaching staff members could formulate different types of improvement actions, including new teaching practices or student academic advising to help students become career ready. In addition, the benefits of the CA tool are aligned with the needs of the primary intended users identified during the informed exploration phase, which accounts for the success of having used a design-based research approach to involve users throughout two building-testing cycles (Rees Lewis et al., 2020).

The results of this second cycle also show that the new version of the CA tool was considered friendly and easy to use by the subject group that participated in the field study. This group included teachers and managers from different age groups, who had not necessarily been exposed to this new version of the CA tool. This confirms the importance of not only involving managers and teaching staff in improving the usability of analytical tools, but also making them aware of the functionalities and limitations of these types of tools (Greer et al., 2016). With respect to new functionalities, both managers and teaching staff mentioned that the new reports entity was the one they valued most, because it provided them with information that is not easily accessible to them, or to stakeholders in other higher education settings (Kitto et al., 2020).

Although these findings are promising, there were some limitations during workshop implementation. Initially, the researchers planned to implement workshops in four Latin American universities (in addition to the two universities that participated in this study). However, the Chilean social unrest interrupted the field study (Barlett, 2018), and the subsequent COVID-19 pandemic interrupted academic activities and prevented travelling to deliver the workshops face-to-face in other university settings.

Broad evaluation

By triangulating the lessons learned from each phase of the study, we identified several implications to promote further research on CA. First, the informed exploration phase and the subsequent cycles indicate that teaching staff and managers are the key stakeholders to be considered when designing a CA tool, and that students appear as secondary users. This does not mean diverting attention from students’ learning processes, but rather providing managers and staff with relevant indicators to inform their decision-making (Ochoa, 2016). By providing staff with information about students’ competency attainment in different course learning experiences, managers and teaching staff would be more inclined to introduce curriculum elements that are more meaningful to learners (Dicker et al., 2019; Riad Shams & Belyaeva, 2019).

Second, the lessons learned from the two-cycle structure confirm the importance of involving users in a building-testing logic (Rees Lewis et al., 2020), allowing researchers to identify what actionable information is crucial for meeting user needs. In this study, the first cycle revealed that CA tools could be useful for collecting a greater amount and wider variety of documentary evidence, but it was not clear how this information could be transformed into insights regarding competency attainment among students. The second cycle then showed that curriculum discussions between managers and teaching staff members require the analysis of students’ academic progress at different levels (student level, course level, and program level). Considering that data integration is often a barrier for scaling up LA initiatives (Klein et al., 2019), further work is required to ensure that higher education institutions have the technology infrastructure and the data-related processes to incorporate CA tools into actual curriculum discussions.

Currently, the CA tool is being used by 20 Latin American universities. During 2021, the vendor involved in its design conducted an online questionnaire with 947 users, including managers and teaching staff from three Latin American universities, different from those included in the first and second cycles. The questionnaire included the question: How likely would it be for you to recommend this tool to a friend or colleague? Users rated their answers on a scale from 0 to 10, and their responses were used to estimate the likelihood of referral according to the Net Promoter Score methodology (Reichheld, 2003). According to these answers, 40% of users were labeled as promoters (users who gave ratings of nine or ten), 25% were labeled as passively satisfied (users who answered seven or eight), and 35% were labeled as detractors (users who gave a score from zero to six). The questionnaire also contained an open-ended question in which users could comment on any feature of the CA tool. Some of the comments emphasized the value of the tool in facilitating the planning of course syllabi and the availability of different types of automated reports. However, a third of the comments referred to usability problems that could not be solved in the second cycle. Therefore, a third and final lesson learned from this analysis is that the improvement of the CA user experience is a continuous effort, in the same way that the improvement of program quality should be a sustained process.
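
For reference, the Net Promoter Score implied by the figures reported above can be obtained by subtracting the percentage of detractors from the percentage of promoters, following Reichheld’s (2003) definition; the resulting value is our own arithmetic rather than a figure reported by the vendor:

$$\mathrm{NPS}=\%\ \mathrm{promoters}-\%\ \mathrm{detractors}=40-35=5$$

This corresponds to a score of 5 on the NPS range of -100 to 100, which is modestly positive and consistent with the usability concerns raised in the open-ended comments.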

Discussion and conclusions

This paper has presented the results of a DBR approach to understand how a curriculum analytics tool could support continuous curriculum improvement in higher education settings from the perspective of managers and teaching staff members. Findings reveal the importance of providing managers and teaching staff members with information about students’ academic progress at different levels, including reports of competency attainment at a program level and at an individual level. By having this type of information readily available for curriculum discussions, these stakeholders can revise the alignment between program competencies and course learning outcomes, as well as reflect on the effectiveness of current teaching practices and assessment methods at a course level. Although this does not impact student performance and program quality in the short term, evidence-based improvement actions might have beneficial effects in the long term.

With respect to prior work, this study confirms the need to incorporate analytical techniques to revise and improve higher education curricula. So far, studies have proposed different types of LA techniques and methods to unpack curriculum elements, including text analysis of course documentation (Gottipati & Shankararaman, 2017; Kitto et al., 2020), process mining of course sequences (Brown et al., 2018; Heileman et al., 2017), and visualizations of competency attainment in terms of course credit and grading (Chou et al., 2015). However, research on CA is still at an early stage, so few studies have explored the adoption of this type of tool in existing higher education settings. Considering all the curriculum changes implemented during the outbreak of the COVID-19 pandemic, further work is required to understand the potential benefits of incorporating this type of tool into data-related processes within universities and colleges, along with exploring the potential barriers that higher education stakeholders might face in using it for everyday practice.

Still, this study expands the current understanding of the capabilities and limitations of CA tools by illustrating a novel approach. Prior work has proposed separate tools to map competencies, plan competency assessment, and visualize students’ academic progress. The CA tool proposed in this study integrates all these functionalities into one tool. For its proper functioning, it requires managers to establish the relationship between program competencies and courses, so teaching staff members can verify the alignment of course learning outcomes and assessment methods. Although this might imply more workload for these users, these tasks are already supposed to be part of their current practice. Furthermore, these tasks are currently undertaken manually (Kitto et al., 2020), without necessarily offering managers and teaching staff reports on competency attainment at different levels. In that sense, the CA tool proposed in this study represents a step forward in terms of technology convergence.

Despite these promising findings, further work is required to ensure that higher education institutions can incorporate this type of tool. One aspect that is crucial for future CA development projects is building institutional capacities to integrate data from different profiles, along with incorporating processes for continuous curriculum improvement. During and beyond COVID-19, managers and teaching staff members will have to discuss curriculum changes constantly, so more structured information will be needed to update curriculum elements and make them more meaningful for learners. Consequently, further collaboration is required around CA initiatives, involving LA researchers, vendors, and higher education practitioners.