7.1 Introduction

Emerging global market trends and technological progress have led to rapid change, as well as to job ambiguity, in the workplace. Consequently, learning that would enable workers to secure job opportunities in the twenty-first century has shifted from focusing only on traditional cognitive skills and discipline-specific knowledge to the integration of complex cognitive and social/interpersonal competencies, also referred to as twenty-first century skills, such as critical thinking, teamwork, cultural and diversity awareness, multilingualism, and use of digital technologies (Joynes et al., 2019). There continues to be a large discrepancy between what young people learn in school and the skills they need to succeed in the workforce and in today’s society.

Over the last two decades, it has been acknowledged that existing education curricula are no longer adequate drivers of success in an increasingly competitive environment (Voogt & Roblin, 2012). To address this problem, international organisations such as the United Nations Educational, Scientific and Cultural Organization (UNESCO), the Partnership for Twenty-first Century Skills (P21; Battelle for Kids, 2019), and the Assessment and Teaching of Twenty-first Century Skills project (ATC21S; Care et al., 2018) have developed initiatives to place these skills at the centre of learning (González-Salamanca et al., 2020). Furthermore, the United Nations 2030 Sustainable Development Goal 4 promotes the need for learners to obtain more comprehensive skills beyond traditional numeracy and literacy, including global citizenship skills as well as the skills needed to promote sustainable development.

In agreement with these initiatives and Sustainable Development Goal 4, SSA is adopting more holistic education systems that offer lifelong competencies for the twenty-first century, while also embracing innovation, cultural and diversity awareness, and the use of information technology in both formal and informal learning contexts. Nonetheless, there have been challenges, specifically around how to define these skills in the context of SSA, how to teach them in classrooms and schools, how to integrate them successfully into curricula, and, importantly, how to assess them (Kim & Care, 2020). One of the most critical concerns has to do with national assessment frameworks. As is also the case in many ‘Global North’ countries, national assessment frameworks in SSA primarily focus on the measurement of traditional curriculum-based learning outcomes (Siarova et al., 2017). This has created a need to adjust and expand assessment frameworks in order to identify different methods of capturing the holistic nature of lifelong competencies suitable for twenty-first century living.

There are four goals of this chapter: (1) to discuss some complexities of assessing twenty-first century skills; (2) to identify common methods used to assess these skills and discuss strengths and weaknesses of the methods; (3) to describe general findings from a review of existing assessments of twenty-first century skills in SSA; and (4) to discuss implications of the findings in the context of SSA.

7.2 The Complexities of Assessing Twenty-First Century Skills

Societal demands on education in the twenty-first century mark a fundamental shift from education in the twentieth century, when knowledge accumulation was highly valued. In today’s rapidly changing, globally interconnected world, it is not enough to memorize mathematical facts and vocabulary; instead, there is a real demand for education systems to provide learning opportunities and experiences that align with what learners will face in the real world—where they must not only have information but also know how to use it in different ways and in different situations. Unfortunately, there continues to be a substantial mismatch between the goals of some education systems—the desire to teach and learn twenty-first century skills—and the implementation of these goals (Care et al., 2019a, b).

Part of the issue is that current teaching approaches are based on offering the same curriculum to every learner at the same grade level. Linked with this approach has been the use of very similar assessment approaches, typically summative, to measure learning outcomes. However, the transversal skills that fall under the twenty-first century skills umbrella are dynamic, and how they are exhibited varies according to the situation and context. The implication is that the traditional way of collecting information to infer learner abilities such as literacy and numeracy—paper-and-pencil, multiple-choice, and short-answer questions that ask learners to define a term or provide information about a phenomenon—cannot adequately capture individual abilities in these twenty-first century competencies. Although traditional assessment methods may be able to identify what individuals know, they cannot demonstrate whether individuals can take what they know and apply it in real-life situations, where they must simultaneously know which skills to use, when to use them, and how to use them, and be able to pivot quickly if the situation changes. Therefore, the assessment of twenty-first century skills must go beyond the traditional summative-type tests or examinations that determine whether a learner has grasped (i.e., memorized or comprehended) the domain of interest (Care & Kim, 2018; Galloway et al., 2017; Kim & Care, 2020; OECD, 2017), to the assessment of complex learning processes that are typically not visible, using qualitative behavioural indicators. For example, a question like “What is 24 × 4?” has a correct response, and it is easy to identify whether a learner knows the answer or can work it out. However, open-ended questions such as “What is the best way to approach this problem and why?” are more difficult to assess using traditional methods, since there are many ways to respond. In addition to capturing the skills and processes underlying the competencies, assessments need to capture whether and how individuals apply these skills in real-life situations (Kim & Care, 2017).
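To make this scoring contrast concrete, the short sketch below shows why the closed item can be scored automatically while the open item requires a human judgment against a rubric. It is a minimal illustration only; the rubric levels are hypothetical, not drawn from any instrument cited in this chapter.

```python
# A closed item has a single correct answer and can be machine-scored.
def score_closed(response: str) -> int:
    return 1 if response.strip() == "96" else 0  # 24 x 4 = 96

# An open item ("What is the best way to approach this problem and why?")
# admits many valid responses, so a human rater assigns a rubric level.
RUBRIC = {
    0: "no strategy offered",
    1: "a strategy named but not justified",
    2: "a strategy named and justified for the situation at hand",
}

def score_open(rater_level: int) -> int:
    assert rater_level in RUBRIC, "level must be defined in the rubric"
    return rater_level

print(score_closed("96"), score_open(2))  # -> 1 2
```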

The challenge is how best to capture these behaviours. Assessments need to be sufficiently broad and dynamic to assess skills that are demonstrated across different situations and in response to different contexts, recognizing that these skills may be exhibited differently depending on the situation and the learner. At the same time, the assessments, and how data are collected using them, need to be systematic, reliable, and practical (Vista et al., 2018). The use of technology-based methods has been one way to address this challenge, by transforming qualitative measures into quantitative data that are scored in a consistent way (Sibberns, 2020). Accordingly, computer-based assessment has been recognized as adding value to traditional paper-and-pencil approaches. Technology-based assessment has the capacity to track a wide range of competencies, such as the thinking processes and interactions through which learners arrive at their responses (Sibberns, 2020). This type of assessment seems promising; however, technology-based assessment does not of itself ensure validity, and access to technology is still far from ubiquitous in SSA, which precludes access to some of these digitalized assessment innovations (Scardamalia et al., 2012). One thing is clear—the complexity of twenty-first century competencies makes them challenging to assess.

A practical challenge in assessing twenty-first century skills is that specific grade- or age-level expectations of learners have not been made explicit. Nor are there widely available curricula that outline what the learning goals are when it comes to these competencies. Several studies found that despite movement in policy, teaching and assessment of these skills remains a challenge both in SSA (Netherlands National Commission for UNESCO, 2015; Kim & Care, 2020) and elsewhere (e.g., Care et al., 2019a, b; Vista et al., 2018).

7.3 Common Methods of Assessing Twenty-First Century Skills in Sub-Saharan Africa

From a review of literature specific to SSA, five approaches to measuring twenty-first century skills are considered here. These include scenario-based assessment, questionnaires, video recording and direct observations of activities, portfolio assessment, and technology-based assessment. Each of these methods is described briefly with reference to these twenty-first century skills.

7.3.1 Scenario-Based Assessment

Scenario-based assessment consists of a description of a situation followed by items (or questions) to which a response is required. In the case of twenty-first century skills, this approach enables the task designer to ground assessment activities in the concrete details of a situation so as to stimulate the respondent’s multi-dimensional action or thinking. If scenarios are linked to episodes that are familiar to respondents, they generally find it easy to engage with the task. In a study conducted among university students, Haynes et al. (2009) identified some benefits of the approach:

Scenarios provided all students a chance to participate and give ideas without having to worry about getting a bad grade for the wrong answer. Scenario assessment can even be more beneficial if the context in which the scenarios are applied involves working in teams. (Haynes et al., 2009, p. 4)

Scenario-based assessments that are based on familiar situations remove stressors that might be associated with classroom-based knowledge, and so liberate those being tested to provide more authentic responses. In addition, the approach provides a good milieu within which individuals can express and explore their social, cognitive, and behavioural responses associated with target skills more broadly. Daily life scenarios may be perceived as permitting more nuance in responding than the correct/incorrect response mode that is frequently associated with assessment in the formal education sector.

Scenario-based assessment is, however, subject to complexity in scoring. Developing scoring rubrics is complicated: done well, rubrics can support reliable scoring of quite complex behaviours, but if they are poorly expressed or not thoroughly reviewed for clarity, assessors are prevented from accurately representing the proficiencies of respondents. The process of designing and redesigning scenarios in response to feedback from the field is also time-consuming. However, if the process is handled appropriately, it becomes possible for the assessor to distinguish a poor performer from a better one. Finally, the assessor’s judgment can be influenced by contrast bias during the scoring of individuals’ responses. This occurs when an assessor inappropriately compares a single individual with a group of respondents who performed extremely well or extremely poorly. For example, an average individual may be scored as weak if preceded by others who performed extremely well on the same task. Contrast bias can be reduced if two or more assessors score independently and their scores are then aggregated (Yeates et al., 2015).
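To illustrate that last point, the sketch below shows one simple way two assessors’ independent rubric scores might be aggregated and their consistency checked with Cohen’s kappa. The scores are invented for illustration; an operational study would normally rely on established statistical software.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' rubric levels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two assessors score ten scenario responses on a 0-3 rubric (invented data).
rater_a = [2, 3, 1, 0, 2, 3, 1, 2, 0, 3]
rater_b = [2, 3, 1, 1, 2, 3, 0, 2, 0, 3]

print(f"kappa = {cohen_kappa(rater_a, rater_b):.2f}")  # kappa = 0.73

# Averaging the assessors' scores dampens any single assessor's contrast bias.
aggregated = [(a + b) / 2 for a, b in zip(rater_a, rater_b)]
```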

7.3.2 Self-Report Questionnaires

Self-report questionnaires are commonly used to measure participants’ attitudes and opinions. Items on self-report measures typically ask respondents to rate themselves on a variety of factors using responses along a Likert scale, such as “strongly agree” to “strongly disagree”. Several popular self-report questionnaires have been developed to capture a variety of skills desired in the twenty-first century. These include The Student Experience 21 by Battelle for Kids, an online suite of tools that includes a 24-item student perception survey focused on hope, engagement, belonging, and twenty-first century learning; and the PISA 2018 assessment of Global Competence (OECD, 2017), which included a self-report component through which students evaluate their own attitudes and competencies.

Many more surveys of twenty-first century skills continue to be developed for different age groups, contexts, locations, and purposes. For example, Boyaci and Atalay (2016) developed a 39-item survey, Twenty-first Century Learning and Innovation Skills, for Turkish primary students, using a Likert scale to capture self-report responses on creativity and innovation, critical thinking and problem-solving, and cooperation and communication. Kelley et al. (2019) developed an instrument for high school students to assess their self-reported proficiencies in communication, collaboration, critical thinking, and creativity within project-based learning activities. Example items include: “I am confident in my ability to help the team solve problems and manage conflicts” (collaboration); “I am confident in my ability to elaborate and improve on ideas” (creativity).

Questionnaires and self-reports are often used to measure twenty-first century skills because they are easy to design and administer, can be completed quickly, tend to be less costly than technology-based assessments and open response formats, and can be easily analyzed quantitatively (Boyaci & Atalay, 2016). At the same time, the method is well-known and therefore familiar to many respondents, making it easier to use. Another benefit is the facility to compare multiple perspectives, such as the student versus the teacher (Kelley et al., 2019).
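As a sketch of what that quantitative analysis typically involves, the code below computes a subscale mean from Likert responses, reversing negatively worded items first. The item names, the reverse-keyed set, and the 5-point scale are assumptions made for illustration, not features of any instrument cited above.

```python
SCALE_MAX = 5                 # assume a 5-point Likert scale (1-5)
REVERSE_KEYED = {"collab_3"}  # hypothetical negatively worded item

def subscale_score(responses: dict, items: list) -> float:
    """Mean item rating for one respondent, reversing negatively keyed items."""
    adjusted = [
        SCALE_MAX + 1 - responses[i] if i in REVERSE_KEYED else responses[i]
        for i in items
    ]
    return sum(adjusted) / len(adjusted)

respondent = {"collab_1": 4, "collab_2": 5, "collab_3": 2, "collab_4": 3}
print(subscale_score(respondent, sorted(respondent)))  # (4 + 5 + 4 + 3) / 4 = 4.0
```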

However, the use of the Likert response method in combination with self-ratings has been criticized for failing to provide evidence of construct validity (Chakrabartty, 2014). Lombardi et al. (2018) pointed out that students may lack objectivity or may bias their responses due to perceived social pressure. In addition, responses to hypothetical statements may not parallel how students would respond when encountering a situation in real life (Soland et al., 2013). This is a particular issue when measuring twenty-first century skills that are less well-defined or more nuanced, and which, therefore, lack key indicators against which a respondent can evaluate themselves. These weaknesses are threats to validity and may pose serious concerns for capturing students’ abilities.

7.3.3 Videotaping and Direct Observations

Direct observation or video recording of individuals engaging in tasks is an assessment method for measuring practical or socio-emotional skills in which the target individual or group is observed in given situations to determine how they perform. A set of instructions is provided to the target individuals or groups about the task requirements, the assessment criteria, and the duration of the assessment. It may be that only the administrator evaluates, or that both the administrator and peers participate in the evaluation. A scoring guide, or set of rubrics, is used to rate the performance (Mclean & Connor, 2018; Kemp, 2001).

A study of direct observation of social work students in field practice revealed that the approach yielded more valid results when clear assessment goals were agreed upon by both assessors and students (Irwin, 2014). Goal setting is therefore viewed as an important step when preparing for direct observation assessment. Goals provide a clear guide concerning what is expected of students and may help reduce their anxiety during the process. The approach has been commended for offering greater external or contextual validity than self-rating scales, as behaviour is measured as it occurs naturally (Nock & Kurtz, 2005). Unlike many assessment approaches that focus on relatively narrow factors such as frequency, direct observation provides the opportunity to gather a wide range of contextual information about the events that occur during task performance, including the antecedents and consequences of observed behaviour (Nock & Prinstein, 2004).

Despite these benefits, a number of issues have been identified. First, direct observation is costly in terms of administration time and money, while video-based approaches also require adequate technologies and equipment (Irwin, 2014). Second, direct observation may induce respondent-assessor reactivity, owing to the obtrusiveness of the assessor as observer. In addition, perceptual bias may influence the assessor’s interpretation of behaviours, which in turn is used to infer good or poor quality (Skinner et al., 2000). Third, in the education context, there is potential for teachers to experience role conflict when they combine the role of an assessor judging students’ achievement with that of a teacher responsible for developing students’ skills. Such conflict might ensue from teachers’ natural tendency to help and guide a student, or from their own concerns that student performance might reflect on perceptions of teacher success (Kemp, 2001). To use direct observation successfully, it is necessary to train assessors and equip them with principles for managing these issues.

The method itself provides a clear opportunity for rich demonstration of complex skills, owing to its open-ended nature. For the assessment of twenty-first century skills to benefit from the method, the target skills must be comprehensively deconstructed and identified in scoring criteria, analysts must be comprehensively trained in observation, and sufficient observer-analysts must be available.

7.3.4 Portfolio Assessment

In recent years, portfolios have become a popular resource for assessing some twenty-first century skills. A portfolio is a collection of student work that is examined and scored against a set of predetermined criteria. It includes learners’ annotated evidence, such as peer or teacher ratings, research reports, products, and argumentative essays, that reflects what the learner has achieved (Davis et al., 2001).

Portfolio assessment has the potential to assess complex phenomena, such as attitudes and ethics, that are difficult to measure using traditional classroom assessment approaches. Because it is grounded in self-reflection, it has the added advantage of showing changes in behaviour over time (Bialik et al., 2016; Baki & Bargin, 2004). Portfolio assessment is widely used in secondary and tertiary education, where ‘products’ are seen as the most informative sources of competence. Predominantly, portfolios have been used in the arts and business fields to assess individuals’ competencies, because it is the product of an individual’s knowledge and skills that is being assessed, as distinct from a dynamic display of the skills themselves.

However, one of the major concerns in using portfolio assessment is the issue of validity. The portfolio of work may not include a representative sample of the information across the targeted learning outcomes (Barton & Collins, 1997). Assessors may also find it difficult to verify whether the evidence is attributable to the student or whether it is faked or plagiarized (Davis et al., 2001). In other words, the link between the individual being assessed and their product is not always verifiable.

Another challenge with portfolio assessment is that, as with open response measures, it can be difficult to score consistently, given the variety of evidence that might form part of the portfolio of work. There have been efforts to ensure that portfolios can be scored with high levels of reliability and validity, for example by developing clearly written rubrics. In other words, assessment developers focus on making the scoring process highly structured, to ensure consistency in the scoring criteria and increased agreement among raters. A further concern is assessor bias, which makes it difficult to ensure objectivity in scoring. One solution is to standardize rubrics, although excessive standardization of scoring rubrics in an effort to reduce assessors’ subjective judgment may remove the nuanced information that is critical to capturing the target skills.

7.3.5 Technology-Based Assessment

Technology-based assessment resources for twenty-first century skills are less accessible in regions that do not have high levels of technological and communications infrastructure throughout governance, societal, and education systems. Technology-based assessment takes many forms and can include the four methods included in the brief review above. Scenario-based assessments, surveys, observation methods, and portfolio designs can all be used to structure assessment tasks and their scoring criteria within technology environments.

Computer-based applications may have additional advantages over other media for assessment because they can provide automated scoring and reporting. In a study conducted by Harahap et al. (2020), a digitalized meta-inquiry assessment helped students to receive rapid feedback at each stage, including problem formulation, hypothesis development, data collection, hypothesis testing, and overall research progress. The social system component in the model allowed students to maintain continuous interaction with teachers and to cooperate with colleagues acting as peer mentors, mediators, and motivators.

Technology-based assessment has been found to be cost-effective and environmentally friendly because the entire process of inputting responses, scoring, and giving feedback is paperless. Students can also respond to questions more quickly than when working with paper-and-pencil versions (Khairil & Mokshein, 2018). Beyond facilities such as the tracking of response speed, which is valued in some assessment contexts, assessment of skills such as problem-solving has highlighted the fact that multiple paths through tasks can be tracked, enabling other characteristics of these skills to be prioritised (Ramalingam et al., 2017). This tracking facility lends itself well to the assessment of twenty-first century skills: since many of these skills are complex both in nature and in demonstration, the ability to track branching within a skills demonstration environment is a major benefit.
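The sketch below illustrates the kind of process tracking described here: each learner action in a digital task is logged with a timestamp, so that the path taken, not only the final answer, can later be scored. The event names and task content are invented for illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TaskLog:
    """Event log for one learner's pass through a digital assessment task."""
    learner_id: str
    events: list = field(default_factory=list)

    def record(self, action: str, detail: str = "") -> None:
        # Timestamping each action also yields response-speed information.
        self.events.append((time.time(), action, detail))

    def path(self) -> list:
        """The ordered sequence of actions: the branching the learner took."""
        return [action for _, action, _ in self.events]

log = TaskLog("learner_042")
log.record("open_task", "water-shortage scenario")
log.record("explore_resource", "rainfall chart")
log.record("formulate_hypothesis", "storage capacity is the constraint")
log.record("revise_hypothesis", "demand growth is the constraint")
log.record("submit_answer")

# Two learners with the same final answer may differ greatly in process;
# the logged path lets an assessor credit the process itself.
print(log.path())
```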

Some researchers have identified limitations associated with the use of computer-based assessment. Schaeffer and Palmgren (2017) postulated that ambiguous activities may fail to translate into assessment frameworks that require standardisation for data capture to take place seamlessly. There are also concerns related to inequality. Owing to economic variation within and among locations and schools, some individuals may have greater access to devices and stronger computer literacy skills than others, leading to disparities in performance. Similarly, assessment conditions and procedures, such as internet connection speeds, power fluctuations, and hardware and software specifications, tend to vary from one location to another, and these may directly affect the performance of some students (Blazer, 2010).

7.4 Overview of Assessments of Twenty-First Century Skills Used in Sub-Saharan Africa

In order to identify existing assessments of twenty-first century skills in use in SSA, the authors conducted a review. Assessments were identified by searching electronic databases, scanning reference lists, and consulting experts in the field of measurement, assessment, and evaluation. Google Scholar and platforms such as the INEE Measurement Library and ResearchGate were used to gather a pool of assessments of twenty-first century skills. The criterion applied was that the assessments had been used in the region, rather than that they were developed specifically within or for the context. Assessments were considered only if they focused on twenty-first century skills, had documented technical information, and were developed for children or adolescents. In addition, tools with a commercial cost associated with their purchase and administration were not considered.

Based on these criteria, seven assessments were identified and examined along five dimensions: assessment purpose, assessment type or form, assessment context, target population, and skills assessed. These are described below. There are many dimensions upon which reviews of assessments can be based (see Galloway et al., 2017; Lamb et al., 2019; Soland et al., 2013). Given the implementation environment for ALiVE, the five selected dimensions provide a grounded approach to reviewing tools in the context of SSA. The interdependence of decision-making about each of these dimensions is highlighted in the brief descriptions.

7.4.1 Five Dimensions

7.4.1.1 Assessment Purpose

According to Schwartz et al. (2011), there are five broad purposes of assessment: monitoring system performance, ensuring accountability, setting priorities, signaling important competencies, and supporting instructional improvement. The more general identification of assessment as serving summative or formative purposes overlays this. The purpose is formative if the assessment will be used to support teachers in setting and reviewing instructional goals and providing individualized feedback to students. The purpose is summative if the assessment is used to determine, after the fact, whether instruction has been effective, with data provided to the individual, school, or system. Some assessment tools are constructed in ways that lend themselves to both purposes. The level of detail needed in assessment results is determined by purpose, with individual formative use typically requiring more detail than large-scale benchmarking or accountability.

7.4.1.2 Assessment Type or Form

Five approaches to measuring twenty-first century skills have been described: scenario-based assessment, questionnaires/self-report, video recording and direct observation, portfolio assessment, and technology-based assessment. Some measuring tools draw on more than one approach in order to capture multiple facets of the target constructs. This strategy can elicit rich data but can also present analytic and reporting challenges; for example, mixing survey self-report formats with scenario-based formats complicates data analysis (Hoskins & Liu, 2019). The context or physical environment in which the assessment is to take place must be considered in decisions about the type or form of assessment to be used. Where a household-based assessment is involved, fewer resources will typically be available for an assessment event than in a classroom. The availability of resources influences how complex the type or form of assessment can be.

7.4.1.3 Context for Use

The context in which the assessment takes place will determine the type or form of assessment. The main context for use of educational assessments is the classroom. This provides the facility for assessing large numbers of individuals in a relatively efficient and standardised way. However, it also prioritises the use of the written word or technologies in order to enable group administration. Other contexts include out-of-school programs, which may offer a mix of administration facilities; and household-based assessment, which is typically less well-resourced. Household-based assessment has been used in citizen-led approaches to gather large-scale data for advocacy purposes. Each of these contexts offers variably beneficial environments for the assessment of skills like literacy and numeracy, or for assessment of skills such as communication or creative thinking.

7.4.1.4 Target Population

The nature of the target population establishes the possible type of assessment, its context, and the actual target constructs. The maturational level of the individuals and groups will determine what characteristics or capabilities can be assessed, and through what medium. Significant aspects will include whether language and literacy skills can be assumed; and elements such as concentration span, physical comfort, and communication modes will be considered. The capacity of the target population to consent to assessment is an important feature, particularly where the assessment will take place outside of the formal education context.

7.4.1.5 Specific Skills

The selection of skills to assess has the greatest influence on decisions about all other dimensions. Different skills require different modalities for expression, and so will determine the type or form of assessment. Since many twenty-first century skills, as discussed, are complex, it may be necessary to accept limitations in their capture. Accordingly, some aspects of a complex skill may be targeted while others are put aside because of the overwhelming difficulty of collecting evidence. Such decisions are pragmatic and, as long as they are made explicit, can be justified in reporting results.

7.4.2 Selected Assessments

7.4.2.1 Assessment of Life Skills and Values in East Africa (ALiVE; Mugo, 2024)

ALiVE is designed to measure the life skills and values of adolescents across Kenya, Tanzania, and Uganda. Tools were developed to assess the collaboration, problem-solving, respect, and self-awareness of 13–17-year-old adolescents through household-based assessment. The goal of ALiVE is to generate evidence of these proficiencies for use by governments in their current efforts to include these competencies in the education system. The tools use scenario- and performance-based tasks, scored with qualitative rubrics. The assessments are administered orally on an individual basis, apart from the collaboration tasks, which are administered to groups of four adolescents.

7.4.2.2 Optimizing Assessment for All (OAA; Kim & Care, 2020)

OAA was a collaborative effort designed to strengthen education systems’ capacity to integrate twenty-first century skills into teaching and learning through the use of assessment. Assessments of collaboration, problem solving, and critical thinking were integrated within mathematics, health, environment, English, social studies, and science for grades 4–8. OAA is a set of classroom assessment tasks with both closed and open response types that are scored using rubrics. The tasks took traditional test items and revised them to assess complex skills aligned with each country’s new learning goals. A critical product of OAA was a set of generic templates that allow assessment tasks to be developed in any country from existing items, taking account of culture, context, and the skills of interest.

7.4.2.3 International Social and Emotional Learning Assessment (ISELA; D’Sa & Krupar, 2019)

The International Social and Emotional Learning Assessment (ISELA) was developed among Syrian refugee children in Kurdistan, Iraq, to understand self-concept, stress management, perseverance, empathy, and conflict resolution in children aged 6–12 years. The context was conflict-affected and fragile states, and so the tool design targeted understanding of childhood resilience. The tool was developed for use in program and national evaluation, with a focus on social-emotional skills acting as protective factors.

7.4.2.4 Educator Assessment of Learners’ Soft Skills Ability (EALSA; Education Development Center, 2019)

The EALSA is primarily a work readiness tool developed within the context of the Measuring Skills @ Scale Project. The tool is focused on the skills of communication, interpersonal skills, dependability, and problem solving/critical thinking. It comprises a self-report questionnaire using items that in the main are contextualised within the work environment. The purpose is to generate evidence of respondents’ current levels of functioning and so provide information to instructors about where best to intervene to improve performance.

7.4.2.5 The Youth Power Action Youth Soft Skills Assessment (YAYSSA; Omoeva et al., 2023)

YAYSSA measures the social and emotional skills of youth in low-resource settings. Specific targets are positive and negative self-concept, higher order thinking skills, and social and communication skills. The tool is designed for use in the evaluation of programs intended to improve the economic outcomes and functioning of youth. It is a self-report tool, with a mix of attitudinal responses to vignettes and self-evaluation of the respondent’s own skills.

7.4.2.6 Life Skills and Citizenship Education Measure (LSCE; Hoskins & Liu, 2019)

The LSCE is a product of a collaboration of UNICEF and the World Bank in the Middle East, with the tool developed to measure 12 life skills in the context of Life Skills and Citizenship Education. The goal is to use the information to inform government priorities, policy, and instructional objectives. The current tool captures eight of the skills through a mix of scenario-based multiple-choice response formats and self-report of competencies using Likert scales.

7.4.2.7 Tanzania Life Skills Assessment (Research Triangle Institute, 2019)

The Tanzania Life Skills Assessment takes a self-report approach to its item design for grit and self-control, and a performance-based approach to problem solving, which records the range of methods respondents use to solve word problems associated with EGRA. After listening to brief scenarios, respondents identify how likely they are to behave in the same way as the protagonist.

7.4.2.8 Review of Selected Assessments

Each of the seven assessments was reviewed in light of the foregoing description of common methods used in the assessment of twenty-first century skills, and the five dimensions. Table 7.1 provides a summary of information for each assessment. Note that the inclusion of these assessments is neither an endorsement nor confirmation that they are reliable and valid.

Table 7.1 Selected assessments of twenty-first century skills in sub-Saharan Africa

Most of the seven tools capture information for summative purposes, with just two designed to inform policy. The two tools designed to inform the education sector most consequentially derive from quite different sectors, but both draw on strong government support. ALiVE and LSCE are both large-scale initiatives, one supported primarily by civil society organisations and the other by UNICEF and the World Bank. Their target skills span the cognitive, interpersonal, and intrapersonal, including competencies associated with global competence and citizenship. With the exception of EALSA, with its focus on work readiness, and OAA, with its classroom focus, the remaining tools prioritise skills that might act as protective factors, or skills of interest, in the face of conflict and adversity. In terms of method, self-report remains the dominant means of collecting information. Given the dominance of this method, the contexts in which several of the tools were developed, and the age of the samples that contributed data, there may be concern about the degree to which respondents had reached a level of cognitive or social-emotional maturity sufficient to provide data truly indicative of authentic actions in the real world.

7.5 Conclusion

Sub-Saharan African countries are increasingly adopting twenty-first century skills or life skills into their primary and secondary education. However, in the main, assessment remains focused on traditional academic-oriented curricula. As national education systems move more rapidly into nurturing and assessing new competencies, there are few resources available to draw from.

Methods currently used in the region have not been reviewed critically to stimulate change in the approach to assessing these dynamic competencies. This chapter has focused on identifying assessment approaches for twenty-first century skills applied in SSA. The forms of assessment most commonly used in SSA to measure these skills are scenario-based assessment with standardized scoring rubrics, and self-report questionnaires using Likert-scale response options. While self-report questionnaires are favoured for their simplicity of design, administration, and scoring, there are serious concerns about the degree to which they stimulate socio-culturally desired responses. Scenario-based activities, however, have gained recognition for their ability to prompt respondents into multidimensional action or thinking, fostering idea generation without fear of presenting incorrect responses. An example of this approach from East Africa is the ALiVE project, which employed a context/scenario-based assessment method to gauge adolescents’ proficiency in specific competencies. The outcomes of this assessment furnished compelling evidence supporting the method’s effectiveness. Furthermore, the robustness of the scoring approach facilitated useful behavioural differentiation across proficiency levels, providing information that teachers could use for instructional purposes.

The review of tools in this chapter highlights the relative paucity of diversity in tool development in the SSA region. This finding alone is useful in guiding future efforts to experiment with multiple approaches, rather than defaulting to those traditionally used to capture evidence of subject-matter achievement in the education sector.