4.1 Introduction

Problem solving is a term that describes a vast number of processes and applications. It is used as we struggle with daily domestic logistics, used in mathematics classrooms as students are presented with ‘specific problems’, measured in large-scale assessment programs to evaluate domain-general problem-solving abilities, and invoked in discussion of the wicked problems that confront us globally. There is no doubt that all these applications draw on human cognition—how to analyse issues and resolve them in ways that achieve some desired goal or goals. Problems are all around us. They vary in importance to the individual and to society, but have one common quality: a problem is a scenario which challenges us with the unknown. If a solution is known, it is no longer a problem. Initial engagement with a scenario is often to evaluate whether a problem actually exists. What is a problem for one individual or in one context may not be a problem for another, because people have different experiences, different knowledge, and different resources at hand. Problems are fascinating. They can present situations which lead to negative outcomes, or provide outcomes that help our societies progress.

We vary in our ability to solve problems, and this has raised interest in the teaching and assessment of problem solving. Problems themselves vary widely, and this has led to interest in both domain-specific and domain-general application of problem-solving proficiencies. Do we use the same strategies or processes to solve a problem at home, or at school, or at work? Researchers who have identified sets of processes that need to be invoked to solve problems would answer in the affirmative. And it is this perspective that has fed twenty-first century interest in assessment of problem solving—with, for many jurisdictions, the consequent intention to improve citizens’ problem-solving abilities.

Countries in East Africa, as elsewhere in the world, are looking to equip their young people with problem-solving competencies, in the hope that these will help resolve the major problems that all societies confront. Problem solving is a competency included in virtually every description of twenty-first century skills, and typically features in curricula developed by education systems.

Since 2003, problem solving has been one of the constructs assessed in the OECD’s Programme for International Student Assessment (PISA), strengthening global knowledge of the construct within the confines of achievement that can be measured in educational contexts. Notwithstanding the acceptance of a core definition and description of problem solving implied by the just under 80 jurisdictions and countries which participated in the 2018 PISA round, several countries have shown interest in their own formulation of the concept. For example, Molnár et al. (2022) report on how students from Jordan and Hungary approach problem solving differently, while Wicaksono and Korom (2022) draw on problem-solving frameworks predominantly developed or adapted for use in Indonesia.

Problem solving in basic education systems has tended to be domain-specific. In other words, problems are posed to students within a particular domain, for example mathematics or science, and methods for resolving these problems are pursued based on the knowledge domain required as well as a set of cognitive processes. The domain-specific approach to problem solving is best characterised by the work of Polya (1945), whose focus was on mathematics. Particularly in the twenty-first century, researchers have focussed on the complexity of problem solving as we face ill-defined problems (Funke, 2014), leading to more interest in domain-general applications of problem solving (Rudolph et al., 2017; Greiff et al., 2014), in the hope of building greater proficiencies. These problems are characterised by a lack of definition: the goals are unclear, as are the means of moving towards them.

Increasingly, the domain-general conceptualisation of problem solving has become the target of international large-scale assessment programs, including microworld situational scenarios (Greiff et al., 2015), and computerised tasks based on aspects of daily life included in PISA since 2012 (OECD, 2013).

The distinction between domain-specific and domain-general approaches to problem solving is particularly relevant when considering how the education sector seeks to promote problem-solving competencies, versus societal interest in the functioning of society and the management of global issues. National curricula typically promote problem solving as it pertains to specific areas of learning, mainly mathematics and science. However, the current focus on problem solving highlights the mismatch between formal education’s traditional approach to the capability within mathematics and science studies, and society’s need for individuals to apply the steps of problem solving in daily life. To conceptualise problem solving as a twenty-first century skill is to prioritise its application in daily life, and to transfer the use of steps or processes previously exercised within academic or research studies to practical issues.

Another characteristic of problem solving is that it has increasingly been viewed as an information-processing competency. It is therefore more frequently described as a set of processes than as a skill. This characteristic sets it apart from many other competencies that fall under the umbrellas of twenty-first century skills and lifeskills, and it facilitates both our understanding of the competency and our ability to assess it. The Assessment of Lifeskills and Values in East Africa (ALiVE) project, designed to measure the lifeskills and values of adolescents, has benefited from this feature in that it provides the opportunity to formulate the processes and find ‘carriers’ for these in daily life. And this is where we return to the need to ensure that the conceptualisation of the construct is true to its context.

4.1.1 Conceptualising Problem Solving in East Africa

The first task for ALiVE in developing tools for assessing the lifeskills and values of adolescents across Kenya, Tanzania, and Uganda was to agree on commonly held understandings of these competencies in the participating countries. The development of conceptual and assessment frameworks by the ALiVE team was contextualised through reference to recent research undertaken in the participating countries. Giacomazzi et al. (2022) explored the nature of problem solving across contexts, with a focus on its nature in three East African nations. Kenya, Tanzania, and Uganda have all in recent years integrated various lifeskills into their national education curricula (Kenya Institute of Curriculum Development, 2019; NCDC, 2019; Nkya et al., 2021), raising questions about reliance on Western definitions of these constructs and about their relevance.

Giacomazzi et al. (2022) describe problem solving as “identifying the problem, knowing and understanding the problem, asking for advice, evaluating the options and choosing between them, and finding the best solution” (p. 5). Many of the respondents in the contextualisation study associated problems and problem solving with interpersonal conflict or community difficulties. Asking for advice referred to seeking guidance from significant others and the community. Aspects of this conceptualisation differ from those used in most large-scale assessments and those associated with domain-specific competencies, which prioritise cognitive elements only. A key element identified by Giacomazzi et al. (2022) is the social aspect of problem solving, and this of course draws attention to the role played by culture in our understanding of how problem solving takes place and is manifested. Incorporating this element is more complex than it may initially appear. The consideration of community and relationships rests on an individual’s knowledge of these aspects of human life. At an individual level, difficulty in problem solving occurs where limited knowledge is available or when a situation itself is ambiguous. In extreme instances of such ambiguity, and particularly in cases where the problems are widely seen as critical, such problems have increasingly been termed ‘wicked problems’ (Funke et al., 2018).

The ALiVE team that investigated problem solving also drew on the highly visible framework used by OECD PISA (2013). That framework was set in principles well aligned with ALiVE’s current concerns with education and with strengthening the integration and development of these skills: “The acquisition of increased levels of problem-solving competency provides a basis for future learning, for effective participation in society and for conducting personal activities” (p. 120). The framework drew on an information-processing approach, distinguishing it from domain-specific knowledge and approaches. The processes explored in PISA were: exploring and understanding; representing and formulating; planning and executing; and monitoring and reflecting.

Tobinski and Fritz (2017) draw attention to problem solving occurring at the intersection of available and missing knowledge, making the point that the initial state of a problem, and indeed the recognition of a problem at all, is contingent on previous knowledge. Although their interest was in the handling of complex problems in the physical environment, the principle is useful in considering how the social context impacts problems and their solutions. If assessment of problem solving is to provide the same opportunities for any individual to demonstrate their problem-solving processes, then the scenario or situation which constitutes the problem needs to be equally familiar to all being assessed.

4.2 Argument and Approach

The significant challenges for ALiVE in developing an assessment of adolescents’ problem-solving skills were twofold: to accommodate the contextualisation recommendations inherent in Giacomazzi et al. (2022) in adopting a conceptual structure, with its consequences for an assessment framework; and to generate assessment tasks compatible with the limitations and affordances of household-based assessment.

The ALiVE approach to the design and development of the assessment was construct-driven (Wilson, 2005). This means that the form of the assessment tasks, the coding criteria, and the intended reporting style are determined by the substance of the construct and by how visible signs of this substance might best be captured. The theoretical structure of the construct, problem solving, was therefore fundamental to the assessment design and form.

Drawing upon the research literature, including sources cited in this chapter, and the Giacomazzi et al. (2022) contextualisation study, the ALiVE initiative adapted a conceptual structure for problem solving and developed its assessment framework. The framework reflected a realistic appraisal of what would be possible to assess at household level. Figure 4.1 illustrates both the structure and the framework.

Fig. 4.1
A 3-tier flowchart starts with problem-solving and flows down into defining the problem, finding the solution, and applying the solution, each of which further branches out into 3 more factors.

Conceptual structure and assessment framework for problem solving

The first step in problem solving is to identify whether a problem exists. This step can be complex and will vary in cognitive demand according to the nature of the situation and the resources of the individual. A straightforward approach is to ask “who”, “what”, “how”, and “why”. The answers to these questions help to define the problem space: its current status and the ‘discomfort’ that characterises it, and the artefacts available within the space as well as those that need to be drawn upon (Care & Griffin, 2017).

The conceptual structure shows the over-arching construct of problem solving, and three dimensions – defining the problem, finding the solution, and applying the solution. Within each of these dimensions are sets of processes, or ‘subskills’. The subskills that could reasonably be assessed within the context of household-based assessment are identified in the assessment framework. These are shown in Fig. 4.1 as shaded. The dimension of ‘applying the solution’ was not targeted in the assessment program, due to the logistical problems of enabling the activities. For each assessment task, ideas were created to formulate a scenario in which each of the problem-solving processes could be targeted. The ideas were generated by the ALiVE team and were drawn from community experiences. The ideas describe possible scenarios in daily life; the assessment tasks comprised the scenario and a series of questions about it that would prompt responses interpretable within the problem-solving logic and its processes.

The structure and the assessment framework adopted for the development of the household-based assessment draw on the OECD’s PISA problem-solving framework (2013) and the findings of Giacomazzi et al. (2022). The primary difference lies in how some of the cognitive processes are clustered together, and the inclusion of advice seeking. To some degree the differences are also a function of the generic levels at which processes are presented. For example, in the same way that ALiVE’s structure depicts dimensions and subskills within these, the OECD PISA example combines two distinct processes within what is shown in Fig. 4.2 as each of four steps.

Fig. 4.2
Three rows of textboxes, one each for ALiVE, Giacomazzi et al. (2022), and OECD (2013). ALiVE has 8 factors: recognising the problem, information gathering, understanding the problem, exploring alternative solutions, reflecting on consequences, selecting, implementing, and verifying the solution.

Problem-solving processes across conceptual structures

4.2.1 From Concept to Assessment Development

In development of the assessment tool, three factors were considered. These were: the nature of the construct itself; the medium through which the assessment would be conducted; and the use to which the assessment results would be put.

4.2.1.1 Nature of the Construct

The adoption of the assumption that several distinct processes underlie a problem-solving event has implications for how assessment task scenarios must be created. Ideally, any given scenario will stimulate multiple processes, responses to which will be recordable and codable. Since the processes are typically enacted in a certain sequence, both the description of the task and the questions and prompts that follow it must observe a logical order. The predictability of responses is a significant requirement in building the sequence: each question and prompt must be phrased in such a way that the response can be interpreted within the parameters of the process for which the question or prompt has been created.

4.2.1.2 Medium of Assessment

The intention of ALiVE, as discussed in Shariff et al. (2024; Chap. 2, this volume), was to collect evidence of the competencies of 13–17 year old adolescents across Kenya, Tanzania, and Uganda, regardless of whether they were attending formal education or not. Hence, the optimal way of accessing this group was through a household-based approach. This approach had implications for the administration of assessments, as well as for the form of the assessment tools.

The main features of the administration were practical matters, but they had implications for the nature of the assessment itself. It was essential that Test Administrators were fully versed in how to approach the adolescents, how to standardise the assessment interaction, and how to interpret responses so as to code these appropriately. To ensure this, the Test Administrators participated in three days of training, one day of which was dedicated to familiarisation with the lifeskills and values to be assessed. The assessment of problem solving was to last no longer than 7–10 min for each adolescent, with the prompts being provided orally on a one-to-one basis, with no stimulus aids.

The implications of these features were several.

  • For orally administered assessment, lack of written or pictorial records or prompts meant that the assessment scenarios needed to be sufficiently simple and short that the task did not pose a significant short-term memory load for the adolescent;

  • Each assessment task needed to be communicable by the Test Administrator using simple language which could reasonably be comprehended by 13–17 year old adolescents across a range of backgrounds, education experience, and abilities;

  • Each assessment task needed to be capable of generating a predictable range of responses that would be amenable to immediate recording and coding by the Test Administrator;

  • Each assessment task needed to have the potential to stimulate responses across several of the hypothesised problem-solving processes, in order to maximise time use efficiency; and

  • While drawing on situations familiar to the adolescent target group across the three countries, the assessment tasks needed to be situated in contexts that would not discriminate between adolescents based on gender, religion, culture, or language.

4.2.1.3 Use of the Assessment Results

The intended use of the assessment results was for advocacy, not for reporting at the individual adolescent level that could inform instructional intervention. Accordingly, broad-brush levels of competency, and patterns across the adolescent target group in terms of gender, education, geographic location, and language, would be the focus of reporting. Results could be framed in descriptive text that would clearly communicate what adolescents are able to do, so that these findings could be relayed to the education community. Given this anticipated use, it was important to generate information across a coherent but minimal number of subskills, to optimise messaging.

4.3 Method

4.3.1 Structuring the Work: Conceptualisation and Assessment

The formal work sessions for defining problem solving and developing its assessment tools began in April 2021, with the first of six workshops attended by 47 representatives of the collaborating organisations. Teams were assembled for each lifeskill and value. The teams reviewed the findings of the contextualisation study and were set the task of achieving consensus on a single definition for each construct. Drafts were presented for discussion across the four teams. This was followed by highlighting sources for task ideas, which led on to technical sessions on issues of fairness and difficulty levels. The ‘skills’ teams then took their work back to country convenings for further analysis. The second workshop reviewed status, checking the likely alignment of task ideas against concept domains. This process continued through to July, at which point ‘think-alouds’ (cognitive laboratories) were held to check the teams’ early assumptions about the tasks, items, and scoring rubrics. By September 2021, the analysis of student data from the think-alouds was used to set the assessment blueprints by identifying what was, and was not, possible to assess given the field conditions under which the large-scale assessments would be undertaken. From this time onward, each skill team worked independently to fine-tune the assessments and the scoring rubrics. The problem-solving team drew primarily upon members from Uganda, and team membership fluctuated somewhat over the full development period. Analysis of dry run and pilot data early in 2022 was undertaken in a week-long in-person workshop attended by researchers. This workshop assembled the evidence for finalisation of the large-scale assessment tools, and confirmed construct structures, and subskills within constructs, through statistical analysis of the response data. Within the workshop, separate teams were again convened to focus on the specific constructs. Analysis of data from the large-scale assessment was undertaken across October to November 2022.

With the conceptual structure and descriptions of the processes within it available to them, the ALiVE problem-solving team first engaged in idea creation. Prompts for the process were to consider scenarios that adolescents might typically encounter, and to evaluate whether these would provide the opportunity for display of the dimensions and their subskills. As the ideas were successively explored and critiqued, the elucidation of performance indicators and their quality criteria took place (Care et al., 2016). The first ‘check’ of the utility of the task ideas and their operationalisation into scenarios, prompts, and performance indicators was undertaken through a cognitive laboratory process (Zucker et al., 2004; Griffin et al., 2012) with adolescents from the target population. Analysis of responses informed a review phase in which some task ideas were discarded, and for others the task descriptions, prompts, and performance indicators were refined. The next phase was a pilot of the assessment tasks with 392 adolescents (Kenya n = 154; Tanzania n = 79; Uganda n = 159). The pilot generated more information about the time required for the assessment, the need for modifications in the phrasing of tasks, and the refinement of coding criteria (informed by exploration of reliability across coders). These pilot data were also used to explore the robustness of the dimension and subskill scales. The final check included 308 Kenyan adolescents in a ‘dry run’ of the full assessment. The responses were collected by Test Administrators using tablets and the application Kobo Collect. These responses were then analysed to explore item response distributions and variations across performance indicators. Subsequently, very minor amendments were made in phrasing, and some rubrics were redefined to ensure accuracy of coding.

All problem-solving assessment tasks followed the same measurement design, with each task item generating one response for each subskill scale. From an initial pool of nine assessment tasks, six were good fit solutions, in the sense that these generated scales with excellent internal reliabilities. Of these six, just three were selected for the final large-scale assessment, in the interests of maintaining reasonable testing times for the adolescents. The responses given by adolescents to the tasks were coded according to criteria in the form of rubrics, contributing to the reporting for each subskill across three proficiency levels as illustrated in Table 4.1.
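The measurement design described above, in which each task contributes one coded response to each subskill scale and scales are retained on the basis of internal reliability, can be illustrated with a brief sketch. The data, the function name, and the choice of statistic are assumptions for illustration only: the chapter does not specify which reliability statistic the ALiVE team used, and Cronbach's alpha is adopted here simply as a common choice.

```python
# Hypothetical sketch of a scale-reliability check; not the ALiVE team's
# actual analysis code. Data and statistic (Cronbach's alpha) are assumed.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency of a scale.

    `scores` has one row per respondent and one column per task,
    each cell holding the coded level (e.g. 0-2) for one subskill.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of tasks in the scale
    item_vars = scores.var(axis=0, ddof=1)       # variance of each task's codes
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented codes: five respondents, three tasks, one coded response
# (0, 1, or 2) per task for a single subskill scale.
coded = np.array([
    [0, 1, 1],
    [2, 2, 1],
    [1, 1, 0],
    [2, 2, 2],
    [0, 0, 1],
])
alpha = cronbach_alpha(coded)
```

In a design like the one described, a scale computed this way for each subskill would inform which tasks were "good fit solutions" worth retaining.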

Table 4.1 Subskills descriptions and performance indicators at increasing levels of proficiency

Each of the levels of proficiency for the performance indicators provided in Table 4.1 contributes to the identification of levels of performance for the over-arching skill, problem solving, as reported in Ariapa et al. (2024; Chap. 10, this volume).

Of interest is the degree to which this sequence model, running from recognition of the problem to information gathering, and then to exploring solutions prior to selection, is at odds with cyclic notions of problem solving. Complex problem solving (Greiff et al., 2015; Bennett et al., 2003) typically involves multiple cycles both across and within each of the separate subskills. This possibility was not allowed for in the ALiVE assessments, which instead followed a one-step logic within each process.

4.3.2 Implications of a Contextualised Definition of Problem Solving

The role played by the contextualised definition of problem solving was made explicit in the task idea creation and subsequent item development, and was demonstrated in response data gathered during the pilot phase.

It is notable that information gathering is the subskill that adolescents across Kenya, Tanzania, and Uganda find most difficult to perform (see Ariapa et al., 2024; Chap. 10, this volume). The performance indicator at the highest level, ‘able to identify different aspects or sets of factors that, if known, might help to solve the problem’, is similarly difficult across all three assessment tasks. This demonstrates that the difficulty is not specific to any task, but is associated with the subskill itself. It may be that this phenomenon is associated with the difference noted above between the emphasis on cognitive processes seen in most conceptual structures for problem solving, and the emphasis on relationships or community (Giacomazzi et al., 2022). The subskill requires that an individual hypothesise factors or elements that are part of a problematic situation, and then seek information about them. This is essentially what is referred to elsewhere as exploring the problem space (e.g., Care & Griffin, 2017; Newell & Simon, 1972). If a first instinct when confronted with a problem is to seek advice from others (Giacomazzi et al., 2022), then this might work against diverse approaches to information gathering. The issue can be seen through analysis of qualitative responses from adolescents to one of the tasks during the pilot process.

4.3.2.1 Task Example 1

One task scenario presents the issue of two school friends fighting. Data collected during the pilot phase informed the later coding of responses. The adolescent is asked, ‘to solve this problem, what else do you need to know about it?’ Of the qualitative responses collected in the pilot, approximately two thirds of adolescents responded that they needed to know the cause of the fight; about one third responded that they needed to know who started the fight or what the relationship is between the two friends. There is therefore little variation in responses, which demonstrates a paucity of search for other relevant artefacts or elements in the problem space. Questioning the reason for the fight implies that there might be some (possibly community) value judgement about justification for the fight, while questioning the relationship may address the point made in the contextualisation study—that relationships need to be prioritised. Whether these community or relationship priorities crowd out adolescents’ capacity to brainstorm other salient factors is a reasonable question. Whether both of these main responses are symptomatic of comments collected in the contextualisation study is an interesting point. One young Kenyan is reported to have commented on problem solving:

Whenever he wants to solve a problem, he’s the person who does not ambush you wanting to solve the problem. He can’t be on one party’s side; he takes sides with both parties. If he takes sides with one person, he is not a problem solver. (Giacomazzi et al., 2022, p. 7)

This limited variety of responses is noted in another task. The limitation may be due to widespread recognition of the type of risk described by the task, which therefore inhibits canvassing of more options.
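The kind of analysis described for this task, sorting free-text pilot responses into broad categories and inspecting the resulting proportions, can be sketched as follows. The response strings, keyword rules, and category labels are invented stand-ins for the coders' actual rubric; only the general approach (categorise answers, then compute proportions) reflects the text.

```python
# Illustrative sketch only: tallying pilot responses into broad categories.
# Responses and keyword rules are invented, not actual ALiVE pilot data.
from collections import Counter

responses = [
    "need to know the cause of the fight",
    "need to know who started it",
    "need to know the cause of the fight",
    "what is the relationship between them",
    "need to know the cause of the fight",
    "need to know who started it",
]

def categorise(text: str) -> str:
    # Crude keyword mapping standing in for the coders' rubric.
    if "cause" in text:
        return "cause of fight"
    if "started" in text or "relationship" in text:
        return "who started / relationship"
    return "other"

counts = Counter(categorise(r) for r in responses)
proportions = {cat: n / len(responses) for cat, n in counts.items()}
```

A narrow spread of proportions across very few categories, as in the pilot data reported above, is what signals the limited variety of responses.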

4.3.2.2 Task Example 2

Another task scenario describes a younger sibling taking longer than anticipated to return from doing an errand. The interpretation of the scenario is based on the individual respondent’s knowledge of their community and the occurrences that typically take place within it, reflecting Tobinski and Fritz’s (2017) point that pre-existing knowledge impacts on how an individual interprets any given event. In the pilot run for the assessments, just 6% of adolescents responded that there was no problem. A strong consensus for recognition of the problem at community level was indicated by the limited number of reasons proffered for the situation identified as a problem by the remaining 94%. Over one half identified ‘something bad’ including predominantly kidnap, accident, and rape, while ‘child associated’ reasons such as getting lost, being distracted by friends, and playing, accounted for less than a quarter of reasons. Whether this particular set of reasons, in the proportions obtained, would have been proffered in other socio-economic, cultural or national environments is not known. However, the homogeneity of responses suggests a shared cultural understanding of most likely risks in the communities in which this adolescent group was based. It also sets the ‘difficulty’ level for the problem by virtue of establishing this common knowledge base.

The lack of variation in responses to need for additional information to inform the problem space may therefore be attributed to multiple factors, including the contextualised reality described above. A second may be the familiarity of the scenarios themselves on which the problems were based. Familiarity was a major criterion used to develop and then evaluate task ideas for problem setting (Tobinski & Fritz, 2017). Familiarity was judged necessary due to the household-based assessment approach that could not assume formal education content upon which to set problems. However, that familiarity itself could have nullified the need to think more creatively about the situation. And this takes us to the third factor, familiarity with an educational assessment paradigm. In all three countries, the experience of young people is that their responses to any assessment situation will be judged as either correct or incorrect—there is less scope for nuance. This may make it more likely that respondents will take a safe option of responding in conventional ways.

4.4 Discussion

Adolescence is recognised as a time of growth across physical, cognitive, and social-emotional development (Erikson, 1968). It is a time when individuals take on significantly different roles from those which they have played in the past, and often significantly different from each other. Some adolescents remain primarily dependent on their family or carers while others run independent lives. Some are pre-pubescent while others are married or have offspring. To create assessment tools at the level of simplicity required for ALiVE while simultaneously engaging the interest of this target group poses some special problems. The difficulty for ALiVE was somewhat offset by the decision to use scenarios from daily life. Although individuals at different levels of maturity might respond to these differently, the scenarios themselves would be similarly familiar to all. As shown in Ariapa et al. (2024; Chap. 10, this volume), that assumption of responding at different levels of maturity was well-founded. However, the data also indicate that education has at least as great an influence, if not greater, on problem-solving competencies in this population. If education has an impact on these competencies, it follows that problem solving is not purely a function of general intelligence nor of cognitive maturation. It means that instructional time in the context of a general curriculum is providing a learning environment in which problem-solving processes are nurtured. This can only be a positive for the goals of ALiVE and the education systems of the participating countries—Kenya, Tanzania, and Uganda.

The best-known household-based assessments across Africa and Asia focus on literacy and numeracy (e.g., Uwezo, 2019; ASER, 2021). Such assessments are of course concerned with what is correct and what is not. In other words, accuracy is valued. This is typically well understood by children and adolescents, both in assessment at household level and in school. The assessment of skills, however, cannot focus on accuracy, since their application is about appropriateness rather than a binary judgement of success or failure; assessment therefore is about determining a level of competency. Responses are best interpreted as indicating either less or more of the target construct. Bringing an ‘accuracy mindset’ to an assessment event can be counter-productive when in fact the goal is to determine whether an appropriate level of skill is brought to the demand. In the case of problem solving, its very nature requires generation of multiple hypotheses and solutions, and encourages trial and error. It is essentially an iterative process, and to judge just one step as accurate or inaccurate does not do justice to the overall process.

Guidance to test-takers is generally provided before any assessment event, and so it was with the ALiVE assessment. However, given their lack of familiarity with this type of assessment, it is plausible that the adolescents might not have fully understood the assessment goal of canvassing their thoughts, views, and processes. In addition, many adolescents were shy, and may have been inhibited in taking on a more interactive, conversational role in the assessment event. Giving short answers with the goal of providing a correct response may therefore have led to some under-performance.

The brevity of responses could also have been influenced by the framing of the task scenarios themselves. Since it was important to limit the cognitive load, given the lack of support materials, the stimulus text describing each scenario was kept as short as possible. This could have framed the mode of the adolescents’ responses by encouraging them to be equally brief.

Two criteria applied to the creation of tasks were that responses would fall within a relatively limited range, and that they would be compatible with coding. Formulation of tasks that fulfilled these criteria could have cut out alternative scenarios that might have stimulated more varied or more creative responses, hence limiting the true range of competencies observed. The predictability was necessary, given the practicalities of the one-to-one assessment administration, the lack of additional stimulus materials, the relatively short training of the Test Administrators, and the need to record and code on the spot. However, evidence from the large-scale assessment results indicates that few adolescents were actually able to demonstrate high-level competencies, so it is possible that this constraint had an impact on this particular test event. It is important to consider for future development whether such assessment approaches might place artificial limits on opportunities to demonstrate high-level competencies.

The development and use of assessment tasks to capture demonstrations of problem-solving behaviour from adolescents in ALiVE was contextualised in three ways. The definition and structure of the construct was based on a study of understandings of that construct. The task ideas were derived from events and circumstances familiar in the lives of the adolescents. The environment in which adolescents demonstrated the skill was their home and community. Notwithstanding this localisation of the development, process, and use, the tasks functioned in a manner interpretable within existing knowledge about problem solving as researched widely (e.g., Csapó & Funke, 2017). This outcome is testament to the ubiquity of problem solving as a human process, the robustness of its core elements, and its adaptability under different circumstances.