Abstract
There are few reliable and feasible quality assurance methods to support scaling up of psychological interventions delivered by non-specialist providers. This paper reports on the phased development and validation of a digitally administered Knowledge of Problem Solving (KOPS) measure to assess competencies associated with a “task-shared” problem-solving intervention for adolescents with diverse mental health problems in India. Phase 1 established key competencies required to deliver the intervention, followed by item generation for a corresponding knowledge-based competency measure that could be administered efficiently through e-learning systems. In phase 2, items were refined based on responses from an “experienced” reference sample comprising 17 existing counsellors with direct experience of the problem-solving intervention, and a “novice” sample comprising 14 untrained university students and NGO staff. In phase 3, we evaluated two parallel versions of the measure in a validation sample (N = 277) drawn from universities and NGOs. The resulting 17-item measure was structured around a hypothetical case, followed by multiple-choice questions that asked about the most appropriate response to a practice-based scenario. The difficulty level of the test items was well matched to the ability level of participants (i.e. most items were of moderate difficulty and few were easy or difficult). Only one item showed a negative discrimination index and was removed from the 17-item forms. The final 16-item version of the KOPS measure provides a scalable digital method to assess key psychotherapeutic competencies among non-specialists, particularly in relation to a transdiagnostic problem-solving intervention. Similar formats could be deployed more widely alongside e-learning programmes to expand the global workforce capable of delivering evidence-based psychological interventions.
Introduction
Task-sharing is an established strategy for building mental health service capacity, especially in low- and middle-income countries. Task-sharing often involves the training of non-specialist providers (NSPs) to deliver evidence-based psychological interventions (Hoeft et al., 2018; Scott et al., 2018). A major barrier to scaling up task-sharing approaches is the reliance on traditional models of in-person, expert-led training workshops for NSPs (Philippe et al., 2022; van Ginneken et al., 2021). Attention has therefore focused on less resource-intensive training models, and particularly those involving digital technologies that enable efficiencies in training resources and in post-training support (Naslund et al., 2019; O’Connor et al., 2018).
A promising approach to task-sharing in the area of adolescent mental health has been developed by Sangath NGO and international collaborators as part of the Premium for Adolescents (PRIDE) programme in India. PRIDE was initiated in 2016 and concluded in 2022, with the overall aim to establish a transdiagnostic, stepped care model addressing common mental health problems in Indian secondary schools (Michelson et al., 2020a). A first-line problem-solving intervention (“Step 1”) was designed in a brief (3-week) face-to-face format utilising “lay” counsellors, and tested against problem-solving booklets alone in a large randomised controlled trial (Michelson et al., 2020b). The counselling format had significant effects on self-reported psychosocial problem severity at 6 and 12 weeks, as well as sustained effects on psychosocial problems and mental health symptoms over 12 months (Malik et al., 2021).
Subsequent efforts to scale up the Step 1 intervention in India have involved updating an existing e-learning platform, which was originally set up to train prospective providers in a brief intervention for adults with depression (Muke et al., 2020). The platform can be accessed through any internet-enabled device, for example a smartphone or a computer, and content can be followed independently (i.e. as a self-guided programme) or with external coaching. A randomised controlled trial has investigated the relative effects of self-directed and coach-supported training for prospective Step 1 providers, demonstrating a significant incremental benefit of coaching on knowledge-based competencies (Mathur et al., 2023a, b).
In this brief report, we describe the design and preliminary psychometric evaluation of a competency assessment measure that was chosen as the primary outcome in the aforementioned PRIDE training trial (see Mathur et al., 2023a, for the complete trial protocol). The Knowledge of Problem Solving (KOPS) scale incorporates a vignette-based, multiple-choice question–answer format. Its design is intended to address key feasibility challenges in scaling up expert-rated observational scales (Cooper et al., 2015; Kohrt et al., 2015; Ottman et al., 2020). The user-friendly digital format permits efficient self-administration, while the vignettes reflect diverse therapeutic scenarios that may be encountered in real-world settings. Although initially intended for use in India, there may also be wider applications for the measure given that problem-solving is among the most common practice elements in evidence-based psychological interventions for adolescents worldwide (Michelson et al., 2022).
Methods
Design
We followed a phased approach in line with previous research on scalable psychotherapeutic competency measures (Cooper et al., 2015; Restivo et al., 2020). Phase 1 developed a competency “blueprint” that outlined the knowledge and applied skills required to deliver a transdiagnostic problem-solving intervention. Phase 2 involved drafting two parallel versions of the competency measure. Phase 3 tested the psychometric properties of these parallel forms.
Participants
Following on from desk-based activities in phase 1 (see procedures below), phase 2 involved 14 individuals without experience of providing mental health services of any type (the “novice” group), and 17 individuals who had already been trained in the PRIDE problem-solving intervention (the “experienced” group). The novice group comprised university students studying psychology, education, or allied disciplines and NGO staff working with adolescents. The experienced group was recruited from among Sangath staff who were not otherwise directly involved in designing the KOPS scale.
Phase 3 was embedded within a larger study that used a randomised controlled trial design to evaluate two digital formats (with and without coaching) for training prospective NSPs in the PRIDE problem-solving intervention (Mathur et al., 2023a). A total of N = 277 trial participants were recruited from four universities (located in Delhi, Bangalore, and Mumbai) and five NGOs working in the fields of adolescent health and education. The current study uses data collected from trial participants at baseline (i.e. before allocation to a training condition).
Phase 1: Selection of Competencies
Procedures and Interim Findings
A working group comprising 3 India-based authors (SM, RM, TS) reviewed the existing PRIDE intervention manual and training materials (Michelson et al., 2020a, b) to generate lists of non-specific counselling competencies (e.g. rapport building; verbal and non-verbal communication) and competencies that are specific to problem-solving (e.g. identifying problems; selecting and implementing solutions). An initial blueprint was reviewed by six experts comprising original developers of the problem-solving intervention and other clinical experts, as well as a separate group comprising eight NSPs who had been previously trained to deliver the intervention in question. These reviewers were independent from the working group and advised on the extent to which the blueprint achieved adequate coverage of key competencies needed for effective delivery of a transdiagnostic problem-solving intervention (Table 1). Experts and NSPs rated individual items (from 1 = lowest to 3 = highest) according to their relative importance, and provided additional qualitative feedback on the distinctiveness and redundancy of respective competencies. To enhance external validity, competency domains were also cross-referenced with problem-solving competencies from a widely cited CBT competency framework that has been used in large-scale training of psychological practitioners elsewhere (Roth & Pilling, 2008). Based on feedback and external comparisons, items were consolidated, removed, or added. The final blueprint (see Supplementary Materials, Table I-SM) covered 18 competencies, 13 of which were non-specific and five were specific to problem-solving.
Phase 2: Item Generation
Procedures and Interim Findings
Item generation was guided by established principles for creating multiple-choice questions (MCQs) (Haladyna, 2004; Plake & Wise, 2014). The objective was to create two parallel MCQ forms to permit repeat assessment without practice effects. Two independent teams each created a unique case description reflecting common adolescent mental health problems in the study setting, as well as generating a series of plausible counselling vignettes that followed from the case description. Similar formats have been used for assessing competencies of NSPs in other low-resource settings, though not in relation to youth-focused psychosocial interventions or problem-solving specifically (Asher et al., 2021; Ottman et al., 2020).
Each form began with a briefing note about the case’s presenting problems and context. This was followed by five vignettes, each pertaining to a different counselling session with the same case, arranged in sequential order. Each of the vignettes was accompanied by either 3 or 4 questions, making a total of 18 questions, each intended to assess a different competency. All items were designed as one-best-answer multiple-choice questions, which consisted of a lead-in question followed by four answer options. Items focused on assessing applied knowledge (i.e. knowing how to implement the intervention in a given situation) rather than theoretical knowledge, as recommended in other pedagogical research (Carneson et al., 1996). To create plausible but incorrect “distractors”, we referred to a list of common errors/misconceptions that the PRIDE supervisors had noted over the course of 5 years spent training and supervising NSPs (e.g. the misconception that confidentiality must never be broken under any circumstance).
Two parallel versions of the measure, each comprising 18 items, were subsequently piloted. Items that were correctly answered by more than 35% of the novice group were deemed to be too easy and vulnerable to guessing. Conversely, items that fewer than 65% of the experienced group answered correctly were deemed to be too difficult or ambiguous. Cognitive interviews were additionally conducted with two novices and two experienced individuals, which helped to ensure that the items were clearly worded and had one best answer. Out of 36 items (18 in each version), 19 were refined, mainly by re-wording the incorrect options or distractors to improve their discrimination. Fifteen items were removed due to their ambiguity and replaced with newly drafted items. Two items which corresponded separately to “risk assessment” and “risk management” were combined. Thus, two parallel 17-item measures were generated, with each competency represented by a single question.
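As an illustration, the screening rule applied at this stage can be sketched as follows. The thresholds are those stated above; the function name and the example item data are hypothetical, not drawn from the study dataset.

```python
# Sketch of the phase 2 item-screening rule: an item is flagged "too easy"
# if more than 35% of novices answer it correctly (vulnerable to guessing,
# given a 25% chance rate with four options), and "too difficult/ambiguous"
# if fewer than 65% of experienced counsellors answer it correctly.

def screen_item(pct_correct_novice: float, pct_correct_experienced: float) -> str:
    """Classify an item from the proportion correct in each pilot group."""
    if pct_correct_novice > 0.35:
        return "too easy"
    if pct_correct_experienced < 0.65:
        return "too difficult/ambiguous"
    return "retain"

# Hypothetical pilot results for three items:
print(screen_item(0.50, 0.90))  # -> too easy
print(screen_item(0.20, 0.40))  # -> too difficult/ambiguous
print(screen_item(0.25, 0.85))  # -> retain
```

Items flagged by either rule were then refined, replaced, or removed as described above.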
Phase 3: Psychometric Evaluation
Procedures
The study was hosted on the REDCap platform, which permits creation, administration, and management of online surveys (Harris et al., 2019). After providing demographic information and informed consent, participants were randomised to receive one of the two parallel forms. The randomisation sequence was programmed into REDCap. Upon randomisation, participants were automatically presented with the relevant KOPS form. The forms were available in both English and Hindi and participants could choose their preferred language. Each participant was provided with 90 min to complete the measure.
Data Analysis
Rasch analysis is a commonly used psychometric method for developing assessment tools in educational contexts. It is based on item-response theory (Fischer & Molenaar, 2012) and involves estimating an item characteristic curve (ICC) for each item, showing the probability of a correct response as a function of the respondent’s ability/knowledge. Ideally, the curve is S-shaped, meaning a low probability of a correct response when ability is also low, and an increasing probability of a correct response as ability increases. The horizontal position of the curve is the difficulty parameter: curves centred to the left of “average ability” represent easy items (the probability of a correct response is high even for those with low ability); curves centred to the right of “average ability” represent difficult items (the probability of a correct response is low even for those with high ability). The slope of the curve is the discrimination parameter: flat curves suggest that an item cannot discriminate between respondents with different ability; steep curves discriminate well. The analysis also yields a test information curve (TIC) indicating the information provided by the test (y) as a function of ability (x). Ideally, this curve is flat, indicating that the information provided by the test is equal for all ability levels. In reality, the curve is commonly bell-shaped, indicating that the least information is provided for people with extremely low or high ability.
The aim of our analysis was to identify psychometrically weak items and to evaluate the information characteristics of the test overall. That is, we sought to identify items that were too difficult or too easy (based on the difficulty parameter) or that discriminated poorly (based on the discrimination parameter), while also considering the overall shape of the test information curve. Analyses were carried out using R 4.2 (R Core Team, 2022) and the ltm (Rizopoulos, 2006) and tidyverse (Wickham et al., 2019) R packages.
Findings: Test Completion
In total, N = 277 individuals completed one of the two versions of the KOPS competency measure (n = 123 for Form A; n = 154 for Form B). This imbalance arose because the original randomisation sequence had been generated for 500 participants. The mean age of participants was 26.1 years (SD = 7.1). Most participants were female (n = 229, 82.7%) and included a mix of university students (n = 122, 44%) and NGO workers (n = 155, 56%). In terms of the participants’ highest level of completed education, n = 126 (45.5%) held a bachelor’s degree; n = 86 (31.0%) held a post-high school diploma or equivalent; n = 63 (22.7%) held a master’s degree; and n = 2 (0.7%) had completed education up to 12th standard (i.e. had finished high school).
Findings: Rasch Analysis
Forms A and B were fairly well matched in terms of overall item difficulty (see Table 2), discrimination (see Table 3), and the overall TICs (see Figs. 1 and 2). Thus, the forms appeared to be similar in terms of the information they provide at different ability levels. Figures 1 and 2 also show that although the most information was provided about individuals of average ability, the curves had a reasonable spread around the average. This pattern suggests that the forms provided some information about individuals above and below average ability, but not so much at the extremes.
In terms of the ICC curves, the difficulty columns in Tables 4 and 5 show that most items were moderately difficult for both forms, with a few very easy items (one on Form A, two on Form B), and a few very difficult items (two on each form). This mix was reasonable. The discrimination columns in Tables 4 and 5 show that, on both forms, 11 items had moderate to high discrimination (above 0.65), and 4 items (Form A) and 2 items (Form B) had potentially problematic low discrimination (below 0.35). The column P (correct|average) in Tables 4 and 5 indicates the probability of a correct response for respondents of average ability. Most items (12 for Form A, 13 for Form B) had a probability below 0.5, suggesting that a respondent of average ability would choose an incorrect response more often than a correct one.
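The P (correct|average) column follows directly from the item parameters: under the logistic model used here, average ability corresponds to θ = 0. A brief sketch (with hypothetical parameters, not values from Tables 4 and 5):

```python
import math

def p_correct_average(difficulty: float, discrimination: float) -> float:
    """P(correct) for a respondent of exactly average ability (theta = 0),
    i.e. the item characteristic curve evaluated at theta = 0."""
    return 1.0 / (1.0 + math.exp(discrimination * difficulty))

# A moderately difficult item (difficulty > 0) gives P < 0.5 at average
# ability, matching the pattern reported for most items:
print(round(p_correct_average(difficulty=0.5, discrimination=1.2), 2))   # -> 0.35
# An easy item (difficulty < 0) gives P > 0.5 at average ability:
print(round(p_correct_average(difficulty=-1.0, discrimination=1.2), 2))  # -> 0.77
```

Thus any item with positive difficulty yields a below-0.5 probability of a correct answer for the average respondent, which is why most items showed this pattern.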
Given the positive characteristics of the TIC and the generally good range of discrimination and difficulty across items, most items were deemed useful. The exception was the item for “brainstorming” on Form B, which had a negative discrimination index and extremely low difficulty. Although removing this item had almost no impact on the TIC, it was rejected along with its counterpart on Form A, thus keeping the forms balanced. The final measure therefore consisted of 16 items, each linked to a unique competency as listed in Table 6, and available in parallel forms.
Discussion
This study developed and validated the KOPS measure: a brief, scalable measure that can be used to assess the competency of non-specialists to deliver a problem-solving intervention for adolescents with common mental health problems. Two versions of the measure with equivalent difficulty levels were developed to allow repeated testing of training outcomes over time without practice effects. The difficulty of the test items was well matched to the ability level of trainees, with most items being of moderate difficulty and few items being easy or difficult, which is ideal for a test (Case & Swanson, 1998).
Competency measures are vital for ensuring that non-specialists have acquired the key knowledge and skills needed to undertake new mental health care roles. Typically, these measures have been designed for use with structured observations of actual sessions or analogue situations with “clients” (Fairburn & Cooper, 2011). Such observational formats require skilled assessors who can reliably identify and rate practices, but who are typically in short supply in many global settings. Besides being labour-intensive, in vivo and role-play assessments pose practical challenges, such as role-played situations feeling inauthentic (particularly when an adult is playing the role of a child or adolescent); trainees’ anxiety at being observed and rated; and the inherent variability of real-life cases (Cooper et al., 2017; El Masri et al., 2023). Observational assessments are especially impractical when trainees are accessing training and supervision remotely via e-learning platforms. Although recorded sessions can potentially be uploaded to file servers or emailed, this is harder in low-resource settings with weak digital infrastructure. Distance and technical barriers can also undermine the authenticity of role-plays conducted online (Young, 2022).
The KOPS measure obviates the need for skilled assessors and can be self-administered and scored in a relatively simple digital format. The use of written case vignettes, designed with input from local practitioners, enables respondents to apply their knowledge to practice-based scenarios that reflect common presenting problems and process issues encountered in the field. Questions are arranged in sequential order following the chronology of a multi-session intervention. These characteristics offer further advantages, in terms of external validity, relative to knowledge-based quiz formats that emphasise theoretical aspects of psychotherapy over practical applications of knowledge (Myles & Milne, 2004).
This study has several strengths. In developing our measure of provider competency, we used a rigorous stepwise approach to achieve two test forms with equivalent difficulty levels. We followed an iterative process in developing our KOPS competency blueprint, triangulating content from an existing problem-solving intervention manual and its associated training curriculum with an independent competency framework covering problem-solving and non-specific competencies, and incorporating formative feedback from experienced clinicians as well as novices. Moreover, items were pre-tested on a relatively large sample of novice practitioners, including university students and NGO workers who closely resembled the intended training population.
We acknowledge that the measure does not directly assess the application of the acquired knowledge in real-life settings, and we did not obtain data on its predictive validity with respect to clinical outcomes. These are directions for future research, along with concurrent validation against the “gold-standard” of role-plays to evaluate the concordance of knowledge-based competency assessment with skill-based assessment.
Conclusions
Competency measures are an integral part of quality assurance in psychological therapies. Previous research has recognised that many existing competency measures involve observer-rated formats that are prohibitively expensive and time consuming for routine use, highlighting the need to strike a balance between reliability, validity, and feasibility (Muse & McManus, 2013; Ottman et al., 2019). The digitally administered approach used in the current study could be rapidly extended to other contexts, offering a scalable methodology for developing and evaluating knowledge-based competencies for NSPs engaged in task-sharing of psychological interventions across a range of settings.
Data Availability
Data may be obtained from the principal investigator (Vikram Patel) subject to reasonable request.
References
Asher, L., Birhane, R., Teferra, S., Milkias, B., Worku, B., Habtamu, A., Kohrt, B. A., & Hanlon, C. (2021). “Like a doctor, like a brother”: Achieving competence amongst lay health workers delivering community-based rehabilitation for people with schizophrenia in Ethiopia. PLoS ONE, 16(2), e0246158. https://doi.org/10.1371/journal.pone.0246158
Carneson, J., Delpierre, G., & Masters, K. (1996). Designing and managing multiple choice questions (2nd Ed). https://doi.org/10.13140/RG.2.2.22028.31369
Case, S. M., & Swanson, D. B. (1998). Constructing written test questions for the basic and clinical sciences (3rd ed.). National Board of Medical Examiners.
Cooper, Z., Doll, H., Bailey-Straebler, S., Bohn, K., De Vries, D., Murphy, R., O’Connor, M. E., & Fairburn, C. G. (2017). Assessing therapist competence: Development of a performance-based measure and its comparison with a web-based measure. JMIR Mental Health, 4(4). https://doi.org/10.2196/mental.7704
Cooper, Z., Doll, H., Bailey-Straebler, S., Kluczniok, D., Murphy, R., O’Connor, M. E., & Fairburn, C. G. (2015). The development of an online measure of therapist competence. Behaviour Research and Therapy, 64, 43–48. https://doi.org/10.1016/j.brat.2014.11.007
El Masri, R., Steen, F., Coetzee, A. R., Aoun, M., Kohrt, B. A., Schafer, A., Pedersen, G. A., El Chammay, R., Jordans, M. J. D., & Koppenol-Gonzalez, G. V. (2023). Competency assessment of non-specialists delivering a psychological intervention in Lebanon: A process evaluation. Intervention, 21(1), 47–57. https://doi.org/10.4103/intv.intv_15_22
Fairburn, C. G., & Cooper, Z. (2011). Therapist competence, therapy quality, and therapist training. Behaviour Research and Therapy, 49(6–7), 373–378. https://doi.org/10.1016/j.brat.2011.03.005
Fischer, G. H., & Molenaar, I. W. (Eds.). (2012). Rasch models: Foundations, recent developments, and applications. Springer.
Haladyna, T. M. (2004). Developing and validating multiple-choice test items (3rd ed.). Routledge.
Harris, P. A., Taylor, R., Minor, B. L., Elliott, V., Fernandez, M., O’Neal, L., McLeod, L., Delacqua, G., Delacqua, F., Kirby, J., Duda, S. N., & REDCap Consortium. (2019). The REDCap consortium: Building an international community of software platform partners. Journal of Biomedical Informatics, 95, 103208. https://doi.org/10.1016/j.jbi.2019.103208
Hoeft, T. J., Fortney, J. C., Patel, V., & Unützer, J. (2018). Task-sharing approaches to improve mental health care in rural and other low-resource settings: A systematic review. Journal of Rural Health, 34(1), 48–62. https://doi.org/10.1111/jrh.12229
Kohrt, B. A., Jordans, M. J. D., Rai, S., Shrestha, P., Luitel, N. P., Ramaiya, M. K., Singla, D. R., & Patel, V. (2015). Therapist competence in global mental health: Development of the ENhancing Assessment of Common Therapeutic factors (ENACT) rating scale. Behaviour Research and Therapy, 69, 11–21. https://doi.org/10.1016/j.brat.2015.03.009
Malik, K., Michelson, D., Doyle, A. M., Weiss, H. A., Greco, G., Sahu, R., James, E. J., Mathur, S., Sudhir, P., King, M., Cuijpers, P., Chorpita, B., Fairburn, C. G., & Patel, V. (2021). Effectiveness and costs associated with a lay counselor–delivered, brief problem-solving mental health intervention for adolescents in urban, low-income schools in India: 12-Month outcomes of a randomized controlled trial. PLoS Medicine, 18(9), e1003778. https://doi.org/10.1371/journal.pmed.1003778
Mathur, S., Weiss, H. A., Neuman, M., Field, A. P., Leurent, B., Shetty, T., James, E. J., Nair, P., Mathews, R., Malik, K., Michelson, D., & Patel, V. (2023a). Coach-supported versus self-guided digital training course for a problem-solving psychological intervention for nonspecialists: Protocol for a pre-post nested randomized controlled trial. JMIR Research Protocols, 12(1), e41981. https://doi.org/10.2196/41981
Mathur, S., Weiss, H. A., Neuman, M., Leurent, B., Field, A. P., Shetty, T., James, E. J., Nair, P., Mathews, R., Malik, K., Michelson, D., & Patel, V. (2023b). Developing psychotherapeutic competencies in non-specialist providers: A pre-post study with a nested randomised controlled trial of a coach-supported versus self-guided digital training course for a problem-solving psychological intervention in India. Preprint.
Michelson, D., Hodgson, E., Bernstein, A., Chorpita, B. F., & Patel, V. (2022). Problem solving as an active ingredient in indicated prevention and treatment of youth depression and anxiety: An integrative review. Journal of Adolescent Health, 71(4), 390–405. https://doi.org/10.1016/j.jadohealth.2022.05.005
Michelson, D., Malik, K., Krishna, M., Sharma, R., Mathur, S., Bhat, B., Parikh, R., Roy, K., Joshi, A., Sahu, R., Chilhate, B., Boustani, M., Cuijpers, P., Chorpita, B., Fairburn, C. G., & Patel, V. (2020a). Development of a transdiagnostic, low-intensity, psychological intervention for common adolescent mental health problems in Indian secondary schools. Behaviour Research and Therapy. https://doi.org/10.1016/j.brat.2019.103439
Michelson, D., Malik, K., Parikh, R., Weiss, H. A., Doyle, A. M., Bhat, B., Sahu, R., Chilhate, B., Mathur, S., Krishna, M., Sharma, R., Sudhir, P., King, M., Cuijpers, P., Chorpita, B., Fairburn, C. G., & Patel, V. (2020b). Effectiveness of a brief lay counsellor-delivered, problem-solving intervention for adolescent mental health problems in urban, low-income schools in India: A randomised controlled trial. The Lancet Child and Adolescent Health, 4(8), 571–582. https://doi.org/10.1016/S2352-4642(20)30173-5
Muke, S. S., Tugnawat, D., Joshi, U., Anand, A., Khan, A., Shrivastava, R., Singh, A., Restivo, J. L., Bhan, A., Patel, V., & Naslund, J. A. (2020). Digital training for non-specialist health workers to deliver a brief psychological treatment for depression in primary care in India: Findings from a randomized pilot study. International Journal of Environmental Research and Public Health, 17(17), 1–22. https://doi.org/10.3390/ijerph17176368
Muse, K., & McManus, F. (2013). A systematic review of methods for assessing competence in cognitive-behavioural therapy. Clinical Psychology Review, 33(3), 484–499. https://doi.org/10.1016/j.cpr.2013.01.010
Myles, P. J., & Milne, D. L. (2004). Outcome evaluation of a brief shared learning programme in cognitive behavioural therapy. Behavioural and Cognitive Psychotherapy, 32(2), 177–188. https://doi.org/10.1017/S1352465804001183
Naslund, J. A., Shidhaye, R., & Patel, V. (2019). Digital technology for building capacity of nonspecialist health workers for task sharing and scaling up mental health care globally. Harvard Review of Psychiatry, 27(3), 181–192. https://doi.org/10.1097/HRP.0000000000000217
O’Connor, M., Morgan, K. E., Bailey-Straebler, S., Fairburn, C. G., & Cooper, Z. (2018). Increasing the availability of psychological treatments: A multinational study of a scalable method for training therapists. Journal of Medical Internet Research, 20(6). https://doi.org/10.2196/10386
Ottman, K. E., Kohrt, B. A., Pedersen, G. A., & Schafer, A. (2020). Use of role plays to assess therapist competency and its association with client outcomes in psychological interventions: A scoping review and competency research agenda. Behaviour Research and Therapy, 130, 103531. https://doi.org/10.1016/j.brat.2019.103531
Ottman, K., Kohrt, B. A., Pedersen, G., & Schafer, A. (2019). Use of role plays to assess therapist competency and its association with client outcomes in psychological interventions: A scoping review and competency research agenda [Preprint]. https://www.elsevier.com/open-access/userlicense/1.0/
Philippe, T. J., Sikder, N., Jackson, A., Koblanski, M. E., Liow, E., Pilarinos, A., & Vasarhelyi, K. (2022). Digital health interventions for delivery of mental health care: Systematic and comprehensive meta-review. JMIR Mental Health, 9(5), e35159. https://doi.org/10.2196/35159
Plake, B. S., & Wise, L. L. (2014). What is the role and importance of the revised AERA, APA, NCME standards for educational and psychological testing? Educational Measurement: Issues and Practice, 33(4), 4–12. https://doi.org/10.1111/emip.12045
R Core Team. (2022). R: A language and environment for statistical computing (Version 4.2.1). R Foundation for Statistical Computing. https://cran.r-project.org/bin/windows/base/
Restivo, J. L., Mitchell, L., Joshi, U., Anand, A., Gugiu, P. C., Singla, D. R., Hollon, S. D., Patel, V., Naslund, J. A., & Cooper, Z. (2020). Assessing health worker competence to deliver a brief psychological treatment for depression: Development and validation of a scalable measure. Journal of Behavioral and Cognitive Therapy, 30(4), 253–266. https://doi.org/10.1016/j.jbct.2020.10.001
Rizopoulos, D. (2006). ltm: An R package for latent variable modeling and item response theory analyses. Journal of Statistical Software, 17(5), 1–25. https://doi.org/10.18637/jss.v017.i05
Roth, A. D., & Pilling, S. (2008). Using an evidence-based methodology to identify the competences required to deliver effective cognitive and behavioural therapy for depression and anxiety disorders. Behavioural and Cognitive Psychotherapy, 36(2), 129–147. https://doi.org/10.1017/S1352465808004141
Scott, K., Beckham, S. W., Gross, M., Pariyo, G., Rao, K. D., Cometto, G., & Perry, H. B. (2018). What do we know about community-based health worker programs? A systematic review of existing reviews on community health workers. Human Resources for Health, 16(1), 39. https://doi.org/10.1186/s12960-018-0304-x
van Ginneken, N., Chin, W. Y., Lim, Y. C., Ussif, A., Singh, R., Shahmalak, U., Purgato, M., Rojas-García, A., Uphoff, E., McMullen, S., Foss, H. S., Thapa Pachya, A., Rashidian, L., Borghesani, A., Henschke, N., Chong, L. Y., & Lewin, S. (2021). Primary-level worker interventions for the care of people living with mental disorders and distress in low- and middle-income countries. Cochrane Database of Systematic Reviews, 2021(8). https://doi.org/10.1002/14651858.CD009149.pub3
Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T., Miller, E., Bache, S., Müller, K., Ooms, J., Robinson, D., Seidel, D., Spinu, V., … & Yutani, H. (2019). Welcome to the Tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686
Young, L. (2022). Counselling students responses to conducting role-play activities. Counselling and Psychotherapy Research, 22(3), 678–688.
Acknowledgements
We acknowledge the contribution of experts who provided input on the blueprint and initial test items: Pooja Nair, Rhea Mathews, Manogya Sahay, Sai Priya Kumar, Chris Fairburn, John Naslund, Anant Bhan, Juliana Lynn Restivo, Kanika Malik, Prerna Sharma, Rhea Sharma, Vidhi Tyagi, and Udita Joshi.
Funding
This study was part of the PRIDE (PRemIum for aDolEscents) programme, funded by the Wellcome Trust through a Principal Research Fellowship to Vikram Patel via Grant Number 106919/Z/15/Z. The funder played no role in the study design; collection, analysis, or interpretation of data; writing of the manuscript; or decision to submit the manuscript for publication. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
Author information
Contributions
SM and AF led on the development of the competency assessment measure, with contributions from TS, DM, and VP. SM led on drafting the manuscript, with critical inputs from AF, DM, and VP. All the authors read and approved the final manuscript.
Ethics declarations
Ethics Approval
Institutional Review Board approvals were obtained from Sangath (the implementing organisation in India); Harvard Medical School, USA (the sponsor); and the London School of Hygiene and Tropical Medicine, UK (a collaborating institute).
Consent to Participate
Informed consent was obtained from all individual participants included in the study.
Competing Interests
The authors declare no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mathur, S., Michelson, D., Shetty, T. et al. Knowledge of Problem Solving (KOPS) Scale: Design and Evaluation of a Digitally Administered Competence Measure for a Common Practice Element in Task-Shared Youth Mental Health Interventions. J. technol. behav. sci. 9, 418–427 (2024). https://doi.org/10.1007/s41347-023-00356-9