Learning from a Learning Collaboration: The CORC Approach to Combining Research, Evaluation and Practice in Child Mental Health

  • Isobel Fleming
  • Melanie Jones
  • Jenna Bradley
  • Miranda Wolpert
Open Access
Original Article


This paper outlines the experience of the Child Outcomes Research Consortium (CORC), formerly known as the CAMHS Outcomes Research Consortium; the name was changed in 2014 in recognition of the widening scope of the collaboration's work. CORC is a learning collaboration of service providers, funders, service user groups and researchers across the UK and beyond, jointly committed to collecting and using routinely collected outcome data to improve and enhance service provision and to improve understanding of how best to help young people with mental health issues and their families.


Keywords: Learning collaboration · Routine outcome monitoring · CORC · PROMs and PREMs


The Child Outcomes Research Consortium (CORC; formerly known as the CAMHS Outcomes Research Consortium, the name was changed in 2014 in recognition of the widening scope of the collaboration's work) was formed in 2002 by a group of child mental health clinicians, managers and funders, all working in the National Health Service (NHS) in England. They worked across five different service providers around the country, but shared a curiosity about the effectiveness of their own and their colleagues' practice and about how best to improve it.

They determined that one way to find out about the impact of their work was to ask those they worked with (not routine practice then or now), and thus set about exploring appropriate tools to access these views in a systematic way (Wolpert et al. 2012). Interest grew amongst other services, and academics joined the founding group. The collaboration opened to wider membership in 2004 and was formalised as a not-for-profit learning consortium in 2008.

Over the last decade the collaboration has grown to include over half of all services across the UK (70 membership groupings) with members also in Scandinavia and Australia, and seeks to act as a peer-learning group (Fullan 2009). It also increasingly includes a range of voluntary sector and counselling services.

The collaboration has pioneered the routine use of patient-reported outcome and experience measures (PROMs and PREMs) across child mental health services in England (supported by research reviewed elsewhere in this special issue) and has informed and contributed to policy development (Department of Health 2004, 2012). Its work and learning have underpinned the current national service transformation initiative, children and young people's improving access to psychological therapies (CYP IAPT), which seeks to implement patient-reported routine outcome measurement across children's mental health services in England.

The Child Outcomes Research Consortium has recently introduced a self-review and accreditation system to allow members to internally assess quality and gain external assurance that they are implementing best practice in outcome evaluation.

From the outset, CORC has sought to bridge the worlds of clinical decision-making, evaluation and research. Table 1 offers a conceptualisation of the way that the collaboration conceived this continuum and outlines the role of CORC at each level.
Table 1

CORC support for clinical practice, service evaluation and research


Primary aim

How CORC supports each aim

Clinical practice

Aid clinical decision making

• Makes measures freely available

• Trains clinicians in the use and interpretation of measures via U-PROMISE and bespoke training

• Advises on how to choose data collection systems

• Provides access to free data collection systems

Service evaluation

Support performance management

• Provides team and service level reports that compare a service with others using appropriate metrics

• Provides advice on how to consider such data collaboratively using the MINDFUL approach

• Presents reports at service meetings


Research

Contribute to the evidence base

• Analyses collated data to support member enquiries

• Uses data to answer key questions

• Shares findings with members and publicly as relevant

• Submits articles to peer-reviewed journals and publishes findings

This is a challenging agenda and there are clear tensions, as well as interdependencies, between the desire to use outcomes to directly inform clinical practice and using them to inform research and service evaluation (Wolpert 2014). Below we elaborate on the key challenges faced in trying to use patient-reported routine outcome and experience measurement to contribute to research, evaluation and practice, and how CORC has tried to address them. In this paper we reflect on the practical issues and sustainability of CORC methodologies, rather than their implementation (see the CORE paper in this issue for a methodological approach).

PROMs, PREMs and Clinical Practice

The Child Outcomes Research Consortium emphasises that any feedback measure should be used in the context of collaborative working and with an aspiration to shared decision-making to directly inform clinical work (Law 2012; Law and Wolpert 2014). Practitioners are encouraged to consider the outcomes of clients they see using normative data and to discuss this in supervision (Law and Wolpert 2014). This approach is supported by service users themselves (Roberson 2011).

It should be noted that the collaboration has not yet finalised ways to support members to track progress for individual clients against trajectories of change. This is something that the collaboration is seeking to pursue: learning from the approach pioneered by Lambert, Bickman, Duncan, Miller and others, work is underway to develop trajectories of change using a range of measures for a UK population.

As reported elsewhere in this special issue, there are well-recognised challenges to encouraging clinicians to use such measures as part of their routine practice, including: a) concerns about inappropriate use and impact on the therapeutic relationship; b) lack of confidence in choosing and using measures; and c) concerns about insufficient support for increased administrative demands and inadequate data systems to support the collection of considerable amounts of additional data (Badham 2011; Curtis-Tyler 2011; de Jong et al. 2012; Johnston and Gowers 2005; Moran et al. 2012; O'Herlihy 2013; Wolpert 2013).

The collaboration addresses these challenges as follows:
  1. a)

    In terms of concerns about the impact on the therapeutic relationship, CORC explicitly recognises the dangers of forms being used as a "tickbox exercise" without regard for the therapeutic relationship (Wolpert 2014). CORC stresses that there may be a necessary stage of "feeling clunky" that clinicians have to work through (Abrines et al. 2014) and recommends starting small, with a few clinical staff, so as to have the opportunity to "work through the bumps" in the process (Edmondson et al. 2001).

  2. b)

    In terms of concerns arising from lack of confidence, CORC provides a range of free support materials on its website, including video training materials for both clinicians and supervisors. Specialist one- and three-day training courses (U-PROMISE) have been developed by CORC in collaboration with others to ensure that clinicians and supervisors can use the tools effectively. This training has been shown to increase clinicians' positive attitudes towards, and self-efficacy when using, PROMs and feedback (Edbrooke-Childs et al. 2014).

  3. c)

    In terms of insufficient resources and support for data collection, CORC provides guidance to funders on the need to resource and support this activity, and also provides free databases to members to support them whilst their services find the best ways to collect the data routinely.


PROMs, PREMs and Service Evaluation

Collaborating services send their data to a central team of researchers and data analysts who produce reports that allow comparison with relevant comparators. A dashboard is being trialled to allow for a rapid review of key data. These reports are tailored to members' needs in relation to four main domains of service metrics: 1) Who is my service seeing? 2) How well are we addressing their needs? 3) What do service users think of their support? 4) How good is our evidence on what we are doing, and what could we be doing better?

Members are also offered bespoke reporting in more depth, which includes statistical comparisons of service outcomes with those of other services using funnel plots and other relevant visual representations.

Members are encouraged to use these reports to consider their outcomes in comparison with others, to inform discussions with commissioners and others in line with practice-based evidence (Wolpert et al. 2014). CORC recommends a systematic and collaborative approach to consideration of such data by service providers, funders and users adopting the ‘MINDFUL’ framework, whereby appropriate statistical comparisons are made in relation to the most meaningful clinical unit (in the UK this is the multidisciplinary team) employing multiple perspectives and harnessing the strength of a learning collaboration (Wolpert et al. 2014).

This MINDFUL framework (see Box 1) involves: a consideration of multiple perspectives, interpreting differences in the light of the current base of evidence, a focus on negative differences when triangulated with other data, directed discussions based on ‘what if this were a true difference’ which employ the 75–25 % rule (discussed further below), the use of funnel plots as a starting point to consider outliers, the appreciation of uncertainty as a key contextual reality and the use of learning collaborations to support appropriate implementation and action strategies.
Box 1

The MINDFUL framework

MINDFUL approach to using data to inform performance management in teams (Wolpert et al. 2014)

• Multiple perspectives: child, parent, practitioner considered separately

• Interpretation: team or individual level or care pathway

• Negative differences: as a starting point

• Directed discussions: focus on what one would do if negative differences were real (75 % discussion time) rather than examining reasons for why they might be not real (25 % discussion time)

• Funnel plots: a good way to present data to reduce the risk of over-interpretation but still only a starting point

• Uncertainty: important to remember that all data are flawed and that there is a need to triangulate data from a variety of sources

• Learning collaborations: CORC supports local learning collaborations of service users, commissioners and providers, to meaningfully interpret data

Key challenges to using data for service evaluation include: a) data completeness; b) data quality; and c) inappropriate use of data.

The Child Outcomes Research Consortium has sought to respond to these challenges as follows:
  1. a)

    In relation to data completeness, CORC collects information on how many referrals there are to a service and works with services to compare their data completeness (Mellor-Clark et al., in this issue). This remains a real challenge on a number of levels, including getting clinicians to use measures and ensuring that data are entered on the relevant systems. However, an independent audit found that the implementation of CORC protocols across a service (2011–2013) was associated with a doubling in the use of repeated outcome measurement during this period (from 30 to 60 %; Hall et al. 2013).

  2. b)

    In relation to data quality, data is checked back and forth between the central team and collaborating services. CORC runs implementers’ meetings every 6 months for those in charge of collecting data and has developed a learning community of data managers who are increasingly skilled in understanding issues surrounding data management. CORC has also greatly contributed to raising the awareness of the use and type of outcome measures, which is likely to have long term effects on data quality (Hall et al. 2013).

  3. c)

    In relation to the inappropriate use of data for performance management, as part of the 'MINDFUL' framework a sequenced approach to questioning the service- and team-level reports is recommended, including consideration of data quality and the appropriateness of the tools used. The advice is for services to use funnel plots to consider variation, in order to minimise the over-interpretation of random variation (Spiegelhalter 2005; Fugard et al. 2014). It is recommended that service discussions start by considering the outliers that are performing more poorly than expected. Whilst recognising that these negative outliers may be artefacts related to data quality, it is also important to consider the possibility that they reflect real differences. To counteract the human tendency to explain away any negative differences as data errors, CORC promotes spending 25 % of discussion time on data quality concerns, and 75 % of the time on a thought experiment: if these data were showing up problems in our practice, what might they be, how might we investigate them, and how might we rectify them (Wolpert et al. 2014)?
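To illustrate the funnel-plot logic described above, the following is a minimal sketch, not CORC's actual implementation: it assumes a simple proportion outcome (e.g. the proportion of a team's closed cases showing improvement) and approximate binomial control limits in the spirit of Spiegelhalter (2005). The function names and the team data are hypothetical.

```python
import math

def funnel_limits(p_bar, n, z=1.96):
    """Approximate funnel-plot control limits for a proportion outcome.

    p_bar: pooled proportion across all teams (the funnel's centre line).
    n:     number of cases seen by one team (the x-axis of the funnel).
    z:     z-score setting the limit (1.96 ~ 95 %, 3.09 ~ 99.8 %).
    """
    se = math.sqrt(p_bar * (1 - p_bar) / n)  # binomial standard error shrinks as n grows
    return max(0.0, p_bar - z * se), min(1.0, p_bar + z * se)

def flag_outliers(teams, z=3.09):
    """Flag teams outside the funnel; teams maps name -> (improved, n)."""
    total_improved = sum(i for i, n in teams.values())
    total_n = sum(n for i, n in teams.values())
    p_bar = total_improved / total_n
    flagged = {}
    for name, (improved, n) in teams.items():
        lo, hi = funnel_limits(p_bar, n, z)
        p = improved / n
        if p < lo:
            # candidate negative outlier: per MINDFUL, a starting point for
            # directed discussion, not a verdict on the team
            flagged[name] = "below"
        elif p > hi:
            flagged[name] = "above"
    return flagged
```

Because the control limits widen at small sample sizes, a small team with a poor raw percentage may sit inside the funnel, which is exactly how the plot guards against over-interpreting random variation.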


PROMs, PREMs and Research

Over the last decade, CORC members have built up a rich (if flawed) dataset consisting of over a quarter of a million records (263,928 as of 24th February 2014), although only 24 % have meaningful outcome data. CORC has started to mine these data on behalf of members to answer key questions that may help inform our understanding of how best to help children and young people with mental health issues, always bearing in mind the need for caution given the missing data (Clark et al. 2008). In doing so, we are able to close the loop, turning practice-based evidence into evidence-based practice.

The Child Outcomes Research Consortium now has a clear protocol whereby members (and non-members) can apply to use the data or request analyses to be carried out by the central team. Key analyses already published include consideration of the sorts of goals young people set for themselves when they come to therapy (Bradley et al. 2013), analysis of a measure of service satisfaction (Brown et al. 2012), and analysis of service-level outcomes (Wolpert et al. 2012). Further analyses currently in progress include an exploration of the impact of evidence-based practice and a comparison of outcomes achieved by those seen in clinical services and those in the community who are not seen.


Bridging the worlds of research, service evaluation and clinical decision-making remains a complex and challenging agenda. CORC certainly does not have all the answers and daily obstacles remain. We hope that by sharing our experience we can help advance further work in this challenging but worthwhile area.



The authors would like to thank all members of the Child Outcomes Research Consortium; the CORC committee at the time of writing (includes M.W.): Alan Ovenden, Alison Towndrow, Ann York, Ashley Wyatt, Duncan Law, Evette Girgis, Julie Elliott, Mick Atkinson and Tamsin Ford; and the CORC Central Team at the time of writing (includes M.W., J.B. and I.F.): Robbie Newman, Rachel Argent, Slavi Savic and Thomas Booker. The authors would also like to thank Julian Edbrooke-Childs for his insightful comments.


  1. Abrines, N., Midgley, N., Hopkins, K., Hoffman, J., & Wolpert, M. (2014). A qualitative analysis of implementing shared decision making in child and adolescent mental health services (CAMHS) in the UK: Stages and facilitators. Clinical Child Psychology and Psychiatry. doi: 10.1177/1359104514547596.
  2. Badham, B. (2011). Talking about talking therapies: thinking and planning about how to make good and accessible talking therapies available to children and young people. Retrieved from
  3. Bradley, J., Murphy, S., Fugard, A. J. B., Nolas, S. M., & Law, D. (2013). What kind of goals do children and young people set for themselves in therapy? Developing a goals framework using CORC data. Child and Family Clinical Psychology Review, 1, 8–18.
  4. Brown, A., Ford, T., Deighton, J., & Wolpert, M. (2012). Satisfaction in child and adolescent mental health services: Translating users’ feedback into measurement. Administration and Policy in Mental Health and Mental Health Services Research. doi: 10.1007/s10488-012-0433-9.
  5. Clark, D. M., Fairburn, C. G., & Wessely, S. (2008). Psychological treatment outcomes in routine NHS services: A commentary on Stiles et al. (2007). Psychological Medicine, 38(5), 629–634. doi: 10.1017/S0033291707001869.
  6. Curtis-Tyler, K. (2011). Levers and barriers to patient-centred care with children: Findings from a synthesis of studies of the experiences of children living with type 1 diabetes or asthma. Child: Care, Health and Development, 37(4), 540–550. doi: 10.1111/j.1365-2214.2010.01180.x.
  7. de Jong, K., van Sluis, P., Nugter, M. A., Heiser, W. J., & Spinhoven, P. (2012). Understanding the differential impact of outcome monitoring: Therapist variables that moderate feedback effects in a randomized clinical trial. Psychotherapy Research, 22(4), 464–474. doi: 10.1080/10503307.2012.673023.
  8. Department of Health. (2004). National service framework for children, young people and maternity services. Retrieved from
  9. Department of Health. (2012). Children and young people’s health outcomes strategy. Retrieved from
  10. Edbrooke-Childs, J., Wolpert, M., & Deighton, J. (2014). A qualitative exploration of patient and clinician views on patient reported outcome measures in child mental health and diabetes services. Administration and Policy in Mental Health and Mental Health Services Research. doi: 10.1007/s10488-014-0586-9.
  11. Edmondson, A. C., Bohmer, R. M., & Pisano, G. P. (2001). Disrupted routines: Team learning and new technology implementation in hospitals. Administrative Science Quarterly, 46(4), 685–716. doi: 10.2307/3094828.
  12. Fugard, A., Stapley, E., Ford, T., Law, D., Wolpert, M. & York, A. (2014). Analysing and reporting UK CAMHS outcomes: An application of funnel plots. [in press].
  13. Fullan, M. (2009). Motion leadership: The skinny on becoming change savvy. Thousand Oaks: Corwin.
  14. Hall, C. L., Moldavsky, M., Baldwin, L., Marriott, M., Newell, K., Taylor, J., et al. (2013). The use of routine outcome measures in two child and adolescent mental health services: A completed audit cycle. BMC Psychiatry, 13, 270.
  15. Johnston, C., & Gowers, S. (2005). Routine outcome measurement: A survey of UK child and adolescent mental health services. Child and Adolescent Mental Health, 10(3), 133–139.
  16. Law, D. (2012). A practical guide to using service user feedback & outcome tools to inform clinical practice in child & adolescent mental health: some initial guidance from the children and young peoples’ improving access to psychological therapies outcomes-oriented practice (co-op) group. Retrieved from–outcome-tools-.pdf.
  17. Law, D., & Wolpert, M. (Eds.). (2014). Guide to using outcomes and feedback tools with children, young people and families (2nd ed.). London: CAMHS Press.
  18. Moran, P., Kelesidi, K., Guglani, S., Davidson, S., & Ford, T. (2012). What do parents and carers think about routine outcome measures and their use? A focus group study of CAMHS attenders. Clinical Child Psychology and Psychiatry, 17(1), 65–79.
  19. O’Herlihy, A. (2013). Progress in using ROM. Children and young people’s improving access to psychological therapies. Outcomes and feedback bulletin, 2013, 3–4. Retrieved from–data-and-feedback.pdf.
  20. Roberson, J. (2011). How can we make outcome monitoring better? Retrieved 29 January 2014, from:
  21. Spiegelhalter, D. J. (2005). Funnel plots for comparing institutional performance. Statistics in Medicine, 24(8), 1185–1202.
  22. Wolpert, M. (2013). Do patient reported outcome measures do more harm than good? BMJ, 346, f2669. doi: 10.1136/bmj.f2669.
  23. Wolpert, M. (2014). Uses and abuses of patient reported outcome measures (PROMs): Potential iatrogenic impact of PROMs implementation and how it can be mitigated. Administration and Policy in Mental Health and Mental Health Services Research, 41(2), 141–145. doi: 10.1007/s10488-013-0509-1.
  24. Wolpert, M., Deighton, J., De Francesco, D., Martin, P., Fonagy, P., & Ford, T. (2014). From ‘reckless’ to ‘mindful’ in the use of outcome data to inform service-level performance management: Perspectives from child mental health. BMJ Quality & Safety. doi: 10.1136/bmjqs-2013-002557.
  25. Wolpert, M., Ford, T., Law, D., Trustam, E., Deighton, J., Flannery, H., et al. (2012). Patient reported outcomes in child and adolescent mental health services (CAMHS): Use of idiographic and standardized measures. Journal of Mental Health, 21, 165–173.

Copyright information

© The Author(s) 2014

Open Access: This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Authors and Affiliations

  • Isobel Fleming
    • 1
  • Melanie Jones
    • 2
  • Jenna Bradley
    • 1
  • Miranda Wolpert
    • 1
  1. Child Outcomes Research Consortium, London, UK
  2. Evidence Based Practice Unit, London, UK
