
Improving national hospice/palliative care service symptom outcomes systematically through point-of-care data collection, structured feedback and benchmarking



Every health care sector, including hospice/palliative care, needs to improve its services systematically using patient-defined outcomes. Data from the national Australian Palliative Care Outcomes Collaboration were used to determine whether hospice/palliative care patients’ outcomes, and the consistency of those outcomes, have improved in the last 3 years.


Data were analysed by clinical phase (stable, unstable, deteriorating, terminal). Patient-level data included the Symptom Assessment Scale and the Palliative Care Problem Severity Score. Nationally collected point-of-care data were anchored for the period July–December 2008 and subsequently compared to this baseline in six 6-month reporting cycles for all services that submitted data in every time period (n = 30) using individual longitudinal multi-level random coefficient models.


Data were analysed for 19,747 patients (46 % female; 85 % cancer; 27,928 episodes of care; 65,463 phases). There were significant improvements across all domains (symptom control, family care, psychological and spiritual care) except pain. Simultaneously, the interquartile ranges decreased, jointly indicating that better and more consistent patient outcomes were being achieved.


These are the first national hospice/palliative care symptom control performance data to demonstrate improvements in clinical outcomes at a service level as a result of routine data collection and systematic feedback.


Every part of the health care system needs to systematically improve the services that it offers, including hospice/palliative care. Like other health care providers, it is important for hospice/palliative care to measure patient-defined outcomes and to continually strive to improve the care that is offered. Previous work has helped to conceptualise key domains that relate to quality of care and characterise meaningful outcomes within the setting of life-limiting illnesses [13]. Continued work is required to develop further and measure meaningful outcomes beyond crude indices such as mortality or simple process measures that may or may not actually improve patient outcomes.

Key parameters for the systematic introduction of performance improvement include the following:

  1. Selecting measures that are meaningful to patients, their caregivers and clinicians

  2. Using tools that can inform policy and funding decisions systematically

  3. Embedding systems to collect these measures in routine clinical practice and analyse them in a standard way nationally

  4. Ensuring that the performance of individual services can be tracked longitudinally using the same measures to evaluate changes in the quality of care

  5. Providing timely and respectful mechanisms for feedback of each service’s performance

  6. At a systems level, working to understand the key factors that drive changes in performance through benchmarking and ensuring that services apply best available evidence for the changes required to improve outcomes

The care provided by specialist palliative care services in Australia reaches a wide range of people with life-limiting illnesses, although these are still predominantly people whose diagnosis is cancer. From population estimates, just under 60 % of people with life-limiting illnesses in Australia are referred to specialist palliative care services across the community [4].

With the challenges generated by clinical practice in hospice/palliative care and a dedicated workforce utilising limited resources, there is a need to ensure that every service is delivering the best possible care to the people who most need that care [5]. An essential prerequisite of a quality service is to have sufficiently robust measures in place to ensure that patients’ needs and outcomes can be assessed systematically in routine practice. Another prerequisite is close collaboration between peer services in order to participate in benchmarking, refine models of care and continue to improve outcomes systematically.

In order to undertake meaningful benchmarking, there need to be ways to compare patient outcomes in a small rural service with those in a large university teaching hospital. The focus is therefore on individual patients’ measurements regardless of setting, as it is the patients’ outcomes that ultimately define quality of care. These data are aggregated to service-level comparisons.

This patient-centred approach requires systematically collecting outcome measures at point-of-care in order to inform areas where improvements need to occur [6, 7]. It also requires methods to control for differences in the mix of patients seen in different services (age, gender, life-limiting illnesses, prognosis), given that hospice/palliative care services have differing patterns of referral [8].

The Australian Palliative Care Outcomes Collaboration (PCOC) is a national program funded by the federal Department of Health that is designed to improve clinical outcomes in palliative care through an explicit audit and feedback quality cycle that includes the following:

  1. National service level performance derived from patient outcome measures

  2. Systematic benchmarking between participating services or relevant subgroups of them. This involves measuring each service against national benchmark standards that PCOC sets and reports against

  3. Actively implementing quality improvement initiatives. While each service implements its own quality improvement programs, nationally employed staff (Quality Improvement Facilitators (QIFs)) facilitate identifying priorities for clinical and systems change and support change management processes across each participating service through communities of practice

  4. For individual patients, outcomes are recorded at each encounter (if in the community) and at least with each phase change (in hospital)

  5. Aggregate data are analysed and reported back to participating services every 6 months, allowing comparison to all other participating (deidentified) services nationally

Important principles underpinning participation in the initiative include that it is voluntary, data are owned by the service submitting them and there is timely return of analysed, comparative data to each participating service, where only the service receiving the data knows its own actual performance. All other data are anonymised. Participating services are supported throughout the process by receiving training in standardised clinical assessment, in interpreting and using the data, and in ways of optimising quality improvement programs. More detail on PCOC and its operation and progress has been reported previously [6, 7].

The aim of this study was to determine whether hospice/palliative care services’ patients’ and caregivers’ outcomes have improved nationally since the inception of point-of-care data collection, structured and timely feedback and benchmarking by PCOC and also whether there was greater consistency in service performance. The null hypothesis was that there was no difference between the performance of services over baseline during the study period.


Nationally consistent clinical assessments are collected by participating services at every clinical encounter with the patient (in the community) and at least with every phase change (in hospital), whether care is provided directly or through consultative services, making this a point-of-care data collection. An ‘episode of care’ changes each time the setting of care changes (community care, inpatient care, specialist nursing facility). Phases of care are clinically relevant categories of care that describe the palliative care trajectory [9]. Within this routine point-of-care collection, data are therefore aggregated at episode and phase level in order to help compare similar subpopulations (Table 1; collected when a person’s clinical condition changes) [9, 10].

Table 1 Phase definitions

The Palliative Care Phase of Care is a measure of relative resource utilisation linked directly to clinical needs, irrespective of diagnosis or prognosis [9, 10]. There are four clinical phases for the patient (stable, unstable, deteriorating and terminal) and a fifth (bereavement) when specific bereavement support is provided to the family. Movement between phases is determined by clinical needs and the urgency of the interventions required. A new phase is assigned whenever a clinical change requires patient/family reassessment and modification of the care plan.

Work has been undertaken to identify quality measures in hospice/palliative care [11, 12]. In PCOC, symptoms are measured using two key measures. The seven domains of the Symptom Assessment Scale measure insomnia, appetite, nausea, bowels, breathing, fatigue and pain on a 0–10 numerical patient self-rating scale [13, 14]. The four domains of the clinician-rated Palliative Care Problem Severity Score capture pain, other (physical) symptoms, psychological/spiritual problems and family/carer problems measured as a categorical scale (absent, mild, moderate or severe) [10].

In the service feedback report for January–June 2009, PCOC introduced eight casemix-adjusted relative mean improvement (CARMI) [15] measures: one for each of the four measures in the clinician-rated Palliative Care Problem Severity Score (pain, other symptoms, family/carer problems and psychological/spiritual problems) and one for each of four items in the patient- (or proxy-) rated Symptom Assessment Scale (pain, nausea, breathing problems and bowel problems). The CARMI is a risk-adjustment methodology that measures the difference between the change in pain and symptom scores achieved and the change expected. The ‘expected’ scores are based on what was actually achieved for different classes of patients (the ‘casemix’) during a baseline period in July–December 2008. The CARMI measures allow services to compare themselves to this national baseline and to each other, taking into account the different mix of patients at each service. The expected change score was calculated by averaging the change for all patients in the same phase (stable, unstable, deteriorating, terminal) with the same symptom score at the start of the phase. This forms the anchor point against which changes in services’ performances (improving or worsening) were assessed longitudinally, ensuring that patient-level data compared similar patients.
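The anchoring logic described above can be sketched in a few lines. The data, column names and casemix classes (phase type crossed with starting symptom score) below are illustrative assumptions for exposition, not PCOC’s actual schema or casemix classification; the same toy data stand in for the July–December 2008 baseline.

```python
import pandas as pd

# Hypothetical phase-level records: symptom score at phase start and end.
phases = pd.DataFrame({
    "service":     ["A", "A", "B", "B", "B", "A"],
    "phase_type":  ["stable", "unstable", "stable", "unstable", "stable", "stable"],
    "score_start": [4, 8, 4, 8, 6, 6],
    "score_end":   [2, 5, 3, 4, 5, 4],
})

# Observed change over the phase (positive = improvement,
# since lower symptom scores are better).
phases["change"] = phases["score_start"] - phases["score_end"]

# 'Expected' change for each casemix class: the mean change achieved
# during the baseline period for patients in the same phase type with
# the same starting score.
baseline = (
    phases.groupby(["phase_type", "score_start"])["change"]
    .mean()
    .rename("expected_change")
    .reset_index()
)
phases = phases.merge(baseline, on=["phase_type", "score_start"], how="left")

# CARMI-style measure: observed minus expected change, averaged per service.
phases["relative_improvement"] = phases["change"] - phases["expected_change"]
carmi = phases.groupby("service")["relative_improvement"].mean()
print(carmi)
```

Because the expected change is computed within casemix classes, a service seeing mostly unstable phases with high starting scores is compared against what was achieved for similar patients, not against the national raw average.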


Data for the eight routinely reported CARMI measures were analysed at phase level in 6-month periods for all of the services in the Collaboration that provided data in all six 6-month reporting periods (January 1, 2009 to December 31, 2011), and score changes were compared to the baseline. For each service in each 6-monthly report, this figure was averaged across all phases.

For each measure, a longitudinal multi-level random coefficient model was fitted to determine whether there was a significant, positive increase in the proportion of phases that were better than baseline over the 3-year period.
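A random coefficient model of the kind described can be sketched with statsmodels on simulated data. Everything here is an assumption for illustration: the service-specific intercepts and slopes, the effect sizes and the noise levels are invented, and the outcome is the per-service proportion of phases doing better than baseline in each 6-month period.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated service-level data: 30 services, six 6-month periods.
n_services, n_periods = 30, 6
rows = []
for s in range(n_services):
    intercept = 0.50 + rng.normal(0, 0.04)   # service-specific level
    slope = 0.02 + rng.normal(0, 0.005)      # service-specific trend
    for t in range(n_periods):
        rows.append({
            "service": s,
            "period": t,
            "prop_better": intercept + slope * t + rng.normal(0, 0.02),
        })
data = pd.DataFrame(rows)

# Random coefficient model: a fixed effect of period (the national trend),
# with a random intercept and random slope for period within each service.
model = smf.mixedlm(
    "prop_better ~ period", data,
    groups=data["service"], re_formula="~period",
)
result = model.fit()
print(result.summary())
```

A significantly positive fixed-effect coefficient on `period` corresponds to the hypothesis tested in the paper: that the proportion of phases doing better than baseline increased over the 3-year period, over and above service-to-service variation.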

Consent and ethical oversight

The study was approved by the Human Research Ethics Committee of the University of Wollongong, the auspicing body for the Collaboration. Individually identified data were not collected. As the data collection was of routine clinical data, separate consent was not required.


Data from all 30 services that were continuously engaged in the PCOC audit and feedback process between January 2009 and December 2011 were included in the analysis. These 30 services varied in their service delivery models and geographic settings, and differed both from other participating services whose data were not provided for all six periods and from services not participating at all (Table 2). The total number of episodes of care they reported was 27,928, with 65,463 phases of care. Across these services, the mean and median numbers of patients, episodes and phases of care increased in each 6-month period (Table 3).

Table 2 Characteristics of the 30 services that contributed data for all six 6-month collection periods, January 2009 to December 2011, in the Australian Palliative Care Outcomes Collaboration
Table 3 Changes in caseload, episodes and phases of care over time in the 30 services that provided data in all six 6-month periods participating in the Australian Palliative Care Outcomes Collaboration

At a patient level, these data report the care provided to 19,747 patients of whom 46 % were female and 85 % of whom had cancer as their primary life-limiting illness. Mean age was 70.9 years (SD 14.3; median 73; range 0–103).

For both patient- and clinician-reported outcomes, there were statistically significant improvements in all domains over the 3-year period at a service level with the exception of pain (Table 4). Consistent with this, the median service level percentage of patient phases achieving at least the baseline median change increased incrementally over the period. At the same time, the service level interquartile ranges also decreased in the same domains over the same period of reporting suggesting that not only was overall performance improving but also outcomes were being achieved more consistently (Table 5; Fig. 1).

Table 4 Regression coefficients (standard errors) of fixed effects from the multi-level models
Table 5 Service level percentage of patient phases achieving at least the baseline average change—median and interquartile ranges over time of the 30 services participating continuously in the Australian Palliative Care Outcomes Collaboration for key domains of care
Fig. 1
figure 1

Boxplots showing the distributions of the percent of patient phases at or above baseline by service with clinician-rated measures (Palliative Care Problem Severity Score (PCPSS)) and patient-rated Symptom Assessment Scale (SAS). Data from January 1, 2009 to December 31, 2011; 65,463 phases of care for 19,747 patients


This is the first time that national hospice/palliative care performance data in symptom control have been presented, and the first data that demonstrate that patient-centred improvements in care can be delivered nationally. This program of work demonstrates that it is feasible to measure patient-centred palliative care outcomes routinely at point-of-care as an integral part of the clinical encounter. More importantly, the data confirm that it is possible to work with services to improve systematically the care that is provided in ways that can be measured using patient- and family-centred outcomes. Work is ongoing to better understand why pain is the only symptom not to significantly improve.

Other initiatives have started around the world that seek to improve patient outcomes through routine data capture, analysis and feedback using similar processes [16–18]. There is a need to harmonise measures and ensure that data are also being benchmarked at patient level across these initiatives to understand variations in outcomes between services internationally.

Building routine data collection into clinical care is the critical foundation in order to understand patient outcomes. This allows comparison between patients, not simply between services. Demonstrating the rates of improved symptom control is crucial if, as a community, we are to have confidence in the care that is offered to people at the end of life and to further invest in it.

Given that hospice/palliative care was a sector of health care that was largely data naïve a decade ago, a national voluntary program of this size and complexity demonstrates very rapid progress. For many services, the Collaboration has embedded standardised and routine clinical assessments for the first time. More importantly, PCOC has catalysed a process of services starting to compare and contrast models of service delivery and levels of resourcing in ways that have not happened before.

Strengths of this program

Bespoke measures important to patients and their families cannot be derived from clinical records and need to be collected prospectively. These data fulfil this crucial criterion. The diversity of settings makes such collection even more crucial, and this study represents the various clinical settings in which hospice/palliative clinical care is delivered.

By using phase and a measure of function, PCOC has also embedded a new common language for rapidly describing the position on the care trajectory of individual patients [19].

The use of these two simple measures (phase and function) to describe each patient also allows for data standardisation across the palliative care population in a way that has not been possible before. This includes an ability for referring health professionals and specialist service providers to use descriptors with agreed definitions to describe a person’s physical status accurately and quickly.

By controlling for patients’ overall physical status (which is the major predictor of resource utilisation at the end of life) in the comparisons made, residual variations are largely going to be due to variations between services: models of care, clinical competencies, resourcing or combinations of these factors. This has allowed a process of embedding quality systematically across a whole sector of the health system relatively quickly. Developing a culture of rapid evaluation and re-evaluation after adjusting local models of clinical care delivery is an exciting development within hospice/palliative care.

Data collected in this prospective way are of high quality because their collection is built into routine clinical practice. Tools used clinically on a day-to-day basis to measure and plan patient care can be captured and, from a service’s perspective, be used to follow performance over time with a small number of key measures that are important to patients and their families. The simplicity of the measures is a major strength especially with the ability to complement this work with direct patient and family/caregiver surveys.


This analysis is limited to those services that participated for the entire 42-month period, and although they may not be entirely representative of all services, they represent a range of service models in a range of settings and provide care to a large number of patients. Importantly, the finding that patient-centred outcomes can be improved is in no way diminished by the number of services.

These data can only reflect people who are referred to specialist palliative care services, and this currently represents about 60 % of people who will die from cancer with much lower rates for other life-limiting illnesses [20]. Extending this data collection into primary care to cover the balance of patients is going to be far more challenging. Further development of the data system will enhance the ability to follow individual patients across a range of settings of care.

Staff competency in clinical ratings is an area of ongoing training and calibration. There is also an unquantifiable level of proxies making clinical ratings on behalf of patients, but this has systematically diminished over the course of the data reported here and would likely therefore serve to underestimate the magnitude of improvement reported. Quantifying the discrepancy between patient and proxy ratings has been an important part of this process [14].

Implications for research

These data are a demonstration of what can be measured nationally on a routine basis. However, these data are not sufficient to explain why similarly resourced services have different patient outcomes. Whether differing staffing levels produce differences in clinical outcomes between services may also be examined through the PCOC processes. This is work that needs to be done urgently. Equally, research on why pain is the only symptom not to improve significantly is also urgent.

There is the challenge of whether service-level improvements are translating into clinically meaningful improvements for individual patients. As numbers in the dataset increase, subgroup analyses will also be able to be undertaken by site of care and by diagnostic subgroup.

Implications for clinical practice and quality improvement

The quality of hospice/palliative care can be improved, but this requires performance to be measured routinely by the people for whom it most counts—patients. Without such measurement, it is tempting to rely on the praise and gratitude of families who have experienced the services offered. Given these data, it is difficult to justify any service that does not actively measure its own performance and use these data to drive quality improvement processes.

Implications for health policy

The community-wide benefits of hospice/palliative care services include benefits for patients and their families/caregivers [21]. Ensuring that the services offered systematically improve is likely to amplify the benefits that have already been observed. Given the increasing levels of investment in hospice/palliative care services by health services, it is crucial to expand the evidence base that supports improved health outcomes for people at the end of life and for caregivers while in the role and subsequently. These data also suggest that funders can now consider linking funding levels to patient-centred quality outcomes.


Although the outcomes are encouraging, this program of work also highlights the continued deficits that exist in symptom control. Only by systematic and routine measurement can the magnitude of these deficits be identified and addressed. Each individual service’s performance is addressed by the PCOC staff (QIFs), who work alongside every participating service to identify areas for improvement, define interventions to be implemented and help to monitor the subsequent outcomes.

Ultimately, this study demonstrates that meaningful outcomes can be routinely collected in hospice/palliative care and that, by providing a feedback loop and service-to-service benchmarking, patient-focused improvements can be delivered.


  1. Keay TJ, Fredman L, Taler GA, Datta S, Leverson SA (1994) Indicators of quality medical care for the terminally ill in nursing homes. J Am Geriatr Soc 42(8):853–860


  2. Hales S, Zimmermann C, Rodin G (2008) The quality of dying and death. Arch Intern Med 168(9):912–918


  3. Patrick DL, Curtis JR, Engelberg RA, Nielsen E, McCown E (2003) Measuring and improving the quality of dying and death. Ann Intern Med 139(5 Pt 2):410–415


  4. Currow DC, Abernethy AP, Fazekas BS (2004) Specialist palliative care needs of whole populations: a feasibility study using a novel approach. Palliat Med 18(3):239–247


  5. Waller A, Girgis A, Currow D, Lecathelinais C, Palliative Care Research Program Team (2008) Development of the palliative care needs assessment tool (PC-NAT) for use by multi-disciplinary health professionals. Palliat Med 22(8):956–964


  6. Currow DC, Eagar K, Aoun S, Fildes D, Yates O et al (2008) Is it feasible and desirable to collect voluntarily quality and outcome data nationally in palliative oncology care? J Clin Oncol 26(23):3853–3859. doi:10.1200/JCO.2008.16.5761


  7. Eagar K, Watters P, Currow DC, Aoun SM, Yates P (2010) The Australian Palliative Care Outcomes Collaboration (PCOC)—measuring the quality and outcomes of palliative care on a routine basis. Aust Health Rev 34(2):186–192. doi:10.1071/AH08718


  8. Currow DC, Tieman JJ, Greene A, Zafar SY, Wheeler JL et al (2012) Refining a checklist for reporting patient populations and service characteristics in hospice and palliative care research. J Pain Symptom Manag 43(5):902–910. doi:10.1016/j.jpainsymman.2011.05.015


  9. Eagar K, Gordon R, Green J, Smith M (2004a) An Australian casemix classification for palliative care: lessons and policy implications of a national study. Palliat Med 18(3):227–233


  10. Eagar K, Green J, Gordon R (2004b) An Australian casemix classification for palliative care: technical development and results. Palliat Med 18(3):217–226


  11. Hanson LC, Scheunemann LP, Zimmerman S, Rokoske FS, Schenck AP (2010) The PEACE project review of clinical instruments for hospice and palliative care. J Palliat Med 13(10):1253–1260


  12. Pasman HRW, Brandt HE, Deliens L, Francke AL (2009) Quality indicators for palliative care: a systematic review. J Pain Symptom Manag 38(1):145–156


  13. Aoun SM, Monterosso L, Kristjanson LJ, McConigley R (2011) Measuring symptom distress in palliative care: psychometric properties of the Symptom Assessment Scale (SAS). J Palliat Med 14(3):315–321. doi:10.1089/jpm.2010.0412

  14. To THM, Ong WY, Rawlings D, Greene A, Currow DC (2012) The disparity between patient and nurse symptom rating in a hospice population. J Palliat Med 15(5):542–547


  15. Trauer T (2010) Assessment of change in outcome management. In: Trauer T (ed) Outcome measurement in mental health. Cambridge University Press, Cambridge, pp 206–218


  16. Barbera L, Seow H, Howell D, Sutradhar R, Earle C et al (2010) Symptom burden and performance status in a population-based cohort of ambulatory cancer patients. Cancer 116(24):5767–5776. doi:10.1002/cncr.25681

  17. Kamal AH, Bull J, Stinson C, Blue D, Smith R et al (2011) Collecting data on quality is feasible in community-based palliative care. J Pain Symptom Manag 42(5):663–667. doi:10.1016/j.jpainsymman.2011.07.003


  18. Casarett DJ, Harrold J, Oldanie B, Prince-Paul M, Teno J (2012) Advancing the science of hospice care: coalition of hospices organized to investigate comparative effectiveness. Curr Opin Support Palliat Care 6(4):459–464


  19. Lunney JR, Lynn J, Foley DJ, Lipson S, Guralnik JM (2003) Patterns of functional decline at the end of life. JAMA 289(18):2387–2392


  20. Currow DC, Agar M, Sanderson C, Abernethy A (2008) Populations who die without specialist palliative care: does lower uptake equate with unmet need? Palliat Med 22(1):43–50


  21. Currow DC, Wheeler JL, Abernethy AP (2011) International perspective: outcomes of palliative oncology. Semin Oncol 38(3):343–350




The team acknowledge the dedicated input of clinicians from right around the country and applaud their courage in measuring performance and willingness to respond to data by changing the way that care is delivered in order to provide better outcomes for patients and their families. The team also acknowledge that this dataset represents almost 20,000 people who died an expected death, with all that this represents in human terms for each of those people and their families.


The Australian national Palliative Care Outcomes Collaboration is funded by the Australian Government Department of Health.

Conflict of interest

The authors declare that they have no conflicts of interest.

Author information



Corresponding author

Correspondence to David C. Currow.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.




Cite this article

Currow, D.C., Allingham, S., Yates, P. et al. Improving national hospice/palliative care service symptom outcomes systematically through point-of-care data collection, structured feedback and benchmarking. Support Care Cancer 23, 307–315 (2015).



  • Palliative care
  • Symptom control
  • Performance measurement
  • Clinical benchmarking