Abstract
Methods of assessing and monitoring the performance of clinicians have attracted considerable attention in recent years. We review the main methodologies, concentrating on the distinction between monitoring individual performance and monitoring aggregated performance. We also highlight the importance, and the difficulties, of incorporating risk factors into the process and assessing their adequacy. We discuss how software architecture can be developed to implement these methodologies, and illustrate this development with a case study: the creation of a software tool that produces funnel plots for analysing surgeon performance. We discuss how such tools are currently evaluated and propose that future assessments of usability would benefit from an experimental study.
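The funnel plots mentioned above compare each surgeon's observed outcome rate against control limits that narrow as case volume grows. A minimal sketch of the idea, assuming a normal approximation to the binomial around a target rate `p0` (function names here are illustrative, not those of the tool described in the article):

```python
import math

def funnel_limits(p0, n, z=1.96):
    """Approximate binomial control limits around target rate p0
    for a clinician with n cases (normal approximation)."""
    se = math.sqrt(p0 * (1 - p0) / n)
    lower = max(0.0, p0 - z * se)
    upper = min(1.0, p0 + z * se)
    return lower, upper

def outside_limits(events, cases, p0, z=1.96):
    """Flag an observed event rate falling outside the funnel."""
    lower, upper = funnel_limits(p0, cases, z)
    rate = events / cases
    return rate < lower or rate > upper
```

Plotting `funnel_limits` across a range of volumes, together with each surgeon's (volume, rate) point, yields the characteristic funnel shape; risk adjustment replaces the fixed `p0` with a case-mix-specific expected rate.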
Rees, M., Dineschandra, J. Monitoring Clinical Performance: The Role of Software Architecture. Health Care Manage Sci 8, 197–203 (2005). https://doi.org/10.1007/s10729-005-2010-1