World Journal of Urology, Volume 29, Issue 3, pp 283–289

The crossroads of evidence-based medicine and health policy: implications for urology

Open Access
Topic Paper


As healthcare spending in the United States continues to rise at an unsustainable rate, recent policy decisions introduced at the national level will rely on precepts of evidence-based medicine to promote the determination, dissemination, and delivery of “best practices” or quality care while simultaneously reducing cost. We discuss the influence of evidence-based medicine on policy and, in turn, the impact of policy on the developing clinical evidence base with an eye to the potential effects of these relationships on the practice and provision of urologic care.


Keywords: Policy · Evidence-based medicine · Urology · Pay-for-performance · Accountable care organizations · Medical home


On March 23, 2010, the Patient Protection and Affordable Care Act (PPACA) was signed into law, heralding monumental changes in access to the US health care system. Due to unsustainable growth in Federal spending, with the unfunded liability for Medicare alone estimated at $36.4 trillion, various policy experiments were introduced under the legislation in an attempt to “bend the cost curve” [1]. These experiments have several elements in common. They assume there is widespread provision of healthcare services that are not based on evidence, that do not improve human health, and that, if eliminated, would reduce healthcare costs. They assume timely provision of appropriate evidence-based care will not only improve quality but lower overall spending. They anticipate widespread physician adoption of information technology such as electronic health records. They also assume physicians, hospitals, and insurance companies will be able to create new care delivery mechanisms and find ways to distribute global payments equitably (“gainsharing”). While these assumptions will soon be tested, the policy interventions they motivate are based on decades of published literature on the evidence basis of medicine as delivered in the United States.

Surprisingly, the notion of evidence-based clinical practice as an explicit standard is itself a contemporary phenomenon. Coined by Eddy in 1990, the term “evidence-based” was used to highlight the absence of scientific data supporting many then-common medical practices that were simply assumed to be effective [2]. As he later described, a major problem that had developed was that “coverage and medical necessity were defined tautologically; if the majority of physicians were doing it, it was medically necessary and should be covered” [3]. Supporting this observation was Wennberg’s seminal study demonstrating that common surgical procedures such as tonsillectomy or open prostatectomy were performed at widely differing rates in Vermont compared with New Hampshire [3]. As there was no reason to believe the populations of the two states had differing indications for such procedures, Wennberg concluded that this “practice pattern variation” was due to physician assumptions about treatment efficacy that were not based on a common standard of evidence. Thus began a new era in the history of medicine in which clinical practice became the subject of critical inquiry, with subsequent reports corroborating widespread geographic variation in care and outlying, aberrant practices that contradicted contemporary society guideline recommendations [4, 5, 6, 7, 8].

Here we review some of the health policy initiatives born of the push for evidence-based medicine (EBM) that may affect the practice and delivery of urologic care.


Over the past decades, the advancement of methods such as meta-analysis and cost-effectiveness analysis to critically examine treatments and practices helped reveal that many medical “standards of care” were based not on evidence of efficacy, but on consensus or regional opinion [3]. Both private and public sector payers viewed mounting healthcare costs with growing alarm and struggled with processes to determine what services should be covered. The health policy response to these influences has been to introduce scientific rigor and standardized processes to the way in which organized medicine defines good clinical practice.

In the late 1980s and early 1990s, multiple professional medical organizations, including the American Urological Association (AUA), pioneered guideline development processes and methodologies that are still in use today [3, 9, 10]. The AUA Practice Guidelines Committee approaches urologic issues with the imperative of evaluating existing recommendations every 2–3 years using cost-effective methods that fulfill Institute of Medicine criteria for producing evidence-based guidelines [11]. Focusing on topics that are “prevalent, costly, and are characterized by significant practice variation,” the Committee’s procedure relies on systematic reviews as the foundation for its recommendations, with the summarization and synthesis of data through qualitative or quantitative means, the assessment of level of recommendation (LoR) and level of evidence (LoE), and the explicit linkage of LoR to LoE [11].

Evidence of adherence to guideline recommendations in many areas of medicine is disheartening. Insofar as “best practices” are reflected in quality—defined by the IOM as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” [12]—one study examining guideline-based processes of care (e.g., prophylactic antibiosis on day of surgery) suggests that Americans across the nation are subject to poor quality: only 55% of recommended care was received [8]. Within urology, studies are mixed regarding quality of care, and data are limited on whether guidelines increase evidence-based practice. A longitudinal Gallup survey of practicing urologists suggested that use of computed tomography and bone scans in patients with prostate cancer decreased significantly after publication of the 1996 AUA Prostate Cancer Guidelines Panel report, suggesting practices could change with the introduction of society recommendations [13]. However, in a later examination of a large Medicare cohort, imaging studies performed for patients with clinically localized prostate cancer varied considerably not only by geographic region but also by primary treatment (radiotherapy versus prostatectomy), intimating that guideline recommendations were not broadly accepted [14].

Such contradiction has been observed in studies of bladder cancer treatment as well. In one investigation, most surveyed urologists’ practice patterns were in keeping with contemporary AUA guidelines, but practice did not follow certain recommendations, such as cystectomy following two-time failure of intravesical chemotherapy for high-grade Ta–T1 disease [15]. Analyses of Surveillance, Epidemiology, and End Results (SEER)–Medicare data also suggest extreme variation in the performance of recommended therapeutic modalities such as mitomycin C after transurethral resection of bladder tumor, suggesting that guidelines have limited impact [16, 17]. The influence of patient preference on fulfillment of recommended therapy has not been well studied, however, and may certainly have an impact. It is also true that many diseases are too rare, or their care too difficult to measure, for evidence-based guidelines to be created.

While its potential to promote greater consistency in care through streamlined dissemination of the “best” medical knowledge has yet to be entirely fulfilled, the modern guideline has at least become a pivotal component not only of the impetus for EBM but also of the appraisal of its performance.

Pay-for-performance programs

If a best clinical practice can be defined, that practice in theory can and should be measured—a possibility more recently encountered with the development of quality measures and “pay-for-performance” (P4P) reimbursement schemas. A logical extension of EBM to policy, P4P programs endorse specific quality indicators and directly link performance on them to physician and hospital reimbursement. Well-designed P4P programs promise to be potent drivers of change in the way healthcare is delivered, and indeed they are proliferating: multiple healthcare settings around the world have reported their use, as have over half of all health maintenance organizations in the United States [18, 19]. Results of P4P programs on quality in primary care and other non-urologic settings are conflicting. Though the durability and clinical significance of P4P quality improvements have been questioned [20, 21], a recent systematic review found that a majority of studies did show positive effects on quality [22]; the studies showing minimal or no effect, in essence, emphasize the importance of proper program design.

To wit, the creation of successful P4P plans appears to require careful balance of several key factors. These include selection of appropriate, transparent, and high-impact measures and correct design of incentives. Measurement and payment can be structured at the level of the individual, the group, or a combination, but the rewarded or penalized party must have the ability to influence the metric being evaluated [19]. For example, holding the urologist accountable for administration of pre-operative antibiotics within 30 min of incision assumes s/he has direct control over nursing and anesthesia practices, which is often not the case. The reward amount must also be meaningful to the rewarded party for any incentive to be effective [19]. For instance, when a practice carries multiple insurers with only a few patients in each, incentives from any one plan may not rise to a level of value to the provider. In a study of P4P programs in California, physician organizations noted that bonuses received in one year amounted to 2% or less of total capitation [23]. Medicare’s P4P program, the Physician Quality Reporting Initiative (PQRI), has had challenges reporting information to participating physicians regarding performance and the reasons for varying levels of reimbursement after participation, which may blunt its ability to incentivize physicians.

Payment can reward absolute goals or a certain rate of improvement from baseline. Such designs can result in programs that improve poorly performing outliers or favor providers in organizations with established continuous quality improvement infrastructure. It is argued that P4P programs can also be designed to improve not only quality of care, but access to care among underserved populations [19].

In urology, the AUA partnered with the American Medical Association Physician Consortium for Performance Improvement® to create prostate cancer quality of care measures for the PQRI program using a defined, multi-stakeholder process. These measures mainly examine processes of care [24, 25]. Though reporting on care outcomes would intuitively seem most relevant to surgeons, such measures require risk adjustment, which is not feasible under the current, claims-based PQRI regime. Measurement depends on voluntary physician reporting with newly designed CPT II codes that record, for instance, documentation of pre-treatment prostate-specific antigen level, Gleason score, and tumor stage. At this point, whether reporting of these indicators will lead to improvement in the processes of care they purport to measure is unknown, as is any possible impact on patient outcomes. P4P measures specific to urology have also been developed and used in some private insurance networks, often by the insurers themselves in an opaque process, and several reports have suggested other urologic disease processes where P4P programs could be implemented, including benign prostatic hypertrophy (BPH) and bladder cancer [26, 27].

Comparative effectiveness research

Healthcare expenditures have risen continuously to reach 17.3% of Gross Domestic Product (GDP) in 2010 and are expected to reach 19.3% by 2019 [28]. In response, some policy advisors have called for explicit consideration of cost with evaluation of appropriateness and coverage of services [29], while others have vociferously objected to any evaluation of cost whatsoever. In 1989, for example, Medicare administrators proposed employing cost-effectiveness analysis—or the comparison of “the relative value of different interventions in creating better health and/or longer life” [30]—as a basis for tying reimbursement decisions to data or evidence. This was quashed in no small part due to concerns about “rationing” and deep American distrust for organizational, versus individual, decision-making [31].

Two recessions later, as healthcare costs continue to consume an ever-larger portion of the nation’s economic pie, policymakers have discovered the political will to embrace a close cousin of cost-effectiveness research, comparative effectiveness research (CER). CER proposes to examine how treatments perform against each other in achieving clinical objectives, without explicit assessment of costs. At the Federal level, CER has been posited as a major way forward: building on the Medicare Modernization Act of 2003, in which approximately $15 million was allocated to CER [31], the American Recovery and Reinvestment Act of 2009 dedicated another $1.1 billion (by comparison, the entire NIH budget in 2010 was $31 billion [32]) to “research that compares the clinical outcomes, effectiveness, and appropriateness of items, services, and procedures that are used to prevent, diagnose, or treat diseases, disorders, and other health conditions” [33]. Additionally, a regular funding mechanism for CER was introduced in the PPACA with the creation of the Patient-Centered Outcomes Research Institute (PCORI), with an estimated annual budget of $500 million coming from a new tax on Medicare and private payers [34]. In light of this Federal enthusiasm, CER is poised to affect clinical practice, for although it does not include cost-cutting as an overt objective, its proponents suggest that cost containment would be a secondary benefit through the elimination of ineffective or inappropriate care [35].

Despite persistent opponent concern for “rationing,” supporters of CER cite what Gold et al. have articulated as the “two realities” that “provide compelling context to health policy decisions”: “the availability of health-related interventions now in the marketplace exceeds by a considerable margin our societal ability to afford them, and current decision rules are inadequate to guide choices toward those interventions that are likely to yield the most benefit for the population” [30]. To this end, the IOM has defined “key elements” of CER—“the direct comparison of effective interventions, the study of patients in typical day-to-day clinical care, and the aim of tailoring decisions to the needs of individual patients”—and identified 100 top priorities for it [36]. Roughly half of these CER priorities pertain to health care delivery, a third to disparities, and a fifth to functional limitation/disability, with cardiovascular/peripheral vascular disease, psychiatric disorders, and cancer as the next most frequent emphases [37]. Fourteen topics pertain to urology, four of which appear in the highest quartile of importance, including the goals of “compar[ing] the effectiveness of management strategies for localized prostate cancer,” the effectiveness of imaging and biomarkers in patients with cancer, and comparison of methods to reduce health disparities [36]. According to the IOM, CER primarily includes systematic reviews, observational studies, and randomized controlled trials [38]. Many studies in urology have been performed that qualify as CER, with comparisons ranging from robotically assisted versus open methods of radical prostatectomy to surgeon hand-scrubbing versus application of sterilizing gel among pediatric urologists [39, 40].

Much of CER, to date, has been conducted outside of the surgical literature, most importantly in the application of evidence-based medicine to the evaluation of how health care delivery, or the structure of medical practice, affects outcome. The promise of CER is that it may bolster the evidence behind medical practices that improve health and reduce cost at the expense of inefficiency (rather than access to care). In addition to CER, policymakers and stakeholders have promoted two innovations in care coordination as methods by which effective medical practices could be most efficiently delivered: accountable care organizations and the medical home.

Accountable care organizations

Accountable care organizations (ACOs) have gained significant interest among healthcare policy leaders, driven in large part by the success of many prominent healthcare systems that utilize ACO principles: the Mayo Clinic, Cleveland Clinic, the Permanente Medical Group, as well as smaller operations like Bassett Healthcare in New York and the Billings Clinic in Montana, among others [41]. While the Federal definition of an ACO is still a work-in-progress, the central tenet is shared responsibility between coordinated groups of providers to deliver high-quality care for medical conditions at predetermined cost targets.

The PPACA defines ACOs as networks of physicians, large physician groups, hospital-physician partnerships, hospitals employing physicians, and any other arrangement that the Secretary of Health and Human Services (HHS) “determines appropriate” [42]. ACOs must enter three-year contracts, have certain legal and administrative attributes, have at least 5,000 patients, and “define processes to promote evidence-based medicine and patient engagement, [and] report on quality and cost measures” [42]. Though payment structure is through Medicare Parts A and B, ACOs share some percentage of cost-savings derived from meeting quality and cost-containment criteria. The cost-containment assessment is based on average per capita spending adjusted for patient characteristics being less than a certain percentage below an “applicable benchmark” specified by the Secretary of HHS [42]. ACOs should thus be motivated to deliver healthcare “value,” defined by high-quality/low cost, as they participate in gainsharing and their activities are profiled using quality metrics.
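As a rough numerical illustration of the gainsharing logic described above, the sketch below computes a hypothetical ACO's shared-savings payout. The benchmark, spending figures, minimum-savings threshold, and 50% sharing rate are all invented for illustration; they are not the actual parameters specified by the Secretary of HHS.

```python
# Hypothetical sketch of ACO shared savings (all figures invented).
# The ACO earns a share of savings only if risk-adjusted per capita
# spending falls below the benchmark by at least a minimum percentage.

def shared_savings(benchmark, actual, threshold_pct=0.02, sharing_rate=0.5):
    """Return the ACO's per capita payout; 0 if the savings threshold is unmet."""
    savings = benchmark - actual
    if savings < threshold_pct * benchmark:
        return 0.0
    return sharing_rate * savings

# Benchmark $10,000 per beneficiary; the ACO spends $9,500 (5% savings),
# clearing the 2% threshold, so it keeps half of the $500 saved:
print(shared_savings(10000, 9500))   # 250.0
# Savings of only 1% fall below the 2% threshold, so there is no payout:
print(shared_savings(10000, 9900))   # 0.0
```

The threshold mirrors the statute's requirement that spending be "a certain percentage below an applicable benchmark" before gainsharing begins; how the payout is then divided among ACO participants is left to the organization itself.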

ACO proponents hope that the promise of extra reimbursement will overcome historical hurdles that physicians and hospitals have faced in working toward joint objectives. For example, a group of urologists might form an ACO with other specialists, a core of primary care physicians (PCP), and a hospital to deliver coordinated care across a spectrum of diseases such as benign prostatic hypertrophy (BPH) or urinary calculi. The group would then be held accountable for the cost and quality of all aspects of care, from diagnosis to early medical management, surgical management, and prevention of recurrence. The ACO would need to define how to distribute the capitated reimbursement or gainsharing amount between ACO participants and how to organize to provide appropriate rates of surgical and medical care without providing inappropriate or wasteful care that would inflate its cost structure [43]. This hypothetical arrangement is but one of many possible ACO arrangements; the intent of the law is to inspire a wide variety of delivery models [41].

The Medicare Payment Advisory Commission (MedPAC) has recommended promoting care delivery in this model for several years [44, 45, 46]. In 2005, CMS introduced the Physician Group Practice Demonstration to test the implementation of ACOs. A Medicare report at year three indicated success in improving quality and reducing costs [43]. However, one report from a randomized trial of enhanced care coordination versus standard care across 15 centers found no difference in patient outcomes in 13 of the 15 [47].

Critical regulatory details of ACOs are still unclear. Foremost among these is patient attribution—that is, defining exactly which patients are considered part of an ACO and which of their care episodes count against the ACO contract or toward potential savings. This can be challenging to define, especially when patients may not receive all of their care from an ACO (as, by law, they are not required to do).

Although the fundamental idea of holding large provider groups accountable for efficiency and quality has existed for some time, and data do exist as to the idea’s effectiveness [48], questions have been raised about the ability of ACOs to reduce costs when viewed from the perspective of the healthcare system as a whole. One examination of the private-payer healthcare market in California concluded that while ACOs might decrease costs for Medicare, they have the potential to increase costs for private insurers [49]. The authors contend that the increasing number of “ACO-like” healthcare groups and organizations, including consolidated hospital chains and large multi-specialty groups, led directly to double-digit increases in hospital charges from 1999 to 2005 through improved bargaining power with private payers. Encouraging the formation of more ACOs may foster further provider consolidation in local healthcare markets, increasing their bargaining power with payers.

Again, ACOs remain experimental, as does the role for specialist care within them, though a prominent model is the salaried physician in a large multispecialty clinic or practice. The trend toward large group practices in urology may position them well to coordinate with other physician groups or hospitals. Areas where urologists already participate in coordinated multidisciplinary care are oncology and renal transplant [50, 51]. These groups and the lessons learned from their experiences may be germinal to broader organizational connectivity and accountability.

Medical home

Advanced in 1967 by the American Academy of Pediatrics, the concept of the “medical home” was developed in response to the consequences of multiple providers relying on fragmented health records to provide care for children with special health needs: “for children with chronic diseases or disabling conditions, the lack of a complete record and a ‘medical home’ is a major deterrent to adequate health supervision” [52]. Over 30 years later, the same could still be said to hold true not only for children but also for adults. Given the related problems of increased cost, poor-quality care (including duplication of or gaps in services), and poor health outcomes, this decades-old proposal has been resurrected as one of the “patient-centered” delivery innovations called for by the IOM [12]. This concept promotes four major characteristics in primary care: “accessibility for first-contact care for each new problem or health need, long-term person-focused care (“longitudinally”), comprehensiveness of care in the sense that care is provided for all health needs except those that are too uncommon for the primary care practitioner to maintain competence in dealing with them, and coordination of care in instances in which patients do have to go elsewhere” [53].

Ample data have demonstrated the benefits of accessible, continuous, comprehensive, and coordinated care (with some arguing that family-oriented, community-oriented, and culturally competent care follows suit as a result) [53]. While international comparisons have shown the correlation of strong primary care systems with decreased rates of low birth weight, infant mortality, and child mortality, improved health outcomes have been seen domestically as well when a usual source of care can be identified. Earlier diagnosis of problems, better performance of preventive processes of care, fewer hospitalizations, fewer emergency room visits, and decreased expenditures have been associated with features of the medical home [53]. Additionally, decreases in health care disparities by race/ethnicity [54] and overall mortality [55] have been associated with this care delivery model. Patient-centered systems have also been shown to result in increased levels of patient satisfaction [56, 57].

Recent incarnations of the patient-centered medical home (PCMH) emphasize a more systems-savvy approach in which organizational access increases provider-patient communication, information technology improves medical record-keeping and safety, and team-based models deliver effective, efficient, evidence-based care [58]. As such, reliance shifts from the primary care provider (PCP) in isolation to the PCP and other health professionals (such as nurses, social workers, dieticians, physical therapists, and pharmacists) in the context of a team that “forms and reforms according to patient needs” and takes “collective responsibility” for patient care, thereby enhancing quality while potentially decreasing resource utilization [54]. Proponents of the medical home place the PCP at its core with the belief that those physicians are best equipped to deliver value to the healthcare system. A seminal investigation from the Medical Outcomes Study examined records of over 20,000 patients and found that specialists providing care outside their field of expertise tended to generate more tests and referrals than generalists [59].

With current policy trends, including the PPACA, the PCMH will likely be favored as a model of care delivery, particularly as health care access grows, affecting specialist practice in several ways [49]. Because the PCMH could also be delivered via a specialty practice (e.g., endocrinology for a complicated diabetic or oncology for a patient with cancer), not only will specialists need to know how to engage with the patient in the context of the primary care-based PCMH but, if interested, they will also need to learn how to function as the “hub” of care [60]. The structural and systems-level implications of being the PCMH may, thus, differ depending on the current state of the specialist practice. Though many urologists participate in multidisciplinary care, the patient-centered model may require more intensive access, coordination, comprehensive care, and communication infrastructure, particularly for common non-oncologic processes such as stone disease and BPH. The timely tracking and forwarding of patient information to relevant providers may prove one of the most challenging of these requirements within the PCMH model, in addition to tracking data for quality assurance and monitoring.

Certainly, the renewed emphasis on earlier MedPAC recommendations for accountable provider groups increases the motivation for improving communication and coordination between collectively responsible providers [44, 45, 46]. Though the operational details of collective accountability are, by policy, left to be determined by the providers (physicians, hospitals, insurers), the mandate from the PPACA is clear, and the pooled accountability will no doubt impact specialist reimbursement and practice. The challenge for independent practitioners may be immense. Additionally, some have expressed concern that the PCMH, as first-contact provider, may significantly reduce referrals to specialists; however, the PCMH as proposed is not designed to function as a “gatekeeper” that limits referrals but, rather, is an entity that promotes appropriate referrals. As such, reimbursement or payment under the PCMH is expected to come from cost-savings incurred from better patient-centered care, including not only decreased inappropriate referrals to specialists, but increased preventive services, decreased emergency visits, and decreased hospitalization rates [60].


The US healthcare system’s lack of central control creates, in essence, a large laboratory for ways to deliver high-quality, evidence-based care. With the PPACA, a large stimulus has been provided to experiment and drive innovation in healthcare delivery. Despite this promise, the US system fails far too many people in terms of access to and quality of care. The opportunity for urologists—and their responsibility as the ultimate advocates for their patients—is to take a leading role in shaping the future of healthcare delivery so that it is better for the patient and hospitable for the physician.


Conflicts of interest


Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.


  1. 1.
    Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (2009) Washington, DCGoogle Scholar
  2. 2.
    Eddy DM (1990) Practice policies: where do they come from? JAMA 263(9):1265, 1269, 1272 passimGoogle Scholar
  3. 3.
    Eddy DM (2005) Evidence-based medicine: a unified approach. Health Aff (Millwood) 24(1):9–17. doi:10.1377/hlthaff.24.1.9 CrossRefGoogle Scholar
  4. 4.
    Chassin MR (1998) Appropriate use of carotid endarterectomy. N Engl J Med 339(20):1468–1471. doi:10.1056/NEJM199811123392010 PubMedCrossRefGoogle Scholar
  5. 5.
    Bernstein SJ, Hilborne LH, Leape LL, Fiske ME, Park RE, Kamberg CJ, Brook RH (1993) The appropriateness of use of coronary angiography in New York State. JAMA 269(6):766–769PubMedCrossRefGoogle Scholar
  6. 6.
    Winslow CM, Kosecoff JB, Chassin M, Kanouse DE, Brook RH (1988) The appropriateness of performing coronary artery bypass surgery. JAMA 260(4):505–509PubMedCrossRefGoogle Scholar
  7. 7.
    Winslow CM, Solomon DH, Chassin MR, Kosecoff J, Merrick NJ, Brook RH (1988) The appropriateness of carotid endarterectomy. N Engl J Med 318(12):721–727. doi:10.1056/NEJM198803243181201 PubMedCrossRefGoogle Scholar
  8. 8.
    McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA (2003) The quality of health care delivered to adults in the United States. N Engl J Med 348(26):2635–2645. doi:10.1056/NEJMsa022615 PubMedCrossRefGoogle Scholar
  9. 9.
    Eddy DM, Hasselblad V, Shachter R (1990) A Bayesian method for synthesizing evidence. The confidence profile method. Int J Technol Assess Health Care 6(1):31–55PubMedCrossRefGoogle Scholar
  10. American Urological Association (2007) Bladder cancer. Guideline for the management of nonmuscle invasive bladder cancer: (stages Ta, T1, and Tis). 2007 update (renewed and validity confirmed 2010). AUA clinical guidelines. American Urological Association
  11. Faraday M, Hubbard H, Kosiak B, Dmochowski R (2009) Staying at the cutting edge: a review and analysis of evidence reporting and grading; the recommendations of the American Urological Association. BJU Int 104(3):294–297. doi:10.1111/j.1464-410X.2009.08729.x
  12. The Institute of Medicine (2001) Crossing the quality chasm: a new health system for the 21st century
  13. Gee WF, Holtgrewe HL, Blute ML, Miles BJ, Naslund MJ, Nellans RE, O’Leary MP, Thomas R, Painter MR, Meyer JJ, Rohner TJ, Cooper TP, Blizzard R, Fenninger RB, Emmons L (1998) 1997 American Urological Association Gallup survey: changes in diagnosis, management of prostate cancer, benign prostatic hyperplasia, other practice trends from 1994 to 1997. J Urol 160(5):1804–1807
  14. Saigal CS, Pashos CL, Henning JM, Litwin MS (2002) Variations in use of imaging in a national sample of men with early-stage prostate cancer. Urology 59(3):400–404
  15. Joudi FN, Smith BJ, O’Donnell MA, Konety BR (2003) Contemporary management of superficial bladder cancer in the United States: a pattern of care analysis. Urology 62(6):1083–1088
  16. Hollenbeck BK, Ye Z, Dunn RL, Montie JE, Birkmeyer JD (2009) Provider treatment intensity and outcomes for patients with early-stage bladder cancer. J Natl Cancer Inst 101(8):571–580. doi:10.1093/jnci/djp039
  17. Strope SA, Ye Z, Hollingsworth JM, Hollenbeck BK (2010) Patterns of care for early stage bladder cancer. Cancer 116(11):2604–2611. doi:10.1002/cncr.25007
  18. Rosenthal MB, Landon BE, Normand SL, Frank RG, Epstein AM (2006) Pay for performance in commercial HMOs. N Engl J Med 355(18):1895–1902. doi:10.1056/NEJMsa063682
  19. Rosenthal MB, Dudley RA (2007) Pay-for-performance: will the latest payment trend improve care? JAMA 297(7):740–744. doi:10.1001/jama.297.7.740
  20. Rosenthal MB, Frank RG, Li Z, Epstein AM (2005) Early experience with pay-for-performance: from concept to practice. JAMA 294(14):1788–1793. doi:10.1001/jama.294.14.1788
  21. Chen JY, Kang N, Juarez DT, Hodges KA, Chung RS, Legorreta AP (2010) Impact of a pay-for-performance program on low performing physicians. J Healthc Qual 32(1):13–21
  22. Van Herck P, De Smedt D, Annemans L, Remmen R, Rosenthal MB, Sermeus W (2010) Systematic review: effects, design choices, and context of pay-for-performance in health care. BMC Health Serv Res 10:247. doi:10.1186/1472-6963-10-247
  23. Damberg CL, Raube K, Teleki SS, Dela Cruz E (2009) Taking stock of pay-for-performance: a candid assessment from the front lines. Health Aff (Millwood) 28(2):517–525. doi:10.1377/hlthaff.28.2.517
  24. American Medical Association—Physician Consortium for Performance Improvement: Prostate Cancer Work Group (2007) Prostate cancer physician performance measure set. Accessed 10 Oct 2010
  25. Miller DC, Saigal CS (2009) Quality of care indicators for prostate cancer: progress toward consensus. Urol Oncol 27(4):427–434. doi:10.1016/j.urolonc.2009.01.011
  26. Stovsky M, Jaeger I (2008) BPH procedural treatment: the case for value-based pay for performance. Adv Urol 2008:954721. doi:10.1155/2008/954721
  27. Rhoads KF, Konety BM, Dudley RA (2009) Performance measurement, public reporting, and pay-for-performance. Urol Clin North Am 36(1):37–48, vi. doi:10.1016/j.ucl.2008.08.003
  28. Truffer CJ, Keehan S, Smith S, Cylus J, Sisko A, Poisal JA, Lizonitz J, Clemens MK (2010) Health spending projections through 2019: the recession’s impact continues. Health Aff (Millwood) 29(3):522–529. doi:10.1377/hlthaff.2009.1074
  29. Persad G, Wertheimer A, Emanuel EJ (2009) Principles for allocation of scarce medical interventions. Lancet 373(9661):423–431. doi:10.1016/S0140-6736(09)60137-9
  30. Gold MR (1996) Cost-effectiveness in health and medicine. Oxford University Press, New York
  31. Neumann PJ, Rosen AB, Weinstein MC (2005) Medicare and cost-effectiveness analysis. N Engl J Med 353(14):1516–1522. doi:10.1056/NEJMsb050564
  32. Department of Health and Human Services (2010) National Institutes of Health fiscal year 2010 budget request
  33. American Recovery and Reinvestment Act of 2009, Title XVIII (2009)
  34. Sox HC (2010) Comparative effectiveness research: a progress report. Ann Intern Med 153(7):469–472. doi:10.7326/0003-4819-153-7-201010050-00269
  35. Weinstein MC, Skinner JA (2010) Comparative effectiveness and health care spending-implications for reform. N Engl J Med 362(5):460–465. doi:10.1056/NEJMsb0911104
  36. The Institute of Medicine (2009) 100 initial priority topics for comparative effectiveness research. Institute of Medicine. Accessed 17 Oct 2010
  37. Iglehart JK (2009) Prioritizing comparative-effectiveness research—IOM recommendations. N Engl J Med 361(4):325–328. doi:10.1056/NEJMp0904133
  38. Sox HC, Greenfield S (2009) Comparative effectiveness research: a report from the Institute of Medicine. Ann Intern Med 151(3):203–205
  39. Hu JC, Gu X, Lipsitz SR, Barry MJ, D’Amico AV, Weinberg AC, Keating NL (2009) Comparative effectiveness of minimally invasive vs open radical prostatectomy. JAMA 302(14):1557–1564. doi:10.1001/jama.2009.1451
  40. Weight CJ, Lee MC, Palmer JS (2010) Avagard hand antisepsis vs. traditional scrub in 3600 pediatric urologic procedures. Urology 76(1):15–17. doi:10.1016/j.urology.2010.01.017
  41. Minott DHJ, Luft H, Gutterman S, Weil H (2010) The group employed model as a foundation for health care delivery reform, vol 83. The Commonwealth Fund
  42. Office of the Legislative Counsel (2010) Compilation of Patient Protection and Affordable Care Act
  43. McClellan M, McKethan AN, Lewis JL, Roski J, Fisher ES (2010) A national strategy to put accountable care into practice. Health Aff (Millwood) 29(5):982–990. doi:10.1377/hlthaff.2010.0194
  44. Medicare Payment Advisory Commission (2008) Report to the Congress: Medicare payment policy. Washington, DC
  45. Medicare Payment Advisory Commission (2008) Report to the Congress: reforming the delivery system. Washington, DC
  46. Medicare Payment Advisory Commission (2007) Report to the Congress: promoting greater efficiency in Medicare. Washington, DC
  47. Peikes D, Chen A, Schore J, Brown R (2009) Effects of care coordination on hospitalization, quality of care, and health care expenditures among Medicare beneficiaries: 15 randomized trials. JAMA 301(6):603–618. doi:10.1001/jama.2009.126
  48. Luft H (2010) Becoming accountable—opportunities and obstacles for ACOs. N Engl J Med 363(15):1389–1391
  49. Berenson RA, Ginsburg PB, Kemper N (2010) Unchecked provider clout in California foreshadows challenges to health reform. Health Aff (Millwood) 29(4):699–705. doi:10.1377/hlthaff.2009.0715
  50. Axelrod DA, Millman D, Abecassis MM (2010) US health care reform and transplantation, part II: impact on the public sector and novel health care delivery systems. Am J Transplant 10(10):2203–2207. doi:10.1111/j.1600-6143.2010.03247.x
  51. Nuttall M, Wilby D, Chappell B, O’Brien T (2009) What would truly patient-centred urological care look like? BJU Int 104(3):287–288. doi:10.1111/j.1464-410X.2009.08477.x
  52. American Academy of Pediatrics Council on Pediatric Practice (1967) Standards of child health care
  53. Starfield B, Shi L (2004) The medical home, access to care, and insurance: a review of evidence. Pediatrics 113(5 Suppl):1493–1498
  54. Rosenthal TC (2008) The medical home: growing evidence to support a new approach to primary care. J Am Board Fam Med 21(5):427–440. doi:10.3122/jabfm.2008.05.070287
  55. Shi L, Macinko J, Starfield B, Wulu J, Regan J, Politzer R (2003) The relationship between primary care, income inequality, and mortality in US states, 1980–1995. J Am Board Fam Pract 16(5):412–422
  56. Stewart M, Brown JB, Donner A, McWhinney IR, Oates J, Weston WW, Jordan J (2000) The impact of patient-centered care on outcomes. J Fam Pract 49(9):796–804
  57. Homer CJ, Klatka K, Romm D, Kuhlthau K, Bloom S, Newacheck P, Van Cleave J, Perrin JM (2008) A review of the evidence for the medical home for children with special health care needs. Pediatrics 122(4):e922–e937. doi:10.1542/peds.2007-3762
  58. Fisher ES (2008) Building a medical neighborhood for the medical home. N Engl J Med 359(12):1202–1205. doi:10.1056/NEJMp0806233
  59. Greenfield S, Nelson EC, Zubkoff M, Manning W, Rogers W, Kravitz RL, Keller A, Tarlov AR, Ware JE Jr (1992) Variations in resource utilization among medical specialties and systems of care. Results from the Medical Outcomes Study. JAMA 267(12):1624–1630
  60. American College of Physicians (2009) Understanding the patient-centered medical home: relationship of the patient-centered medical home to specialty and subspecialty practices. Accessed 15 Oct 2010

Copyright information

© The Author(s) 2011

Authors and Affiliations

  1. Department of Urology, University of California, Los Angeles, USA