Journal of General Internal Medicine, Volume 25, Issue 8, pp 774–779

A July Spike in Fatal Medication Errors: A Possible Effect of New Medical Residents

Open Access
Original Research

Abstract

BACKGROUND

Each July thousands begin medical residencies and acquire increased responsibility for patient care. Many have suggested that these new medical residents may produce errors and worsen patient outcomes—the so-called “July Effect”; however, we have found no U.S. evidence documenting this effect.

OBJECTIVE

Determine whether fatal medication errors spike in July.

DESIGN

We examined all U.S. death certificates, 1979–2006 (n = 62,338,584), focusing on medication errors (n = 244,388). We compared the observed number of deaths in July with the number expected, determined by least-squares regression techniques. We compared the July Effect inside versus outside medical institutions. We also compared the July Effect in counties with versus without teaching hospitals.

OUTCOME MEASURE

JR = Observed number of July deaths / Expected number of July deaths.

RESULTS

Inside medical institutions, in counties containing teaching hospitals, fatal medication errors spiked by 10% in July and in no other month [JR = 1.10 (1.06–1.14)]. In contrast, there was no July spike in counties without teaching hospitals. The greater the concentration of teaching hospitals in a region, the greater the July spike (r = .80; P = .005). These findings held only for medication errors, not for other causes of death.

CONCLUSIONS

We found a significant July spike in fatal medication errors inside medical institutions. After assessing competing explanations, we concluded that the July mortality spike results at least partly from changes associated with the arrival of new medical residents.

KEY WORDS

medication error; mortality; July Effect; teaching hospitals; medical residents

INTRODUCTION

Inexperienced medical staff are often considered a possible source of medical errors.1–6 One way to examine the relationship between inexperience and medical error is to study changes in the number of medical errors in July, when thousands begin medical residencies and fellowships.1,7–11 This approach allows one to test the hypothesis that inexperienced residents are associated with increased medical errors1,8,9,11–15—the so-called “July Effect.”

Previous attempts to detect the July Effect have mostly failed,1,8–17 perhaps because these studies examined small,8,10–13,15–17 non-geographically representative samples8–17 spanning a limited period.11–16 One exception: a study of anaesthesia trainees at a single Australian hospital over a 5-year period did demonstrate an increase in the rate of undesirable events in February—the first month of their academic year.1 In contrast, our study examines a large, nationwide mortality dataset spanning 28 years. Unlike many other studies,18 we focus on fatal medication errors—an indicator of important medical mistakes. We use these errors to test the “New Resident Hypothesis”: the arrival of new medical residents in July is associated with increased fatal medication errors.

METHODS

Primary Dataset

We examined all official U.S. computerized death certificates (n = 62,338,584).19 Our dataset begins with 1979, when hospital status (e.g., inpatient) was first recorded, and ends with 2006, the latest data year available.

We assumed that, inside medical settings, fatal medication errors are more likely to be influenced by inexperienced residents than by patients. In contrast, outside medical settings, we assumed that inexperienced residents play a relatively smaller role, while the patient plays a correspondingly larger role. For example, Phillips, Barker, and Eguchi 20 showed that a significantly larger fraction of medication errors outside medical institutions involved alcohol—indicating the reduced importance of medical residents and the increased importance of the patient.

Therefore, we focused on persons dying inside medical settings: inpatients, outpatients, and those dying in the emergency department (ED). Outpatients are included in our analysis because, in our dataset, “outpatient” officially refers to persons receiving medical care inside medical institutions without being admitted to the hospital.21 Because outpatient and ED settings are not distinguished in our dataset, we cannot analyze these settings separately. We compare persons dying inside medical institutions (inpatients, outpatients/ED) with persons dying before reaching medical institutions (those dead on arrival, “DOA”).

Geographic detail and exact date of death are unavailable after 2004; consequently, all analyses requiring this information omit later data. In some analyses, we examined both primary and secondary causes of death. For these analyses, our study period begins with 1983, when secondary causes of death were first coded on computerized certificates.

Definitions

We define fatal medication errors as deaths in which medication errors are recorded as the primary cause of death. All other causes of death analyzed are also defined according to the primary cause. Officially acknowledged medication errors (n = 244,388) are coded E850-E858 in the International Classification of Diseases, 9th Revision (ICD9) 22 and X40-X44 in the 10th revision (ICD10).23 Medication error involves “accidental overdose of drug, wrong drug given or taken in error, and drug taken inadvertently [and] accidents in the use of drugs and biologicals in medical and surgical procedures.”22,23 This category is equivalent to “fatal preventable adverse drug event” used elsewhere.24

The ICD category “medication errors” is distinct from the ICD category “adverse effects,” which we also examined. Adverse effects signify cases where “correct drug [was] properly administered in therapeutic or prophylactic dosage, as the cause of adverse effect”22,23 (E930-E949 (ICD9); Y40-Y59 (ICD10)). This category includes unexpected allergic reactions resulting from proper drug administration and is equivalent to “fatal non-preventable adverse drug events,” used elsewhere.24 In addition to these categories, we examined surgical errors (E870-E876 (ICD9); Y60-Y69 (ICD10)), external causes (e.g., accidents, homicides, and suicides), and all deaths combined.

In contrast to many other studies,2,25–27 we analyze: (1) only preventable adverse effects;25 (2) only medication errors (rather than combining several types of medical errors, such as medication and surgical errors);2,26 (3) only fatal medication errors;27 (4) only those medication errors coded as the primary cause of death (rather than medication errors coded as primary, secondary, and/or tertiary causes).2,26 In addition, we examine a nationwide dataset, whereas most other studies extrapolate to nationwide figures from small, non-geographically representative samples.2,26 For these reasons, the number of medication errors in our study differs from the number in other studies.

Secondary Datasets

Computerized death certificates do not record whether the patient died in a teaching hospital, but they do record the county of death (1979-2004). Starting in 1980, American Hospital Association (AHA) surveys28–30 recorded hospital types in each county. We used these surveys to identify counties containing major teaching hospitals9 near the beginning (1980), middle (1992), and end (2004) of our study period. For each county, we calculated the proportion of hospitals that are major teaching hospitals; we assumed that this proportion is a good indicator of the influence of teaching hospitals and of medical residents in a county. A related indicator, the proportion of patients treated in major teaching hospitals, cannot be accurately measured with AHA datasets.

In addition to computerized death certificates and AHA surveys, we examined monthly data from three other datasets:
  1. Hospital admissions, recorded by the National Hospital Discharge Survey (1979-1997);31 monthly admissions were not coded after 1997.
  2. Visits to the ED, recorded by the National Hospital Ambulatory Medical Care Survey, Emergency Department (1992-2005).32
  3. Visits to outpatient departments, recorded by the National Hospital Ambulatory Medical Care Survey, Outpatient Department (1992-2005).33

The latter two datasets provide complete information from January through November but omit a varying number of days in December. Consequently, we did not analyze December data for these datasets.

Statistical Analysis

We used two procedures to estimate significance levels, depending on the dataset investigated. For the death certificate data, we used standard procedures.34–45 These procedures cannot be easily employed for the other datasets examined because these datasets use very complex multi-stage cluster sampling techniques.46 For these datasets, we estimated significance levels with bootstrap procedures.46

For each of the 28 years under analysis, we determined a least-squares regression equation 34 for the monthly data; this procedure allowed us to estimate the expected number of events in a given month of a given year. In this regression procedure, we used two independent variables: (1) number of days in the month (28–31), and (2) number of the month (1–12). We then summed the 28 expected values for a given month to determine the total expected value for that month during the entire 28-year study period.34

We generated a regression equation for each year separately, rather than a regression equation for all years combined, because the first procedure corrects for possible changes in the monthly distribution from one year to another. The second procedure, using combined data, generates nearly identical expected values for each of the 12 months. For example, there is a correlation of 0.999 between the expected number of monthly deaths generated by the two procedures. (All correlations reported in this paper are the standard Pearson correlations.)
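The per-year expected-count procedure described above can be sketched as follows. This is a simplified illustration, not the authors' code: for one year, monthly death counts are regressed on days-in-month and month number, and the fitted values serve as that year's expected counts (the counts below are hypothetical).

```python
def ols_fit(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def expected_monthly_deaths(counts, days):
    """Fit deaths ~ intercept + days_in_month + month_number; return fitted values."""
    X = [[1.0, float(days[m]), float(m + 1)] for m in range(12)]
    beta = ols_fit(X, counts)
    return [beta[0] + beta[1] * days[m] + beta[2] * (m + 1) for m in range(12)]

days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
counts = [700, 640, 702, 690, 710, 705, 760, 712, 700, 715, 708, 720]  # hypothetical
expected = expected_monthly_deaths(counts, days)
JR = counts[6] / expected[6]   # July ratio: observed / expected
```

In the full procedure, the twelve expected values would be computed this way for each of the 28 years and then summed month by month to obtain the 28-year expected totals.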

When analyzing mortality from each cause separately, we used linear regression because there is a linear pattern in monthly deaths from each cause under study—for the 28 years combined, the quadratic regression coefficients were insignificant (b2 = -0.29; t = -0.14).34 Inspection of regression results for each year separately also reveals no significant departure from linearity.

Linear regressions were also appropriate for nearly all other analyses because the quadratic regression coefficients were insignificant: for inpatient mortality (b2 = -0.29; t = -0.14), outpatient and ED mortality (b2 = -4.74; t = -2.01), and DOA (b2 = -2.29; t = -1.63). However, for ED admissions and for mortality from all causes combined, cubic regressions were appropriate.

Two-tailed significance tests are customary but sometimes inappropriate.47, 48, 49 For some of our analyses to be meaningful, one-tailed tests are required. For example:
  1. We examine the difference:

D1 = (July Effect inside teaching hospital counties) − (July Effect outside teaching hospital counties)

We expect D1 to be both statistically significant and to have a positive value, thus requiring a one-tailed test.

  2. We also examine the difference:

D2 = (July Effect inside medical institutions) − (July Effect outside medical institutions)

Here too, we expect the difference to be both statistically significant and to have a positive value, thus requiring a one-tailed test.

  3. We also examine the correlation between the July Effect in a region and the concentration of teaching hospitals in that region. Here, we expect the correlation to be both statistically significant and to have a positive value, thus requiring a one-tailed test.

Unless otherwise stated, all our significance tests are two-tailed.

Following official recommendations35 and our earlier practice,36–43 we calculated standard errors44,45 and significance levels, even though we examined complete counts, not samples.

RESULTS

Figure 1 displays for each month the ratio:
Figure 1

Ratio of observed to expected deaths for inpatient medication errors by month, United States, 1979-2006 (with 95% confidence intervals). Unless otherwise noted, error bars in Figure 1 and in subsequent figures were determined using a Poisson approximation.46

$$ R = \frac{\text{Observed number of deaths}}{\text{Expected number of deaths}} $$
for inpatient deaths from medication errors. When R exceeds 1.00, observed mortality exceeds the number expected. In July, observed mortality significantly exceeded the expected level [1.062 (1.023-1.100)]. In all other months, mortality levels did not deviate significantly from expected. Henceforth, we use “JR” to indicate the value of R for July.
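As a hedged illustration of the Poisson approximation used for the error bars, a ratio and its approximate 95% confidence interval might be computed as follows (the observed and expected counts below are hypothetical, not the paper's data):

```python
import math

def ratio_with_ci(observed, expected, z=1.96):
    """R = observed/expected with an approximate 95% CI.

    Treats the observed count as Poisson, so SD(observed) ~ sqrt(observed);
    this is a simple large-count approximation, not an exact interval.
    """
    r = observed / expected
    half_width = z * math.sqrt(observed) / expected
    return r, r - half_width, r + half_width

# Hypothetical counts for one month:
jr, lo, hi = ratio_with_ci(observed=2900, expected=2731)
```

When the interval excludes 1.00, observed mortality deviates significantly from the expected level at roughly the 5% level.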

Figure 1 reveals a July Effect for data aggregated for 28 years. The July Effect was also evident when each year was examined separately. For inpatient deaths from medication errors, JR exceeded 1.00 for 21 of the 28 years (P = 0.006; one-tailed binomial test). During the study period, JR displayed no trend (b = 0.0003; t = 0.104; P = N.S.). In particular, JR did not decline after July 1, 2003, when resident hours were reduced.27 In the three years before this reduction (2000-2002), the average JR was 1.03; in the three years after this reduction (2004-2006), the average JR was 1.05.
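The one-tailed binomial (sign) test reported above is straightforward to verify from the stated counts: under the null hypothesis, JR exceeds 1.00 in any given year with probability 0.5, and we observed 21 such years out of 28.

```python
from math import comb

def binom_one_tailed(successes, n, p=0.5):
    """P(X >= successes) for X ~ Binomial(n, p): a one-tailed sign test."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(successes, n + 1))

p_value = binom_one_tailed(21, 28)   # ≈ 0.006, matching the reported value
```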

Figure 2 displays JR for medication errors occurring in three settings: inpatient, outpatient/ED, and DOA. As expected, JR was not elevated [0.998 (0.945-1.052)] for DOA but was elevated for inpatients [1.062 (1.023-1.100)] and for those dying in outpatient/ED settings [1.060 (1.025-1.095)]. Henceforth, we combine these “intra-institutional” deaths. As in Figure 1, mortality from intra-institutional medication errors spiked only in July. For medication errors, JR inside medical institutions [1.061 (1.035-1.087)] was significantly larger than JR for DOA [0.998 (0.945-1.052); P = 0.02; one-tailed ratio of ratios Z-test].45
Figure 2

July effect for fatal medication errors by hospital setting, United States, 1979-2006 (with 95% confidence intervals). Error bars were calculated using Daly and Bourke.44
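The one-tailed ratio-of-ratios Z-test used above can be sketched by working on the log scale and recovering each standard error from its reported 95% confidence interval. This is one plausible implementation consistent with the confidence-interval methods of Altman et al.,45 not necessarily the authors' exact procedure.

```python
import math

def ratio_of_ratios_test(r1, ci1, r2, ci2):
    """One-tailed Z-test for r1 > r2, comparing two ratios on the log scale.

    Each SE is recovered from a reported 95% CI:
    SE(log r) ~ (log(upper) - log(lower)) / (2 * 1.96).
    """
    se1 = (math.log(ci1[1]) - math.log(ci1[0])) / (2 * 1.96)
    se2 = (math.log(ci2[1]) - math.log(ci2[0])) / (2 * 1.96)
    z = (math.log(r1) - math.log(r2)) / math.hypot(se1, se2)
    p_one_tailed = 0.5 * math.erfc(z / math.sqrt(2))   # 1 - Phi(z)
    return z, p_one_tailed

# Reported values: intra-institutional JR vs. DOA JR
z, p = ratio_of_ratios_test(1.061, (1.035, 1.087), 0.998, (0.945, 1.052))
```

Plugging in the reported ratios and intervals reproduces a one-tailed P of about 0.02, consistent with the value in the text.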

The July spike for intra-institutional medication errors does not appear to have resulted from a rise in admissions to medical institutions: inpatient admissions decreased in July [-3% (-5% to -2%)], while outpatient admissions [0% (-2% to 2%)] and ED admissions [1% (0% to 2%)] neither increased nor decreased significantly.

Figure 3 compares JR for medication errors with JR for other causes of death. Except for medication errors, no cause of death displayed a significant July Effect inside medical institutions. In particular, JR was not elevated for adverse effects, i.e., for medication deaths not considered to result from error. Similarly, there was no July Effect for deaths inside medical institutions from all causes combined.
Figure 3

July effect for fatal medication errors and for comparison causes of death inside medical institutions, United States, 1979-2006 (with 95% confidence intervals).

Given the “New Resident Hypothesis,” JR should be largest in geographic regions with the largest concentrations of teaching hospitals. To test this prediction, we calculated the proportion:
$$ \frac{\text{Number of major teaching hospitals}}{\text{Total number of hospitals}} $$
for each of the nine officially defined U.S. regions.35 There was a strong regional correlation between this proportion and JR (r = 0.80; t = 3.54; n = 9; P = 0.005, one-tailed test). Thus, the greater the concentration of teaching hospitals in a region, the greater the July Effect for intra-institutional medication errors in that region.
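The significance of this regional correlation can be checked from the reported values alone, using the standard t-statistic for a Pearson correlation. With the rounded r = 0.80 and n = 9 regions, the sketch below gives t ≈ 3.53; the reported t = 3.54 presumably reflects an unrounded r.

```python
import math

def t_for_pearson_r(r, n):
    """t statistic for testing a Pearson correlation against zero (df = n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

t = t_for_pearson_r(0.80, 9)   # df = 7; a one-tailed t-table gives P ≈ 0.005
```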

In contrast, the comparison causes of death in Figure 3 did not display regional correlations of this sort (for intra-institutional mortality from adverse effects: r = -0.34; t = -0.95; P = N.S.; for surgical errors: r = -0.13; t = -0.36; P = N.S.; for all causes: r = 0.46; t = 1.36; P = N.S.).

Given the “New Resident Hypothesis,” the July Effect should be concentrated in counties with teaching hospitals. To test this prediction, we examined counties with teaching hospitals near the beginning, middle, and end of the study period (Fig. 4). Henceforth, we term these “teaching hospital counties” and compare them with all other counties. As expected, for teaching hospital counties, JR was elevated (by 10%) for intra-institutional medication error deaths [JR = 1.10 (1.06-1.14)]. In contrast, JR was not elevated for all remaining counties [JR = 1.03 (1.00-1.07)]. JR for teaching hospital counties was significantly larger than JR for all remaining counties (P = 0.03; one-tailed ratio of ratios Z-test).45 The comparison causes of death displayed no July Effect either for teaching hospital counties or for other counties (Fig. 4).
Figure 4

July effect by cause of death, for teaching hospital counties and for all other counties, United States, 1979-2004 (with 95% confidence intervals). The mortality dataset identifies only counties with at least 100,000 people; thus “all other counties” may include sparsely populated counties that contain teaching hospitals.

In a further test of the “New Resident Hypothesis,” we compared the following proportion for two groups:
$$ \frac{\text{Number of major teaching hospitals}}{\text{Total number of hospitals}} $$
Group 1 consists of 102 counties for which the proportion of teaching hospitals increased over time. Group 2 consists of the remaining 2,324 counties. For Group 2 (the overwhelming majority of all counties), the annual JR (1979-2004) decreased over time (b = -0.009). In contrast, for Group 1, the annual JR (1979-2004) increased over time (b = +0.0008). Consistent with the “New Resident Hypothesis,” the slope for Group 1 significantly exceeds the slope for Group 2 (t = 2.12; d.f. = 48; P = 0.02, one-tailed test).34
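The slope comparison above can be sketched with the standard t-test for the difference of two independent regression slopes (t = (b1 − b2)/√(SE1² + SE2²), d.f. = n1 + n2 − 4).34 The annual JR series below are synthetic stand-ins, since the county-level series are not reproduced here.

```python
import math

def slope_and_se(x, y):
    """OLS slope and its standard error for a simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return b, math.sqrt(sse / (n - 2) / sxx)

def slope_difference_t(x1, y1, x2, y2):
    """t for the difference of two independent slopes (df = n1 + n2 - 4)."""
    b1, se1 = slope_and_se(x1, y1)
    b2, se2 = slope_and_se(x2, y2)
    return (b1 - b2) / math.hypot(se1, se2)

# Synthetic annual JR series: one group trending up, one trending down,
# each with small deterministic "noise".
years = list(range(1979, 2005))
jr_up = [1.00 + 0.0008 * (y - 1979) + 0.005 * ((y * 7) % 5 - 2) for y in years]
jr_down = [1.02 - 0.0009 * (y - 1979) + 0.005 * ((y * 3) % 5 - 2) for y in years]
t = slope_difference_t(years, jr_up, years, jr_down)
```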

The above analyses examined medication errors coded as primary cause of death. If, in July, death registrars are unusually likely to code medication errors as a primary rather than as a secondary cause, then there should be a compensatory drop in medication errors coded as a secondary cause. However, July medication errors coded as a secondary cause decreased by only 9 [JR = 0.99 (0.95 to 1.04)]. In contrast, these errors increased by 379 when coded as a primary cause.

If, in July, death registrars are unusually likely to ascribe a death to medication error rather than to adverse effects or suicides by medication, then these latter causes should decrease in July. However, neither adverse effects nor medication suicides decreased significantly in July. In July, adverse effects decreased by only 33 [JR = 0.91 (0.81 to 1.01)]; medication suicides decreased by only 51 [JR = 0.97 (0.93 to 1.02)].

DISCUSSION

Inside medical institutions, fatal medication errors spiked in July and in no other month. This July spike appeared only in counties containing teaching hospitals; in these counties, July mortality from medication errors was 10% above the expected level. These findings were evident only for medication errors and not for other causes of death or for deaths outside medical institutions.

Alternative Hypotheses

Although our findings are consistent with the “New Resident Hypothesis,” other hypotheses are conceivable; these are assessed below.
  1. The July Effect may result from various behavioral changes occurring during the summer. For example: A) a possible spike in summer alcohol consumption, combined with harmful alcohol-medication interactions; B) a summer spike in injuries from accidents and other “external causes,” combined with increased medical efforts (e.g., prescriptions) to treat these injuries; C) an increase in summer tourism (tourists may receive worse health care). In addition, the July Effect may appear only in teaching hospital counties, because these counties might have an elevated proportion of summer tourists. If the July Effect in fact resulted from these summertime behavioral changes, then there should be a general summertime increase in medication errors—not only in July but also in August. No such August spike is found.

  2. The July Effect may result from the July 4th holiday. However, while July 4th is celebrated nationwide, the July Effect is evident only in teaching hospital counties. Moreover, medication errors do not spike in other months containing national holidays.

  3. The July Effect may result from coding changes in July. Our findings above undermine this hypothesis (e.g., our analysis of adverse effects and medication suicides). The misclassification of some other cause of death might contribute to the July Effect, though we have seen no studies which show that in July there is a spike in the misclassification of any cause of death as medication error. Finally, it is difficult to understand how these putative types of misclassification could occur only in July and only in teaching hospital counties.

The analyses above suggest that, at present, the New Resident Hypothesis is the best available explanation for our findings.

Advantages and Limitations

Our use of official, computerized death certificates offers significant advantages: this dataset enabled us to examine a large, nationwide, multi-decadal sample and thereby detect a statistically significant July spike not found in earlier studies. However, our dataset is limited to the most severe type of medication errors (those resulting in death) and provides little detail about each medication error.

In part, because of these limitations, several questions remain: Is there a July Effect for non-fatal medication errors? What are the detailed mechanisms contributing to the July Effect (e.g., miscommunication, inadequate oversight)? Why is there a July spike in fatal medication errors but not in fatal surgical errors? These important questions require further study, perhaps with different kinds of datasets that provide more detail per case.

Implications

Despite these gaps in research, our findings have several implications for medical policy—they provide fresh evidence for: 1) re-evaluating responsibilities assigned to new residents; 2) increasing supervision of new residents; 3) increasing education concerned with medication safety. Incorporating these changes might reduce both fatal and non-fatal medication errors and thereby reduce the substantial costs1 associated with medication errors.

CONCLUSION

Our nationwide, multi-decadal study enabled us to discover previously unknown evidence for a July spike in fatal medication errors. This spike seems to result at least partly from changes associated with new medical residents. The July Effect seems to be a significant public health problem and warrants further investigation.

Notes

Acknowledgments

Funding: Marian E. Smith Foundation. We thank Kimberly M. Brewer, Robert D. Kleinberg, and Miranda Phillips for valuable comments and suggestions.

Conflict of Interest

None disclosed.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

  1. Haller G, Myles PS, Taffe P, et al. Rate of undesirable events at beginning of academic year: retrospective cohort study. BMJ. 2009;339:b3974.
  2. Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
  3. Altman DE, Clancy C, Blendon RJ. Improving patient safety—five years after the IOM report. N Engl J Med. 2004;351:2041–3.
  4. Phillips J, Beam S, Brinker A, et al. Retrospective analysis of mortalities associated with medication errors. Am J Health Syst Pharm. 2001;58:1835–41.
  5. Weingart SN, Wilson RM, Gibberd RW, Harrison B. Epidemiology of medical error. BMJ. 2000;320:774–7.
  6. Lesar TS, Briceland LL, Delcoure K, Parmalee JC, Masta-Gornic V, Pohl H. Medication prescribing errors in a teaching hospital. JAMA. 1990;263:2329–34.
  7. U.S. medical school seniors enjoy most successful “match day” in 30 years: 2008 residency program match is the largest in history. Washington, DC: Association of American Medical Colleges; 2008. http://www.aamc.org/newsroom/pressrel/2008/080320.htm. Accessed March 6, 2010.
  8. Finkielman JD, Morales IJ, Peters SG, et al. Mortality rate and length of stay of patients admitted to the intensive care unit in July. Crit Care Med. 2004;32:1161–5.
  9. Barry WA, Rosenthal GE. The effect of July admission on intensive care mortality and length of stay in teaching hospitals. J Gen Intern Med. 2003;18:639–45.
  10. Ford AA, Bateman BT, Simpson LL, Ratan RB. Nationwide data confirms absence of 'July phenomenon' in obstetrics: it's safe to deliver in July. J Perinatol. 2007;27:73–6.
  11. Englesbe MJ, Pelletier SJ, Magee JC, et al. Seasonal variation in surgical outcomes as measured by the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP). Ann Surg. 2007;246:456–62.
  12. Banco SP, Vaccaro AR, Blam O, et al. Spine infections: variations in incidence during the academic year. Spine. 2002;27:962–5.
  13. Rich EC, Gifford G, Luxenberg M, Dowd B. The relationship of house staff experience to the cost and quality of inpatient care. JAMA. 1990;263:953–7.
  14. Rich EC, Hillson SD, Dowd B, Morris N. Specialty differences in the 'July Phenomenon' for Twin Cities teaching hospitals. Med Care. 1993;31:73–83.
  15. Shulkin DJ. The July phenomenon revisited: are hospital complications associated with new house staff? Am J Med Qual. 1995;10:14–7.
  16. Buchwald D, Komaroff AL, Cook EF, Epstein AM. Indirect costs for medical education. Is there a July phenomenon? Arch Intern Med. 1989;149:765–8.
  17. Claridge JA, Schulman AM, Sawyer RG, Ghezel-Ayagh A, Young JS. The 'July phenomenon' and the care of the severely injured patient: fact or fiction? Surgery. 2001;130:346–53.
  18. Barker KN, Flynn EA, Pepper GA, Bates DW, Mikeal RL. Medication errors observed in 36 health care facilities. Arch Intern Med. 2002;162:1897–903.
  19. Mortality Detail File, 1979-2004. Hyattsville, MD: National Center for Health Statistics (computer data file).
  20. Phillips DP, Barker GEC, Eguchi MM. A steep increase in domestic fatal medication errors with use of alcohol and/or street drugs. Arch Intern Med. 2008;168:1561–6.
  21. Documentation of the Mortality Tape File for 1992 Data. Hyattsville, MD: National Center for Health Statistics; 1992.
  22. ICD9.chrisendres.com. Free online searchable 2009 ICD-9-CM. http://icd9cm.chrisendres.com/. Accessed March 6, 2010.
  23. International Statistical Classification of Diseases and Related Health Problems, 10th revision. Geneva, Switzerland: World Health Organization; 2006. http://www.who.int/classifications/icd/en/. Accessed March 6, 2010.
  24. Gurwitz JH, Field TS, Harrold LR, et al. Incidence and preventability of adverse drug events among older persons in the ambulatory setting. JAMA. 2003;289:1107–16.
  25. Lazarou J, Pomeranz BH, Corey PN. Incidence of adverse drug reactions in hospitalized patients: a meta-analysis of prospective studies. JAMA. 1998;279:1200–5.
  26. Sox HC Jr, Woloshin S. How many deaths are due to medical error? Getting the number right. Eff Clin Pract. 2000;3:277–83.
  27. Yoon HH. Adapting to duty-hour limits—four years on. N Engl J Med. 2007;356:2668–70.
  28. American Hospital Association. AHA Annual Survey of Hospitals: 1980. Chicago: American Hospital Association; 1982.
  29. American Hospital Association. AHA Annual Survey of Hospitals: 1992. Chicago: American Hospital Association; 1994.
  30. American Hospital Association. AHA Annual Survey of Hospitals: 2004. Chicago: American Hospital Association; 2006.
  31. National Center for Health Statistics. National Hospital Discharge Survey: Multi-Year Data File, 1979-97. Hyattsville, MD: National Center for Health Statistics; 1999. http://www.cdc.gov/nchs/nhds.htm. Accessed March 6, 2010.
  32. National Center for Health Statistics. National Hospital Ambulatory Medical Care Survey: 1992-2005, Emergency Department Computer Data File. Hyattsville, MD: Centers for Disease Control and Prevention; 2007. http://www.cdc.gov/nchs/ahcd.htm. Accessed March 6, 2010.
  33. National Center for Health Statistics. National Hospital Ambulatory Medical Care Survey: 1992-2005, Outpatient Department Computer Data File. Hyattsville, MD: Centers for Disease Control and Prevention; 2007. http://www.cdc.gov/nchs/ahcd.htm. Accessed March 6, 2010.
  34. Kleinbaum DG, Kupper LL, Muller KE, Nizam A. Applied Regression Analysis and Other Multivariable Methods. 2nd ed. Belmont: Wadsworth Publishing Co; 1988.
  35. National Center for Health Statistics. Vital Statistics of the United States, Yearly Volumes: Volume 2, Mortality, Part A, Section 7. Washington, DC: Government Printing Office.
  36. Phillips DP, Christenfeld N, Ryan NM. An increase in the number of deaths in the United States in the first week of the month—an association with substance abuse and other causes of death. N Engl J Med. 1999;341:93–8.
  37. Phillips DP, Paight DJ. The impact of televised movies about suicide: a replicative study. N Engl J Med. 1987;317:809–11.
  38. Phillips DP, Carstensen LL. Clustering of teenage suicides after television news stories about suicide. N Engl J Med. 1986;315:685–9.
  39. Phillips DP, Bredder CC. Morbidity and mortality from medical errors: an increasingly serious public health problem. Annu Rev Public Health. 2002;23:135–50.
  40. Phillips DP, Christenfeld N, Glynn LM. Increase in U.S. medication-error deaths between 1983 and 1993. Lancet. 1998;351:643–4.
  41. Phillips DP, Jarvinen JR, Phillips RR. A spike in fatal medication errors at the beginning of each month. Pharmacotherapy. 2005;25:1–9.
  42. Phillips DP, Ruth TE, Wagner LM. Psychology and survival. Lancet. 1993;342:1142–5.
  43. Phillips DP, Liu GC, Kwok K, Jarvinen JR, Zhang W, Abramson IS. The Hound of the Baskervilles effect: natural experiment on the influence of psychological stress on timing of death. BMJ. 2001;323:1443–6.
  44. Daly LE, Bourke GJ. Interpretation and Uses of Medical Statistics. 5th ed. Oxford: Blackwell Science Ltd; 2000:544.
  45. Altman DG, Machin D, Bryant TN, Gardner MJ. Statistics With Confidence. 2nd ed. Bristol, England: BMJ Books; 2000:46–7.
  46. Chernick MR. Bootstrap Methods: A Guide for Practitioners and Researchers. 2nd ed. Hoboken, NJ: Wiley-Interscience; 2008.
  47. Daly LE, Bourke GJ. Interpretation and Uses of Medical Statistics. 5th ed. Oxford: Blackwell Science Ltd; 2000:123–5.
  48. Altman DG, Machin D, Bryant TN, Gardner MJ. Statistics With Confidence. 1st ed. Bristol, England: BMJ Books; 1989:60–1.
  49. Bland M. An Introduction to Medical Statistics. 2nd ed. Oxford: Oxford University Press; 1995:138.

Copyright information

© The Author(s) 2010

Authors and Affiliations

  1. Department of Sociology, University of California at San Diego, La Jolla, USA
  2. School of Public Health, University of California at Los Angeles, Los Angeles, USA
