Separate but Equal? The Sorting of USMDs and Non-USMDs in Internal Medicine Residency Programs



Importance

The US internal medicine (IM) workforce relies on international medical graduates (IMGs) and osteopathic graduates (DOs) to fill gaps in residency training. Little is known about the extent to which IMGs, DOs, and US allopathic graduates (USMDs) concentrate in different types of IM programs, or about the impact of such concentration.


Objective

To determine the extent to which USMDs, DOs, and IMGs concentrate in different types of IM programs, and to compare Board pass rates by program concentration.

Design, Settings, and Participants

This survey study used data from the American Medical Association's FREIDA database for 476 non-military IM programs in 2017–2018, and 2016–2018 American Board of Internal Medicine (ABIM) exam pass rates for 388 accredited programs.


Main Outcomes and Measures

Outcomes were (1) program concentration, based on the percentage of residents who were USMDs, IMGs, and DOs in 2017–2018, and (2) 2016–2018 program ABIM pass rates as proxies for program quality. Key independent variables were hospital type (community-based, community-based university-affiliated, or university-based) when program concentration was the outcome, and program concentration when Board pass rates were the outcome.


Results

Twenty-five percent of programs were “USMD-dominated,” 17% were “DO-dominated,” 42% were “IMG-dominated,” and 16% were “integrated.” The chances that a university hospital was USMD-dominated were 32 percentage points higher than those for a community hospital (AME = 0.32, baseline probability = 0.11, 95% CI, 0.17–0.46, P < .001). USMD-dominated programs also had pass rates 4.0 percentage points higher than integrated programs (AME = 0.04, baseline proportion = 0.90, 95% CI, 0.02–0.06, P < .001), while DO-dominated programs had significantly lower pass rates (AME = −0.10, baseline proportion = 0.90, 95% CI, −0.15 to −0.04, P < .001).


Conclusions

USMDs and non-USMDs systematically cluster in certain types of residency programs, and their training may not be equal, as measured by Board pass rates.

There are currently not enough US-trained allopathic graduates (“USMDs”) to fill residency positions. In 2019, for example, 18,925 US seniors vied for 32,194 positions in the NRMP Match, only filling 55% of first-year positions nationwide.1 In internal medicine (IM), that proportion was only 41.5%. Graduate medical education (GME) thus relies on “non-USMDs” (US citizen and non-US citizen IMGs, DOs) to fill gaps in residency training.

While some have expressed concerns about the quality of non-USMDs,2,3,4,5,6,7,8,9 Match statistics reveal that USMDs and international graduates who matched to internal medicine as their preferred specialty have virtually the same mean US Medical Licensing Exam (USMLE) Step 1 scores (233 vs. 232).10, 11 Matched DOs have comparatively lower mean Step 1 scores (226), although data are only available for 60.7% of US osteopathic seniors,12 as many DOs take the COMLEX instead of USMLEs. Since Step 1 scores are one of the biggest predictors of residency placement,13 we might expect residency programs to recruit a mixture of USMDs and international graduates, with others focusing recruitment efforts on DOs.

Anecdotal reports suggest, however, that IM programs are highly segregated by graduate type, with university hospitals being more likely to train USMDs, and community hospitals being more likely to train IMGs and DOs.14, 15 Despite a large body of literature on IMGs and DOs in the US health workforce, few studies have systematically analyzed residency placement by graduate type or interrogated the implications of such placement patterns. Whitcomb and Miller found that 45% of IM programs were IMG-dominated in 1995.16 They did not differentiate between hospital types, however (i.e., university vs. community-based hospitals), so the extent to which USMDs, IMGs, and DOs are currently sorted into community vs. university hospitals remains unknown.

Many non-USMDs go on to work in primary care, often in underserved areas with low socio-economic status populations.8, 17,18,19,20 Identifying distribution patterns of non-USMDs and USMDs may have implications for physician inequality, particularly if the quality of training differs between settings. For example, several studies have already found that programs with higher percentages of USMDs have higher pass rates on the American Board of Internal Medicine (ABIM) exams;21,22,23,24 this could signal differences in quality across concentrated programs.

To that end, we analyzed the composition of IM residency programs across the country, to determine the extent to which programs are disproportionately staffed by USMDs, DOs, and IMGs. As a preliminary step towards measuring differences in training quality between segregated programs, we also examined programs’ ABIM board pass rates.


Methods

We obtained data through a usage agreement from the AMA’s Fellowship and Residency Electronic Interactive Database Access (FREIDA) on all IM residency programs that filled out FREIDA surveys in 2017–2018, excluding military programs. We also obtained the 2016–2018 cumulative exam pass rates from the ABIM for each accredited program with more than 10 residents.25 The study was deemed non-human subjects research by Temple University IRB.

Our outcomes were program concentration and program pass rates for the ABIM exam. Program pass rate refers to the cumulative rolling percentage of residents from each program who passed their ABIM certification exams from 2016 to 2018. Program concentration was based on the three-year (2016, 2017, and 2018) average percentage of residents who were USMDs, DOs, and IMGs. Where this information was not available in FREIDA (for 16% of programs), we used the 2018 value as reported on program websites. We categorized programs into one of four categories of concentration (see Table 1 for descriptive statistics). First were integrated programs, where the percentage of residents who were USMDs was between 30 and 65%. This range represented one standard deviation (35 percentage points) centered on the mean percentage of USMDs across all programs in our sample (49%). USMD-dominated programs had more than 65% of residents who were USMDs (i.e., more than half a standard deviation above 49%). We defined DO- and IMG-dominated programs as having fewer than 30% of residents who were USMDs (more than half a standard deviation below 49%) and where the modal resident type was DO or IMG, respectively. Note that the FREIDA database did not distinguish between US citizen IMGs and non-US citizen IMGs; these two graduate types were thus grouped together in the survey data, even though we recognize they represent different populations.
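The classification rule above can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors' actual analysis script; the function name and the example percentages are hypothetical, and the 30/65% cutoffs are the rounded half-standard-deviation bounds (49% ± 17.5 percentage points) described in the text.

```python
# Sketch of the program-concentration classification described above.
# Cutoffs (30% and 65%) approximate half a standard deviation (35/2 = 17.5
# percentage points) below and above the sample mean percentage of USMDs
# (49%), as reported in the text. All program data here are hypothetical.

def classify_program(pct_usmd: float, pct_do: float, pct_img: float) -> str:
    """Return the concentration category for one program."""
    if pct_usmd > 65:
        return "USMD-dominated"
    if 30 <= pct_usmd <= 65:
        return "integrated"
    # Fewer than 30% USMDs: label by the modal non-USMD resident type.
    return "DO-dominated" if pct_do > pct_img else "IMG-dominated"

print(classify_program(70, 10, 20))  # USMD-dominated
print(classify_program(50, 25, 25))  # integrated
print(classify_program(10, 60, 30))  # DO-dominated
print(classify_program(15, 20, 65))  # IMG-dominated
```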

Table 1 Summary Statistics By Program Concentration

When program concentration was the outcome, our key independent variable was hospital type. When program pass rate was the outcome, our independent variable was program concentration. Programs reported being one of three hospital types in FREIDA: University-based (“The majority of experience takes place in a hospital that serves as a primary affiliate of the medical school”), Community-based with University Affiliation (“The majority of experience is in a community-based hospital that is affiliated with an academic medical center, but is not a primary affiliate of the academic medical center”), and Community-based (“The majority of experience is in a community setting that is not in an academic medical center, or a hospital with a medical school affiliation”).

Our control variables included program size (z-standardized total number of residents in the program), US Census region (Northeast, South, West, and Midwest), visa status (whether the program offers visas for non-citizens), and NRMP Match participation (whether the program participates in the National Resident Matching Program). Reliable program-level data on mean Step 1 scores were not available.

We modeled the association between hospital type and program concentration, holding all control variables constant, using multinomial logistic regression. We then used fractional logistic regression to estimate associations between concentration and program pass rates in models including program type and all of our control variables. Following recent guidelines developed by Norton, Dowd, and Maciejewski, we present our results as average marginal effects (AMEs): the instantaneous rate of change in the probability (in the case of multinomial logistic regression) or the proportion (in the case of fractional logistic regression) of the outcome given a unit increase in the predictor.26
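For readers unfamiliar with AMEs, the quantity can be illustrated with a toy calculation. This sketch is not the paper's model: the coefficients, sample values, and variable roles are hypothetical, and it uses a simple binary logistic model rather than the multinomial and fractional models the authors fit. For a binary predictor (here standing in for, say, university vs. community hospital), the AME averages, over the sample, the change in predicted probability when the predictor is switched from 0 to 1 while each observation's other covariates are held at their observed values.

```python
import math

def logistic(x: float) -> float:
    """Inverse logit: map a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical fitted coefficients: intercept, binary predictor, one control.
b0, b1, b2 = -2.0, 1.5, 0.3

# Hypothetical sample: (binary predictor, control value) per observation.
sample = [(0, -1.0), (0, 0.5), (1, 0.0), (1, 1.2)]

# AME of the binary predictor: average the change in predicted probability
# when the predictor is set to 1 vs. 0, holding each observation's own
# control value fixed.
ame = sum(
    logistic(b0 + b1 * 1 + b2 * c) - logistic(b0 + b1 * 0 + b2 * c)
    for _, c in sample
) / len(sample)

print(round(ame, 3))
```

Because the logistic curve is nonlinear, this averaged difference generally differs from the effect evaluated at the mean of the covariates, which is why Norton and colleagues recommend reporting AMEs rather than raw coefficients or odds ratios.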

We conducted all analyses between October 2018 and April 2019 using Stata v.15.1.


Results

Data were available for 508 non-military residency programs (see Table 1). When we analyzed program concentration, we had complete information for 476 programs (for the categorical variables visa status and NRMP participation, we added a residual category for missing values). When we analyzed pass rates, we had complete information for 388 programs; the bulk of our missing values came from the pass rates variable itself. Our analysis of missing data suggests that our sample with complete information on all variables and both outcomes over-represents programs in the Northeast (AME = 0.15, baseline probability = 0.72, 95% CI, 0.06–0.24, P = .002), and underrepresents unaffiliated community hospitals (AME = −0.42, baseline probability = 0.85, 95% CI, −0.52 to −0.33, P < .001) and DO programs (AME = −0.64, baseline probability = 0.91, 95% CI, −0.74 to −0.54, P < .001). Larger program size is also associated with a greater chance of being represented in our sample (AME = 0.49, baseline probability = 0.08, 95% CI, 0.43–0.54, P < .001).

We found that 76 IM programs (16%) were integrated, 119 (25%) were USMD-dominated, 201 (42%) were IMG-dominated, and 80 (17%) were DO-dominated.

Holding constant program characteristics such as size, region, visa status, and NRMP Match participation, hospital type has a large association with concentration. Table 2 shows average marginal effects estimates of program type on program concentration from a single multinomial logistic regression. The chances that a university hospital is USMD-dominated are 32 percentage points larger than that for a community hospital (AME = 0.32, baseline probability = 0.11, 95% CI, 0.17–0.46, P < .001). Conversely, university hospitals are significantly less likely to be IMG-dominated (AME = − 0.24, baseline probability = 0.48, 95% CI, − 0.39 to − 0.09, P = .002). In addition, university-affiliated community hospitals are less likely than unaffiliated community hospitals to have a DO-dominated program (AME = − 0.15, baseline probability = 0.25, 95% CI, − 0.25 to − 0.04, P = .01).

Table 2 Average Marginal Effects (AME) of Program Type among 476 Internal Medicine Programs on Program Concentration

Table 3 shows the average marginal effects estimates of program concentration on ABIM Board pass rates from a fractional logistic regression. Concentration is significantly associated with pass rates holding constant other program characteristics; USMD-dominated programs have pass rates 4.0 percentage points higher than integrated programs (AME = 0.0405, baseline proportion = 0.9021, 95% CI, 0.0232–0.0578, P < .001), while DO-dominated programs have significantly lower pass rates (AME = −0.10, baseline proportion = 0.90, 95% CI, −0.15 to −0.04, P < .001). When IMG-dominated programs were used as the reference group (regression not shown), USMD-dominated programs have significantly higher pass rates than IMG-dominated ones (AME = 0.0404, baseline proportion = 0.9022, 95% CI, 0.0239–0.0568, P < .001). Hospital type has no significant association with pass rates.

Table 3 Average Marginal Effects (AME) of Program Concentration among 388 Internal Medicine Programs on Program-level ABIM Board Pass Rates (2016–2018)


Discussion

We found that USMDs, IMGs, and DOs tend to concentrate in different types of hospitals nationwide, with university hospital programs disproportionately staffed by USMDs, and community-based programs disproportionately staffed by IMGs and DOs. Our results confirm the persistence of Whitcomb and Miller’s 1995 findings nearly twenty-five years later: roughly 4 in 10 IM programs are IMG-dominated (42% now vs. 45% then).16 We also demonstrated the existence of DO-dominated and integrated programs. These patterns of concentration and differences in outcomes are consistent with the sociological definition of segregation, in which individuals are sorted into different settings based on characteristics that come to “symbolize dominant or subordinate status” within a community, such as a profession.27

We also found significant differences in Board pass rates between programs with different concentrations of graduates. Insofar as USMLEs are correlated with the boards, we would expect lower board pass rates for programs with higher concentrations of graduates that typically score lower on their USMLEs. That was the case with DOs; they have lower USMLE scores on average and DO-dominated programs had correspondingly lower ABIM board pass rates. This was not the case, however, with IMGs. Recall that nationwide, USMDs and IMGs who matched to IM as their preferred specialty have virtually the same mean Step 1 scores. Performance on Step 1 is predictive of performance on Step 2 and Board passage, suggesting that USMDs and IMGs enter with similar knowledge and test-taking ability.28 It could be that medical graduates entering USMD-dominated residency programs do so with abilities captured by Board exams but not Step 1. Another possibility, however, is that discrepancies in board passage rates between USMD-dominated and IMG-dominated programs reflect real differences in educational experience during residency. Of the many factors that predict Board pass rates, workload related to variations in resource availability, variations in clinical experiences, faculty availability, and levels of supervision may explain differences between types of programs.21, 24 This suggests that there could be something about the educational milieu in programs with high concentrations of USMDs that is more conducive to success on the Boards, such as stronger support or better resources for residents.

The mechanisms behind these observed patterns of segregation are likely complex and multi-factorial. One reason could be self-selection. USMDs might focus their application efforts on higher-prestige university hospitals, particularly since academic prestige and reputation are known factors of influence for USMDs when choosing a residency.29, 30 For a minority of USMDs, the presence of DOs or IMGs in a program lowers their likelihood of ranking that program highly, further suggesting that self-selection could be playing a role.30,31,32 For non-USMDs, Web-based services generate personalized lists of “IMG- or DO-friendly” programs based on applicants’ scores, geographic specifications, and nationality, thereby facilitating non-USMDs’ matching to “friendlier” community programs.32 One study from the 1990s found that IMGs were significantly more likely to apply to “IMG-dominated” programs than to non-IMG-dominated programs, and that the tendency for programs to recruit IMGs increased over time, leading to increased segregation.16 But over 20% of IMGs in that study still applied to “non-IMG-dominated” programs, suggesting that self-selection alone is likely insufficient for understanding sorting patterns in the profession. Since we lack the data to know what proportion of USMDs and non-USMDs applied to which types of hospitals, we cannot systematically rule self-selection in or out as an explanation.

Another explanation could be merit-based, with USMDs simply being more competitive than non-USMDs. We did not have access to individual or program-level Step 1 data, but nationwide, matched DOs have significantly lower Step 1 scores on average, which could explain why DOs concentrate in certain programs. It would not explain the existence of IMG-dominated programs, however, given that matched international graduates and USMDs have virtually the same mean Step 1 scores. When NRMP data are further disaggregated to distinguish between US citizen IMGs (“USIMGs”) and non-US citizen IMGs (“IMGs”), these patterns become even more complicated. IMGs who matched to IM as their preferred specialty have a higher mean USMLE Step 1 score than USMDs (236 vs. 233), while USIMGs have lower scores on average (225 in 2018, 228 in 2016).10, 11, 33 However, since program-level FREIDA data do not distinguish between IMGs and USIMGs, instead grouping them together, we cannot disaggregate between USIMG- and IMG-concentrated programs, which would help determine to what extent segregation is merit-based.

USMLE scores are not the whole story either. Language barriers and cultural competency are known to be major concerns among clinicians training IMGs,34 and could represent another form of merit. To date, there are no large-N studies examining the criteria IM residency program directors use to select USMDs vs. non-USMDs, although recent ethnographic research finds that concerns about cultural competence are among several reasons why university programs might prefer USMDs.32

A third explanation could be implicit or overt bias. Previous studies outside of IM have documented instances of discrimination towards IMGs, in particular.35,36,37,38 Experimental studies, for example, have found that hypothetical foreign-trained applicants (including USIMGs) were less likely to receive residency interviews than equally qualified American-trained applicants, and were expected to have higher qualifications for the same positions.37,38,39

Finally, these trends could be related to institutional factors. Program competitiveness is often perceived to be linked to the proportion of USMDs on the house staff, so program directors may feel compelled to recruit more USMDs than they otherwise would, given pressures to improve program standing.32, 40

Regardless of the cause, the segregation of residents based on characteristics like perceived medical school prestige can contribute to differential treatment within the profession, exacerbating status differences between USMDs and non-USMDs.27 These differences matter not only for residents but also for patients. One study found that all else being equal, primary care practitioners were less likely to refer patients to an IMG compared with a USMD—a bias which could result in suboptimal referral decisions.41 This bias could be due to a lack of exposure to non-USMDs; another study found more negative perceptions of DOs among MDs in the Deep South who had had little previous contact with DOs, suggesting that adequate representation of graduate types in training programs may help reduce status differences between graduates and possibly improve referral patterns.9

Our findings are also relevant in terms of the national conversation surrounding diversity within medicine. They raise questions about whether segregation of graduates is purposeful, or indeed desirable. There are many benefits to having a diverse workforce in medicine, including improved cultural competence, better access for the underserved, and broader research agendas.42 IMGs represent an important source of diversity in training programs,42, 43 and given that many come over with extensive clinical experience, they could share their advanced clinical skills with USMDs.44 At the same time, working with USMDs might also help mitigate some of the professional challenges experienced by non-USMDs, such as difficulty with acculturation.45 For their part, DOs—who are already less likely to pass the Boards based on their USMLE scores10, 12—may not benefit from being concentrated in DO-dominated programs.

A final consideration is that the systematic sorting of non-USMDs into community programs may also help keep medicine appealing to USMDs, who go on to fill more prestigious positions.46 In this way, the profession not only relies on non-USMDs to fill residency positions; it may specifically rely on them to fill lower status residency positions, thus helping preserve USMDs’ access to the elite rungs of the profession. These findings therefore represent an important first step towards critically considering the implications of pedigree-based segregation in GME.

Our study was limited by missing data, particularly ABIM pass rates for the approximately 24% of programs that were smaller or unaccredited. These types of programs may be more likely to have high percentages of non-USMDs, which could mean that our results are conservative with respect to the relationship between IMG- and DO-dominated programs and pass rates. Similarly, we had to exclude about 6% of programs in the concentration analysis due to missing data, which could lead to an underestimation of our results, particularly for smaller, DO-dominated, and community programs. We also lacked reliable Step 1 scores for both programs and individuals, which are correlated with both residency match outcomes and ABIM pass rates. Future research should ideally take these scores into account if they are available at the program or individual level. As previously mentioned, we were unable to parse differences between US citizen IMGs and non-US citizen IMGs in our analyses because the FREIDA database does not distinguish between these types of graduates. Disaggregating these two populations in the future would help adjudicate whether differences in merit are driving segregation patterns in internal medicine. Finally, while we found associations between program concentration and ABIM Board pass rates, we were unable to establish causality. More research is needed to better understand this relationship and elucidate its implications for resident training, especially as the transition to a Single GME Accreditation System nears its completion in 2020.47 Unfortunately, the cross-sectional nature of our data precludes a comparison of program concentration and Board pass rates before and after the Single Accreditation System, but this could be an interesting avenue for future research.

Conclusion

Despite IMGs and DOs receiving considerable attention in the medical education literature, few studies have explored how various types of medical graduates are systematically sorted into different residency programs, or how such sorting might impact their training. A previous study dating back to 1995 noted the existence of so-called IMG-dependent programs in IM,16 but left unspecified the existence of “DO-dominant,” “USMD-dominant,” or “integrated” programs in the specialty. Our study, to our knowledge, is the first to systematically describe the sorting of medical graduates into IM residency programs by hospital type. Furthermore, we explore one possible impact of such sorting on residents’ training by examining the association between program concentration and ABIM board pass rates. These results suggest that there may be important differences in training across programs with high concentrations of USMDs and non-USMDs.


References

  1. National Resident Matching Program (NRMP). Results and Data: 2019 Main Residency Match®. Washington, DC; 2019.
  2. Boulet JR, Cooper RA, Seeling SS, Norcini JJ, McKinley DW. US Citizens Who Obtain Their Medical Degrees Abroad: An Overview, 1992-2006. Health Aff 2009;28(1):226–33.
  3. Eckhert NL. Perspective: Private Schools of the Caribbean: Outsourcing Medical Education. Acad Med 2010;85(4):622–30.
  4. Knobel RJ. Placement of Foreign-Trained Physicians in U.S. Medical Residencies. Med Care 1973;11(3):224–39.
  5. Lohr KN, Vanselow NA, Detmer DE, Committee on the U.S. Physician Supply, editors. The Nation’s Physician Workforce: Options for Balancing Supply and Requirements. Washington, DC: National Academies Press; 1996.
  6. Dahm MR, Cartmill JA. Talking their way to success: communicative competence for international medical graduates in transition. Med Educ 2016;50(10):992–3.
  7. Dow WH, Harris DM. Exclusion of international medical graduates from federal health-care programs. Med Care 2002;40(1):68–72.
  8. Howard DL, Bunch CD, Mundia WO, Konrad TR, Edwards LJ, Amamoo MA, et al. Comparing United States versus international medical school graduate physicians who serve African-American and white elderly. Health Serv Res 2006;41(6):2155–81.
  9. Reeves RR, Burke RS. Perception of Osteopathic Medicine Among Allopathic Physicians in the Deep Central Southern United States. J Am Osteopath Assoc 2009;109(6):318–23.
  10. National Resident Matching Program. Charting Outcomes in the Match®: U.S. Allopathic Seniors, 2018. Washington, DC; 2018.
  11. National Resident Matching Program. Charting Outcomes in the Match®: International Medical Graduates, 2018. Washington, DC; 2018.
  12. National Resident Matching Program. Charting Outcomes in the Match®: Senior Students of U.S. Osteopathic Medical Schools, 2018. Washington, DC; 2018.
  13. Gauer JL, Jackson JB. The association of USMLE Step 1 and Step 2 CK scores with residency match specialty and location. Med Educ Online 2017;22(1):1358579.
  14. Manthous CA. Confronting the Elephant in the Room: Can We Transcend Medical Graduate Stereotypes? J Grad Med Educ 2012;4(3):290–2.
  15. Worth T. Osteopathic Med Schools Like Kansas City University Answer The Call For More Doctors. 2017. Accessed 2 Jan 2019.
  16. Whitcomb M, Miller R. Comparison of IMG-dependent and non-IMG-dependent residencies in the national resident matching program. JAMA 1996;276(9):700–3.
  17. American Medical Association (AMA). International Medical Graduates in American Medicine: Contemporary Challenges and Opportunities. 2010.
  18. Hagopian A, Thompson M, Kaltenbach E, Hart LG. The Role of International Medical Graduates in America’s Small Rural Critical Access Hospitals. J Rural Health 2004;20(1):52–8.
  19. Fordyce MA, Doescher MP, Chen FM, Hart LG. Osteopathic Physicians and International Medical Graduates in the Rural Primary Care Physician Workforce. Fam Med 2012;44(6):396–403.
  20. Sciolla AF, Lu FG. Cultural Competence for International Medical Graduate Physicians: A Perspective. In: Rao NR, Roberts LW, editors. International Medical Graduate Physicians: A Guide to Training. Cham: Springer International Publishing; 2016. p. 283–303.
  21. Atsawarungruangkit A. Residency program characteristics that are associated with pass rate of the American Board of Pediatrics certifying exam. Adv Med Educ Pract 2015;6:517–24.
  22. Falcone JL, Middleton DB. Pass Rates on the American Board of Family Medicine Certification Exam by Residency Location and Size. J Am Board Fam Med 2013;26(4):453–9.
  23. Mims LD, Mainous AG, Chirina S, Carek PJ. Are Specific Residency Program Characteristics Associated With the Pass Rate of Graduates on the ABFM Certification Examination? Fam Med 2014;46(5):360–8.
  24. Willett LL, Halvorsen AJ, Adams M, Chacko KM, Chaudhry S, McDonald FS, et al. Factors Associated with Declining Residency Program Pass Rates on the ABIM Certification Examination. Am J Med 2016;129(7):759–65.
  25. American Board of Internal Medicine. Residency Program Pass Rates 2016-2018. 2018.
  26. Norton EC, Dowd BE, Maciejewski ML. Marginal Effects—Quantifying the Effect of Changes in Risk Factors in Logistic Regression Models. JAMA 2019;321(13):1304–5.
  27. Reskin BF. Sex Segregation in the Workplace. Annu Rev Sociol 1993;19:241–70.
  28. Kay C, Jackson JL, Frank M. The Relationship Between Internal Medicine Residency Graduate Performance on the ABIM Certifying Examination, Yearly In-Service Training Examinations, and the USMLE Step 1 Examination. Acad Med 2015;90(1):100–4.
  29. Aagaard EM, Julian K, Dedier J, Soloman I, Tillisch J, Pérez-Stable EJ. Factors affecting medical students’ selection of an internal medicine residency program. J Natl Med Assoc 2005;97(9):1264–70.
  30. Stillman MD, Miller KH, Ziegler CH, Upadhyay A, Mitchell CK. Program Characteristics Influencing Allopathic Students’ Residency Selection. J Am Osteopath Assoc 2016;116(4):214–26.
  31. Riley JD, Hannis M, Rice KG. Are international medical graduates a factor in residency program selection? A survey of fourth-year medical students. Acad Med 1996;71(4):381–6.
  32. Jenkins TM. Doctors’ Orders: The Making of Status Hierarchies in an Elite Profession. New York: Columbia University Press; in press.
  33. National Resident Matching Program. Charting Outcomes in the Match® for International Medical Graduates: Characteristics of International Medical Graduates Who Matched to Their Preferred Specialty in the 2016 Main Residency Match®. Washington, DC; 2016.
  34. Pilotto LS, Duncan GF, Anderson-Wurf J. Issues for clinicians training international medical graduates: a systematic review. Med J Aust 2007;187(4):225–8.
  35. Desbiens NA, Vidaillet HJ. Discrimination Against International Medical Graduates in the United States Residency Program Selection Process. BMC Med Educ 2010;10:5.
  36. Moore RA, Rhodenbaugh EJ. The Unkindest Cut of All: Are International Medical School Graduates Subjected to Discrimination by General Surgery Residency Programs? Curr Surg 2002;59(2):228–36.
  37. Nasir LS. Evidence of Discrimination Against International Medical Graduates Applying to Family Practice Residency Programs. Fam Med 1994;26(10):625–9.
  38. Balon R, Mufti R, Williams M, Riba M. Possible discrimination in recruitment of psychiatry residents? Am J Psychiatry 1997;154(11):1608–9.
  39. Go PH, Klaassen Z, Chamberlain RS. An ERAS-Based Survey Evaluating Demographics, United States Medical Licensing Examination Performance, and Research Experience Between American Medical Graduates and United States Citizen International Medical Graduates: Is the Bar Higher on the Continent? J Surg Educ 2012;69(2):143–8.
  40. Atsawarungruangkit A. Association between proportion of US medical graduates and program characteristics in gastroenterology fellowships. Med Educ Online 2017;22:5.
  41. Kinchen KS, Cooper LA, Wang NY, Levine D, Powe NR. The impact of international medical graduate status on primary care physicians’ choice of specialist. Med Care 2004;42(8):747–55.
  42. Cohen JJ, Gabriel BA, Terrell C. The Case For Diversity In The Health Care Workforce. Health Aff 2002;21(5):90–102.
  43. Andriole DA, Klingensmith ME, Schechtman KB. Diversity in General Surgery: A Period of Progress. Curr Surg 2005;62(4):423–8.
  44. Chen PG-C, Nunez-Smith M, Bernheim S, Berg D, Gozu A, Curry L. Professional Experiences of International Medical Graduates Practicing Primary Care in the United States. J Gen Intern Med 2010;25(9):947–53.
  45. Chen PG-C, Curry LA, Bernheim SM, Berg D, Gozu A, Nunez-Smith M. Professional Challenges of Non-US-Born International Medical Graduates and Recommendations for Support During Residency Training. Acad Med 2011;86(11):1383–8.
  46. Allman R, Perelas A, Eiger G. POINT: Should the United States Provide Postgraduate Training to International Medical Graduates? Yes. Chest 2016;149(4):893–5.
  47. Accreditation Council for Graduate Medical Education. Single GME Accreditation System. 2019. Accessed 7 Sept 2019.



Acknowledgements

The authors wish to gratefully acknowledge research support from the Temple University Sociology Department and the Center for the Humanities at Temple (CHAT). We also thank the NRMP for giving us permission to cite their data.

Author information



Corresponding author

Correspondence to Tania M. Jenkins, PhD.

Ethics declarations

The study was deemed non-human subjects research by Temple University IRB.

Conflict of Interest

All authors have nothing to disclose.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Previous Presentations

Eastern Sociological Society Meetings, March 15, 2019, Boston, MA.

Cite this article

Jenkins, T.M., Franklyn, G., Klugman, J. et al. Separate but Equal? The Sorting of USMDs and Non-USMDs in Internal Medicine Residency Programs. J GEN INTERN MED 35, 1458–1464 (2020).



Keywords

  • graduate medical education
  • workforce supply
  • admissions/selection