Medicare hospital payment adjustments and nursing wages

  • Research Article
  • Published in: International Journal of Health Economics and Management

Abstract

Despite the importance of the nursing profession for healthcare delivery, costs, and quality, there is relatively little research on how provider payments to hospitals affect the labor market for nurses. This study examines the hospital wage index (HWI) adjustment to Medicare hospital payments, an area-level adjustment intended to compensate hospitals in high-cost labor markets. Since the HWI adjustment is based on hospital-reported labor costs, some argue that it incentivizes hospitals in concentrated markets to pay higher wages to nurses and other workers (the “circularity” critique). We investigate this critique using market-level data on the relative wages reported by nurses and hospital-level data on the average hourly wage for healthcare workers. For identification, we exploit a 2005 change in the geographic area used to define labor markets, which resulted in exogenous changes in the ability of some hospitals to influence their area’s wage index. We find that worker-reported relative nurse wages and hospital-reported healthcare worker wages are higher in some locations where hospitals experienced increased opportunities to game the circularity of the wage index, but these effects appear to be driven by pre-existing wage growth. Medicare’s HWI adjustment method does not appear to suffer from inefficiency due to circularity.


Notes

  1. Some related evidence comes from a study of 21 hospitals selected for an audit because they were large employers within their CBSAs, among other factors. Most hospitals overstated average hourly wages on their Medicare cost reports, some by as much as 21% (OIG 2007, p. 4).

  2. See, for example, Nicholson and Song (2001), Lindrooth et al. (2006), Acemoglu and Finkelstein (2008), and Kaestner and Guardado (2008) on the effects of Medicare payment on hospitals’ use of nurse labor, among other outcomes. Relatedly, Konetzka et al. (2004) examine nurse staffing in skilled nursing facilities.

  3. Hospitals typically treat a mix of Medicare and non-Medicare patients; care provided to non-Medicare patients may be reimbursed by other payers (e.g., Medicaid, private insurance), paid out-of-pocket by some uninsured patients, or uncompensated in certain cases (charity care and bad debt).

  4. Some hospitals are exempted from the IPPS. For example, Maryland hospitals are paid under that state’s all-payer rate setting system. Hospitals designated as “Critical Access Hospitals” are reimbursed based on the costs of the care they provide to Medicare patients.

  5. Medicare may adjust the HWI for certain groups of hospitals (IOM 2012, p. 86). The most common adjustment is reclassification of a hospital to a different labor market area with a higher HWI. Such adjustments are made at the request of an individual hospital and are reviewed and approved by the Medicare Geographic Classification Review Board (MGCRB). We discuss the potential impact of reclassification in the description of our empirical methods.

  6. Authors’ calculations from the 2004 and 2005 CMS Impact Files; counts exclude Puerto Rico.

  7. Clemens and Gottlieb (2014) use the price shock created by Medicare’s consolidation of physician payment areas in 1997 to identify the effect of physician payment on treatment decisions.

  8. Some studies find that nurse wages are higher in more concentrated hospital markets, which is consistent with classic monopsony (Hurd 1973; Link and Landon 1975; Feldman and Scheffler 1982; Bruggink et al. 1985). However, alternative empirical approaches yield differing degrees of support for monopsony power (Adamache and Sloan 1982; Hirsch and Schumacher 1995, 2005; Currie et al. 2005; Staiger et al. 2010). It is possible that gaming of the HWI adjustment process contributes to the weak evidence about monopsony power in the nurse labor market. Since hospitals in highly concentrated markets have a greater ability to influence the area-level HWI, market concentration might induce upward pressure on nurse wages, and prior studies thus may yield attenuated estimates of monopsony power.

  9. First, we measure HHI among the subset of IPPS hospitals, since only their wage data are used to construct the HWI. Second, we use two-stage least squares models to identify changes in HHI driven by the change from MSAs to CBSAs. Third, we interact changes in the HHI with a measure of Medicare’s importance to hospitals. We expect that if the wage index process creates circularity, then changes in the HHI will have a larger effect on wages for markets or hospitals where Medicare is more important to revenues.

  10. In the pre-period, therefore, we must assign MSA-level HHI values to each CBSA-level relative nurse wage; this is described below.

  11. ACS data from 2006 are too close to the policy change given that wage data are retrospective, and the 2005 ACS is the first to include the PUMA (location) variable required for our analysis.

  12. 8.5% of the nurses in our sample are male; we include these observations to increase the number of nurses and the precision of our estimates of \(\varphi _{k}\). We control for sex in Eq. (3).

  13. Healthcare-related occupations are defined as: medical scientists; physicians; dentists; optometrists; podiatrists; other health and therapy; pharmacists; dietitians and nutritionists; respiratory therapists; occupational therapists; physical therapists; speech therapists; therapists, not elsewhere classified; physicians’ assistants; psychologists; social workers; clinical laboratory technologists and technicians; dental hygienists; health record tech specialists; radiologic tech specialists; licensed practical nurses; health technologists and technicians, not elsewhere classified; biological technicians; private household cleaners and servants; guards, watchmen, doorkeepers; protective services, not elsewhere classified; dental assistants; health aides, except nursing; nursing aides, orderlies, and attendants; dental laboratory and medical appliance technicians.

  14. A PUMA is a place (often following county or Census-defined “place” borders) including at least 100,000 residents.

  15. The annual survey by the American Hospital Association (AHA) does not collect data on nurse wages, only employment. The Occupational Mix Survey, a survey that most hospitals are required to complete every three years, does not contain wage data prior to 2006, and our identification strategy requires data before 2005. Finally, the HWI itself is a poor choice for a dependent variable since, by design, the HWI will vary with hospital market composition. Consider a hospital in a small urbanized area that is defined as part of the large MSA in 2000 and then as a small CBSA in 2007. In both 2000 and 2007, wages in the hospital’s immediate urbanized area are higher than those in the rest of the MSA. The hospital’s HHI and HWI would both clearly increase between 2000 and 2007. However, this mechanical correlation over time has nothing to do with circularity (the hospital raising wages in response to the increased HHI).

  16. We include the CBSA-level shares of residents who are black, Hispanic, American Indian/Alaskan native, Asian and/or Pacific Islander, or another race. The omitted category is the share of residents who are white non-Hispanic. For education, we include the CBSA-level shares of residents that have some college education, an associate’s degree, a bachelor’s degree, and a master’s degree or more. The omitted category is the share of residents whose highest level of education is high school or less. In the case of marital status, we include the CBSA-level shares of residents who are: (1) divorced, separated, or widowed; and (2) never married. The omitted category is the share of residents who are married.

  17. The AHA data are proprietary and we have access only to the 1999 and 2009 data for this project. In the 2009 data, we identify CAHs using the last four digits of the hospital’s Medicare Provider Number or MPN, following ResDAC (2016). In the 1999 data, we identify CAH hospitals using a list obtained from the Flex Monitoring Team, a university consortium that studies issues affecting rural hospitals. This list includes Critical Access Hospitals as of January 2004, the earliest date available.

  18. We believe that the ability to request reclassification does not remove the incentive for hospitals in highly-concentrated markets to increase nurse wages. There is uncertainty about whether such requests will be approved or denied. A hospital’s application must demonstrate that its wages exceed those paid by other hospitals in the market to which it was geographically assigned and are comparable to the higher-paying hospitals in the market to which it seeks assignment. Using data from fiscal year 2007, which is within the time period of our analysis, Dalton et al. (2007) calculate that 23% of all IPPS hospitals experienced labor market reclassifications (p. 17).

  19. Of these 419 CBSAs, 320 experienced increases in HHI; the mean and maximum increases are 0.184 units and 0.98 units, respectively. By comparison, 79 CBSAs experienced decreases in HHI, and the mean and maximum decreases are −0.044 units and −0.415 units, respectively. The remaining 20 unchanged CBSAs had an HHI of 1 in both periods.

  20. As described in section “Assigning MSA-level HHI measures to 2000 CBSA-level relative nurse wages” of the “Data Appendix”, when assigning values of the MSA-level HHI to each CBSA-level relative nurse wage in 2000, we use a crosswalk between a CBSA and a single MSA. In each pairing of CBSA to MSA, we determine the fraction of the CBSA population that resided in the single MSA to which it is matched. The column 3 models exclude a small number of CBSAs that matched to MSAs where less than 75% of the population resided.

  21. These are calculated using the same Impact File data used to calculate the HHI measures, and in the same way—using the MSA in 2000 (matched to CBSA-level nurse wages based on population overlaps) and the CBSA in 2008.

  22. The median is 0.2065, which is near the threshold for classification as a “highly concentrated” market in horizontal merger considerations (U.S. Department of Justice and FTC 2010).

  23. Appendix Table 7 reports sample means. The first-stage results from 2SLS models are reported in Appendix Table 8.

  24. Results are available upon request.

  25. Such perceptions are illustrated by the case of a Nantucket hospital and the IPPS “rural floor” provision. Each hospital’s HWI must be at least as high as the HWI for rural hospitals in the same state (the rural floor). The Nantucket hospital’s conversion to the IPPS system increased hospital reimbursements elsewhere in Massachusetts by raising the state’s rural floor. Since the HWI adjustment is budget-neutral, the windfall to Massachusetts’ hospitals reduced reimbursements to hospitals in other states, which then advocated for policy changes.

  26. Specifically, we have \(940-574+48-11+29-5+2=429\) CBSAs.

References

  • Acemoglu, D., & Finkelstein, A. (2008). Input and technology choices in regulated industries: Evidence from the Health Care Sector. Journal of Political Economy, 116(5), 837–880.


  • Acumen. (2009). Revision of the Medicare Wage Index: Final Report, Part 1. Burlingame, CA: Acumen, LLC.

  • Adamache, K. W., & Sloan, F. A. (1982). Unions and hospitals: Some unresolved issues. Journal of Health Economics, 1(1), 81–108.


  • Akerlof, G. A., & Shiller, R. J. (2009). Animal spirits: How human psychology drives the economy, and why it matters for global capitalism. Princeton, NJ: Princeton University Press.


  • Auerbach, D. I., Buerhaus, P. I., & Staiger, D. O. (2015). Will the RN workforce weather the retirement of the Baby boomers? Medical Care, 53(10), 850–856.


  • BLS (Bureau of Labor Statistics). (Various years). Local Area Unemployment Statistics tables. Available at: http://www.bls.gov/lau/.

  • Bruggink, T. H., Finan, K. C., Gendel, E. B., & Todd, J. S. (1985). Direct and indirect effects of unionization on the wage levels of nurses: A case study of New Jersey hospitals. Journal of Labor Research, 6, 407–16.


  • Bun, M. J. G., & Harrison, T. D. (2014). OLS and IV estimation of regression models including endogenous interaction terms. LeBow College of Business, Drexel University School of Economics Working Paper WP 2014-3.

  • Centers for Medicare & Medicaid Services. (2017). (US). Medicare Program; Hospital Inpatient Prospective Payment Systems for Acute Care Hospitals and the Long- Term Care Hospital Prospective Payment System and Proposed Policy Changes and Fiscal Year 2018 Rates; Quality Reporting Requirements for Specific Providers; Medicare and Medicaid Electronic Health Record (EHR) Incentive Program Requirements for Eligible Hospitals, Critical Access Hospitals, and Eligible Professionals; Provider-Based Status of Indian Health Service and Tribal Facilities and Organizations; Costs Reporting and Provider Requirements; Agreement Termination Notices. Fed Regist., 82(81), 19796–20231.

  • Clemens, J., & Gottlieb, J. D. (2014). Do physicians’ financial incentives affect medical treatment and patient health? American Economic Review, 104(4), 1320–1349.


  • Currie, J., Farsi, M., & MacLeod, W. B. (2005). Cut to the bone? Hospital takeovers and nurse employment contracts. Industrial and Labor Relations Review, 58(3), 471–493.


  • Dalton, K., Pope, G. C., Adamache, W., Dulisse, B., & West, N. (2007). Potential Refinements to Medicare’s Wage Indexes for hospitals and other sectors. RTI International (June) No. 07-3.

  • DePasquale, C., & Stange, K. (2016). Labor supply effects of occupational regulation: Evidence from the nurse licensure compact. National Bureau of Economic Research Working Paper No. 22344.

  • DHHS (Department of Health and Human Services). (2001). Medicare State Operations Manual Provider Certification, Transmittal 26. Department of Health and Human Services, Health Care Financing Administration, March 1, 2001. Available at: www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/r26som.pdf. Last updated August 21, 2015.

  • Elliott, R., Ma, A., Sutton, M., Skatun, D., Rice, N., Morris, S., et al. (2010). The role of the staff MFF in distributing NHS funding: Taking account of differences in local labour market conditions. Health Economics, 19, 532–548.


  • Feldman, R., & Scheffler, R. (1982). The union impact on hospital wages and fringe benefits. Industrial and Labor Relations Review, 35, 196–206.


  • Gruber, J., & Kleiner, S. A. (2012). Do strikes kill? Evidence from New York State. American Economic Journal: Economic Policy, 4(1), 127–157.


  • Hirsch, B. T., & Macpherson, D. (2016). Union membership and coverage database from the CPS. Available at www.unionstats.com.

  • Hirsch, B. T., & Schumacher, E. J. (1995). Monopsony power and relative wages in the labor market for nurses. Journal of Health Economics, 14, 443–476.


  • Hirsch, B. T., & Schumacher, E. J. (2005). Classic monopsony or new monopsony? Searching for evidence in nursing labor markets. Journal of Health Economics, 24, 969–989.


  • HRSA (Health Resources and Services Administration). (2014). The future of the nursing workforce: National- and State-Level Projections, 2012–2025. Rockville, MD. Available at: https://bhw.hrsa.gov/sites/default/files/bhw/nchwa/projections/nursingprojections.pdf.

  • Hurd, R. W. (1973). Equilibrium vacancies in a labor market dominated by non-profit firms: The shortage of nurses. Review of Economics and Statistics, 55, 234–240.


  • IOM (Institute of Medicine). (2012). Geographic adjustment in Medicare payment. Phase I, Improving accuracy. In M. Edmunds & F. A. Sloan (Eds.), Committee on Geographic Adjustment Factors in Medicare Payment, Board on Health Care Services.

  • Kaestner, R., & Guardado, J. (2008). Medicare reimbursement, nurse staffing, and patient outcomes. Journal of Health Economics, 27(2), 339–361.


  • Keenan, P. (2003). The nursing workforce shortage: Causes, consequences, proposed solutions. The Commonwealth Fund Issue Brief April 2003. Available at: http://www.commonwealthfund.org/usr_doc/keenan_nursing.pdf.

  • Konetzka, R. T., Yi, D., Norton, E. C., & Kilpatrick, K. E. (2004). Effects of Medicare payment changes on nursing home staffing and deficiencies. Health Services Research, 39(3), 463–488.


  • Lindrooth, R. C., Bazzoli, G. J., Needleman, J., & Hasnain-Wynia, R. (2006). The effect of changes in hospital reimbursement on nurse staffing decisions at safety net and nonsafety net hospitals. Health Services Research, 41(3, part 1), 701–720.


  • Link, C. R., & Landon, J. H. (1975). Monopsony and union power in the market for nurses. Southern Economic Journal, 41, 649–659.


  • Mark, B., Harless, D. W., & Spetz, J. (2009). California’s minimum-nurse-staffing legislation and nurses wages. Health Affairs, 28(2), w326–334.


  • McHenry, P., & McInerney, M. (2014). The importance of cost of living and education in estimates of the conditional wage gap between Black and White women. Journal of Human Resources, Summer, 49(3), 695–722.


  • MedPAC (Medicare Payment Advisory Commission). (2007). An alternative method to compute the wage index. In Report to the Congress: Promoting greater efficiency in Medicare. Washington, DC: MedPAC.

  • MedPAC (Medicare Payment Advisory Commission). (2014). Payment basics: Hospital acute inpatient services payment system. Washington DC: Revised October 2014.

  • Missouri Census Data Center. (2010). MABLE/Geocorr2K: Geographic correspondence engine with census 2000 geography. Version 1.3.3 (August, 2010). Available at http://mcdc2.missouri.edu/websas/geocorr2k.html. Last updated August 21, 2015.

  • Naidu, S., Nyarko, Y., & Wang, S.-Y. (2015). Monopsony power in migrant labor markets: Evidence from the United Arab Emirates. Working paper.

  • NBER. (2012). CMS’s SSA to FIPS CBSA and MSA County Crosswalk. Available at: www.nber.org/data/cbsa-msa-fips-ssa-county-crosswalk.html. Last updated June 8, 2015.

  • NBER. (2014). CMS Impact File Hospital Inpatient Prospective Payment System (IPPS). Available at: www.nber.org/data/cms-impact-file-hospital-inpatient-prospective-payment-system-ipps.html. Last updated June 11, 2014.

  • Nicholson, S., & Song, D. (2001). The incentive effects of the Medicare indirect medical education policy. Journal of Health Economics, 21(6), 909–933.


  • OIG (Office of the Inspector General). (2007). Review of hospital wage data used to calculate inpatient prospective payment system wage indexes. A-01-05-00504. Washington DC: Department of Health and Human Services.

  • Phibbs, C., & Robinson, J. C. (1993). A variable-radius measure of local hospital market structure. Health Services Research, 28(3), 313–324.


  • Propper, C., & Van Reenen, J. (2010). Can pay regulation kill? Panel data evidence on the effect of labor markets on hospital performance. Journal of Political Economy, 118(2), 222–273.


  • ResDAC. (2016). Provider Number Table. Available at: www.resdac.org/cms-data/variables/provider-number. Accessed May 4, 2016.

  • Rice, N., & Smith, P. (1999). Approaches to Capitation and Risk Adjustment in Health Care: An International Survey. York: Centre for Health Economics, University of York.


  • Ruggles, S., Alexander, J. T., Genadek, K., Goeken, R., Schroeder, M. B., & Sobek, M. (2010). Integrated Public Use Microdata Series: Version 5.0 [Machine-readable database]. Minneapolis: University of Minnesota.

  • Schumacher, E. J. (1997). Relative Wages and Exit Behavior among Registered Nurses. Journal of Labor Research, 18(4), 581–592.


  • Staiger, D. O., Spetz, J., & Phibbs, C. S. (2010). Is There Monopsony in the Labor Market? Evidence from a Natural Experiment. Journal of Labor Economics, 28(2), 211–236.


  • U.S. Census Bureau. (2015). American FactFinder. Available at: http://factfinder2.census.gov. Last updated July 16.

  • U.S. Department of Justice and FTC. (2010). Horizontal merger guidelines. Washington, DC. Available at: https://www.justice.gov/atr/horizontal-merger-guidelines-08192010#5c. Accessed May 3, 2016.


Acknowledgements

We gratefully acknowledge financial support from the Schroeder Center for Health Policy at the College of William and Mary and we thank Michael Daly for excellent research assistance. We are also grateful for helpful comments from Daifeng He and session and seminar participants at the Federal Reserve Bank of Richmond Regional Economics Workshop, the Society of Labor Economists annual meeting, the biennial meeting of the American Society of Health Economists, and the University of Memphis.

Author information

Corresponding author

Correspondence to Peter McHenry.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Human and animal participants

This study uses de-identified and anonymous data. This article does not contain any studies with human participants or animals performed by any of the authors.

Appendices

Appendix

See Tables 5, 6, 7 and 8.

Table 5 Descriptive statistics for sample of 419 CBSAs
Table 6 Area-level analysis first-stage results; dependent variable is change in actual HHI, 2000 to 2008
Table 7 Descriptive statistics for hospital sample
Table 8 Hospital-level analysis first-stage results; all hospitals that did not reclassify, weighted by beds (n = 2170)

Data Appendix: Details on variable construction and linkage by geography

1. Relative nurse wages

Area-level relative nurse wages were constructed from respondent-level census data as described in the text. We calculate the hourly wage (W) as the annual wage and salary income from the prior year divided by the product of weeks worked and usual hours per week. The 2008 ACS weeks worked variable is recorded in bins (1–13, 14–26, 27–39, 40–47, 48–49, and 50–52 weeks). We merge in a crosswalk created with the pooled 2005–2007 ACS samples that include this interval-valued variable and also its continuous-valued companion variable. For each weeks worked bin in the ACS, we impute the average number of weeks reported by 2005–2007 ACS respondents in the same bin.
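The wage construction above can be sketched as follows. This is an illustrative sketch, not the authors' code; the bin means below are hypothetical placeholders for the values the paper computes from the pooled 2005–2007 ACS.

```python
# Illustrative sketch: hourly wage from ACS-style variables, with the
# binned 2008 weeks-worked variable replaced by a bin-mean imputation.
# The bin means here are HYPOTHETICAL; the paper derives them from the
# continuous weeks-worked variable in the pooled 2005-2007 ACS.

BIN_MEAN_WEEKS = {
    "1-13": 8.0, "14-26": 21.0, "27-39": 33.0,
    "40-47": 44.0, "48-49": 48.5, "50-52": 51.5,
}

def hourly_wage(annual_wage_income, weeks_bin, usual_hours_per_week):
    """Hourly wage W = annual wage income / (imputed weeks x usual hours)."""
    weeks = BIN_MEAN_WEEKS[weeks_bin]
    return annual_wage_income / (weeks * usual_hours_per_week)

print(round(hourly_wage(60000, "50-52", 40), 2))
```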

We drop observations with imputed values for income, occupation, or industry, and we drop outlier observations with hourly wages in the highest 1% and lowest 1% of the distribution in each sample (2000 and 2007–2008). We control for various determinants of wages in X, including years of potential labor market experience and indicators for sex, race/ethnicity, region, marital status, education, public sector employment, and part-time employment. We also control for the contemporaneous unemployment rate in the state using the 1999 and 2006–2007 annual average unemployment rates for each state reported by the Bureau of Labor Statistics (BLS, various years).

Areas were defined as CBSAs in both the 2000 Census and the pooled 2007–2008 ACS. Each CBSA is either a metropolitan or a micropolitan area. We follow CMS definitions and treat micropolitan CBSAs in the same state as part of a single “rest of state” area. In addition, eleven large CBSAs are divided into Metropolitan Divisions. CMS treats Metropolitan Divisions within a CBSA as separate areas, so we do the same. For example, there is a single CBSA code for Seattle–Tacoma–Bellevue, WA, but this large area is subdivided into Seattle–Bellevue–Everett, WA and Tacoma–Lakewood, WA. Finally, some parts of the country are not in CBSA-defined areas, and we count each of these as a part of its “rest of state” area.

Our original crosswalk file from MABLE/Geocorr2k (Missouri Census Data Center 2010) defined 940 CBSAs. Some respondents in our sample resided in areas outside these CBSAs, so we assigned them by state to 46 rest-of-state areas. In addition, 574 CBSAs are micropolitan CBSAs that we assign to 47 rest-of-state areas. These two sets of rest-of-state areas contained different states, so the total number of rest-of-state areas created is 48. Eleven CBSAs are subdivided into 29 Metropolitan Divisions. Our original crosswalk file uses the CBSA definitions from 2010, but our wage data are from 2007 and 2008. Therefore, we adapt the crosswalk to reverse changes that occurred between 2007 and 2010. This subtracts 5 CBSAs and adds 2 CBSAs to the crosswalk. Thus, we measure relative nurse wages in 429 CBSAs (see note 26). We exclude 8 Maryland CBSAs (where hospitals did not participate in the IPPS) and 2 CBSAs where there were no IPPS hospitals in 2007 (and therefore without an HHI value for that year). Our analysis sample includes the remaining 419 CBSAs.

Individual respondents were assigned to CBSAs based on the PUMA of residence. We used two versions of the MABLE/Geocorr2k crosswalk (Missouri Census Data Center 2010) between PUMAs and CBSAs; one version described overlaps between PUMAs and CBSAs with the population distribution in 2000, and the other used the 2009 population distribution. In cases where the PUMA matched a single CBSA, this is a simple assignment. However, some PUMAs overlap multiple CBSAs. For respondents in these PUMAs, we duplicated their observations to include one in each overlapping CBSA. When we estimate the relative nurse wage regression, we weight each observation by the population overlap between that observation’s original PUMA and the CBSA to which it was assigned. For example, suppose a PUMA shares 80%, 15%, and 5% of its population with CBSAs A, B, and C, respectively. Then each respondent in that PUMA becomes three observations in our regression sample, one assigned to each CBSA (A, B, and C). The respondents assigned to CBSAs A, B, and C receive regression weights of 0.80, 0.15, and 0.05, respectively. In our samples, 87% of respondents live in a PUMA that overlaps a single CBSA; among the remaining 13% of respondents, the average PUMA population share in the most-overlapping CBSA is 71%.
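The duplication-and-weighting step can be sketched as follows. This is illustrative only, not the authors' code; the PUMA and CBSA labels and overlap shares are hypothetical, mirroring the example above.

```python
# Illustrative sketch: expand respondents whose PUMA overlaps several
# CBSAs into one observation per overlapping CBSA, weighted by the
# PUMA-CBSA population overlap share. All labels are hypothetical.

def expand_by_overlap(respondents, puma_to_cbsa_shares):
    """Return one weighted (respondent, cbsa) row per overlapping CBSA."""
    rows = []
    for r in respondents:
        for cbsa, share in puma_to_cbsa_shares[r["puma"]].items():
            rows.append({**r, "cbsa": cbsa, "weight": share})
    return rows

# A PUMA sharing 80%, 15%, and 5% of its population with CBSAs A, B, C:
shares = {"P1": {"A": 0.80, "B": 0.15, "C": 0.05}}
rows = expand_by_overlap([{"id": 1, "puma": "P1"}], shares)
print(len(rows))                              # one row per overlapping CBSA
print(round(sum(r["weight"] for r in rows), 2))
```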

In Tables 9 and 10, we illustrate the number of observations used in the construction of CBSA-level nurse wages from the 2000 Census and the pooled 2007–2008 ACS. Our concern is the precision with which we estimate relative nurse wages at the CBSA (location) level. The average location-specific relative nurse wage relies on data from 157.2 RNs and 1312 other workers in the 2000 Census, with somewhat smaller sample sizes for the pooled 2007–2008 ACS. While there are CBSAs with quite small samples of RNs (as few as 5), most CBSAs rely on much larger samples. We believe these are sufficient sample sizes to have confidence in our measures of relative nurse wages across locations. The sample sizes in Table 10 support the pooling of multiple ACS surveys: CBSA-specific sample sizes would be quite small for individual years.

Table 9 Summary statistics for observation counts, by CBSA, 2000 census
Table 10 Summary statistics for observation counts, by CBSA, 2007–2008 ACS

2. Herfindahl-Hirschman Index (HHI)

We construct area-level values of the Herfindahl-Hirschman index using hospital-level data on the number of beds from the CMS Medicare IPPS Impact Files. Each year’s Impact File contains data collected in the summer preceding the start of the federal fiscal year (NBER 2014). For example, the 2000 Impact File includes data collected in the summer of 1999, since fiscal year 2000 began October 1, 1999. Therefore, to align the HHI measures with the relative nurse wage estimates obtained from Census data for 2000 and 2007–2008, we use data from the 2000 and 2007 Impact Files.

In each year, we construct area-level HHI estimates using the geographic area used by Medicare to construct hospital wage indices. In the 2007 Impact File, we define the HHI at the CBSA level. The 2007 Impact File does not contain the CBSA for hospitals in the Indian Health Service, so we assign CBSAs to these hospitals by merging the Impact File observations to a Mable/Geocorr county-to-CBSA crosswalk. Because this crosswalk uses FIPS county codes and the Impact File contains only SSA county codes, the merge requires that we first assign FIPS county codes to the Impact File hospitals using a crosswalk from SSA county to FIPS county, available from NBER (2012). We then define the CBSA-level HHI by calculating each hospital’s share of beds out of the total beds at Impact File hospitals in the CBSA, then squaring those shares and summing them within the CBSA. In this same step, we also define CBSA-level measures of the bed share at the largest four hospitals and the largest hospital’s share of all beds.

In the 2000 Impact File, we define the HHI at the MSA level. Because the 2000 Impact File does not report the MSA for a small number of hospitals in the Indian Health Service, we first obtain missing county identification codes for these hospitals from the 1999 AHA annual survey (using a merge by hospital Medicare Provider Number, or MPN), and we then obtain missing MSA codes with a county-MSA crosswalk. We define the MSA-level HHI by calculating each hospital’s share of beds out of the total beds at Impact File hospitals in the MSA, then squaring those shares and summing them within the MSA. In this same step, we also define MSA-level measures of the bed share at the largest four hospitals and the largest hospital’s share of all beds.
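The bed-share HHI computation used at both the CBSA and MSA level can be sketched as follows. This is an illustrative sketch with hypothetical hospitals and bed counts, not the authors' code.

```python
# Illustrative sketch: an area-level HHI as the sum of squared bed shares
# across the Impact File hospitals in one market area. Hospital names
# and bed counts are hypothetical.

def bed_share_hhi(beds_by_hospital):
    """Sum of squared bed shares across hospitals in one area (0 to 1]."""
    total = sum(beds_by_hospital.values())
    return sum((b / total) ** 2 for b in beds_by_hospital.values())

area = {"H1": 300, "H2": 100, "H3": 100}  # beds at three IPPS hospitals
print(round(bed_share_hhi(area), 3))      # shares 0.6, 0.2, 0.2
```

A single-hospital area yields an HHI of 1, matching the 20 unchanged single-hospital CBSAs noted in footnote 19.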

This same 2000 MSA-level measure of HHI serves as year 2000 values of the simulated HHI. To obtain year 2007 values of simulated HHI, we use the 2000 Impact File, and assign each hospital to the CBSA it would belong to in 2007 using a county-to-CBSA crosswalk from NBER (2005), but updated with a change that took place in July 2006 (CBSA 46940 was renamed CBSA 42680). Once each hospital was assigned to a CBSA, we define the CBSA-level HHI using each hospital’s share of beds out of the total beds at Impact File hospitals in the CBSA (and by squaring and summing the shares), as well as CBSA-level measures of the bed share at the largest four hospitals and the largest single hospital.

However, prior to assigning hospitals to their 2007 CBSA, we first needed to assign a county code to each hospital in the 2000 Impact File, since no county code is included in the file. For this step, we obtained street addresses of hospitals in the 2000 HCRIS (Medicare Cost Report) files, which could then be merged back to the 2000 Impact File by Medicare Provider Number (MPN). Note that the HCRIS data contain a county name, but not a county code. To obtain a county code, we geocoded hospital locations using Texas A&M geocoding services. The geocoding process yields a specific latitude and longitude for each hospital address and an indication of the quality of the geocoded match (i.e., whether the hospital address was geocoded to an exact location, a line segment, a zipcode, etc.). We then used a spatial join procedure in which all points (i.e., hospitals) were linked to counties in the U.S. Census 2000 County Boundary files. The output from this spatial join was inspected, and a county code was assigned to each hospital according to the following steps. First, if the county name obtained from the 2000 County Boundary file matched the county name in the HCRIS data (allowing for spelling errors or abbreviations), we used the county code from the 2000 County Boundary file. Alternatively, if the county name differed between the two sources, or was missing in the HCRIS data, but the address was geocoded to an exact location or at the TIGER/Line segment level, we used the county code from the 2000 County Boundary file. For those cases where the geocoding did not produce an exact match and where county name matches could not be used to confirm the county code, we used a 2000 Mable/Geocorr crosswalk between ZCTA and county to identify ZCTAs that lie entirely within a single county. If a hospital geocoded to the centroid of such a ZCTA, that hospital was assigned that county's code.
For the small number of remaining hospitals that were not assigned following the steps above, the county code was confirmed either using address information from the 1998 HCRIS data or using the hospital's county code from the 1999 American Hospital Association (AHA) annual survey.
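The county-assignment decision rule described above can be sketched as a single function. All field names and the name normalization here are hypothetical simplifications; in particular, the actual name matching allowed for spelling errors and abbreviations, which a simple lowercase comparison does not capture:

```python
def assign_county_code(rec, zcta_to_single_county):
    """Decision rule for assigning a county code to a geocoded hospital.

    `rec` is a dict with hypothetical keys:
      'hcris_county'    - county name from the HCRIS cost report (may be None)
      'boundary_county' - county name from the spatial join with Census boundaries
      'boundary_fips'   - county code from the spatial join
      'match_level'     - geocoder match quality ('exact', 'tiger_line', 'zcta', ...)
      'zcta'            - ZCTA the address geocoded to (or None)
    `zcta_to_single_county` maps ZCTAs lying entirely within one county
    (from the 2000 Mable/Geocorr crosswalk) to that county's code.
    """
    def same_name(a, b):
        # Crude normalization; the actual matching tolerated misspellings.
        return bool(a) and bool(b) and a.strip().lower() == b.strip().lower()

    # Step 1: county names agree across sources -> use the boundary-file code.
    if same_name(rec["hcris_county"], rec["boundary_county"]):
        return rec["boundary_fips"]
    # Step 2: high-quality geocode -> use the boundary-file code anyway.
    if rec["match_level"] in ("exact", "tiger_line"):
        return rec["boundary_fips"]
    # Step 3: ZCTA-centroid match where the ZCTA lies in a single county.
    if rec["match_level"] == "zcta" and rec["zcta"] in zcta_to_single_county:
        return zcta_to_single_county[rec["zcta"]]
    # Otherwise: resolve manually (1998 HCRIS addresses or 1999 AHA survey).
    return None
```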

3. Assigning MSA-level HHI measures to 2000 CBSA-level relative nurse wages

Each CBSA-level relative nurse wage estimate from the 2000 Census was assigned an MSA-level HHI measure constructed from the 2000 Impact File, using a crosswalk that maps each CBSA to a single MSA. This CBSA-to-single-MSA crosswalk was constructed for this project from a 2005 crosswalk available from NBER (NBER 2012) and based on CMS data. The NBER/CMS crosswalk lists both the CBSA and the MSA to which each U.S. county is assigned. As noted above, the NBER/CMS crosswalk uses the same county borders in New England that CMS uses in the geographic adjustment of hospital payments, and is thus preferred in those states over the OMB-defined MSAs used in Mable/Geocorr, which follow town and city borders in New England. Since the crosswalk reflects assignments as of 2005 and our goal was to assign CBSAs from 2007 to an MSA, the NBER/CMS crosswalk was first updated to reflect a change effective July 1, 2006, in which CBSA 46940 was renamed CBSA 42680.

The NBER/CMS crosswalk was then combined with 2000 county-level population counts from the U.S. Census Bureau (American FactFinder), and the population data were used to define the share of CBSA residents that lived in a given MSA for each possible CBSA-MSA match. The CBSA-to-single-MSA crosswalk was produced by assigning each CBSA to the MSA in which the greatest share of the CBSA’s residents lived. To illustrate this process, Table 11 shows five hypothetical counties that comprise a single CBSA (CBSA C1), four of which fall into MSA M1 and one of which falls into MSA M2. Since 85% of the CBSA residents lived in counties located within MSA M1, and only 15% lived in the county located within MSA M2, CBSA C1 is assigned to MSA M1.
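The population-share assignment can be sketched as follows, using hypothetical county populations that mirror the Table 11 example (four counties in MSA M1 holding 85% of CBSA C1's residents, one county in MSA M2 holding 15%):

```python
import pandas as pd

# Counties of one hypothetical CBSA with illustrative 2000 populations.
counties = pd.DataFrame({
    "cbsa": ["C1"] * 5,
    "msa":  ["M1", "M1", "M1", "M1", "M2"],
    "pop":  [40_000, 20_000, 15_000, 10_000, 15_000],
})

# Share of each CBSA's residents living in each candidate MSA.
shares = (counties.groupby(["cbsa", "msa"])["pop"].sum()
          .div(counties.groupby("cbsa")["pop"].sum(), level="cbsa"))

# Assign each CBSA to the MSA holding the greatest share of its residents.
crosswalk = shares.groupby("cbsa").idxmax().map(lambda ix: ix[1])
```

Here `crosswalk["C1"]` resolves to `"M1"`, since 85% of the CBSA's residents live in counties within MSA M1.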

Table 11 Illustration of CBSA-MSA crosswalk construction

Tables 12 and 13 provide additional details on the crosswalk. As shown in Table 12, some CBSAs overlapped with as many as five different MSAs, but a large majority of CBSAs (64%) overlapped with only one MSA, and another large share overlapped with only two MSAs (30%). Similarly, Table 13 shows that in 267 of the 419 CBSAs in our sample, 100% of CBSA residents lived in a single MSA. Another 78 CBSAs were matched to an MSA where more than 90% (but less than 100%) of residents lived, and another 52 CBSAs were matched to an MSA where more than 80% (but less than 90%) of residents lived.

Table 12 MSA-CBSA overlap
Table 13 Percent of the CBSA population residing in its matched MSA

We examined more closely the group of CBSAs that matched to a single MSA. This group consists of two types: (1) CBSAs whose boundaries are identical to those of a single MSA (these may be either urban or rest-of-state areas), and (2) CBSAs formed from a sub-area within a single MSA (again, either urban or rest-of-state areas). Of the 267 CBSAs that match to a single MSA, 151 are of the first type and 116 are of the second type. This latter group is interesting for two reasons. First, the adoption of CBSAs (as opposed to MSAs) in defining the hospital market creates exogenous variation in hospital concentration in the market. Second, because a new CBSA was formed from a subset of an MSA, we can be confident about linking the CBSA-level relative nurse wage in 2000 to the MSA-level market concentration measure in that year, since all the nurses residing in that CBSA were living in areas that were part of the MSA.

We also examined those CBSAs that match to more than one MSA (prior to our selection of a single MSA); this may arise for several reasons. In some cases, one urban CBSA may have formed from two or more urban MSAs; in other cases, one urban CBSA may have formed from one or more urban MSAs and part of a rest-of-state area. Alternatively, a rest-of-state CBSA may have been formed by combining part of a rest-of-state MSA with part or all of one or more urban MSAs. In these cases, the adoption of CBSAs in defining the hospital market also creates exogenous variation in hospital concentration. Because the new CBSA was formed from multiple MSAs, this introduces some noise into the assignment of the CBSA to a single MSA for the purposes of linking the CBSA-level relative nurse wage in 2000 to the MSA-level hospital concentration measure in that year. For this reason, we use the data on population overlap in Table 13 to conduct some sensitivity tests of our results, as described in the text.

4. Area-level bite measures

We use two measures to capture the importance of the Medicare HWI adjustment process to hospitals and workers in an area. The first measure is the share of all inpatient days in a CBSA that occur in IPPS hospitals and are paid by Medicare. The second is an index that reflects the degree to which Medicare days at IPPS hospitals are concentrated in the CBSA. Both measures are constructed from information in the 1999 AHA annual survey (as opposed to Impact File data) to account for the importance of the HWI among all hospitals in the CBSA. To construct each measure, we first identify those hospitals in the 1999 AHA survey that are IPPS hospitals. If a hospital has a valid MPN, we identify IPPS status from its last four digits (between 0001 and 0879 for IPPS hospitals), per CMS documentation (DHHS 2001, p. 2–164). If a hospital has a missing MPN or an MPN equal to “777777,” we use other fields on the survey to select those hospitals that are “community” and “general medical and surgical” hospitals; for each of these hospitals, we obtained the MPN from an online search and used its last four digits to designate the hospital as IPPS or not. In three cases, we could not obtain the MPN through a manual lookup; we designated two of these three as IPPS hospitals because they did not appear on a list of Critical Access Hospitals (which are not paid under IPPS rules). In addition to identifying IPPS hospitals, the construction of these measures also requires assigning each hospital in the 1999 AHA survey to the CBSA it would belong to in 2007. For this step, we use the same NBER/CMS crosswalk described above (after it was modified for the one CBSA change that took place in July 2006).
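The MPN-based classification rule can be sketched as a small helper. The example MPN values below are made up for illustration; only the last-four-digits range (0001–0879) comes from the CMS documentation cited above:

```python
def is_ipps_mpn(mpn):
    """Classify a hospital as IPPS from its Medicare Provider Number.

    Per CMS documentation, IPPS hospitals have a final four-digit sequence
    between 0001 and 0879. Missing or placeholder MPNs ("777777") cannot be
    classified this way and require fallback to other survey fields.
    """
    if mpn is None or mpn == "777777":
        return None  # fall back to AHA survey fields / manual lookup
    last4 = int(str(mpn)[-4:])
    return 1 <= last4 <= 879
```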

Once IPPS hospitals and CBSAs were identified, we constructed the Medicare-IPPS share as the total number of Medicare inpatient days occurring at IPPS hospitals in the CBSA divided by the total number of inpatient days occurring at all hospitals in the CBSA. We constructed the Medicare-IPPS HHI by calculating, for each hospital, the number of its Medicare inpatient days paid under IPPS as a share of all inpatient days occurring at all hospitals in the CBSA (this share equals 0 for non-IPPS hospitals), then squaring the shares and summing the squared shares within each CBSA.
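Both bite measures can be sketched together; the day counts below are illustrative, not actual AHA survey values:

```python
import pandas as pd

# Illustrative inpatient-day counts for three hospitals in one CBSA.
hosp = pd.DataFrame({
    "cbsa": ["X", "X", "X"],
    "ipps": [True, True, False],
    "medicare_days": [6_000, 3_000, 2_000],
    "total_days": [10_000, 5_000, 5_000],
})

# All inpatient days at all hospitals in each CBSA.
totals = hosp.groupby("cbsa")["total_days"].sum()

# Medicare days at IPPS hospitals (zero for non-IPPS hospitals).
med_ipps = hosp["medicare_days"].where(hosp["ipps"], 0)

# Medicare-IPPS share: Medicare-IPPS days / all inpatient days in the CBSA.
med_share = med_ipps.groupby(hosp["cbsa"]).sum() / totals

# Medicare-IPPS HHI: each hospital's Medicare-IPPS days as a share of all
# CBSA inpatient days, squared and summed within the CBSA.
hosp_share = med_ipps / hosp["cbsa"].map(totals)
med_hhi = (hosp_share ** 2).groupby(hosp["cbsa"]).sum()
```

In this example, the CBSA has 20,000 inpatient days, of which 9,000 are Medicare days at IPPS hospitals, so the Medicare-IPPS share is 0.45; the hospital-level shares 0.30, 0.15, and 0 yield an HHI of 0.09 + 0.0225 = 0.1125.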

Cite this article

McHenry, P., Mellor, J. Medicare hospital payment adjustments and nursing wages. Int J Health Econ Manag. 18, 169–196 (2018). https://doi.org/10.1007/s10754-017-9232-x
