The current study is a secondary analysis of data from the Language Access System Improvement (LASI) study, an observational study designed to evaluate the effects on communication and clinical outcomes of a system intervention that (i) increased access to video medical interpretation (VMI) and (ii) certified bilingual physicians’ language skills. The larger LASI study occurred in two phases: “pre,” before the system intervention was implemented (February 2014–April 2014), and “post,” after implementation (January 2016–July 2017). Because VMI was not available until the post period, and in order to compare across three professional interpretation modalities (in-person, VMI, and telephone), only patients from the post sample were included in this analysis. The methods have been reported in detail elsewhere13, 14 and are briefly described here.
The LASI study took place in a large, urban, academic primary care practice. During the time frame for this analysis, in-person, VMI, and telephone modalities of professional interpretation were regularly available to facilitate communication; in-person staff interpreters needed to be scheduled in advance, whereas VMI and telephone interpreters were available on demand from contracted vendors.
Patients were eligible to participate in the LASI study if they were ≥ 40 years old; received primary care within the practice; self-identified as Chinese or Latino; preferred to receive their medical care in English, Cantonese, Mandarin, or Spanish; and were able to complete a telephone survey. Within 1 week following a primary care (index) visit, patients were contacted by language-concordant bilingual and bicultural research assistants and invited to complete a telephone survey about their communication experiences during their index visit. Participants were categorized as having limited English proficiency (LEP) using our previously validated algorithm.15 Only those participants with LEP who reported on the survey that professional interpretation was provided at their index visit were included in this analysis.
Patients who reported using a professional interpreter—in person, via VMI, or by telephone—during their index visit answered five “detailed” items and a sixth “overall” item about the quality of interpretation: (1) How was the interpreter at listening to what you had to say? (2) How was the interpreter at explaining what you said to the doctor? (3) How was the interpreter at helping you understand your medical problems? (4) How was the interpreter at helping you understand medical test results? (5) How was the interpreter at helping you understand your treatment plan? (6) Overall, how was the quality of the interpretation? The items used five ordered response options (1 = poor, 2 = fair, 3 = good, 4 = very good, and 5 = excellent) and were adapted from a validated communication measure, Interpersonal Processes of Care,16 to be specific to the interpretation rather than to the provider’s communication. The a priori plan was to score responses to the five “detailed” items as a scale and leave the sixth “overall” item as a separate single-item measure.
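The planned scoring above can be illustrated with a minimal sketch on simulated data: the scale score is simply each patient’s mean across the five “detailed” items, and internal consistency can be summarized with Cronbach’s alpha. The simulated responses and sample size here are illustrative assumptions, not study data.

```python
# Illustrative sketch (simulated data, NOT study data): scoring the 5-item
# interpretation quality scale and computing Cronbach's alpha.
import numpy as np

rng = np.random.default_rng(42)
# Simulate 200 patients x 5 items on the 1-5 ordinal response scale,
# driven by a shared latent "quality" factor so items are correlated.
latent = rng.normal(4.0, 0.8, size=(200, 1))
items = np.clip(np.rint(latent + rng.normal(0, 0.4, size=(200, 5))), 1, 5)

# Scale score: mean of the five "detailed" items (theoretical range 1-5)
scale_scores = items.mean(axis=1)

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(round(float(alpha), 3))
```

Because the simulated items share a strong common factor, alpha comes out high, mirroring (though not reproducing) the excellent internal consistency reported for the actual scale.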
Key Independent Variables and Covariates
Among patient-specific characteristics, age, gender, education level, preferred language, and English ability were self-reported. Preferred language was designated by patient responses to the question, “In what language do you prefer to receive your medical care?” and English ability was based on the US Census question “How well do you speak English?” LEP status was determined using our previously validated algorithm, which combines preferred language and English ability.15 Health literacy was determined using a single, validated question, “How confident are you filling out medical forms by yourself?”17, 18 Additional characteristics were abstracted from the electronic medical record: Elixhauser comorbidities19 and visit-level characteristics, including whether the index visit was with the patient’s own primary care provider, the number of problems addressed during the visit, the number of clinic visits in the prior 12 months, and the length of time as an established patient in the practice.
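One common operationalization of LEP status that combines preferred language with the Census English-ability item can be sketched as follows. This is a hedged illustration only: the specific combination rule and threshold below are assumptions, and the actual validated algorithm cited above (ref. 15) may differ.

```python
# Hypothetical sketch of an LEP classification rule combining preferred
# language and the US Census English-ability item. The "or" rule and the
# "very well" threshold are assumptions for illustration; the study's
# validated algorithm (ref. 15) may combine these inputs differently.
def limited_english_proficiency(preferred_language: str, english_ability: str) -> bool:
    """Classify as LEP if the patient prefers medical care in a non-English
    language or reports speaking English less than "very well"."""
    prefers_non_english = preferred_language.strip().lower() != "english"
    below_threshold = english_ability.strip().lower() != "very well"
    return prefers_non_english or below_threshold

print(limited_english_proficiency("Cantonese", "well"))      # True
print(limited_english_proficiency("English", "very well"))   # False
```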
We performed descriptive analyses of demographic characteristics to assess their distributions across the different modalities of professional interpretation. A confirmatory factor analysis model of the five “detailed” interpretation quality items fit very well (comparative fit index [CFI] = 0.997).20 Next, scale scores were calculated as each patient’s mean response to the five items, yielding a scale with a theoretical range from 1 (“poor”) to 5 (“excellent”). Internal consistency of the resulting 5-item scale was also excellent (Cronbach’s alpha = 0.964). Distributions across the different modalities of professional interpretation were assessed separately for the 5-item scale and the sixth “overall” quality item. We then fit multivariable linear mixed models of each outcome (5-item quality scale, single-item “overall” quality score). Each model included random intercepts for visit physicians to account for clustering of visits within physicians, as well as covariates describing patient language, age, gender, education, comorbidities, whether the provider was the patient’s own primary care provider, the number of problems listed in the visit note, health literacy, frequency of visits in the past year, and length of time as a patient within the practice. In addition, interaction effects (up to three-way) between language, education, and interpreter modality were initially considered, and a backward elimination process dropped non-significant interaction terms (p > 0.10). Analyses were conducted using Stata version 14.2 (StataCorp LP, College Station, TX). The study was approved by the Committee on Human Research at the University of California, San Francisco.
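The mixed-model step can be sketched as follows. This is a minimal illustration on simulated data using Python’s statsmodels rather than Stata (which the study actually used); the variable names, the reduced covariate set, and the simulated effect sizes are all assumptions for illustration.

```python
# Hypothetical sketch: linear mixed model of interpretation quality with a
# random intercept per visit physician, on simulated data. Only two of the
# study's covariates (modality, age) are included for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_phys, n_per = 20, 15                      # 20 physicians, 15 visits each
physician = np.repeat(np.arange(n_phys), n_per)
phys_effect = rng.normal(0, 0.3, n_phys)[physician]   # physician-level clustering
modality = rng.choice(["in_person", "vmi", "telephone"], n_phys * n_per)
age = rng.integers(40, 90, n_phys * n_per)
quality = 4.0 + phys_effect + rng.normal(0, 0.5, n_phys * n_per)

df = pd.DataFrame({"quality": quality, "modality": modality,
                   "age": age, "physician": physician})

# Random intercepts for physicians account for clustering of visits
model = smf.mixedlm("quality ~ C(modality) + age", df, groups=df["physician"])
result = model.fit()
print(result.summary())
```

In the actual analysis, the full covariate list described above would enter the formula, and non-significant interaction terms (p > 0.10) would be removed by backward elimination before reporting the final model.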