Survey Method
We developed an 18-item web-based questionnaire based on prior studies and on our interest in the domain of impact on clinical care. We chose a priori to pre-test the questionnaire at four sites selected to represent both large and small numbers of providers and both higher and lower usage of electronic referral. Based on the pre-test, we clarified the wording of items and added the domain of impact on clinical practice.
We mailed a letter introducing the study to all eligible participants before initiating the survey and then sent each participant an e-mail containing a link to the questionnaire. After sending weekly e-mail reminders for 3 weeks, we telephoned non-responders and then mailed them a paper version of the questionnaire. We collected questionnaires from October 2007 through January 2008. We offered a light catered lunch to the two clinics with the highest response rates.
The institutional review board at the University of California, San Francisco, approved the study.
Measures of Participant Characteristics
We asked participants to identify their training level (resident, mid-level provider, or attending physician), practice setting (hospital-based, COPC, or Consortium), and volume of care (frequency of seeing patients in clinic each week, frequency of using electronic referrals, and length of time using electronic referral, in months). Because we anticipated that individual preferences for technology would influence providers’ experiences with electronic referrals, we used Agarwal and Prasad’s validated 4-item scale, which asked participants to rate their willingness to use new information technology on a 5-point Likert scale.25
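As an illustration only (the column names are hypothetical, and whether the original analysis averaged or summed the items is not stated here), a minimal sketch of scoring the 4-item scale in Python/pandas might look like this:

    import pandas as pd

    # Hypothetical column names for the four technology-affinity items,
    # each scored 1 (lowest willingness) to 5 (highest willingness).
    AFFINITY_ITEMS = ["it_affinity_1", "it_affinity_2", "it_affinity_3", "it_affinity_4"]

    def affinity_score(responses: pd.DataFrame) -> pd.Series:
        """Average the four Likert items into a single affinity-for-IT score."""
        return responses[AFFINITY_ITEMS].mean(axis=1)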
Participant-Specific Process Measures
We asked providers to note when they submitted electronic referrals: “during,” “between,” “after” patient visits, “never, someone else submits for me,” or “never refer.” We defined time spent referring as a categorical variable with five mutually exclusive levels ranging from “less than 2 min from start to submit” to “greater than 10 min from start to submit.”
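Because these response categories are later treated as ordinal predictors, a minimal sketch of coding them as ordered values is shown below (Python/pandas with hypothetical names; the actual analysis was done in Stata):

    import pandas as pd

    def to_ordinal(series: pd.Series, ordered_levels: list) -> pd.Series:
        """Map mutually exclusive categorical responses to ordered codes 1..k.

        `ordered_levels` lists the response options from shortest to longest,
        e.g. from "less than 2 min from start to submit" through
        "greater than 10 min from start to submit".
        Responses not in the list (including missing values) receive code 0.
        """
        cat = pd.Categorical(series, categories=ordered_levels, ordered=True)
        return pd.Series(cat.codes + 1, index=series.index)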
Measures of Impact on Clinical Care
We asked participants to compare overall clinical care using electronic referrals to prior methods of referring patients to subspecialists on a 5-point Likert scale (“much worse” to “much better than prior methods”).
Measures of Impact on Clinical Practice
We assessed three practice domains, comparing electronic referrals to prior methods of referring: content, process, and access to subspecialists. We used a 5-point Likert scale ranging from “much better” to “much worse.” For content measures, participants rated subspecialty guidance of the workup and how well the subspecialist addressed the clinical question. For process measures, we asked participants to rate their ability to track the referral. To gauge access to subspecialists, we asked participants to rate the wait time for an available appointment in subspecialty clinics, as well as access to a subspecialist for urgent and non-urgent patient issues.
Statistical Analysis
For the main dependent variable, “overall clinical care,” we collapsed the 5-level Likert responses to two levels: “better” (“much better” and “somewhat better”) and “not better” (“no change,” “somewhat worse,” and “much worse”). We chose this dichotomization because of our a priori belief that the success of electronic referrals should be measured by its ability to improve clinical care. We tested for bivariate associations and then used a logistic regression model to determine adjusted odds ratios (AORs). We constructed stepwise multiple logistic regression models, considering as candidates all variables associated with the outcome at p < 0.20 in bivariate analyses, using Spearman’s rho for ordinal independent variables and chi-squared tests for dichotomous variables. After constructing a base model with the independent variables “time spent referring” and “affinity for information technology,” we added the other candidate factors singly and in order, retaining a newly added variable if its effect was statistically significant at p < 0.05.
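A minimal sketch of this model-building procedure is given below in Python (pandas, SciPy, statsmodels) rather than the Stata actually used; the variable names, response labels, and the order in which candidates are added are illustrative assumptions, not the published model.

    import numpy as np
    import pandas as pd
    from scipy import stats
    import statsmodels.api as sm

    # Hypothetical dichotomization of the 5-point overall-clinical-care item:
    # 1 = "better" ("much better" or "somewhat better"), 0 = "not better".
    BETTER = {"much better": 1, "somewhat better": 1,
              "no change": 0, "somewhat worse": 0, "much worse": 0}

    def screen_candidates(df, outcome, ordinal_vars, binary_vars, p_enter=0.20):
        """Return predictors associated with the outcome at p < p_enter in bivariate tests."""
        candidates = []
        for var in ordinal_vars:  # Spearman's rho for ordinal predictors
            _, p = stats.spearmanr(df[var], df[outcome], nan_policy="omit")
            if p < p_enter:
                candidates.append(var)
        for var in binary_vars:   # chi-squared test for dichotomous predictors
            _, p, _, _ = stats.chi2_contingency(pd.crosstab(df[var], df[outcome]))
            if p < p_enter:
                candidates.append(var)
        return candidates

    def fit_logit(df, outcome, predictors):
        """Fit a logistic regression of the dichotomous outcome on the predictors."""
        X = sm.add_constant(df[predictors].astype(float))
        return sm.Logit(df[outcome], X, missing="drop").fit(disp=0)

    def stepwise_aor(df, outcome, base_vars, candidates, p_retain=0.05):
        """Add candidates singly to the base model, retaining those with p < p_retain."""
        kept = list(base_vars)
        for var in candidates:
            if var in kept:
                continue
            trial = fit_logit(df, outcome, kept + [var])
            if trial.pvalues[var] < p_retain:
                kept.append(var)
        final = fit_logit(df, outcome, kept)
        ci = np.exp(final.conf_int())
        return pd.DataFrame({"AOR": np.exp(final.params),
                             "CI_low": ci[0], "CI_high": ci[1]})

In this sketch, base_vars would hold the coded “time spent referring” and “affinity for information technology” variables, and candidates the remaining screened predictors in whatever order the analysts specified.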
For the other dependent variables (i.e., measures of content, process, and access), we were interested in whether care improved, worsened, or was unchanged. Because the results were statistically similar for five versus three categories, we collapsed the 5-level responses into three categories: “better,” “same,” or “worse.”
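Illustratively (the exact wording of the middle response option is assumed here), this three-level collapse amounts to a simple recode:

    import pandas as pd

    # Hypothetical recode from the 5-point ratings to three categories.
    THREE_LEVEL = {"much better": "better", "somewhat better": "better",
                   "no change": "same",
                   "somewhat worse": "worse", "much worse": "worse"}

    def collapse_rating(ratings: pd.Series) -> pd.Series:
        """Map each 5-point rating to "better", "same", or "worse"."""
        return ratings.map(THREE_LEVEL)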
In Table 1, we present all candidate variables. In Table 2, we present the AORs for the independent variables for overall clinical care. We transferred all responses from a web-based server (DatStat Illume 4.5, Seattle, WA) in Excel format to Stata/SE 9.2 (StataCorp, College Station, TX) for analysis.
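Purely as an illustration of that transfer step (the analysis itself was performed in Stata, and the file name here is hypothetical), the exported spreadsheet could be read into an analysis environment as follows:

    import pandas as pd

    # Hypothetical file name for the Excel export from the survey server.
    responses = pd.read_excel("referral_survey_responses.xlsx")
    print(responses.shape)  # expect 298 rows, one per participant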
Table 1 Participant Characteristics (n = 298)
Table 2 Adjusted Odds Ratios of Physician Report that Clinical Care is Better as a Result of the Electronic Referral Process, by Physician Characteristics