Setting
Beginning in July 2005, San Francisco General Hospital (SFGH), a city and county hospital staffed by University of California, San Francisco physicians, implemented eReferral to improve the process of specialty referrals. SFGH is the main provider of specialty care for the uninsured and underinsured in San Francisco. SFGH and its affiliated clinics use a hybrid paper and electronic health record (EHR) documentation system. Once a clinic adopts eReferral, it becomes the only mechanism for new patient referrals. We examined adult medical specialty clinics as they adopted eReferral: cardiology and pulmonary (January 2007), endocrinology and rheumatology (May 2007), and renal (January 2008), as well as two surgical specialty clinics, neurosurgery and orthopedics (July 2007).
Description of Referral Processes
Prior to eReferral, non-emergent referrals for specialty appointments were managed with a paper-based system. Referring providers filled out forms that were faxed or hand-carried to the specialty clinic. In all but two specialty clinics, clerical staff assigned appointments on a first-received, first-scheduled basis. If a referring clinician needed an appointment sooner than the one offered, he or she could page the on-call subspecialty resident or fellow to request permission to “overbook” the patient.
With eReferral, referring clinicians use a web-based application that is integrated with SFGH’s EHR, the Siemens Invision/Lifetime Clinical Record. Referring providers initiate an electronic form that is populated with their contact information and the patient’s contact, demographic, and clinical data from the EHR. A free-text field is provided to enter the reason for consultation and pertinent clinical information (see Fig. 1). A designated specialty reviewer (a physician specialist in medical specialties or a nurse practitioner in surgical specialties) reviews the referrals. Reviewers adjudicate each referral within 72 hours of submission, deciding whether or not to schedule an appointment. If the reviewer deems that an appointment is necessary and that there is sufficient information for the specialist seeing the patient to make a clinical decision, the appointment is scheduled; the reviewer can request either the next available regular appointment or an expedited appointment. When the clinical question is not clear, the necessary work-up is not complete, or the problem can be handled in the primary care setting, an appointment is not scheduled; instead, the reviewer can ask for clarifying information, guide further evaluation, or provide education on how the referring provider can manage the issue. Referring and reviewing clinicians can communicate via eReferral in an iterative fashion until an appointment is scheduled or both agree that the patient does not need one (Fig. 2). All correspondence is conducted within eReferral and captured in the patient’s EHR. At the time of a specialty visit, the electronic referral form is printed and available to the specialty clinician as a hardcopy.2
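The adjudication loop described above can be summarized as a small set of referral states and a reviewer decision. The following Python sketch is purely illustrative of that workflow, not the actual Siemens-integrated implementation; all class, field, and function names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical states mirroring the adjudication loop described in the text.
class ReferralState(Enum):
    PENDING_REVIEW = "pending review"            # submitted, awaiting reviewer (within 72 h)
    SCHEDULED_REGULAR = "scheduled (regular)"
    SCHEDULED_EXPEDITED = "scheduled (expedited)"
    RETURNED_TO_REFERRER = "returned"            # clarification, further work-up, or education
    RESOLVED_WITHOUT_VISIT = "resolved without visit"

@dataclass
class Referral:
    reason_for_consultation: str
    clinical_information: str
    state: ReferralState = ReferralState.PENDING_REVIEW
    correspondence: list = field(default_factory=list)  # iterative messages, captured in the EHR

def adjudicate(referral: Referral, needs_visit: bool, sufficient_information: bool,
               expedite: bool = False) -> Referral:
    """Hypothetical reviewer decision: schedule the visit, or return the referral with guidance."""
    if needs_visit and sufficient_information:
        referral.state = (ReferralState.SCHEDULED_EXPEDITED if expedite
                          else ReferralState.SCHEDULED_REGULAR)
    else:
        referral.state = ReferralState.RETURNED_TO_REFERRER
    return referral
```

In this simplified view, a returned referral may cycle between the referring and reviewing clinicians until it is either scheduled or resolved without a visit.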
In prior work that examined referring clinicians eligible to use eReferral, we found that 53.5% were attending physicians, 22.9% were nurse practitioners, and 23.6% were residents.2 Referral volume varied by specialty, ranging from 20 to 250 referrals per month. Reviewers spent between 5 and 15 minutes on each referral request and between 1 and 6 hours per week reviewing. During the study period, reviewers’ time was paid for by a grant.
Study Participants
We calculated our sample size and then determined the number of questionnaires to collect for each specialty based on patient volume. For each specialty clinic, we selected time periods for questionnaire distribution so that approximately half of the sampled sessions occurred while patients were referred via the paper-based system and half after the adoption of eReferral. Within these time periods, we selected specific clinic dates at random. For paper-based responses, we selected dates prior to the initiation of eReferral. For eReferral responses, because of scheduling backlogs, we waited until the majority of new patient appointments would have been scheduled via eReferral (up to 6 months after the initiation of eReferral). We defined a new patient visit as the patient’s first visit to that clinic within the preceding two years. We asked the first clinician who saw a new patient during the randomly selected clinic sessions to fill out the questionnaire. If the first clinician to see the patient was a medical student, we excluded the response from the study.
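As an illustration of this session-sampling approach, the sketch below draws clinic dates at random from the pre-adoption period and from the period after a post-adoption lag. The function name, the number of sessions per period, and the exact lag are hypothetical and do not reflect the study’s precise parameters.

```python
import random
from datetime import date, timedelta

def sample_clinic_sessions(session_dates, adoption_date, lag_months=6,
                           n_per_period=10, seed=0):
    """Randomly select clinic sessions from the pre-adoption period and from the
    period after a post-adoption lag. Lag length and counts here are illustrative."""
    rng = random.Random(seed)
    lag_end = adoption_date + timedelta(days=30 * lag_months)
    pre = [d for d in session_dates if d < adoption_date]
    post = [d for d in session_dates if d >= lag_end]
    return (rng.sample(pre, min(n_per_period, len(pre))),
            rng.sample(post, min(n_per_period, len(post))))

# Example with hypothetical weekly clinic dates around a January 2007 adoption date.
sessions = [date(2006, 1, 2) + timedelta(weeks=w) for w in range(104)]
pre_dates, post_dates = sample_clinic_sessions(sessions, date(2007, 1, 1))
```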
Survey
Survey Method
We developed a 6-item paper-based questionnaire to assess the appropriateness of each specialty clinic visit and the adequacy of information about the clinical question and the pre-visit work-up. Research or clinic staff appended questionnaires to patient charts for all eligible new-patient appointments during the selected clinic sessions. Questionnaires did not require the specialty provider to reveal any identifying information other than his or her level of training. We instructed clinicians to leave completed questionnaires in a collection envelope that study staff retrieved at the conclusion of each session.
For each participating clinic, research assistants attended the first 5–10 clinic sessions to answer any procedural questions; we did not share the study hypotheses. After these initial visits, study staff provided intermittent education about the survey to encourage participation. To calculate the response rate, we randomly selected 1–3 questionnaire-distribution sessions per specialty clinic and compared the number of surveys returned with the known number of new patients seen during each session. We received approval from the Institutional Review Board at the University of California, San Francisco.
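The response-rate calculation is simple arithmetic; the following one-line function, with purely illustrative counts, shows the quantity we computed for each audited session.

```python
def response_rate(surveys_returned: int, new_patients_seen: int) -> float:
    """Response rate for an audited session: questionnaires returned / known new patients seen."""
    return surveys_returned / new_patients_seen

# Illustrative counts only: 8 questionnaires returned out of 10 new patients seen.
print(f"{response_rate(8, 10):.0%}")  # -> 80%
```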
Questionnaire Content
Participants first identified their level of training (medical student, resident, fellow, nurse practitioner/physician assistant, attending, or other) and specified whether the patient was referred via eReferral, via a non-eReferral (paper-based) method, or whether the referral method was not known. Clinicians rated the difficulty of identifying the reason for the consultation as very difficult, somewhat difficult, or not difficult at all, and rated the appropriateness of the referral as completely appropriate, somewhat appropriate, or not appropriate. Clinicians indicated whether they would recommend a follow-up visit for the patient and, if so, whether the follow-up visit could have been avoided if there had been a more complete work-up prior to the initial appointment. All responses were based on the clinical judgment of the specialist; we did not provide definitions of difficulty in identifying the reason for referral or of the appropriateness of referral.
Statistical Analysis
We divided data from all completed questionnaires into two categories, medical and surgical specialty groups, and used the new patient visit as the unit of analysis.
For mode of referral, we grouped “don’t know” responses with paper-based referrals because all “don’t know” responses occurred before the initiation of eReferral. We collapsed “somewhat difficult” and “very difficult” responses into a single difficult category. We collapsed “completely appropriate” and “somewhat appropriate” responses into a single appropriate category, in recognition that there is a range of appropriateness that may warrant further specialty evaluation.
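A minimal pandas sketch of this recoding is shown below; the data frame, column names, and values are hypothetical and serve only to make the collapsing rules concrete.

```python
import pandas as pd

# Hypothetical responses; column names and values are illustrative.
df = pd.DataFrame({
    "referral_mode": ["eReferral", "paper", "don't know", "eReferral"],
    "difficulty": ["not difficult at all", "somewhat difficult",
                   "very difficult", "not difficult at all"],
    "appropriateness": ["completely appropriate", "somewhat appropriate",
                        "not appropriate", "completely appropriate"],
})

# "Don't know" responses occurred only before eReferral, so group them with paper-based referrals.
df["referral_mode"] = df["referral_mode"].replace({"don't know": "paper"})

# Collapse the three-level ratings into binary categories.
df["difficult"] = df["difficulty"].isin(["somewhat difficult", "very difficult"])
df["appropriate"] = df["appropriateness"].isin(["completely appropriate", "somewhat appropriate"])
```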
We analyzed data from the medical and surgical specialty groups separately. For each questionnaire item, we compared the proportion of positive or negative clinician responses for referrals made via paper-based methods with the proportion for referrals made via eReferral. We assessed statistical significance using Fisher’s exact test for 2 × 2 contingency tables, with a p-value threshold of 0.05. We used simple descriptive statistics to characterize the participants and their responses.
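As an illustration of this comparison, the sketch below applies Fisher’s exact test (scipy.stats.fisher_exact) to a hypothetical 2 × 2 table; the counts are invented for demonstration and do not reflect study data.

```python
from scipy.stats import fisher_exact

# Hypothetical 2 x 2 table: rows are referral mode (paper-based vs. eReferral),
# columns are the collapsed clinician response (difficult vs. not difficult).
table = [[30, 70],   # paper-based: difficult, not difficult
         [15, 85]]   # eReferral:   difficult, not difficult

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.3f}; significant at the 0.05 level: {p_value < 0.05}")
```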