Lung cancer is the second most commonly diagnosed cancer and the deadliest cancer in the world [1]. An estimated 2.2 million new lung cancers were diagnosed in 2020, and 1.8 million people died from the disease globally [1]. In the United States, lung cancer accounted for an estimated 228,820 new cancers and 135,720 deaths in 2020 [2], translating to an incidence rate of 59.3 per 100,000 persons and a mortality rate of 40.2 per 100,000 [2]. Several states (primarily in the southeastern and midwestern United States) have higher-than-average lung cancer incidence and mortality rates. South Carolina is among these, with an age-adjusted incidence of 65.5 per 100,000 persons and a mortality rate of 44.8 per 100,000 persons [2].

Although there are many determinants of lung cancer (e.g., genetics), the disease can largely be attributed to cigarette smoking (past or current) [3], which is linked to 80–90% of lung cancer deaths [4]. To lower lung cancer risk, individuals are advised to avoid/stop smoking [5]. For years, national-level smoking cessation programs, such as the US Department of Health and Human Services’ National Network of Tobacco Cessation Quitlines (1-800-QUIT-NOW) [5], have targeted the smoking problem. The most effective interventions include behavioral counseling, medication, interactive or tailored text messaging, and interactive web-based strategies [6].

Individuals 50–80 years old with at least a 20-pack-year smoking history who currently smoke or quit within the past 15 years are at the highest risk for lung cancer [7]. These recommendations, updated in 2021, expanded screening eligibility to a wider age range (previously 55–75 years) and to those with a shorter smoking history (previously 30 pack-years). For this population, the United States Preventive Services Task Force recommends a low-dose computed tomography (LDCT) scan after a “counseling and shared decision-making (SDM) conversation with their provider” [7]. A counseling and SDM discussion between provider and patient helps the patient evaluate complex medical information in a short time frame [8]. In the absence of such a discussion, some patients experience anxiety, which can negatively affect health-care decisions.
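As a rough illustration of these 2021 eligibility criteria (a sketch only; the function and parameter names are ours and are not part of the guideline or of the decision aid described below), an eligibility check might look like:

```python
def ldct_screening_eligible(age, pack_years, currently_smokes, years_since_quit):
    """Minimal sketch of the 2021 USPSTF LDCT eligibility criteria described above:
    age 50-80, a smoking history of at least 20 pack-years, and either current
    smoking or having quit within the past 15 years."""
    in_age_range = 50 <= age <= 80
    sufficient_history = pack_years >= 20
    recent_enough = currently_smokes or years_since_quit <= 15
    return in_age_range and sufficient_history and recent_enough

# Example: a 66-year-old with a 30-pack-year history who quit 5 years ago qualifies.
print(ldct_screening_eligible(age=66, pack_years=30,
                              currently_smokes=False, years_since_quit=5))  # True
```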

The Centers for Medicare and Medicaid Services (CMS) has included lung cancer screening counseling, an SDM visit, and LDCT screening—once yearly—as a Medicare preventive service benefit since 2015 [8]. CMS guidelines require that screening counseling and SDM visits include (1) a determination of LDCT eligibility based on lung cancer risk, (2) SDM that includes a decision aid (DA), (3) counseling on the importance of LDCT and smoking abstinence, and (4) a written order for LDCT if appropriate [8]. Despite clear guidelines, provider and patient barriers impede the SDM process. For example, providers may be unaware of recent changes in screening guidelines/eligibility or have reservations about screening efficacy [9, 10]. Providers may also lack time to engage in SDM [9], have limited training on implementing SDM [11], and receive poor reimbursement from insurance [12]. Patient barriers to SDM discussions about LDCT include competing health-care needs, lack of knowledge about risk, fear, reservations due to stigma associated with smoking, mistrust of the provider or medical system, cost, lack of professional navigation by providers, and/or logistical concerns [11].

Shared DAs, as the CMS recommends, can promote patient engagement in SDM discussions by improving lung cancer and screening knowledge and reducing personal decisional conflict [12]. However, these aids have had mixed success, partly because of the above-mentioned barriers. In a systematic review of research on implementing decision support interventions in routine clinical practice, Elwyn et al. [13] found that relying solely on patients to use a DA independently (e.g., mailing it before a clinic visit), or relying solely on clinicians to preassign or newly introduce a DA during the consultation, was ineffective. The most effective, yet challenging, method for promoting use of DAs in clinical practice is to employ a systems approach that pre-identifies appropriate patients prior to their visit and facilitates use of the DA in-clinic [13]. Buy-in from clinical leadership and training of individual providers also increase the probability of routine DA use [13].

Based on known barriers to DA use in clinical practice, McDonnell et al. [14] developed and tested a brief LDCT DA titled “Is Lung Cancer Screening for You?”, designed for use by the patient immediately prior to a consultation to learn more about lung cancer and LDCT and to indicate their values/concerns about LDCT. The aid is also meant to guide and tailor the SDM discussion so that patient questions and concerns are addressed quickly. In a pilot study testing the general feasibility of a printed version of the DA, patients and providers deemed the DA easy to read and brief [14]. To improve access and interactivity, the research team developed a mobile version of the DA. Like the print version, the mobile version was meant for in-clinic use, to educate patients about lung cancer and to facilitate an SDM discussion about LDCT screening between a high-risk patient and a health-care provider. The current study aimed to investigate the feasibility, acceptability, usability, and preliminary effectiveness of this computer-based version of the DA in clinical practice.

Methods

Intervention description

“Is Lung Cancer Screening for You?”—developed and pilot tested by McDonnell et al. [14]—is a brief DA that covers risk factors for lung cancer (principally smoking), a formula for calculating cigarette smoking history (in pack-years), LDCT screening facts, questions to spur patient reflection on screening (plus a question about receiving LDCT screening in the next 30 days), and lung cancer resources (e.g., phone numbers for quit-smoking hotlines). The original DA was a 12-page booklet containing plain-language text and racially/ethnically diverse images.
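For context, the pack-year calculation the DA walks patients through presumably follows the standard formula:

\[
\text{pack-years} = \text{packs smoked per day} \times \text{years smoked}
\]

For example, smoking 1.5 packs per day for 20 years corresponds to 30 pack-years, which exceeds the 20-pack-year eligibility threshold noted above.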

The “Is Lung Cancer Screening for You?” DA was adapted prior to this study for use on any mobile or other computer-based device via the REDCap survey platform [15]. The REDCap platform is widely available within academic settings and enables low-cost, robust development that will be scalable across the state of South Carolina and eventually anywhere with Internet service. The platform is HIPAA compliant, enabling straightforward IRB review for multiple sites [15]. Visually, the computer-based DA looked identical to the paper-based version, but it also contained interactive elements, such as the ability to increase text size, a read-aloud feature (aimed at low-literacy users), a pack-year calculator, and a patient response area for the reflective statements. The DA was designed to compile patients’ responses into a single-page, on-screen report, which health-care providers can use during the SDM process. For this study, the DA was delivered on a 12-inch touch-screen computer.
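As an illustration only (the data structure and field names below are hypothetical and do not reflect the study's actual REDCap schema), compiling DA responses into a brief provider-facing summary might resemble:

```python
# Hypothetical sketch of compiling DA responses into a short provider summary;
# field names are illustrative, not the study's actual schema.
def build_provider_summary(responses):
    pack_years = responses["packs_per_day"] * responses["years_smoked"]
    lines = [
        f"Smoking history: {pack_years:.0f} pack-years "
        f"({'current' if responses['currently_smokes'] else 'former'} smoker)",
        f"Intent to be screened in the next 30 days: {responses['screening_intent']}/5",
        "Values/concerns flagged by the patient:",
    ]
    lines += [f"  - {concern}" for concern in responses["concerns"]]
    return "\n".join(lines)

example = {
    "packs_per_day": 1.5,
    "years_smoked": 20,
    "currently_smokes": True,
    "screening_intent": 5,
    "concerns": ["Worried about cost", "Wants to discuss quitting"],
}
print(build_provider_summary(example))
```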

Provider training

Prior to the study, all providers were given an overview of the study protocol and a short (10-min) video-based training on how to use the DA. The video included an overview of SDM for LDCT, an SDM exemplar involving a provider/partner and a patient (actor), instructions on billing for LDCT screening, and a template for documenting SDM. Providers received this training either in-person during family medicine grand rounds or via email (which directed providers to our website: https://lungcancersc.com/resources/for-clinicians/decision-aids).

Recruitment

All participants were recruited from two academic medical centers located in Columbia, South Carolina, an urban southeastern U.S. city. To be eligible, individuals had to meet the CMS guidelines for being at high risk for lung cancer and be Medicare- or Medicaid-eligible. Three recruitment approaches were used. In the first, a research team member examined upcoming appointment schedules at two sites (a family practice clinic and a pulmonology clinic), identified eligible patients, and called them to assess interest in participating. The second strategy consisted of mailing invitations to patients at the same two clinic sites who met study eligibility criteria (based on information in their electronic medical records) but did not have a scheduled appointment; these invitations were followed up with a call from a research team member. Third, flyers were placed in the clinic waiting room to inform potential participants about the study and let them know how to contact the research coordinator. Eligible participants who made appointments solely for this study were seen via an SDM or an annual wellness visit, both of which Medicare covers with minimal out-of-pocket fees. The sample size for this study was based on the availability of participants during the early part of the COVID-19 pandemic and on the study timeline.

Measures

Feasibility

Instrument: Feasibility of Intervention Measure [16]. Items: 4. Measures: The extent to which an intervention can be implemented successfully. Scoring: 5 response options, from 1 (Completely Disagree) to 5 (Completely Agree); total score (sum of all items) ranges 4–20 (20 = fully feasible, 4 = not feasible). Cronbach α: 0.89.

Acceptability

Instrument: Acceptability of Intervention Measure [16]. Items: 4. Measures: Whether an intervention is agreeable, palatable, or satisfactory. Scoring: 5 response options, from 1 (Completely Disagree) to 5 (Completely Agree); total score (sum of all items) ranges 4–20 (20 = fully acceptable, 4 = unacceptable). Cronbach α: 0.85.

Usability

Instrument: System Usability Scale [17]. Items: 10. Measures: Ease of use (user-friendliness) of a technology. Scoring: 5 response options, from 1 (Strongly Disagree) to 5 (Strongly Agree); each item contributes 0–4 points (negatively worded items are reverse-scored), and the summed contributions are multiplied by 2.5, giving a total score of 0–100 (100 = very easy to use, 0 = very difficult to use). Cronbach α: 0.79–0.97.
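Because SUS scoring is easy to get wrong, a short sketch of the standard scoring convention may be helpful (a sketch only; the function name is ours):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten item responses (1-5).

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are multiplied by 2.5, giving 0-100.
    """
    if len(responses) != 10 or any(r not in range(1, 6) for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even i = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# Example: this response pattern yields 75.0, in the "good" range (68-80.2).
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))
```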

Decisional Conflict

Instrument: Decisional Conflict Scale [18]. Measures: Personal perceptions of uncertainty in choosing options. Scoring: 3 response options (Yes = 0, Not Sure = 2, No = 4); total score (sum of responses, divided by 10, and multiplied by 25) ranges 0–100 (0 = no decisional conflict, 100 = very high decisional conflict). Cronbach α: 0.81.
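Expressed as a formula (assuming the 10-item version implied by the divide-by-10 rule, with item responses \(x_i \in \{0, 2, 4\}\)):

\[
\text{DCS} = \left(\frac{1}{10}\sum_{i=1}^{10} x_i\right) \times 25, \qquad 0 \le \text{DCS} \le 100 .
\]

For example, answering “Not Sure” (2 points) to all ten items yields \((20/10) \times 25 = 50\).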

Screening intent

Assessed (using an original item) whether the person intended to be screened within the next 30 days. Scoring: response options ranged from 0 (Not at all) to 5 (Very much).

Actual Screening

Examined participants’ electronic medical records post-SDM visit to determine if the person followed through with receiving LDCT.

Fidelity

Instrument: Original instrument developed by McDonnell et al. [14]. Items: 15. Measures: The extent to which providers thoroughly implement study procedures, with a focus on both overall study implementation (e.g., showing the participant the DA, asking the participant about the DA) and SDM implementation (e.g., discussing pros and cons of screening). Scoring: 2 response options (Yes/No).

Data collection and analysis

After consenting to participate, each participant completed demographic and computer proficiency questionnaires. No participants were excluded for low computer proficiency. The research coordinator gave an overview of the DA and then directed the participant to complete the DA on a tablet computer, remaining available throughout in case the participant had any questions or concerns. Participants were asked to arrive at least 30 min before their doctors’ appointments to complete consent, pre-intervention questionnaires, and intervention activities. Next, the participant answered questions (via the above-described measures) related to feasibility, acceptability, usability, decisional conflict, and screening intent. Immediately prior to the SDM conversation, the research coordinator gave the provider the tablet containing a summary of the participant’s responses. The research coordinator was present during the entire SDM discussion to observe and record fidelity and the duration of the discussion. Lastly, the research coordinator used electronic medical records to determine whether participants received screening post-appointment. Each participant received a $25 incentive for participation in the SDM evaluation.

All data were analyzed using univariate methods, including frequency counts and calculation of means. The data reported are based on the scoring range for each measurement scale.

Results

Demographics

The 33 participants in our study were all current smokers or former smokers who quit less than 15 years ago. A majority were female, and a little more than half reported race as African American. Participants’ mean age was 66.5 years (SD, 9.5). A little more than a third of participants were married, most held a high school diploma/GED or higher, and most lived in a household earning less than $20,000 annually. A majority indicated being either “unable to work” or “retired.” A little less than half reported fair to poor health, although few (11%, n = 4) reported having chronic obstructive pulmonary disease (COPD)—a common comorbidity of lung cancer. See Table 1 for details. Most participants (60%, n = 20) were recruited through mailings to advertise the study, while most of the remaining participants were recruited by examining upcoming appointment schedules (18%, n = 6) or direct referrals from providers (18%, n = 6). One participant was referred to the study by a staff member. No participants were recruited via flyers posted in the clinic waiting room.

Table 1 Descriptive statistics of participants

Feasibility, acceptability, usability, decisional conflict, and screening

Participants deemed implementing the “Is Lung Cancer Screening for You?” DA in a clinic setting highly feasible (m = 17.96; SD, 2.71) and highly acceptable (m = 18.45; SD, 1.87). The mean usability score was 72.41 (SD, 16.06); scores higher than 80.3 are considered excellent, and scores between 68 and 80.2 are considered good.

Participants had an average decisional conflict score of 19 (out of 70; zero represents no conflict) after DA use. Prior to conversations with providers, 100% of participants intended to be screened, and all respondents maintained that intention after the conversations. All participants who desired screening were scheduled for it by their provider. Follow-up review of participants’ electronic medical records revealed that 25 of 33 participants (76%) were screened following the SDM conversation. As reported in the fidelity results below, 22 of the 25 were scheduled within a 72-h window. Six of the remaining eight patients were not screened within 30 days because they could not be reached and were eventually lost to follow-up. Notably, one of the six was eventually diagnosed with lung cancer. One participant was not recommended for screening because of other medical priorities, which shifted the focus of the appointment. The final participant who did not receive screening within 30 days was ineligible because of having received a CT scan within the past year.

Fidelity

Across the 33 observed discussions, the 16 participating providers covered each fidelity checklist item in a majority of encounters (m = 23.5 of 33 discussions per item; SD, 6.4). Providers often kept good eye contact with participants, strongly encouraged current smokers to quit, discussed lung cancer risk factors, and placed an LDCT screening order the same day as the visit (if screening was agreed upon). Two-thirds (67%, n = 22) of participants were scheduled for LDCT screening within 72 h of the SDM discussion. All participants scheduled for screening were notified of their screening appointment.

In most cases (64%, n = 21), providers introduced the observing research coordinator and described the reason for observation. A majority of providers used the digital DA during conversations (67%, n = 22) and/or asked patients if they had read the DA (61%, n = 20). Most providers reviewed patients’ value statements from the DA (64%, n = 21), and all patients who were current smokers were offered smoking cessation information (100%, n = 19). Among the 15 fidelity instrument items, providers least often assessed patients for lung cancer symptoms (39%, n = 13), asked if patients had questions about LDCT (45%, n = 15), and/or finalized the discussion by reviewing patients’ “intent to be screened” rating scale within the DA (48%, n = 16). See Table 2 for details. The average time to complete an SDM discussion with support of the DA was 5.95 min (SD, 2.05).

Table 2 Study fidelity

Discussion

Findings demonstrate that the computer-based version of “Is Lung Cancer Screening for You?” is acceptable and feasible for use in clinic settings. Decisional conflict after DA use was low, suggesting that the DA helped educate participants about the rationale for and pros/cons of screening. The DA also facilitated a brief but effective SDM conversation. Fidelity scores showed that health-care providers can effectively engage patients in an SDM discussion about lung cancer screening when they are trained to use a DA in clinical practice and when that DA is designed to collect patient concerns and values related to LDCT screening. These findings are consistent with prior hypotheses by Elwyn et al. [13] that if barriers to clinical implementation of DAs are removed, their use is more likely.

In addition to the success of “Is Lung Cancer Screening for You?” in promoting SDM, the high LDCT screening rate can be attributed to the timely scheduling of patients for screening after the SDM discussion, followed by notification of their screening appointments. While previous literature has largely focused on systems-level barriers to screening (e.g., difficulty identifying eligible patients) [9, 19], few studies have examined the lack of timely patient scheduling and appointment notifications/reminders as a deterrent. However, Carter-Harris and Gould [11] noted that the use of information technology to facilitate systems-level exchanges and to manage scheduling, testing, results tracking, and related tasks can be a critical asset for promoting LDCT screening.

Despite the success of our DA in promoting screening, some notable improvements would strengthen the LDCT SDM process. First and foremost, providers should always assess for lung cancer symptoms; doing so could help prioritize what is discussed during the SDM conversation, and an LDCT is not the diagnostic test of choice if symptoms are present [20]. Second, providers should always ask whether the patient has any questions about the radiologic procedures of LDCT to ensure the person understands the risks and benefits of the CMS-recommended screening [8]. Third, providers should always take the patient’s values into consideration in a shared decision [21]. This values assessment should include not only a patient’s intention to be screened but also discussion of the extent to which LDCT screening aligns with the patient’s overall well-being (e.g., affordability, safety, perception of others) [21]. Using a DA containing a values assessment tool could enable providers to streamline the SDM process. Fourth, providers should ensure that they not only make a clear assessment of a patient’s intent to be screened but also place orders for screening in a timely manner. Not scheduling the appointment immediately could mean a patient is lost to follow-up. Of note are the different scheduling practices at the two clinics in this study. While one clinic scheduled patients before they left the clinic, the other relied on a scheduling department to call patients within 24 h. All patients who were scheduled for LDCT during their appointment followed through, whereas follow-up beyond the appointment created barriers such as difficulty reaching the patient. Most concerning is that untimely LDCT could lead to later discovery of lung cancer, as occurred with one participant in our study who was not scheduled in a timely manner following an appointment. One intervention for the clinic without in-appointment scheduling might be a chart-embedded alert that informs the provider, the scheduling team, and others daily if a patient is not scheduled for LDCT within 24 h, as sketched below.
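A minimal sketch of such alert logic, assuming a daily batch check over order records (the data structure and field names are hypothetical and not drawn from any specific electronic health record system):

```python
# Hypothetical sketch of the chart-embedded alert suggested above: flag LDCT
# orders placed more than 24 h ago that still have no scheduled appointment.
from datetime import datetime, timedelta

def unscheduled_ldct_orders(orders, now):
    """Return orders older than 24 h that still lack a scheduled appointment."""
    return [
        o for o in orders
        if o["appointment"] is None and now - o["ordered_at"] > timedelta(hours=24)
    ]

orders = [
    {"patient": "A", "ordered_at": datetime(2021, 3, 1, 9, 0), "appointment": None},
    {"patient": "B", "ordered_at": datetime(2021, 3, 2, 9, 0),
     "appointment": datetime(2021, 3, 3, 10, 0)},
]
for order in unscheduled_ldct_orders(orders, now=datetime(2021, 3, 3, 9, 0)):
    print(f"Alert: patient {order['patient']} has no LDCT appointment scheduled")
```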

Conclusions and Implications

DAs can be excellent resources for patients and providers, making both parties aware of the latest LDCT screening recommendations and providing a systematic way of informing the provider about the patient’s values and concerns as they relate to screening. DAs are most effective when championed by providers within clinical practice settings and when these tools do not distract significantly from the limited time allocated for appointments. Future research should recruit a larger participant sample with varying positions on screening (no intention to screen, open to screening, etc.) to determine the statistical significance of the impact of “Is Lung Cancer Screening for You?” on SDM and LDCT screening behaviors. Further research should also assess the performance of, and preferences for, a computer-based versus paper-based version of our DA among providers and patients. Qualitative research would be valuable for ascertaining provider and patient experiences with the DA. Moreover, dissemination and implementation research could be useful for determining the best methods for supporting the routine use of “Is Lung Cancer Screening for You?” in diverse clinical settings (e.g., hospitals, primary-care clinics, federally qualified health centers) with a variety of providers (medical trainees, nurse practitioners, family physicians, and specialists).

Limitations

This study had some notable limitations. It consisted of 33 participants residing in one region of one state; therefore, findings may not be applicable to individuals residing in other areas. Recruitment was greatly affected (even paused) by the COVID-19 pandemic. Recruitment was also affected by the chronically ill nature of the population, many of whom had already undergone CT scans for other health conditions, which disqualified them from LDCT at that time. Also, more robust analyses (e.g., regression) could not be performed to ascertain in what specific contexts our DA reduces decisional conflict and increases screening intent among high-risk individuals. Decisional conflict was measured only post-test; therefore, we cannot report with certainty that the low decisional conflict scores were a direct result of DA use. In addition, all participants had high intentions to be screened following DA use and before engagement in SDM, but screening intention was not measured before DA use; therefore, we are unable to comment on the DA’s direct impact on screening intention. Furthermore, our study did not compare traditional shared decision-making consultations with those that occurred with use of our DA, so there are no data on the extent to which our intervention improved SDM time and quality. Lastly, we did not measure how well our DA performed in an unobserved scenario, so the presence of an observer may have influenced behaviors and interactions between patients and providers. Despite these limitations, our study provided valuable information about the feasibility of using a brief computer-based LDCT DA in a real-world clinical setting with racially/ethnically diverse patients who have varying levels of technology experience.