
Translational Behavioral Medicine, Volume 3, Issue 3, pp 295–303

Implementation of an online pragmatic randomized controlled trial: a methodological case study

  • Nathan K Cobb
  • Josée Poirier
Open Access
Original Research

ABSTRACT

Rigorous evaluation of eHealth interventions is acutely needed but can be challenging to execute in a cost- and time-efficient way. The purpose of this study is to describe a randomized controlled trial carried out as part of an approach that evaluates and informs product development throughout an intervention’s life cycle. We present the methodological case of a pragmatic randomized controlled trial evaluating the effectiveness of the web-based intervention “Daily Challenge.” We conducted the trial entirely online and leveraged existing resources to implement it quickly and within budget. One thousand five hundred three participants were recruited in 49 days (17.1 % of candidates assessed for eligibility). We reached 68.7 % of participants for follow-up at 30 days and 62.5 % at 90 days. Data collection (baseline to 90-day follow-up) was completed within 5 months. Rigorous trials can be conducted efficiently and in a timely manner, enabling evaluation on a continuous basis. Development should include ongoing empirical input to inform product iterations.

KEYWORDS

Health informatics, Software development, Research, Randomized controlled trial, Intervention

INTRODUCTION

Evaluation of eHealth interventions remains a significant challenge for researchers, developers, payers, and ultimately, end-users [1]. Lacking evidence of effect, new approaches may be abandoned; conversely, inappropriate approaches may be commercially accepted due to a perceived lack of “anything better.” Despite this clear need, evaluation can be difficult within the short timeframes imposed by ever-advancing technology (e.g., text messaging, social networking sites, or mobile apps). New models appropriate for the web, social networks, and mobile devices are sorely needed [2, 3, 4], in particular for commercial programs.

One approach to this problem is the adoption of engineering approaches throughout the intervention life cycle, from initial prototypes through final evaluation. Software project management strategies—such as “agile development” and “lean startup”—emphasize using immediately available data points for day-to-day decision making, escalating to more formal methods for key decisions, and culminating in rigorous evaluation only when required to confirm the performance of the product [5]. Data can be obtained from utilization records or from formalized evaluation methods, ranging from A/B testing (rapid randomized trials comparing features or variations) to research-focused protocols that emphasize generalizable outcome metrics and factorial trials designed to isolate the effects of interdependent intervention components [6, 7]. While many companies are capable of the first steps in this process, they often lack research staff with clinical trial experience. This creates an intriguing opportunity for industry–academic partnerships.

We present in this methodological case study a randomized controlled trial (RCT) that formed part of the design and evaluation of “Daily Challenge,” a generalized health and well-being intervention built in a commercial environment. The RCT was designed from the start as a rapid, pragmatic effectiveness trial, recruiting directly from our target population with minimal exclusion criteria and with the capacity to provide statistically valid and clinically useful data within months of the enrollment of the first participant. While the trial was conducted in a commercial environment, it was designed and implemented in a manner consistent with rigorous academic studies. This paper reviews both the opportunities and challenges inherent in such a model and provides lessons learned that may be adapted in either commercial or academic environments.

PROTOCOL

Environment

MeYou Health LLC is a subsidiary of the publicly traded health and well-being company Healthways Inc, operating independently and with the specific mandate to bring the methodology of online startups to health behavior interventions. We are a multi-disciplinary group of designers, engineers, product managers, and behavioral scientists. We develop products within the “agile” and “lean startup” frameworks, in which all design and programming occurs in 2-week blocks. At the beginning of each block, the product lead determines which elements will be worked on and ensures that the tasks have been broken down into subtasks that can be accomplished within the timeframe. We deploy a minimum viable product (with only the necessary features) as early as possible. We immediately recruit consumers to rapidly collect direct feedback and process data. These data guide future product iterations, which are typically released at the end of each 2-week block.

MeYou Health uses a mixed-method, multi-year research program that informs design, evaluates implementation, and demonstrates effectiveness throughout each product’s life cycle. Our methods include ethnographic studies and user interviews, A/B testing, observational analytics, and randomized controlled trials. Continuous analyses of consumer feedback, process data, and experimental evidence identify features to add, remove, or improve. These data immediately inform design priorities and decisions (how a particular feature will look, feel, or work) for future product iterations.

Intervention description

Daily Challenge is a freely accessible web- and mobile-based intervention that has enrolled over 250,000 participants since its initial pilot launch in September 2010. The multi-behavior intervention aims to improve overall well-being in a general population; well-being interventions in general, and this one specifically, have been described elsewhere [8, 9]. Design of the system was informed by Social Cognitive Theory, stressing a reciprocal learning process along with social influence and support within social networks to impact self-efficacy [10]. The system is intended as a population intervention, emphasizing small behavioral changes to drive population-level impact. Daily Challenge participants receive a daily email and/or text message suggesting a small health action (a “challenge”) that they can usually complete in a few minutes, along with information about how to complete the challenge and its relation to well-being (Fig. 1). Participants report having completed the challenge, optionally share with other participants how they did it, and collect virtual rewards. A cross-disciplinary team of writers, a behavioral scientist, and subject matter experts create the challenges.
Fig. 1 Example of an email delivering the day’s challenge. The “Done” button includes a link to the website

The challenge mechanism is wrapped in a web-based social network designed to enhance social influence and support (Fig. 2) and to improve adherence and retention [9]. Participants are encouraged to recruit members of their real-life social network and connect with them within Daily Challenge. Additionally, participants may interact and establish connections with people they meet through the intervention. Participants can form pacts to complete challenges together, encourage one another, cheer each other on via “smiles,” and comment on others’ challenge completion stories. Prior evaluation work indicated that a higher number of social ties is correlated with improved adherence and retention [9].
Fig. 2 A participant’s homepage on the website. The page includes the day’s challenge, the participant’s connections, and public posts by other Daily Challenge participants who shared how they did the challenge

Research design

We conducted the randomized controlled trial within a 1-year period. Initial needs assessments and funding decisions were made in October 2011; experimental design and construction of the trial infrastructure took place between November 2011 and March 2012. The trial itself (recruitment to 90-day follow-up) was conducted between April and September 2012. The research protocol was approved by Independent IRB (Protocol DC-EFF-2012) and registered with ClinicalTrials.gov (Identifier NCT01586949).

We used a pragmatic, effectiveness approach that emphasized generalizability of the results to the intervention’s target population [11]. At the time of the trial, Daily Challenge was constructed to primarily leverage existing social network data drawn from Facebook. We limited exclusion criteria and deliberately recruited participants through Facebook exclusively. Participants were free to use the intervention as they saw fit and received no additional instructions, counseling, or program support beyond what a user of the regular system would receive.

The two-arm trial compared the intervention to a generic health newsletter control. The once-weekly newsletter contained four short stories about well-being topics that had been published no more than 7 days prior. The newsletter was intended as an attention control, both to minimize loss to follow-up and to obscure group assignment from participants.

Outcome measures

The primary outcome, well-being, was assessed using the Individual-level Well-Being assessment and Scoring method (IWBS) [12]. This validated instrument covers healthy behaviors (for example, diet and exercise); it has been used to evaluate intervention outcomes [8, 12] and has yielded insights into the relationship between well-being and healthcare costs [13], hospital visits [13, 14], short-term disability [14], and work productivity [14]. For every point increase in IWBS scores (on a 100-point scale), individuals are 2.2 % less likely to have a hospital admission, 1.7 % less likely to have an emergency room visit, 1.0 % less likely to incur any healthcare costs, and incur approximately 1 % lower costs [13]. Participants’ overall well-being scores (range, 0 to 100) served as the primary outcome measure. Based on pilot data and projected recruitment costs, we powered the trial to detect a 2.2-point difference in IWBS scores. Perceived social support was assessed with the Interpersonal Support Evaluation List (12-item version) [15]. The instrument measures the perceived availability of social support. Its three subscales (appraisal, belonging, and tangible support) combine into an overall score (range, 12 to 48).
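To give a sense of how such a detectable-difference target translates into sample size, a minimal two-sample power calculation is sketched below. The standard deviation, significance level, and power values are assumptions for illustration only; they are not the parameters reported in the trial protocol.

```python
# Illustrative two-sample power calculation for a 2.2-point difference.
# The SD, alpha, and power below are assumed values, not trial parameters.
from statsmodels.stats.power import TTestIndPower

assumed_sd = 16.0                     # hypothetical SD of IWBS scores
detectable_difference = 2.2           # minimum detectable difference (points)
effect_size = detectable_difference / assumed_sd  # Cohen's d

n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,                       # assumed two-sided significance level
    power=0.80,                       # assumed statistical power
    alternative="two-sided",
)
print(f"Approximate sample size per arm: {n_per_arm:.0f}")
```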

Data collection

Participants provided sociodemographic data (age, gender, race, ethnicity, and zip code, which were used to estimate education level and income) and took the two outcome assessments at baseline. We deliberately limited the number and length of the questionnaires to lessen the burden on participants and minimize attrition during enrollment. We followed up with participants after 30 and 90 days.

Implementation

Recruitment, eligibility verification, and data collection took place in real time and over the web: no in-person visits or phone screenings were required. The team of designers and engineers that developed Daily Challenge built the online trial infrastructure and the newsletter with the same “agile” approach used for the intervention. We collected informed consent, demographics, and outcome data using an extension of the existing intervention interface. Detailed process data were gathered as part of the built-in quality improvement system: email opens, site visits, page views, challenge completions, and social interactions on the site were recorded in real time into a relational database and available for analysis. Trial-specific tools (e.g., the outcome data collection interfaces) were built using the same software infrastructure as the main product. Trial data collection was integrated into the software framework to eliminate redundant data collection or mismatched interfaces. This strategy both minimized engineering resources and enhanced visual appeal.
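As a rough sketch of this kind of built-in process logging, the snippet below records participant events into a relational table as they occur; the schema and field names are hypothetical and are not those of the actual Daily Challenge system.

```python
# Minimal sketch of real-time process-data logging (hypothetical schema).
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("process_data.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id          INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id     INTEGER NOT NULL,
        event_type  TEXT NOT NULL,   -- e.g., email_open, site_visit, challenge_done
        occurred_at TEXT NOT NULL
    )
""")

def log_event(user_id, event_type):
    """Record a single participant event as it happens."""
    conn.execute(
        "INSERT INTO events (user_id, event_type, occurred_at) VALUES (?, ?, ?)",
        (user_id, event_type, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

log_event(12345, "challenge_done")   # example usage
```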

The site investigator handled incoming participant communication, arranged incentive distribution, and coordinated distributed team efforts. Investigators remained blind to group assignments and only had access to de-identified data. A web-based monitoring system enabled the engineers and the investigators to track the trial’s progress and ensure its integrity.

Enrollment

We recruited US-based individuals of legal age (19 or older in Alabama or Nebraska; 18 or older elsewhere) with a Facebook account through the self-service advertising system of the social platform. Advertising was titrated to a consistent enrollment rate of approximately 30 participants per day. After IRB approval, we re-used ads created, tested, and refined as part of the intervention development process.

Individuals who clicked on an ad were taken to the website, where they could sign up for the intervention using their Facebook credentials. This authentication method reduced the amount of data we needed to collect directly, as it granted us access to the participant’s user ID, name, verified email address, and friends network data. We additionally used Facebook user IDs in combination with web cookies to block control participants from using Daily Challenge during the trial, preventing contamination.
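A minimal sketch of this blocking logic is shown below; the identifiers, cookie name, and data structures are hypothetical, as the paper does not describe the production implementation.

```python
# Sketch of keeping control participants out of the intervention (hypothetical
# cookie name and in-memory ID set; not the production implementation).
from typing import Optional

control_facebook_ids = {"100004123", "100004456"}   # IDs randomized to control

def should_block(facebook_id: Optional[str], cookies: dict) -> bool:
    """Block intervention access if the authenticated Facebook ID or a
    previously set browser cookie marks the visitor as a control participant."""
    if facebook_id is not None and facebook_id in control_facebook_ids:
        return True
    return cookies.get("dc_trial_arm") == "control"

# A control participant returning on the same browser without logging in:
print(should_block(None, {"dc_trial_arm": "control"}))   # True
```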

Individuals received an offer to take part in the trial after initiating registration for Daily Challenge. Candidates provided informed consent, demographic and baseline data, and a mobile phone number. We required that the number be verified to ensure we had a valid means of reaching participants throughout the study. Participants received a four-digit code by text message or automated phone call. Candidates had two attempts to submit a valid code online; otherwise, the validation process was considered a failure.
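The four-digit validation flow could be implemented along the lines of the sketch below; the helper names are hypothetical, and delivery of the code by SMS or automated call is abstracted away.

```python
# Sketch of the four-digit phone validation with a two-attempt limit
# (hypothetical helpers; SMS/voice delivery is not shown).
import secrets

MAX_ATTEMPTS = 2

def issue_code() -> str:
    """Generate a random four-digit validation code to send to the candidate."""
    return f"{secrets.randbelow(10_000):04d}"

def validate(expected_code: str, submissions: list) -> bool:
    """Succeed only if a matching code is entered within the allowed attempts."""
    return any(code == expected_code for code in submissions[:MAX_ATTEMPTS])

print(validate("4821", ["0000", "4821"]))          # True: matches on second attempt
print(validate("4821", ["0000", "1111", "4821"]))  # False: only two attempts allowed
```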

Candidates were excluded if they had a proxy email address, had a Facebook friend enrolled in the trial, failed to provide informed consent, or failed to complete enrollment in the allotted time (45 min). Based on pilot data indicating a significant over-representation of women recruited through Facebook, we oversampled men, excluding potentially eligible women to maintain a minimum male representation of 30 %. This ratio was selected empirically and designed to more closely mirror the gender breakdown expected in commercial implementations.

Randomization was automated, gender-stratified, and otherwise blind to baseline data. Treatment group participants proceeded to use the intervention immediately, while control condition participants were presented with a page that informed them that they would receive a weekly health newsletter by email. Emails were sent weekly in batches, so participants could wait up to 7 days for their first newsletter. Control condition participants who returned to the site during the course of the study saw a message reminding them of their participation in the trial and could not see any aspect of the Daily Challenge intervention. At the end of the trial, control participants were offered full access to the Daily Challenge system.
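One common way to implement automated, gender-stratified assignment is permuted blocks within each stratum; the paper does not specify the exact scheme, so the sketch below is an illustrative assumption rather than the trial’s actual algorithm.

```python
# Sketch of gender-stratified randomization using permuted blocks within each
# stratum (block size and scheme are illustrative; the paper does not specify them).
import random

BLOCK_SIZE = 4
_blocks = {}   # one partially consumed, shuffled block per gender stratum

def assign(gender: str) -> str:
    """Return 'treatment' or 'control', balanced within each gender stratum."""
    block = _blocks.setdefault(gender, [])
    if not block:                                   # start a fresh permuted block
        block.extend(["treatment", "control"] * (BLOCK_SIZE // 2))
        random.shuffle(block)
    return block.pop()

print([assign("female") for _ in range(4)])   # two of each arm, in random order
```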

We incentivized participants at enrollment and each follow-up; compensation was independent of use or non-use of the intervention. Participants received a $20 Amazon.com gift card each time they completed the assessments. The incentives were distributed by email and redeemable online.

Follow-up

We deployed a multi-modal strategy to reach participants during their 7-day follow-up window. We used the intervention’s communication mechanisms (emails, text messages, on-site prompts) and added another channel (private Facebook messages) to ask participants to follow up online. If they failed to do so, we resorted to a fallback call from a contracted call center at an academic medical center. We made two IRB-approved strategy changes during the trial in an attempt to improve on early follow-up rates: shifting the first telephone call from day 5 of the follow-up window to day 3 and adding Facebook private messages to our initial electronic contact channels.
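One way to express this kind of escalation schedule as configuration is sketched below; the per-day channel mix shown here is illustrative, and the channels actually used on each day of the follow-up windows are reported in Table 1.

```python
# Illustrative encoding of the multi-modal contact escalation (the actual
# per-day channel mix used in the trial is reported in Table 1).
CONTACT_SCHEDULE = {
    0: ["email", "text_message"],
    1: ["email", "text_message", "facebook_message"],
    2: ["phone_call", "email", "text_message"],
    3: ["phone_call", "email", "text_message"],
    4: ["phone_call"],
    5: ["phone_call"],
    6: ["phone_call", "email"],
}

def channels_for(day_in_window: int) -> list:
    """Channels to attempt on a given day (0-6) of the 7-day follow-up window."""
    return CONTACT_SCHEDULE.get(day_in_window, [])

print(channels_for(2))   # ['phone_call', 'email', 'text_message']
```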

RESULTS

We targeted approximately 320 million US-based individuals over the age of 18 using health-related and interest keywords. During the recruitment period, we displayed 82 distinct advertisements a total of 444 million times; these impressions were seen by 23 million unique individuals (Fig. 3). The campaign generated 129,177 click-throughs. Of these visitors, 8,731 signed up for the intervention and were assessed for eligibility. Four thousand forty-seven candidates declined to participate (46.3 %), 146 did not meet inclusion criteria (1.7 %), 261 women were excluded to maintain the target gender ratio (3.0 %), 336 were excluded for technical reasons (3.8 %), 19 failed phone number validation (0.2 %), and 2,438 timed out of or discontinued enrollment (27.9 %).
Fig. 3 Enrollment and follow-up diagram

Of the 1,770 candidates who initiated the phone validation process, 1,506 candidates successfully verified their mobile phone number (85.1 %); 19 twice entered a wrong code and thus failed validation (1.1 %); and 245 candidates did not complete the process (13.8 %). The option to receive the validation code through text message was more popular than by automated call: 1,555 candidates proceeded with the text message option (87.9 %), 195 with the automated call (11.0 %), and 20 used both on separate attempts (1.1 %). The two methods showed comparable success rates (text message: 80.6 % of attempts; phone call: 82.6 %).

We ultimately recruited 1,503 participants in 49 days (17.1 % of candidates assessed for eligibility). Participants were 30.1 % male, with a mean age of 42.5 years; 29.75 % were college graduates or higher, and the mean of zip code-based median incomes was $56,561. Participants most frequently self-identified as non-Latino (90.8 %) and White (87.7 %; non-exclusive categories). Participants took less than 15 min on average to complete enrollment, from first visit to the website through confirmation of their phone number. Data collection was completed within 5 months.

We successfully followed up with 68.7 % of participants at 30 days and 62.5 % at 90 days (Table 1). We reached 74.9 % (n = 1,126) of participants at least once, and 56.3 % (n = 846) at both follow-ups. The mean compensation was $42. Of the participants reached, 1,026 (91.1 %) responded online at least once; the remaining 100 (8.9 %) were reassessed exclusively by phone. The majority of follow-ups occurred within the first 4 days of the participant’s window (89.4 % of reassessments at 30 days, 90.7 % at 90 days).
Table 1
Follow-up rates (in cumulative percentages) by assessment method for each day of the 7-day follow-up windows at 30 and 90 days. Communication channels used to prompt participants are listed for each day.

Day         Online (%)   Phone (%)   Combined (%)   Communication channels
Day 30      28.8         0           28.8           Email, text message
Day 31      40.7         0           40.7           Email, text message
Day 32      50.5         4.2         54.7           Phone call, email, text message
Day 33      56.0         5.4         61.4           Phone call, email, text message
Day 34      58.3         6.1         64.3           Facebook message, phone call
Day 35      59.5         6.6         66.1           Phone call
Day 36      61.6         7.1         68.7           Phone call, email
Total (n)   926          106         1,032

Day 90      26.1         0           26.1           Email, text message
Day 91      36.7         0           36.7           Facebook message, email, text message
Day 92      46.2         4.7         50.8           Phone call, email, text message
Day 93      51.2         5.5         56.8           Phone call, email, text message
Day 94      52.6         6.0         58.5           Phone call
Day 95      53.5         6.5         59.9           Phone call
Day 96      55.4         7.1         62.5           Phone call, email
Total (n)   833          107         940

Program participation and retention

We tracked participants’ program usage through email opens (control and treatment groups) and site visits (treatment group only). Email opens could only be recorded if the participant enabled image loading and are therefore likely underreported (a sketch of this tracking mechanism follows Fig. 4). Five hundred sixty-one (74.6 %) treatment participants and 383 (51.0 %) control participants opened at least one program email. In the treatment group, 639 (85.0 %) participants visited the site at least once and 692 (92.0 %) completed at least one challenge. Over 50 % of participants continued to complete challenges at 60 days (Fig. 4).
Fig. 4 Program engagement. Percentage of participants in the treatment group who, at least once, opened a program email, visited the site, or completed a challenge in their first 30 days, between days 30 and 60, and between days 60 and 90 of the study. Challenge completions could be reported through text messaging and did not require a site visit
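The email-open measurement described above is typically implemented with a per-recipient tracking image; the sketch below illustrates that general mechanism and why disabled image loading leads to underreporting. The endpoint and names are hypothetical, as the paper does not describe its tracking implementation.

```python
# Illustrative tracking-image mechanism for counting email opens
# (hypothetical endpoint and names; not the Daily Challenge implementation).
PIXEL_BASE = "https://example.org/open"

def pixel_tag(user_id: int, email_id: int) -> str:
    """1x1 image embedded in the email body; fetching it signals an open."""
    return (f'<img src="{PIXEL_BASE}?u={user_id}&e={email_id}" '
            'width="1" height="1" alt="">')

recorded_opens = set()

def on_pixel_request(user_id: int, email_id: int) -> None:
    """Server-side handler: an open is recorded only if the client loads images,
    which is why open counts underestimate true readership."""
    recorded_opens.add((user_id, email_id))

print(pixel_tag(12345, 7))
```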

DISCUSSION

We have described the design and implementation of a randomized controlled trial in an operating web-based intervention. The trial was conducted as part of a larger research program conceived to inform program development on a continuous basis, and that leveraged internal, existing resources.

Recurring themes in eHealth evaluation are timeliness and cost. We minimized both factors by conducting the trial in the medium in which the intervention was delivered, i.e., online. Recruitment tapped into the intervention’s own target population and was entirely done using online advertising; enrollment, baseline data collection, and 89.2 % of our follow-ups took place online in an automated system. We used staff time only to coordinate the delivery of incentives and for fallback contacts for participants the automated system could not reach at follow-up.

The entire study, from first enrollee to last follow-up, was conducted in under 5 months. Conceptualization to completion of the primary analyses took approximately 12 months. A number of factors enabled this speed. As mentioned above, our use of online advertising enabled rapid recruitment (1,503 participants in 49 days), and we did not need to recruit or train additional staff to implement the study. However, most important was our use of internal funding. External funding (e.g., NIH) would have required significant lead time to write and submit/resubmit an application and then await funding. Equally importantly, our funding was not spread over a multi-year grant period, which allowed us to spend and recruit as quickly as the team felt was feasible. We believe that our need for research of this nature is not unique and that internally funded commercial studies such as this provide an incredibly valuable opportunity for public–private partnerships. In this case, MeYou Health retained academic consultants to extend its internal capacity, including an additional behavioral scientist with published experience in eHealth interventions as well as an academic statistician. Given the short timelines and their independence from a traditional funding cycle or academic year, researchers and commercial organizations seeking scientific input should create partnerships as early as possible in the intervention development process.

Achieving significant follow-up rates in online trials is exceedingly difficult, particularly for pragmatic studies that recruit online. Our follow-up rates compare favorably with other real-world effectiveness trials of online interventions [16]. Without the addition of an offline backup strategy, trials conducted entirely online have a mean follow-up rate of 53 %. Our online rates reached 61.6 % at 30 days and 55.4 % at 90 days, and improved to 68.7 and 62.5 %, respectively, with the addition of phone calls. The low number of participants who were reached by phone (106 at 30 days, 107 at 90 days) could be attributed to an inadequate call protocol, or interpreted as evidence of the success of the preceding contact methods. We believe that our online protocols, in particular the addition of text messaging to validated phone numbers, reached the majority of individuals willing to provide follow-up information, leaving a residual population unreceptive to follow-up regardless of the contact modality. A limitation of our approach was in not validating email addresses obtained via Facebook, a number of which were later confirmed to be invalid (approximately 1.7 %). Future work would be well served by mechanisms to rapidly gather multiple contact channels and validate them without compromising the user experience.

There are a number of limitations to our approach. All trials suffer from attrition, but web-based trials are particularly prone to the problem. Attrition occurs throughout the recruitment process, through the course of the intervention and again at follow-up. We attempted to minimize attrition at enrollment and follow-up by limiting the amount of information we requested from participants. This also maximizes generalizability but at the expense of a richer dataset that might offer more explanatory power for any results. Eysenbach [17] has referred to a “law of attrition” in utilization of online interventions, where use often tails off rapidly. In our case, retention and ongoing use of the intervention remained relatively high, with 50 % of our intervention participants continuing to actively engage at 60 days. Surprisingly, however, this appeared to have no impact on differential loss to follow-up, as we saw slightly higher follow-up rates in the control condition. The choice of 30 and 90 days for follow-up is also a limitation resulting from this compromise; longer-term outcome data would be valuable to determine effectiveness but also place greater burden on participants and delay the availability of useful data back to the product team. Finally, recorded self-report of challenge completion was used as a marker of engagement while our behavioral outcomes were limited to those present in the IWBS [12], limiting our ability in this study to evaluate specific behavioral changes.

By performing all recruitment online, we were able to estimate the number of individuals who saw our advertisements for Daily Challenge, the precise number of people who clicked through an ad, the number who initiated registration, and the number who were successfully enrolled. Our click-through rate of 0.03 % (129,177 visitors/444,352,132 ad impressions) and response rate of 0.6 % (129,177 visitors/23,196,930 individuals shown an ad) are roughly comparable to rates in other published trials [18, 19, 20]. We encourage other researchers to include similar data in their own publications to enable comparisons and a better sense of context.
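For transparency, the quoted rates can be recomputed directly from the counts reported above:

```python
# Recomputation of the recruitment funnel rates quoted above.
impressions = 444_352_132       # total ad impressions
unique_viewers = 23_196_930     # unique individuals shown an ad
click_throughs = 129_177        # visitors who clicked through

print(f"Click-through rate: {click_throughs / impressions:.3%}")     # ~0.029 %
print(f"Response rate:      {click_throughs / unique_viewers:.3%}")  # ~0.557 %
```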

This study was only one component of a larger combined development and evaluation process that fits within the “lean startup” paradigm. The lean model stresses the creation of a minimal initial product (or intervention) that is tested for consumer acceptance and interest. Following this, development occurs iteratively with successive versions acquiring new features and refining or dropping old ones. The iterative process is driven by “actionable metrics,” process data that can constructively inform design often in the form of much smaller randomized trials pitting product variations against each other. These models stress the ability of the team to change direction (thus the term “agile”) and redirect a product based on what they learn during early development. Such a methodology may be difficult to mesh with traditional funding mechanisms but, given its complexity, can benefit significantly from formal research support.

If development and evaluation models like the one presented here are to be successfully adopted in behavioral medicine, there will need to be closer collaboration between developers, industry, and academic researchers. Such collaboration is not limited to traditional theory-based design assistance or formal statistical analysis but should include ongoing input into the interpretation of metrics and rapid-testing results and how they may inform future product decisions. Collaboration will always include challenges, but it brings with it enough potential to change how we develop the interventions of the future that we believe it to be critical—and hope this case study provides some insight into one model.


Acknowledgments

The study was fully funded by Healthways, Inc. The authors would like to thank Trapper Markelz for insightful discussions on web product development approaches and acknowledge the contributions of the MeYou Health design and engineering teams in working to make this, and other related studies, successful.

References

  1. Pagliari C. Design and evaluation in eHealth: challenges and implications for an interdisciplinary field. J Med Internet Res. 2007;9(2):e15.
  2. Cobb NK, Graham AL, Byron MJ, Niaura RS, Abrams DB. Online social networks and smoking cessation: a scientific research agenda. J Med Internet Res. 2011;13(4):e119.
  3. Bennett G, Glasgow RE. The delivery of public health interventions via the Internet: actualizing their potential. Annu Rev Public Health. 2009;30:273-292.
  4. van Gemert-Pijnen JEWC, Nijland N, van Limburg M, et al. A holistic framework to improve the uptake and impact of eHealth technologies. J Med Internet Res. 2011;13(4):e111.
  5. Ries E. The lean startup: how today's entrepreneurs use continuous innovation to create radically successful businesses. New York: Crown Business; 2011.
  6. Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. 2007;32(5 Suppl):S112-S118.
  7. Danaher BG, Seeley JR. Methodological issues in research on web-based behavioral interventions. Ann Behav Med. 2009;38(1):28-39.
  8. Prochaska JO, Evers KE, Castle PH, et al. Enhancing multiple domains of well-being by decreasing multiple health risk behaviors: a randomized clinical trial. Popul Health Manag. 2012;15(5):276-286.
  9. Poirier J, Cobb NK. Social influence as a driver of engagement in a web-based health intervention. J Med Internet Res. 2012;14(1):1-9.
  10. Bandura A. Social foundations of thought and action: a social cognitive theory. Englewood Cliffs: Prentice-Hall; 1986.
  11. Roland M, Torgerson DJ. Understanding controlled trials: what are pragmatic trials? BMJ. 1998;316(7127):285.
  12. Evers K, Prochaska JO, Castle PH, et al. Development of an individual well-being scores assessment. Psychology of Well-Being: Theory, Research and Practice. 2012; doi:10.1186/2211-1522-2-2.
  13. Harrison PL, Pope JE, Coberley CR, Rula EY. Evaluation of the relationship between individual well-being and future health care utilization and cost. Popul Health Manag. 2012; doi:10.1089/pop.2011.0089.
  14. Shi Y, Sears L, Coberley C, Pope JE. Classification of individual well-being scores for the determination of adverse health and productivity outcomes in employee population. Popul Health Manag. 2012; doi:10.1089/pop.2012.0039.
  15. Cohen S, Mermelstein R, Kamarck T, Hoberman H. Measuring the functional components of social support. In: Sarason IG, Sarason BR, eds. Social support: theory, research and application. The Hague: Martinus Nijhoff; 1985:73-94.
  16. Mathieu E, McGeechan K, Barratt A, Herbert R. Internet-based randomized controlled trials: a systematic review. J Am Med Inform Assoc. 2012; doi:10.1136/amiajnl-2012-001175.
  17. Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):e11.
  18. Kapp JM, Peters C, Oliver DP. Research recruitment using Facebook advertising: big potential, big challenges. J Cancer Educ. 2013 Jan 6.
  19. Lohse B. Facebook is an effective strategy to recruit low-income women to online nutrition education. J Nutr Educ Behav. 2013;45(1):69-76.
  20. Ramo DE, Prochaska JJ. Broad reach and targeted recruitment using Facebook for an online survey of young adult substance use. J Med Internet Res. 2012;14(1):e28.

Copyright information

© The Author(s) 2013

Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Authors and Affiliations

  1. Division of Pulmonary & Critical Care, Department of Medicine, Georgetown University Medical Center, Washington, DC, USA
  2. MeYou Health, LLC, Boston, MA, USA
