Journal of Child and Family Studies, Volume 24, Issue 2, pp 505–513

Challenges with Online Research for Couples and Families: Evaluating Nonrespondents and the Differential Impact of Incentives

Authors

  • Dean M. Busby
    School of Family Life, Brigham Young University
  • Keitaro Yoshida
    Marriage, Family, and Human Development Program, School of Family Life, Brigham Young University
Original Paper

DOI: 10.1007/s10826-013-9863-6

Cite this article as:
Busby, D.M. & Yoshida, K. J Child Fam Stud (2015) 24: 505. doi:10.1007/s10826-013-9863-6

Abstract

In this study some of the challenges of conducting online research with couples and families were considered. Of particular concern with internet samples are the high percentages of individuals who have invalid email addresses and the low response rates to research requests. We invited 2,049 individuals, for whom we had extensive prior information, to participate in a short survey on their couple relationship. We explored whether participants who had invalid email addresses differed from those who had valid addresses, and we compared those who completed the survey with those who did not. We also explored the influence of different monetary incentives on response rates. Across 18 different areas, including background measures, personality measures, family of origin measures, and couple measures, there were only minor differences between those with valid and invalid email addresses, and only one difference between those who completed the survey and those who did not. In addition, a lottery-type monetary incentive showed promise in improving response rates compared with no incentive and a standard $20 incentive.

Keywords

Online research · Paid incentives · Family background · Couples · Nonrespondents

Introduction

As the internet has become a dominant form of communication, information gathering, social networking, and entertainment, it is natural that scholars have started to use the internet for conducting studies with couples and families. While the number of studies using the internet for data gathering has grown exponentially during the last few decades (Goritz and Luthe 2013; Tourangeau 2004), the challenges that exist with this type of research are understudied (Singer and Ye 2013; Bosnjak and Batinic 2002).

Many of the challenges involve sample biases and methodological problems (Cantrell and Lupinacci 2007; LaCoursiere 2003; Tourangeau 2004). Perhaps the most consistent critique of internet research is that people without internet access represent important and unique subgroups that should not be missed. Using terms such as the “digital divide,” authors contend that poorer and less educated populations are likely to be missed when the internet is used to gather data (Carroll et al. 2005). However, recent surveys have shown that the digital divide is narrowing, with more than 88 % of US households owning a computer and 75–81 % of households having access to the internet (Carroll et al. 2005; PEW 2009; Sachoff 2008; Zickuhr and Smith 2012). In addition, over 50 % of those in the lowest income bracket, and over 60 % of those with only a high school education, have internet access, challenging the view that most of these groups lack online access (Carroll et al. 2005; PEW 2009; Zickuhr and Smith 2012). The elderly, however, remain overrepresented among those without internet access, with about 60 % of this group composed of people over 65 (PEW 2009; Sachoff 2008).

Additionally, the shift from landline phones to cell phones, and the concomitant difficulty of reaching people by phone, has diminished the advantages of what was once the gold standard of survey research: randomized calls to representative US households (Tourangeau 2004). Several researchers have also found that individuals are often more willing to disclose difficult information about sensitive subjects, and to provide more detail in response to open-ended questions, when completing an internet survey rather than a phone or mail survey (Tourangeau 2004; Tuten et al. 2002), further diminishing the advantages of in-person, mail, or phone surveys. The fact that most research conducted through mail, in-person, or telephone data collection does not utilize nationally representative samples also indicates that internet research is not likely to be more biased than much of the existing research (Schonland and Williams 1996; Tuten et al. 2002).

Low response rates are particularly problematic with internet research, as participants are prone to ignore emails, change email addresses, and use filters that automatically block certain types of email messages (Tourangeau 2004). However, responses to all types of research requests are dropping, causing problems for scholars using any method of contact (Schoeni et al. 2013; Singer and Ye 2013). To improve response rates, many researchers use two methods: repeated reminders and monetary incentives (Bosnjak and Batinic 2002; Goritz and Luthe 2013).

It is particularly important for relationship scholars to understand how nonrespondents differ from respondents. Are more satisfied couples more likely to respond to requests to participate in research or to stay in longitudinal studies? Most of the research on who responds to research requests focuses on the specific circumstances of potential participants, such as when the request was extended or which survey characteristics led to higher response rates (Bosnjak and Batinic 2002; Singer and Ye 2013). Since those who do not respond, by definition, cannot provide more detailed information about themselves and their relationships, it is usually impossible to discover what makes nonrespondents unique. One way nonrespondents can be studied is when they provided data in an earlier study and are then invited to participate in a new one. While not strictly a longitudinal panel study, this design is similar in many respects and has the advantage that information from the first study can be used to evaluate nonrespondents in the second.

The problem with these unplanned “longitudinal” studies is that usually the only information available from the first study is basic demographic data. Rarely is more extensive personality information available, and we have not found examples where details about relationship functioning were available. One of the few exceptions to studies containing only basic demographic variables is the study by Rogelberg et al. (2003). These researchers found that nonrespondents differed from respondents in having less conscientious and less agreeable personalities. They also found that satisfaction-type variables did not distinguish respondents from nonrespondents.

One of the unique challenges of online research is that people are commonly contacted through email addresses; these can change and become invalid quickly, and potential participants sometimes have filters that block requests from unknown sources such as researchers. An important research question that has not been addressed is whether those who do not respond because of invalid email addresses are distinct from those who have valid email addresses but do not respond. Without an answer to this question, it is difficult to know whether research using online surveys is valid.

Researchers have found that monetary incentives have a positive effect on response rates among research participants (Church 1993; Godwin 1979; Hopkins and Podolak 1983; White 1988). When participants receive a monetary incentive at the initiation of the study, such as a $1 bill with the first mailing, the response rate is significantly higher than when participants receive a small gift after completing the survey (Houston and Nevin 1977; Nederhof 1983; Whitmore 1976).

Other scholars have examined response rates as a function of increasing monetary incentives. The findings suggest that increasing monetary incentives is effective up to a point, after which response rates do not continue to increase at the same rate. James and Bolstein (1992) examined response rates in a mail survey among participants receiving $1, $5, $10, $20, or $40. The response rate increased significantly as the monetary incentive increased, although no notable difference was found beyond $20. Unfortunately, because of the difficulty of prepaying participants in online research and the sparseness of research on incentives with online surveys, it is not clear whether these same trends apply to online research (Singer and Ye 2013).

Goritz (2006) reviewed online research and found that the effects of incentives were similar to those in mail or telephone surveys, in that incentives significantly increased response rates. However, these increases were very small, at less than a 3 % improvement in response rates. A few researchers conducting research over the internet have used new technologies to transfer money directly to people online, providing participants with prepaid incentives. Even with this advanced technology, however, Bosnjak and Tuten (2003) reported that prepaid incentives had no significant advantage for response rates in internet surveys. In Bosnjak’s study, group one received $2 (prepaid group), group two was promised $2 upon completion, group three participated in a prize drawing upon completion (two $50 prizes and four $25 prizes for a subsample of 329), and group four was the control group. While Bosnjak found no advantage for prepaid incentives in web surveys, he did report that prize drawings increased both the willingness to participate and the number of completed surveys.

Lotteries represent an appealing approach to providing incentives because they are much more cost-effective. They also have the advantage of being awarded only after participants have completed the study, so money is not wasted on those who decide not to start or complete it. However, the evidence on their effectiveness is mixed. While Bosnjak’s previously reviewed study showed significant effects for lottery incentives, Goritz and Luthe (2013) found no improvement with one sample and only a slight improvement with another when using lottery incentives. Singer and Ye (2013), in their summary of research using online lottery incentives, concluded that the current research generally shows little or no impact on response rates, but the available research is sparse and needs further study before firm conclusions can be drawn.

In the current study we explored the two most prevalent challenges we have experienced with online research: evaluating nonrespondents and evaluating different strategies for increasing response rates. We addressed the following research questions:

1. Do participants who do not have a valid email address differ from those who do? Because we have a pool of participants who completed a survey one to two years previously and provided their email addresses, we are able to explore whether those whose addresses are no longer valid differ from those whose addresses remain valid when contacted later.

2. Among those with valid email addresses, do respondents differ from nonrespondents in terms of background factors, family of origin variables, personality dimensions, and relationship functioning variables?

3. Do different types of monetary incentives improve response rates to internet research requests?

Method

Participants

Participants for this study came from a large dataset of individuals who completed an online survey called RELATE (Busby et al. 2001) between one and two years previously. Fifty-six percent of the sample (N = 2,049) was female and 46 % was male. The average age of the sample was 28.2 with a standard deviation of 9.3. Eighty-six percent of the sample was Caucasian, 4 % was Latino/a, 4 % was African American, 3 % was Asian, and 3 % listed “other” as their race. In terms of income, 28 % earned $20,000 or less a year, 22 % earned between $20,000 and $40,000, 11 % earned between $40,000 and $60,000, 14 % earned between $60,000 and $100,000, and the remaining 25 % earned more than $100,000. For educational attainment, 5 % of the participants had a high school education or less, 59 % had some college but had not graduated, 17 % had a bachelor’s degree, and the remaining 19 % had at least some graduate training. Fifty-two percent of the sample said they were in an exclusive dating relationship with their partner, 31 % reported that they were engaged, and 17 % reported that they were married. For relationship length, 17 % of the sample had been in their relationship for less than 6 months, 13 % from 6 to 12 months, 34 % from 1 to 2 years, 24 % from 3 to 5 years, and the remaining 12 % for more than 5 years.

Procedures

All participants completed an appropriate consent form prior to taking the RELATE instrument, and all data collection procedures were approved by the institutional review board at the authors’ university. Individuals completed RELATE online after encountering the instrument in a variety of settings. Some participants were asked to take RELATE as part of an undergraduate class, others completed it as part of a workshop for couples, some individuals completed it after finding it online, and some completed it as part of the assessment package given by a professional therapist or clergy member. In the instructions for the instrument, individuals were directed to take the questionnaire on their own without consulting, viewing, or questioning their partners about items.

The data for this study come only from individuals who met the following three criteria. First, they checked the box indicating they would be willing to be contacted in the future for additional research studies. Second, they were in an exclusive romantic relationship with a partner, as indicated by answering that they were seriously and exclusively dating their partner, or that they were engaged or married. Third, between one and two years had passed since they completed RELATE. Participants who met these three criteria were contacted and asked to complete a survey of approximately 100 items about their relationship. For the experimental study on incentives, participants were randomly assigned to the cells of the experimental design until the desired sample sizes for each cell were reached. This resulted in a total sample of 2,049 individuals who were sent email invitations to participate in this study. These individuals were sent three email reminders, approximately 2 weeks apart, asking them to participate before they were considered nonrespondents.

The 2,049 participants were sent an email notice asking about their willingness to participate in a study about their romantic relationship. Forty-five percent of the emails were returned as undeliverable, either because the email was blocked by a filtering program or because the address provided did not represent a current email account. It was not possible to determine what percentage of email messages were returned due to a filter versus an invalid address. Because all of the participants had already answered a questionnaire one or two years before, it was possible to compare the participants with returned email messages to those who received the email message.

Design

We had the following six conditions in our experimental design: Group 1 (N = 94), the control group, was given no incentive other than the standard plea to help the researchers understand how to improve couple relationships. Group 2 (N = 92) received $20 upon completing the survey. Group 3 (N = 91) participated in a $100 drawing in which one participant in every 50 was randomly selected to receive a $100 incentive. Group 4 (N = 176) participated in a $100 drawing in which one participant in every 100 was randomly selected. Group 5 (N = 377) participated in a $100 drawing in which one participant in every 200 was randomly selected. Group 6 (N = 300) participated in a $100 drawing in which one participant in every 300 was randomly selected. To test the influence of the reward on response rates, participants were told exactly which of the six rewards they would receive prior to starting the survey.

Upon initiation of the study we were not sure what percentage of the respondents would have invalid email addresses, but we hoped that before the available sample ran out we would have approximately 1,200 participants available for random assignment. In the end we ran out of participants at 1,130 because the percentage of invalid email addresses was higher than we had hoped. Although we randomly assigned individuals to conditions, the different lottery incentives, with drawings held after 50, 100, 200, and 300 respondents, required different numbers of individuals in each condition. Consequently, we had a computer program written that allowed us to establish a priori the proportions of the presumed sample that would be randomly assigned, so that the end result would be 100 participants in each of conditions 1–3, 200 in condition 4, 400 in condition 5, and 300 in condition 6. Because the number of respondents with valid email addresses was slightly smaller than our original projections, we ended up with slightly fewer participants in most conditions than anticipated.
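To make this assignment procedure concrete, here is a minimal sketch, in Python, of proportional random assignment of the kind described above. It is an illustration under our assumptions, not the program actually used; the participant identifiers and the seed are hypothetical.

```python
import random

# A priori target cell sizes from the design: 100 each for conditions 1-3,
# 200 for condition 4, 400 for condition 5, and 300 for condition 6.
TARGETS = {1: 100, 2: 100, 3: 100, 4: 200, 5: 400, 6: 300}

def assign_conditions(participant_ids, targets=TARGETS, seed=None):
    """Randomly assign participants to conditions in a priori proportions.

    Builds a pool of condition "slots" matching the target cell sizes,
    shuffles it, and pairs slots with participants. If the participant
    pool runs out early (as happened here at 1,130 of 1,200 planned),
    each cell simply ends up slightly under its target.
    """
    rng = random.Random(seed)
    slots = [cond for cond, n in targets.items() for _ in range(n)]
    rng.shuffle(slots)
    # zip stops at the shorter list, mimicking running out of participants
    return dict(zip(participant_ids, slots))

# Hypothetical usage: 1,130 valid addresses against 1,200 planned slots
assignment = assign_conditions([f"p{i}" for i in range(1130)], seed=42)
```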

Measures

The RELATE is a questionnaire designed to evaluate the relationship between romantically linked partners, be they dating, engaged, or married. The questions examine several different contexts—individual, cultural, family (of origin), and couple—developed from research reviews that have delineated important variables that are related to the development and maintenance of successful relationships (Busby et al. 2001; Larson and Holman 1994). Previous research has documented the RELATE’s reliability and validity, including test–retest and internal consistency reliability, and content, construct, and concurrent validity (Busby et al. 2001). We refer the reader specifically to Busby et al.’s discussion of the RELATE for detailed information regarding the theory underlying the instrument and its psychometric properties.

To answer the research questions, those with an invalid email address were compared to those with a valid email address. In addition, those with a valid email address who completed the survey were compared to those who did not complete it. To organize our measures, we used the extensive research by Larson and Holman (1994), later updated by Holman and associates (2001), who developed a comprehensive model delineating the important background, individual, family, and couple variables that influence adult relationships. Consequently, we compared participants on variables from these four domains as follows:

Background Measures

Gender, Race, Relationship Status, Age, Education Level, Income, and Relationship Length were all single-item questions, as described in the participants section.

Personality Measures

While there were many scales on RELATE measuring aspects of personality and other individual characteristics, we selected the Big Five personality measures (Draper and Holman 2005) to be comparable with the Rogelberg et al. (2003) study. The Big Five personality measures on RELATE are lists of adjectives such as friendly, kind, flexible, and nervous. Participants are asked to rate how well these words describe them on a five-point Likert response scale ranging from never to very often. The personality scales contained 3 to 6 items each. With this sample, the Cronbach’s alpha was .75 for the Agreeable Scale, .75 for the Conscientious Scale, .73 for the Openness Scale, .80 for the Surgency Scale, and .83 for the Neuroticism Scale.

Family of Origin Measures

While a wide variety of scales measuring different aspects of the family of origin were available, we selected two scales that previous research has shown to be consistently related to couple outcomes (Busby et al. 2005): the Family Impact Scale and the Parents’ Marriage Scale. The Family Impact Scale consisted of four items measuring whether the impact of the family of origin was currently causing problems for participants in their adult relationships (e.g., “From what I experienced in my family, I think family relationships are safe, secure, rewarding, and a source of comfort”). The Parents’ Marriage Scale was a three-item scale measuring how satisfied the participants’ parents were in their marriage. The response scale for these items was a 5-point Likert scale ranging from strongly agree to strongly disagree. The Cronbach’s alpha was .78 for the Family Impact Scale and .90 for the Parents’ Marriage Scale.

Couple Measures

Again, the RELATE instrument contained a wide variety of measures of different aspects of couple relationships, but we selected measures of Positive Communication (e.g., “When I talk to my partner I can say what I want in a clear manner.”), Negative Communication (e.g., “I use a tactless choice of words when I complain.”), Relationship Satisfaction (e.g., “How satisfied are you with your overall relationship with your partner?”), and Relationship Stability (e.g., “How often have you thought your relationship might be in trouble?”). Each of these scales consisted of between 3 and 8 items answered on five-point Likert response scales. The Cronbach’s alpha was .77 for the Positive Communication scale, .79 for the Negative Communication scale, .86 for Relationship Satisfaction, and .81 for Relationship Stability.
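Since each scale’s internal consistency is summarized with Cronbach’s alpha, a minimal sketch of the standard computation may be helpful; the item responses below are simulated, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses for a hypothetical 4-item scale:
# each respondent has a latent level, and items vary slightly around it.
rng = np.random.default_rng(0)
latent = rng.integers(1, 6, size=(200, 1))
items = np.clip(latent + rng.integers(-1, 2, size=(200, 4)), 1, 5).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")
```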

Results

Participants with Invalid Email Addresses

To answer our first research question, we evaluated whether those who had an invalid email address when recontacted 1 or 2 years later differed significantly on background, personality, family of origin, or couple variables from those who had a valid email address. While we expected that a number of people would no longer have the same email address, it was distressing, though not surprising, to find that 45 % of the email invitations were returned as “undeliverable” by our email server. At least one study has shown that about half of users provide false email addresses when asked for them (Bradley 2009). This may be indicative of a significant challenge for internet research in which email addresses are the primary way to contact individuals. With the number of email scams occurring, filter settings and distrust may be so high that researchers can expect this problem to persist into the foreseeable future. Consequently, it is of the utmost importance to explore whether these individuals were in some way unique from those with valid email addresses.

The first three background variables, Gender, Race, and Relationship Status, were categorical, so a two-way contingency table analysis was conducted to explore whether these variables were related to having a valid email address. The results indicate that there was no relationship between whether participants had a valid or invalid email address and their gender, race, or relationship status. In fact, the Pearson chi-square values for these variables were not even close to significant, with p values above .30.
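As an illustration of this type of two-way contingency analysis, the sketch below runs a Pearson chi-square test on a hypothetical 2 × 2 table of address validity by gender; the cell counts are invented for the example and are not the study’s data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = valid/invalid address, columns = female/male
table = [[640, 490],   # valid address
         [520, 399]]   # invalid address

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square({dof}, N = 2049) = {chi2:.2f}, p = {p:.3f}")
```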

Table 1 contains the means, standard deviations, T values, and Cohen’s d comparing those with valid and invalid email addresses on the continuous variables. To protect against Type I errors due to the large number of mean comparisons, we used a significance level of .01 to indicate significant differences between groups. By this criterion, only three variables in Table 1 showed a significant mean difference between groups. Those with a valid email address were likely to be older, more educated, and to have a higher income than those with invalid email addresses. On the remaining 12 scales there were no significant differences between the groups.
Table 1
Means, standard deviations, and T values comparing participants with valid and invalid email addresses

Scale                     Valid address    Invalid address   T value   Sig.   Cohen's d
                          Mean     SD      Mean     SD
Age                       29       9.9     27       8.5      3.8       .000   .17
Education                 6.0      1.8     5.8      1.6      3.3       .001   .15
Income                    3.7      2.7     3.1      2.5      5.5       .000   .17
Relationship length       4.1      1.5     4.0      1.5      1.5       .126   .07
Agreeable                 4.3      .47     4.4      .47      1.2       .232   .05
Conscientious             3.4      .84     3.5      .83      .91       .362   .04
Openness                  4.0      .54     4.0      .53      1.5       .128   .06
Surgency                  3.4      .68     3.5      .68      1.7       .092   .07
Neurotic                  2.5      .51     2.6      .51      2.4       .018   .11
Family impact             2.1      1.1     2.1      1.1      .33       .741   .01
Parents' marriage         3.3      1.3     3.3      1.4      .67       .503   .03
Positive communication    4.1      .61     4.1      .63      .94       .346   .04
Negative communication    2.3      .65     2.3      .66      .54       .587   .02
Satisfaction              3.9      .75     3.9      .75      .76       .449   .03
Stability                 1.9      .78     1.9      .80      .12       .905   .01
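For readers who want to reproduce the style of comparison reported in Tables 1 and 2, here is a minimal sketch of an independent-samples t test with a pooled-SD Cohen’s d. The two score vectors are simulated from the Table 1 age summaries, with group sizes of 1,130 and 919 implied by the text; they are not the raw data.

```python
import numpy as np
from scipy.stats import ttest_ind

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Simulated ages matching Table 1 (valid: M=29, SD=9.9; invalid: M=27, SD=8.5)
rng = np.random.default_rng(1)
valid = rng.normal(29, 9.9, 1130)
invalid = rng.normal(27, 8.5, 919)

t, p = ttest_ind(valid, invalid)  # Student's t, equal variances assumed
d = cohens_d(valid, invalid)
print(f"t = {t:.1f}, p = {p:.3f}, d = {d:.2f}")
print("significant at the .01 level" if p < .01 else "not significant at .01")
```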

Comparison between Respondents and Nonrespondents

To answer the second research question, we examined two groups within the valid email group: those who responded to the request to complete a survey and those who did not. Overall, 10 % of the participants responded to the request. Comparisons between the 10 % who responded and the 90 % who did not were made on the same scales as in the previous analysis. A two-way contingency table analysis was conducted to explore whether Gender, Race, and Relationship Status were related to whether a participant responded to the survey. Neither race nor relationship status was significantly related to whether participants completed a survey. However, the Pearson chi-square of 7.01 (df = 1, N = 1,078) for gender was significant at p = .008, indicating that significantly more females (13 %) and fewer males (7 %) responded to the survey than would be expected by chance alone.

Table 2 contains the means, standard deviations, t-values, and Cohen’s d comparing those who responded to those who did not respond on the continuous variables. None of the mean differences were significant.
Table 2
Means, standard deviations, and T values comparing respondents and nonrespondents

Scale                     Respondents      Nonrespondents    T value   Sig.   Cohen's d
                          Mean     SD      Mean     SD
Age                       30       10.7    29       9.8      1.3       .188   .15
Education                 6.3      1.8     6.0      1.8      1.6       .106   .16
Income                    3.6      2.7     3.8      2.7      .67       .499   .07
Relationship length       4.3      1.5     4.0      1.5      1.7       .082   .17
Agreeable                 4.3      .49     4.3      .47      1.6       .110   .16
Conscientious             3.3      .87     3.4      .83      1.3       .198   .13
Openness                  3.9      .59     4.0      .53      1.3       .185   .13
Surgency                  3.4      .69     3.5      .68      1.4       .161   .14
Neurotic                  2.5      .57     2.5      .50      .73       .469   .07
Family impact             2.2      1.1     2.1      1.0      1.0       .303   .10
Parents' marriage         3.2      1.3     3.4      1.3      1.1       .291   .10
Positive communication    4.0      .61     4.1      .60      1.3       .188   .13
Negative communication    2.3      .63     2.3      .65      .33       .740   .03
Satisfaction              3.8      .77     3.9      .75      .91       .363   .09
Stability                 1.9      .77     1.8      .78      .70       .483   .07

Do Incentives Improve Response Rates?

The last research question was whether different monetary incentives improved response rates. Table 3 shows the number of people in each incentive category and the percentage who completed the surveys in each experimental condition. In terms of raw percentages, respondents offered the chance to win a $100 incentive for every 50 people (condition 3) were clearly the most likely to respond. However, the chi-square analyses evaluating these percentages demonstrated that this group responded significantly better only compared with group 4. The Pearson chi-square comparing these two groups was 3.94 (df = 1, N = 267), significant at p = .04.
Table 3
Percentage of respondents who completed surveys in each experimental condition

Experimental condition                 Sample size   Percentage complete
1. Control condition                   94            7.4
2. $20 incentive                       92            8.7
3. $100 lottery per 50 participants    91            14.3
4. $100 lottery per 100 participants   176           6.8
5. $100 lottery per 200 participants   377           10.6
6. $100 lottery per 300 participants   300           11.0
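The group 3 versus group 4 contrast can be checked directly from Table 3; completions reconstructed from the reported percentages are 13 of 91 and 12 of 176. A quick sketch, using an uncorrected Pearson chi-square, which reproduces the reported value:

```python
from scipy.stats import chi2_contingency

# Completions reconstructed from Table 3: 14.3% of 91 ≈ 13; 6.8% of 176 ≈ 12
group3 = [13, 91 - 13]    # [completed, did not complete]
group4 = [12, 176 - 12]

chi2, p, dof, _ = chi2_contingency([group3, group4], correction=False)
print(f"chi-square({dof}, N = {91 + 176}) = {chi2:.2f}, p = {p:.3f}")
# -> chi-square(1, N = 267) = 3.94, p = .047
```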

Discussion

The results from this study illustrate several of the challenges faced by researchers who use the internet as a source for gathering data. Large percentages of people who provide email addresses as a means of contact either change addresses within a two-year period, have filtering programs that block many requests for research participation, or may have provided a false email address initially. In this study, approximately 45 % of the initial participants did not have a deliverable email address later. This is strikingly similar to the roughly 50 % of users who, as Bradley (2009) reported, provided false email addresses in an internet study. According to online advice about avoiding spam, two commonly recommended strategies are to provide a false email address or to use a disposable address that forwards only wanted mail to a regular account. Either approach would result in a substantial loss of potential participants for researchers.

Researchers could implement several strategies to help reduce the percentage of invalid addresses, such as requiring email verification when the address is first provided, or sending requests reminding participants to update their email addresses if they change. While we were unable to locate any research showing that email verification or any other approach actually reduces the number of invalid addresses, a verification process at least ensures that the original address is valid. Researchers should be cautious, though, about contacting participants too often, as this may increase irritation; some studies have shown that when people are dissatisfied or irritated with those conducting the research, they are much less likely to participate (Bosnjak and Batinic 2002).
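As one concrete form the verification step could take, the sketch below issues and checks a signed confirmation token of the kind embedded in a “click to confirm your address” link. The secret key, expiry window, and function names are illustrative assumptions on our part, not anything prescribed by the study.

```python
import hmac
import hashlib
import time

SECRET_KEY = b"replace-with-a-real-secret"  # illustrative placeholder

def make_token(email: str) -> str:
    """Create a signed token to embed in a confirmation link."""
    payload = f"{email}|{int(time.time())}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, max_age_s: int = 7 * 24 * 3600):
    """Return the email if the token is authentic and fresh, else None."""
    try:
        email, issued_at, sig = token.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SECRET_KEY, f"{email}|{issued_at}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    if int(time.time()) - int(issued_at) > max_age_s:
        return None
    return email  # address confirmed: safe to keep on the contact list

token = make_token("participant@example.com")
assert verify_token(token) == "participant@example.com"
```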

The good news about participants with invalid email addresses is that they were very similar to those with valid addresses. Participants with valid email addresses were slightly older, better educated, and had slightly higher incomes, but their personalities, families of origin, and couple dynamics were indistinguishable from those with invalid addresses. These findings suggest that older, more stable individuals keep the same email addresses, or use anti-spam strategies less often, than younger participants. For the research findings themselves, it may matter little that large percentages of people cannot be recontacted: those who can be contacted appear similar on a wide variety of individual and relationship dimensions. This is an important finding that has not been demonstrated before, because most scholars have no data available to analyze on people who cannot be reached.

The low overall response rate of 10 % is a significant concern, though not out of line with recent studies demonstrating very low response rates, in the 10–30 % range, for all types of research requests (Schwarz et al. 1998; Tourangeau 2004). Still, response rates this low present a wide variety of challenges to researchers, who will run through lists of potential participants very quickly when nine out of ten do not respond. Additionally, with such a high nonresponse rate, the question of how the sample of respondents is unique becomes even more crucial. The data from this study clearly show that across 18 different variables, ranging from race to relationship satisfaction, the only variable that significantly distinguished respondents from nonrespondents was gender. Since more females than males are likely to respond to a relationship-oriented survey such as ours, researchers may want to oversample males. Other than gender, nonrespondents do not appear to be substantially different from respondents. While obtaining higher response rates is certainly desirable for many reasons, not the least of which are time and money, unduly worrying about the generalizability of results from respondents does not appear to be merited based on this study. Other scholars have cautioned researchers about the potential biases of internet-based samples, and these concerns are certainly serious (Brenner 2002; Cantrell and Lupinacci 2007). Nevertheless, the biases of samples gathered over the internet are likely to be different from, rather than more serious than, the biases of mail, in-person, or telephone surveys.

Given the low response rates of the internet-based sample in this study, it was heartening to see that the cheaper form of incentive, the $100 lottery system of condition 3, produced almost double the response rate of the $20-per-person condition. With about 100 participants in each of these conditions, the lottery incentive would cost only $200 in total, compared with $2,000 in total for the $20 condition. This is a substantial savings, and even though the improvement in response rates was not statistically different from most of the other conditions, a much higher response rate would be required for per-person incentives to outweigh the savings of the lottery system. It is also interesting that the largest contrast in incentives was between conditions 3 and 4, where the only difference was the odds of winning. Apparently respondents do pay attention to their odds of winning in a lottery system, but only when the odds crossed the threshold from 1 in 50 to 1 in 100. In general, however, the results of our incentive experiment could be read in line with Goritz’s (2006) conclusion that the lack of substantial changes in response rates after incentives is evidence that monetary incentives are not cost-effective at all. We would argue that more research is necessary before this conclusion is reached, but certainly at this point less expensive choices such as lotteries seem most consistent with the existing knowledge.

Many questions about incentives remain unanswered and suggest directions for future research. Researchers should test whether substantially increasing the monetary incentive in conditions like 2 and 3 would improve response rates, and whether the lottery is more advantageous when larger amounts of money are offered. Although some scholars have shown a leveling off of response improvement as incentives become larger (Szelenyi et al. 2005), the incentives provided were quite modest, and the pattern might not be the same when amounts of $100 or more are provided. With a lottery system, even increasing the incentive to as much as $500 would still equate to only $10 per participant if one award were given for every 50 returned surveys. Still, it may be that as the incentive amount becomes large, potential participants would be less trusting and worry that it was a scam. These speculations should be tested in future studies.

Obtaining a representative sample of the United States population is very difficult with internet surveys: there are no comprehensive lists of email addresses comparable to those for phone numbers and mailing addresses, and the lists of available email addresses are highly suspect in terms of how they were gathered and whether potential participants knew their names would be used for future research studies (Bradley 2009). Consequently, probability sampling is likely to remain a very difficult problem. One group of researchers combined two data-gathering approaches by starting with a randomized telephone sample and providing participants with a URL to complete an online survey (Sundberg-Cohon and Peacock 1998). Such creative approaches to probability sampling on the internet may prove fruitful. Because of the many advantages of internet research, it may be more important to compare the best internet samples with probability samples gathered through more traditional methods to see how they differ. For the near term, most researchers using the internet are left to seek participants through advertising and postings in free forums, interest groups, blogs, and other venues, techniques that are likely to bias samples in unknown ways, just as nonprobability samples gathered by more traditional methods are biased.

We have been conducting research over the internet for more than 10 years and have well over 130,000 completed surveys. While there are many challenges with this type of research, the ability to reach large numbers of people quickly and to have the data immediately available for analysis makes this research much less onerous than mail or telephone surveys. Except for the cost of paying computer programmers, it is also much less expensive, and even programming costs are lower than expected if the programs remain in use for more than a few years and the long-term reductions in mailing, telephone, and research assistant costs are considered. Perhaps the most beneficial aspect of internet research is that, because the overall sample sizes are so large, rare cases, such as adult males who report being sexually victimized by their partners, or people who have been divorced more than three times, can be studied in numbers that would be almost impossible to obtain without the internet. Additionally, the results from this study suggest that respondents are very similar to nonrespondents and that there are some promising ways to improve response rates that merit further study.

Copyright information

© Springer Science+Business Media New York 2013