There are multiple methods of collecting survey data, including mail, phone, fax, and face-to-face interviews. Because access to computers is increasing, computer-based self-administered surveys (e.g., web surveys) are being used more widely for research and clinical information gathering [1, 2]. Responses to self-administered web-based surveys have been found to be generally comparable to those from mail surveys, with the added advantages of eliminating manual data entry, providing a basis for estimating response times, and facilitating complex branching and dynamic assessment [3, 4].

The presentation and elicitation of answers to survey questions on the web present new challenges for researchers. Only a limited number of items can fit on a computer screen. Rather than marking their answer with a pencil or pen, respondents to web-based surveys click on the answer using a mouse or touch screen. And instead of respondents moving down a page manually, the computer interface presents the next screen after a response to the previous item. A fundamental question is whether to advance automatically to the next screen as soon as a response is provided to the last (perhaps only) question on the screen, or to require the respondent to select a “next” button.

The automatic advance approach requires less effort and time than the next-button approach. However, respondents may initially enter the wrong response. A back button gives them the opportunity to return and change their response, but respondents may be reluctant to do so once they are presented with the next question. This study evaluates the effects of next and back buttons on survey completion times, missing data, internal consistency reliability, and mean scale scores.

Methods

Based on power calculations, we targeted a sample of 800 study participants (200 for each of 4 study arms). With this target sample, we would have 80% power to detect a small effect (effect size = 0.28) between any two groups. We included 807 participants from a national, web-based polling company, Polimetrix (now YouGov Polimetrix, www.polimetrix.com). Over one million adult panel members have provided email addresses, contact information, and responses to core profile items in order to receive occasional surveys about a variety of subjects [5]. Panel members generally receive about four surveys per year and earn “polling points” that can be redeemed for rewards. Polimetrix sent a study invitation to the panel, and the first panel members to respond were enrolled.
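As a check on the stated design, the 80% figure is reproducible with a standard two-sample power calculation. The sketch below uses Python’s statsmodels for illustration; the software used for the original calculation is not reported.

```python
# Illustrative check of the stated power (not the authors' original code):
# two-sided two-sample t test, n = 200 per arm, alpha = 0.05, d = 0.28.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.28, nobs1=200, alpha=0.05, ratio=1.0)
print(f"Power to detect d = 0.28 with 200 per group: {power:.2f}")  # ~0.80
```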

We administered 56 items assessing performance of social/role activities, rated on a 5-category response scale ranging from “never” to “always.” Sample items include “I am limited in doing my work (include work at home),” “I am able to do all of my regular family activities,” and “I am able to do all of my regular leisure activities.” We also administered 56 items measuring satisfaction with social/role activities, using a 5-category response scale ranging from “not at all” to “very much.” Sample items include “I am happy with how much I do for my family,” “I am satisfied with my ability to work (include work at home),” “I am satisfied with my current level of social activity,” and “I am satisfied with my ability to do leisure activities.” These items were preliminary versions of item banks created for the Patient-Reported Outcomes Measurement Information System (PROMIS) project [6].

One item was presented to the study participants on each screen. Participants were randomized to one of four study arms: (1) automatic advance to the next question with no opportunity to go back (auto/no back); (2) automatic advance to the next question with an opportunity to go back (auto/back); (3) next button to go to the next question with no opportunity to go back (next/no back); or (4) next button to go to the next question with an opportunity to go back (next/back). The next button was included in all conditions, but using it to advance to the next question was required only of persons randomized to the third and fourth conditions (see Fig. 1). The next button could be used by all participants to skip a question.
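For concreteness, the four arms reduce to two binary interface settings. The sketch below is a hypothetical reconstruction of this configuration; the text does not describe how the Polimetrix survey software was actually implemented.

```python
# Hypothetical reconstruction of the four interface conditions as two binary
# flags; not the actual Polimetrix implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class ArmConfig:
    auto_advance: bool  # advance as soon as a response is clicked
    back_button: bool   # respondent may return to the previous item

ARMS = {
    "auto/no back": ArmConfig(auto_advance=True, back_button=False),
    "auto/back": ArmConfig(auto_advance=True, back_button=True),
    "next/no back": ArmConfig(auto_advance=False, back_button=False),
    "next/back": ArmConfig(auto_advance=False, back_button=True),
}

def on_response(arm: ArmConfig, advance_screen) -> None:
    """A next button is present in every arm (it can also skip an item);
    only the auto-advance arms move on immediately after a response."""
    if arm.auto_advance:
        advance_screen()
```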

Fig. 1 Screen shot of sample item

We evaluated time to complete the items based on web server time elapsed from the display of the first item to the response to the last item. In addition, we examined missing data rates, internal consistency reliability [7], and mean scale scores by condition.

Results

We identified respondents whose response times were judged by the study team to be unreasonably fast (i.e., <2 s per item): 18 in the auto/no back condition, 10 in the auto/back condition, 2 in the next/no back condition, and 1 in the next/back condition. All 18 of these respondents in the auto/no back condition, along with the 18 respondents with the fastest times in each of the other three experimental conditions (including those with unreasonably fast times), were excluded from the analysis.
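Under one reading of this rule (18 exclusions within each arm, which keeps the arms comparable), the exclusion step could be expressed as follows; the column names are hypothetical.

```python
# Illustrative exclusion rule (hypothetical column names): drop the 18
# fastest respondents within each arm, which necessarily removes everyone
# averaging under 2 s per item.
import pandas as pd

N_ITEMS = 56

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per respondent with 'arm' and 'total_seconds' columns."""
    fastest_idx = (
        df.sort_values("total_seconds")
          .groupby("arm")
          .head(18)  # the 18 fastest respondents in each arm
          .index
    )
    return df.drop(index=fastest_idx)
```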

The remainder of the sample had an average age of 53 (SD = 16). The majority were women (64%); 83% were non-Hispanic white, 6% Hispanic, 4% African American, and 7% other. Two percent reported less than a high school education, 18% were high school graduates, 44% had some college, and 37% were college graduates. Age, gender, race, and education did not differ significantly (P > .05) by experimental condition (see Table 1).

Table 1 Sample demographic characteristics

Time to complete differed significantly by study arm for both social/role performance (F = 10.36, P < .001) and social/role satisfaction (F = 15.77, P < .001). The average total response time for the 56 social/role performance items was 4.3 min for the auto/no back group, 4.9 min for the auto/back group, 6.5 min for the next/no back group, and 7.2 min for the next/back group. The average total response time for the 56 satisfaction items was 4.7 min for the auto/no back group, 4.3 min for the auto/back group, 6.5 min for the next/no back group, and 7.2 min for the next/back group (Table 2). Differences between groups with and without a back button were not statistically significant in Bonferroni-adjusted multiple comparisons. Bonferroni-adjusted differences between groups with and without a next button were statistically significant (P < .025). The next button increased response time by about 2 min overall (roughly 2 s per item).
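A minimal sketch of this analysis, assuming hypothetical column names (the statistical software actually used is not identified): a one-way ANOVA across the four arms followed by Bonferroni-adjusted pairwise t tests.

```python
# Illustrative analysis paralleling the reported tests: omnibus F test,
# then pairwise comparisons with a Bonferroni adjustment.
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def analyze_times(df: pd.DataFrame) -> None:
    """df: one row per respondent with 'arm' and 'minutes' columns."""
    by_arm = {arm: g["minutes"].to_numpy() for arm, g in df.groupby("arm")}
    f_stat, p_val = stats.f_oneway(*by_arm.values())  # omnibus F test
    print(f"F = {f_stat:.2f}, P = {p_val:.4g}")

    pairs = list(combinations(by_arm, 2))
    raw_p = [stats.ttest_ind(by_arm[a], by_arm[b]).pvalue for a, b in pairs]
    reject, adj_p, _, _ = multipletests(raw_p, method="bonferroni")
    for (a, b), p, sig in zip(pairs, adj_p, reject):
        print(f"{a} vs {b}: adjusted P = {p:.3f}{' *' if sig else ''}")
```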

Table 2 Time to complete, missing data, and scale scores by study arm

Internal consistency reliability was 0.99 in each of the experimental groups for both the 56-item performance of social/role activities scale and the 56-item satisfaction with social/role activities scale. We also found no significant differences in missing data rates or scale scores by group (see Table 2).
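For reference, coefficient alpha [7] can be computed directly from the item responses; a minimal sketch, assuming a complete respondents-by-items matrix with no missing data:

```python
# Minimal Cronbach's alpha computation (assumes no missing responses).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents, columns = items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of summed scores
    return k / (k - 1) * (1 - item_var_sum / total_var)
```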

Discussion

Time to complete the survey was about 50% longer when respondents were required to use a next button to advance after answering an item. Missing data, reliability, and mean scale scores were similar across the randomized groups. However, it is important to note that our sample consisted of highly educated, experienced survey respondents; these results may therefore not generalize to other samples. Future research is needed to determine whether requiring use of a next button benefits data quality among less educated or less computer-experienced respondents. It is possible that the effect of the next button on missing data or reliability depends on level of education; we were unable to address this issue because our study had limited power to detect interaction effects by educational attainment.

Including a next button is important to allow respondents to skip items they prefer not to answer. However, requiring use of the next button after each response increases response time without changing the quality of the data. Hence, for computer-based surveys we recommend automatic advancement to the next item once a response is made, with the opportunity to go back to the previous item, particularly for more educated and experienced survey respondents. This approach minimizes response burden while retaining a safeguard for respondents who initially enter an incorrect response.

We also note that automatic advancement with no back option resulted in 18 respondents who responded so rapidly (<2 s per item) that their attention to each item was questionable. In contrast, only 10 people in the automatic advancement with a back option, and 1–2 people in the next button conditions, responded this quickly. The presence of a back button therefore not only provides a safeguard for respondents who enter an incorrect response to the previous item but also appears to reduce the number of respondents who spend so little time answering a question that it is unlikely they give it much thought. The automatic advance option may be particularly helpful for people with physical limitations that make fine motor control of a mouse difficult or fatiguing (e.g., patients with Parkinson’s disease or spinal cord injury).

All of the items tested in this study allowed only one answer. “Select all that apply” items and numeric or text entry fields are not compatible with an automatic advance format, because the computer cannot know when a participant has finished selecting responses or entering data. Future usability testing might therefore address the extent to which switching between automatic advance and a next button within a single survey confuses participants. It might also be interesting to evaluate the relative merits of the single question per screen used in this study versus presenting multiple questions per screen.

Allowing respondents the option to go back and change a prior response has implications for computer-adaptive testing (CAT). Because the items presented later in the administration depend on answers to previous items, the next item presented could change when a respondent changes an initial response. Additional work is needed to document whether this produces any confusion among respondents. Similarly, if an initially incorrect response causes the CAT to reach its standard error threshold for stopping, it may be important to delay termination until the respondent has had an opportunity to go back to that final item.
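To make the termination issue concrete, the schematic loop below defers stopping on the standard-error threshold until the respondent has had a chance to revise the final answer. This is a generic illustration, not the PROMIS CAT algorithm; all four callables and the threshold value are assumptions.

```python
# Schematic CAT loop: item selection depends on all prior answers, and
# termination is deferred until the last response can be revisited.
SE_THRESHOLD = 0.3  # illustrative stopping rule

def run_cat(select_item, administer, estimate, offer_back):
    responses = []
    while True:
        item = select_item(responses)        # depends on earlier answers
        responses.append((item, administer(item)))
        theta, se = estimate(responses)
        if se > SE_THRESHOLD:
            continue                         # precision not yet reached
        revised = offer_back(responses[-1])  # chance to change the last answer
        if revised is not None:
            responses[-1] = revised
            theta, se = estimate(responses)
            if se > SE_THRESHOLD:
                continue                     # the revision reopened the CAT
        return theta, se
```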

In summary, this study provides support for automatic advance to the next item with the option to go back to the previous item.