Web-Based Survey Methodology

  • Kevin B. Wright
Living reference work entry

Abstract

This chapter examines a number of issues related to online survey research designed to access populations of various stakeholders in the health care system, including patients, caregivers, and providers. Specifically, the chapter focuses on such issues as finding an adequate sampling frame for obtaining samples of online populations, measurement issues, enhancing response rates, overseeing web-based survey data collection, and data analysis issues. Moreover, it examines issues such as measurement validity and reliability in web-based surveys as well as problems with selection biases and generalizability of study findings. Finally, the chapter assesses the pros and cons of using SurveyMonkey and Qualtrics as web-survey platforms/services and their utility for studying various online contexts that may be of interest to social science and health scholars.

Keywords

Online data collection · External validity · Response rates · Sampling · Survey research · Web surveys

1 Introduction

Over the past two decades, we have seen considerable growth in the area of online survey methodology, particularly in the areas of online survey development and implementation (Dillman 2000; Wright 2005; Lieberman 2008; Murray et al. 2009; Greenlaw and Brown-Welty 2009; Kramer et al. 2014). Paralleling the growth of online survey methodology, scholars have engaged in research to evaluate the relative merits and problems associated with online survey methods within a broad array of academic disciplines and areas (Wright 2005), including health social science researchers (Shaw and Gant 2002; Wright 2000, 2011; Konstan et al. 2005; Wright et al. 2010a). Recent trends suggest that web surveys will become increasingly prominent in the future and result in higher response rates as the general population becomes increasingly made up of “digital natives” (Kramer et al. 2014).

Understanding how to effectively design and implement online surveys as well as some of the many advantages and disadvantages of this research method can help health social science researchers study a variety of health contexts and issues. In short, understanding online survey methods is a helpful set of skills to add to one’s research method toolbox. Health social science researchers have used online surveys to access a number of segments of the population, including access to physicians, other health care workers, online support community participants, hospital staff members, patients, lay caregivers, and a variety of other web-accessible stakeholders in the health care system (Owen and Fang 2003; Konstan et al. 2005; Wright et al. 2010a; Siegel et al. 2011; Wright and Rains 2013).

Given the ubiquity of popular web survey platforms and services, such as SurveyMonkey and Qualtrics, it is easy to forget that online survey research is relatively young and constantly evolving. Until recent years, creating and conducting an online survey was a time-consuming task requiring familiarity with web authoring programs, HTML code, and scripting programs. Today, various survey authoring software packages and online survey services like SurveyMonkey have made online survey research much easier and faster. Yet many researchers who have been slow to move from traditional paper-and-pencil survey research to online survey research may be unaware of the advantages and disadvantages associated with this method. For example, previous research has identified numerous advantages to using online surveys over traditional survey methods, including access to individuals in distant locations, the ability to reach difficult-to-contact participants, and the convenience of automated data collection (Greenlaw and Brown-Welty 2009; Kramer et al. 2014). However, disadvantages of online survey research include uncertainty over the validity of the data, sampling issues, and concerns surrounding the design, implementation, and evaluation of an online survey (Kramer et al. 2014).

While a number of health social scientists currently use online surveys, many others may be unfamiliar with the process as well as the major promises and pitfalls of this research method. Other researchers may have been trained to use other research methods (e.g., experiments, content analysis, and so on), but they may be interested in expanding their research skills to include the use of online surveys. This chapter should be of interest to both types of scholars.

Toward that end, this chapter examines a number of issues related to using online surveys to access populations of various stakeholders in the health care system, including patients, caregivers, and providers. Specifically, the chapter focuses on such issues as finding an adequate sampling frame for obtaining samples of online populations, measurement issues, enhancing response rates, overseeing web-based survey data collection, and data analysis issues. Moreover, it examines issues such as measurement validity and reliability in web-based surveys as well as problems with selection biases and the generalizability of study findings. Finally, the chapter assesses the pros and cons of using web-based survey platforms and services like SurveyMonkey and Qualtrics and their utility for studying various online health-related communities and web portals.

2 Advantages and Problems Associated with Online Survey Methods

Research conducted on the Internet provides expanded opportunities for reaching populations of interest to health social scientists, including various stakeholders in the health care system (Eysenbach and Wyatt 2002; Wright 2005; Lieberman 2008). Emerging methodological research suggests that the Internet is an appropriate venue for survey data collection, including within health contexts (Riper et al. 2011; Kramer et al. 2014). Online surveys offer some key advantages over traditional surveys (see “Cell Phone Survey” and “Phone Survey: Introductions and Response Rates”). For example, studies have found that recruitment advertisements on Facebook can successfully recruit “hard-to-reach” populations, such as sexual minorities, people with rare diseases, veterans with post-traumatic stress disorder (PTSD), and a variety of other participants of interest to health communication researchers who are not easily accessed through traditional recruitment strategies (Pedersen et al. 2015). In terms of longitudinal surveys, recruiting participants via social networking sites, like Facebook, may also benefit longitudinal retention in research, which is often undermined by the inability to locate participants who have moved or changed contact information (Pedersen et al. 2015). Another major advantage of web-based survey research is that it allows researchers to conveniently access populations in ways that bypass spatial, chronological, and material constraints (Evans and Mathur 2005). Moreover, web servers are capable of collecting data from large numbers of participants who are accessing the online survey at the same time (Evans and Mathur 2005). As a result, relatively large sample sizes can be attained within very short periods of time. For example, in one of my recent graduate research methods seminars, I had a group of Air Force officers use their military social networks to obtain a sample of 870 people in less than 24 h.
Social networking sites and other online platforms allow researchers to draw upon existing online social networks to reach large numbers of people quickly. Participants can respond to online surveys at a time convenient for themselves, and they may take as much time as they need to answer individual questions. Broadband access to the Internet also facilitates the transmission of multimedia content, which can enhance the sophistication of online surveys. For qualitative researchers, improved broadband access also allows for online focus groups and chat rooms where participants interact with each other and the interviewer/facilitator in a multimedia setting (Wright 2005). Multimedia capabilities also allow survey researchers to embed video or add more interactive measurement features to the survey (e.g., sliding bars and fuel-gauge images that help participants visualize the perceptions they are being asked to report).

Another important advantage of online surveys is lower cost to researchers and their institutions (Evans and Mathur 2005). Compared to traditional survey methods, online surveys are much cheaper to construct and implement. In addition, more sophisticated experimental designs can take advantage of the online sphere by randomly linking participants to different stimuli (e.g., YouTube videos with a specific persuasive appeal in a health message design experiment) and then having members of the control and treatment groups complete a postexperimental survey. Such features can reduce the need for physical laboratory space, the cost of incentives to get participants to come to the lab, and the cost of transferring responses to data analysis programs. Moreover, online surveys help reduce the environmental burden since participants do not need to travel to take a survey, and there is no need to print pencil-and-paper surveys (Wright 2005). Features such as automated data collection, skip patterns, automated reminders to complete survey instruments, and data files that are easy to download into SPSS or other statistical analysis software make online survey methods an attractive and less expensive way to engage in survey research compared to traditional survey methods (Couper 2008; Greenlaw and Brown-Welty 2009; Kramer et al. 2014).
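The random-assignment step described above can be sketched in a few lines of code. Everything in this sketch is hypothetical (the condition names and video URLs are invented for illustration); it simply shows one way to assign each participant to a stimulus condition deterministically, so a returning participant sees the same stimulus:

```python
import random

# Hypothetical stimulus conditions for a health message design experiment:
# each condition maps to a made-up video URL shown before the
# postexperimental survey.
CONDITIONS = {
    "gain_frame": "https://example.com/video_gain",
    "loss_frame": "https://example.com/video_loss",
    "control": "https://example.com/video_neutral",
}

def assign_condition(participant_id: str, seed: int = 0) -> str:
    """Deterministically assign a participant to one condition so that
    the same person sees the same stimulus on a repeat visit."""
    rng = random.Random(f"{seed}:{participant_id}")
    return rng.choice(sorted(CONDITIONS))

# Assign each recruited participant; the survey platform would then
# redirect each person to CONDITIONS[assignment].
assignments = {pid: assign_condition(pid) for pid in ["p01", "p02", "p03"]}
```

In practice, platforms like Qualtrics offer built-in randomizers that accomplish the same thing; a hand-rolled assignment like this is mainly useful when stimuli are hosted outside the survey platform.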

3 Critiques of Online Surveys

Despite the many advantages of using online surveys, there have also been numerous critiques of online survey methods, including data security issues, sampling issues, and ethical concerns (Manfreda et al. 2008; Payne and Barnfather 2012; Curtis 2014). This section focuses on common concerns about using online survey research that stem from these criticisms.

During the first several years when online surveys started to become more popular (during the mid- to late 1990s), many early studies comparing online surveys to paper-and-pencil surveys were concerned with the issue of measurement equivalency. At the time, researchers worried that online surveys might invite measurement validity and reliability problems. However, a number of studies published in the past decade support an emerging consensus that both modes of data collection are generally comparable in terms of reliability and validity (Johnson 2005; Wright 2005; see Fan and Yan 2010).

Another major concern that has often been raised regarding online surveys is that online samples are not representative of the general population (Dillman 2000; Wright 2005). While this was certainly true in the early days of the Internet, in recent years more and more diverse groups of individuals (including populations of all types of patients, health-related support groups, health information communities, and so on) have found their way to cyberspace due to the reduction in costs of smartphones and other devices as well as the increased affordability of high-speed Internet access. Studies have found that individuals do not seem to differ on many psychological and communication measures when comparing online surveys to traditional paper-and-pencil surveys (Bosnjak and Tuten 2001; Fan and Yan 2010). Another problem that can occur among researchers who use longitudinal online surveys (i.e., repeated measures) is participant attrition (Wright 2005; Murray et al. 2009). However, studies suggest that attrition in online longitudinal surveys does not differ much from traditional surveys (Murray et al. 2009; Fan and Yan 2010). Moreover, automated email reminders appear to be a cheap and convenient way to reduce attrition in online surveys.

For researchers who use online surveys as a component of an online experimental design, the lack of experimental control can become a serious issue, depending upon the nature of the experiment, because the researcher has little ability to manipulate the environment beyond random assignment to different experimental conditions online (Wright 2005). Participants may need guidance in filling out the questionnaire, and there may be little or no opportunity for real-time questions from participants. People who participate in online experiments or online surveys that ask sensitive questions may be hesitant to participate if they believe that a researcher will use a person’s IP address or other information that could be used to identify their particular responses. A careful online informed consent form that contains reassurances about confidentiality and security can help to increase online survey response rates. However, a researcher needs to make sure that safeguards are built into surveys, and into the way in which they are disseminated, that will help keep the identity of participants anonymous and secure (Couper 2008). Institutional Review Boards (IRBs) at most major universities typically have some guidelines regarding the conduct of web-based research, particularly in terms of participant confidentiality and privacy issues. However, depending upon the sophistication of the survey design, the IRB may have additional concerns or questions for a researcher to address. It is important for researchers to clarify the IRB guidelines for online survey research at their particular institution early in the conceptualization process to avoid added delays in terms of launching the survey.

4 Sampling, Measurement, and Enhancing Response Rates in Online Survey Research: Promises and Pitfalls

4.1 Sampling Issues and Online Surveys

Global Internet use increased around 400% between 2000 and 2009 and, today, it is a common reality in affluent Western societies (Kramer et al. 2014). In terms of sampling frames, the Internet provides many possibilities in terms of reaching potential international participants for an online survey. Such surveys are important to cross-cultural research in the health social sciences as well as building research teams comprised of scholars from different nationalities. Some scholars have argued that online surveys allow for more efficient implementation of psychological assessments when compared with traditional assessment procedures (Lieberman 2008; Shih and Fan 2008; Kramer et al. 2014). In some cases, participant recruitment on the Internet may be the most appropriate way to reach the target population. For example, if the long-term goal of a study is to establish effectiveness of an online health intervention in online cancer support communities, then a sample recruited directly from online communities would more accurately represent that population.

However, there are a number of researchers who have warned about threats to external validity (due to sampling problems) in online survey research, and they recommend certain practices to avoid (Murray et al. 2009). For example, researchers should avoid posting an open invitation link on a forum or sending out invitations to the entire target population (a census). Moreover, many respondents have a tough time distinguishing between a legitimate survey and a spam message (especially if a person is fearful that clicking on the link to the survey might infect their computer with a virus). In addition, some individuals are more drawn to a survey topic than others or may have more time than others in terms of being able to complete an online survey questionnaire. Such issues often lead to selection biases in the sampling process. Researchers need to consider how recruitment and enrollment on the Internet may present unique challenges to sample validity and representativeness (Shih and Fan 2008; Murray et al. 2009). Whenever feasible, online survey researchers should attempt to use available sampling frames to generate a probability sample of potential participants who will be invited to participate in the online survey.
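When a sampling frame is available, drawing a simple random sample is straightforward to automate. The sketch below assumes an invented provider directory as the frame; the seed is included only so the draw can be documented and reproduced:

```python
import random

def draw_probability_sample(frame, n, seed=2024):
    """Draw a simple random sample without replacement from a sampling
    frame (e.g., a staff directory or membership list). A fixed seed
    makes the draw reproducible for the study's documentation."""
    if n > len(frame):
        raise ValueError("sample size exceeds the sampling frame")
    return random.Random(seed).sample(list(frame), n)

# Hypothetical frame: a directory of provider email addresses.
frame = [f"provider{i:03d}@example.org" for i in range(500)]
invitees = draw_probability_sample(frame, 50)
```

Each member of the frame has an equal chance of selection, which is what distinguishes this draw from the open-invitation and self-selection approaches cautioned against above.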

Due to the global reach of the Internet, additional sampling frames of participants can be accessed conveniently and cheaply within relatively short periods of time if a questionnaire is translated and/or adapted for use in other cultures. For example, Wright and colleagues (2015) conducted an online survey in China and Korea about media use, willingness to communicate about health, and weight-related stigma associated with US fast-food restaurant chain food consumption behaviors. Using contacts at universities in China and Korea, the research team was able to recruit a relatively large convenience sample of participants from both countries. However, it is important to rely on native speakers (who are members of the culture being investigated) in the construction of translated online surveys so that more nuanced regional differences in language and language use can be included in the survey (Payne and Barnfather 2012). Researchers should not rely on simplistic online language translation tools.

However, it is also important to point out that true global research is difficult to achieve since Internet access and use are not equally distributed worldwide, and a substantial digital divide exists between privileged and underprivileged socioeconomic groups and countries (Pullmann et al. 2009). In general, the countries with the greatest Internet access are typically more affluent and better educated and have a higher gross domestic product (GDP). Moreover, online platforms that exist in one country (e.g., Facebook in the USA) may not be available (for political or legal reasons) in other countries (such as China).

4.2 Measurement Issues with Online Surveys

Online surveys do not appear to compromise the psychometric properties of common quantitative measures (e.g., Likert-type scales, and so on), and studies have found that participants are typically no less representative of the general population than participants in traditional studies (Denissen et al. 2010). The anonymity of Internet-based platforms has been found to have a positive influence on communication behaviors, which has important implications for the ubiquitous use of self-report questions on online surveys. Online anonymity has been linked in a variety of studies to feelings of reduced risk when communicating with others (Wright and Miller 2010a), increased self-disclosure of thoughts and feelings (Valkenburg and Peter 2009), reduced stigmatization of visible disabilities and health conditions (Simon Rosser et al. 2009; Wright and Rains 2013), and greater willingness to communicate with others (van Ingen and Wright 2016). Shy or anxious individuals face fewer inhibitions when participating in online surveys compared to face-to-face surveys, and sensitive topics can be addressed confidentially. Because many health issues are highly sensitive and a number of diseases and health conditions are negatively stigmatized, online surveys can help health communication scholars gain access to individuals living with stigmatized health problems, people who have limited mobility due to their health issues, and people who are apprehensive about discussing sensitive health issues (who may be less willing to participate in traditional surveys).

However, it is important to recognize that creating an online survey questionnaire is not simply a case of reproducing an e-version of the paper-and-pencil survey. Formatting may need to be changed to simplify data entry, to clarify possible responses, or to avoid the possibility of submitting data before completing the survey. Because participants differ in their Internet access, computers or smartphones, and software, researchers need to make decisions about the complexity of visual design, the potential speed differences when downloading the survey on different devices, and the ability to view the whole questionnaire on a range of screen settings. Each of these design decisions can potentially influence the measurement reliability and validity of key variables on the online survey.

4.3 Enhancing Response Rates Using Online Surveys

Researchers have identified several factors that appear to increase response rates in online surveys, including personalized email invitations, follow-up reminders, prenotification of the intent to survey, and simpler/shorter web questionnaire formats (Cook et al. 2000; Porter and Whitcomb 2003; Galesic and Bosnjak 2009). Other factors that increase response rates include incentives, credible sponsorship of the survey, and multimodal approaches (Johnson 2005; Fan and Yan 2010). When online surveys initially appeared in widespread form in the 1990s, many researchers were concerned about inferior response rates of online surveys (compared to mailed surveys). However, a number of studies have since found online surveys to be similar to traditional mailed surveys in terms of response rates (see Dillman 2000; Kaplowitz et al. 2004; Manfreda et al. 2008).
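Two of the factors just listed, personalized invitations and scheduled follow-up reminders, are easy to automate. The sketch below only renders messages and computes reminder dates; the template wording and the one-week interval are illustrative choices, not recommendations from the literature cited above:

```python
from datetime import date, timedelta

INVITE_TEMPLATE = (
    "Dear {name},\n\n"
    "You are invited to complete a brief survey on {topic}:\n{url}\n"
)

def build_invitations(contacts, topic, url):
    """Render one personalized invitation per contact; personalization
    is one of the response-rate boosters noted above."""
    return [
        INVITE_TEMPLATE.format(name=c["name"], topic=topic, url=url)
        for c in contacts
    ]

def reminder_dates(launch, n_reminders=2, interval_days=7):
    """Schedule follow-up reminder dates at fixed intervals after the
    survey launch date."""
    return [
        launch + timedelta(days=interval_days * (i + 1))
        for i in range(n_reminders)
    ]
```

Actual delivery would go through the survey platform's mailing features or an institutional mail system, both of which handle opt-outs and bounces that a sketch like this ignores.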

For example, Kaplowitz et al. (2004) found that a web survey achieved a response rate comparable to that of a hard-copy mail questionnaire when both were preceded by an advance mail notification. In addition, a reminder mail notification had a positive effect on the response rate for the web survey compared to a treatment group in which respondents only received an e-mail containing a link to the web survey. In terms of health social science research, van Ingen and Wright (2016) examined online coping and social support following a major life crisis using a large, representative web-based panel study in the Netherlands that yielded a response rate of 83% (2,544 respondents). Reminder emails and easy-to-use web questionnaire formatting were used in this survey, and the researchers were able to obtain a diverse sample of participants.

Another factor that appears to influence response rates in online surveys is convenience for participants. Participants can take an online survey in the comfort of their home environment. In addition, web survey questionnaire programs (e.g., SurveyMonkey) provide easy-to-navigate Likert-type and semantic differential scales that allow participants to quickly click on a choice using a computer mouse (compared to a cumbersome phone survey or a lengthy paper-and-pencil survey). Such convenience features of online surveys appear to increase readiness to participate and may lower the compensation necessary to convince members of the target population to participate (Wilson et al. 2010).

Online surveys offer several other advantages in terms of the recruitment of participants. Researchers can rely on easy-to-create Internet advertisements and use online communities and mailing lists, which are less time-consuming to produce and less costly to distribute than posters, flyers, newspaper, TV, and radio advertisements. This can help extend the reach of an online survey. In addition, online surveys appear to be well suited to attracting greater diversity in a sample by encouraging recipients (who share characteristics of interest to the researcher) to forward the message to potentially suited and interested participants. For example, if a researcher advertises a survey within an online community for older adults, including a request to forward the message to other senior citizens may lead to additional older adults becoming aware of (and potentially participating in) the survey. Other web communities can be used to access large numbers of individuals based on sex, race, nationality, and other demographic variables of interest. Some online survey services (such as SurveyMonkey) will help researchers reach certain demographic groups via databases of people who have completed SurveyMonkey surveys in the past. Automatic emails can be sent to remind participants to complete a cross-sectional survey, or they can be set to remind participants in a longitudinal study to complete an online survey during designated time frames.

5 Resources for Creating/Managing Online Surveys

The particulars involved in designing, implementing, and managing an online survey are beyond the scope of this chapter. However, for scholars who are new to online survey methods, there are a number of web resources containing helpful information for conducting online studies, such as online PDF guides (Couper 2008), the online course hosted by the University of Leicester (http://www.geog.le.ac.uk/ORM/site/home.htm), and the Web Survey Methodology project (http://www.websm.org/). In addition, popular online survey services like SurveyMonkey and Qualtrics offer online tutorials as well as customer support via email or phone. While such online survey services can make things easier for a researcher who is new to online survey research, they can come at a steep price, such as high subscription rates and “add-on” fees for particular features. Additional issues regarding online survey services will be discussed in more detail later in this chapter.

6 Overseeing Web-Based Survey Data Collection and Analysis Issues

Once the online survey has been launched, it is important for researchers to be diligent in monitoring recruitment emails and postings to ensure a sufficient number of recruitment messages have been sent/posted to the target population members. When posting recruitment advertisements to online communities, it is common for community moderators to remove messages that have not been approved by the moderator or the community members. For researchers who are interested in surveying members of such groups, it is important to secure permission to post recruitment messages in advance of launching the survey. Many IRBs require evidence of permission from an online community moderator or leader in the form of an email or an attached letter to post recruitment messages for the study. In my own research (see Wright 2000, 2011; Wright et al. 2010b), I have found that providing community members with a link to a webpage that discusses the results of the survey once it is completed (in layperson’s terms) is a helpful way to gain access to online health-related communities.

Online survey research allows for communication between the researcher and participants via email if questions about particular items surface during the data collection phase. In addition, features such as the amount of time it takes a participant to complete the survey and the time of day when the survey was taken are typically included when downloading survey data files from services like SurveyMonkey or Qualtrics. This can help a researcher decide whether or not to include or exclude data from a participant who took an extremely short (e.g., 30 s) or long (e.g., 3 weeks) time to complete the survey. Moreover, such services also include the IP addresses of the respondents’ computers. SurveyMonkey and Qualtrics survey templates will recognize duplicate IP addresses and will not allow someone from the same IP address to submit more than one response to the online survey. As mentioned earlier, these services also include the ability to send automated reminder emails to potential participants, although it is important for the researcher to verify whether these emails have actually been sent (especially since systems can go offline due to unexpected power outages and maintenance issues).
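A first screening pass over an exported response file, mirroring the completion-time and duplicate-IP checks described above, can be sketched as follows. The field names ("ip", "duration_s") and the cutoff values are invented stand-ins for whatever a given platform's export actually contains:

```python
def screen_responses(rows, min_seconds=60, max_seconds=60 * 60 * 24):
    """Drop duplicate IP addresses (keeping the first submission) and
    flag completion times outside a plausible window. Each row mimics
    one record of an exported survey data file."""
    seen_ips = set()
    kept, flagged = [], []
    for row in rows:
        if row["ip"] in seen_ips:
            flagged.append((row, "duplicate IP"))
            continue
        seen_ips.add(row["ip"])
        if not (min_seconds <= row["duration_s"] <= max_seconds):
            flagged.append((row, "implausible completion time"))
            continue
        kept.append(row)
    return kept, flagged
```

Keeping the flagged rows, rather than silently deleting them, lets the researcher document exclusion decisions for the methods section and for the IRB.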

Although it may take less time to reach a sufficient sample size using online surveys, it is important to realize that many responses from online participants may be left blank (unless the researcher requires participants to complete every question). As a result, what looks to be an initial sample of 300 people based on a SurveyMonkey data overview report may contain large numbers of unusable responses. I typically over-sample so that I receive 20–30% additional responses from participants over my target sample size goal. Most data from online surveys need to be cleaned, recoded, or transformed in some way. Most common statistical software programs (e.g., SPSS, SAS, and so on) make it relatively easy to perform these tasks. Data from surveys that use a large number of filter questions and that incorporate skip logic may be more cumbersome to clean and organize once they have been collected.
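The over-sampling cushion described above amounts to simple arithmetic. A minimal helper, assuming a flat 20–30% cushion on top of the target sample size:

```python
import math

def recruitment_target(desired_n, cushion=0.25):
    """Inflate the desired sample size by a cushion (20-30% in the
    practice described above) to offset blank or otherwise unusable
    responses, rounding up to a whole participant."""
    if cushion < 0:
        raise ValueError("cushion must be non-negative")
    return math.ceil(desired_n * (1 + cushion))
```

So a study targeting 300 usable responses with a 25% cushion would aim to recruit 375 participants; a researcher expecting a higher unusable rate would simply raise the cushion.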

7 Pros and Cons of Various Web-Based Survey Platforms and Services

New web-based survey platforms and services appear online each month, so it is difficult to provide a comprehensive list of all of the choices researchers have in terms of finding a web-based survey platform or service that will be most useful for their research projects. Certainly, SurveyMonkey and Qualtrics appear to be the two most popular platforms/services for creating and distributing online surveys in the USA. However, there are many other platforms/services available online that range from relatively low-cost to expensive, “full-service” options (in which the company helps a researcher design the online survey, recruit participants, analyze data, and so on). In this section, I will discuss several pros and cons of various types of platforms and services in general as opposed to critiquing specific platforms/services.

Web-based survey platforms and services, such as SurveyMonkey and Qualtrics, provide an easy way for researchers to engage in online survey research. Standard subscription plans for these companies offer a variety of templates for different types of online surveys, tutorials, customer support, a wide range of online survey measures (e.g., short answer, Likert-type scales, semantic differential scales), and the ability to conveniently track and download responses to a data analysis program (like SPSS or SAS). However, the standard plans typically have a limit on the number of responses you can collect, and they do not include additional services, such as help with sampling, recruitment advertisement development, consultations, and data analysis. SurveyMonkey charges an extra fee (beyond the basic subscription plan) to download data from an online survey into SPSS or another data analysis program format. For additional fees, SurveyMonkey offers researchers access to a wide range of online populations (e.g., databases of people who are willing to complete online surveys) and help recruiting these individuals. SurveyMonkey offers services that will allow a researcher to narrow the range of online participants he or she would like to access for a particular online survey based on a multitude of demographic characteristics (e.g., age, sex, occupation, region of the country, and so on) and various other segmentation variables that are collected by SurveyMonkey for their participant databases. Of course, these types of potential participant databases suffer from problems such as selection bias and relevance issues (e.g., certain surveys may not be of concern or interest to people in the databases, but they may be willing to take the survey if they are being compensated by SurveyMonkey).

Individual subscriptions to SurveyMonkey and Qualtrics (as well as similar companies) can be expensive, but site licenses for universities and units within them tend to be reasonable in terms of cost. Researchers who wish to download qualitative data directly into programs like NVivo will not find this type of option when using SurveyMonkey or Qualtrics. However, there are ways to cut and paste qualitative data from SPSS or Excel into this type of qualitative data analysis program. Moreover, these companies continue to add new features for consumers on a regular basis, so they will likely become more flexible when it comes to the types of services and data management options that will be available in the future.

8 Conclusion and Future Directions

The purpose of this chapter was to introduce health social scientists to the pros and cons of conducting online survey research, including sampling, measurement, and response rate issues. Moreover, it briefly examined some resources for getting started with online survey research, best practices in terms of overseeing and managing online surveys, and some advantages of using SurveyMonkey and Qualtrics as web-based survey platforms/services. This section briefly discusses the implications of this research method for health social scientists.

For health social science researchers, there are clearly several benefits of conducting online surveys of various health care system stakeholders, which make this approach an attractive alternative to traditional survey methods. These include the relative ease of survey design and implementation (especially when using platforms/services such as SurveyMonkey or Qualtrics) and the potential to conduct relatively large-scale surveys while eliminating the costs of stationery, postage, and administration. Most online survey creation tools, including SurveyMonkey and Qualtrics, require no programming skills, and the cost of sending multiple e-mail invitations and reminders is negligible. More sophisticated features of online surveys allow validation checks as data are collected or randomization of respondents to different versions of the questionnaire (for experimental designs). However, it is important to remember that the cost of online survey design and implementation may increase as the complexity/sophistication of the online survey increases (especially when using SurveyMonkey or Qualtrics).
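The randomization that these platforms handle internally amounts to a few lines of logic. A sketch with illustrative version labels and respondent IDs (not drawn from any particular platform):

```python
import random

VERSIONS = ["A", "B"]  # illustrative questionnaire version labels

rng = random.Random(42)  # fixed seed so the assignment log is reproducible

# Assign each incoming respondent to a questionnaire version at random.
assignments = {rid: rng.choice(VERSIONS) for rid in ["r1", "r2", "r3", "r4"]}
print(assignments)
```

Recording the seed and the assignment log matters for experimental designs, since the analysis must later recover which respondent saw which version.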

Researchers should always be concerned with sample representativeness and other factors that may undermine the external validity of the data obtained. As with traditional survey methods, studies that use probability samples will have better external validity than those relying on nonprobability samples. The websites of health care organizations, such as hospitals and physician groups, often include detailed lists of providers, staff, and other key members of the organization that can serve as a sampling frame for a probability sample. Patients are harder to reach through health care organizations due to patient privacy laws (e.g., HIPAA) and organizational practices designed to protect patient confidentiality. However, online support communities and health information websites (such as WebMD) are helpful portals for gaining access to people who are living with a variety of health problems.
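Once such a directory is in hand as a sampling frame, drawing a simple random sample from it is straightforward. A sketch with a hypothetical provider list:

```python
import random

# Hypothetical sampling frame, e.g., providers listed on a hospital website.
sampling_frame = [f"provider_{i:03d}" for i in range(1, 201)]

rng = random.Random(2017)  # fixed seed makes the draw reproducible

# Simple random sample without replacement.
sample = rng.sample(sampling_frame, k=50)

print(f"frame size = {len(sampling_frame)}, sample size = {len(sample)}")
```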

Online surveys have also been found to have issues with selection bias and the inability to reach individuals who may not have quality Internet access (e.g., high-speed Internet) or the latest technology (e.g., iPhones, tablets, and so on). Unfortunately, this group may include many older adults and people who face a number of health disparities due to socioeconomic factors. However, one promising trend is the gradual adoption of computers and mobile devices in economically disadvantaged regions of the world. Even when people lack access to the most up-to-date technology, they may still be able to complete online surveys with the technology they have (especially if the questionnaire uses a simpler design). Other problems, such as low response rates, can often be remedied by sending multiple e-mail reminders, reducing the complexity of the online survey instrument, and finding key opinion leaders within the online population (or segment) of interest who may be willing to promote the survey on behalf of the researcher. Such collaborations can enhance a researcher's credibility with a specific segment of the population and ease the burden of participant recruitment.
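Targeting reminders only at non-respondents, rather than the full invitation list, is a simple set operation. A sketch with hypothetical e-mail addresses:

```python
# Hypothetical invitation list and completed-response log.
invited = {"a@example.org", "b@example.org", "c@example.org", "d@example.org"}
responded = {"b@example.org", "d@example.org"}

# Current response rate and the reminder list (non-respondents only).
response_rate = len(responded) / len(invited)
needs_reminder = sorted(invited - responded)

print(f"response rate = {response_rate:.0%}")
print("send reminders to:", needs_reminder)
```

Restricting reminders this way avoids annoying participants who have already completed the survey, which can itself depress goodwill and future response rates.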

Despite these problems, online surveys allow social scientists to access unique populations of individuals facing health concerns (e.g., people who seek online support for a rare disease), people who may be difficult to survey in other contexts, and a variety of health care professionals (since most physicians and other providers can be reached online). Online surveys also allow social scientists to study populations at a quasi-global level (although the Global South tends to have lower Internet access than other regions of the world), and this may open the door to international research collaboration on a variety of health issues.

In short, online surveys offer many advantages to health social science researchers. However, the technical and methodological implications of using this approach should not be underestimated. Additional research is needed to better understand the pros and cons of online surveys and to find designs/approaches that improve their external validity, including approaches that increase the representativeness of invited samples and limit response bias.

References

  1. Bosnjak M, Tuten TL. Classifying response behaviors in web-based surveys. J Comput Mediated Commun. 2001;6. Retrieved from http://www.ascusc.org/jcmc/vol6/issue3/boznjak.html.
  2. Cook C, Heath F, Thompson R. A meta-analysis of response rates in web or internet based surveys. Educ Psychol Meas. 2000;60:821–36.
  3. Couper MP. Designing effective web surveys. New York: Cambridge University Press; 2008.
  4. Curtis BL. Social networking and online recruiting for HIV research: ethical challenges. J Empir Res Hum Res Ethics. 2014;9(1):58–70.
  5. Denissen JJ, Neumann L, van Zalk M. How the internet is changing the implementation of traditional research methods, people’s daily lives, and the way in which developmental scientists conduct research. Int J Behav Dev. 2010;34(6):564–75.
  6. Dillman DA. Mail and internet surveys: the tailored design method. New York: Wiley; 2000.
  7. Evans JR, Mathur A. The value of online surveys. Internet Res. 2005;15(2):195–219.
  8. Eysenbach G, Wyatt J. Using the internet for surveys and health research. J Med Internet Res. 2002;4(2):e13.
  9. Fan W, Yan Z. Factors affecting response rates of the websurvey: a systematic review. Comput Hum Behav. 2010;26:132–9.
  10. Galesic M, Bosnjak M. Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opin Q. 2009;73(2):349–60.
  11. Greenlaw C, Brown-Welty S. A comparison of web-based and paper-based survey methods: testing assumptions of survey mode and response cost. Eval Rev. 2009;33(5):464–80.
  12. Johnson JA. Ascertaining the validity of individual protocols from web-based personality inventories. J Res Pers. 2005;39(1):103–29.
  13. Kaplowitz MD, Hadlock TD, Levine R. A comparison of web and mail survey response rates. Public Opin Q. 2004;68(1):94–101.
  14. Konstan JA, Simon Rosser BR, Ross MW, Stanton J, Edwards WM. The story of subject naught: a cautionary but optimistic tale of internet survey research. J Comput Mediated Commun. 2005;10(2):Article 11. http://jcmc.indiana.edu/vol10/issue2/konstan.html.
  15. Kramer J, Rubin A, Coster W, Helmuth E, Hermos J, Rosenbloom D, … Brief D. Strategies to address participant misrepresentation for eligibility in web-based research. Int J Methods Psychiatr Res. 2014;23(1):120–29.
  16. Lieberman DZ. Evaluation of the stability and validity of participant samples recruited over the internet. Cyberpsychol Behav Soc Netw. 2008;11(6):743–5.
  17. Manfreda KL, Bosnjak M, Berzelak J, Haas I, Vehovar V, Berzelak N. Web surveys versus other survey modes: a meta-analysis comparing response rates. J Mark Res Soc. 2008;50(1):79.
  18. Murray E, Khadjesari Z, White IR, Kalaitzaki E, Godfrey C, McCambridge J, Thompson SG, Wallace P. Methodological challenges in online trials. J Med Internet Res. 2009;11(2):e9. https://doi.org/10.2196/jmir.1052.
  19. Owen DJ, Fang MLE. Information-seeking behavior in complementary and alternative medicine (CAM): an online survey of faculty at a health sciences campus. J Med Libr Assoc. 2003;91(3):311.
  20. Payne J, Barnfather N. Online data collection in developing nations: an investigation into sample bias in a sample of South African university students. Soc Sci Comput Rev. 2012;30(3):389–97.
  21. Pedersen ER, Helmuth ED, Marshall GN, Schell TL, PunKay M, Kurz J. Using Facebook to recruit young adult veterans: online mental health research. JMIR Res Protocol. 2015;4(2):e63.
  22. Porter SR, Whitcomb ME. The impact of contact type on web survey response rates. Public Opin Q. 2003;67(4):579–88.
  23. Pullmann H, Allik J, Realo A. Global self-esteem across the life span: a cross-sectional comparison between representative and self-selected internet samples. Exp Aging Res. 2009;35:20–44.
  24. Riper H, Spek V, Boon B, Conijn B, Kramer J, Martin-Abello K, Smit F. Effectiveness of e-self-help interventions for curbing adult problem drinking: a meta-analysis. J Med Internet Res. 2011;13(2):e24. https://doi.org/10.2196/jmir.1691.
  25. Shaw LH, Gant LM. In defense of the internet: the relationship between internet communication and depression, loneliness, self-esteem, and perceived social support. Cyberpsychol Behav. 2002;5(2):157–71.
  26. Shih TH, Fan X. Comparing response rates from web and mail surveys: a meta-analysis. Field Methods. 2008;20(3):249–71.
  27. Siegel MB, Tanwar KL, Wood KS. Electronic cigarettes as a smoking-cessation tool: results from an online survey. Am J Prev Med. 2011;40(4):472–5.
  28. Simon Rosser BR, Gurak L, Horvath KJ, Michael Oakes J, Konstan J, Danilenko GP. The challenges of ensuring participant consent in internet-based sex studies: a case study of the men’s INTernet sex (MINTS-I and II) studies. J Comput-Mediat Commun. 2009;14(3):602–26.
  29. Valkenburg PM, Peter J. Social consequences of the internet for adolescents. Curr Dir Psychol Sci. 2009;18:1–5.
  30. van Ingen EJ, Wright KB. Predictors of mobilizing online coping versus offline coping resources after negative life events. Comput Hum Behav. 2016;59:431–9.
  31. Wilson PM, Petticrew M, Calnan M, Nazareth I. Effects of a financial incentive on health researchers’ response to an online survey: a randomized controlled trial. J Med Internet Res. 2010;12(2):e13.
  32. Wright KB. Perceptions of on-line support providers: an examination of perceived homophily, source credibility, communication and social support within on-line support groups. Commun Q. 2000;48:44–59.
  33. Wright KB. Researching internet-based populations: advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services. J Comput Mediat Commun. 2005;10:Article 11. Retrieved from http://jcmc.indiana.edu/vol10/issue3/wright.html.
  34. Wright KB. A communication competence approach to healthcare worker conflict, job stress, job burnout, and job satisfaction. J Healthc Qual. 2011;33:7–14.
  35. Wright KB, Miller CH. A measure of weak tie/strong tie support network preference. Commun Monogr. 2010;77:502–20.
  36. Wright KB, Banas JA, Bessarabova E, Bernard DR. A communication competence approach to examining health care social support, stress, and job burnout. Health Commun. 2010a;25(4):375–82.
  37. Wright KB, Rains S, Banas J. Weak tie support network preference and perceived life stress among participants in health-related, computer-mediated support groups. J Comput-Mediat Commun. 2010b;15:606–24.
  38. Wright KB, Rains S. Weak tie support preference and preferred coping style as predictors of perceived credibility within health-related computer-mediated support groups. Health Commun. 2013;29:281–7.

Copyright information

© Springer Nature Singapore Pte Ltd. 2017

Authors and Affiliations

  1. Department of Communication, George Mason University, Fairfax, USA
