Research in Higher Education, Volume 56, Issue 2, pp 166–177

Living with Smartphones: Does Completion Device Affect Survey Responses?

Abstract

With the growing reliance on tablets and smartphones for internet access, understanding the effects of completion device on online survey responses becomes increasingly important. This study uses data from the Strategic National Arts Alumni Project, a multi-institution online alumni survey designed to gather knowledge about arts education, to explore how the type of device a respondent uses (PC, Mac, tablet, or smartphone) affects his or her responses. Differences by device type in the characteristics of survey respondents, survey completion, time spent responding, willingness to answer complex and open-ended questions, and length of open-ended responses are discussed.

Keywords

Smartphones · Completion device · Survey response

Introduction

Surveys have become ubiquitous in multiple arenas. People are inundated with market research and customer service surveys on a weekly, if not daily, basis. Institutional research is no stranger to this survey enchantment. In higher education, surveys are used frequently to collect information that demonstrates effectiveness and identifies areas for improvement (Kuh and Ikenberry 2009). Student surveys are perhaps most closely associated with assessment in higher education, as students are asked to evaluate the quality of their instruction, rate their satisfaction with a variety of institutional aspects, and report their social and academic behaviors (Kuh and Ewell 2010). However, surveys are also an important means of gaining information from other stakeholders, such as faculty, staff, and alumni (Cabrera et al. 2005; Kuh and Ewell 2010). Surveying a variety of institutional affiliates, in addition to students, can provide multiple perspectives for institutions attempting to gather data for purposes such as curriculum improvement, internal evaluation, accreditation, outcomes assessment, and strategic planning.

Because survey data can hold the key to such critical information, the design of surveys should be considered carefully. A decade ago, as surveys transitioned from paper to online formats, much research was done to explore how the new mode of delivery affected responses (Dillman 2007). When web-based surveys were first introduced, there were notable concerns with issues of sampling bias and coverage because of falling response rates (Atrostic et al. 2001; Baruch 1999; Couper 2000; Porter 2004), but as internet access has grown exponentially, more pressing concerns lie with the impact of mode on the responses themselves. While the online mode does not seem to have an impact on the content of responses (Denscombe 2006; Miller et al. 2002), there are design elements, including aspects of the interface, that can potentially influence respondent behavior (Tourangeau et al. 2003). Incorporating programming to make surveys dynamic instead of static (such as skip logic to automatically move past certain items or Java-based programming to remove ineligible response options) can greatly assist respondents in their cognitive processing if the survey has a non-linear format in which not all items are applicable to all respondents (Norman et al. 2001; Tourangeau 2004). Other design issues, including how page breaks and scrolling contribute to respondent breakoff (Couper et al. 2001; Peytchev 2009; Peytchev et al. 2006; Saxon et al. 2003), the effectiveness of progress bars on completion rates (Couper et al. 2001; Villar et al. 2013; Yan et al. 2010), browser compatibility and response placement (Kaye and Johnson 1999), and even color contrast and placement of emphasis (Tourangeau 2004) have all been explored in the context of online surveys.

Now, as internet access shifts away from laptops and desktops to smartphones and tablets, survey researchers must again explore how completion device influences response patterns and data quality. While smartphones and tablets offer the convenience of internet access virtually anywhere, their touch-screen operation, truncated viewing area, and smaller keyboard layout make them more conducive to certain activities (such as checking email and watching funny cat videos on YouTube) but less conducive to others (such as selecting radio buttons from a large item matrix or typing extensive and detailed responses to open-ended prompts). Recent studies (Buskirk and Andrus 2012; Peytchev and Hill 2010) suggest that the additional scrolling and typing difficulties associated with mobile devices place extra burdens on survey takers and negatively impact their survey-taking experience. Other research comparing survey patterns among PC (laptop and desktop), tablet, and smartphone users found that young people were more likely to use smartphones for survey completion, while young and employed people were more likely to use tablets, suggesting that if the goal of the survey is to reach younger populations, the survey programming should be smartphone-enabled and user testing should be done with tablets and smartphones (de Bruijne and Wijnant 2014). Furthermore, Mavletova (2013) found that mobile phone respondents had lower survey completion rates and shorter open-ended answers, and that mobile phone users were significantly younger than those using a PC. However, this same study did not find differences by gender or level of education in the type of device that respondents used to complete the survey.

Given this information, it is imperative that institutional researchers and others interested in higher education survey results understand the impact of device on survey respondents. While most have accepted web-based survey delivery as a necessity, the field must now advance one step further and build knowledge of how web-based surveys translate to smartphones and tablets. Therefore, this study explores the characteristics, including age, income, gender, and current employment status, of those using certain types of devices. Additionally, this study examines relationships between type of device and a variety of other survey-taking characteristics, including breaking off before completion, backing up in the survey, time duration, item nonresponse for complicated question layouts, open-ended text box completion, and the length of open-ended responses.

Methods

Participants

This study used data from the 2012 and 2013 administrations of the Strategic National Arts Alumni Project (SNAAP). SNAAP is an online alumni survey designed to collect data annually about the educational experiences and careers of alumni from arts high schools, art and design colleges, conservatories, and arts schools, departments, and programs within comprehensive universities. SNAAP was developed in 2008 and underwent 3 years of pilot testing. SNAAP data cover a variety of vital topics of interest to institutions, including alumni satisfaction with their institutional experiences, preparation for further education, career paths, relevance of arts training in careers, resources and supports for career success, effects of student loan debt, and impact of and satisfaction with income. The participants were 58,768 alumni from 109 different arts high schools, undergraduate and graduate colleges, and arts programs within larger universities. All alumni from each program or institution were invited to participate. For those institutions participating in both years, alumni who had already completed the survey in 2012 were not contacted again in 2013. Of those who participated, 1,389 were from high schools (2 %), 44,644 from undergraduate programs (76 %), and 12,735 from graduate programs (22 %). Of these alumni, 41 % were male, 59 % female, and 0.2 % transgender. The majority of alumni (85 %) reported their ethnicity as Caucasian. The average institutional response rate was 18 %.

Measures

The measures were questions included in a larger survey that was administered to participants online. Participants were emailed an invitation including a link to the survey. Participants could log in multiple times, so they were not constrained to complete all questions during a single sitting. Participants were not required to answer any of the items; therefore, they could advance through the survey even if they did not respond to every item.

For this study, we utilized responses to demographic questions on the survey itself, as well as several metadata measures. These metadata measures provide information about various survey-taking behaviors. The grouping variable of interest was completion device. Of those responding, the majority completed the survey on a desktop or laptop computer (42 % used a PC and 43 % a Mac), but a nontrivial number of alumni answered the survey using a smartphone (9 %) or tablet (5 %). Virtually all respondent devices could be tracked through the software, but a trivial number fell into the “other” category (0.4 %), meaning that these respondents’ devices could not be traced. Additionally, this study used dichotomous metadata measures for breakoff status (whether or not respondents reached the end of the survey and hit the “submit” button), backup status (whether or not respondents used the browser to go back to previously completed pages), and item nonresponse for complex matrix layout questions, or large tables with many response options and items (whether or not respondents left items missing; see Fig. 1 for an example). There was also a time duration variable, which recorded how long (in minutes) respondents spent with the survey open in their browser. Finally, there were metadata measures concerning the open-ended questions, of which there are 10 on the SNAAP instrument, placed throughout the survey: dichotomous variables for whether or not respondents wrote any response in the open text box, as well as interval variables for the length of the responses.
Fig. 1  Example of “complex matrix” layout question set
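
The article does not describe how the survey software recorded these metadata, so the following is a minimal sketch, under assumed conditions, of how the completion-device group and the dichotomous breakoff and backup flags might be derived from raw response logs. All column names here (user_agent, submitted, pages_visited) are hypothetical and not taken from the SNAAP instrument.

# Illustrative sketch only; not the authors' implementation.
# Assumes a pandas DataFrame of response logs with hypothetical columns:
#   user_agent     raw browser user-agent string
#   submitted      True if the respondent hit the final "submit" button
#   pages_visited  ordered list of page indices the respondent loaded
import pandas as pd


def classify_device(user_agent: str) -> str:
    """Coarsely map a user-agent string onto the four study groups."""
    ua = (user_agent or "").lower()
    if "ipad" in ua or "tablet" in ua:
        return "Tablet"
    if "iphone" in ua or ("android" in ua and "mobile" in ua):
        return "Smartphone"
    if "macintosh" in ua or "mac os" in ua:
        return "Mac"
    if "windows" in ua or "linux" in ua:
        return "PC"
    return "Other"  # device could not be traced


def add_metadata_flags(logs: pd.DataFrame) -> pd.DataFrame:
    """Attach the device group and dichotomous metadata measures."""
    out = logs.copy()
    out["device"] = out["user_agent"].map(classify_device)
    # Breakoff: respondent started but never reached the end of the survey
    out["breakoff"] = ~out["submitted"].astype(bool)
    # Backup: respondent returned to a previously completed page at least once
    out["backed_up"] = out["pages_visited"].map(
        lambda pages: any(later < earlier for earlier, later in zip(pages, pages[1:]))
    )
    return out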

There were also demographic variables of interest for this study, collected in the final section of the survey instrument. The continuous variable of age was collected from a write-in number box, while income was converted from an ordinal variable by using the midpoint of each of 12 different response ranges. The categorical variable of gender had the three response options of male, female, and transgender. Finally, the categorical variable of current employment status had seven different response options: full-time (35 h or more per week), part-time only, unemployed and looking for work, in school full time, caring for family full time, retired, and other.
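
As an illustration of the income recoding described above, the snippet below maps each ordinal income response to the midpoint of its range. The actual 12 SNAAP brackets and their dollar values are not reproduced in the article, so the bracket labels and midpoints shown here are placeholders.

# Placeholder brackets; the real SNAAP instrument uses 12 income ranges.
from typing import Optional

INCOME_MIDPOINTS = {
    "Less than $10,000": 5000,
    "$10,000 to $19,999": 15000,
    "$20,000 to $29,999": 25000,
    # ... remaining brackets would follow the same pattern ...
    "$150,000 or more": 175000,  # open-ended top bracket needs an assumed midpoint
}


def income_midpoint(response: Optional[str]) -> Optional[int]:
    """Return the midpoint of the selected income range, or None if missing."""
    if response is None:
        return None
    return INCOME_MIDPOINTS.get(response)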

Analyses

To test the statistical significance of the findings for dichotomous or categorical variables, chi-square analyses were run between the four device groups. These analyses were run for gender, current employment status, breakoff status, backup status, item nonresponse status (for two sets of matrix layouts), and open-ended response status (for 10 open-ended questions). For continuous items, ANOVAs and Mann–Whitney non-parametric tests were used to compare means and medians between groups (Field 2009). To adjust for multiple testing within the same data pool, Bonferroni’s correction was used and alpha was set to .001. These comparisons were done for age, income, duration, and length of open-ended responses. Finally, since current employment status and age are related, a simple multinomial logistic regression was run with device as the dependent variable, current employment status as the factor of interest, and age as the covariate.
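
The authors do not publish their analysis code, so the following is a minimal sketch of how the chi-square tests, ANOVA, Mann–Whitney comparisons, and multinomial logistic regression described above could be run in Python. The respondent-level data frame and its column names are assumptions for illustration only (toy data are generated in place of the SNAAP file).

# Minimal analysis sketch (not the authors' code).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Toy respondent-level data standing in for the SNAAP file; column names are hypothetical.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "device": rng.choice(["PC", "Mac", "Smartphone", "Tablet"], n),
    "breakoff": rng.choice([True, False], n),
    "age": rng.normal(44, 15, n),
    "duration_min": rng.gamma(2.0, 15.0, n),
    "employment": rng.choice(["Full-time", "Part-time", "Retired"], n),
})

ALPHA = 0.001  # Bonferroni-adjusted significance level used in the study

# Chi-square test of a dichotomous outcome (e.g., breakoff) across device groups
table = pd.crosstab(df["device"], df["breakoff"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"Breakoff by device: chi2 = {chi2:.3f}, p = {p:.4f}, significant = {p < ALPHA}")

# ANOVA on means of a continuous outcome (e.g., age) across device groups
groups = [g["age"].dropna() for _, g in df.groupby("device")]
f_stat, p_anova = stats.f_oneway(*groups)

# Mann-Whitney U comparison of distributions/medians for a pair of groups
u_stat, p_mw = stats.mannwhitneyu(
    df.loc[df["device"] == "Smartphone", "duration_min"].dropna(),
    df.loc[df["device"] == "PC", "duration_min"].dropna(),
    alternative="two-sided",
)

# Multinomial logistic regression: device as the dependent variable,
# employment status as the factor of interest, age as the covariate
X = pd.get_dummies(df["employment"], drop_first=True, dtype=float)
X["age"] = df["age"]
X = sm.add_constant(X)
y = df["device"].astype("category").cat.codes
mnlogit = sm.MNLogit(y, X, missing="drop").fit(disp=False)
print(mnlogit.summary())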

Results

When looking at the demographics and characteristics of those using these devices, many statistically significant and substantive differences are observed. Not surprisingly, alumni using a smartphone for survey completion were more likely to be younger (on average by about 10 years) than those using other devices (Table 1). Also, those using tablets had, on average, higher incomes than other alumni (Table 1). Females were more likely (χ2 = 141.045, p < .001) than their male counterparts to use smartphones or tablets (Table 2). Finally, while the differences in employment by the type of device used to complete the survey were statistically significant (χ2 = 657.952, p < .001), Table 2 shows that for the most part these differences were not substantively drastic, with a few exceptions. Those using PCs and tablets were much more likely than their Mac and smartphone counterparts to be retired (10.1 % for tablets and 9.3 % for PCs versus 6.0 % for Macs and 2.9 % for smartphones). Those using smartphones were also slightly more likely than their peers to be in school full time. However, after controlling for age, the results from the multinomial logistic regression revealed that neither of these employment status effects remained statistically significant (all p values >.10).
Table 1  Average age and income of respondents by type of device on which the survey was taken

Device       Statistic   Age       Income
PC           Mean        46.59     55,172.45
             N           21,005    17,976
             SD          15.83     44,474.32
Mac          Mean        43.07     52,632.77
             N           21,371    18,397
             SD          15.20     44,100.82
Smartphone   Mean        37.32     47,452.60
             N           3,063     2,848
             SD          12.42     40,824.37
Tablet       Mean        46.45     60,997.26
             N           2,549     2,186
             SD          14.43     48,138.70
Total        Mean        44.41     53,791.37
             N           48,188    41,580
             SD          15.49     44,343.01
F                        326.005   37.690
Sig.                     0.000     0.000

Total will not match the sum of the four device categories due to removal of the “other” category

Table 2  Gender, employment status, completion status, backup status, and response to complicated questions by type of device on which the survey was taken. Cells show count (% within device group)

Measure                              PC              Mac             Smartphone     Tablet         Total
Gender
  Male                               8,983 (42.2)    8,845 (40.9)    1,025 (32.4)   958 (36.8)     19,811 (40.7)
  Female                             12,263 (57.7)   12,765 (59.0)   2,122 (67.2)   1,639 (62.9)   28,789 (59.1)
  Transgender                        24 (0.1)        35 (0.2)        13 (0.4)       7 (0.3)        79 (0.2)
Employment status
  Full-time                          13,960 (65.5)   13,329 (61.5)   2,031 (64.9)   1,577 (61.0)   30,897 (63.5)
  Part-time only                     2,680 (12.6)    3,441 (15.9)    480 (15.3)     363 (14.0)     6,964 (14.3)
  Unemployed and looking             621 (2.9)       738 (3.4)       128 (4.1)      76 (2.9)       1,563 (3.2)
  In school full time                479 (2.2)       728 (3.4)       134 (4.3)      64 (2.5)       1,405 (2.9)
  Caring for family full time        260 (1.2)       277 (1.3)       83 (2.7)       78 (3.0)       698 (1.4)
  Retired                            1,984 (9.3)     1,296 (6.0)     90 (2.9)       260 (10.1)     3,630 (7.5)
  Other                              1,321 (6.2)     1,858 (8.6)     182 (5.8)      169 (6.5)      3,530 (7.3)
Completion status
  Complete                           21,372 (87.0)   21,707 (85.4)   3,141 (57.6)   2,593 (83.8)   48,813 (83.4)
  Partial complete                   3,183 (12.9)    3,719 (14.6)    2,310 (42.4)   502 (16.2)     9,714 (16.6)
Backup status
  Respondent did not back up         23,440 (95.5)   24,281 (95.5)   5,233 (96.0)   2,943 (95.1)   55,897 (95.5)
  Respondent backed up               1,115 (4.5)     1,145 (4.5)     218 (4.0)      152 (4.9)      2,630 (4.5)
Response to complicated question 1
  Did not respond to all items       2,180 (10.2)    2,152 (9.9)     166 (5.3)      240 (9.3)      4,738 (9.7)
  Did respond to all items           19,192 (89.8)   19,554 (90.1)   2,975 (94.7)   2,353 (90.7)   44,074 (90.3)
Response to complicated question 2
  Did not respond to all items       2,149 (10.1)    1,983 (9.2)     165 (5.3)      219 (8.5)      4,516 (9.3)
  Did respond to all items           19,189 (89.9)   19,684 (90.8)   2,967 (94.7)   2,370 (91.5)   44,210 (90.7)

Perhaps more importantly, the type of device that respondents use also seems to affect how they respond to the survey. As shown in Table 2, smartphone users were far more likely (χ2 = 2923.123, p < .001) not to complete the survey once they had started it (a 42 % drop-off rate compared to 13 % for PCs, 15 % for Macs, and 16 % for tablets). Among those who did complete the survey, smartphone users spent a statistically significantly greater amount of time on the survey than their counterparts (Table 3). There were no statistically significant differences (χ2 = 4.494, p > .25) by completion device between those who backed up in the survey and those who did not (Table 2). Surprisingly, as shown in Table 2, respondents using devices other than smartphones were nearly twice as likely not to respond completely to questions with complex layouts (χ2 = 77.680, p < .001 for complicated question 1; χ2 = 78.319, p < .001 for complicated question 2).
Table 3  Median time spent on survey (in minutes) by type of device on which the survey was taken

Device       Median   N        SD
PC           27.62    21,372   244.61
Mac          27.60    21,706   297.08
Smartphone   31.82    3,141    99.38
Tablet       28.48    2,593    119.46
Total        27.98    49,016   258.10
F                              5.907
Sig.                           0.000

Total will not match the sum of the four device categories due to removal of the “other” category

The final area that we explored was how device affected responses to open-ended questions (those with open text boxes where alumni were allowed to elaborate on their answers or respond to questions that did not have pre-determined response options). For all 10 questions, the differences between device types in whether respondents wrote something in the open-ended text boxes were statistically significant (χ2 ranging from 22.687 to 306.556, p < .001). However, the substantive differences were not so clear cut. There were a few exceptions (such as questions 2, 6, 7, and 9), but for most of the questions the differences were only a few percentage points (Table 4). While smartphone and tablet users were only slightly less likely to answer open-ended questions, their responses to those questions were much shorter. For many of the questions, the median length of responses for tablet and smartphone users was nearly half that of alumni using traditional computers (Table 5).
Table 4  Whether or not a respondent wrote in the 10 open-ended questions by type of device on which the survey was taken. Cells show count (% within device group)

Question                     PC              Mac             Smartphone     Tablet         Total
Question 1
  Did NOT write something    4,765 (34.5)    5,807 (42.4)    744 (37.7)     632 (36.5)     11,948 (38.3)
  Did write something        9,039 (65.5)    7,898 (57.6)    1,230 (62.3)   1,098 (63.5)   19,265 (61.7)
Question 2
  Did NOT write something    7,228 (33.8)    6,928 (31.9)    1,201 (38.2)   1,015 (39.1)   16,372 (33.5)
  Did write something        14,144 (66.2)   14,778 (68.1)   1,940 (61.8)   1,578 (60.9)   32,440 (66.5)
Question 3
  Did NOT write something    16,724 (78.3)   16,899 (77.9)   2,574 (81.9)   2,097 (80.9)   38,294 (78.5)
  Did write something        4,648 (21.7)    4,807 (22.1)    567 (18.1)     496 (19.1)     10,518 (21.5)
Question 4
  Did NOT write something    5,788 (62.8)    3,983 (60.3)    943 (70.5)     711 (65.6)     11,425 (62.6)
  Did write something        3,428 (37.2)    2,619 (39.7)    394 (29.5)     373 (34.4)     6,814 (37.4)
Question 5
  Did NOT write something    19,446 (91.0)   19,646 (90.5)   2,906 (92.5)   2,400 (92.6)   44,398 (91.0)
  Did write something        1,926 (9.0)     2,060 (9.5)     235 (7.5)      193 (7.4)      4,414 (9.0)
Question 6
  Did NOT write something    6,286 (32.0)    6,777 (33.4)    1,383 (47.7)   909 (38.8)     15,355 (34.0)
  Did write something        13,382 (68.0)   13,507 (66.6)   1,516 (52.3)   1,433 (61.2)   29,838 (66.0)
Question 7
  Did NOT write something    2,560 (13.0)    2,783 (13.7)    591 (20.4)     370 (15.8)     6,304 (13.9)
  Did write something        17,108 (87.0)   17,501 (86.3)   2,308 (79.6)   1,972 (84.2)   38,889 (86.1)
Question 8
  Did NOT write something    13,303 (62.2)   13,983 (64.4)   2,348 (74.8)   1,871 (72.2)   31,505 (64.5)
  Did write something        8,069 (37.8)    7,723 (35.6)    793 (25.2)     722 (27.8)     17,307 (35.5)
Question 9
  Did NOT write something    16,989 (79.5)   17,496 (80.6)   2,752 (87.6)   2,146 (82.8)   39,383 (80.7)
  Did write something        4,383 (20.5)    4,210 (19.4)    389 (12.4)     447 (17.2)     9,429 (19.3)
Question 10
  Did NOT write something    19,365 (90.6)   19,643 (90.5)   2,959 (94.2)   2,413 (93.1)   44,380 (90.9)
  Did write something        2,007 (9.4)     2,063 (9.5)     182 (5.8)      180 (6.9)      4,432 (9.1)

Group with the highest percentage of having a written response is bolded for each question in the original table

Table 5  Median length of responses to open-ended questions by type of device on which the survey was taken

Device       Statistic  Q 1      Q 2      Q 3      Q 4      Q 5      Q 6      Q 7      Q 8      Q 9      Q 10
PC           Median     35       245      92       109      80       150      119      126      278      119
             N          9,039    14,144   4,648    3,428    1,926    13,382   17,108   8,069    4,383    2,007
             SD         50.08    426.84   212.87   213.18   150.07   194.52   202.78   164.34   363.87   286.35
Mac          Median     33       264      109      102      84       147      118      127      283      125
             N          7,898    14,778   4,807    2,619    2,060    13,507   17,501   7,723    4,210    2,063
             SD         48.45    458.49   210.85   188.51   180.06   201.17   206.43   165.08   399.88   266.77
Smartphone   Median     29       179      73       73       46       103      78       92       185      67
             N          1,230    1,940    567      394      235      1,516    2,308    793      389      182
             SD         28.70    305.00   135.84   111.15   83.63    124.94   117.37   120.12   337.23   133.72
Tablet       Median     30       184      75       83       52       122      91       113      228      102
             N          1,098    1,578    496      373      193      1,433    1,972    722      447      180
             SD         51.40    302.53   148.12   189.27   138.32   161.72   133.83   145.75   296.27   188.84
Total        Median     33       245      97       103      78       144      114      124      274      119
             N          19,344   32,582   10,564   6,844    4,441    29,964   39,046   17,381   9,468    4,460
             SD         48.41    431.56   206.28   199.40   162.13   193.99   198.16   162.61   377.40   269.10
F                       16.515   55.793   16.902   14.318   10.893   54.968   76.387   20.970   13.151   5.917
Sig.                    0.000    0.000    0.000    0.000    0.000    0.000    0.000    0.000    0.000    0.000

Total will not match the sum of the four device categories due to removal of the “other” category. Group with the longest median comment is bolded for each question in the original table

Discussion

The results of this study suggest that there are differences in type of completion device based on certain demographic characteristics, and in turn that device does have some influence on particular survey-taking behaviors. Many patterns of results are consistent with previous literature on the subject. Our study found differences in type of completion device based on age, with younger respondents more likely to use smartphones, mirroring the results of de Bruijne and Wijnant (2014) and Mavletova (2013). This finding is not surprising, given that younger populations more readily adopt new technology, and many are very accustomed to smartphone use in general. There were also differences based on current employment status that are attributable to age, with retired respondents more likely to use tablets. This is possibly another generational effect that speaks to the technological needs and uses of older individuals (who make up the majority of the retired population). Older individuals might be less likely to use smartphones due to the difficulty involved with smaller, truncated screens, which previous research suggests place additional burdens on survey respondents (Buskirk and Andrus 2012). However, older individuals might appreciate the simplicity of tablets, with their quick startup times, icon-based interfaces, and larger viewing areas than smartphones. People who are retired are unlikely to need a desktop or laptop computer for extensive work purposes, and may instead prefer tablets for the streamlined ability to check email, browse social media, and get online. It may be that the generational effects are not simply explained by whatever is the newest technology, but also by how the functional aspects of the technology meet the unique needs of the population.

Unlike previous studies, our results indicate that there are some differences in completion device based on gender and income. Tablet users had significantly higher incomes compared to other device users. Does this mean that tablets are still considered “luxury” items? If most people have either a computer or a smartphone (or both) for their internet access needs, is one less able to justify the purchase of a tablet for personal use? The gender differences found in the study are also interesting but require some speculation as to their explanation. Women were more likely to complete the survey with a smartphone, and while this may speak to women being more frequent users of smartphones, another rationale for this finding relates to the idea that women are more compliant with survey requests in general. If women are more likely than men to complete a survey on their smartphones, perhaps this is because they are more compliant with the survey request regardless of the device on which they received it. In other words, it may be that if men receive a survey request while checking email on their smartphones, they are more likely to disregard it, knowing that it might be more burdensome to take a survey from this device. They may intend to “do it later” when they have access to a traditional computer. Women, however, as the more conscientious and willing survey takers, may want to comply with the request immediately, even if that means completing the survey from a smartphone. More research, particularly using methods of user testing and cognitive interviewing, is needed to further address these potential differences in compliance and motivation.

Results from analyses of metadata measures also suggest that type of completion device can impact various aspects of survey-taking behavior. Taking surveys on smartphones, and to a certain extent tablets, does seem to increase respondent burden, as indicated in previous research (Peytchev and Hill 2010). Smartphone users were far more likely to abandon the survey, suggesting that the difficulty of reading questions, selecting responses, and advancing through multiple screens can outweigh the respondent’s intention to complete the survey. Even those smartphone users who did manage to “power through” to the end required more time to complete the survey. This result is not surprising given the extra time needed to scroll both vertically and horizontally, as well as to type out text responses on touch-screen keypads. The issue of open-ended questions is further illuminated when looking at the likelihood of responding to the questions at all and the length of the responses. Smartphone and tablet users were somewhat less likely to answer open-ended questions, and when they did choose to type out a response it was much shorter than those of desktop and laptop users. Perhaps this pattern will change as more smartphones and tablets are accompanied by voice-dictation functions, but the data from this study suggest that a reliance on open-ended questions may be problematic if many respondents are completing the survey with smartphones or tablets. The respondent’s desire to keep responses short (if they leave a response at all) might have a negative influence on the quality of the data.

A final noteworthy finding was that smartphone users were actually more likely to fully complete complex layout item sets. At first glance, this result seems counterintuitive because these questions may require more horizontal and vertical scrolling on a truncated screen in order to access the stem and response options, so one would expect fewer complete responses. However, our results indicate the opposite. While more research is needed to determine the exact cognitive processes of smartphone respondents for these types of item sets, we suggest two possible explanations. The first is that those who persevered to these points in the survey (which were located about one-third and one-half of the way through) on their smartphones and had not yet abandoned the survey are the more dedicated and conscientious survey takers. Therefore, their responses were more complete, and would have been more complete regardless of the type of device on which they were taking the survey. A second possible explanation involves the approach to answering these sorts of complex questions. It may be that on a smartphone, even though there were 16 total items in the set, the respondent can view only one or two items at a time. Therefore, he or she must shift to a more methodical, algorithmic approach and simply focus on one item at a time, rather than using a more heuristic approach and viewing the set of items as a whole, as one might do with a larger screen. There is much debate about using these types of questions on surveys, regardless of mode (Dillman 2007), so this finding introduces another layer of complexity that takes smartphone users into account.

Limitations

Although there are many informative aspects of this study, some limitations of the data should be noted. The sample available for this study may not be representative of all survey takers, as SNAAP only surveys arts alumni. Furthermore, before these alumni can become eligible for the sample, their institutions must choose to participate in the project itself, leading to potential selection bias toward certain types of institutions. Additionally, alumni response rates are lower than response rates for other types of surveys in higher education, such as student and faculty surveys. However, research suggests that lower response rates do not always negatively impact response representativeness on key characteristics (Lambert and Miller 2014). Even given these limitations, this study still provides insight concerning the impact of device type in web-based survey methodology.

Conclusions

Given the rapid changes associated with technology, faculty, administrators, and policy makers need to understand how the data they collect from surveys are affected by the devices respondents are using. As more people replace computers with smartphones and tablets, especially when traveling, more survey responses will come through these devices. Therefore, it is important to know how respondent demographic characteristics relate to the type of completion device. Additionally, type of device does have an impact on certain survey-taking characteristics, including breakoff status and open-ended response completion, and there is potential for these to negatively influence data quality. These results suggest that survey designers may have to rely less heavily on questions with complex layouts, long verbiage, and open-ended responses when creating their instruments. In particular, it is important to understand the audience for the surveys to be administered. For example, if a survey is going to be sent to a younger population, the layout and functionality of the survey need to be optimized for smartphones to increase completion. For a younger population, survey designers might also think judiciously about which open-ended questions to include, because these questions increase survey burden for those completing the survey on a smartphone. As technology continues to develop at a rapid pace, more research is needed on technology advances and the use of survey research in higher education and beyond.

References

  1. Atrostic, B. K., Bates, N., Burt, G., & Silberstein, A. (2001). Nonresponse in U.S. government household surveys: Consistent measures, recent trends, and new insights. Journal of Official Statistics, 17(2), 209–226.
  2. Baruch, Y. (1999). Response rates in academic studies: A comparative analysis. Human Relations, 52, 421–434.
  3. Buskirk, T. D., & Andrus, C. (2012). Smart surveys for smart phones: Exploring various approaches for conducting online mobile surveys via smartphones. Survey Practice, 5(1). http://surveypractice.org/index.php/SurveyPractice/article/download/63/pdf.
  4. Cabrera, A. F., Weerts, D. J., & Zulick, B. J. (2005). Making an impact with alumni surveys. New Directions for Institutional Research, 2005, 5–17. doi:10.1002/ir.144.
  5. Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64, 464–494.
  6. Couper, M. P., Traugott, M. W., & Lamias, M. J. (2001). Web survey design and administration. Public Opinion Quarterly, 65, 230–253.
  7. de Bruijne, M., & Wijnant, A. (2014). Mobile response in web panels. Social Science Computer Review, 0894439314525918.
  8. Denscombe, M. (2006). Web-based questionnaires and the mode effect: An evaluation based on completion rates and data contents of near-identical questionnaires delivered in different modes. Social Science Computer Review, 24(2), 246–254. doi:10.1177/0894439305284522.
  9. Dillman, D. A. (2007). Mail and internet surveys: The tailored design method (2nd ed.). Hoboken: John Wiley & Sons.
  10. Field, A. (2009). Discovering statistics using SPSS (3rd ed.). London: Sage Publications.
  11. Kaye, B. K., & Johnson, T. J. (1999). Taming the cyber frontier: Techniques for improving online surveys. Social Science Computer Review, 17, 323–337.
  12. Kuh, G. D., & Ewell, P. T. (2010). The state of learning outcomes assessment in the United States. Higher Education Management and Policy, 22(1), 1–20.
  13. Kuh, G. D., & Ikenberry, S. O. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
  14. Lambert, A. D., & Miller, A. L. (2014). Lower response rates on alumni surveys might not mean lower response representativeness. Educational Research Quarterly, 37(3), 38–51.
  15. Mavletova, A. (2013). Data quality in PC and mobile web surveys. Social Science Computer Review, 31(6), 725–743. doi:10.1177/0894439313485201.
  16. Miller, T. I., Miller Kobayashi, M., Caldwell, E., Thurston, S., & Collett, B. (2002). Citizen surveys on the web. Social Science Computer Review, 20(2), 124–136.
  17. Norman, K. L., Friedman, Z., Norman, K., & Stevenson, R. (2001). Navigational issues in the design of online self-administered questionnaires. Behaviour & Information Technology, 20(1), 37–45. doi:10.1080/01449290010021764.
  18. Peytchev, A. (2009). Survey breakoff. Public Opinion Quarterly, 73(1), 74–97. doi:10.1093/poq/nfp014.
  19. Peytchev, A., Couper, M. P., McCabe, S. E., & Crawford, S. D. (2006). Web survey design: Paging versus scrolling. Public Opinion Quarterly, 70(4), 596–607. doi:10.1093/poq/nfl028.
  20. Peytchev, A., & Hill, C. A. (2010). Experiments in mobile web survey design: Similarities to other modes and unique considerations. Social Science Computer Review, 28(3), 319–335. doi:10.1177/0894439309353037.
  21. Porter, S. R. (2004). Raising response rates: What works? New Directions for Institutional Research, 121, 5–21.
  22. Saxon, D., Garratt, D., Gilroy, P., & Cairns, C. (2003). Collecting data in the information age: Exploring web-based survey methods in educational research. Research in Education, 69, 51–66.
  23. Tourangeau, R. (2004). Survey research and societal change. Annual Review of Psychology, 55, 775–801. doi:10.1146/annurev.psych.55.090902.142040.
  24. Tourangeau, R., Couper, M. P., & Steiger, D. M. (2003). Humanizing self-administered surveys: Experiments on social presence in web and IVR surveys. Computers in Human Behavior, 19, 1–24.
  25. Villar, A., Callegaro, M., & Yang, Y. (2013). Where am I? A meta-analysis of experiments on the effects of progress indicators for web surveys. Social Science Computer Review, 31(6), 744–762. doi:10.1177/0894439313497468.
  26. Yan, T., Conrad, F. G., Tourangeau, R., & Couper, M. P. (2010). Should I stay or should I go: The effects of progress feedback, promised task duration, and length of questionnaire on completing web surveys. International Journal of Public Opinion Research, 23(2), 131–147. doi:10.1093/ijpor/edq046.

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

Indiana University, Bloomington, USA