The main objective of the German Emigration and Remigration Panel Study (GERPS) is to establish a longitudinal data set that provides information on the life trajectories of international migrants. However, a large amount of paradata was also collected in order to obtain meta-information on respondents’ survey participation. This auxiliary information can help to optimize data quality at all stages of the survey process. Continuing the existing discussion in the field of online surveys, this chapter pursues a twofold objective: it describes device usage (mobile vs. computer) and elucidates the determinants of device choice. In particular, it analyses whether selectivity effects due to respondents’ device choices bias the sample. Moreover, this chapter investigates differences in response time between devices to detect differences in response burden. The analysis of device-specific differences in response burden is an important issue, since an increased device-specific response burden can be a predictor of current and future panel dropouts. For both device-specific selectivity and survey burden, only slight differences were found between mobile and desktop devices. Using these data, this chapter addresses the need to analyse potential sources of survey error and provides evidence that GERPS data do not appear to contain noteworthy bias attributable to device usage.
The major aim of the German Emigration and Remigration Panel Study (GERPS) is to establish a longitudinal data set that offers information on life trajectories of international migrants. In addition to content-related questions, many methodological questions can also be answered with the help of GERPS. Among other things, this is done using paradata that are passively collected during the survey in order to obtain meta-information on respondents’ survey participation.
Kreuter (2013) defines paradata as “additional data that can be captured during the process of producing a survey statistic. Those data can be captured at all stages of the survey process and with very different granularities.” This form of survey-related meta-information can help to optimize data quality within nearly all stages of the survey process, starting with the design stage, continuing with the pretest, and ending with weighting and data cleaning based on the analysis of respondent attributes and response behaviour (Diedenhofen and Musch 2017; Verbree et al. 2019; Yan and Olson 2013).
In contrast to other forms of survey metadata, such as interviewer comments and observations, the computer programs of online and computer-assisted surveys (CATI, CAPI) unobtrusively collect a large amount of paradata without impacting the respondents’ experience or drawing any attention to the collection itself. Collecting paradata simultaneously with the actual survey data therefore has no real disadvantage within computer-administered surveys, apart from minimal effects on processing and transmission times. Thus, from an information-efficiency perspective, paradata are auxiliary information almost free of charge: they do not consume additional survey resources in terms of added respondent burden or extra survey time. Furthermore, paradata are non-reactive and therefore an objective form of data (compared, for example, to self-reports or interviewer data) (Jacob et al. 2019; Kreuter 2013; McClain et al. 2019).
Within GERPS, a vast amount of survey paradata was collected (Ette et al. 2020) that can be used to reflect on data quality and to learn more about the response process and response behaviour, both in general and under specific conditions. One of these specific conditions for online surveys is the question of which device the respondents used while completing the questionnaire. Mobile devices have become extremely popular and are used with increasing frequency for survey participation (Gummer et al. 2019). Previous studies have shown that the relative share of mobile respondents varies with the composition of the sample, so results concerning device usage are rather inconclusive (see above). Systematic differences between groups of device users would amount to method selection effects (Décieux and Hoffmann 2014), and these may entail an increased risk of selectivity bias between the different device modes. Therefore, the present study investigates whether device use is systematically caused by sociodemographic attributes of the respondents by addressing the first research question: What are the determinants of device choice?
Moreover, survey navigation differs significantly between mobile and desktop devices: an online survey on a traditional desktop device takes place on a big screen, with a mouse, a keyboard, and strong processing power, whereas survey navigation on mobile devices usually proceeds on a small touch screen with less processing power (Herzing 2019; Mergener and Décieux 2018; Schlosser and Mays 2018). Therefore, most of the existing research suggests that completion times increase when a mobile device is used to proceed through the questionnaire. This can be particularly problematic for panel surveys, as increased response times are used as an indicator of increased survey burden, which raises the danger of selective dropouts and decreasing data quality (Groves et al. 2000; Mancosu et al. 2019; Montgomery and Rossiter 2020; Tourangeau et al. 2000; Ward and Meade 2018). However, more nuanced approaches point out that device-related differences in response times decrease when a mobile-optimized design is used (Höhne and Schlosser 2018). As GERPS used a mobile-optimized design, the second research question focusses on response time differences between the desktop and mobile modes: Is there a response time difference between mobile and desktop respondents?
2 The Rising Importance of Paradata for Survey Research
At least within the last 20 years, the use of paradata in surveys has become increasingly important. The underlying causes are heterogeneous. Even though paradata had already been used in other survey modes, for example call record data used to optimize the timing of calls in telephone surveys (see e.g. Durrant et al. 2013), a crucial factor is the technological development of digitization and the rise of computer-administered and especially online surveys, as these developments have substantially simplified the collection of paradata. The modern computer programs of online and computer-assisted surveys (CATI, CAPI) are able to unobtrusively collect a large amount of paradata while respondents are answering the survey, without adding respondent burden or survey time or affecting response behaviour (McClain et al. 2019; Kreuter 2013; Jacob et al. 2019).
Apart from these technological possibilities, McClain et al. (2019) mention two additional developments that are primarily responsible for the increasing importance of paradata: First, the growing need to understand and classify the ways in which respondents access web surveys. This includes the route into the survey, e.g. using a QR code, a link in an e-mail or on a homepage, as well as the device that is used to complete the survey such as mobile or desktop devices (e.g. Couper and Peterson 2017; Höhne and Schlosser 2018). Second, the renewed focus on the usability and response quality of web surveys. For these purposes, paradata offer objective indicators of response behaviour, data quality, and usability of the survey (e.g. Antoun and Cernat 2019; Brockhaus et al. 2017; Couper and Peterson 2017; Mayerl and Giehl 2018; McClain et al. 2019; Roßmann and Gummer 2016; Sendelbah et al. 2016). Moreover, paradata can be used to reflect and interpret responses from a content perspective (Yan and Olson 2013) or to classify respondents’ personalities based on indicators of response behaviour (Stieger and Reips 2010).
3 State of Research on Selectivity of Device Use and Response Time Differences
At the beginning of the era of online surveys, these were programmed to be answered as easily as possible using desktop or laptop computers with a mouse and keyboard. At that time, survey methodology mainly focused on the functionality and convenience of surveys within different browsers and operating systems (Couper 2008). However, due to technical developments such as the increasing role of mobile devices (e.g. smartphones, tablets) in the ongoing global digitalisation (Décieux et al. 2018; Erzen et al. 2019; Turkle 2017), online survey research has detected an increase in questionnaires answered on mobile devices. Longitudinal analyses of device use show clear patterns: although the desktop device is still the most frequent mode, the share of mobile respondents is increasing within large panel studies. Increasing mobile device shares can, for example, be found in the Netquest Panel (Revilla et al. 2016), the GESIS Panel (Haan et al. 2019), and the German Longitudinal Election Study (Gummer et al. 2019).
This increasing tendency towards mobile device usage in web surveys has led to a new demand in survey research (Gummer et al. 2019; Wenz et al. 2019). It has become increasingly important to learn more about the factors that explain device usage and device choice as these can lead to a selectivity bias between these modes. Moreover, interacting with surveys on mobile devices is done differently than on desktop computers. Navigation of the survey on a mobile device uses a touch screen rather than the traditional mouse and keyboard. In addition, devices differ concerning their processing power. The processing power of desktop devices is usually superior to that of mobile devices (Schlosser and Mays 2018). Therefore it became important to gather information on how respondents proceed through an online survey on a mobile device and to elucidate differences in response behaviour (e.g. Andreadis 2015; Lee et al. 2018; Mergener and Décieux 2018; Schlosser and Mays 2018).
3.1 Factors Affecting (Selectivity of) Device Choice
Research on specific determinants (such as sociodemographic variables) of the choice of device for completing an online survey is rather inconclusive. Concerning the effect of gender, Cook (2014) found that females tend to participate more often on a mobile device. However, other studies found no clear effect of gender on device usage (Revilla et al. 2016; Schlosser and Mays 2018). Results regarding age and education have also been inconsistent. Numerous studies have concluded that younger respondents tend to use mobile devices more often for survey participation (e.g. Lambert and Miller 2015; Couper et al. 2017; Gummer et al. 2019), while others found at best inconsistent effects of age across different countries (Revilla et al. 2016). Concerning education, de Bruijne and Wijnant (2013) found that more educated individuals tend to use mobile devices more often for answering surveys. However, results from seven different countries examined by Revilla et al. (2016) and from 18 pooled web surveys examined by Gummer et al. (2019) challenge these findings, increasing doubts about the effect. Moreover, individuals living in a single-person household were found to use mobile devices more often for survey participation (Cook 2014; Haan et al. 2019). Taken together, this research on contextual factors affecting device usage shows inconclusive results and no clear selectivity pattern.
3.2 Response Time as an Indicator for Survey Burden Analysis
Response times are often used as an indicator to compare survey burden and survey convenience across modes, which are central to predicting current and future dropouts in a panel survey (Antoun and Cernat 2019). The longer a survey takes, the higher the survey burden. High survey burden negatively affects respondents’ perceived convenience and their propensity for current and future participation (Gummer and Roßmann 2015; Peytchev 2009; Rolstad et al. 2011; Villar et al. 2013), as well as the quality of the answers, making behaviours such as satisficing and careless responding more likely (Gibson and Bowling 2020; Leiner 2019; Roßmann et al. 2018).
Nearly all previous studies comparing response times across desktop and mobile device modes have found that web surveys take longer to complete on mobile devices than on desktop devices (Andreadis 2015; Antoun and Cernat 2019; Couper and Peterson 2017; Schlosser and Mays 2018), and meta-analyses of 21 studies (Gummer and Roßmann 2015) and 26 studies (Couper and Peterson 2017) corroborate these results. However, a closer look at the technical conditions of survey completion shows that these response time differences decrease significantly in mobile-friendly survey environments, for example in surveys with a mobile-optimized design, on technically advanced smartphones, or when respondents are connected to WiFi (Schlosser and Mays 2018; Couper and Peterson 2017). It can therefore be concluded that most of the additional time needed on mobile devices is caused by additional scrolling (Couper and Peterson 2017).
3.3 Research Questions
As pointed out before, mobile devices have become increasingly popular and available “anywhere and anytime” (Thulin 2018). Hence, they represent a real alternative that is increasingly used for survey participation (Gummer et al. 2019). Moreover, the GERPS sample consists of internationally mobile respondents, a group that usually shows an increased affinity for the internet and mobile devices (Ette and Sauer 2010). GERPS respondents also tend to be younger and better educated than the German population as a whole (Ette and Erlinghagen 2021). Therefore, we expected a large number of mobile device respondents for GERPS, and with them a potentially increased risk of selectivity biases as a consequence of systematic device use. To address these questions, the aim of this paper is twofold. The first aim of this study is to identify the variables linked to device choice. As described above, existing research has produced inconclusive results concerning the determinants of device choice. Any systematic differences between groups of mobile and desktop users would amount to method selection effects (Décieux and Hoffmann 2014), as specific individuals are, for example, more prone to use a mobile device to complete an online questionnaire. Such systematic selectivity would increase the risk of a selectivity bias between the different device modes, especially if contextual variables had strong explanatory power for the mode choice. Therefore, the present study investigates factors affecting device choice by addressing the following research question: What are the determinants of device choice? Here, it is elucidated whether device choice can be comprehensively explained by sociodemographic characteristics of the respondent such as age, gender, and so on.
Moreover, research focusing on differences in the response process across the different modes of online surveys mostly suggests that completion times increase when a mobile device is used to proceed through the questionnaire. However, more nuanced approaches point to decreasing response time differences when a mobile-optimized design is used. Therefore, survey participation in GERPS was made as convenient as possible for the respondents, e.g. by developing a mobile-optimized design (for more information see Ette et al. 2020). This mobile-optimized design should ensure the highest possible practicability on mobile devices (Andreadis 2015; Herzing 2019; Schnell et al. 2013). Thus, the second aim of the paper is to compare the response times of mobile and desktop respondents within the mobile-optimized design: Is there a response time difference between mobile and desktop respondents? When comparing mode-specific response times, a large difference between the mobile and desktop groups could, for example, point to a selective increase of survey burden due to device choice, which in turn can result in increasing dropouts. For a panel survey such as GERPS, an accumulation of selective dropouts due to device-specific survey burden in wave 1 would be especially impactful, because once individuals drop out there is no possibility of asking them about their willingness to be re-surveyed in the following waves. Due to its importance for ongoing data collection, the comparison of mode-specific response times is usually one of the initial steps when assessing sample and data quality.
4 Data and Measures
4.1 Data and Data Cleaning
Given that online surveys are self-administered, there is an increased risk of distractions and of respondents engaging in secondary activities while completing the survey, which may strongly distort survey response times (Antoun et al. 2017; Gibson and Bowling 2020; Höhne and Schlosser 2018; Sendelbah et al. 2016; Zwarun and Hall 2014). Therefore, for the analysis addressing research question 2, both samples were separately cleaned for response time outliers using the STATA module RSPEEDINDEX (Roßmann 2015). This module computes a response speed index for every respondent on the basis of the overall survey completion time. The values of the index can be interpreted as a measure of the mean response speed of survey respondents: an index value of 1 means that the respondent’s response speed is equivalent to the mean response speed in the selected sample, index values close to 0 indicate a very fast mean response speed, and values close to 2 indicate a very slow mean response speed of the individual respondent. Based on this index, it was possible to flag response speed outliers in the lower range (i.e., fast respondents) and the upper range (i.e., slow respondents) using absolute cut-off values of the response speed index. Since no generally accepted cut-off criterion for this index has been established, a threshold of 0.5 above and below the mean response speed (a response speed index of 1) was chosen for both the emigrant and the remigrant sample. In the emigrant sample, 589 respondents had a response speed index value smaller than 0.5 and were flagged as fast respondents (speeders), and 303 respondents had a value above 1.5 and were flagged as slow respondents. Within the remigrant sample, 810 respondents were flagged as speeders and 436 as slow respondents. All flagged respondents were excluded from the analysis for research question 2.
Consequently, the cleaned, final sample for the analysis to address research question 2 consisted of 9563 respondents.
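The flagging logic described above can be sketched in a few lines of Python. This is an illustrative simplification, not the actual RSPEEDINDEX implementation (which works with more granular timing data); here the index is simply each respondent’s completion time divided by the sample mean, and the function name is hypothetical.

```python
def flag_speed_outliers(times, lower=0.5, upper=1.5):
    """Flag respondents whose speed index falls outside [lower, upper].

    Simplified stand-in for Stata's RSPEEDINDEX: each respondent's
    completion time is divided by the sample mean, so an index of 1
    means average speed. Values below `lower` mark speeders, values
    above `upper` mark slow respondents.
    """
    mean_time = sum(times) / len(times)
    index = [t / mean_time for t in times]
    speeders = [i for i, v in enumerate(index) if v < lower]
    slow = [i for i, v in enumerate(index) if v > upper]
    return index, speeders, slow
```

With completion times of 10, 20, 30, and 100 minutes (mean 40), the first respondent (index 0.25) would be flagged as a speeder and the last (index 2.5) as a slow respondent.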
4.2.1 Dependent Variables
As mentioned above, we concentrate on two paradata measures, namely device type and completion time, as dependent variables in the following analyses. To assess which device respondents used to complete the survey, we drew on the user agent strings that the survey software collected as paradata. Using the STATA module PARSEUAS (Roßmann and Gummer 2016), these user agent strings were parsed into usable information that allowed us to determine whether respondents used a personal computer, tablet, or smartphone to complete the survey. Based on this information, a binary variable was created to code use of a mobile device (“No = 0” and “Yes = 1”). The overall completion time is used as an indicator of survey burden. For this article, response time was assessed as the server-side completion time, measured in seconds and collected within the survey paradata. For the analysis, these times were converted to minutes.
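A minimal sketch of this kind of device classification, assuming a plain user agent string as input. The few patterns below are illustrative stand-ins only; PARSEUAS and production parsers rely on far larger pattern databases.

```python
import re

# Hypothetical, much-reduced stand-in for PARSEUAS: classify a user
# agent string into a coarse device type via a few indicative patterns.
MOBILE_PATTERNS = re.compile(r"iPhone|Android.*Mobile|Windows Phone", re.I)
TABLET_PATTERNS = re.compile(r"iPad|Android(?!.*Mobile)", re.I)

def classify_device(user_agent: str) -> str:
    """Return 'smartphone', 'tablet', or 'desktop' for a UA string."""
    if TABLET_PATTERNS.search(user_agent):
        return "tablet"
    if MOBILE_PATTERNS.search(user_agent):
        return "smartphone"
    return "desktop"

def is_mobile(user_agent: str) -> int:
    # Binary coding used in the analysis: mobile (smartphone/tablet) = 1
    return 0 if classify_device(user_agent) == "desktop" else 1
```

The negative lookahead in the tablet pattern exploits the convention that Android phone browsers include “Mobile” in the UA string while Android tablets do not.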
4.2.2 Independent Variables
Our analyses also examined several independent variables that have been tested in previous literature and can be interpreted as determinants of device usage and survey completion time. These are the common sociodemographic variables. The respondent’s age at the time of wave 1, based on the year of birth reported in the questionnaire, was categorized into four groups: “20–29 years”, “30–39 years”, “40–49 years”, and “50 years and older”. In addition, respondents’ gender was included in the analysis as a control variable, with male respondents coded as “1” and female respondents as “2”. Moreover, respondents’ education was measured by the highest vocational or college degree attained, with the response options 1 = “No degree”, 2 = “Intermediate degree”, 3 = “Upper degree”, and 4 = “Other”. The analysis of household status is based on the generated variable “household status after migration” in the GERPS data set, which consists of eight values: 1 = “One-person household”, 2 = “Couple without children”, 3 = “Single parent”, 4 = “Couple with children LE 16”, 5 = “Parents and adult children”, 6 = “Adults with parents”, 7 = “Multi-generation household”, and 8 = “Other combinations”. This variable was dichotomized into the variable single-person household, with 1 = “Yes” for value 1 (“One-person household”) and 0 = “No” for values 2 to 8.
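The recoding steps described above can be sketched as follows; the group boundaries and codes mirror the text, while the function names are hypothetical.

```python
def recode_age(age: int) -> str:
    """Collapse age in years into the four groups used in the analysis."""
    if age < 30:
        return "20-29 years"
    if age < 40:
        return "30-39 years"
    if age < 50:
        return "40-49 years"
    return "50 years and older"

def single_person_household(household_status: int) -> int:
    """Dichotomize the eight-value 'household status after migration'
    variable: value 1 (one-person household) -> 1, values 2-8 -> 0."""
    return 1 if household_status == 1 else 0
```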
5.1 Selectivity of Mode Choice
As mentioned above, we expected a large number of respondents in both samples to have used a mobile device for completing the survey, and therefore we tried to make smartphone and other mobile device usage more convenient, e.g. by including a QR code in the invitation letter and programming a mobile-optimized survey design. As Table 17.1 shows, in both the emigrant and the remigrant sample about 30% of respondents completed the survey on a mobile device.
A t-test comparing device use between the two GERPS samples revealed no significant differences. Thus, the GERPS data corroborate the notion of a high rate of mobile respondents and show patterns similar to those found in the existing literature (Gummer et al. 2019; Haan et al. 2019; Revilla et al. 2016).
Table 17.2 shows two different logistic regression models investigating the determinants of device choice where “desktop” is coded as 0 and “mobile” is coded as 1. The models control for possible predictors that may theoretically affect device choice: gender, age, education, and living in a single-person household. Model 1 tests the effects within the emigrant sample and model 2 within the remigrant sample.
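As an illustration of the kind of model estimated here, the following is a minimal, self-contained logistic regression fitted by gradient descent on a binary outcome (mobile = 1, desktop = 0). It is a sketch only: the chapter’s models were estimated with standard statistical software, which additionally reports standard errors, significance tests, and the fit indices discussed below.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic regression (intercept + weights) by batch
    gradient descent on feature rows X and binary labels y."""
    n_feat = len(X[0])
    w = [0.0] * (n_feat + 1)  # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * (n_feat + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(mobile)
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(y) for wj, g in zip(w, grad)]
    return w

def predict_prob(w, xi):
    """Predicted probability of the outcome (e.g. mobile use) for xi."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))
```

With a toy predictor that perfectly separates the groups, the fitted model assigns a high mobile probability to one group and a low one to the other; with real sociodemographic predictors, the coefficients would be interpreted as in Table 17.2.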
Concerning age, a significant difference in device usage can be found in both samples: compared to respondents aged 50 and older, younger age groups are significantly more likely to have used a mobile device to complete the survey. This result is consistent with previous literature (e.g. Couper et al. 2017; Gummer et al. 2019; Lambert and Miller 2015). Moreover, in both samples gender appears to be an important determinant of device use: female respondents tended to use mobile devices for survey participation more often than males did, again in line with previous research (Cook 2014). Furthermore, we found a significant effect of education: respondents with higher education degrees used a mobile device remarkably less frequently than respondents with lower degrees. This challenges the findings of de Bruijne and Wijnant (2013), who reported an increasing tendency of highly educated respondents to use a mobile device for survey participation; in our data, the effect runs in exactly the opposite direction. The education effect in our samples corroborates the interpretation of Revilla et al. (2016) and Gummer et al. (2019), who stated that the effect of education on device usage seems to be inconsistent or inconclusive across different populations. Concerning household structure, a significant effect can only be found in the remigrant sample, where respondents living in a single-person household tended to use mobile devices less often for survey participation. Thus, the finding of previous studies that respondents living in single-person households tend to use mobile devices more often for survey participation (Cook 2014; Haan et al. 2019) is challenged by the data of the remigrant sample.
To conclude, when examining mode selection effects in GERPS, both samples yielded results consistent with some previous findings and in contrast to others. Our data showed the commonly found effects of age and gender, but a significant effect of education on device use that is contrary to the results of most existing studies. Moreover, living in a single-person household was significantly related to device use within the remigrant sample, such that remigrants in single-person households were less likely to have used a mobile device to complete the survey, which challenges the results of previous studies. However, the strength of any systematic selection effect of device choice is indicated by the model fit indices (Gummer et al. 2019). These show only very slight explanatory power in both samples, which means that there is only an incidental selectivity effect of device choice within the GERPS data.
5.2 Analysis of Survey Burden Across Survey Modes
Research question 2 focusses on the analysis of survey burden based on the overall survey completion time, an established and objective indicator for this purpose. In a first step, an independent t-test was calculated in order to determine whether there were differences in response times depending on the device respondents used (desktop or mobile device) (Table 17.3).
Although at first glance the mean response times may seem very similar, the results of the t-test showed that participants using a desktop device had statistically significantly lower response times (24.16 ± 0.10 min) than respondents answering the survey on a mobile device (24.74 ± 0.15 min). However, as survey participation is shaped by various respondent-related attributes, the difference in interview duration was also tested within regression models controlling for respondents’ characteristics in both samples. To assess the effect of the device, we fitted separate Ordinary Least Squares (OLS) regression models with completion time as the dependent variable. Table 17.4 shows four regression models focusing on factors that might affect completion times: device used, gender, age, education, and single-person household. The rows display the contextual factors and the columns display the completion times within the different samples. Model 1 and Model 3 are the baseline models covering the bivariate effect of the device on completion times (Model 1 for the emigrant sample, Model 3 for the remigrant sample), while Model 2 and Model 4 additionally control for determinants of response time other than the device within the emigrant sample (Model 2) and the remigrant sample (Model 4).
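The independent t-test used here can be sketched as follows. This shows Welch’s variant, which does not assume equal group variances; whether the chapter used the pooled or the Welch variant is not stated, so that choice is an assumption.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent
    samples (e.g. completion times of desktop vs. mobile respondents)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2_a, se2_b = va / na, vb / nb                  # squared standard errors
    t = (mean(sample_a) - mean(sample_b)) / (se2_a + se2_b) ** 0.5
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1))
    return t, df
```

The resulting t statistic would be compared against the t distribution with df degrees of freedom to obtain the significance level reported in Table 17.3.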
Concerning response times, we found a difference between mobile and desktop users only in the remigrant sample, where respondents answering the survey on a mobile device took significantly longer than desktop respondents. In practice, this means 44 s (2.4% of the mean response time) longer spent on survey completion in the baseline model and 39 s longer when other determinants were controlled for. Although the emigrant sample showed similar patterns, no significant effect of the survey mode on completion time was found there, either in the baseline model or in the model controlling for respondent-related factors. Thus, the results of this study show a slight tendency for the response process to take longer on mobile devices, which is consistent with previous research. However, as suggested by previous studies (Höhne and Schlosser 2018; Schlosser and Mays 2018), these differences are not strong. The weakness of the effect might be due to the mobile-friendly design used within GERPS. In both the remigrant and emigrant samples, only slight differences in response times can be found, and in the emigrant sample (columns 1 and 2) these were not statistically significant. In both cases, the effects of control variables such as age, gender, or single-person household status were much stronger.
Today an increasing number of online surveys are completed on mobile devices, which brings possible problems of selectivity effects and of differences in how respondents perceive survey burden, both factors that can affect data quality, especially in a panel survey such as GERPS. Within GERPS, roughly one-third of the respondents used a mobile device to answer the questionnaire. Compared to other projects, this frequency is in the upper range, but not surprisingly high. However, having different groups of respondents who navigate through a questionnaire in completely different ways always carries the risk of selectivity biases and of differences in perceived survey burden. Both were reflected in the research questions of this paper.
Concerning selectivity biases due to device use, it can be assumed that, after controlling for the commonly investigated determinants of device use, the GERPS samples contain only a very slight sociodemographic selectivity bias due to device use. Moreover, concerning differences in response burden, we found only a very small effect of response mode when response burden is measured by overall response time. This was especially the case within the remigrant sample.
Still, future studies should put a stronger emphasis on the difference between the emigrant and remigrant samples concerning the significance of the response time differences, in order to elucidate whether the mobile-friendly design had the desired effect of reducing the response time differences between mobile and PC use. Contrary to expectations based on previous literature, response times did not differ significantly between the mobile and desktop groups in the emigrant sample. Therefore, a more differentiated perspective may help to elucidate the determinants and drivers of this missing difference (Struminskaya et al. 2015). For example, it might be interesting to investigate whether this missing effect is driven by specific patterns of mobile device type usage (smartphone vs. tablet) compared to the remigrant sample, or by better or worse quality of the internet connection abroad compared to within Germany (Schlosser and Mays 2018). A more advanced response time outlier definition, e.g. one taking on-device distractions into account, could possibly substantiate the foundation of these results (Höhne and Schlosser 2018; Antoun et al. 2017). In addition, the GERPS sample may also provide an opportunity for cross-national comparisons of device use and response time differences among German citizens in different regions of the world.
Andreadis, I. (2015). Web surveys optimized for smartphones: Are there differences between computer and smartphone users? Methods, Data, Analyses, 9(2), 213–228.
Antoun, C., & Cernat, A. (2019). Factors affecting completion times: A comparative analysis of smartphone and PC web surveys. Social Science Computer Review, 38, 477.
Antoun, C., Couper, M. P., & Conrad, F. G. (2017). Effects of mobile versus PC web on survey response quality: A crossover experiment in a probability web panel. Public Opinion Quarterly, 81(S1), 280–306.
Brockhaus, S., Keusch, F., Henninger, F., Horwitz, R., Kieslich, P., Kreuter, F., & Schierholz, M. (2017). Learning from mouse movements: Improving web questionnaire and respondents’ user experience through passive data collection. Miami.
Cook, W. A. (2014). Is mobile a reliable platform for survey taking? Defining quality in online surveys from mobile respondents. Journal of Advertising Research, 54(2), 131–148.
Couper, M. P. (2008). Designing effective web surveys. Cambridge: Cambridge University Press.
Couper, M. P., Antoun, C., & Mavletova, A. (2017). Mobile web surveys: A total survey error perspective. In P. P. Biemer, E. De Leeuw, S. Eckman, B. Edwards, F. Kreuter, L. E. Lyberg, C. Tucker, & B. T. West (Eds.), Total survey error in practice (pp. 133–154). Hoboken, NJ: John Wiley & Sons, Inc.
Couper, M. P., & Peterson, G. J. (2017). Why do web surveys take longer on smartphones? Social Science Computer Review, 35(3), 357–377.
de Bruijne, M., & Wijnant, A. (2013). Comparing survey results obtained via mobile devices and computers: An experiment with a mobile web survey on a heterogeneous group of mobile devices versus a computer-assisted web survey. Social Science Computer Review, 31(4), 482–504.
Décieux, J. P. P., Witte, N., Ette, A., Erlinghagen, M., Guedes Auditor, J., Sander, N., & Schneider, N. (2019). Individual consequences of migration in a life course perspective: Experiences of the first two waves of the new German Emigration and Remigration Panel Study (GERPS). Paper presented at the European Survey Research Association (ESRA) Conference, Zagreb.
Décieux, J. P., Heinen, A., & Willems, H. (2018). Social media and its role in friendship-driven interactions among young people: A mixed methods study. Young, 27(1), 18–31.
Décieux, J. P. P., & Hoffmann, M. (2014). Antwortdifferenzen im junk & crime survey: Ein Methodenvergleich mit goffmanscher Interpretation. In M. Löw (Ed.), Vielfalt und Zusammenhalt: Verhandlungen des 36. Kongresses der Deutschen Gesellschaft für Soziologie in Bochum und Dortmund 2012. Frankfurt am Main, New York: Campus.
Diedenhofen, B., & Musch, J. (2017). PageFocus: Using paradata to detect and prevent cheating on online achievement tests. Behavior Research Methods, 49, 1444–1459.
Durrant, G. B., D’Arrigo, J., & Müller, G. (2013). Modelling call recorded data: Examples from cross-sectional and longitudinal surveys. In F. Kreuter (Ed.), Improving surveys with paradata. Analytic uses of process information (pp. 281–308). Hoboken, NJ: John Wiley & Sons.
Erlinghagen, M., Ette, A., Schneider, N. F., Witte, N., & Décieux, J. P. (2019). Internationale Migration zwischen hochentwickelten Staaten und ihre Konsequenzen für den Lebensverlauf. In N. Burzan (Ed.), Komplexe Dynamiken globaler und lokaler Entwicklungen. Essen: DGS.
Erzen, E., Odaci, H., & Yeniçeri, İ. (2019). Phubbing: Which personality traits are prone to phubbing? Social Science Computer Review.
Ette, A., & Erlinghagen, M. (2021). Structures of German emigration and remigration: Historical developments and demographic patterns. In M. Erlinghagen, A. Ette, N. F. Schneider, & N. Witte (Eds.), The global lives of German migrants. Consequences of international migration across the life course. Cham: Springer.
Ette, A., Décieux, J. P., Erlinghagen, M., Auditor, J. G., Sander, N., Schneider, N. F., & Witte, N. (2021). Surveying across borders: The experiences of the German emigration and remigration panel study. In M. Erlinghagen, A. Ette, N. F. Schneider, & N. Witte (Eds.), The global lives of German migrants. Consequences of international migration across the life course. Cham: Springer.
Ette, A., Décieux, J. P., Erlinghagen, M., Genoni, A., Auditor, J. G., Knirsch, F., Kühne, S., Mörchen, L., Sand, M., Schneider, N. F., & Witte, N. (2020). German emigration and remigration panel study (GERPS): Methodology and data manual of the baseline survey (wave 1). Wiesbaden: Bundesinstitut für Bevölkerungsforschung.
Ette, A., & Sauer, L. (2010). Auswanderung aus Deutschland: Daten und Analysen zur internationalen Migration deutscher Staatsbürger. Wiesbaden: VS Verlag.
Gibson, A. M., & Bowling, N. A. (2020). The effects of questionnaire length and behavioral consequences on careless responding. European Journal of Psychological Assessment, 36, 410–420.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description and an illustration. Public Opinion Quarterly, 64(3), 299–308.
Gummer, T., Quoß, F., & Roßmann, J. (2019). Does increasing mobile device coverage reduce heterogeneity in completing web surveys on smartphones? Social Science Computer Review, 37(3), 371–384.
Gummer, T., & Roßmann, J. (2015). Explaining interview duration in web surveys: A multilevel approach. Social Science Computer Review, 33(2), 217–234.
Haan, M., Lugtig, P., & Toepoel, V. (2019). Can we predict device use? An investigation into mobile device use in surveys. International Journal of Social Research Methodology, 22(5), 517–531.
Herzing, J. M. E. (2019). Mobile web surveys (FORS Guide). Lausanne: Swiss Centre of Expertise in the Social Sciences FORS.
Höhne, J. K., & Schlosser, S. (2018). Investigating the adequacy of response time outlier definitions in computer-based web surveys using paradata SurveyFocus. Social Science Computer Review, 36(3), 369–378.
Jacob, R., Heinz, A., & Décieux, J. P. (2019). Umfrage: Einführung in die Methoden der Umfrageforschung. Berlin: De Gruyter Oldenbourg.
Kreuter, F. (Ed.). (2013). Improving surveys with paradata. Analytic uses of process information. Hoboken, NJ: John Wiley & Sons.
Lambert, A. D., & Miller, A. L. (2015). Living with smartphones: Does completion device affect survey responses? Research in Higher Education, 56(2), 166–177.
Lee, H., Kim, S., Couper, M. P., & Woo, Y. (2018). Experimental comparison of PC web, smartphone web, and telephone surveys in the new technology era. Social Science Computer Review, 37(2), 234–247.
Leiner, D. J. (2019). Too fast, too straight, too weird: Non-reactive indicators for meaningless data in internet surveys. Survey Research Methods, 13(3), 229–248.
Mancosu, M., Ladini, R., & Vezzoni, C. (2019). ‘Short is better’. Evaluating the attentiveness of online respondents through screener questions in a real survey environment. Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique, 141(1), 30–45.
Mayerl, J., & Giehl, C. (2018). A closer look at attitude scales with positive and negative items. Response latency perspectives on measurement quality. Survey Research Methods, 12(3), 193–209.
McClain, C. A., Couper, M. P., Hupp, A. L., Keusch, F., Peterson, G. J., Piskorowski, A. D., & West, B. T. (2019). A typology of web survey paradata for assessing total survey error. Social Science Computer Review, 37(2), 196–213.
Mergener, A., & Décieux, J. P. P. (2018). Die “Kunst” des Fragenstellens. In B. Keller, H.-W. Klein, & T. Wirth (Eds.), Qualität und Data Science in der Marktforschung: Prozesse, Daten und Modelle der Zukunft (pp. 81–97). Wiesbaden: Springer Fachmedien Wiesbaden.
Montgomery, J. M., & Rossiter, E. L. (2020). So many questions, so little time: Integrating adaptive inventories into public opinion research. Journal of Survey Statistics and Methodology, 8(4), 667–690.
Peytchev, A. (2009). Survey breakoff. The Public Opinion Quarterly, 73(1), 74–97.
Revilla, M., Toninelli, D., Ochoa, C., & Loewe, G. (2016). Do online access panels need to adapt surveys for mobile devices? Internet Research, 26(5), 1209–1227.
Rolstad, S., Adler, J., & Rydén, A. (2011). Response burden and questionnaire length: Is shorter better? A review and meta-analysis. Value in Health, 14(8), 1101–1108.
Roßmann, J. (2015). RSPEEDINDEX: Stata module to compute a response speed index and perform outlier identification. Statistical Software Components, Boston College Department of Economics.
Roßmann, J., & Gummer, T. (2016). Using paradata to predict and correct for panel attrition. Social Science Computer Review, 34(3), 312–332.
Roßmann, J., Gummer, T., & Silber, H. (2018). Mitigating satisficing in cognitively demanding grid questions: Evidence from two web-based experiments. Journal of Survey Statistics and Methodology, 6(3), 376–400.
Schlosser, S., & Mays, A. (2018). Mobile and dirty: Does using mobile devices affect the data quality and the response process of online surveys? Social Science Computer Review, 36(2), 212–230.
Schnell, R., Hill, P. B., & Esser, E. (2013). Methoden der empirischen Sozialforschung (10th ed.). München: Oldenbourg.
Sendelbah, A., Vehovar, V., Slavec, A., & Petrovčič, A. (2016). Investigating respondent multitasking in web surveys using paradata. Computers in Human Behavior, 55, 777–787.
Stieger, S., & Reips, U.-D. (2010). What are participants doing while filling in an online questionnaire: a paradata collection tool and an empirical study. Computers in Human Behavior, 26(6), 1488–1495.
Struminskaya, B., Weyandt, K., & Bosnjak, M. (2015). The effects of questionnaire completion using mobile devices on data quality. Evidence from a probability-based general population panel. Methods, Data, Analyses, 9(2), 261–292.
Thulin, E. (2018). Always on my mind: How smartphones are transforming social contact among young Swedes. Young, 26(5), 465–483.
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press.
Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
Verbree, A.-R., Toepoel, V., & Perada, D. (2019). The effect of seriousness and device use on data quality. Social Science Computer Review, 38, 720.
Villar, A., Callegaro, M., & Yang, Y. (2013). Where am I? A meta-analysis of experiments on the effects of progress indicators for web surveys. Social Science Computer Review, 31(6), 744–762.
Ward, M. K., & Meade, A. W. (2018). Applying social psychology to prevent careless responding during online surveys. Applied Psychology, 67(2), 231–263.
Wenz, A., Jäckle, A., & Couper, M. P. (2019). Willingness to use mobile technologies for data collection in a probability household panel. Survey Research Methods, 12(1), 1–22.
Yan, T., & Olson, K. (2013). Analyzing paradata to investigate measurement error. In F. Kreuter (Ed.), Improving surveys with paradata. Analytic uses of process information (pp. 73–96). Hoboken, NJ: John Wiley & Sons.
Zwarun, L., & Hall, A. (2014). What’s going on? Age, distraction, and multitasking during online survey taking. Computers in Human Behavior, 41, 236–244.
© 2021 The Author(s)
Décieux, J.P. (2021). Is There More Than the Answer to the Question? Device Use and Completion Time as Indicators for Selectivity Bias and Response Convenience in Online Surveys. In: Erlinghagen, M., Ette, A., Schneider, N.F., Witte, N. (eds) The Global Lives of German Migrants. IMISCOE Research Series. Springer, Cham. https://doi.org/10.1007/978-3-030-67498-4_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-67497-7
Online ISBN: 978-3-030-67498-4