1 Introduction

Surveying international migrants is a complex endeavour with various pitfalls. Surveying emigrants and remigrants from a single country of origin is particularly challenging. Emigrants are more difficult to sample than the resident population because they are more spatially dispersed, and both emigrants and remigrants are more difficult to survey than the non-migrated resident population because migrants are likely to be mobile again (Di Pietro 2012; Oosterbeek and Webbink 2011; Kodrzycki 2001). The difficulty is even greater in panel surveys because they have to deal with ongoing migration. The approach we followed in the German Emigration and Remigration Panel Study (GERPS) is a direct response to these methodological issues. GERPS realises a probability sample of German emigrants and remigrants by exploiting public register information in the country of origin. The study surveys emigrants and remigrants shortly after migration and follows them over a period of at least 2 years. It thus provides the opportunity to analyse the consequences of international migration from at least two perspectives: shortly after emigration to the destination countries and shortly after return to the country of origin. This research design makes GERPS a panel with globally highly dispersed sample members (see Ette et al. 2021 in this volume), suggesting an online survey as the most appropriate survey mode for controlling survey costs and ensuring participant contact. Due to the register-based sampling frame, however, we were only able to recruit participants via postal invitation. We therefore realised a “recruit-and-switch” design (Sakshaug 2013), motivating our sample members via postal letters to fill out a web survey (“push-to-web”).

This chapter investigates the applicability of our research design by analysing two crucial challenges in setting up a probability-based online panel of internationally mobile populations. The first challenge concerns potential effects of GERPS’ push-to-web approach during the survey recruitment phase. Some studies indicate that online survey techniques negatively affect unit response when combined with more traditional recruitment methods (Millar and Dillman 2012; Hardigan et al. 2016; Tai et al. 2018). Section 16.2 investigates the consequences of our push-to-web design for unit nonresponse. The analyses draw on a split ballot experiment that offered two remigrant sub-samples an optional paper and pencil interview (PAPI) at different stages in the field process. The second challenge for setting up a probability-based online panel like GERPS is panel attrition. Blom et al. (2015) argue that motivating respondents for future survey participation constitutes one of the greatest challenges when conducting longitudinal online surveys. In Sect. 16.3, we therefore assess the challenge of panel attrition by investigating determinants of respondents’ consent to be contacted in the forthcoming waves of GERPS (henceforth called “panel consent”). The empirical analyses in Sect. 16.3 are also restricted to the sample of return migrants.

2 Dealing with the First Recruitment Challenge: Online Survey Participation

One crucial design element of GERPS is its push-to-web approach to dealing with international migrants as a rare and hard-to-reach population (Lynn et al. 2018). For its practical implementation, GERPS combines a postal invitation and two reminder letters with online surveying of respondents. One difficulty of this approach is the mode switch it requires. Scholars assume that mode switching generally increases the burden of participating in a survey and consequently the risk of losing survey participants (Groves et al. 2000). In our context, switching from an offline recruitment mode to an online survey mode is a particularly critical event. It increases the complexity for survey participants and makes participation more burdensome, as they need to enter a survey link on an electronic device and log in with their code before being able to take part in the survey (Dillman 2017).

Although some argue that push-to-web may represent a critical event for potential participants of online surveys in general (Millar and Dillman 2012; Hardigan et al. 2016; Tai et al. 2018), it is reasonable to assume that push-to-web is especially problematic for responses from specific groups of invitees. Responses in push-to-web surveys might be particularly affected by spatial and social selectivity. One general problem of online panels is that individuals without internet access cannot participate. Internet access may be a problem for potential survey participants living in remote areas that lack network coverage (Couper 2000; OECD 2018). Unit nonresponse may, however, depend not only on remoteness but also on digital literacy, indicating social selectivity (Mohorko et al. 2013; Herzing and Blom 2018). Less educated and older survey participants in particular are likely to have less access to digital devices or may lack the digital competence needed to handle them (Schmidt-Hertha 2014).

Since GERPS constitutes a push-to-web survey, we are interested in whether pushing invitees to our online survey is related to general unit nonresponse and to nonresponse among specific respondent groups. We conducted a split ballot experiment within GERPS to address these questions. The idea of the experiment was to give two randomly sampled experimental groups of remigrants the opportunity, at different stages in the field process, to participate either offline (via PAPI) or online.

  • The first experimental group followed a concurrent (CC) mixed-mode survey design, offering sample members different survey possibilities simultaneously (De Leeuw and Berzelak 2016). 1000 randomly selected remigrants received push-to-web invitation letters with enclosed paper questionnaires. Invitees who had not yet responded received a first reminder without a paper questionnaire that pointed out the opportunity to participate either online or by PAPI. We enclosed another paper questionnaire in the second reminder letter.

  • The second experimental group followed a sequential (SQ) mixed-mode survey design, offering sample members different survey possibilities successively (De Leeuw and Berzelak 2016). Here, 1000 randomly selected remigrants received push-to-web invitation letters and, if necessary, a first reminder, both identical to the letters sent to the control group. Those who had still not responded received a second reminder letter with an enclosed paper questionnaire and stamped response envelope, offering them the opportunity to participate online or via PAPI.

By testing these two experimental groups against a control group (CG) with a push-to-web-only design, we were able to investigate potential differences in unit response across survey modes.
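For illustration, the following minimal Stata sketch shows how such a random three-group assignment could be drawn from the gross sample. It is a sketch only: the file and variable names are hypothetical and not taken from GERPS.

    * Illustrative random assignment of 1000 + 1000 remigrants to the
    * CC and SQ groups (file and variable names are hypothetical)
    use gross_sample_remigrants, clear
    set seed 20181108                 // fixed seed for reproducibility
    generate double u = runiform()    // one random draw per sample member
    sort u
    generate byte group = 0           // 0 = control group (push-to-web only)
    replace group = 1 in 1/1000       // 1 = concurrent mixed-mode (CC)
    replace group = 2 in 1001/2000    // 2 = sequential mixed-mode (SQ)
    label define grp 0 "CG" 1 "CC" 2 "SQ"
    label values group grp
    tabulate group                    // check: 1000 CC, 1000 SQ, rest CG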

Drawing on the general assumption of increased unit nonresponse in the case of mode switching, our first hypothesis states that

H2.1

GERPS invitees in the experimental groups (i.e. with the optional PAPI offer) are more likely to respond than invitees in the control group (i.e. push-to-web only).

Regarding the argument on spatial selection, network coverage in Germany still differs between urban and more rural areas (BMVI 2019; Jacob et al. 2019). Remoteness could therefore still pose a problem for online surveys conducted in Germany. Accordingly, the second hypothesis posits that

H2.2

The more remote invitees’ places of residence, the more likely GERPS invitees in the experimental groups are to respond compared with those in the control group.

The affinity for participating online also depends on potential participants’ education. Third, we therefore assume that

H2.3

Less educated GERPS invitees in the experimental groups are more likely to respond than those in the control group.

The assumption that older persons have limited digital competencies leads us to expect that

H2.4

Older GERPS invitees in the experimental groups are more likely to respond than those in the control group.

2.1 Data and Methods

This section’s analyses rely on population register data on respondents and non-respondents of the first wave of the German Emigration and Remigration Panel Study (GERPS). GERPS is based on a random sample drawn from local population registers and comprises information on 20- to 70-year-old German nationals who either emigrated from or remigrated to Germany between July 2017 and June 2018 (Ette et al. 2021). The sample data enable the calculation of response rates and nonresponse analyses. We further enriched the information on non-respondents by purchasing proxy information from Microm, a German micro- and geo-marketing agency, and matched these data with the addresses in our gross sample (see Ette et al. 2020 for more information).

We conducted the split ballot experiment only with German remigrants. The following analysis includes three groups: n = 5999 remigrants in the control group, n = 1000 remigrants in the CC group, and n = 1000 remigrants in the SQ group, leaving us with a final sample of N = 7999 remigrants. All members of each group received a 10-Euro conditional incentive to ensure identical survey conditions. Our dichotomous dependent variable response indicates whether potential survey participants responded (1) or not (0). Unit (non)response is defined according to AAPOR (2016) standards: Sample members are labelled as respondents if they answered 50 to 100 per cent of all applicable questions. Sample members who answered less than 50 per cent are treated as “break-offs” and, together with those who did not even start the questionnaire, defined as non-respondents. The key explanatory variable is the survey mode, indicating the push-to-web control group (0), the CC group (1), and the SQ group (2). Further explanatory variables for investigating selection effects include remigrants’ region of residence (urbanity), measured in three categories (0 = “metropolitan region,” 1 = “regiopolitan region,” 2 = “rural region”), and their social status (0 = “low status,” 1 = “medium status,” 2 = “increased status,” 3 = “high status”). Urbanity and social status were generated by the geo-marketing agency Microm. Microm derives the status information by comparing micro-cell-level with nation-level information on education and income, and draws on municipality information from the Federal Office for Building and Regional Planning (BBR) for a region’s degree of urbanity. Age enters the models as linear, quadratic, and cubic terms in order to capture the non-linear relationships with unit response known from survey research (Lynn 2003; Durrant and Steele 2009; Goyder 1987; Groves and Couper 1998). Since many argue that women have a higher response probability than men (e.g. Groves and Couper 1998), we additionally control for gender (0 = “male,” 1 = “female”).
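As a hedged illustration of this coding rule, the following Stata sketch derives the dependent variable from per-respondent counts of answered and applicable questions; the variable names n_answered and n_applicable are hypothetical.

    * Illustrative coding of unit response following the AAPOR (2016) rule
    * described above; n_answered and n_applicable are hypothetical names
    * for the counts of answered and applicable questions.
    generate completion = n_answered / n_applicable
    * Respondents answered at least 50 per cent of all applicable questions;
    * break-offs and those who never started (missing completion) receive 0.
    generate byte response = (completion >= 0.5) & !missing(completion)
    label define resp 0 "non-respondent" 1 "respondent"
    label values response resp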

We calculate descriptive statistics and estimate multiple logistic regressions with robust standard errors. The regressions aim at investigating potential spatial and social selection effects on unit response. We report average marginal effects (AMEs) instead of logits because they facilitate interpretation: the AME expresses the average influence of a model variable, across all observations given their characteristics, on the outcome probability P(y = 1 | x) (see Best and Wolf 2015).
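A minimal Stata sketch of this estimation step, again with illustrative variable names, might look as follows:

    * Baseline specification: unit response on survey mode and controls,
    * logistic regression with robust standard errors
    logit response i.mode i.urbanity i.status i.female ///
          c.age c.age#c.age c.age#c.age#c.age, vce(robust)
    * Average marginal effects across all observations
    margins, dydx(*)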

The analytical strategy is as follows: First, we study survey-mode-dependent unit response by comparing response rates among the three groups. Second, we investigate selectivity of response by conducting comparative unit nonresponse analyses among the groups.

2.2 Survey Mode and Unit Response

We compare final response rates among our three groups according to the AAPOR RR2 standard (AAPOR 2016). The response rates in the CC and SQ groups are slightly higher than in the control group: while we obtained an RR2 of 28.2 per cent in the control group, the response rates in the CC and SQ groups were 30.0 per cent and 29.7 per cent respectively. These results point in the direction of H2.1 but do not provide statistically significant support for it (CG vs. CC response rates: t(6997) = −1.15, p = 0.25; CG vs. SQ response rates: t(6997) = −0.96, p = 0.33).
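For reference, RR2 counts partial interviews as respondents. In the AAPOR (2016) standard definitions it is calculated as

    RR2 = (I + P) / ((I + P) + (R + NC + O) + (UH + UO)),

where I denotes complete and P partial interviews, R refusals and break-offs, NC non-contacts, O other eligible non-respondents, and UH and UO cases of unknown eligibility.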

Figure 16.1 depicts RR2 response rates over field time for each group. The survey started on 8 November 2018, when we sent out the invitations to participate in GERPS. The field process extended over 14 weeks (until 11 February 2019). Two vertical lines indicate the days of the first reminder (left; 21 November 2018) and the second reminder (right; 5 December 2018).

Fig. 16.1 Response rates (RR2, in per cent) over field time by group. (Source: GERPSw1, authors’ calculations). CG control group, CC concurrent mixed-mode group, SQ sequential mixed-mode group

The development of the response rates is very similar in all three groups, with the control group showing a slightly lower response rate towards the end of the field time. There are marked and similar increases in survey participation after the survey invitation and after the first reminder. The second reminder increases survey participation as well, but to a comparably smaller extent. Its effect appears slightly delayed, which is likely related to the beginning of the Christmas holidays shortly after week 6; the delayed increase may mark a period of finishing unfinished business before the holidays. This pattern is especially pronounced in the CC and SQ groups, for which we observe another increase in week 9. This period marks the end of the festive season and thus probably the end of mail delivery issues.

Next, we report differences in the AMEs of response probability across our three groups to investigate potential selection issues (see Table 16.1). The AMEs are based on multiple logistic regressions with robust standard errors. Note that we only consider cases without missing information in order to increase comparability between the models.

A first model (not shown) investigates general mixed survey-mode effects on unit response (see H2.1). Models 2 to 7 specifically test for mode-driven spatial and social selection effects on sample members’ response by interacting survey mode with urbanity, status group, and age (H2.2 to H2.4). All models additionally control for gender. The deliberately simple architecture of the regression models avoids controlling away mediators between survey mode and unit response, allowing us to investigate overall effects, that is, direct and indirect effects on unit response combined.
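The interaction logic of Models 2 to 7 can be sketched in Stata as follows, shown here for the urbanity interaction of Models 2 and 3; variable names are again illustrative.

    * Survey mode interacted with urbanity as in Models 2 and 3
    logit response i.mode##i.urbanity i.female, vce(robust)
    * AMEs of the experimental groups within each region type
    margins urbanity, dydx(mode)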

The assumed negative effect of mode switching on unit response (H2.1) is not supported in Model 1 (n = 7658, McFadden’s pseudo-R2 = 0.3%). Sample members from the experimental groups are only 1.2 percentage points more likely to respond compared to those in the control group (SE = 0.01, z = 1.32, p = 0.19). Table 16.1 depicts AME coefficients of logistic regression models that test for survey-mode-driven spatial selectivity (Models 2 and 3) and social selectivity (Models 4–7) in unit response.

Table 16.1 Average marginal effects (AMEs) on unit response (= 1) based on multiple logistic regressions

Models 2 and 3 show that in less urbanised regions, individuals in the experimental groups tend to respond more often than those in the control group (H2.2). The effects are, however, not statistically significant, and the case numbers in the category “rural region” are too low for reliable interpretation (n = 30 in the CC group; n = 50 in the SQ group).

Models 4 and 5 test for survey-mode-driven social selection with regard to education by using proxy information on individuals’ status group. Status-dependent differences in response probability between the survey mode groups are small, and no systematic relationship between survey mode and status group can be observed. This finding does not support our hypothesis on digital competence (H2.3).

Models 6 and 7 also deal with social selection. In contrast to Models 4 and 5, however, they investigate whether the representation of older individuals improved in the experimental groups compared to the control group (H2.4). The empirical findings do not support this hypothesis. In the SQ group and especially in the CC group, sample members of middle age (i.e. around 30–50 years) are roughly 2–5 percentage points more likely to respond than sample members in the control group. The age-dependent responses thus rather resemble the pattern regularly found in population surveys, namely an underrepresentation of younger and older individuals (e.g. Gigliotti and Dietsch 2014; Herzog and Rodgers 1988; Cull et al. 2005; Kaldenberg et al. 1994).

However, the results in Table 16.1 only represent outcomes at the group level. We cannot deduce from them whether our paper questionnaires address response issues related to remote living, lower status, and old age. For example, middle-aged sample members may not show higher response rates because they like participating by PAPI. Instead, the PAPI offer could positively influence their decision to participate while they nevertheless choose to respond online. In this case, the paper questionnaire may simply convey a more trustworthy and tangible image of GERPS.

We therefore used an explorative approach to additionally test whether remoteness, status group, and age affect the response mode chosen by individuals in the experimental groups (0 = “CAWI,” 1 = “PAPI”). The results (not shown here) are partly in line with the findings from Table 16.1. Sample members’ response mode choice is not affected by remoteness or age. Their status group, however, slightly affects response mode choice: Individuals of high status are 1.3 percentage points more likely to respond via PAPI than individuals of low status. Offering PAPI thus, if anything, increases selection issues among our already better educated German migrants (see Ette et al. 2020).
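A hedged sketch of this explorative model, restricted to the two experimental groups and using illustrative variable names:

    * Explorative model of response mode choice (1 = PAPI, 0 = CAWI),
    * restricted to respondents in the CC and SQ groups
    logit papi i.urbanity i.status c.age c.age#c.age i.female ///
          if inlist(mode, 1, 2), vce(robust)
    margins, dydx(status)   // status effect on choosing PAPI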

3 Dealing with the Second Recruitment Challenge: Participation in Online Panels

In Sect. 16.3, we investigate the challenge of recruiting our participants from wave 1 for future waves of GERPS in order to set up an online migrant panel. Fortunately, research suggests that individuals who have cooperated at least once are likely to do so again (Lynn 2018). However, there are always sample members who are not willing to cooperate, which reduces the statistical power of the data and might cause systematic selectivity biases. Panel researchers are therefore inevitably confronted with the question of how to deal with (non-)responding participants over the course of every consecutive wave of a panel survey (Roßmann and Gummer 2015). When focussing on unit response, researchers often distinguish three main components (Lepkowski and Couper 2002):

  1. successful or failed localisation of the sample units,

  2. successful or failed contact approaches, and

  3. successful or failed cooperation.

The third component addresses the question of whether researchers succeed in motivating respondents for future participation. Scholars describe this as the greatest challenge in setting up longitudinal online surveys, since online panels are less often confronted with failed localisation and failed contact (Blom et al. 2015). Section 16.3 therefore focuses on panel consent as the third main component of unit response. The focus lies on two levels: the individual respondent level (e.g. sociodemographic or personal characteristics) and the survey design level (e.g. survey modes and incentive schemes) (Lynn 2018).

At the individual level, sample members’ sociodemographic characteristics affect panel participation. For example, studies show that sample members’ willingness to participate tends to decrease with increasing age (Schnell et al. 2013). This leads us to the following hypothesis:

H3.1

Respondents’ willingness to further participate in GERPS decreases with increasing age.

Moreover, researchers have found that female respondents agree to participate in surveys more often than male respondents (Jacob et al. 2019). Thus, we assume that

H3.2

Female respondents are more willing to further participate in GERPS.

The previously mentioned studies also show that individuals with higher social status are more willing to participate in surveys (Schnell et al. 2013; Jacob et al. 2019). However, a more detailed analysis suggests that the effect of social status is primarily an educational effect (Haunberger 2011). We therefore expect that

H3.3

Higher educated respondents are more willing to further participate in GERPS.

Moreover, personality traits and individual dispositions such as feelings of social isolation may influence willingness to participate in a survey (Groves and Couper 2012; Saßenroth 2012). Adapting this finding to panel consent, we hypothesise that

H3.4

The stronger respondents’ feelings of isolation, the less they are willing to further participate in GERPS.

Respondents’ survey attitudes and motivation to participate strongly correlate with the salience of a survey (Groves et al. 2000; Blom et al. 2015; Sischka et al. 2020). Survey managers are therefore interested in designing surveys that increase respondents’ willingness to participate. Information on such survey factors in GERPS is mainly derived from paradata collected during the survey (for detailed information see Décieux 2021 in this book). Roßmann and Gummer (2015) have already shown that survey-related data improve our understanding of unit nonresponse in surveys and of attrition in panels. An important piece of information in this regard is respondents’ questionnaire completion time in wave 1, which may indicate how burdensome respondents experienced the first survey to be. By participating in the first wave, sample members can anticipate what participating in later waves will be like (Lynn 2018), a feature unique to panel surveys. Respondents’ perception of the content (e.g. the topic sensitivity of the questions), of the necessary time and burden of participation, or of the design is very likely to have a direct impact on the likelihood of continued participation (Gummer and Daikeler 2020; Leeper 2019), and thus on respondents’ panel consent after having answered the questionnaire in wave 1. We therefore propose that

H3.5

Respondents with a longer interview duration (and thus a higher burden) are less willing to further participate in GERPS.

As mentioned above, we realised GERPS as a push-to-web panel, allowing respondents to answer the survey via mobile and stationary devices. Gummer and Roßmann (2015) and Couper and Peterson (2017) suggest that it is more convenient to answer a survey on a computer than on a mobile device; their findings are based on meta-analyses showing shorter response times for computer users than for users of mobile devices. However, other studies have shown that these differences in response time decrease significantly in mobile-friendly survey environments (Schlosser and Mays 2018; Couper and Peterson 2017). GERPS therefore uses a responsive design that adapts the survey layout to the respondent’s device. We thus expect participation via mobile devices to be as convenient as participation via computer:

H3.6

Respondents who answered the first survey via a mobile device and those who answered via a stationary device do not significantly differ in their willingness to further participate in GERPS.

Incentives are used to motivate potential respondents to take part in a survey, thus constituting a strategy to directly counter unit nonresponse. Previous research (e.g. Göritz and Neumann 2016; Krieger 2018; Lipps et al. 2019; Spreen et al. 2020) as well as our own incentive experiments in wave 1 (Ette et al. 2020) showed that incentives can positively affect survey response rates. The findings mostly suggest that prepaid incentives outperform all other incentive schemes. Furthermore, post-paid incentives were usually found to perform better than lotteries, which are deemed the least successful incentive scheme. Scholars also assume that incentives positively affect respondents’ panel consent within longitudinal surveys (Göritz and Neumann 2016). However, research on how incentive strategies relate to panel consent and on how successful they are in the long run is sparse. We therefore investigate whether our incentive schemes in wave 1 affect respondents’ willingness to become GERPS panel members.

H3.7

Respondents in the prepaid incentive scheme are more willing to further participate in GERPS compared to respondents in the post-paid and lottery incentive schemes.

Concerning different survey modes, the physical presence of the paper questionnaire and individuals’ related opportunity to customise their participation may positively affect their attitude towards the survey (De Leeuw 2018; Sakshaug et al. 2019). Offering different survey modes may therefore increase panel consent rates as well. We assume that

H3.8

Respondents from the SQ and CC groups are more willing to further participate in GERPS than respondents in the control group.

3.1 Data and Methods

This section’s analyses are based on the first wave of GERPS (Ette et al. 2021). In accordance with Sect. 16.2, only the remigrant subsample is used in the following analysis. Where applicable, the analysis was conducted for all remigrants; analyses involving survey and response modes are restricted to the sample of the split ballot experiment (henceforth “experiment sample”). This results in two estimation samples of 6395 and 2130 remigrants respectively.

Our dichotomous dependent variable panel consent indicates whether survey participants are willing to participate in future waves of GERPS (= 1) or not (= 0). The explanatory variables are divided into two groups: individual and survey-related factors. We differentiate the following individual-level factors: age, gender (0 = “male,” 1 = “female”), educational level, measured in three categories (1 = “no degree,” 2 = “intermediate level degree,” 3 = “upper level degree”), and a measure of self-rated social isolation (1 = “(very) often,” 2 = “sometimes,” 3 = “rarely or never”). Regarding survey-related factors, we computed a response speed index for every respondent on the basis of their overall survey completion time, using the Stata module RSPEEDINDEX (Roßmann 2015). We distinguished response speed categories by assigning respondents to quartiles of the speed index (0 = lower quartile, 1 = middle quartiles, 2 = upper quartile), thereby mitigating the impact of outliers. We further identified the device our respondents used to participate in GERPS from the User Agent String, using the Stata module PARSEUAS (Roßmann and Gummer 2016), and clustered the device types into two groups (0 = “Computer,” 1 = “Mobile”). The variable incentive scheme was measured in three categories (1 = “prepaid incentive,” 2 = “post-paid incentive,” 3 = “lottery”). The last two explanatory variables, survey mode and response mode, are only included in analyses of the experiment sample. In accordance with Sect. 16.2, the survey mode distinguishes the push-to-web control group (= 0, “CG”), the concurrent mixed-mode group (= 1, “CC”), and the sequential mixed-mode group (= 2, “SQ”). The response mode indicates whether respondents answered the survey via CAWI (= 0) or via PAPI (= 1). Besides descriptive statistics, we employ multiple logistic regressions since our dependent variable is dichotomous, and we report average marginal effects (AMEs).
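A minimal sketch of how these survey-related predictors could be constructed, assuming the speed index (e.g. from RSPEEDINDEX) and a parsed device-type string already exist; variable names and string values are hypothetical:

    * Quartile-based speed categories: 0 = lower, 1 = middle, 2 = upper
    xtile speed_q4 = speed_index, nq(4)
    recode speed_q4 (1 = 0 "lower quartile")     ///
                    (2 3 = 1 "middle quartiles") ///
                    (4 = 2 "upper quartile"), generate(speed_cat)
    * Collapse parsed device types into two groups
    generate byte mobile = inlist(device_type, "smartphone", "tablet")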

3.2 Individual-Level and Survey-Related Correlates of Panel Consent

Overall, the willingness to participate in future waves of GERPS was high. In total, 92.8 per cent of all remigrants gave panel consent. Table 16.2 exhibits how respondents’ willingness for future participation is related to their individual characteristics.

Table 16.2 AMEs of individual characteristics on panel consent (=1) based on multiple logistic regressions

Remigrants’ willingness to participate in future waves of GERPS is hardly affected by the sociodemographic factors depicted in Table 16.2. This finding is probably owed to the generally high willingness to participate in future surveys. H3.1, H3.2 and H3.4 are not supported: We find significant effects neither of respondents’ age and gender nor of their feelings of isolation on panel consent. We only find that respondents with an upper level school degree are more willing to participate in the panel survey than respondents without a school degree. This finding is in line with theory and our Hypothesis 3.3.

Table 16.3 AMEs of survey-related factors on panel consent (=1) based on multiple logistic regressions

Table 16.3 shows how respondents’ willingness for future participation is related to survey-related factors. While the patterns of the control variables (in this case the individual-level factors) remain stable across all models, only respondents’ completion time (Model 1) shows a significant relationship with panel consent. Respondents with a short interview duration are less willing to participate in future GERPS surveys. This finding is not in line with Hypothesis 3.5, which suggested lower panel consent among respondents with a longer interview duration. Model 2 shows that the device respondents used to fill out the questionnaire is not significantly related to their willingness to become a participant in the GERPS panel; this null finding is consistent with Hypothesis 3.6. In Model 3, we see that respondents in different incentive schemes do not significantly differ in their panel consent rates. Incentive schemes thus do not seem to have a long-term effect on participation rates, which does not support H3.7.

Model 4 examines the relationship between the different survey modes and panel consent in the experiment sample we focused on in Sect. 16.2. Panel consent rates do not differ between the SQ and the control group. However, respondents in the CC group are significantly less willing to give their panel consent than respondents in the control group. Our proposed H3.8, indicating higher panel consent rates in the experimental groups, is thus not supported; the results rather point in the opposite direction: the CC mode seems to decrease panel consent rates. We elaborated on this effect by investigating the influence of the response mode on respondents’ panel consent (Model 5). PAPI respondents are significantly less likely to give their panel consent than online respondents, indicating that the lower panel consent rate may depend not on respondents’ survey mode but on their response mode. To test this assumption, we added response mode to Model 4 and interacted it with respondents’ survey mode (model not shown). Indeed, the influence of respondents’ survey mode (CC and SQ combined) on their panel consent disappears, while PAPI respondents remain significantly less likely than online respondents to give their panel consent.
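A hedged Stata sketch of this additional test, restricted to the experiment sample and using illustrative variable names and a reduced control set:

    * Experimental indicator combining the CC and SQ groups
    generate byte experimental = inlist(mode, 1, 2)
    * Panel consent on survey mode interacted with response mode
    logit consent i.experimental##i.papi c.age i.female ///
          i.education i.isolation, vce(robust)
    margins, dydx(experimental papi)   // AMEs of survey and response mode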

4 Lessons Learned by Implementing a Probability-Based Online Panel of Internationally Mobile Individuals

This chapter studied two crucial challenges GERPS faced on its way to setting up a probability-based online panel of internationally mobile individuals. The first challenge was to motivate sample members by postal invitation to participate in an online survey (“push-to-web”). We aimed at assessing the consequences of push-to-web for unit response, since scholars often disagree about response rates in push-to-web compared with traditional surveys. GERPS therefore included a split ballot experiment in wave 1, providing two remigrant sub-samples with an optional paper and pencil interview (PAPI) opportunity at different stages in the field process. The second recruitment challenge concerned panel attrition. The task of motivating respondents for future survey participation arguably constitutes one of the greatest challenges for longitudinal online surveys (Blom et al. 2015). We therefore studied how individual-level and survey-related factors relate to respondents’ consent to be contacted in the forthcoming waves of GERPS.

Our survey mode experiment proved push-to-web to be a viable strategy for achieving high response rates and high panel consent rates among internationally mobile individuals. First, our results contradict the argument that survey outcomes in push-to-web surveys carry an increased risk of bias. Internationally mobile individuals do make use of the PAPI offer, but the additional option to participate via PAPI only has a small positive effect on response rates late in the field process, regardless of when the PAPI offer was made. Ultimately, an optional PAPI offer does not result in substantial response changes among these hard-to-survey individuals: neither overall, nor for specific selection factors such as remoteness, education, and age. Contrary to our expectations, the optional PAPI offer increases response in favour of higher-status participants. However, education was measured by a proxy indicator based on aggregate-level geo data, as was remoteness, so our results on spatial and educational selectivity may be biased. Second, individuals who answered the survey via PAPI were significantly less willing to give panel consent than those who answered online. This might be due to a higher burden for respondents in PAPI mode: while online participation is straightforward, PAPI requires that the completed questionnaire be put in an envelope and taken to a letterbox. Why these participants still chose to answer in PAPI mode remains an open question.

Furthermore, we found no differences in panel consent regarding incentive scheme, device type, and most of the analysed individual-level factors. This likely results from the generally high panel consent among remigrants surveyed in the first wave of GERPS. Only respondents with an upper level school degree showed, as expected, an increased willingness to participate in future waves of GERPS. The opposite was the case for fast-responding individuals: They were less likely to give their panel consent than individuals with average response speed. Fast-responding individuals are likely affected by “satisficing,” a response behaviour in which individuals make only the minimum effort needed to generate a satisfactory response (Krosnick 1991; Roberts et al. 2019). For example, such individuals may participate only for the incentive and click through to the end of the online questionnaire. Satisficing might explain the lower panel consent rate among fast-responding individuals, since it is usually associated with lower motivation and interest. A second explanation could be that we surveyed some individuals whose personal situation did not match the group of internationally mobile individuals for which we designed our questions (e.g. globetrotters). Consequently, these individuals were unable to give meaningful answers to a large number of questions in our survey. In such a specific group, it is very likely that respondents do not feel addressed by the survey and therefore reject participation in future waves (Brower 2018; Lipps and Pollien 2019).

However, despite our very encouraging results on panel consent, the intention to participate in future GERPS waves is only a first indication regarding panel attrition. Psychological research in the tradition of Ajzen and Fishbein (1977) has often demonstrated that attitudes and behaviour are only conditionally related, and a gap between attitude and behaviour may have multiple causes. Participation rates may be overestimated due to generally known factors affecting unit nonresponse and panel attrition (e.g. Plutzer 2019; De Leeuw and Hox 2018; Weigold et al. 2018), as well as due to traditional factors that bias response behaviour, such as social desirability or satisficing (e.g. Deol et al. 2017; Groves et al. 2000; Roßmann 2017; Andersen and Mayerl 2017). The actual participation rate in wave 2 will give a less biased impression and provide further evidence on the attitude-behaviour gap.

In sum, we learned two major lessons by addressing push-to-web and panel consent in the context of surveying internationally mobile individuals. While an optional PAPI offer only slightly promoted response rates, it clearly lowered respondents’ willingness to participate in our panel. This suggests a trade-off, either to the detriment of response rates or of panel participation rates. Weighing both issues against each other, we conclude that there is hardly any justification for adding survey modes next to CAWI. This holds particularly if we take into account the practical and methodological issues of mixed survey mode designs: they increase survey costs, potentially entail mode effects on unit response, and impair the feasibility of filter questions, thereby threatening survey quality. Nevertheless, researchers must bear in mind that we assessed push-to-web in a migrant panel with individuals originating from an economically highly developed country and living in economically highly developed countries. Applying push-to-web in panels with migrants originating from or living in economically less developed countries may cause more issues regarding response rates and panel participation rates.