1 Introduction

The survey-methodology literature suggests several methods for increasing survey participation both efficiently and effectively (Callegaro et al. 2014; de Leeuw et al. 2008; Dillman et al. 2014; Engel et al. 2014; Fan and Yan 2010; Gideon 2012; Groves et al. 2011). In addition to prenotification, assurances of privacy, personalized invitation letters, different types of reminders, appeals for cooperation, and sponsorship, providing prepaid monetary incentives (Becker et al. 2019: 222; Birnholtz et al. 2004: 357; Ernst Stähli and Joye 2016; Göritz 2014: 343; Laurie and Lynn 2009: 205; Lipps 2010: 81; Lipps et al. 2019: 6; Mercer et al. 2015: 124; Scherpenzeel and Toepoel 2012: 483; Singer and Couper 2008: 49; Singer and Ye 2013: 114) and applying a sequential mixed-mode design (Couper 2017: 132; de Leeuw 2018: 76; Dillman and Messer 2010: 553; Lynn 2013: 187; Sakshaug et al. 2019: 546) are used to optimize survey coverage, survey cost, and data quantity (in terms of rates of response, completion, and retention), as well as to minimize response bias due to selective participation or item nonresponse, and mode effects (de Leeuw 2010: 2).

Both providing prepayment to potential respondents and applying a sequential mixed-mode design have become standard practices in survey management, for single surveys as well as for multi-wave panel studies. While providing prepaid incentives should initiate a reciprocal exchange between the target persons and the survey management (Jäckle and Lynn 2008: 107; Jobber et al. 2004: 21; Porter 2004: 8; Ryu et al. 2005: 91; Scherpenzeel and Toepoel 2012: 472; Singer and Ye 2013: 115), the sequential offer of different survey modes, starting with a less expensive type (e.g. a self-administered web-based online survey) and then switching to a more costly type (e.g. an interviewer-administered telephone survey or computer-assisted personal interview), should accommodate different mode preferences among the target persons (de Leeuw 2005: 235; de Leeuw and Toepoel 2018: 52; Dillman 2017; Haan, Ongena, and Aarts 2014: 356; Lynn 2013: 201; Millar and Dillman 2011: 252; Olson et al. 2012: 612; Sakshaug et al. 2019: 548; Smyth et al. 2014: 135; Tourangeau 2017: 120). To boost both the magnitude and the timing of survey participation after survey launch, pushing potential respondents to the web mode (by sending them an invitation to an online survey with a prepayment enclosed, then implementing a telephone survey as a follow-up mode for nonrespondents to the web phase) seems to be a successful strategy for obtaining a maximum rate of responses via the internet before other survey modes are used (Becker 2022b: 260). The rationale for using the sequential mixed-mode plus “push-to-web” strategy is to reduce the burden of choosing between survey modes, in order to maximize survey participation (Dillman 2017). In particular, this strategy might be useful for minimizing panel attrition in a long-running panel study (Becker 2022b: 280).

The significant positive effect of providing a prepaid monetary incentive, in contrast to an in-kind gift, has meanwhile been confirmed by numerous studies and meta-analyses (e.g. Göritz 2006; Jobber et al. 2004; Mercer et al. 2015; Singer and Ye 2013). However, the evidence on whether a sequential mixed-mode design (plus a “push-to-web” strategy) enhances the response rate remains unclear (Couper 2017: 132; Lynn 2013: 187).

Furthermore, little is known about whether there may be interactions between the type of upfront payments and the survey mode offered by the researcher and chosen by invitees. Therefore, this contribution aims to answer the following main research questions: Do different prepaid incentives—such as cash versus in-kind incentives—have different effects on invited respondents’ likelihood of participating in different survey modes, and the speed with which they do so? Does cash, as compared to other, in-kind gifts, strengthen the “push-to-web strategy” applied in the context of a sequential mixed-mode design? Or is there a sustained effect of the unconditionally prepaid material incentive on the participation of panelists in an alternative survey mode when the survey mode that was initially offered is not used by invitees?

These questions are investigated in the context of a multiple-wave panel study on youths’ educational and occupational trajectories in Switzerland, the DAB panel, which has been running since 2012 (Becker et al. 2020). The target population consists of juveniles born around 1997 and living in the German-speaking cantons of Switzerland. Since the fourth panel wave, conducted in autumn 2014, a sequential mixed-mode design (an online survey first, then a telephone survey) has been employed to gather responses from the panelists. While in Wave 4 half of the panelists received no incentive, due to an experimental split (Becker and Glauser 2018), from Wave 5 (conducted in summer 2016) to Wave 8 (conducted in May/June 2020), each of the eligible panelists received an incentive that was enclosed in the invitation letter. The incentives differed across the waves. In Waves 5 and 6, a voucher (worth 10 Swiss Francs) and an engraved ballpoint pen (worth 2 Swiss Francs) were enclosed in the invitation letter, but for Waves 7 and 8 cash (a 10 Swiss Francs banknote) was sent to panelists (Becker et al. 2019).

In the remainder of this contribution, the next section outlines the theoretical background, as well as the hypotheses to be tested; the third section comprises a brief description of the data, variables, and statistical procedures; the fourth section presents the empirical findings; and the fifth and final section presents a discussion and the conclusion.

2 Theoretical considerations

When applying a sequential mixed-mode design, it is often recommended to start with an inexpensive self-administered mode, such as a web survey or postal survey, and thereafter to offer a costly interviewer-administered mode, such as a telephone or face-to-face interview, to nonrespondents. This sequence seems to prompt a faster response and to enhance the response rate. It also improves sample composition, particularly in the context of a longitudinal study, and minimizes the cost and length of the fieldwork period (Bianchi et al. 2017; Couper 2011; de Leeuw 2005; Groves and Lyberg 2010; Kreuter 2013; Manfreda et al. 2008; Olson et al. 2021). Moreover, to enhance the response rate and to prompt a faster response after survey launch (i.e. lower latency, the delay before invitees start to complete the questionnaire), pushing invitees to the web mode seems to be a successful strategy for obtaining an optimum number of responses before using telephone interviews as a follow-up mode for nonrespondents (Dillman 2017).

In such a case, enclosing a prepaid monetary incentive in the invitation letter might increase the effectiveness of the push-to-web strategy. Compared to in-kind gifts, providing cash should result in significantly higher response rates for the initial survey mode. Additionally, the speed of return—i.e. the timing of participation, measured by the delay between receipt of the incentive and the start of survey participation (latency)—is expected to be much higher than when non-monetary incentives are provided (Becker et al. 2019). Direct measurement of target persons’ taste for reciprocity has shown empirically that strong and altruistic reciprocity toward a researcher’s request for survey participation, as well as toward the unconditional prepayment, is a significant mechanism explaining panelists’ survey participation, in terms of both the likelihood and the latency of response (Becker 2023: 231; Fehr and Gächter 2000: 160). In-kind gifts are often considered inadequate compensation for the expected burden of responding; they therefore prompt only a limited preference for strong reciprocity (Fehr et al. 2002), and invited target persons tend to suspend their response (Becker 2023). However, target persons who are still ready to take part even after receiving only an in-kind gift, having procrastinated over completing the questionnaire because of the low value of that gift, are more likely to “prefer” the alternative interviewer-administered survey mode subsequently offered to them, interpreting it as an additional kind request to take part in the survey. Provided that the target persons interpret these efforts by the survey management, including the offer of an alternative survey mode, as a kindness, they will accept this offer as a kind return. As a consequence, the intended effect of non-monetary incentives is enhanced.
However, if invitees received money in advance and still delayed completing the questionnaire until the other survey mode was offered, a different behavior is expected than in the case of the in-kind gift: they are then more likely to participate in the initially offered web mode, which is linked directly with the initial prepayment, than in the more recently offered CATI mode (Becker 2022b).

The rationale behind these arguments relates to invitees’ preference for strong reciprocity (Becker 2023: 224), as well as their mode preference (Olson et al. 2012) and their interest in the survey topic (Groves et al. 2004), together with the expected burden of responding (Becker et al. 2019: 222; Birnholtz et al. 2004: 357; Göritz 2014: 343; Jäckle and Lynn 2008: 107; Jobber et al. 2004: 21; Kropf and Blair 2005; Laurie and Lynn 2009: 205; Lipps 2010: 81; Lipps et al. 2019: 6; Ryu et al. 2005: 91; Scherpenzeel and Toepoel 2012: 483; Singer and Couper 2008: 49; Porter 2004: 8). From the theoretical view of economic exchange between target persons and researchers, the prepayment given by the researchers outweighs the target persons’ costs (e.g. time or effort) and increases their net benefit from survey participation (Singer and Ye 2013: 114). The economics of strong reciprocity (Fehr and Fischbacher 2005: 194) stresses the additional benefits for the invitee arising from the reciprocal exchange and the distribution of the payoff between the target person and the researcher, such as the utility of reciprocity, in addition to the invitees’ material benefit from the prepayment (Becker 2023). Due to the universal character of money, as already noted by Simmel (1900), cash seems to be the most efficient medium by which to generate this utility of reciprocity, i.e. the target person’s interest in an equitable share of the payoff (Becker et al. 2019: 224; Becker 2023: 231). This utility derives from supporting science through survey participation, which is subjectively important to the target person; the reward received in advance from the researcher is an expression of appreciation for this contribution. In the context of this exchange, cash is therefore evaluated as the fairest compensation for the target persons’ response to the researchers’ request (Kropf and Blair 2015; Singer and Ye 2013).
Considering target persons’ preferences regarding the survey mode applied, there might be a multiplying effect of simultaneously providing a prepaid monetary incentive and using the push-to-web strategy on both the speed of return and the magnitude of the response rate. Additionally, in contrast to in-kind incentives, it is assumed that providing cash significantly facilitates the potential respondents’ cost–benefit calculation in favor of an immediate survey response.

The following hypotheses are derived from the current state of research and the theoretical considerations. (1) In general, it has often been empirically demonstrated that response rates are higher for target persons receiving a prepaid incentive than for a control group that does not receive any incentive. Moreover, even in a sequential mixed-mode design, the type of unconditionally prepaid incentive received correlates with the response rate. In terms of a fair exchange, providing cash has the biggest impact on the invitees’ likelihood of taking part in a survey in which several survey modes are offered, while in-kind gifts result in relatively lower response rates (Hypothesis 1).

(2) Regarding the effect of the incentive on invitees’ choice of survey mode, it is assumed that there are systematic differences between the survey modes in a sequential mixed-mode design with a “push-to-web” strategy. In particular, providing cash pushes panelists to the originally offered self-administered online survey, because this incentive is more likely than in-kind gifts to increase the panelists’ preference for strong reciprocity and therefore the speed of reciprocation. Since other incentives seem less adequate relative to the burden panelists assume in responding, they lead panelists to delay their response: if these panelists take part in the survey at all, it is mostly after a longer latency, and they are therefore more likely to accept the consecutive interviewer-administered CATI mode—which is offered to nonrespondents at that point—than the web mode that was initially offered to them more than a week earlier (Hypothesis 2).

(3) Logically, providing an in-kind incentive is ineffective without an additional ‘incentive’, such as the offer of a survey mode other than the one initially offered. It is therefore expected that the sequential mixed-mode design moderates the positive effect of non-monetary incentives on the response rate (Hypothesis 3).

3 Data, incentives, variables, and analytical strategy

3.1 Data

The analysis is based on paradata collected during the fieldwork period across several panel waves. The paradata are linked with information on the panelists in the DAB panel study. The panel study is concerned with the determinants of educational and occupational choice, as well as training and job opportunities, of adolescents and young people born around 1997 and living in the German-speaking cantons of Switzerland (Becker et al. 2020). The initial target population consisted of 8th graders in the 2011/12 school year who were enrolled in regular classes in public schools. Students’ statistical data from the Swiss Federal Statistical Office for the school year 2009/10 were used as the basis for the sampling. The panel data are based on a stratified 10% random gross sample of 296 school classes, drawn separately for each of the nine community types in the German-speaking cantons, out of a total universe of 3,045 school classes (Becker, Möser, and Glauser 2020: 128). After the headmasters and teachers had been contacted, 215 out of 296 school classes were ready to participate in the online survey in the first wave (Glauser 2015: 125–128).

Since 2012, 10 panel waves have been realized. In regard to the research problem, only Waves 4 to 8 are considered (Table 1). In these waves only, a sequential mixed-mode design was used to collect the data. The average response rate was about 80% for each of the waves. In the first wave, the gross sample consisted of 3,815 individuals, which diminished to 2,437 eligible panelists contacted in Wave 8. In the first three waves, the target persons were interviewed in their school class context using an online questionnaire only.

Table 1 Response rates (waves 4–8)

After they finished compulsory schooling, the target persons were followed up individually, and the eligible panelists were interviewed in a sequential mixed-mode design (a self-administered online questionnaire first, then CATI, then a self-administered paper-and-pencil interview (PAPI)), applying a “push-to-web” strategy (Millar and Dillman 2011). Across all of these waves, the second survey mode was offered to nonrespondents about 12 days after they received an invitation to participate. It is a feature of this panel study that the different survey modes were offered sequentially but the panelists could still complete the online questionnaire even after they were asked to take part via telephone interview. The PAPI mode is not considered here due to the low rate of take-up of this option (113 out of 13,101 individual units).

3.2 Incentives and survey modes

Since Wave 4, target persons have received an unconditional material incentive enclosed in the invitation letter. In Wave 5, they received a voucher (worth 10 Swiss Francs), in Wave 6 an engraved ballpoint pen (worth 2 Swiss Francs), and after that, in Waves 7 and 8, a prepaid monetary incentive (10 Swiss Francs in cash) (Becker et al. 2019). In Wave 4, an experiment was conducted to identify the effect of prepaid incentives (Becker and Glauser 2018). The sample was divided randomly, with half receiving a voucher (treatment group) and the other half receiving no incentive (control group). In this contribution, the control group from this experimental split is considered as a control group for the analysis, while the treatment group (n = 1,291) is excluded from the analysis for Wave 4 but included in the other waves.

In each of the panel waves, the “gift” was enclosed in a personalized invitation letter sent via regular postal mail. Using the first-class postage option offered by Swiss Post (the A-post), it was guaranteed that eligible target persons would receive this letter the day after it was sent out. This letter gave the panelists the URL for the survey website and the password for accessing the web-based questionnaire. In regard to the reputation of the researchers and the sponsor, the panelists were informed that the panel study had received a grant from a governmental agency and that it was being conducted by the same researchers as previous rounds at a cantonal university. One day later, the panelists received, via a personalized e-mail, the clickable URL and a password to log on to the website. If they did not start completing the questionnaire after four days, they received personalized reminders via text message, with a link to the online survey.

After about two weeks, the nonrespondents were informed that they were invited to participate via the CATI mode. After the prenotification for the CATI, the nonrespondents received a reminder via SMS once three call attempts had been made. The number of reminders was not limited, and their timing was not standardized, in contrast to the initial online mode. Three weeks after the survey launch, nonrespondents received a final e-mail reminding them again to take part via the CATI mode.

The present design has limitations. First, due to the sequential mixed-mode design, it has to be considered that invitees are initially pushed to the web mode, while the choice between online and CATI modes becomes possible after two weeks. This constraint is addressed by a statistical model of time-related competing risks, described in detail below (see also: Becker 2022b, 2024). Second, since survey participation is analyzed across several panel waves, previous mode experiences, survey wave effects, and different communication strategies for reminders may affect invitees’ likelihood of participation in addition to the prepayment. Previous mode experiences and survey wave effects are taken into account as far as possible in the following analysis. Furthermore, it has to be kept in mind that Wave 8 partially took place during the COVID-19 lockdowns (Becker et al. 2022d, e). The role of reminders and the communication strategies for them has already been investigated (Becker 2022b). Third, temporary panel attrition or panel mortality could bias the multivariate analysis. However, its selectivity has been shown to decrease across panel waves (Becker et al. 2019).

3.3 Dependent and independent variables

The dependent variable is a panelist’s survey response as a stochastic event across the fieldwork period. The response rate (RR1) is defined as the number of responses, in terms of eligible units starting and completing the online questionnaire or the CATI, divided by the number of eligible units (AAPOR 2016: 61). The analysis distinguishes between the responses to each of the two survey modes: the web-based online survey and the CATI.
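As a minimal illustration of this definition (the counts below are invented for the example, not the DAB panel figures), RR1 is simply the share of eligible units that responded:

```python
def response_rate_rr1(n_responses: int, n_eligible: int) -> float:
    """AAPOR RR1: completed responses divided by eligible units."""
    if n_eligible <= 0:
        raise ValueError("n_eligible must be positive")
    return n_responses / n_eligible

# Hypothetical counts: 1,950 responses from 2,437 eligible panelists.
rate = response_rate_rr1(1950, 2437)
print(f"{rate:.1%}")  # → 80.0%
```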

Regarding the panelists’ social heterogeneity, several time-constant sociodemographic characteristics are considered as independent variables in order to control for their impact on response in one of the two survey modes. Based on previous studies finding that women are more likely to respond to surveys than men (Becker 2022a), the panelists’ gender (reference category: male) is used. Since there is extensive evidence that the socioeconomic conditions in which target persons have grown up (including welfare, integration, and environment) affect their survey participation, the participants’ social origin is included in the multivariate analysis (Groves and Couper 1998: 30). Social origin, which correlates with access to the internet, computer skills, and openness to scientific surveys, is indicated by the class scheme suggested by Erikson and Goldthorpe (1992), with the most privileged upper service class as the reference category.

The education and language proficiency of a target person correlates positively with survey response rates. Their education is also positively correlated with appreciation of the utility of social-scientific research and information-gathering activities (Groves and Couper 1998: 128), as well as with their computer literacy and language skills. The panelists’ level of education is measured by the school type in which they were enrolled at the end of their compulsory schooling—such as lower secondary schools with basic or intermediate requirements and pre-gymnasiums, implying advanced requirements (reference category: miscellaneous school types, such as integrated schools without selection) (Glauser 2015).

Panelists’ language proficiency is indicated by the standardized grade point average (GPA) in the German language class (Wenz et al. 2021). A dummy variable controls for whether German is the target person’s first language, indicating their language ability (Kleiner et al. 2015). This indicator measures the impact of migration background—net of German mother tongue, educational level, and social origin—on survey response (Nauck 2019). Personal characteristics—such as persistence and decisiveness, as well as internal and external control beliefs—are controlled for (Becker 2022a: 13).

3.4 Statistical procedures

To analyze the interaction of the likelihood of response due to the survey design and the speed of return in regard to incentives, techniques and statistical procedures of event history analysis are utilized (Blossfeld, Rohwer, and Schneider 2019). Since survey response is a stochastic event that can occur at any point in time after the survey launch, across different survey modes, and depending on prepayment, these procedures are adequate for our research problem. In contrast to comparative-static estimations of survey response, these procedures can take into account the timing of survey response, which depends on the effect of the incentives and the survey modes provided, as well as the response speed, indicated by the latency of return to the incentives across the time elapsed since survey launch (Becker 2024). The parametric procedures of event history analysis model the likelihood of survey participation—that is, the hazard rate—as a stochastic and time-variant function of individual resources, the settings of the survey, and the different incentives. This rate \(r\left(t\right)\) is defined as the marginal value of the conditional probability of such an event occurring—namely the start of completing the questionnaire in a web-based online survey or taking part via the CATI mode—in the time interval \((t, t+\varDelta t)\), given that this event has not occurred before time \(t\) (Blossfeld et al. 2019: 29). Using this statistical procedure, it is possible to reveal the impact of \(x\) on the probable occurrence of survey participation as the event \(y\) to be investigated: \({X}_{t} \to \text{P}\text{r}\left({Y}_{t{\prime }}\right)\), whereby \(t < t{\prime }\), taking the timing of events into account. As \(t{\prime }\) approaches \(t\), the transition rate (or hazard rate) is defined by:

$$r\left(t\right)=\underset{{t{\prime }}\to t}{\text{lim}}\frac{\text{P}\text{r}(t \le {T}_{y}<{t}^{{\prime }}|{T}_{y}\ge t)}{{t}^{{\prime }}-t}$$
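For intuition, a discrete-time analogue of this transition rate can be computed directly from response days: the hazard on day \(t\) is the number of responses on that day divided by the number of invitees still at risk at the start of that day. The sketch below uses made-up response days, not DAB data:

```python
def discrete_hazard(event_days, horizon):
    """Estimate h(t) = P(response on day t | no response before day t).

    event_days: response day per invitee, or None if the invitee
    never responded within the fieldwork period (censored).
    """
    hazards = []
    for t in range(1, horizon + 1):
        # Still at risk: not yet responded before day t.
        at_risk = sum(1 for d in event_days if d is None or d >= t)
        events = sum(1 for d in event_days if d == t)
        hazards.append(events / at_risk if at_risk else 0.0)
    return hazards

# Hypothetical response days for 6 invitees; None = never responded.
days = [1, 1, 2, 3, None, None]
hazard = discrete_hazard(days, 3)  # day-1..3 hazards: 2/6, 1/4, 1/3
```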

Parametrical estimations of hazard rates are particularly appropriate for this purpose. For fine-grained parametric analysis of the time-dependent effect of survey mode and incentive on the speed and the selectivity of response, the piecewise constant exponential model is utilized. According to Blossfeld et al. (2019: 124), the “basic idea is to split the time axis into time periods and to assume that transition rates are constant in each of these intervals but can change between them”. Using this model and the episode splitting it involves makes it possible to analyze the participation pattern in the initial phase of fieldwork, in comparison to the other phases of the entirety of the fieldwork period. Given theoretically defined time periods, the transition rate for survey participation is defined as follows:

$${r}_{k}\left(t\right)=\text{exp}\left\{{\stackrel{-}{\alpha }}_{l}^{\left(k\right)}+{A}^{\left(k\right)}{\alpha }^{\left(k\right)}\right\}\quad \text{if } t\in {I}_{l}$$

where \(k\) indexes the destination (here, the survey mode), \(l\) the time interval \({I}_{l}\), \({\stackrel{-}{\alpha }}_{l}^{\left(k\right)}\) is a constant coefficient associated with the \(l\)th time period, \({A}^{\left(k\right)}\) is a vector of covariates, and \({\alpha }^{\left(k\right)}\) is an associated vector of coefficients that is assumed not to vary across time (Blossfeld et al. 2019: 125). In particular, it is possible to reveal whether the sample responding in the initial stages of the fieldwork period differs from the sample responding in later stages.
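Without covariates, the maximum-likelihood estimate of such a piecewise-constant rate has a simple closed form: within each period, the estimated rate is the number of events divided by the total exposure time accumulated in that period. A minimal sketch (the durations are hypothetical, and the single breakpoint loosely mirrors the roughly 12-day web-only phase; the authors' actual estimation includes covariates and is run in Stata):

```python
def piecewise_constant_rates(durations, events, breakpoints):
    """MLE of a piecewise-constant hazard without covariates:
    per period, rate = (events in period) / (exposure in period).

    durations: observed time per unit (response or censoring day)
    events: 1 if the unit responded, 0 if censored
    breakpoints: period boundaries, e.g. [12] gives [0,12), [12,inf)
    """
    edges = [0.0] + list(breakpoints) + [float("inf")]
    rates = []
    for lo, hi in zip(edges, edges[1:]):
        # Each unit contributes the time it spent at risk in [lo, hi).
        exposure = sum(min(d, hi) - lo for d in durations if d > lo)
        n_events = sum(1 for d, e in zip(durations, events)
                       if e == 1 and lo < d <= hi)
        rates.append(n_events / exposure if exposure > 0 else 0.0)
    return rates

# Hypothetical data: responses on days 5, 10, 30; one censored at 14.
rates = piecewise_constant_rates([5, 10, 14, 30], [1, 1, 0, 1], [12])
```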

Due to the sequential mixed-mode design, peculiarities in the timing of events have to be considered for the bivariate and multivariate analyses. In the sequential mixed-mode design of the DAB panel study, access to the web mode was possible for each of the invitees during the whole of the field period. Nonrespondents were asked to take part via the CATI mode about two weeks after survey launch. There was then a competing risk of invitees taking part in one of the two offered modes, which were mutually exclusive during an overlapping risk period. A competing risk is an event—such as participation in one of the two survey modes offered at the same time—that either hinders the occurrence of the primary event of interest (e.g. participation in the online survey instead of CATI) or modifies the chance that this event (e.g. participation in CATI) will occur (Noordzij et al. 2013: 2670). When eligible panelists prefer one mode or another, the unchosen mode cannot be realized at another point in time, due to censoring. Panelists who have not started completing the questionnaire have the “chance” to take part in the CATI or online mode at a point in time that is convenient for them. Never-responding invitees are defined as censored cases.

In the case of competing risks, traditional survival analysis is inadequate, because treating the competing event as censoring overestimates the incidence of the event of interest. Therefore, the cumulative incidence competing-risk method is used to describe the invitees’ participation patterns across the field period. For example, the cause-specific cumulative incidence function (CIF), which is the probability of survey participation before the end of field period \(t\), is estimated to reveal the risk of choosing one of the competing survey modes. The CIF describes the incidence of the occurrence of an event while taking competing risks into account (Austin and Fine 2017: 4293). The cumulative incidence is calculated as the number of participations divided by the total number of invitees at risk for a specific time interval. In sum, it is a measure of the invitees’ probability of participating in the survey within the fieldwork period. In our case, it is a probability that depends on the accumulated hazards over time of both the event of interest (i.e. participation in the web mode) and the competing risk (i.e. participation in the CATI mode).
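The CIF can be illustrated with a small, self-contained computation (hypothetical response days, not DAB data): at each day, the probability of still being at risk is carried forward, and each mode's incidence accumulates its share of that day's events:

```python
def cumulative_incidence(times, causes, horizon):
    """Cause-specific cumulative incidence (Aalen-Johansen style).

    times: event or censoring day per invitee
    causes: "web", "cati", or None (censored, never responded)
    Returns the CIF per survey mode evaluated at `horizon`.
    """
    surv = 1.0  # overall (all-cause) survival probability S(t-)
    cif = {"web": 0.0, "cati": 0.0}
    for t in range(1, horizon + 1):
        at_risk = sum(1 for d in times if d >= t)
        if at_risk == 0:
            break
        for cause in cif:
            events = sum(1 for d, c in zip(times, causes)
                         if d == t and c == cause)
            cif[cause] += surv * events / at_risk
        all_events = sum(1 for d, c in zip(times, causes)
                         if d == t and c is not None)
        surv *= 1 - all_events / at_risk
    return cif

# Hypothetical: web responses on days 1 and 2, one CATI response
# on day 2, one invitee censored on day 3.
cif = cumulative_incidence([1, 2, 2, 3], ["web", "web", "cati", None], 3)
```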

Furthermore, parametric regression procedures are used to estimate the impact of independent variables on the likelihood of events of interest, such as participation in the online or the CATI mode. For this purpose, the subdistribution hazards approach proposed by Fine and Gray (1999) is seen as the most appropriate method for analyzing competing risks (Noordzij et al. 2013; Schuster et al. 2020). It is implemented in the stcrreg module in the statistical package Stata (version 18). By taking competing risks into account, the coefficients can be used to compute the cumulative incidence of participation in one of the survey modes, and to depict the hazards in a CIF plot (Austin and Fine 2017).

4 Empirical results

4.1 Description of response patterns across survey modes and panel waves

In order to test Hypothesis 1, which states that providing cash has the biggest impact on the target persons’ likelihood of taking part in a survey regardless of the survey mode, the correlation between type of incentive and response rate is analyzed. The response rates (RR1) are rather constant across the waves considered. It has to be borne in mind that the sample consists of rather “panelized” invitees who have survived in the panel up to the wave in question, so that conditioning effects due to long-term experience with the panel cannot be completely ruled out. For the control group in Wave 4, the response rate is 83%. Thereafter, it declines to 80% in Wave 5, in which the panelists received a voucher as an incentive, and to 76% in Wave 6, in which they received an engraved ballpoint pen. In order to measure the average response speed across the panel waves, the median value of the latency of return, i.e. the time elapsed between survey launch and response, is calculated. This value indicates how long it took until 50% of the target persons had responded. The median response time increases from 15 days to 16 days across these waves. In the next two waves, in which the eligible panelists received a prepaid monetary incentive, the median value decreases to 11 days in Wave 7 and to seven days in Wave 8. As theoretically supposed, the response rates increase in these waves: to 79% and 81%. This finding is in line with Hypothesis 1.
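The median latency used here can be reproduced in a few lines: it is the fieldwork day by which half of all invited target persons (not only the eventual respondents) had responded. A sketch with invented numbers, under that reading of the measure:

```python
import math

def median_latency(response_days, n_invited):
    """Day by which 50% of all invited target persons had responded.

    response_days: fieldwork day of each realized response.
    Returns None if fewer than half of the invitees ever responded.
    """
    threshold = math.ceil(n_invited / 2)
    if len(response_days) < threshold:
        return None
    # Day of the response that crosses the 50%-of-invitees mark.
    return sorted(response_days)[threshold - 1]

# 8 of 10 invitees responded; the 5th-earliest response came on day 5.
print(median_latency([1, 2, 2, 3, 5, 8, 9, 13], 10))  # → 5
```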

If the response rates are distinguished for the survey modes, the competing risk of an invitee’s mode choice has to be considered. Therefore, the trajectories of survey participation across the fieldwork periods are described by the cumulative incidences depicted as hazards in CIF plots. Controlling for the different survey modes, the effects of the different incentives become more obvious. However, at the same time, it is also revealed that the total response rates are affected systematically by the different responses for the survey modes across the waves (Fig. 1).

Fig. 1
figure 1

Cumulative incidence in sequential survey modes across panel waves

In line with Hypothesis 1 again, the panelists who received an incentive in advance are more likely to be motivated to take part in the initial web survey, as compared to the control group in Wave 4 (see left-hand panel). The response rates for the online mode increase across the waves, from 42% for the control group in Wave 4 to 47% in Wave 5 (voucher), 51% in Wave 6 (ballpoint pen), and finally to 66% and 76% in Waves 7 and 8, in which the invitees received cash.

There is also an increase in the response speed, in terms of the time that elapses between survey launch and the panelists starting to complete the online questionnaire. The median values are significantly lower for the last two waves (Wave 7: 11 days; Wave 8: seven days) than for the previous waves, which have a median value of about 15 days. Hypothesis 2—expecting that nonrespondents are more likely to accept the CATI mode after procrastinating over the use of the web mode—is directly supported by the results for the CATI mode (see right-hand panel). Panelists in Wave 4 provide the highest response rate for this mode (64%). The response rate declines across the waves, from 52% in Wave 5 to 37% in Wave 6, 26% in Wave 7, and 15% in Wave 8. The median delay in survey completion increases across panel waves, from eight days in Wave 4 to 15 days in Wave 5 and about 37 days in Waves 6, 7, and 8.

Overall, the most striking finding is that, with the same sequential mixed-mode design being used across the five waves, the overall response rates are rather similar, in spite of different types of prepayment being provided. While the incentives are effective in regard to the delay in responding and the choice of survey mode, the sequential mixed-mode design guarantees rather similar response rates across panel waves.

Fig. 2 Period-specific hazard rates in sequential mixed modes across waves

The effects of different types of prepayment, embedded in the context of the sequential mixed-mode design including the “push-to-web” strategy, also become obvious when the hazard rates are considered. These are predicted for each point in time across the fieldwork period (Fig. 2). The hazard rates for participation in the initial online mode are higher for panelists who received an incentive than for those not provided with an incentive in Wave 4. The highest hazard rates are observed for the panelists who received cash (Waves 7 and 8).

Across consecutive waves, it is revealed that the effect of a prepayment is limited to the initial two weeks after the survey launch. While the propensity to take part in the alternative CATI mode is highest for the control group in Wave 4, the preference for this administered mode decreases across the waves. It is lowest in the panel waves in which the invitees received cash. This “money in the hand” resulted in a high number of responses taking place solely in the initial stage of the fieldwork period, and saved time in the fielding.

4.2 Multivariate analysis of the effects of incentives across survey modes

By applying multivariate competing-risk models to test Hypotheses 1 and 2, the effects of incentives provided in advance are revealed across survey modes for the complete fieldwork period (Table 2). On the one hand, the different types of incentives have different effects on the panelists’ likelihood of participating in the survey. For the online mode, it is found that each of the incentives enhances the response rate across the fieldwork period.

While the voucher provided in Wave 5 increases the likelihood of response by about \([(\exp(0.162)-1)\cdot 100\% =]\) 17.5%, compared to the missing incentive in the previous Wave 4, the ballpoint pen enhances the panelists’ propensity to respond by about 27% (Model 1). The highest increases in response rates are observed for the prepaid monetary incentives, at 88% (Wave 7) and 142% (Wave 8).

Table 2 Effect of prepaid incentives on participation in the DAB panel study (Waves 4–8)

This result, confirming Hypothesis 1, indicates that cash accelerates panelists’ reciprocation of the gift in the expected way. On the other hand, the voucher provided in Wave 5 reduces the likelihood of participating in the CATI mode by about \([(\exp(-0.515)-1)\cdot 100\% =]\) 40%, compared to the online survey and the control group in Wave 4 (Model 1).

In Wave 6, the ballpoint pen has a stronger, significant effect: the likelihood of participating in the CATI mode is reduced by about \([(\exp(-0.871)-1)\cdot 100\% =]\) 58%. In the most recent Waves 7 and 8, the prepaid monetary incentive lowers the propensity of the remaining nonresponding panelists to take part in the CATI mode: the likelihood declines by about 86%, as compared to Wave 4. This finding confirms Hypothesis 2 in a remarkable way.
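All percentage effects quoted in this subsection follow the same transformation of the model coefficients, (exp(β) − 1) · 100%. A quick arithmetic check (small discrepancies to the rounded figures in the text may stem from coefficients reported with more decimal places):

```python
import math

def pct_effect(beta):
    """Percentage change in the hazard implied by a coefficient:
    (exp(beta) - 1) * 100%."""
    return (math.exp(beta) - 1) * 100

print(round(pct_effect(0.162), 1))   # voucher, online mode   -> 17.6
print(round(pct_effect(-0.515), 1))  # voucher, CATI mode     -> -40.2
print(round(pct_effect(-0.871), 1))  # ballpoint pen, CATI    -> -58.1
```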

These findings are in line with Hypothesis 1 on the impact of a cash incentive on survey response, as well as with Hypothesis 2 on the impact of incentives on the choice of survey mode. Indeed, not surprisingly, cash is the most efficient and effective incentive. This holds for the sample of eligible panelists remaining in a multiple-wave panel study. Overall, the findings on the interaction between incentives and the sequential mixed-mode design are confirmed once again. This conclusion still holds when individual characteristics of the invitees—such as their gender, social origin, and education, as well as language proficiency (in school) and language ability (at home)—are controlled for (see Model 2).

4.3 Incentive effects across survey modes revisited

The subdistribution hazards approach applied in the analysis above has been criticized. According to Noordzij et al. (2013: 2673), “subjects who experience a competing event remain in the risk set (instead of being censored), although they are in fact no longer at risk of the event of interest”. Thus, Schuster et al. (2020: 44) stress that this makes it difficult to interpret the estimations in a straightforward way. This approach is therefore not appropriate for etiological research.

Fig. 3 Cause-specific hazards model (left-hand panel) and Kaplan-Meier method (right-hand panel)

An alternative procedure is the proportional cause-specific hazards model suggested by Kalbfleisch and Prentice (2002). Schuster et al. (2020: 44) emphasize that the “cause-specific hazard denotes the instantaneous rate of occurrence of the event of interest in a setting in which subjects can also experience the competing event”. Since this hazard is estimated by removing individuals from the risk set the moment they experience the competing event—meaning that competing events are treated as censored observations—it is possible to estimate the cause-specific hazards using an exponential model: \(r(t \mid x(t)) = \exp(\beta' x(t))\). In this model, all events other than the event of interest are treated as censoring. However, according to Lunn and McNeil (1995: 524), these methods have the drawback “that [they do] not treat the different types of failures jointly, complicating the comparison of parameter estimates corresponding to different failure types”.
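Under a constant-hazard (exponential) specification without covariates, the cause-specific hazard has a simple closed-form ML estimate: the number of events of the type of interest divided by the total time at risk, with competing events contributing only exposure time (i.e. treated as censored). A minimal sketch with hypothetical data:

```python
def cause_specific_hazard(times, events, cause):
    """ML estimate of a constant cause-specific hazard: d_k / exposure.

    times  : day of response or censoring for each invitee
    events : 0 = censored, 1 = web response, 2 = CATI response
    cause  : event type of interest
    """
    d_k = sum(1 for e in events if e == cause)   # events of the type of interest
    exposure = sum(times)                        # every subject contributes its
                                                 # full time at risk, including
                                                 # those with a competing event
    return d_k / exposure

# hypothetical: 0 = censored, 1 = web, 2 = CATI; total exposure = 48 days
times  = [2, 5, 7, 10, 10, 14]
events = [1, 1, 2,  1,  0,  2]
print(cause_specific_hazard(times, events, 1))  # 3 / 48 -> 0.0625
print(cause_specific_hazard(times, events, 2))  # 2 / 48
```

Covariates enter multiplicatively via \(\exp(\beta' x)\); the point of the sketch is only that competing events shift from the event count to the exposure denominator.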

The estimation of cause-specific hazards for participation in one of the survey modes using an exponential model yields findings that differ from the previous estimations. While the coefficients for survey participation in the CATI mode are similar regardless of the approach used, the findings differ for the initial online mode (see the estimated coefficients and their confidence intervals, plotted in the left-hand panel of Fig. 3). Using the cause-specific hazards approach, the differences between the incentives are underestimated compared to the subdistribution hazards approach (see Reference Model 1 in Table 1).

The estimates produced by the exponential model show that there seem to be no differences in the propensity to participate in the initial mode across Waves 4, 5, and 6. Overall, the divide in terms of participation in the online mode is between the provision of no incentive or non-monetary gifts and the provision of a monetary incentive. However, this result is in contrast to the Kaplan-Meier failure estimates for participation in the online mode (see right-hand panel in Fig. 3). In this respect, the subdistribution hazards approach (see Table 1) provides results that correspond more closely with the processes applied in the fieldwork period than do the cause-specific exponential models.

4.4 Analysis of the interaction of incentives and sequential mixed-mode design

To test Hypothesis 3, we analyze whether the sequential mixed-mode design moderates the positive effect of non-monetary incentives on the response rate. First, an overall response rate for the complete field time is estimated using an exponential model (with episode splitting), in which the type of material incentive, the survey modes (indicated as a time-variant state), and their time-dependent interactions are considered (see Model 1 in Table 3).

Table 3 Effect of prepaid incentives and survey mode on survey participation

Controlling for these interactions, it becomes obvious that cash enhances the response rate, while—in line with the assumption of a preference for reciprocity—the response rate is significantly lower when the invitees receive a “cheap” in-kind gift. If the interaction between the different types of material incentives and the survey mode offered to the invitees is included in the estimation, it is found that nonrespondents who received an in-kind gift are more likely to “prefer” the CATI mode offered to them at least 10 days after the survey launch. Invitees who receive prepayment in the form of cash tend to use the online mode even when the CATI mode is offered to them after they have postponed filling out the questionnaire.
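The episode splitting underlying this model can be sketched as follows: each invitee’s spell is cut at the day on which the CATI follow-up becomes available, so that the mode on offer enters the model as a time-variant covariate. The day-10 cutpoint and the record layout are our illustrative assumptions:

```python
def split_episode(start, stop, event, cutpoint=10):
    """Split one response spell at `cutpoint` into sub-episodes with a
    time-variant indicator for the mode(s) on offer.

    Returns (start, stop, event, cati_offered) tuples; `event` is 1
    only on the sub-episode in which the response actually occurs.
    """
    if stop <= cutpoint:
        return [(start, stop, event, 0)]        # web-only phase
    return [(start, cutpoint, 0, 0),            # before the CATI follow-up
            (cutpoint, stop, event, 1)]         # web + CATI both on offer

# hypothetical spells: (days until response or censoring, responded?)
print(split_episode(0, 7, 1))    # -> [(0, 7, 1, 0)]
print(split_episode(0, 23, 1))   # -> [(0, 10, 0, 0), (10, 23, 1, 1)]
```

The stacked sub-episodes can then be fed into an exponential (piecewise-constant) rate model in which `cati_offered` and its interactions with the incentive type are ordinary covariates.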

Since the different survey modes are offered simultaneously to those invitees who have not responded after 10–14 days, their propensity for survey participation is reanalyzed using a competing-risk model (online vs. CATI). Controlling for the different types of prepayment, it is found across the complete fieldwork period that the online mode is more likely to be chosen by nonrespondents even if the CATI mode is offered to them at the same time (Model 2). If the interaction terms of survey modes and types of incentives are considered, it becomes obvious that the web mode is preferred at each stage of the fieldwork period provided that the individuals received cash—even when the CATI mode is offered to the nonrespondents who tend to postpone their participation. For panelists who received an in-kind incentive, however, the expected effect of an unconditional incentive largely vanishes: their inclination to take part in the survey is rather similar to that of the control group, who received no material incentive.

It seems that, in the absence of the sequential mixed-mode design, an in-kind gift does not work in the way intended by the researchers. In a final step, in order to test this ad hoc hypothesis, the previous finding is reanalyzed for only that part of the fieldwork period in which both survey modes are available to the nonrespondents at the same time, as in a concurrent mixed-mode design (Models 3 and 4 in Table 3). More than half of the original sample of invitees is at risk of choosing the online or the CATI mode. If the mode choice “online vs. CATI” is considered, it is obvious that the nonrespondents are more likely to choose the initially offered online mode than the alternative CATI mode, provided that they received a monetary incentive. The effect of an in-kind gift on this mode choice is much weaker and—more importantly—insignificant (Model 3). Since the sequential-mode design is based on the idea of offering an alternative survey mode to those invitees who postpone their response in the initially offered online mode, the focus finally turns to the reverse mode choice—namely CATI vs. online (Model 4). As would be expected, nonrespondents who received a non-monetary incentive are more likely to take part in the CATI mode than nonrespondents who received a monetary incentive. Postponing nonrespondents who received cash, however, are significantly more likely to prefer the online mode to the CATI mode. In other words, for incentive types other than cash, the efforts of the survey management are largely in vain: non-monetary incentives have a sustainable effect on invitees’ survey participation only when a sequential mixed-mode design is used. However, this also means that the costs for survey management are higher when non-monetary incentives are used instead of cash, even if the value of the gifts is identical.

5 Summary and conclusions

The aim of the current contribution was to analyze the impact of prepaid incentives on panelists’ survey participation in a multiple-wave panel study applying both a sequential mixed-mode design and a push-to-web strategy. In a special experimental design, the contribution sought to reveal whether providing cash, in contrast to other, in-kind, incentives, has an additional effect on the response rate, the speed of reciprocation in favor of immediate completion of the questionnaire, and the choice of survey mode in favor of the initial online survey (see also Becker et al. 2019). In other words, the contribution sought to reveal whether using prepaid monetary incentives to reinforce a push-to-web strategy exhausts the response potential as far as possible. It was supposed that providing cash alone (and no other incentive) leads to significant cost and time savings during the fieldwork period. Finally, it was expected that non-monetary incentives (vouchers or ballpoint pens) would work only in the context of a sequential mixed-mode design, while the effect of cash on survey participation would occur independently of the survey design.

We find that there is indeed an interaction between the different incentives and the sequential survey modes with regard to the magnitude of the response rate, the delay in survey response, and the choice of survey mode. In sum, the prepaid monetary incentive optimizes the methodological advantages ascribed to the sequential mixed-mode design in longitudinal studies. The most striking finding is that, with the same sequential mixed-mode design used across five waves, the overall response rates were, surprisingly, rather similar for each of the surveys, despite the different incentives provided. Thus, it is very likely that the sequential mixed-mode design was responsible for the constantly high response rates across waves, despite changing incentives. While the different types of prepayment were effective in terms of the delay in completing the survey and the choice of survey mode, the sequential mixed-mode design obviously guaranteed the constantly high response rates across the panel waves. In other words, providing non-monetary incentives seems to be rather useless when there are additional arrangements to increase invitees’ likelihood of participating in a survey, such as a sequential mixed-mode design or reminders (Becker 2022b, c).

Of course, this empirical study has several shortcomings. First, it cannot be ruled out that learning processes across consecutive waves (habituation) provide a competing explanation for the development of the timing and number of responses. Due to panel attrition, this could be reinforced by the selectivity of the remaining samples. Second, the target population is special, consisting of young people born around 1997 and living in the German-speaking cantons of Switzerland. These “panelized” youths might be “digital natives”, so it could be assumed that the mode effects are overestimated. However, the “treatment” with different material incentives was given to rather similar “panelized” samples across the waves. Thus, it could be argued that the “real” effect of incentives on survey mode preference has been detected. Third, the effect of panelists’ conditioning by the single sequential mixed-mode design used across the waves cannot be ruled out. In the future, this study should be replicated for cross-sectional surveys, preferably in a classic experimental design. Fourth, only two survey modes were considered. Future research should take other modes into account, such as self-administered mail surveys (PAPI) and (computer-assisted) face-to-face interviews (CAPI). Fifth, and finally, since sequentially “mixed-mode surveys try to combine the best of all possible worlds by exploiting the advantages of different modes to compensate for their weaknesses” (de Leeuw 2010: 1), the current study approach should also be applied to a concurrent mixed-mode design.