1 Introduction

In sophisticated analyses of individuals’ educational and occupational trajectories, panel studies have become a standard approach to collecting prospective longitudinal data (Becker et al. 2020; Schneider 2019; TREE 2016; Andreß et al. 2013; Blossfeld et al. 2011; Wagner et al. 2007). Such large-scale data sets are essential for studying developmental change across the life course in general, as well as transitions in different contexts (such as educational systems, firms, or labour markets) (Blossfeld et al. 2009). In both theoretical and empirical respects, repeatedly surveying the same target persons as they age is instrumental in advancing our understanding of how and why trajectories in the life course are structured differently. Because they take the subjective views and experiences of target persons into account, the utility of these panel surveys is obvious in analyses that examine different theoretical approaches and reveal the social mechanisms producing various patterns of trajectories in the life course.

However, to study and explain change across the life course in this way, it is essential to gather longitudinal data in an event-oriented design completely and without any gaps in the lifetime of the target persons (Blossfeld et al. 2019: 4). Temporary dropouts and increasing panel attrition across waves are problematic, as they result in substantial losses of time-, state-, and event-related information on trajectories (Blossfeld et al. 2019: 12; Lugtig 2014; Kreuter 2013a). Besides decreasing the sample size and introducing selectivity biases in the remaining sample, which may result in inefficient estimations and biased findings, data that are missing not at random often undermine the potential for the theory-driven reconstruction of complex trajectories (Little and Rubin 2019). Finally, unit nonresponse and panel attrition result in disproportionate effort and cost in panel management.

To prevent these problems related to unit nonresponse and panel attrition, several strategies have been suggested to optimise the retention rate across panel waves and the response rate in each survey (Dillman et al. 2014; Göritz 2014). Most of these—such as sequential mixed-mode design, pre-notification, advance cover letters, prepaid monetary incentives, personalisation of contacts, salience, anonymity, and university sponsorship, to name but a few response-enhancing techniques—are standard practice in (self-) administered surveys with different survey modes (Becker 2022; Koitsalu et al. 2018; Cernat and Lynn 2018; Couper 2017; Dillman 2017; Petrovčič et al. 2016; Sauermann and Roach 2013; Keusch 2012; Millar and Dillman 2011; Sahlqvist et al. 2011; Fan and Yan 2010; Shih and Fan 2009; Roose et al. 2007; Deutskens et al. 2004; Roth and BeVier 1998; Schaefer and Dillman 1998; Dillman et al. 1995; Yammarino et al. 1991; Fox et al. 1988; Yu and Cooper 1983; Heberlein and Baumgartner 1978). Particularly in the case of web-based online surveys, follow-up contacts—i.e. (personalised) reminders sent to nonrespondents—by electronic mail (e-mail) or short message service (SMS) are deployed in order to overcome the low motivation of target persons to participate or to combat survey fatigue (Malhotra et al. 2014; Porter 2004; Porter et al. 2004; Porter and Whitcomb 2003). These reminders are among the most efficient and effective strategies used in panel studies to increase the willingness of target persons to take part in the survey, substantially enhance the response rate of less-motivated panellists, and keep panel attrition as low as possible (Blumenberg et al. 2019; Cernat and Lynn 2018; Van Mol 2017; Manzo and Burke 2012; Göritz and Crutzen 2012; Svensson et al. 2012; Ashby et al. 2007; Kunz 2010; Muñoz-Leiva et al. 2010; Archer 2007; Virtanen et al. 2007).

Although the positive effect of these follow-up reminders in web-based surveys is well documented and generally accepted (Koitsalu et al. 2018; Cernat and Lynn 2018; Göritz 2014; Sauermann and Roach 2013; Göritz and Crutzen 2012; Ashby et al. 2007; Bethlehem et al. 2011; Cook et al. 2009), research on survey methods focuses on the total number of reminders across the entire fieldwork period, as well as on the frequency with which reminders should be sent to target persons who postpone their response (Sauermann and Roach 2013; Rao and Pennington 2013; Misra et al. 2013; Svensson et al. 2012; Manzo and Burke 2012; Fang and Wen 2012; Muñoz-Leiva et al. 2010). The timing of reminders and the delay between them at different stages of the survey have also been investigated in several methodological studies in this area (Cernat and Lynn 2018; Van Mol 2017). While the efficiency and effectiveness of sending multiple reminders are taken for granted, their sustainable effects on survey participation and response rates have been neglected. Of course, the effect of follow-up reminders on a target person’s immediate response is part of established knowledge in survey methodology. However, the short- and long-term effects of multiple reminders on a panellist’s participation are as yet under-investigated. Therefore, this contribution, focusing on online surveys and computer-assisted telephone interviews (CATIs), analyses whether (and when) panellists react to reminders in the way anticipated by the survey management. How long does it take for the anticipated effects of a reminder to fade across the fieldwork period? In other words, how sustainable are reminders sent to target persons? How many follow-up reminders work in a multiple-wave panel study? Are there any cumulative effects, or even cross-over effects, of reminders on the participation of noncompliant invitees across the survey modes in a sequential mixed-mode design?

These questions are answered by analysing longitudinal paradata collected during the fieldwork period of the two most recent waves of a panel study on the educational and occupational trajectories of juveniles born around 1997 and living in German-speaking cantons of Switzerland (Becker 2022; Becker et al. 2020). By employing techniques such as episode splitting and the piecewise observation of different time intervals, as well as the statistical procedures of event history analysis, it is possible to detect short- and long-term, as well as delayed and cumulative, effects of electronic reminders on the panellists’ survey response in different survey modes (such as online or telephone). By applying competing risk models, it is also analysed whether there are cross-over or carry-over effects of reminders sent during the initial online mode on survey participation during the period when the alternative CATI is offered to procrastinating panellists.

The remainder of the contribution is organised as follows. In Sect. 2, the theoretical framework is presented, alongside a brief summary of the state of research relating to the effects of electronic reminders on response rates in online surveys. Section 3 provides a description of the data set, variables, and statistical procedures. The findings are presented and discussed in Sect. 4, while conclusions are given in the final section, Sect. 5.

2 Theoretical background

2.1 How do multiple follow-ups work?

If one evaluates the state of research on the effects of reminders on survey response from the 1960s until today, the significance of different reminders for a target person’s participation in different survey modes seems straightforward (Dillman et al. 2014). In general, according to several meta-analyses (Cook et al. 2000; Roth and BeVier 1998; Yammarino et al. 1991; Chiu and Brennan 1990; Fox et al. 1988; Yu and Cooper 1983), it is accepted in survey methods research that one or more follow-up reminders increase the response rate (Koitsalu et al. 2018) and reduce panel attrition (Göritz 2014). Thus, reminders should prevent panellists from postponing their response due to a lack of motivation, ability, or time, or from forgetting to complete a questionnaire in time (Malhotra et al. 2014). In this respect, reminders are among the most efficient individual strategies for increasing response rates and accelerating survey participation by encouraging participation and response in web-based surveys (Sahlqvist et al. 2011; Roose et al. 2007). Aside from an increased propensity to respond, decreased survey costs and a higher speed of response following survey launch are some of the advantages of digital reminders. In particular, web-based surveys save time, and sending reminders can further sustain this advantage (Becker 2022; Becker and Glauser 2018).

Reminders are also necessary for invitees who throw away a cover letter or delete an invitation e-mail that includes the login and URL for an online questionnaire. In contrast to paper questionnaires sent by traditional post, which are permanently visible to the invitees, digital questionnaires do not work as a perpetual reminder. Following Rao and Pennington (2013: 670), it is possible that invitees who receive one or more e-mail or SMS reminders may not even see them in their mailbox or on their smartphone before responding. In this case, it is unclear whether reminders “remind” respondents to respond at all. According to Göritz and Crutzen (2012), this is true for less computer-literate target persons, who read their e-mails less frequently than individuals with pronounced computer skills. They thus assume that panellists with low computer literacy are more likely to need encouragement by reminders than panellists who are familiar with computers (Göritz 2014).

Furthermore, reminders are a useful counter to survey fatigue, even in an established panel study. Numerous studies demonstrate a strong correlation between the number of reminders and the response rate (Muñoz-Leiva et al. 2010). It is usually recommended to use at least one reminder. A maximum of three follow-ups, the typical number of e-mail reminders in web surveys, is considered to be enough to increase an invitee’s propensity to participate in an online–telephone sequential mixed-mode survey (Becker 2022; Sauermann and Roach 2013; Rao and Pennington 2013; Muñoz-Leiva et al. 2010). Virtanen et al. (2007: 392) recommend using SMS reminders, which increase the response rate rather more than traditional mail reminders. Ashby et al. (2007) provide evidence that electronic reminders reduce the time taken in survey response and lead to a higher response rate overall. Furthermore, as one example among a number of empirical evaluations of follow-up contacts, a study by Crawford et al. (2001) showed that a quick reminder after two days works better than a first reminder after five days. Koitsalu et al. (2018), however, found a positive effect on the response rate of a reminder sent two weeks after the postal invitation letter in a web-based survey. Svensson et al. (2012: 333) discovered that four or five reminders led to an increase in the response rate of 15 per cent, and 11 e-mail reminders led to an increase of 21 per cent. Based on this positive effect on overall participation, they conclude that “(…) the more reminders used, the higher the increases in response rate. The ‘short-lived’ effects of e-mail reminders documented in previous research still support the method of reminding frequently” (Svensson et al., 2012: 338). The high number of e-mails did not lead to negative criticism from the participants who received multiple reminders (see also: Schirmer 2009). On the one hand, therefore, the use of multiple reminders in web-based surveys is recommended. On the other hand, it seems evident that using more reminders (more than four or five) does not produce a relevant increase in response rates (Deutskens et al. 2004). The meta-analysis by Cook et al. (2000: 831) reports that the positive effects of frequent reminders on response in web surveys decrease with their number due to the recipients reaching a saturation point as regards reading e-mails (Muñoz-Leiva et al. 2010; Fan and Yan 2010). Additionally, it is stressed by Sahlqvist et al. (2011) that the personalisation of reminders is important, particularly if multiple reminders are used. Finally, it is evident for online surveys that sending a reminder does not reduce the number of completed questionnaires (Göritz 2014); rather, it seems useful “to finally ‘catch’ hard-to-reach respondents at an opportune moment” (Malhotra et al. 2014: 316).

Overall, we can ask: how many reminders of which types are most effective in an online survey, and how should they be timed? These are still open questions. On the one hand, different timings and types of reminders appear to have different effects on the target persons receiving them in web surveys. The significance and size of the effects on response rates vary across the studies, ranging from modest increases to a doubling of the response rate, but it is concluded that for web surveys the number of follow-ups is one of the most important factors for enhancing the response rate (Fan and Yan 2010). A literature review by Muñoz-Leiva et al. (2010: 1040) revealed “a considerably low number of studies concerning the effect caused by the frequency or periodicity of reminder messages (…) or that caused by the interval between initial contact and later reminders”. Our contribution, in contrast to previous studies, uses an alternative means of detecting effects caused by the frequency or periodicity of digital reminders on the response rate in a multiple-wave panel. Due to the lack of studies in this area, it also analyses whether there are cumulative effects of reminders on survey response (Misra et al. 2013).

2.2 How can the effects of multiple follow-ups be explained?

According to Van Mol (2017: 325), “all reminders significantly contributed to a response rate that is quite acceptable for current standards on web survey response rates (…)” among young individuals. However, there are inconsistent findings regarding the timing of reminders. For example, Crawford et al. (2001) report that a first reminder sent two days after the invitation has a more positive effect than an initial reminder sent five days after the invitation. Archer (2007), by contrast, found positive effects of sending three reminders over three weeks, with a positive effect in each group of target persons at each of the four contacts; the largest effects were found for reminders sent on days seven and 11 of the fieldwork period. These findings, namely the effect of reminders on survey participation and the role played by their number and timing, can be explained by several rational action theories (RATs) (Singer 2011).

First of all, it is consistently observed with regard to voluntary survey participation that panellists often respond habitually, perhaps for altruistic reasons, after they have been invited and given a prepaid cash incentive (Singer 2011). These invitees usually do not need any reminders. If they forget to respond, a first reminder might have a strong effect on participation because the target persons will remember that they have already received a gift (“money in the hand”). This might refresh the cognitive dissonance that induces a pressure to reciprocate, which increases the propensity to complete the online questionnaire (Becker et al. 2019; Fox et al. 1988).

Second, beyond this ad hoc argument, which is valid for a special sample of invitees, it is argued against the background of RAT that the main mechanism of reminders is based on their impact on an invitee’s evaluation of trust in (as well as the costs and benefits of) survey participation in each of the survey modes offered to them (Roose et al. 2007: 414). From the perspective of RAT—such as the social exchange approach, leverage-salience theory, or the theory of subjective expected utility—survey participation is a function of an invitee’s expected cost, benefit, and trust (Singer 2011). Among other features, follow-ups might reduce the perceived costs of response, as they can provide further assurance of anonymity. These reminders could work as a kind of social control, increasing the cost of not responding (Roose et al. 2007: 414). Generally speaking, reminders should minimise the possible impact of the cost by reducing the burden of participation and increasing the amount of benefit and trust. Changes in expected costs and benefits are likely with repeated follow-ups. According to Sauermann and Roach (2013), multiple digital reminders result in increased trust in the legitimate e-mails, a decreased fear of spam mail (cost), and an increased perception of the importance of the survey (benefit). In multiple-wave panel studies in particular, where there is not just a single social exchange between the panellists and the survey management, multiple reminders might be a trust-establishing strategy that results in an increased willingness of panellists to take part in a sequence of surveys (Roose et al. 2007). An immediate response decreases the cost of receiving additional reminders, while postponing a response can be rational if alternative actions provide greater benefits than survey participation. At a later point in time, when the second reminder arrives, the cost of alternative actions could be higher than the expected benefits of survey participation. This means that survey participation is per se a stochastic process, and this could also be true for the positive reaction to multiple reminders. This process might explain the varying timing of an invitee’s reaction to follow-ups in a sequential mixed-mode survey. In sum, these theoretical approaches clarify the immediate or delayed effect of digital reminders on the timing of survey participation and on the overall response rate.
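
For illustration only (this stylised notation is not taken from the source), the core RAT argument can be condensed into a subjective expected utility comparison:

\[ p_{\text{trust}}(t)\,B(t) - C(t) \;>\; EU_{\text{alt}}(t) \quad \Longrightarrow \quad \text{response at time } t, \]

where \(B(t)\) denotes the perceived benefit of participation, \(C(t)\) its perceived cost, \(p_{\text{trust}}(t)\) the trust placed in the survey request, and \(EU_{\text{alt}}(t)\) the expected utility of the best alternative action. A reminder is assumed to raise \(p_{\text{trust}}\) and \(B\) (salience, perceived importance) and to lower \(C\) (e.g. the effort of retrieving the survey link), so that the inequality may hold at a later point in time even if it did not hold at the initial invitation.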

2.3 Which hypotheses are to be tested?

The following hypotheses have been deduced from the empirical evidence contained in previous studies in survey research, as well as from the theoretical approaches that have been discussed above.

H1

Due to their positive impact on an invitee’s cost–benefit calculation and on trust in survey participation, follow-ups such as e-mails or SMS will increase response rates.

H2

Due to their time-dependent and socially selective impact on an invitee’s motivation to participate in a survey, earlier reminders are more likely to increase response rates than later reminders. Since the effect of former reminders fades across time, there are no sustainable effects of reminders on an invitee’s response in a sequential mixed-mode survey.

H3

Due to the fact that opportunities to participate in surveys are occasional, reminders might have short-term and long-term effects.

3 Data, design, variables, and statistical procedures

3.1 Data set and design

The empirical analysis is based on longitudinal data from the DAB panel study (2020) and paradata from its fieldwork period. The aim of the panel study is to investigate the educational and occupational trajectories of youth born around 1997 and living in German-speaking cantons of Switzerland (Becker et al. 2020). The sample is a random sample of this target population and consists of eighth-graders in the 2011/12 school year who were enrolled in regular classes in public schools (Glauser 2015). The panel study started in 2012. Since then, eight surveys have been conducted (Becker 2021). Each of the panellists holds a valid e-mail address and possesses a telephone connection and a smartphone. All of them are “digital natives”, having access to the internet as well as experience in online surveys. Their mailboxes can be accessed only by the personally addressed recipients, who are likely to check them many times a day. It has been observed that individuals with computer skills are more likely to participate in web-based online surveys (Göritz and Crutzen 2012).

In this contribution, only the last two study waves (Waves 7 and 8), which were conducted in May/June 2018 and 2020, have been considered, in order to control for the significant effect of prepaid incentives. In contrast to previous waves, in these most recent surveys the eligible panellists received a prepaid monetary incentive (10 Swiss Francs in cash); in former waves, panellists received other incentives, such as vouchers or a ballpoint pen (Becker et al. 2019). Furthermore, as in the previous waves, a sequential mixed-mode design was established. The eligible panellists were pushed towards the first survey mode, an online (web-based) mode (Callegaro et al. 2014), by a personalised advance invitation letter that included “money in the hand” and that was sent by regular postal mail. Using the first-class postage option offered by Swiss Post (A-post), it was guaranteed that eligible target persons would receive this letter the day after it was sent (on Saturday, with Friday as the day of invitation; see also Faught et al. 2004). Panellists were informed that the panel study was financed by the State Secretariat for Education, Research and Innovation (SERI), a governmental agency, and that it was always conducted by the same research team at a cantonal university. One day later, they received the clickable URL and password to log on to the website in a personalised e-mail. If they had not started completing the questionnaire after three days, they received personalised reminders containing text and a link to the initial online survey. About two weeks after survey launch, nonrespondents were invited to take part in another survey mode, the CATI. If they did not react to call attempts and reminders, a traditional paper-and-pencil survey (PAPI) was offered as a final mode (Becker 2022). This last mode is not considered in the analysis due to the very low number of offers to the nonrespondents (less than 2 per cent of the invitees) and the very low number of questionnaires completed in the PAPI mode (Becker 2022).

3.2 Reminders

In the context of the sequential mixed-mode design of this panel study, multiple reminders were sent out to increase the response rate substantially. In the online mode, the personalised reminders were sent a few days after survey launch: after about four to five, seven to eight, and 10 to 11 days, i.e., at roughly three-day intervals (Archer 2007; Van Selm and Jankowski 2006: 447). The first reminder was a text sent by e-mail (Wave 7) or by SMS (Wave 8); the second reminder was an SMS or an e-mail (Waves 7 and 8); and the third reminder was an SMS (Waves 7 and 8) (Ashby et al. 2007: 209; Virtanen et al. 2007). In sum, in the initial survey mode, invitees could receive a maximum of three reminders according to a fixed schedule if they failed to start the online survey.

After about 12 days, nonrespondents received a prenotification for the CATI. The exact time and status of each contact in this survey mode were documented. In this mode, the nonrespondents received a reminder via SMS after three call attempts. This means that, in this survey mode, the number of reminders was not limited and their timing was not standardised, in contrast to the initial online mode. Three weeks after survey launch, the remaining nonrespondents received a final e-mail reminding them to take part in the CATI.
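
For a compact overview, the contact protocol described above can be summarised schematically as follows. This is an illustrative sketch only: the day counts are approximate, the labels are invented for this summary, and the exact schedule varied slightly between Waves 7 and 8.

# Illustrative summary of the contact schedule described in the text
# (days since survey launch are approximate; details varied between waves).
CONTACT_SCHEDULE = [
    ("day 0",     "postal letter",   "personalised advance invitation with prepaid incentive (10 CHF)"),
    ("day 1",     "e-mail",          "clickable URL and password for the online questionnaire"),
    ("day 4-5",   "e-mail / SMS",    "first reminder (e-mail in Wave 7, SMS in Wave 8)"),
    ("day 7-8",   "SMS / e-mail",    "second reminder"),
    ("day 10-11", "SMS",             "third and last reminder in the online mode"),
    ("day ~12",   "prenotification", "remaining nonrespondents invited to the CATI mode"),
    # during the CATI mode: SMS reminder after three call attempts;
    # number and timing of reminders not standardised
    ("day ~21",   "e-mail",          "final reminder to take part in the CATI"),
]

for when, channel, content in CONTACT_SCHEDULE:
    print(f"{when:10s} {channel:17s} {content}")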

The total fieldwork period lasted 40 days for the seventh wave and 52 days for the eighth wave. For reasons of comparability, the multivariate analysis of the effect of reminders on the survey participation of invitees is limited to a time interval of 40 days as an “observation window” for each of the waves. This limitation controls for the potential bias in the model estimations arising from the different durations of the fieldwork periods and makes comparisons between panel waves feasible. Finally, it should be emphasised that only a few responses occurred after this time. In the context of a sequential mixed-mode survey with a push-to-web method, the greatest dynamic in survey participation is observed during the initial stages of the fieldwork period (Becker 2022; Becker et al. 2019).

3.3 Dependent variables

Two dependent variables have been considered. The main dependent variable is a panellist’s survey response in either the online mode or the CATI mode across the two panel waves. In particular, the short- and long-term effects of reminders on the duration (measured on a daily basis) between the receipt of reminders and respondents’ beginning to fill out the questionnaire are of special interest. The response rate (RR1) is defined as the ratio of responses, in terms of starting and completing the online questionnaire or telephone interview, to eligible units (AAPOR 2016: 61).
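
Restated as a formula (a simple formalisation of the operationalisation described above, not a reproduction of the full AAPOR definition):

\[ RR1 = \frac{\text{number of eligible panellists who started and completed the questionnaire or interview}}{\text{number of eligible panellists invited}}. \]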

In order to understand the necessity and selectivity of reminders, nonrespondents’ receipt of a reminder during the fieldwork period is considered as another dependent variable. A distinction is made between the number and types of reminders (e-mails or SMS). Furthermore, the timing of the reminders is considered, indicated by their transmission date and order. Finally, the survey mode in which the reminders were sent to the target persons is considered.

3.4 Independent variables

Different time-constant sociodemographic characteristics of the panellists have been considered in order to control for their impact on the response to the invitation to participate in the survey, as well as on the response to the reminders sent during the course of the fieldwork period. Based on previous studies that have found that women are more likely to respond to surveys than men (Keusch 2015: 186) and are less likely to need reminders, the panellists’ gender (reference category: male) is used. Since there is consistent evidence that the socioeconomic conditions in which target persons have grown up (including welfare, integration, and the environment) affect their survey participation, their social origin is included in the multivariate analysis (Groves and Couper 1998: 30). Social origin, which correlates with computer literacy and openness to scientific surveys, is indicated by the class scheme suggested by Erikson and Goldthorpe (1992). This class scheme is a well-established concept in research on social stratification and mobility for indicating the class position of employees and their families. The social classes are categorised by an employee’s market situation, employment relationship, and working conditions.

The education of a target person correlates positively with survey response rates. This correlation might also correspond with their computer literacy and language skills. The panellists’ education is measured by the school type in which they were enrolled at the end of their compulsory schooling, such as lower secondary schools with basic or intermediate requirements and pre-gymnasiums with advanced requirements (reference category: miscellaneous school types, such as integrative schools without selection). Target persons’ education is also positively correlated with appreciation of the utility of social–scientific research and information-gathering activities (Groves and Couper 1998: 128), as well as with computer literacy. Panellists’ language proficiency is indicated by their standardised grade point average in the German language class (Wenz et al. 2021). In survey research, it is assumed that a respondent’s personality traits, such as persistence, control beliefs, and decisiveness, are correlated with their propensity to take part in a survey (Saßenroth 2013; Marcus and Schütz 2005). It could also be proposed that they correlate with a panellist’s need for reminders (Becker 2022: 266, footnote 2).

3.5 Statistical procedures

Since survey participation is a stochastic process taking place across time, and because the effect of reminders also unfolds over time, the statistical procedures of event history analysis are appropriate tools for estimating the occurrence of events, such as beginning to complete the questionnaire or reacting to a received reminder (Blossfeld et al. 2019; Becker et al. 2019). These procedures are applied to reveal endogenous and exogenous factors influencing the likelihood and timing of survey participation (before and after the receipt of reminders) and of the receipt of reminders since survey launch, as well as the structure and timing of the short- and long-term effects of reminders on survey participation.

Different parametric models are utilised to analyse the time until the events of interest—such as receiving a reminder and survey participation—occur within the fieldwork period. First of all, a piecewise constant exponential model is used to describe the hazard rates for the effect of reminders on a panellist’s survey participation—i.e., to reveal the effect of reminders on a panellist’s action of completing the questionnaire at each point in time during the fieldwork period. In general, the hazard rate \(r(t)\) is defined as the marginal value of the conditional probability of such an event occurring (namely the instantaneous rate of survey participation or of receiving a reminder) in the time interval \((t, t+\Delta t)\), given that this event has not occurred before time \(t\) (Blossfeld et al. 2019: 29). According to Blossfeld et al. (2019: 124), the basic idea of piecewise constant rate analysis is to split the time axis into time periods (e.g., on a daily basis) and to assume that transition rates are constant within each of these intervals but can change between them. Using this procedure, it is possible to describe the effects of reminders for different samples of nonrespondents in different phases of the fieldwork period.
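
In standard textbook notation (not reproduced verbatim from the source), the piecewise constant exponential model splits the process time into intervals \(I_1, \ldots, I_L\) (here: days or phases of the fieldwork period) and specifies

\[ r(t \mid x) = \exp\left(\alpha_l + \beta'x\right) \quad \text{for } t \in I_l,\; l = 1, \ldots, L, \]

where the period-specific constants \(\alpha_l\) capture the baseline response intensity of each interval (and thus the jumps on the days on which reminders were sent), while the coefficients \(\beta\) measure the effects of the covariates.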

Second, for single events (such as survey participation) or repeated events (such as receiving reminders across the fieldwork period and their repeated or cumulative effects on panellists’ participation), the hazard rate is estimated on the basis of the exponential model \(r(t \mid x(t)) = \exp\left(\beta' x(t)\right)\), whereby \(x(t)\) is the time-dependent vector of exogenous variables whose unknown coefficients \(\beta\) have to be estimated. To account for time-varying covariates, such as the number of reminders or their sustainable effects, the technique of episode splitting is used: i.e. the initial waiting time is split into sub-episodes on a daily basis. For each of these short sub-episodes, a constant hazard rate is assumed. By applying this procedure, it is possible to model step functions displaying the empirically observed hazard function for the entire process until a reminder is received, as well as until the occurrence of a reminder’s effect on an invitee’s survey participation.
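
As a minimal sketch of the episode-splitting technique (with hypothetical variable names and toy values, not the study’s actual data preparation), each panellist’s waiting time until response can be expanded into daily sub-episodes that carry time-varying covariates such as the number of reminders received so far:

import pandas as pd

# Hypothetical example data: one row per panellist, with the day on which the
# questionnaire was started (or the censoring day at the end of the observation
# window) and the days on which reminders were sent to that panellist.
panellists = pd.DataFrame({
    "id":            [1, 2],
    "end_day":       [6, 14],           # day of response or of right-censoring
    "responded":     [1, 0],            # 1 = response observed, 0 = censored
    "reminder_days": [[4], [4, 7, 10]],
})

def split_into_daily_episodes(row):
    """Split one waiting time into daily sub-episodes (start, stop],
    assuming a constant hazard within each sub-episode."""
    episodes = []
    for day in range(int(row["end_day"])):
        episodes.append({
            "id": row["id"],
            "start": day,
            "stop": day + 1,
            # time-varying covariate: reminders received up to the start of this day
            "n_reminders_so_far": sum(d <= day for d in row["reminder_days"]),
            # event indicator: 1 only in the sub-episode in which the response occurs
            "event": int(row["responded"] == 1 and day + 1 == row["end_day"]),
        })
    return episodes

episode_data = pd.DataFrame(
    [ep for _, row in panellists.iterrows() for ep in split_into_daily_episodes(row)]
)
print(episode_data)

A long-format data set of this kind can then be passed to any estimator that accepts start–stop data, such as a piecewise constant exponential or Cox-type model.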

Third, in the case of the overlapping offer of both survey modes during the fieldwork period, about 12 days after survey launch, the competing risks in terms of receiving reminders and of participation in the online survey versus the CATI have to be considered. Therefore, the exponential model (including the episode splitting) is equivalent to the proportional cause-specific hazards model suggested by Kalbfleisch and Prentice (2002). The exponential model is applied for estimating the long-term effects of reminders across the entire fieldwork period. However, another approach, the subdistribution hazards approach of Fine and Gray (1999), is often seen as the most appropriate method for analysing competing risks, and it is also applied for the empirical test of the hypotheses. By taking competing risks into account, the coefficients estimated by the stcrreg module implemented in the statistical package Stata can be used to calculate the cumulative incidence of participation in one of the survey modes (Austin and Fine 2017).
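
For reference, the two competing-risks quantities used here can be written in standard notation (textbook formulations rather than quotations from the source). The cause-specific hazard of destination \(k\) (e.g. response in the online mode versus response in the CATI mode) conditions on no event having occurred yet, whereas the subdistribution hazard of Fine and Gray keeps individuals who have already experienced a competing event in the risk set:

\[ r_k(t) = \lim_{\Delta t \to 0} \frac{\Pr(t \le T < t+\Delta t,\, K = k \mid T \ge t)}{\Delta t}, \qquad \bar{r}_k(t) = \lim_{\Delta t \to 0} \frac{\Pr\bigl(t \le T < t+\Delta t,\, K = k \mid T \ge t \,\cup\, (T < t \,\cap\, K \ne k)\bigr)}{\Delta t}. \]

The cumulative incidence of participating via mode \(k\), \(\Pr(T \le t,\, K = k) = 1 - \exp\bigl(-\int_0^t \bar{r}_k(u)\,du\bigr)\), is the quantity that can be calculated from the stcrreg coefficients.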

4 Empirical results

4.1 Description of patterns of reminders and responses in a sequential mixed-mode design

As a first step, the piecewise constant exponential model is applied separately for both survey modes and waves to describe time-related patterns of response since survey launch, paying special attention to the timing and effect of reminders (Fig. 1). For both waves, there was a typical pattern of a declining response rate across the fieldwork period. The three vertical dotted lines mark the points in time at which the panellists remaining in the risk set for survey response (because they had not yet responded) received a personalised reminder in the initial online mode. For these panellists, who had not yet started completing the questionnaire, an immediate response to the reminders on the same day was observed.

Fig. 1 Reminders and their effects on response rates in the online and CATI modes in Waves 7 and 8

In Wave 7, as estimated by the life-table method, 68 per cent of invitees had not responded by the time they received a reminder five days after their invitation to participate in the survey. According to the observed hazard rate, this reminder increased the response rate by about 10 percentage points. When the second reminder was sent after eight days, 57 per cent of invitees remained at risk of response; the response rate increased immediately by about 4 percentage points. After 11 days, the third and last reminder in the online mode was sent, with 51 per cent of eligible panellists having postponed their response; this reminder resulted in an increase in the response rate of 7 percentage points. For Wave 8, a greater dynamic and a different timing of survey responses were observed in the initial stage of the survey within the online mode. When the reminders were sent after four, seven, and 10 days, about 65 per cent, 50 per cent, and 43 per cent of the invitees, respectively, had not yet responded. These reminders resulted in increases in the response rate of about 18, 10, and 7 percentage points, respectively. The overall same-day increase in the response rate following the receipt of a reminder was about 21 percentage points in Wave 7 and 35 percentage points in Wave 8 among the invitees who had not responded before receiving a reminder. In each wave, there was a decline across consecutive reminders in the number of invitees who received a reminder, as well as in the potential effect of reminders on a panellist’s response.

At first glance, an immediate effect of reminders on a panellist’s response in the online mode seems obvious. The next steps of the analysis must test whether there were systematic effects of reminders on an individual’s response. In other words, it is also interesting to see whether the reminders sent at later stages of the fieldwork period (i.e. during the second, CATI mode) were significant for the survey response of the remaining nonrespondents. Before the effects of the reminders are analysed, it is necessary to understand which panellists received reminders that should have motivated them to respond to the invitation to participate in the survey.

4.2 Short- and long-term effects of reminders on survey response

In line with the methodological research on reminders, it is found that reminders had positive and significant effects on panellists’ survey response in the sequential mixed-mode design (Fig. 2). First, by controlling for the target persons’ characteristics, it is found that for both survey modes reminders had an immediate effect on the response rate (see also Table 3 in the “Appendix”). This finding is in line with Hypothesis 1.

Fig. 2 Short-term effects of reminders on response rate (subdistribution hazards approach)

Additionally, in line with Hypothesis 2, it has to be stressed that the effects of the early reminders on the response rate were stronger than the effects of the late reminders. This is true for the online mode at the earlier stages of the fieldwork as well as for the CATI mode at later stages of the fieldwork. Second, it is also obvious that the reminders’ effects were limited to the points in time at which the reminders arrived, stressing their short-term nature. Third, the effect of the reminders faded with their number, i.e., the more reminders an invitee had previously received, the lower the effect of the next reminder on their survey response. In particular, this is obvious for the last reminders in the online and CATI modes. These results only partially confirm Hypothesis 3. Finally, it is worth noting that there were no systematic effects of the interaction between reminders and the target persons’ characteristics on the survey response. Thus, the main effects of reminders seem to have been universal effects, which were invariant to a panellist’s characteristics. Therefore, it was not necessary to tailor personalised reminders to different groups within the eligible sample.

The previous analysis focused on the short-term effect of reminders. In the next step, the potential long-term effects of different reminders on the panellists’ propensity to respond are analysed. In order to capture a long-term effect, the reminder is treated not as an event, in terms of a treatment, but rather as a state of having been treated by a reminder. Therefore, it is assumed for the next estimations that a reminder had a rather sustainable long-term effect on the likelihood of a reminded panellist’s survey participation up to the arrival of the next reminder. In order to avoid multicollinearity of simultaneous treatments, it is assumed that the long-term effect of a reminder diminished when nonresponding invitees remaining in the risk set for survey response received an additional reminder. Since reminders in the online mode were sent at fixed points in time, the “sustainable” long-term effect is only considered for them. For the CATI mode, long-term effects were less likely for logical reasons: in the sequential mixed-mode design with a “push-to-web” strategy, the nonrespondents were invited to take part in the CATI mode precisely because they had not replied to the previous reminders in the expected way. Explorative analysis provides evidence that there was no carry-over effect of reminders sent out in the initial stage of fieldwork, in which only the online mode was accessible to the invitees. Therefore, the results for the CATI mode are reported only for the sake of completeness.

The results of the estimations are straightforward (Table 1). In line with Hypothesis 3, the short-term effects of reminders are revealed again for each of the survey modes. Long-term effects of reminders on the responses of hesitant panellists are again found when the analysis focuses on the online mode. These results support Hypothesis 3 completely. Furthermore, the estimation for each of the survey modes again reveals that the short-term effects of each reminder in particular contributed significantly to an increased response rate (Model 1 in Table 2). The negative long-term effects of the first three reminders indicate again that the treated individuals who were not motivated by these reminders were less willing either to start completing the online questionnaire or to take part via CATI. The negative effect of both final reminders confirms Hypothesis 2: panellists who postponed their survey response not only received many reminders but were also at the highest risk of unit nonresponse. This result is replicated when the estimations are conducted separately for the survey modes.

Table 1 Short- and long-term effects of reminders on survey participation
Table 2 Survey participation during the online mode between the reminders

In the case of the initial survey mode, which was the online mode, there is seemingly a gross long-term carry-over effect of the three reminders (Model 2.1). However, when controlling for an individual’s characteristics, this effect vanishes and there is no significant effect of the last reminder sent in the fieldwork period of the online mode (Model 2.2). The insignificant effect of the third reminder indicates that individuals who received this reminder were not more likely to respond at later stages when the CATI mode was offered to them up to the end of the fieldwork period.

This is also true when the reminders sent during the CATI mode are taken into account (Model 2.3). Panellists who remained in the risk set until they received the final reminder in the second survey mode were indeed not willing to take part in the survey. For reminders in the sequential mixed-mode design, it is obvious that their positive effects were limited to one of the modes. There was no cross-over effect whereby reminders sent in one mode initiated participation in the other: the reminders sent during the online mode did not initiate participation in the CATI mode, nor did later reminders increase the target persons’ inclination to complete the online questionnaire. Overall, Hypothesis 2 is confirmed empirically, since there were no cumulative or overlapping effects of multiple reminders sent out across the fieldwork period.

Finally, a robustness check of Model 2.2 in Table 1 is undertaken in order to demonstrate that the long-term effect of the first reminders in the online mode is not a statistical artefact based on the coding of the “long-term effect”. The results are depicted in Fig. 3. They demonstrate that there were indeed sustainable effects of the first reminders on the survey response across the entire fieldwork period. In the initial stage of the fieldwork period (which lasted about two weeks), it was found for Wave 7 that only the first reminder contributed to an increased response, in line with the researchers’ intentions. For Wave 8, however, the first two reminders had a positive short-term effect on survey response, while the third reminder had no significant effect on an invitee’s participation. In sum, Hypothesis 3 on the short- and long-term effects of personalised digital reminders is confirmed again.

Fig. 3 Long-term effects of reminders on response rates (coefficients estimated by competing risk models)

However, if the entire fieldwork period of the surveys is considered and the proportional cause-specific hazards model is applied, it is found for Wave 7 that the first and third reminders sent within two weeks of survey launch resulted in a substantially increased response rate. The effect of the second reminder was statistically insignificant. In Wave 8, it is obvious that each of the three reminders contributed positively to a response rate that increased across the fieldwork period. Here, it is again observed that the effects of the reminders faded with their number. Overall, the previous findings estimated by the competing risk models are valid. Indeed, there are long-term effects, but the effects of the initial reminders on survey responses at later stages did not cross over between the survey modes (Hypothesis 2).

This finding is not a statistical artefact arising from an extended “observation window” that catches the participation of those who postponed their response while neglecting additional reminders sent to the procrastinating invitees at later points in time. On the one hand, it has already been reported above that there are short- and long-term effects of reminders sent during the initial stage of the fieldwork (when only the online mode was offered to the panellists) on survey response at later stages in the fieldwork period (when the CATI mode was also offered to them) (Model 2.3 in Table 1). On the other hand, this finding is reliable even in separate estimations for each panel wave (Fig. 4). There are positive effects of the first three reminders on an invitee’s survey response in the online mode across the entire fieldwork period. In both waves, the reminders sent during the CATI mode had negative effects on the survey response during the online mode. In sum, e-mail or SMS reminders improved response and the timing of the response; this holds even for a long-running panel study using a sequential mixed-mode design.

Fig. 4 Reliability of long-term effects of reminders in the online mode only (exponential model)

The final analysis relates to those who might have been affected by the long-term effects of each of the reminders during the initial online mode (Table 2). By splitting the original episodes into sub-episodes, using the points in time at which the nonresponding panellists received a reminder as the split thresholds, each point in time of the fieldwork period during the online mode (limited to two weeks) is considered. For these points in time, the invitees who reacted immediately to the reminders they received are excluded from the risk set. First of all, it is obvious that the response decreases across the time intervals. Second, for each of the time intervals except the third, the response rate is higher in Wave 8 than in Wave 7. Third, the social selectivity of response regarding the long-term effect of reminders vanishes for social origin, while the disparities in terms of educational level and language proficiency remain rather constant. The social selectivity of response initiated by the reminders diminishes across consecutive reminders. Furthermore, there is no gender effect regarding the long-term effect of reminders. In sum, there is at least an indirect indication that the reminders did catch those individuals with low levels of willingness to participate in the survey. However, regarding the long-term effect of reminders, it is again obvious that the better-educated nonrespondents in particular were more likely to be stimulated by the reminders.

Since survey participation is a stochastic process, it might be proposed that, among the nonrespondents who postpone their response after receiving an electronic reminder, panellists with higher reading and computer literacy will be more strongly affected by the reminders. For the less-qualified target persons, by contrast, the short-term and long-term effects of reminders are significantly weaker than for socially privileged panellists. This finding contributes to the “education bias” identified in surveys.

5 Discussion and conclusion

In conjunction with research on the need for and effects of follow-up reminders, such as e-mails or SMS, on survey response, the aim of this contribution was to reveal the short-term and long-term effects of such reminders in a multiple-wave panel study with a sequential mixed-mode design. On the one hand, the study analysed how long it took for nonrespondents to comply with a reminder and start taking part in the survey. On the other, it revealed how long a reminder was able to “remind” the postponing nonrespondents to take part in one of the survey modes. In this way, this empirical study makes a significant contribution regarding how to overcome both the nonresponse of survey invitees and panel attrition, which are major concerns of survey researchers (Dillman et al. 2014). In particular, by employing a longitudinal design, it was possible to evaluate the usefulness of digital follow-up reminders and their role as a powerful method of boosting survey response rates in sequential mixed-mode surveys with a push-to-web methodology (Lynn 2020).

The empirical analysis was based on paradata on the fieldwork period for the two most recent panel waves of a multiple-wave panel study (see Kreuter 2013b). The target population was juveniles born around 1997 and living in German-speaking cantons of Switzerland (Becker et al. 2020). The paradata were stored in an event-oriented design with exact time references for survey launch and survey participation, as well as for the timing of multiple reminders. This design allowed for fine-grained analysis using techniques such as episode splitting and period-specific observation windows, together with the statistical procedures of event history analysis (Blossfeld et al. 2019). This approach has the potential to detect more or less sustainable impacts of reminders on the timing of panellists’ responses. By modelling the receipt of reminders and the effects of those reminders on survey participation longitudinally in an event-oriented design, the results do not suffer from the limitations of a quasi-experimental design or from biased estimates of the effects of reminders due to the social selectivity of receiving a reminder. The strength of the techniques and procedures that were applied is that there was no need to correct for sample selection bias in order to reveal the “true” effect of the treatment by reminders.

The results confirm that multiple personalised reminders such as e-mails or SMS, sent out early and frequently within a short period after the survey launch, are among the most successful techniques that can be used to increase the response rate of panellists. Taking into account the impact of prepaid monetary incentives, personalised cover letters, invitations by e-mail, and the push-to-web method, frequent reminders in a sequential mixed-mode approach indeed prevent panellists from postponing their response. On the one hand, as in a previous study, it was found that low-achieving target persons, as well as panellists with a lower educational level and those from socially disadvantaged classes, are more likely to need multiple reminders, perhaps due to a higher burden of response as a consequence of limited reading and computer literacy (Becker 2022). On the other hand, previous research was confirmed: reminders have a short-term effect, resulting in the immediate response of the panellists receiving them. This effect fades across the fieldwork period and with a greater number of reminders. Furthermore, long-term effects of earlier reminders on delayed responses were also detected, although it is worth noting that these were smaller than the short-term effects. Overall, no cumulative effects of reminders were found, and three reminders, sent out soon after the request for participation, seem to have been sufficient for each of the survey modes used in this panel study. This finding suggests that, when running a sequential mixed-mode design, survey researchers should send at least two or three reminders in each of the survey modes. It should be noted here that the type of electronic reminder used does not matter, only its timing. With respect to the optimal timing for sending a follow-up, our results indicate that this depends on the duration of the fieldwork period: the shorter the anticipated fieldwork period, the fewer reminders need to be sent out and the shorter the timespans between them should be. For online surveys, it seems evident that early reminders sent out three or four days after survey launch are an optimal strategy. This is also true for multiple reminders sent in a brief three-day sequence.

The question posed by Rao and Pennington (2013), “Should the third reminder be sent?”, can thus be answered briefly and clearly: “Yes, of course!” Taking the long-term effect of early reminders on a target person’s delayed response into account, it is obvious that each reminder (particularly the efficient and effective electronic reminders) counts. This has been underestimated in previous studies that have run comparative-static analyses or longitudinal analyses that neglect the sustainable effect of reminders.

Since our findings on this important issue in contemporary survey research are based on empirical evidence, and because the study’s hypotheses have been tested with longitudinal data and the statistical procedures of event history analysis, we are convinced of this study’s contribution to promoting a better understanding of the impact of follow-ups in a sequential mixed-mode design. The study has also taken into account competing influences on the effect of follow-ups on the response rate (Roth and BeVier 1998: 113).

However, our study also has some serious limitations. First, despite our findings being in line with—and extending—previous findings, the external validity of the results is still not clear, since our target population consisted specifically of juveniles who are “digital natives”. Nevertheless, there are “good” theoretical reasons to propose that the findings for this special population would be valid for other target groups, since the number of individuals with advanced computer skills is increasing across birth cohorts. It is therefore proposed that the effects of reminders in social surveys are universal. Second, due to missing information, we were not able to identify the social mechanisms of reminders (such as reciprocity, cognitive dissonance, subjective evaluation of additional benefits, decreasing cost of survey participation, and trust in social interaction) posited by RAT. In general, from the point of view of a direct and complete test of RAT, it must be emphasised that the arguments of these theoretical approaches, which seek to explain survey participation consistently, are still untested hypotheses (Becker et al. 2019: 232; Becker and Glauser 2018: 89–90). To the best of our knowledge, there is no empirical evidence in survey research confirming these rational choice theories through a direct test. Therefore, the question of why nonrespondents react to reminders in the positive way anticipated by survey managers is still open. Third, our study obtained no information on the subjective perceptions and evaluations of multiple reminders by the panellists who received at least one reminder. These data would be necessary to assess whether reminders sent out frequently in short sequences are useful for boosting the response rate or are counterproductive and seen as an additional burden by target persons.