1 Introduction

Recent advances in web survey methodology rest on the assumption that respondents increasingly use mobile devices, such as smartphones and tablets, to participate in web surveys. The claim of growing mobile device use frequently serves as a motivation for developing novel approaches in web survey research. For instance, survey researchers have pointed out that mobile devices may help to survey hard-to-reach populations (e.g., Keusch et al. 2019; Sugie 2018) and can enable the collection of rich paradata and sensor data about respondents’ answering behavior (e.g., Diedenhofen and Musch 2017; Schlosser and Höhne 2020; Struminskaya et al. 2020; Wenz et al. 2019). Callegaro et al. (2014) and Couper et al. (2017) have described further advantages (and challenges) of a trend toward greater mobile device use in web surveys. However, the frequently mentioned observation that smartphone and tablet use in surveys is growing lacks a systematic and comprehensive empirical foundation. The use of mobile devices for survey participation has two requirements: first, the availability of mobile devices and internet access via local or mobile networks, and second, respondents who decide to participate in web surveys with a smartphone or tablet. While the first requirement has been researched relatively well, the second requires more refined research.

The availability of mobile devices and internet access for the general public has been systematically investigated across multiple countries and time periods. For example, Taylor and Silver (2019) reported an increase in smartphone ownership in advanced and developing countries between 2015 and 2018. Mohorko et al. (2013a) also showed that access to mobile devices increased in 33 European countries between 2000 and 2009. Based on data collected between 2012 and 2016, Couper et al. (2018) estimated that about 82% of the US residential population aged between 15 and 44 have access to a smartphone. Investigating the development of access to mobile internet, Fuchs and Busse (2009) found an increase for 18 European countries between 2005 and 2007. In line with this finding, Mohorko et al. (2013b) reported an increase in internet coverage in 32 European countries between 2005 and 2009. Poushter (2016) has also provided evidence for an increase in internet use in 16 developing countries between 2013 and 2015, and Taylor and Silver (2019) found an increase in internet use in 14 advanced countries and 8 developing countries between 2015 and 2018. Regarding the US, Sterrett et al. (2017) found an increase in internet access from 69% of adults in 2006 to 86% of adults in 2014. However, the general availability of mobile devices and access to the internet does not mean that respondents will participate in web surveys with a smartphone or tablet.

Actual participation in web surveys via smartphones and tablets has received much less attention, a notable exception being the study by Peterson et al. (2017). These authors found an increase in smartphone use in a diverse set of commercial and academic surveys in the US and several other countries, such as the opt-in Netquest access panel in Spain and the probability-based LISS panel in the Netherlands. With few exceptions, the data cover the period of 2011–2014. In other studies, mobile device use for surveys is often reported only as descriptive information to justify the respective study or as a methodological sidenote when describing the data used. For instance, Poggio et al. (2015) analyzed the prevalence and determinants of mobile device use in eight waves of a panel fielded in Germany between 2011 and 2012. In addition, Gummer et al. (2019) investigated systematic differences between smartphone and non-smartphone respondents and whether these differences diminished when smartphone participation increased over time. These authors pooled data from 18 web surveys fielded between 2012 and 2016 in Germany, finding a rise in survey participation via smartphones. Also, Revilla et al. (2016) investigated whether an increase in smartphone participation required an adaptation of surveys to mobile devices. Based on data from Argentina, Brazil, Chile, Colombia, Spain, Mexico, and Portugal, the authors reported an increase in mobile device use between 2013 and 2014 and recommended adapting web surveys for participation via mobile devices. Overall, existing research on actual participation in web surveys via smartphones and tablets mostly covers limited periods of time and/or analyzes data from only one study or panel. In addition, most of the data used are comparatively old and most likely do not reflect the current state of mobile device use in contemporary web surveys.

This lack of knowledge on mobile device use in web surveys is unfortunate for at least two reasons. First, the observation that smartphones and tablets are being used increasingly to participate in surveys—as undisputed as this might be—is not based on comprehensive empirical evidence. Thus, the motivation and rationale for an important stream of web survey research have not been properly described. Second, an investigation into how survey participation via different devices has changed over time is important because it may provide valuable insights into future developments. Survey practitioners can build on these insights and anticipate the need for proactive action and planning, such as survey adaptation, mobile friendliness, and survey protocol changes. Refined knowledge about the development of mobile device use can enable survey practitioners to make informed decisions concerning future surveys and to allocate limited resources accordingly.

In the present study, we explored this research gap on mobile device use in web surveys in Germany. More specifically, we addressed the following research question: Is there a growing use of mobile devices in web surveys? Accordingly, our study is descriptive and aims to identify changes in mobile device use in web surveys.

We also investigated the development of mobile device use in web surveys over time (i.e., between 2012 and 2020) and whether future developments are indicated. For this purpose, we drew on data from four large-scale academic studies in Germany that use web survey data collection: (1) the probability-based GESIS Panel (GP), (2) the probability-based German Internet Panel (GIP), (3) the probability-based German Longitudinal Election Study – Panel (GLES-P), and (4) the nonprobability German Longitudinal Election Study – Tracking (GLES-T). Overall, we relied on 128 web surveys conducted as part of these large-scale academic studies. This unique data set covered a period of nine years and included different web survey designs with respect to sampling, recruitment, and operations, which enabled us to gather comprehensive insights on mobile device use in web surveys over time.

2 Data

In the following, we describe the four academic studies: (1) the probability-based GP, (2) the probability-based GIP, (3) the probability-based GLES-P, and (4) the nonprobability GLES-T. We selected these studies because they use web surveys for data collection and represent state-of-the-art academic social science surveys. Data from these studies are available for scientific purposes, although access to sensitive information or specific paradata may be restricted and/or require on-site access.

2.1 GESIS Panel (GP)

The GP is a probability-based panel of the German-speaking population in Germany operated by GESIS—Leibniz Institute for the Social Sciences (for more details see Bosnjak et al. 2018). It offers researchers (from various fields) the opportunity to collect data by submitting proposals for survey questions. Thus, the GP covers a wide range of social science topics. The GP was recruited in 2013 and started data collection in 2014. On a bimonthly basis, between 4000 and 5000 respondents participate in self-administered mixed-mode surveys (i.e., six waves per year). During panel recruitment, respondents were able to choose between mail (paper-based questionnaires) and web mode (web-based questionnaires). In the present study, we analyzed only the panelists who chose the web mode. Each panel wave takes about 20 min to complete. All panelists receive an unconditional prepaid incentive of 5€ for each panel wave.

The initial sample was recruited in 2013 based on a gross sample drawn from population registers (stratified by regions) covering persons aged between 18 and 70 years. The gross sample consisted of 21,870 individuals, of which 7599 were interviewed face-to-face in the recruitment interview with an AAPOR RR1 (AAPOR 2016) of 35.5%. In total, 6210 potential panelists were recruited, of which 496 started the first self-administered survey. In 2016 and 2018, refreshment samples were recruited via the German General Social Survey (ALLBUS). The ALLBUS also is based on a register sample of the German general population. From the ALLBUS 2016 (AAPOR RR1: 33.2%), 1,710 new panelists were recruited (Schaurer and Weyandt 2016, p. 3). From the ALLBUS 2018 (AAPOR RR1: 30.7%), 1,607 new respondents were recruited (Schaurer et al. 2020, p. 3).

For the present study, we relied on 44 waves of the GP that were collected between 2013 and 2020 (waves “aa” to “hf”). In each panel wave under investigation, we had between 679 (wave “aa”) and 3,925 (wave “fe”) panelists available for statistical analyses. In wave “hd” (August 2020), the GP started using a responsive questionnaire and mobile-first design (to ease mobile device participation). In this wave, half the sample received a questionnaire in a mobile-friendly design. In wave “he” (October 2020), the full sample received a questionnaire in the responsive and mobile-first design. The responsive questionnaire design adapts the layout to a device’s screen resolution, including the scaling of visual elements, button sizes, and line breaks of text. Since the design change, question batteries have been displayed item by item, and response scales have been presented vertically.

For each panel wave, we used the Stata module parseuas (Roßmann and Gummer 2020) to compute the share of respondents who participated via smartphones, tablets, desktop PCs (including laptops), and devices that could not be assigned to one of these device categories. This module utilizes user agent string (UAS) information automatically collected by the survey software to determine device types (Roßmann et al. 2020). The share of devices that could not be identified as smartphone, tablet, or desktop PC was zero, except for seven waves. In these seven waves, “other devices” were detected for less than 0.1% of the panelists, which represents a negligibly small number of unidentified devices and indicates that the categorization of devices worked as intended.
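To make this classification step concrete, the following Python sketch shows how user agent strings can be mapped to the device categories used here. It is not the parseuas module itself: the rules, the function name, and the example strings are simplified, illustrative assumptions.

```python
import re  # available for more elaborate patterns; simple substring checks suffice here

# Simplified, illustrative user agent string (UAS) classifier.
# It mimics the logic of tools such as the Stata module parseuas,
# but the rules below are a rough sketch, not the module's actual rule set.
def classify_device(uas: str) -> str:
    ua = uas.lower()
    # Tablets first: Android tablets typically lack the "mobile" token,
    # and iPads identify themselves explicitly.
    if "ipad" in ua or ("android" in ua and "mobile" not in ua) or "tablet" in ua:
        return "tablet"
    # Smartphones: iPhones, Android devices with the "mobile" token, Windows Phones.
    if "iphone" in ua or ("android" in ua and "mobile" in ua) or "windows phone" in ua:
        return "smartphone"
    # Desktop PCs (including laptops): common desktop operating systems.
    if any(token in ua for token in ("windows nt", "macintosh", "x11", "linux")):
        return "desktop"
    # Everything else is treated as "other", analogous to the residual
    # category reported in the text.
    return "other"

# Hypothetical example strings for illustration only.
examples = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 13_5 like Mac OS X) AppleWebKit/605.1.15",
    "Mozilla/5.0 (Linux; Android 10; SM-T510) AppleWebKit/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
]
for uas in examples:
    print(classify_device(uas))  # smartphone, tablet, desktop
```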

2.2 German Internet Panel (GIP)

The GIP is a probability-based online panel of the German general population operated by the University of Mannheim (for more details, see Blom et al. 2015). The topics of the GIP cover individual attitudes and preferences with respect to political, social, and economic spheres. The GIP was recruited in 2012 and started data collection in the same year. GIP panelists are surveyed on a bimonthly basis, resulting in a total of six waves per year. Panelists participate exclusively via web surveys. Each panel wave takes between 20 and 25 min to complete. For their participation, respondents receive a conditional incentive of 4€ per wave and a bonus of 10€ for participating in all six waves in a year or a bonus of 5€ for participating in five out of six waves in a year.

The initial sample was recruited in 2012 using a three-stage random route sampling procedure that yielded 4878 eligible households. The target population was persons living in private households aged between 16 and 75 years at the time of recruitment (Blom et al. 2015). Recruitment interviews were conducted face-to-face (AAPOR RR2: 52.1%), and all eligible household members were subsequently invited to register for the panel. Households without internet access and/or suitable devices for web survey participation were offered the necessary equipment and/or an internet connection. In total, 1603 respondents were recruited in 2012.

A refreshment sample was drawn in 2014 using the same method as in 2012 (AAPOR RR1: 47.5%). Again, households without internet access and/or suitable devices for web survey participation were offered the necessary equipment. In total, 3401 panelists were recruited. A second refreshment sample was recruited in 2018 using a gross sample drawn from population registers. In contrast to the previous recruitments, panelists were invited via mailed paper invitations, and only persons with internet access and suitable devices were considered. The gross sample consisted of 13,050 persons, from which 3069 new panelists were recruited.

For the present study, we relied on 50 waves of the GIP that were collected between 2012 and 2020 (waves 1–50). The number of panelists per wave ranged from 936 (wave 12) to 5411 (wave 37). Since wave 24 in July 2016, the GIP has used a responsive questionnaire and mobile-first design. The responsive questionnaire design adapts the layout to a device’s screen resolution, including the scaling and position of visual elements, button sizes, and line breaks of text. Since the design change, question batteries have been displayed item by item on separate pages, and response scales have been presented vertically on all devices.

For each wave, the information concerning whether respondents used a smartphone, tablet, or desktop PC (including laptops) is available on-site at the University of Mannheim. The Stata module parseuas was used to determine the device used for survey participation; there were no undetected devices.

2.3 German Longitudinal Election Study-Panel (GLES-P)

The GLES is a survey program in Germany for the continuous collection and provision of data for national and international election research. The GLES is conducted in close cooperation between the German Society for Electoral Studies (DGfW) and the GESIS—Leibniz Institute for the Social Sciences. The GLES includes multiple studies, each of which aims to collect data for different research purposes, such as studying electoral campaigns or candidates (for further information, see https://gles-en.eu/). In the present study, we focused on the GLES Panel (GLES-P) and GLES Tracking (GLES-T) (for more information on GLES-T, see the following section).

Although the GLES-P includes a variety of samples, we decided to focus solely on sample B, which uses a self-administered web mode. After recruitment in 2017, the data collection for sample B began with the first web-based re-interviews in 2018 (referred to as wave 10 of the GLES-P). Panelists are surveyed up to twice a year. Starting with wave 10, the panelists of sample B were invited to participate in self-administered mixed-mode push-to-web surveys (i.e., by mail or web mode). In wave 13, the survey design was changed to web mode only. We analyzed only the panelists in the web mode. Each panel wave takes about 25 min to complete. For their participation, respondents receive an unconditional 5€ incentive per wave.

Sample B of the GLES-P was recruited from a gross sample drawn from population registers that covered persons living in private households with German citizenship and a minimum age of 16 years (at the time of the federal election in September 2017). This gross sample included an oversample of persons living in East Germany. The recruitment survey was conducted face-to-face as part of the GLES Pre- and Post-Election Cross-Section Survey with an AAPOR RR1 of 27.9%. In total, 3,412 panelists were recruited.

For the present study, we relied on data from four waves of the GLES-P that were collected between 2018 and 2020 (waves 10–13). These waves included all the web surveys of sample B. Across these four waves, the number of participants ranged from 1638 (wave 13) to 2368 (wave 10). For all waves, a responsive questionnaire design was used that adapts the layout to a device’s screen resolution, including vertical/horizontal scale orientation and button and font sizes.

Again, for each wave, we computed indicators for the share of respondents using smartphones, tablets, desktop PCs (including laptops), and other devices by using the Stata module parseuas. The share of other devices was zero in three waves, and below 0.2% in one wave.

2.4 German Longitudinal Election Study-Tracking (GLES-T)

The GLES-T, another study of the GLES survey program in Germany, aims at capturing the long-term processes of the formation and change of public opinion between federal elections. Topics include attitudes toward the most important political and societal issues, the political parties and their top politicians, and the performance of the federal government and the opposition. For this purpose, the GLES-T is a repeated cross-sectional survey, and all surveys are conducted in the web mode. Since 2009, between three and four surveys have been conducted per year. For the sake of simplicity, we refer to the GLES-T surveys as waves. These waves take between 20 and 30 min to complete, and respondents receive conditional incentives that vary between 2.00€ and 3.50€.

Each wave of the GLES-T was sampled from a German online access panel. The respondents for each wave were selected using quota sampling based on gender, age, and level of formal education, with the German online population as the reference distribution. Because the panel provider changed over the course of the GLES-T, it includes samples from Respondi (2009–2011 and 2018–2020), LINK (2012–2016), and forsa.main (2016–2017).

For the present study, we relied on data from 30 waves of the GLES-T collected between 2012 and 2020 (waves 17–47). Due to an error by the panel provider (i.e., respondents with a mobile device were excluded), we had to omit wave 39 (April 2018). The number of participants ranged from 1008 (wave 35) to 1232 (wave 45). In each wave, a responsive questionnaire design was used that adapts the layout to a device’s screen resolution, including vertical/horizontal scale orientation and button and font sizes.

For each wave, we computed indicators for the share of respondents using smartphones, tablets, desktop PCs (including laptops), and other devices by using the parseuas module. The share of devices that could not be categorized was zero in 19 waves and below 0.3% in 11 waves.

2.5 Aggregated data set

To investigate our research question, we aggregated the device information of the four studies (i.e., GP, GIP, GLES-P, and GLES-T) into one survey-level data set. This data set contains the share of respondents who used smartphones, tablets, or desktop PCs in each wave of the studies. In total, we drew on 128 waves (nGP = 44, nGIP = 50, nGLES-P = 4, and nGLES-T = 30). Because the share of other devices was negligible in all four studies, we do not present it. We also included the field start of each wave to enable an investigation of changes over time.
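To make the structure of this survey-level data set concrete, the following Python sketch illustrates one way to aggregate respondent-level device information into wave-level shares with a field-start time expressed in months since January 2012. The variable names and example records are illustrative assumptions, not the studies’ actual data.

```python
import pandas as pd

# Hypothetical respondent-level records: one row per respondent per wave,
# with the study, the wave's field start date, and the classified device.
respondents = pd.DataFrame({
    "study":       ["GIP", "GIP", "GIP", "GP", "GP", "GLES-T"],
    "wave":        ["1", "1", "1", "aa", "aa", "17"],
    "field_start": ["2012-09-01"] * 3 + ["2014-02-01"] * 2 + ["2012-10-01"],
    "device":      ["desktop", "smartphone", "desktop", "tablet", "desktop", "desktop"],
})

# Aggregate to one row per survey wave: share of each device category
# plus the field start, expressed in months since January 2012.
shares = (
    respondents
    .groupby(["study", "wave", "field_start"])["device"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)           # columns: desktop, smartphone, tablet, ...
    .mul(100)                        # shares in percent
    .reset_index()
)
start = pd.to_datetime(shares["field_start"])
shares["months"] = (start.dt.year - 2012) * 12 + (start.dt.month - 1)
print(shares)
```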

2.6 Modelling approach

Using regression analyses, we examined how mobile device use in web surveys has changed over time and whether future developments are indicated. Based on the aggregated data set, we relied on ordinary least squares (OLS) regressions with the share of respondents using a specific device \(d\) as the dependent variable \({y}_{d}\) and time \(t\) (i.e., each wave’s start of data collection) as the independent variable. Time was measured in months since January 2012. We fitted separate regression models for each device type (i.e., one model each for smartphones, tablets, and desktop PCs). We refer to these models as pooled models, and their regression equation is:

$${y}_{d}=\alpha +\beta t+\varepsilon,$$

where \(\alpha\) is the intercept, \(\beta\) the slope for \(t\), and \(\varepsilon\) an error term.

Since the data points in the aggregated data set are clustered within studies, we computed the pooled models with cluster-robust standard errors. Moreover, to investigate whether the development of device use over time differed between studies, we also computed the regression models separately for each study. Since the GLES-P covers only four waves, we did not compute a separate model for it. We refer to these models as the GP, GIP, and GLES-T models or, collectively, as the separate models.
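The following Python sketch illustrates this modelling step with statsmodels: a pooled linear model of the smartphone share on time, with standard errors clustered by study. The data frame and its values are made-up illustrative assumptions; this is a sketch of the approach, not the authors’ own code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# 'waves' stands in for the aggregated survey-level data set: one row per
# wave with the smartphone share (in percent), the field start in months
# since January 2012, and the study identifier. Values are illustrative.
waves = pd.DataFrame({
    "share_smartphone": [2.7, 4.1, 6.3, 12.0, 20.5, 27.4, 33.0, 39.0],
    "months":           [0, 10, 20, 40, 60, 80, 95, 105],
    "study":            ["GIP", "GIP", "GP", "GIP", "GP", "GLES-T", "GP", "GLES-T"],
})

# Pooled linear model: share_d = alpha + beta * t + error,
# with standard errors clustered by study (integer cluster codes).
group_codes = pd.factorize(waves["study"])[0]
pooled = smf.ols("share_smartphone ~ months", data=waves).fit(
    cov_type="cluster", cov_kwds={"groups": group_codes}
)
print(pooled.summary())
# pooled.params["months"] is the estimated change in the smartphone
# share per month (in percentage points).
```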

To test whether the relationship between the share of respondents using a mobile device and time was linear—as assumed in our initial models—we recalculated all the models with a quadratic term of the time variable \(t\). The regression equation of these models is:

$${y}_{d}=\alpha +{\beta }_{1}t+{\beta }_{2}{t}^{2}+\varepsilon$$

The quadratic term was included as a robustness check that enabled us to model a nonlinear relationship. For the separate models, we employed likelihood ratio tests to assess whether modelling a quadratic instead of a linear relationship was more adequate. These tests were not possible for the pooled models due to the use of cluster-robust standard errors.
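A minimal sketch of this robustness check, assuming survey-level data for a single study: it fits the linear and quadratic specifications and compares them with a likelihood ratio test (one degree of freedom for the single added quadratic term). The values are made up for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Illustrative survey-level data for a single study (values are made up).
study_df = pd.DataFrame({
    "share_tablet": [3.0, 6.5, 9.0, 11.5, 12.9, 11.0, 9.5, 8.0, 7.2],
    "months":       [0, 12, 24, 36, 48, 60, 72, 84, 96],
})

# Linear and quadratic specifications (I(months**2) adds the squared term).
linear    = smf.ols("share_tablet ~ months", data=study_df).fit()
quadratic = smf.ols("share_tablet ~ months + I(months**2)", data=study_df).fit()

# Likelihood ratio test: twice the difference in log-likelihoods,
# compared against a chi-squared distribution with 1 degree of freedom.
lr_stat = 2 * (quadratic.llf - linear.llf)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that the quadratic specification fits
# better than the linear one.
```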

For the linear OLS models, we plotted the regression functions of the pooled and the separate models. To further illustrate a possible nonlinear relationship, we also added plots computed with locally weighted scatterplot smoothing (lowess).
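The following sketch illustrates how such a plot can be produced, overlaying an OLS line and a lowess smoother; the data values and the smoothing fraction are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.nonparametric.smoothers_lowess import lowess

# Illustrative survey-level data (values are made up).
months = np.array([0, 12, 24, 36, 48, 60, 72, 84, 96], dtype=float)
share  = np.array([2.7, 4.5, 7.0, 10.0, 13.5, 18.0, 23.0, 29.0, 35.0])

# Linear OLS fit of the share on time.
X = sm.add_constant(months)
ols_fit = sm.OLS(share, X).fit()

# Lowess smoother; frac controls the span of the local regressions.
smoothed = lowess(share, months, frac=0.6)

plt.scatter(months, share, label="Observed waves")
plt.plot(months, ols_fit.predict(X), label="OLS")
plt.plot(smoothed[:, 0], smoothed[:, 1], label="Lowess")
plt.xlabel("Months since January 2012")
plt.ylabel("Share of smartphone respondents (%)")
plt.legend()
plt.show()
```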

3 Results

3.1 Describing device use

To answer our research question, we initially looked at the share of devices. Figure 1 shows the respective share of respondents using smartphones, tablets, and desktop PCs (including laptops) in 128 web surveys that were conducted as part of the GP, GIP, GLES-P, and GLES-T between 2012 and 2020. For the GP, GIP, and GLES-T, which cover comparatively long periods of time, the increase in smartphone use was evident. Starting with a negligibly small share of smartphone respondents, varying between 2.7% in the GIP (2012) and 6.3% in the GP (2013), smartphone use reached its maximum in 2020, ranging from 25.1% (GP) to 39.0% (GLES-T). Even though the four waves of the GLES-P cover only a short period of time, our analyses show that a non-negligible share of respondents participated via smartphones in 2019 and 2020. Across the four waves of the GLES-P, an average of 20.9% of respondents used smartphones for survey participation, with a minimum of 20.1% and a maximum of 22.0%.

Fig. 1 Share of device use in 128 web surveys between 2012 and 2020. Note: Dashed vertical lines indicate the implementation of a responsive questionnaire and/or mobile-first design. Desktop PCs include laptops.

Two of the studies that we analyzed implemented a responsive questionnaire and mobile-first design between 2012 and 2020: the GIP in July 2016 and the GP in August 2020. Regarding the GIP, we found that the share of smartphone respondents increased by about 2 percentage points after the switch to a responsive questionnaire and mobile-first design (from 9.8% in wave 23 to 12.0% in wave 24). With respect to the GP, in contrast, we did not find an effect of similar magnitude (independent of whether we looked at wave “hd”, in which only half of the sample received a questionnaire in the new design, or wave “he”, in which the full sample received a questionnaire in the new design).

In contrast to the increase in smartphone use, we did not find an increasing trend in tablet use. With respect to the GP, GIP, and GLES-T, our analyses showed an initial increase in tablet use until a ceiling was reached, ranging between 10.9% (GLES-T in March 2013) and 12.9% (GIP in January 2018). After reaching these peaks, tablet use declined and consolidated at an average of about 7% in 2020. Overall, tablets are the least frequently used devices for web surveys.

In parallel to the increase in smartphone use, a successive decline in desktop PC (including laptops) use occurred, which applied to the GP, GIP, and GLES-T. Starting with a large share of desktop PC respondents (94.5% for the GIP in 2012 and 87.3% for the GP in 2013), desktop PC use declined. In 2020, the share of desktop PC use ranged between 54.8% (GLES-T) and 65.6% (GP). Across the four waves of the GLES-P, an average of 69.7% of respondents used a desktop PC for participation, with a minimum of 66.7% and a maximum of 72.6%. Nevertheless, desktop PCs remain the most commonly used devices to participate in web surveys.

Finally, we noticed some device differences across the four studies that we investigated. More specifically, the GIP and GLES-T had the highest smartphone adoption rates, whereas the GLES-P showed the lowest. Tablet use, in contrast, was surprisingly similar across all four studies. In addition, compared to the GP and GIP, the GLES-T showed more variation between waves, which might be related to its cross-sectional design, implying that new samples were drawn for each wave.

3.2 Modelling device use

To extend the previous analyses, we also modelled the relationship between the share of devices (i.e., smartphones, tablets, and desktop PCs including laptops) and time. Figure 2 shows the results for smartphones as OLS and lowess functions (the regression models with linear and quadratic time effects are reported in “Appendix Table 2”). We found a positive main effect of time (in months) across all regression models. According to the pooled model, we estimate that the share of smartphone respondents increased by 0.3 percentage points per month (p < 0.001), or about 3.6 percentage points per year. This finding was supported by the separate models for the GP, GIP, and GLES-T. In all the models, the explained variance was high (R2 > 0.79), which provides further support for the increase in smartphone use over time. Interestingly, for the GP and GLES-T, the likelihood ratio tests indicated that modelling a linear relationship instead of a quadratic relationship was more appropriate, whereas for the GIP, the likelihood ratio test indicated that modelling a quadratic relationship instead of a linear relationship was more appropriate (p < 0.001). However, a comparison of the linear and lowess functions for the GIP (see Fig. 2) revealed differences of low magnitude. In addition, in the most recent surveys of the GIP, the nonlinearity in the relationship between smartphone use and time only manifested as a slightly steeper increase. Overall, our findings show no indication that the increase in smartphone use in web surveys is slowing down.

Fig. 2 Relationship between smartphone use and time as OLS and lowess functions.

Figure 3 shows the relationship between tablet use and time (the regression models are reported in “Appendix Table 3”). In contrast to smartphone use, we found a nonlinear concave relationship. The likelihood ratio tests for GP, GIP, and GLES-T showed that modelling a quadratic relationship instead of a linear relationship was more appropriate (all p < 0.001). This finding was supported by the high R2-values in the models with quadratic time effects (R2 > 0.79) and the low R2-values in the models with a linear time effect (R2 < 0.26).

Fig. 3 Relationship between tablet use and time as OLS and lowess functions.

Finally, Fig. 4 shows the relationship between desktop PC (including laptops) use and time (the regression models are reported in “Appendix Table 4”). In contrast to the findings on smartphone use, all models showed a decrease in desktop PC use over time. Even though the lowess function indicated some variation between studies, the general decrease in desktop PC use seemed to be linear. According to the pooled model, we estimate that the share of desktop PC respondents decreased by 0.3 percentage points per month (p < 0.001). This finding was in line with the separate models for the GP, GIP, and GLES-T. In these models, R2-values ranged between 0.77 (GLES-T) and 0.98 (GIP), which provides supporting evidence for the decrease in desktop PC use. When testing for a quadratic relationship between time and desktop PC use, only the likelihood ratio test for the GP indicated that modelling a nonlinear relationship would be more appropriate (p < 0.001). However, as before, inspection of the lowess function showed that the differences between the linear and lowess functions were of small magnitude. Overall, the data indicate a continuous decrease in desktop PC use in web surveys.

Fig. 4 Relationship between desktop PC use and time as OLS and lowess functions.

4 Discussion and conclusion

The aim of this study was to investigate the use of mobile devices, such as smartphones and tablets, in web surveys in Germany over time. For this purpose, we relied on data from four large-scale academic studies using web surveys for data collection: (1) the probability-based GESIS Panel (GP), (2) the probability-based German Internet Panel (GIP), (3) the probability-based German Longitudinal Election Study-Panel (GLES-P), and (4) the nonprobability German Longitudinal Election Study-Tracking (GLES-T). In total, we drew on data from 128 web surveys that were conducted between 2012 and 2020. The results indicate an increase in smartphone use, a stagnation in tablet use, and a decrease in desktop PC (including laptops) use.

The availability of mobile devices and internet access is a necessary requirement for respondents to use mobile devices for web survey participation. While the availability of mobile devices and internet access can be considered generally high in Germany, we provide evidence that the use of smartphones in web surveys is catching up with this general availability. Survey respondents seem to increasingly utilize the advantages of smartphones (e.g., fewer restrictions in terms of time and location). Smartphones have gone through major advancements over time, such as improved on-screen navigation and larger screen sizes, which in turn may have supported their continuous rise in web surveys. At the same time, many people have developed better skills for operating smartphones, which also may support smartphone use in web surveys.

Increasing technological competencies among respondents and the advancement of smartphones may also explain the concave development of tablet use over time. Tablets are a technological hybrid combining the characteristics of smartphones (i.e., mobility and intuitive touch navigation) and desktop PCs (i.e., larger screen size). However, the relevance of these benefits seems to have diminished due to the advancements of smartphones and people’s increased operating skills, which would help to explain the trend in our data: an initial increase in tablet use followed by a decrease and stagnation. Data from the Continuous Household Budget Survey (LWR) of the German Federal Statistical Office seem to lend further support to our conclusion. This survey reported a high availability of mobile phones (including smartphones) for the period between 2015 and 2020, with only a slight increase from 93.5% (2015) to 97.5% (2020), whereas tablets were less common in households, increasing from 31.8% in 2015 to 51.0% in 2020 (DESTATIS 2020, p. 12). Thus, despite their increasing availability, we did not detect an increasing use of tablets to participate in surveys, which might indicate a different mechanism at play (e.g., increased operating skills in favor of smartphones). Also note that, as we have argued above, the availability of devices is only one of two preconditions for using a specific device to participate in a survey.

The present study has some limitations that provide avenues for future research. First and foremost, we focused on one single country (i.e., Germany). Even though this strategy enabled us to collect data from a diverse set of studies with respect to sampling, recruitment, and operations, we cannot draw any conclusions about the situation in other countries. A cross-national data set would be a great advantage for generalizing our findings beyond Germany. Second, in our study, we focused solely on web surveys from academic studies that are situated in the social sciences. In our opinion, it would be important to go one step further by taking commercial surveys into account. In line with this, we would welcome replications investigating mobile device use for commercial web surveys over time, if these data can be obtained for research purposes. Third, we devoted considerable effort to including probability-based web surveys because they are considered the “gold-standard” in survey research (Cornesse et al. 2020). However, it is important to note that nonprobability web surveys are used often in the social sciences and many adjacent research fields. Even though we included the GLES-T that relies on samples drawn from nonprobability panels, we suggest that future studies cover a more diverse set of nonprobability samples, which also means covering different sampling approaches, such as river sampling from websites and social media platforms.

The results of our study point to the importance of considering mobile devices when designing web surveys. Thus, we see merit in continuing the research on mobile-friendly question design. Otherwise, web survey participation might result in a less than optimal experience for the continuously increasing share of smartphone respondents. A suboptimal survey experience could introduce measurement error or increase dropouts and non-response for a considerable share of a sample. The increasing use of smartphones for web surveys also facilitates novel ways to collect additional data, such as sensor data. Smartphones have a variety of built-in sensors, such as Global Positioning System (GPS) receivers and accelerometers, which have the potential to augment and extend respondents’ answers (Struminskaya et al. 2020). In other words, the data collected from or via smartphone sensors may help researchers to supplement survey data with additional measures. For instance, GPS data provide information about respondents’ geolocation and, thus, can be used to infer the environmental setting (Kelly et al. 2013; Struminskaya et al. 2020). Similarly, acceleration data can help to provide information about the different motion conditions of smartphone respondents, such as standing or walking, during survey completion (Kern et al. 2020). However, utilizing these potentials of mobile devices will require researchers to overcome device effects on consent behavior that have been reported by prior research (Wenz et al. 2019). Moreover, the increasing use of mobile devices also poses challenges for data quality, such as distractions when answering surveys while in motion (Höhne and Schlosser 2019) and when multitasking (Zwarun and Hall 2014), as well as issues with answering (complex) survey questions on small screens (Wenz 2021). Finally, we recommend that researchers continue monitoring device use in their web surveys. This type of monitoring is key to determining the importance of mobile-friendly survey designs, the possibility of collecting additional sensor data, and future directions in web survey research.