Constructing a well-defined survey plan is critical for three main reasons: (1) to obtain a viable response rate—Marchetti (2015) estimated that the average response rate in interest organization surveys is c. 41%; (2) to reduce survey bias due to a lack of response from specific respondents (Dillman et al. 2014); and (3) to combat survey fatigue—many interest groups are frequently invited by researchers and policymakers to take part in various research projects. One of the unique features of the CIG-surveys is that highly similar surveys were undertaken in various European countries. This entailed establishing a coherent and equivalent approach within each country, while taking into account local circumstances. We cannot report in detail on each separate CIG-survey (see www.cigsurvey.eu for an exhaustive overview of how national surveys were conducted). Here we sketch the general contours of the overall survey plan, which entails developing an appropriate questionnaire, identifying respondents within organizations, and detailing how to approach and motivate these respondents.
Based on our experiences within previous projects, especially INTEREURO, we developed a core questionnaire that included supplementary space for country-specific questions relevant to different national contexts (Beyers et al. 2016). The core questionnaire was piloted among a small group of respondents in Belgium, the EU, Slovenia and the Netherlands. Following these pilots, we refined and tightened the questionnaire. In total, 12 colleagues from various participating countries were involved in creating an English version of the core questionnaire that was translatable into eleven languages. The following colleagues were involved in developing the core questionnaire: Joost Berkhout, Patrick Bernhagen, Jan Beyers, Frida Boräng, Caelesta Braun, Danica Fink-Hafner, Marcel Hanegraaff, Frederik Heylen, William Maloney, Daniel Naurin, Meta Novak and Dominique Pakull. The questionnaire was translated into Czech, Dutch, English, French, Italian, Lithuanian, Polish, Portuguese, Slovenian and Swedish.
Great care was taken to ensure that the questions were framed in a way that would make responses comparable. For this, we took inspiration from a large number of excellent survey projects that have been conducted during the past decades (e.g. the INTERARENA project led by Anne Binderkrantz; Binderkrantz et al. 2015). The CIG-survey addresses a number of questions that have hitherto not been extensively discussed in the empirical literature, e.g. how decisions within groups are made; advocacy activities; how members get involved; and dependence on institutional funding. Some questionnaires also included questions on rarely studied topics, including group ideological positions, and interactions with the judiciary. Table 3 gives an overview of the main dimensions of the questionnaire.
One important challenge in designing interest group questionnaires is the fact that many concepts and terms are value-loaded. Terms such as ‘lobbying’ or ‘interest group’ have different, and sometimes pejorative, connotations in various national contexts. Many group representatives, in particular those who work for civil society organizations, recoil at being referred to as ‘lobbyists’ who work in ‘interest groups’. Thus, care had to be taken to employ the most appropriate label, such as ‘civil society organisation’ in English, ‘middenveldorganisatie’ in Dutch and ‘organisations intermédiaires’ in French. Instead of using the term ‘lobbying’, we used more neutral language, such as ‘seeking to influence public policy’ or ‘informing politicians’.
When designing the questionnaire, care was also needed to strike the right balance in the specificity of the language. For instance, as some groups are only marginally politically active, specialized language or jargon directed at highly politically active groups might bias responses towards these group types. In addition, we needed to take into account considerable variation among our respondents; some small civil society groups are led by a handful of volunteers, while others are highly differentiated and have a large cohort of professional staff. Accordingly, the questionnaire allowed for the possibility that it could be completed by different people within one organization.
Interest group representatives are often embedded in a professional environment that is internally differentiated. Many, especially larger, organizations have a front office, a number of policy experts (in some cases organized in working groups or committee structures), a secretary-general, and a board of directors headed by a president, supported by auxiliary staff, interns and volunteers. Furthermore, large peak associations are usually clusters of organizations with a national office and regional/local chapters. In such organizations, it is not always easy to locate the most appropriate respondent, and sometimes it is unlikely that one person will be able to answer a wide variety of questions. In contrast, smaller organizations, especially those located in consolidating democracies, tend not to employ professional staff. Organizational complexity and diversity is a critical consideration when seeking to identify potential respondents.
It is crucial to personalize communications—i.e. try to avoid sending invitation letters to a general e-mail address. Personalization can engender ties and trust between the research team and the respondent (Dillman et al. 2014; Cook et al. 2000; Cycyota and Harrison 2002). Accordingly, in each case we identified an organizational official or spokesperson (e.g. the chairperson, the director, the secretary-general). In most countries, it is considered polite and courteous to use full names and titles when approaching potential respondents. It also demonstrates that the research team has done its homework and due diligence on basic organizational facts before inviting experts to take part in a survey.
The national CIG-survey teams invested a significant amount of time creating lists with the names of key spokespersons, and in approximately 95% of the cases it was possible to identify at least one person (e.g. chairperson or director) as well as their e-mail address. A small number of cases (less than 1%) were dropped from the sample because we could not identify an individual or an e-mail address. For large organizations we attempted to identify two individuals (e.g. the president and the director). Efforts to identify key spokespersons included website searches and telephone calls to the organizations (if such information was not available on the website). A personalized approach required collecting evidence on the name of the organization, its abbreviation, and the full name, e-mail address, telephone number and gender of the respondent. Great care was taken when collecting contact information, as errors in gender, and related grammatical or spelling mistakes, can easily lead to non-response.
Approaching and motivating respondents
It is crucial to have a well-thought-through plan on how to approach and motivate respondents. In contacting respondents, the CIG-survey teams adopted various techniques aimed at reducing barriers and positively influencing respondents’ willingness to take part in the survey (Cycyota and Harrison 2006; Frohlich 2002). The most important aspect here is the careful planning and implementation of follow-up reminders via e-mail and/or telephone (Roth and BeVier 1998). Experimental research shows that repeated contacts signal the legitimacy of a survey project and the willingness of the researcher to invest time and resources in reaching out to respondents (Sauermann and Roach 2013). Initial invitation letters help to establish trust by providing contact e-mails, telephone numbers and the project website. Several CIG-surveys used postal letters (with university logos) as the initial form of communication. Sending personalized prenotification letters might make a survey stand out and shows the commitment of the research team, especially if the letter is signed by the project leader. All the CIG-surveys adopted procedures involving repeated interactions with respondents, and this led to substantial improvements in response rates. However, care was taken to avoid being seen as ‘pushy’, and we limited our reminder contacts to four (via e-mail and/or telephone). The final reminder included a closing date. Our exchanges with our respondents were sensitive to the varying national contexts. For instance, the Slovenian invitation letter stressed that this was a major international project, while the Polish project emphasized that the survey was being led by researchers based at a Polish university.
Finally, we incentivized our respondents in a variety of ways. In the Belgian and EU surveys, the research teams made a one-euro donation to a charity for each completed survey; other incentives included a report on preliminary research results or an invitation to the presentation of the first results.