Political trust—that is, trust in ruling elites at the domestic and international level—and its decline have been among the most intensely discussed topics of recent years (Citrin and Stoker 2018). While survey evidence from the Global North shows that political trust is generally declining, findings for the Global South, in regions such as Africa, are mixed on whether political trust is declining over time (Catterberg and Moreno 2006; Askvik 2010; Hutchison and Johnson 2017). Independent of the actual findings in the literature, nearly all measures of political trust rely on surveys (Schlipphak and Isani 2020). However, what if respondents do not trust these surveys to begin with? Would that influence respondents’ survey behavior, especially when it comes to trust in government institutions?

The paper argues that this is indeed the case. Depending on whom respondents believe to be behind the survey (the survey sponsor), their reported attitudes toward domestic and international actors will vary accordingly. This may be a particular problem in non-democracies in the Global South, where respondents may be uncertain who is conducting the survey and hence afraid to give truthful answers if they think doing so may lead to harmful repercussions for themselves (Isani 2018). The paper expects Afrobarometer respondents who perceive the government to be the survey sponsor to show much higher levels of political trust than other respondents.

The paper tests these expectations using data from the Afrobarometer, which surveys respondents in Northern and Sub-Saharan Africa. The Afrobarometer is one of the few surveys that covers both democratic and authoritarian countries and explicitly asks respondents who they believe the survey sponsor to be. The findings confirm the expectations: perceiving governmental actors as the (secret) sponsor of the survey makes citizens report higher levels of trust in these actors.

Literature review and theory

Political trust

“The dynamics of political trust have been a popular research topic in political science for decades” (Faulkner et al. 2015, p. 164). Accordingly, there is an abundance of literature on the subject. In the Annual Review of Political Science, Citrin and Stoker (2018) summarize the existing literature on the sources and effects of what they describe as a steady decline in political trust, operationalized most prominently as trust or confidence in major political institutions. Taking up suggestions from the seminal works of Citrin and Green (1986) and Hetherington and Rudolph (2015), and after presenting evidence on personality factors influencing individual levels of political trust, they focus on six ps—policy dissatisfaction, performance, partisanship, polarization, process and probity—and the mass media. The first two ps strongly point to the importance of political outcomes, with a decline of trust depending either on citizens’ dissatisfaction with the policies put in place by elites or on governmental ineffectiveness or inefficiency (mostly in the economic realm). Partisanship and polarization, and process and probity, are somewhat connected: they explain the decline of political trust with increasing polarization between societal and political groups, which is in turn connected to the perception of a lack of procedural justice and—especially—an increasing distrust of politicians perceived as corrupt. While there exists a large amount of research on this topic, the relative effects of each of these factors—which are, moreover, not independent of each other—are not yet agreed upon.

Following Citrin and Stoker (2018), the same is true for the effects of mass media on political trust. There is inconsistent evidence on whether negative advertising and increasing incivility raise political distrust (Lau et al. 1999). Still, it seems fair to say that mass media coverage only emphasizes or strengthens mechanisms already covered by the ps: increasing incivility and negative advertising are phenomena of increasing polarization, and so on.

Interestingly, even the research overview by Citrin and Stoker (2018) does not take levels of political trust in the Global South into account. This is despite the fact that recent years have seen a growing body of research on political trust in the Arab region (Kong 2014), East and Southeast Asia (Tang and Huhe 2016), Latin America (Zmerli and Castillo 2015) and Africa (Hutchison 2011). This neglect can no longer be ascribed to a lack of available data—the Global Barometers, the World Values Survey and the like provide a range of datasets including variables on political trust. Rather, the neglect might be attributed to researchers’ sense that data collection in the Global South, specifically in autocracies, is problematic to say the least. One might suspect that interviewer and social desirability biases peak in these countries, yielding data with a lot of noise and very little signal. Therefore, scholars are hesitant to take these trends from the Global South into account.

Survey sponsor misperception

As a result, researchers have most recently turned to the concept of self-censorship in surveys in the Global South and the Global East, especially in autocratic settings (Tannenberg 2021). In essence, they argue that self-censorship “as a self-imposed constraint on self-expression based on rational assessment of risks and benefits to uncensored expression” should substantially shape respondents’ answers to sensitive questions (Gueorguiev et al. 2018, p. 5). Tannenberg (2021) has argued that this should be especially true for questions on trust in political actors. With regard to China, Jiang and Yang (2016) report an upsurge in preference falsification following a significant political purge in Shanghai. While these studies reveal that approval and political support remain astonishingly high, they back up worries that respondents in autocratic circumstances overstate their approval. Kalinin (2016) uses a series of list experiments to show that Russian electoral support for Vladimir Putin is overstated by roughly 20%. Frye et al. (2017), also employing list experiments, estimate Putin’s approval ratings to be roughly ten percentage points lower than those obtained from direct questioning, but find that direct survey questions largely reflect the sentiments of the Russian people.

Due to social desirability bias, respondents are known to offer untruthful responses to survey questions about sensitive topics such as sexuality, ethnicity, or wealth (Tourangeau and Yan 2007). In authoritarian regimes, survey questions are problematic for reasons beyond privacy and social adaptation, especially questions about citizens’ views of their government. Respondents under autocratic rule may engage in preference falsification in order to align their responses with the regime’s supposed views (Kuran 1997). Considering how frequently authoritarian regimes monitor what their citizens say and do in order to punish those who question the official narrative, there is a good chance that respondents will identify public opinion polls with government intelligence gathering (Linz 2000). As a result, respondents are likely to placate the regime with their replies, fearing that failing to do so will result in physical or psychological repression. To the extent that citizens engage in any form of self-censorship—and that its prevalence varies with the perceived risk of repressive action—responses to sensitive issues are systematically biased across countries, and are thus not comparable between countries with differing perceived risks.

Survey respondents may feel the need to censor their responses if a question involves private matters, if it elicits responses that may be viewed by others as socially undesirable or politically incorrect, or if they fear that their responses may have negative consequences if disclosed (Tourangeau and Yan 2007). Questions regarding income, voter participation, prejudice against other ethnic or religious groups, drug use or other criminal activities, for example, can drive respondents to conceal the truth out of concern for their reputation, fear of social sanctioning by peers, or fear of more formal punishment (Kuklinski et al. 1997; Holbrook and Krosnick 2010; Krumpal 2013). This can result in a high rate of systematic non-response and/or skewed responses, producing inaccurate statistics.

Indeed, Kuran (1997) argues that citizens subjected to authoritarian rule have strong incentives to engage in preference falsification, while Schedler (1999) expresses reservations about the feasibility of obtaining reliable measures of regime legitimacy in autocracies through representative public-opinion surveys or qualitative interviews, given the opaque and repressive nature of those regimes. Fear of sanctions for not providing the officially intended answer is likely to influence responses. This is especially the case when a respondent doubts that their anonymity will be protected. So, if we are interested in political trust, what does this entail for cross-country comparative studies?

Things are less problematic if self-censorship levels are more or less identical across nations on proxies for, or components of, a question or index. In such cases, we would only have to deal with uniformly inflated or deflated numbers. However, if the proclivity to self-censor depends on characteristics that vary across countries—such as levels of democracy or political repression—the size of the bias varies systematically between countries, making comparative study difficult. Guriev and Treisman (2016) attempt to account for the influence of fear in a comparative study of political approval ratings in 128 countries by incorporating repression into their research and controlling for whether leaders earn higher ratings during times of heightened repression. Recent research raises concerns about the credibility of survey responses in repressive and non-democratic environments. In Zimbabwe, where government repression and insecurity were widespread, Garcia-Ponce and Pasquale (2015) found that recent experiences of state-led repression affected reported levels of trust in the president and the ruling party. Individuals who had witnessed state-sanctioned violence during the previous 30 days were more inclined to believe the survey was commissioned by the government (Garcia-Ponce and Pasquale 2015). While this suggests that a recent reminder of the regime’s repressive disposition leads to exaggerated levels of reported trust, it is reasonable to believe that citizens do not easily forget what kind of rule they are subjected to and, as a result, should falsify their preferences even if they have not recently experienced government-sponsored violence.

The research question that the paper answers is at the intersection of the literature on political trust and survey methodology. The expectation is that citizens’ levels of political trust are influenced by their degree of fear of the perceived sponsor of the survey. This is of course not the first time the issue has been probed.

Groves et al. (2006, 2012) and Tourangeau et al. (2009, 2016) have discussed the issue of survey sponsorship but found only limited evidence of its effects on nonresponse. However, and somewhat in contrast, Tourangeau et al. (2009, 2014) and Galesic and Tourangeau (2007) identified stronger effects of survey sponsorship on citizens’ societal attitudes. Yet, when it comes to political attitudes, Tourangeau et al. (2014) demonstrate that “even in the context of partisan election surveys, sponsorship may not be the powerful cue it is often thought to be” (Tourangeau et al. 2014, p. 510).

Furthermore, researchers have delved into the question of whether an academic institution or researcher being a survey’s sponsor influences nonresponse rates, and whether it has any impact on the responding sample. In a nutshell, explicitly mentioning an academic institution (but not the person of the project representative) as the sponsor of a survey increases response rates (Corstange 2016). Yet the size of this effect depends on further conditions, with Edwards et al. (2014) demonstrating that response rates increase more strongly among potential respondents if the academic institution is located within the respondent’s own state rather than in another state.

When it comes to research more closely aligned with my research question, Corstange (2016) demonstrates that the perception of academic institutions being the sponsor of a survey has no effect on survey participation in Lebanon. Yet perceiving the US or Canadian Embassy as the sponsor not only leads to higher refusal rates but also—in consequence—to more pro-American attitudes among the respondents willing to answer survey questions sponsored by these embassies.

Tannenberg (2021) and Corstange (2016) both argue that misperceiving the government to be the hidden/secret sponsor of the survey makes respondents less likely to participate and/or to give honest—and hence meaningful—answers. The mechanism behind this effect is fear. Suspecting that the government is secretly sponsoring the survey—most probably in an attempt to identify and later prosecute regime critics—makes citizens less convinced of the anonymity and integrity of the survey and, in consequence, leads them to report more government-friendly positions. Hence, one should expect respondents misperceiving the government to be the sponsor of the survey to demonstrate higher levels of political trust. Yet, as Tannenberg (2021) and, in more detail, Benstead (2018) note, such behavior should be more common in autocratic contexts. However, even in established democracies there are groups of citizens who substantially distrust their government, sometimes—such as the Reichsbuerger in Germany—even claiming the state to be non-existent and the government, in consequence, to be no government at all or illegitimate. In the US, conspiracy theories—often involving domestic political actors—are quite common, with a quarter of the population believing that George W. Bush “orchestrated or knew in advance about the 9/11 attacks” (Uscinski et al. 2016, p. 57). Hence, while the mechanism—fear of a government secretly spying on its population—seems more plausible in autocratic regimes, we should expect it to work for some groups of citizens in democracies as well.

Misperceptions of survey sponsorship


H1: If a respondent perceives the government or an actor close to it as being the sponsor of the survey, s/he will demonstrate higher levels of trust in the government.


H2: The more autocratic the regime in which the respondent lives, the more likely s/he is to show trust in the government.

As noted earlier, governmental performance and its evaluation by citizens also play a large role in levels of political trust: the more citizens are satisfied, the more trust they have in political actors (Isani and Schlipphak 2017a).

Other explanations of political trust


H3: The more a respondent is satisfied with their own economic/living conditions or with those in the country more generally, the more they will show trust in the government.

The degree of societal polarization also seems to play a role (Citrin and Stoker 2018). The more a society is shaped by distrust based on characteristics such as race, ethnicity or religion, the more one should observe distrust in political actors as well. Identity-based factors affecting trust in government institutions have been posited to matter in the Global South (Blaydes and Linzer 2012; Isani and Schlipphak 2017a). The same holds for a feeling of being connected to the community that is represented by governmental actors: the stronger the feeling of belonging together within a state/nation, the higher the levels of trust.



H4a: Higher degrees of societal polarization will result in respondents showing more trust in the government.


H4b: The more a respondent perceives that they belong to the community represented by the government, the more they will show trust in the government.

This paper also controls for other factors that may affect political trust in the African context, turning to bundles of factors identified in the literature as influencing political trust. The paper is interested not only in whether there is an effect of survey sponsorship on self-censored political trust in a bivariate setting, but also in whether this effect holds once other variables affecting political trust are included. Hence, the paper examines the effect of survey sponsorship conditional on (theoretically deduced) contextual and individual-level variables.

Research design


As has been pointed out elsewhere, the Afrobarometer is one of the only known cross-cultural surveys that includes a question on respondents’ perceptions of survey sponsorship (Schlipphak 2013; Tannenberg 2021; Zimbalist 2018). Hence, this dataset has been chosen for analyzing the effects of perceived sponsorship. In addition, the dataset is well suited to testing the moderating effect of regime type, since it includes democracies as well as autocracies. The data are of high quality, with full documentation of survey methodology and fieldwork, and the dataset includes many variables for evaluating data quality. Moreover, the face-to-face surveys are carried out by trained interviewers with support from some of the top social scientists studying the region. A representative sample of between 1200 and 2400 respondents participated in the survey in each country. The questionnaires were administered in the countries’ national and local languages and lasted approximately one hour. This article restricts itself to the merged data file for wave 6, fielded between 2014 and 2016, which includes 36 countries and 53,935 respondents.Footnote 1 This was also the last wave to ask the question about survey sponsorship.


Since the dependent variable is ordinal in nature, several models come to mind for estimating the effect of the independent variables on the dependent variables. An ordinal logit model is estimated first, but the proportional odds assumption is not satisfied according to a Brant (1990) test, so this model is not used.Footnote 2 The paper uses an ordinal generalized linear model (oglm) with a logit link function as its main model, as one does not have to worry about the proportional odds assumption and the coefficients of the independent variables are interpretable by calculating predicted probabilities (Williams 2010).Footnote 3 As a robustness check, the models are also estimated as hierarchical regressions using STATA’s GLLAMM package.Footnote 4 The results are quite similar and are presented in the “Appendix”.

While the general dependent variable is political trust, the paper actually uses four different variables to measure it, allowing for a more fine-grained analysis. Besides a combined variable for governmental trust, the paper uses trust in the president or prime minister, trust in the local government, and trust in the parliament in separate models. The effect of the survey sponsor is expected to be significant for all dependent variables. This paper does not aim to determine which government institutions are most affected by misperceptions regarding the survey sponsor; however, one could expect the survey sponsor to have the strongest effect on trust in the president, with weaker effects on attitudes toward the local government and other less powerful bodies.

H1 expects the perception of the survey sponsorship to influence respondents’ attitudes. I differentiate between respondents who perceive the survey to be carried out by the government and those who perceive otherwise (including the Afrobarometer, a research company, a university or college, an international organization, or a school). As this is the main hypothesis underpinning the paper, I also calculate predicted probabilities to give an idea of the size of the effect.

H2 expects regime type to affect political trust.Footnote 5 To differentiate between democratic and autocratic regimes I opt for scores provided by Freedom House. This has several benefits, of which two are the most important: First, being as parsimonious as possible, the Freedom House Index does not involve itself with as many different dimensions of democracy and autocracy as other indices. Second, by concentrating only on civil and political rights—a disadvantage elsewhere but an advantage for my aim—it captures what the paper actually wants to measure: whether citizens feel free and safe to state their opinion in their country or not.Footnote 6

H3 focuses on the important role of outcomes in shaping respondents’ levels of political trust. Unfortunately, if one takes the effect of perceived survey sponsorship seriously not only for trust but also for other sensitive issues—and Zimbalist (2018) argues that one should—then one should not include variables evaluating the work of the government, the extent of democracy, the fight against corruption or the economic status of the country. All of these indicators might themselves be influenced by fear of the government being the sponsor of the survey. Therefore, to measure respondents’ satisfaction with the current state of affairs, this paper uses the question on respondents’ own living conditions, asking whether a respondent considers their present condition to be very bad, fairly bad, neither good nor bad, fairly good, or very good. The paper expects a positive correlation between better living conditions and political trust (Isani and Schlipphak 2020).

H4 formulates expectations on the effect of societal polarization based on Citrin and Stoker (2018). This paper measures the extent of societal polarization at the contextual as well as the individual level. At the contextual level, I calculate the share of citizens per country who would refuse to live beside a neighbor of a different religion or a different ethnic group. I argue that the higher this average, the greater societal polarization is at the aggregate level. Therefore, I expect this polarization variable to have a negative impact on respondents’ levels of trust in political actors. Furthermore, at the individual level, I observe whether an individual feels connected to the community for which the government and the parliament are responsible, by measuring respondents’ degree of national identification. The higher the values on that variable, the stronger the connection to the community represented by the government, and therefore the higher the levels of trust in that political actor. Consequently, I do not expect this variable to influence trust in the local government.
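As an illustration, the contextual polarization measure described above can be sketched as follows (a minimal example with made-up data; the variable names are illustrative, not actual Afrobarometer codes):

```python
import pandas as pd

# Hypothetical micro-data: 1 = respondent would refuse to live next to a
# neighbor of a different religion / ethnic group, 0 = would not mind.
df = pd.DataFrame({
    "country": ["A", "A", "A", "B", "B", "B"],
    "refuse_religion": [1, 0, 1, 0, 0, 1],
    "refuse_ethnic":   [1, 1, 0, 0, 1, 0],
})

# Each respondent's average refusal score; the country mean of that score
# is then merged back onto every respondent as the contextual measure.
df["refuse_any"] = df[["refuse_religion", "refuse_ethnic"]].mean(axis=1)
df["polarization"] = df.groupby("country")["refuse_any"].transform("mean")
```

With these made-up values, country A ends up more polarized (2/3) than country B (1/3), and the variable varies only across countries, not within them.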


The paper expects personal demographics to influence political trust. Hence, the usual socio-demographics of age, gender, and education are measured. In the literature, there is a debate on whether youth is more strongly correlated with distrust and whether this is a cohort effect or not; the debate has taken place most explicitly in the Journal of Democracy (Alexander and Welzel 2017; Foa and Mounk 2016, 2017a, 2017b; Voeten 2017). Second, education has been shown to influence trust in political actors, with higher levels of education leading to higher levels of trust. Third, the literature suggests that female respondents are more trusting toward political actors than male respondents (Isani and Schlipphak 2017b). In addition, the article also looks at the religiosity of respondents. Religiosity is expected to negatively influence political trust, although previous literature has shown mixed results (Isani and Schlipphak 2017a).

Empirical findings

Descriptive findings

Table 1 shows the descriptive statistics and coding for both the dependent and independent variables. The institutional trust variables are measured on a four-point scale (0–3) of increasing trust. There is neither excessive trust nor distrust in the three institutions, as the means for all three variables are around 1.5. Trust in government, which is the sum of trust in the three institutions, has a mean value of 4.67. Comparing means among the dependent variables, mean trust in the president/prime minister is a bit higher than trust in the local government and parliament.
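The combined index follows directly from the three institutional items; a minimal sketch with hypothetical values (the actual Afrobarometer variable names differ):

```python
import pandas as pd

# Hypothetical 0-3 trust items for the three institutions.
df = pd.DataFrame({
    "trust_president":  [3, 1, 2],
    "trust_parliament": [2, 1, 3],
    "trust_local_gov":  [1, 0, 2],
})

# Combined government trust on a 0-9 scale: the sum of the three items.
items = ["trust_president", "trust_parliament", "trust_local_gov"]
df["trust_gov_combined"] = df[items].sum(axis=1)
```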

Table 1 Descriptive statistics from the Afrobarometer.

Regarding the independent variables, 32% of African citizens incorrectly perceive the survey sponsor to be the government. Hence, there is ample variation in the main independent variable. Overall, in the sample of African countries in the Afrobarometer, the average Freedom House rating is 3.73, which means that on average the countries lie somewhere between democracies and autocracies in terms of civil and political rights. The Afrobarometer sample also seems well balanced in terms of gender and age. The mean education value shows that the average level is below completed primary education, which mirrors the more general African population quite well. On average, the sample is more religious and identifies moderately strongly with a national identity. Self-perceived living conditions have a mean value of 2.72, slightly below “neither good nor bad”.

Table 3 in the “Appendix” depicts the country variation in perceptions of the survey sponsor. There is wide variation in the belief that the government is behind the survey, from as much as 51% in Mali to as low as 5% in Liberia. This signals the need to at least control for the country level in the empirical model, even if one is not able to explain the variation between individual countries. Therefore, all of the main models are estimated with country-fixed effects. It is also worth pointing out that in almost all countries significant numbers of people misperceive the survey sponsor.
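Country-fixed effects of this kind are commonly implemented as country dummies; a minimal sketch with made-up data:

```python
import pandas as pd

# Hypothetical respondent-level data with a country identifier.
df = pd.DataFrame({
    "country": ["Mali", "Liberia", "Mali", "Liberia"],
    "trust":   [2, 1, 3, 0],
})

# One dummy per country, dropping a reference category to avoid perfect
# collinearity with the model's intercept.
dummies = pd.get_dummies(df["country"], prefix="cty", drop_first=True)
X = pd.concat([df, dummies], axis=1)
```

Here `drop_first` drops the alphabetically first category (Liberia), so only `cty_Mali` enters the design matrix; Liberia serves as the reference country.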

Multivariate models

To test the hypotheses, I estimate four ordinal generalized linear models with a logit link function.Footnote 7 Table 2 presents the results of these calculations. For all models, the misperception of government as survey sponsor relates positively and significantly to trust in government, trust in the president/prime minister, local government and parliament. This indicates strong support in favour of H1.

Table 2 Ordinal generalized linear model (OGLM) explaining political trust by survey sponsor misperception.

There is also support for H2: higher levels of authoritarianism correlate with greater trust in government, trust in the president, trust in parliament and trust in the local government. As expected, there is significant support for H3, as better self-reported living conditions are significantly and positively related to greater trust in governmental institutions. There is also strong support for H4a, as higher levels of polarization are significantly related to higher trust in government institutions. Moreover, as expected, a strong national identity (H4b) is positively and significantly related to trust in national institutions but not in the local government.

As far as the control variables are concerned, gender and religiosity are not significantly related to trust in any of the three institutions; however, older people are slightly more likely to trust the government. The more policy-relevant finding is that education is negatively and significantly related to trust in the president and the local government (Fig. 1).

Fig. 1: The effect of survey sponsor misperception on government trust

To probe the effect of survey sponsorship on combined governmental trust, the paper turns to graphing the predicted probabilities, holding all other independent variables at their means. If an individual misperceives the government to be the survey sponsor, his or her predicted probability of having low trust in government institutions (Trust in Government Combined = 0–4) is lower. Conversely, if an individual misperceives the government to be the survey sponsor, his or her predicted probability of having high trust in government institutions (Trust in Government Combined = 6–9) is higher. Misperceiving the survey sponsor, for example, increases one’s probability of having high trust in government (Trust in Government Combined = 9) by more than 3 percentage points (or about 33%), and decreases one’s probability of having low trust in government institutions (Trust in Government Combined = 1) by 2.5 percentage points (or about 25%).
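The direction of these shifts follows mechanically from the ordered-logit setup: a positive coefficient moves probability mass from the lowest toward the highest categories. A sketch with made-up cutpoints and a made-up sponsor coefficient (not the paper's estimates):

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities in an ordered logit: differences of the
    logistic CDF evaluated at successive cutpoints minus the linear
    predictor xb."""
    cdf = lambda z: 1.0 / (1.0 + math.exp(-z))
    cums = [cdf(c - xb) for c in cutpoints] + [1.0]
    return [cums[0]] + [cums[k] - cums[k - 1] for k in range(1, len(cums))]

cutpoints = [-1.0, 0.5, 1.5]                 # made-up thresholds, 4 categories
base = ordered_logit_probs(0.0, cutpoints)   # other covariates at their means
missp = ordered_logit_probs(0.4, cutpoints)  # plus an assumed sponsor effect
```

With these illustrative numbers, the probability of the highest category rises and that of the lowest falls, mirroring the pattern described above.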

Robustness checks and limitations

The paper uses a Spearman rank correlation matrix to examine the independent variables for multicollinearity in Table 4. The table demonstrates that none of the correlations between the independent variables gives cause for concern. To check that the results presented above are not artefacts of model selection, I recalculated the models using multilevel models, as shown in Table 5 in the “Appendix” (Rabe-Hesketh et al. 2004). The results remained mostly the same in terms of the direction and significance of the effects of the independent variables on the dependent variables. Secondly, as shown in Table 6, I checked whether the effect of survey sponsorship on governmental trust is moderated by regime type. The resulting interaction term is not significantly related to governmental trust. In addition, I also estimated the main models without the national identity and polarization variables in Table 7, as these questions were not asked in some of the countries included in the dataset. The results here also remain mostly the same in terms of the direction and significance of the effects. Finally, I used Polity instead of Freedom House scores as independent variables in my main models in Table 8 to check whether measuring democracy differently would affect the results. The Polity scores had an insignificant effect on political trust.
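The multicollinearity screen can be sketched as follows (hypothetical data; the paper's own matrix in Table 4 uses the Afrobarometer variables):

```python
import numpy as np
import pandas as pd

# Hypothetical independent variables.
df = pd.DataFrame({
    "sponsor_misperception": [1, 0, 0, 1, 1, 0],
    "freedom_house":         [5, 3, 6, 2, 5, 4],
    "living_conditions":     [3, 4, 1, 2, 5, 2],
})

# Spearman rank correlation matrix of the predictors.
corr = df.corr(method="spearman")

# A common rule of thumb flags off-diagonal |rho| above roughly 0.8 as a
# potential multicollinearity concern.
off_diag = corr.where(~np.eye(len(corr), dtype=bool))
concern = bool((off_diag.abs() > 0.8).any().any())
```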

One major limitation of the analysis presented here is that the paper only used observational data. As discussed in the literature review, previous work has used experiments to measure the effect of the survey sponsor, but not in the African context. It is much more difficult to make causal inferences from observational data, so a possible direction for future work would be to use experimental methods.


Do survey sponsor misperceptions affect political trust in Africa? Using robust empirical models in a novel context, this article finds that, in line with previous literature, survey sponsor misperceptions make citizens more likely to report higher levels of trust in political actors. The effect of the survey sponsor is significant and substantive even when other variables hypothesized to affect political trust are included in the model. It is noteworthy that respondents living in more authoritarian regimes who agree to participate in the survey show significantly higher trust in the president/prime minister, parliament and local government. Misperceiving the government as being behind the survey significantly affects political trust irrespective of whether the government is democratic or authoritarian. If the fear mechanism works, this would imply that—at least in the context of this data—people may be fearful of democratic as well as authoritarian governments when answering survey questions.

One implication of this research is that survey agencies should try to reduce the absolute number of misperceiving respondents. To do so, one would need to know who exactly these individuals are and whether there are identifying characteristics that could easily be ascertained at the beginning of a survey, or recognized by the interviewer even before conducting the interview. If such characteristics exist, one could think about introducing further stimuli, such as repeatedly emphasizing the survey sponsor, or clearly showing that there really is an academic project coordinator behind the survey—for instance, by showing respondents pictures or websites depicting the coordinator—in order to clear up any misperceptions regarding the survey sponsor. Some preliminary evidence from my side indicates that higher education—that is, more information—might be key here.

As the paper’s empirical findings are not only statistically significant but are also backed by extensive robustness checks that lend further credence to the theoretical claims, I am confident that the paper has presented a strong general picture upon which further research and survey practice can and should be based. I look forward to participating in experiment-based research on understanding and reducing sponsor misperceptions in even greater depth in the future.