Measuring Non-electoral Political Participation: Bi-factor Model as a Tool to Extract Dimensions

Political participation is a mainstay of political behavior research. One of the main dilemmas many researchers face pertains to the number of dimensions of political participation, i.e. whether we should model political participation as a unidimensional or multidimensional latent construct. Over the years, scholars have usually favored solutions with more than one dimension of political participation, and they have backed the claim of multiple dimensions with a number of empirical tests. In this paper, I argue that the results from the frequently used testing procedures that rely on model fit inspection and the Kaiser criterion can be very misleading and may result in the extraction of too many dimensions. By applying bi-factor modeling to a European Social Survey dataset, I show that in a majority of countries political participation can be considered an essentially unidimensional latent quantity. I demonstrate that additional dimensions of political participation are very weak and unreliable and that we can neither regress them on external variables nor build composite scores based on them. These findings cast doubt on the conclusions of numerous previous studies in which researchers modeled more than one dimension of political participation.


Introduction
While conducting research on political participation, scholars oftentimes face the decision of how many dimensions to extract. This is a crucial step regardless of whether we want to model political participation as a latent variable in a structural equation model or to build a composite score and run a regression. Even though we can find authors who argue for a unidimensional structure of political participation (e.g. van Deth 1986), the various participatory forms have usually been divided into two or more dimensions, also known as modes (Barnes and Kaase 1979; Inglehart and Catterberg 2002; Teorell et al. 2007; Marien et al. 2010). The preference for more than one dimension has been backed with numerous empirical tests. Yet, once we take a closer look at those testing procedures, we may doubt whether the alleged multiple dimensions are really mirrored by the data. If they are not, the conclusions of numerous studies may need to be reconsidered.
To investigate the number of dimensions of political participation, researchers commonly use two methods: principal component analysis (PCA) and factor analysis. In the case of PCA, they usually run a model and check the number of eigenvalues greater than 1.0 (e.g. Teorell et al. 2007; Theocharis and van Deth 2018). But as shown by Russell (2002) and van der Eijk and Rose (2015), this method often leads to solutions which suffer from overdimensionalisation, i.e. extracting too many factors, and, therefore, should not be used. As to factor analysis, the procedure frequently consists of checking the model fit (e.g. Gibson and Cantijoch 2013; Ohme et al. 2018; Talò and Mannarini 2015). However, in the psychometric literature, where seeking dimensionality is a core enterprise, determining dimensionality via a test of model fit has been criticized for being oversimplistic (e.g. Bentler 2009; ten Berge and Sočan 2004) and for wrongly conflating model fit with model validity (Reise et al. 2018). The problem is that "every set of responses by real individuals is multidimensional to some degree" (Harrison 1986). In practice, whenever we have more than three items, perfect unidimensionality is impossible (ten Berge and Sočan 2004), and a one-factor model can rarely describe data with a reasonably large number of items (Bentler 2009). Still, this does not automatically imply that the construct we are interested in is multidimensional. To treat a construct as having two or more distinct dimensions, these dimensions should uniquely explain variation in item responses. However, even when a multidimensional model fits better than a model with a single dimension, the factors can offer very little unique (unshared) contribution (Gignac and Kretzschmar 2017). If we ignore this and model more than one dimension, we may end up largely explaining measurement error rather than the variance attributable to each dimension.
Consequently, any estimate of the association between that dimension and an external variable will be attenuated or inflated and so will the measures of uncertainty of that estimate (Wiernik et al. 2015).
To find out how many dimensions of political participation to extract, I propose using a bi-factor modeling strategy along with factor strength, internal consistency reliability, and construct replicability indices: Explained Common Variance, omega coefficients, and the H index. The bi-factor model allows us to inspect whether we are dealing with a truly multidimensional construct or a construct that is essentially unidimensional, i.e. one having one major latent dimension and a couple of minor dimensions that can be ignored (Stout 1990). With a bi-factor model, we can disentangle various sources of variance which otherwise often remain indistinguishable. I apply this strategy to the European Social Survey, a dataset frequently employed by students of political participation.
The results show that once we take into account the common variance among the indicators, the additional dimensions of political participation explain very little variance in most of the countries and are likely to be unreplicable across different samples. If we were to build separate subscales for each dimension and score individuals on them, the scores would be severely unreliable, as most of the variance in the unit-weighted composite scores is due to the general trait of political participation. Overall, the additional dimensions of political participation acquire neither sufficient specificity nor sufficient stability to represent empirically identifiable constructs in most cases. They are virtually impossible to interpret, and when they are regressed on external variables (e.g., in a structural equation modeling context), the coefficients will be unreliable at the very least. These findings have direct consequences for how to model political participation, irrespective of whether this is done in a structural equation modeling framework or in the form of composite scores and regression analysis.

Dimensions of Political Participation
When trying to measure political participation, researchers often are not interested in just one form, like signing petitions; they want to reveal a broader picture of how citizens participate in politics. Among other things, they want to know whether young people participate differently than adults (García-Albacete 2014) or whether political trust impacts the choice of forms of participation (Hooghe and Marien 2013). To capture the whole phenomenon of political participation, scholars usually treat political participation as a continuous latent variable and the multiple participatory forms as manifestations of this latent quantity (e.g. Marsh 1974; Verba et al. 1978; Vráblíková 2014). The relation between an observable indicator, x_i, and the unobserved quantity, θ_i, can be expressed with the following equation:

x_i = λθ_i + ε_i  (1)

where λ denotes the factor loading, ε_i denotes a measurement error, and i refers to the unit of observation. The latent formulation of political participation allows us to incorporate into the analysis the idea that survey measures do not resemble the real-world behavior of individuals perfectly and that these measures come with an error. Additionally, and more importantly, the formulation reminds us that an indicator, x, is not the latent variable of interest, θ (Jackman 2008). In the case of political participation, a reported signing of a petition (x), for instance, even if accurately reported, is not political participation (θ); it is related to political participation, and Eq. (1) shows this relationship. The latent construct of political participation is comprised of multiple observable indicators, not just one. If we believe that political participation is made up of many different acts and still equate it with one indicator, we make a translation error, i.e. we fail to translate the theoretical concept into the measurement specification (Fariss et al. 2020; Trochim and Donnelly 2007).
The core issue about conceiving of political participation as a latent variable is the question of the number of dimensions. Definitions of political participation are agnostic as to whether there are single or multiple dimensions. Most of the efforts have been devoted to imposing clear boundaries on what political participation is and what is not (Almond and Verba 1963;Conge 1988;Fox 2014;Pattie et al. 2004;Verba and Nie 1972). For example, in the recent work by van Deth (2014), political participation is defined as an activity or action that is carried out voluntarily by non-professional citizens, and which is located within the realm of politics, government, or the state. Alternatively, the action should be targeted at a subject from one of those realms or aimed at solving a community problem. When even the target is unclear, one should look at the political context or the motives of participants ( van Deth 2014;van Deth and Theocharis 2018). By doing so, we can capture all forms of political participation, both traditional like voting and new ones like online participation.
In practice, these various forms are often divided into two or more dimensions, also known as modes of participation. There is an underlying assumption that people choose forms from one of those dimensions and that people's participatory repertoires rarely consist of acts that stem from different modes. In other words, people tend to participate in forms from either of the modes. This view of political participation as a multidimensional construct has been popularized by numerous authors, who differ as to the number of dimensions and the criteria according to which we should divide the forms (e.g., Barnes and Kaase 1979; Inglehart and Catterberg 2002; Teorell et al. 2007; Marien et al. 2010; Hooghe and Marien 2013). In this article, I concentrate on the division into institutionalized and non-institutionalized forms, which is prevalent in empirical research and which builds on the proposition of Barnes and Kaase (1979). Institutionalized forms are directly related to the institutional process. These forms are usually regulated by the members of the political elite, primarily by political parties. By contrast, non-institutionalized forms are conceived of as having no direct link to the electoral process or to the functioning of political institutions. They are used predominantly by actors who are not members of the political elite and who want to manifest their dissatisfaction with the elite's actions or to impact the decision process. The impact of non-institutionalized forms on the political system is usually indirect (Hooghe and Marien 2013; Marien et al. 2010).
However, some scholars indicate that talking about multiple dimensions is essentially dated. Today, participatory repertoires of citizens are made up of multiple diverse forms, like taking part in demonstrations, consumerism, and petitioning (Norris 2007). Having multiple forms at their disposal, people opt for acts which allow them to express themselves, which match their interests and resources (Bang 2009; van Deth and Theocharis 2018). Politically skilled and policy-oriented citizens will choose whatever means appropriate to influence policies they care about (Dalton 2008). They may combine different forms, like voting and demonstrating, which is not a rare practice nowadays (Norris et al. 2005;van Aelst and Walgrave 2001). Given all this, it makes sense to treat political participation as an essentially unidimensional latent construct.

Data
To solve the controversy around the number of dimensions of political participation, I analyze the battery of participatory items of the 8th edition of the European Social Survey (ESS) (2016) for each country. I use ESS for two reasons. First, ESS is frequently used by scholars researching political participation (e.g., Bäck and Christensen 2016;Kostelka 2014). Second, it includes numerous participatory items, representing both institutionalized and non-institutionalized forms.
From the whole battery of items, I utilize eight items measuring political participation: contacting politicians or government, working for a political party or action group, working for another organization or association, badging, signing petitions, taking part in lawful demonstrations, boycotting products, and posting or sharing things online. Each respondent was asked whether he or she had participated in each form in the past 12 months. This common reference period greatly limits the variation in reported levels of political activity across individuals that is due merely to different timespans. The ESS battery of political participation items was intended to measure a variety of political participation acts. The items were not meant to conform to any particular conceptualization of political participation and, therefore, to form any particular scale (Thomassen 2001). Descriptive statistics for the items can be found in Table 3 in the "Appendix".
For the purpose of this analysis, I exclude voting. Many studies have shown the special character of voting: it is a very frequent activity and therefore an outlier relative to the other indicators (Parry et al. 1992; Marien et al. 2010). The difference in prevalence between voting and other forms of participation often leads to problems with estimating measurement models. For these reasons, voting is often left out when testing political participation scales in practice (e.g. Hooghe and Marien 2013; Theocharis and van Deth 2018; Vráblíková 2014).
The items measuring political participation were recoded into dichotomous variables, with values 1 "participation" and 0 "no participation". Refusals, "No answer", and "Don't know" answers were recoded as missing values and were not used in the analysis. In addition, all respondents who were minors were excluded from the analysis, as they could not take part in some participatory forms for legal reasons.
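As an illustration, this recoding step can be sketched as follows. The item mnemonics (e.g. sgnptit, bctprd) and the 1/2/7/8/9 response coding are assumptions made here to mimic ESS conventions; the data frame is made up, not taken from the actual survey:

```python
import pandas as pd
import numpy as np

# Hypothetical ESS-style responses: 1 = Yes, 2 = No, 7/8/9 = refusal / don't know / no answer.
raw = pd.DataFrame({
    "sgnptit": [1, 2, 2, 7, 1],       # signed a petition
    "bctprd":  [2, 2, 1, 1, 9],       # boycotted products
    "agea":    [34, 17, 52, 41, 23],  # age in years
})

items = ["sgnptit", "bctprd"]
recoded = raw.copy()
# 1 -> 1 (participation), 2 -> 0 (no participation), 7/8/9 -> missing
recoded[items] = raw[items].replace({1: 1, 2: 0, 7: np.nan, 8: np.nan, 9: np.nan})
# Drop minors, who cannot legally take part in some participatory forms
recoded = recoded[recoded["agea"] >= 18]

print(recoded[items].to_dict("list"))
```

Rows with refusal, "Don't know", or "No answer" codes keep the item as missing rather than dropping the whole respondent, matching the listwise treatment described above.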

Classical Methods of Establishing the Number of Dimensions
Before turning to the bi-factor approach, it is worth checking what conclusions we would draw if we relied either on model fit comparison or on PCA with the Kaiser-1 rule. To conduct the model fit comparison, we have to estimate two factor models: a unidimensional model, where all items load on one common factor, and a bidimensional model, where the participatory items are grouped into correlated institutionalized and non-institutionalized modes. Three items (contacting politicians, working for a party, working for an organization) load on the group factor "institutionalized mode". Five items (badging, signing petitions, taking part in public demonstrations, boycotting products, and sharing or posting politics-related content online) load on the group factor "non-institutionalized mode" (Table 1).
By comparing the typically used global fit indices, i.e. the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), the Standardized Root Mean Squared Residual (SRMR), and the Root Mean Squared Error of Approximation (RMSEA), we would conclude that in all but one case (Russia) the bidimensional model fits the data better than the unidimensional model. A Chi-square difference test (not reported here) also indicates that the bidimensional model is to be preferred over the unidimensional one in all the countries but Russia. What is more, whereas CFI, TLI, and SRMR signal a poor fit of the unidimensional model in roughly half of the cases, only 1-4 cases with the bidimensional solution seem to have an unsatisfactory fit. Based on this, we would probably argue that there exist two well-defined, separate dimensions of political participation, which are better reflected in the data than the model with one dimension.
Using the PCA with the Kaiser-1 rule would not change the picture substantially. In six out of 23 countries the number of eigenvalues greater than one equals 1. That is, in these countries we would extract just one dimension of political participation. For all the others, the model with two dimensions would be more appropriate.
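For illustration, the Kaiser-1 rule amounts to counting eigenvalues of the item correlation matrix that exceed 1.0. A minimal sketch on a made-up correlation matrix (two uncorrelated pairs of items; not real ESS correlations):

```python
import numpy as np

# Toy correlation matrix: items 1-2 correlate at 0.6, items 3-4 at 0.6,
# with zero cross-block correlations. Illustrative numbers only.
R = np.array([
    [1.0, 0.6, 0.0, 0.0],
    [0.6, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.6],
    [0.0, 0.0, 0.6, 1.0],
])

eigenvalues = np.linalg.eigvalsh(R)            # eigenvalues in ascending order
n_components = int(np.sum(eigenvalues > 1.0))  # Kaiser criterion: count eigenvalues > 1
print(n_components)
```

Here the eigenvalues are exactly {1.6, 1.6, 0.4, 0.4}, so the rule retains two components. With real survey data the decision is far less clear-cut, which is one reason the criterion tends to overextract.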
Hence, irrespective of whether we use the model fit inspection or the Kaiser-1 rule, we would end up extracting two dimensions of political participation in most of the countries.

Confirmatory Bi-factor Model
To investigate the latent structure of political participation, I use a bi-factor modeling strategy. A major advantage of the bi-factor model is that we can easily partition the variance in item responses between the general factor and the group factors (Reise 2012; Rodriguez et al. 2015). The general factor reflects what is common among all the items. Group factors "represent common factors measured by the items that potentially explain item response variance not accounted for by the general factor" (Reise et al. 2010). In this work, the bi-factor model allows us to check whether the additional dimensions of political participation explain variance in item responses beyond the common variance among all the indicators, and if so, how much.
To identify a bi-factor model we need at least two group factors. For each group factor, we need at least three items that uniquely load on that factor and the general factor (Zinbarg et al. 2007). The model assumes orthogonal relationships between the group factors, i.e. the group factors should be independent of each other. Even though the orthogonality restriction could be relaxed, the estimation in such a case is far from easy and the interpretation of the group factors is rather difficult (Reise et al. 2018). Figure 1 shows a bi-factor model of political participation. Three items (contacting politicians, working for a party, working for an organization) load on the group factor "institutionalized mode" and on the general factor "political participation". Five items (badging, signing petitions, taking part in public demonstrations, boycotting products, and sharing or posting politics-related content online) load on the group factor "non-institutionalized mode" and on the general factor "political participation". The general trait, or general factor, of political participation accounts for the common variance among the indicators, and the group factors account for the variance attributable to the specific modes of participation after controlling for the common variance.
The model for each country was estimated using full-information item factor analysis, where the entire item response matrix was part of the calibration. To be more precise, I employed the dimension reduction EM algorithm as implemented in the bfactor function from the mirt package (Chalmers 2012) in R. I used the factor-analytic metric for the estimates. Factors were scaled using the default option, i.e. standardized by imposing a unit variance identification constraint on the factor variances and by fixing the factor means to zero. The code can be found in the Online Resource 1.

Indices
To answer the question about the number of dimensions of political participation, we need measures that help us (1) quantify the amount of common variance accounted for by the general factor and the group factors, (2) assess how reliable the factors are and, therefore, how replicable they are across studies, and (3) quantify the amount of reliable observed variance in a unit-weighted composite score explained uniquely by each factor. Whereas the first and second measures will drive the decisions on how to model political participation in the structural equation modeling framework, the third will instruct us on how to score individuals: whether we should score them just on the whole scale (the general trait of political participation), only on the subscales (institutionalized and non-institutionalized modes), or on both the whole scale and the subscales. The third measure is particularly important in this case because unit-weighted composite scores in the form of additive indices are widely used in research on political participation (e.g., Dubrow et al. 2008; García-Albacete 2014; Moor and Verhaegen 2020; Vráblíková 2014).
To quantify the amount of common variance attributable to the general factor and the group factors, I use the Explained Common Variance (ECV) index, which is often conceptualized as a factor strength index (Rodriguez et al. 2016). The ECV can be computed by taking the common variance explained by the general factor and dividing it by the common variance explained by the general factor and the group factors (Reise et al. 2010). In the case of the bi-factor model of political participation, ECV can be calculated as:

ECV = Σλ²_PP / (Σλ²_PP + Σλ²_NI + Σλ²_I)  (2)

where λ_PP, λ_NI, and λ_I are the vectors of standardized factor loadings on the general trait of political participation, the group factor "non-institutionalized mode", and the group factor "institutionalized mode", respectively. It is also possible to use ECV to quantify the amount of common variance due to each specific factor; the only thing that changes then is the numerator, which contains the vector of loadings on that specific factor.
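As a minimal numerical sketch, the ECV for the general factor can be computed from standardized loadings as follows. The loadings below are hypothetical, not estimates from the ESS data:

```python
import numpy as np

# Hypothetical standardized loadings for an 8-item bi-factor model:
lam_pp = np.array([0.6, 0.7, 0.5, 0.6, 0.65, 0.55, 0.6, 0.5])  # general factor (all items)
lam_i  = np.array([0.3, 0.35, 0.25])                            # institutionalized (3 items)
lam_ni = np.array([0.2, 0.25, 0.3, 0.2, 0.25])                  # non-institutionalized (5 items)

# Common variance = sum of squared loadings across all factors
common = (lam_pp**2).sum() + (lam_i**2).sum() + (lam_ni**2).sum()
ecv_general = (lam_pp**2).sum() / common
print(round(ecv_general, 3))
```

With these illustrative loadings the general factor accounts for roughly 83% of the common variance, which under the guidelines discussed below would point to an essentially unidimensional construct.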
As shown in a simulation study by Bonifay et al. (2015), the critical values of the ECV depend largely on the percentage of uncontaminated correlations (PUC), i.e. the ratio of the number of correlations between items from different group factors to the total number of correlations (Rodriguez et al. 2015). When PUC is low, which is the case here (54%), ECV values above 0.7 on the general factor indicate that the construct of interest is essentially unidimensional and that we might fit a simpler unidimensional measurement model in the structural equation modeling context without introducing much bias. In such a case, the difference in factor loadings between the general factor and a unidimensional model should be very small, and so should the structural parameter bias. Values lower than 0.7 make fitting a unidimensional model risky because it would introduce an unacceptable amount of bias (Bonifay et al. 2015).

To measure how reliable and replicable the factors are, I use the H index (Gagne and Hancock 2006; Hancock and Mueller 2001). H is a function of the sum, across items, of the ratio of the variance explained by a given factor to the variance left unexplained by that factor. For the general trait of political participation this would be:

H_PP = Σ[λ²_PP,i / (1 - λ²_PP,i)] / (1 + Σ[λ²_PP,i / (1 - λ²_PP,i)])  (3)

H values smaller than 0.7 indicate that the latent factor is poorly defined and likely to be unstable and to change across samples and studies; values from 0.7 to 0.79 are in the acceptable range; and values equal to or above 0.8 indicate a good and well-defined latent factor, i.e. one likely to show stability across samples and studies (Arias et al. 2018; Rodriguez et al. 2015). As stressed by Rodriguez et al. (2016), if the H value for a given factor is low, we cannot trust the structural coefficient between that factor and an external variable.
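The H index can likewise be computed directly from standardized loadings, using the ratio form just described. Again, the loadings are hypothetical:

```python
import numpy as np

# H index (Hancock & Mueller): H = s / (1 + s), where s is the sum over items
# of lambda^2 / (1 - lambda^2). Hypothetical general-factor loadings:
lam = np.array([0.6, 0.7, 0.5, 0.6, 0.65, 0.55, 0.6, 0.5])

s = np.sum(lam**2 / (1 - lam**2))
H = s / (1 + s)
print(round(H, 3))
```

For these illustrative loadings H is about 0.82, i.e. above the 0.8 threshold for a well-defined factor. Note that H grows with both the size of the loadings and the number of items carrying them, which is why weak group factors measured by few items rarely clear the threshold.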
In order to quantify the amount of reliable observed variance in unit-weighted composite scores, I make use of the omega coefficient (ω). Unlike coefficient alpha, a much more popular alternative, the omega coefficient does not require the relations between the items and the latent variable to be essentially tau-equivalent (McDonald 1999; Zinbarg et al. 2005). In the case of a factor model, essential tau-equivalence would mean that the factor loadings are all equal, with the factor intercepts varying (Kline 2016), a highly implausible assumption. Even though omega does not differentiate between various sources of variance, i.e. it represents the proportion of variance in the observed total score attributable to all "modeled" sources of common variance (Revelle and Zinbarg 2009), there exist versions of the omega coefficient which allow us to quantify the reliable variance due to particular factors in a multidimensional solution (Zinbarg et al. 2005, 2007). This is an important feature because, as Fig. 1 shows, in the case of the bi-factor model of political participation, the variance in item responses is explained by two factors simultaneously.
The omega coefficient for the total score (the general trait of political participation) is computed as:

ω = [(Σλ_PP)² + (Σλ_I)² + (Σλ_NI)²] / [(Σλ_PP)² + (Σλ_I)² + (Σλ_NI)² + Σ(1 - h²_i)]  (4)

In the numerator of Eq. (4) we have all the reliable, or common, sources of unit-weighted total score variance, i.e. the squared sum of the standardized loadings on the general factor "political participation" plus the squared sums of the loadings on each group factor. In the denominator we have the reliable variance plus the sum of the items' unique variances, (1 - h²_i), where h²_i denotes the communality of item i. We can calculate the omega coefficient for the subscales in a similar way, by taking into account the subset of items relevant to each group factor. For the subscale "institutionalized mode", we take the top three items, as presented in Fig. 1:

ω(I) = [(Σλ_PP,i∈I)² + (Σλ_I)²] / [(Σλ_PP,i∈I)² + (Σλ_I)² + Σ_i∈I(1 - h²_i)]  (5)

Yet, as mentioned earlier, what goes into the omega coefficient is all modeled sources of variance. If the aim were to quantify the proportion of variance in total scores due to the general trait of political participation only, we would have to use omega hierarchical (ω_H), which treats the variability in scores attributable to the group factors as measurement error (Zinbarg et al. 2005):

ω_H = (Σλ_PP)² / [(Σλ_PP)² + (Σλ_I)² + (Σλ_NI)² + Σ(1 - h²_i)]  (6)

Omega hierarchical can be regarded as a measure of the extent to which the unit-weighted total scores are essentially unidimensional (Rodriguez et al. 2016).
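A minimal sketch of omega, omega hierarchical, and their ratio, computed from the same kind of hypothetical loadings (the zero entries mark items that do not load on a given group factor):

```python
import numpy as np

# Hypothetical standardized bi-factor loadings for 8 items (3 institutionalized,
# 5 non-institutionalized), padded with zeros where an item does not load:
lam_pp = np.array([0.6, 0.7, 0.5, 0.6, 0.65, 0.55, 0.6, 0.5])  # general factor
lam_i  = np.array([0.3, 0.35, 0.25, 0, 0, 0, 0, 0])            # institutionalized group factor
lam_ni = np.array([0, 0, 0, 0.2, 0.25, 0.3, 0.2, 0.25])        # non-institutionalized group factor

h2 = lam_pp**2 + lam_i**2 + lam_ni**2   # item communalities
unique = np.sum(1 - h2)                 # sum of unique variances

reliable = lam_pp.sum()**2 + lam_i.sum()**2 + lam_ni.sum()**2
omega = reliable / (reliable + unique)           # omega total
omega_h = lam_pp.sum()**2 / (reliable + unique)  # omega hierarchical
ratio = omega_h / omega  # share of reliable total-score variance due to the general factor
print(round(omega, 3), round(omega_h, 3), round(ratio, 3))
```

With these illustrative numbers roughly 91% of the reliable total-score variance is due to the general factor, the kind of pattern the paper reports for most countries.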
Equation (6) shows that the only difference between omega and omega hierarchical is that in the numerator there is just one source of reliable variance in unit-weighted total scores, namely that due to the general factor "political participation". To improve interpretability, I also calculate the ratio of omega hierarchical to omega, which denotes the percentage of reliable variance in total scores due to the general factor.
The last problem pertains to the interpretability of subscale scores; namely, to what degree is the interpretation of subscale scores for the institutionalized and non-institutionalized modes confounded by the general trait of political participation? For this type of question, one can use a version of omega, omega subscale (ω_S), which is calculated for the subset of items loading on each group factor. Analogously to omega hierarchical, I compute the ratios of omega subscale to omega calculated for the relevant subset of items.
Here is an example of omega subscale for the subscale "institutionalized mode", where we take into account the top three items, as presented in Fig. 1:

ω_S(I) = (Σλ_I)² / [(Σλ_PP,i∈I)² + (Σλ_I)² + Σ_i∈I(1 - h²_i)]  (7)

Results

Table 2 shows the indices for the bi-factor model of political participation. In 15 out of 24 countries, the ECV value for the general trait of political participation is higher than 0.7, meaning that in those countries the general factor of political participation accounts for more than 70% of the common variance. This is a strong indication that in these countries political participation is essentially unidimensional. Also, we should be able to fit a simpler unidimensional model for these countries without introducing much bias. In the other countries, the general factor still explains more than half of the common variance, except in Finland, where the ECV is below 0.5. The amount of common variance explained by the specific factors is small. On average, the factor representing the institutionalized mode explains 13% of the common variance and the factor representing the non-institutionalized mode 15%. Only in Finland and Iceland is the ECV for a specific mode (the non-institutionalized mode) noticeably higher, exceeding 30%.
H values show that the general trait of political participation is not only strong but also reliable and well-defined. In almost every country this index is higher than 0.8; only in Finland is it below that value, though still within the acceptable range. By contrast, the H values for the specific modes of political participation are all below 0.7, with the exception of the non-institutionalized mode in Finland. This means that they are poorly defined by the set of indicators and that the group factors representing the modes are likely to lack stability across samples. Overall, the ECV and H values suggest that there exists one strong general trait of political participation and two weak and unreliable group factors, i.e. the specific modes of political participation.
Based on this, we can say that the common variance among all the indicators cannot be ignored if political participation is to be modeled in the structural equation modeling framework, and that we should not interpret the structural coefficients between an external variable and the specific modes of participation.
As to the omega ratios, the results are rather unequivocal. In 20 out of 23 countries, the ratio of omega hierarchical to omega is above 0.8, meaning that more than 80% of the reliable unit-weighted total score variance is due to the general factor. The subscales in all the countries are largely confounded by the general factor. Applying the rule of thumb that at least 50% of the reliable variance in unit-weighted subscale scores should be due to the group factor if individuals are to be scored on the subscales (Canivez 2015), we could do so in just five countries: in Germany, Norway, and Slovenia on the subscale "institutionalized mode"; in Finland and Iceland on the subscale "non-institutionalized mode". Still, this would be risky, and it is up to individual researchers whether they want to take that risk.
Note to Table 2 (Indices for the bi-factor model of political participation): Columns two to four present the ECV values for the general factor and the specific factors. Columns five to seven show H for the general factor, the institutionalized mode, and the non-institutionalized mode. Column eight includes the ratio of omega hierarchical to omega. Columns nine and ten present the ratios of omega subscale to omega for the relevant subset of items: column nine for the institutionalized mode and column ten for the non-institutionalized mode.

Hence, in the case of political participation, unit-weighted subscale scores generally do not provide meaningful and reliable information about the modes that is unique from the general trait of political participation. Given the results, building composite scores for the subscales cannot be justified, with the five possible exceptions mentioned earlier. If the subscale scores, e.g. as composite scores, were correlated with an external variable, the correlation would be largely inflated if the external variable predicts the general trait of political participation and a specific mode in the same direction (i.e., enhancing conflation) or attenuated if the external variable predicts them in opposite directions (i.e., suppressive conflation) (Wiernik et al. 2015). Inflation and attenuation would affect the correlation between the total score and an external variable too, but the impact would be significantly smaller since the general trait of political participation accounts for most of the variance. Lastly, as a robustness check, I repeated the procedure on the 9th edition of the ESS (2018). The analysis corroborates the findings based on the 8th edition. Details can be found in the Online Resource 2.
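The conflation mechanism can be illustrated with a toy simulation: an external variable that predicts only the general trait still correlates substantially with a unit-weighted subscale score, because the subscale items load on the general trait as well. All numbers here are illustrative, not drawn from the ESS:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

g = rng.standard_normal(n)            # general trait (e.g., political participation)
s = rng.standard_normal(n)            # group factor (e.g., institutionalized mode)
z = 0.5 * g + rng.standard_normal(n)  # external variable related to g only, not to s

# Three subscale items loading on both the general trait and the group factor
items = [0.6 * g + 0.3 * s + 0.7 * rng.standard_normal(n) for _ in range(3)]
subscale = np.sum(items, axis=0)      # unit-weighted subscale score

r = np.corrcoef(subscale, z)[0, 1]
print(round(r, 2))
```

Even though z is, by construction, unrelated to the group factor s, the subscale score correlates with z at roughly 0.34, purely through the general trait. A researcher interpreting that correlation as evidence about the specific mode would be misled, which is exactly the conflation problem described above.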

Discussion
The main goal of this study was to establish how we should model political participation, as an essentially unidimensional or a multidimensional construct with institutionalized and non-institutionalized modes, both in the structural equation modeling framework and as a unit-weighted composite score. To establish this, I examined the battery of participation items in the European Social Survey using a bi-factor modeling strategy along with factor strength, internal consistency reliability, and construct replicability indices: Explained Common Variance, omega coefficients, and the H index.
Previous research relied strongly on procedures which produced a false impression that we can and should model political participation as a multidimensional construct, composed of separate modes, with the common variance being ignored. Those procedures usually involved a simple inspection of model fit (e.g. Talò and Mannarini 2015) or other shortcuts in deciding on the optimal number of dimensions, such as using PCA with the Kaiser criterion (e.g. Theocharis and van Deth 2018). But as a growing number of methodologists argue (Bentler 2009; ten Berge and Sočan 2004; Reise et al. 2018; van der Eijk and Rose 2015), these frequently employed methods suffer from flaws which may lead to erroneous conclusions about the number of dimensions that we should extract. To model political participation as a multidimensional construct, the dimensions should offer some unique (unshared) contribution to explaining the variance in item responses (Gignac and Kretzschmar 2017).
If we were to decide how to build unit-weighted scores of political participation, i.e. by using the total score (political participation as a whole), subscale scores (institutionalized and non-institutionalized modes), or both, the results are rather unequivocal. In 20 out of 23 countries, 80% of the reliable total score variance was due to the general factor. Subscales in all the countries were largely confounded by the general factor, and only in five instances could we attempt to score individuals on the subscales. Therefore, if modeled as a unit-weighted composite score, political participation should be treated as a whole in most countries. Otherwise, that is, by using subscales for each mode, we would incorporate substantial measurement error, and the observed score correlations would be largely inflated or attenuated as estimates of the relations between the specific modes and external variables. Total scores are affected significantly less by such attenuation and inflation.
If we were to decide how to model political participation in the structural equation modeling framework, the optimal solution is to use a bi-factor model. However, only the general trait of political participation from this model is reliable and well-defined in most of the countries. By contrast, the specific factors representing institutionalized and non-institutionalized modes of participation are poorly defined and unreliable, and, therefore, very likely not replicable across samples. With the bi-factor model, we can regress the general factor on external variables while accounting for the noise caused by the specific factors, which may otherwise distort the structural coefficients with external variables. Any structural coefficient between a specific factor and an external variable would be unreliable at best and should not be interpreted.
Overall, the analysis shows that only the general trait of political participation is interpretable in most countries. The specific factors representing institutionalized and non-institutionalized modes of participation generally acquire neither sufficient specificity nor sufficient stability to represent empirically identifiable constructs. This makes interpretation of the specific modes virtually impossible. The results provide strong evidence that in most countries political participation is an essentially unidimensional construct, with negligible group factors representing specific modes. The multidimensionality of political participation cannot be backed by sufficient empirical evidence, and the multiple dimensions appear to be methodological artifacts.
Future research should concentrate on testing political participation with larger sets of items and with other theoretical divisions of the forms of political participation. Moreover, researchers should systematically investigate the structure of political participation over time, an issue that was beyond the scope of this article.

Supplementary Information
The online version of this article (https://doi.org/10.1007/s11205-021-02637-3) contains supplementary material, which is available to authorized users. This article is published under a Creative Commons licence; to view a copy of the licence, visit http://creativecommons.org/licenses/by/4.0/.