
Quality and Quantity, Volume 18, Issue 4, pp 377–385

Multiple indicators: Internal consistency or no necessary relationship?

  • Kenneth A. Bollen
Note

Conclusions

The preceding discussion demonstrates the importance of having an explicit measurement model before analyzing measures. It is not valid to make any blanket statement about whether indicators should correlate until we know what type of indicators they are. If they are effect-indicators that have “well-behaved” errors and are positive measures of a single latent variable, then the internal-consistency view is appropriate and positive correlations among the indicators should occur. If cause-indicators are used, then the no-necessary-relationship (NNR) view is correct; indicator intercorrelations may be positive, negative, or zero. Finally, in general MIMIC (multiple indicators, multiple causes) models, cause-indicators have NNR while effect-indicators should be positively related under the assumptions of the model. In general, a cause- or an effect-indicator may have any type of relation.
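The contrast can be illustrated with a small simulation (not from the article; the variable names and coefficients below are illustrative). Effect-indicators each depend on the same latent variable and so share common variance; cause-indicators jointly determine the latent variable but place no constraint on their mutual correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Effect-indicator model: x1 and x2 are both caused by the latent
# variable eta, so they share variance and must correlate positively.
eta = rng.normal(size=n)
x1 = 0.8 * eta + rng.normal(scale=0.6, size=n)
x2 = 0.7 * eta + rng.normal(scale=0.7, size=n)
r_effect = np.corrcoef(x1, x2)[0, 1]

# Cause-indicator model: z1 and z2 jointly cause eta2. They are drawn
# independently here, so their correlation is near zero even though
# both are valid components of the latent variable.
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
eta2 = 0.6 * z1 + 0.6 * z2 + rng.normal(scale=0.5, size=n)
r_cause = np.corrcoef(z1, z2)[0, 1]

print(f"effect-indicator correlation: {r_effect:.2f}")  # clearly positive
print(f"cause-indicator correlation:  {r_cause:.2f}")   # near zero
```

A factor-analytic or item-total screen applied to z1 and z2 would flag both as "bad" items, even though each is a valid cause-indicator of eta2; this is the practical hazard described in the next paragraph.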

Given the dominance of the internal-consistency perspective these simple results have serious implications. The empirical practice of factor-analyzing items to determine which measures “hang together” makes little sense if some of the indicators are cause-indicators. Similarly, computing “item-total” (cf. Nunnally, 1978, pp. 279–287) correlations as a means to select items for an index is not valid if cause-indicators are present. It seems quite possible that a number of items (or indicators) have not been used in research because of their low or negative correlation with other indicators designed to measure the same concept. If some of these are cause-indicators, researchers may have unknowingly removed valid measures.

On the other hand, these findings are not an excuse to include any indicators of interest in a measure. Ideally, the researcher should decide in advance which are effect- and which are cause-indicators. On the basis of the assumed measurement model, the expected associations may be predicted and tested.

In sum, the advice of Blalock seems particularly appropriate: “One should be especially on guard against procedures that supposedly permit one to appraise the ‘validity’ of an indicator on the basis of magnitudes of correlation coefficients, without the benefit of a specific theoretical model” (Namboodiri et al., 1975, p. 600).



References

  1. Babbie, E. (1983). The Practice of Social Research. Belmont, CA: Wadsworth.
  2. Blalock, H.M. (1964). Causal Inferences in Nonexperimental Research. Chapel Hill, NC: University of North Carolina Press.
  3. Blalock, H.M. (1971). “Causal models involving unmeasured variables in stimulus-response situations”, pp. 335–347 in H.M. Blalock, Jr., ed., Causal Models in the Social Sciences. Chicago, IL: Aldine Press.
  4. Curtis, R.F. and Jackson, E.F. (1962). “Multiple indicators in survey research”, American Journal of Sociology 68: 195–204.
  5. Hauser, R.M. (1973). “Disaggregating a social-psychological model of educational attainment”, pp. 255–284 in A.S. Goldberger and O.D. Duncan, eds., Structural Equation Models in the Social Sciences. New York: Seminar Press.
  6. Hauser, R.M. and Goldberger, A.S. (1971). “The treatment of unobservable variables in path analysis”, pp. 81–117 in H.L. Costner, ed., Sociological Methodology 1971. San Francisco, CA: Jossey-Bass.
  7. Jöreskog, K.G. and Goldberger, A.S. (1975). “Estimation of a model with multiple indicators and multiple causes of a single latent variable”, Journal of the American Statistical Association 70: 631–639.
  8. Miller, A.D. (1971). “Logic of causal analysis: from experimental to nonexperimental designs”, pp. 273–294 in H.M. Blalock, Jr., ed., Causal Models in the Social Sciences. Chicago, IL: Aldine Press.
  9. Namboodiri, N.K., Carter, L.F. and Blalock, H.M., Jr. (1975). Applied Multivariate Analysis and Experimental Design. New York: McGraw-Hill.
  10. Nunnally, J.C. (1978). Psychometric Theory. New York: McGraw-Hill.
  11. Robins, P.K. and West, R.W. (1977). “Measurement errors in the estimation of home value”, Journal of the American Statistical Association 72: 290–294.
  12. Zeller, R.A. and Carmines, E.G. (1980). Measurement in the Social Sciences. Cambridge: Cambridge University Press.

Copyright information

© Elsevier Science Publishers B.V. 1984

Authors and Affiliations

  • Kenneth A. Bollen
    1. Dartmouth College, Hanover, USA
