Measures of Association: Correlation and Regression

  • Lothar Sachs
Part of the Springer Series in Statistics book series (SSS)

Abstract

In many situations it is desirable to learn something about the association between two attributes of an individual, a material, a product, or a process. In some cases it can be ascertained by theoretical considerations that two attributes are related to each other; the problem then consists of determining the nature and degree of the relation. As a first step, the pairs of values (x_i, y_i) are plotted in a two-dimensional coordinate system. The resulting scatter diagram gives an idea of the dispersion, the form, and the direction of the point “cloud”.
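The scatter-diagram step described above can be sketched in a few lines of code. The following is a minimal illustration and not part of the chapter; it assumes NumPy and Matplotlib and uses made-up data, showing the point cloud together with the Pearson product-moment correlation coefficient and the least-squares line:

  import numpy as np
  import matplotlib.pyplot as plt

  # Illustrative (made-up) paired observations (x_i, y_i); not data from the chapter.
  rng = np.random.default_rng(0)
  x = rng.uniform(0, 10, 30)
  y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, 30)  # linear trend plus noise

  # Pearson product-moment correlation coefficient r
  r = np.corrcoef(x, y)[0, 1]

  # Least-squares regression line y = a + b*x (np.polyfit returns [slope, intercept])
  b, a = np.polyfit(x, y, 1)
  print(f"r = {r:.3f}, intercept a = {a:.3f}, slope b = {b:.3f}")

  # Scatter diagram of the point "cloud" with the fitted line
  plt.scatter(x, y, label="pairs (x_i, y_i)")
  xs = np.linspace(x.min(), x.max(), 100)
  plt.plot(xs, a + b * xs, label="least-squares line")
  plt.xlabel("x")
  plt.ylabel("y")
  plt.legend()
  plt.show()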

Keywords

Cholesterol, Covariance, Sine, Tate, Paral

References

[8:5] Chapter 5

  1. Abbas, S.: Serial correlation coefficient. Bull. Inst. Statist. Res. Tr. 1 (1967), 65–76
  2. Acton, F. S.: Analysis of Straight-Line Data. New York 1959
  3. Anderson, R. L., and Houseman, E. E.: Tables of Orthogonal Polynomial Values Extended to N = 104. Res. Bull. 297, Agricultural Experiment Station, Ames, Iowa 1942 (Reprinted March 1963)
  4. Anderson, T. W.: An Introduction to Multivariate Statistical Analysis. New York 1958
  5. Anderson, T. W., Gupta, S. D., and Styan, G. P. H.: A Bibliography of Multivariate Statistical Analysis. (Oliver and Boyd; pp. 654) Edinburgh and London 1973
  6. Subrahmaniam, K. and K.: Multivariate Analysis, A Selected and Abstracted Bibliography 1957–1972. (M. Dekker; pp. 265) New York 1973
  7. Bancroft, T. A.: Topics in Intermediate Statistical Methods. (Iowa State Univ.) Ames, Iowa 1968
  8. Bartlett, M. S.: Fitting a straight line when both variables are subject to error. Biometrics 5 (1949), 207–212
  9. Barton, D. E., and Casley, D. J.: A quick estimate of the regression coefficient. Biometrika 45 (1958), 431–435
  10. (1) Blomqvist, N.: On a measure of dependence between two random variables. Ann. Math. Statist. 21 (1950), 593–601
  11. (2) Blomqvist, N.: Some tests based on dichotomization. Ann. Math. Statist. 22 (1951), 362–371
  12. Brown, R. G.: Smoothing, Forecasting and Prediction of Discrete Time Series. (Prentice-Hall, pp. 468) London 1962 [cf. E. McKenzie, The Statistician 25 (1976), 3–14]
  13. Carlson, F. D., Sobel, E., and Watson, G. S.: Linear relationships between variables affected by errors. Biometrics 22 (1966), 252–267
  14. Chambers, J. M.: Fitting nonlinear models: numerical techniques. Biometrika 60 (1973), 1–13
  15. See also Murray, W. (Ed.): Numerical Methods for Unconstrained Optimization. (Acad. Press) London 1972
  16. Cohen, J.: A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20 (1960), 37–46 [see Biometrics 36 (1980), 207–216]
  17. Cole, La M. C.: On simplified computations. The American Statistician 13 (February 1959), 20
  18. Cooley, W. W., and Lohnes, P. R.: Multivariate Data Analysis. (Wiley, pp. 400) London 1971
  19. Cornfield, J.: Discriminant functions. Rev. Internat. Statist. Inst. 35 (1967), 142–153 [see also J. Amer. Statist. Assoc. 63 (1968), 1399–1412, Biometrics 35 (1979), 69–85 and 38 (1982), 191–200 and Biometrical Journal 22 (1980), 639–649]
  20. Cowden, D. J., and Rucker, N. L.: Tables for Fitting an Exponential Trend by the Method of Least Squares. Techn. Paper 6, University of North Carolina, Chapel Hill 1965
  21. Cox, D. R., and Snell, E. J.: A general definition of residuals. J. Roy. Statist. Soc. B 30 (1968), 248–275 [see also J. Qual. Technol. 1 (1969), 171–188, 294; Biometrika 58 (1971), 589–594; Biometrics 31 (1975), 387–410; Technometrics 14 (1972), 101–111, 781–790; 15 (1973), 677–695, 697–715; 17 (1975), 1–14]
  22. Cureton, E. E.: Quick fits for the lines y = bx and y = a + bx when errors of observation are present in both variables. The American Statistician 20 (June 1966), 49
  23. Daniel, C., and Wood, F. S. (with J. W. Gorman): Fitting Equations to Data. Computer Analysis of Multifactor Data for Scientists and Engineers. (Wiley-Interscience, pp. 342) New York 1971 [2nd edition, pp. 458, 1980] [see also Applied Statistics 23 (1974), 51–59 and Technometrics 16 (1974), 523–531]
  24. Dempster, A. P.: Elements of Continuous Multivariate Analysis. (Addison-Wesley, pp. 400) Reading, Mass. 1968
  25. Draper, N. R., and Smith, H.: Applied Regression Analysis. 2nd ed. (Wiley; pp. 709) New York 1981
  26. Duncan, D. B.: Multiple comparison methods for comparing regression coefficients. Biometrics 26 (1970), 141–143 (see also B. W. Brown, 143–144)
  27. Dunn, O. J.: A note on confidence bands for a regression line over a finite range. J. Amer. Statist. Assoc. 63 (1968), 1028–1033
  28. Ehrenberg, A. S. C.: Bivariate regression is useless. Applied Statistics 12 (1963), 161–179
  29. Elandt, Regina, C.: Exact and approximate power function of the non-parametric test of tendency. Ann. Math. Statist. 33 (1962), 471–481
  30. Emerson, Ph. L.: Numerical construction of orthogonal polynomials for a general recurrence formula. Biometrics 24 (1968), 695–701
  31. Enderlein, G.: Die Schätzung des Produktmoment-Korrelationsparameters mittels Rangkorrelation. Biometrische Zeitschr. 3 (1961), 199–212
  32. Ferguson, G. A.: Nonparametric Trend Analysis. Montreal 1965
  33. Fisher, R. A.: Statistical Methods for Research Workers, 12th ed. Edinburgh 1954, pp. 197–204
  34. Friedrich, H.: Nomographische Bestimmung und Beurteilung von Regressions- und Korrelationskoeffizienten. Biometrische Zeitschr. 12 (1970), 163–187
  35. Gallant, A. R.: Nonlinear regression. The American Statistician 29 (1975), 73–81, 175 [see also 30 (1976), 44–45]
  36. Gebelein, H., and Ruhenstroth-Bauer, G.: Über den statistischen Vergleich einer Normalkurve und einer Prüfkurve. Die Naturwissenschaften 39 (1952), 457–461
  37. Gibson, Wendy, M., and Jowett, G. H.: “Three-group” regression analysis. Part I. Simple regression analysis. Part II. Multiple regression analysis. Applied Statistics 6 (1957), 114–122 and 189–197
  38. Glasser, G. J., and Winter, R. F.: Critical values of the coefficient of rank correlation for testing the hypothesis of independence. Biometrika 48 (1961), 444–448
  39. Gregg, I. V., Hossel, C. H., and Richardson, J. T.: Mathematical Trend Curves - An Aid to Forecasting. (I.C.I. Monograph No. 1), Edinburgh 1964
  40. Griffin, H. D.: Graphic calculation of Kendall’s tau coefficient. Educ. Psychol. Msmt. 17 (1957), 281–285
  41. Hahn, G. J.: Simultaneous prediction intervals for a regression model. Technometrics 14 (1972), 203–214
  42. Hahn, G. J., and Hendrickson, R. W.: A table of percentage points of the distribution of the largest absolute value of k student t variates and its applications. Biometrika 58 (1971), 323–332
  43. Hocking, R. R., and Pendleton, O. J.: The regression dilemma. Commun. Statist.-Theor. Meth. 12 (1983), 497–527
  44. (1) Hotelling, H.: The selection of variates for use in prediction with some comments on the general problem of nuisance parameters. Ann. Math. Statist. 11 (1940), 271–283
  45. Cf. O. J. Dunn et al., J. Amer. Statist. Assoc. 66 (1971), 904–908, Biometrics 31 (1975), 531–543 and Biometrika 63 (1976), 214–215
  46. (2) Hotelling, H.: New light on the correlation coefficient and its transforms. J. Roy. Statist. Soc. B 15 (1953), 193–232
  47. Hiorns, R. W.: The Fitting of Growth and Allied Curves of the Asymptotic Regression Type by Stevens Method. Tracts for Computers No. 28. Cambridge Univ. Press 1965
  48. Hoerl, A. E., Jr.: Fitting Curves to Data. In J. H. Perry (Ed.): Chemical Business Handbook. (McGraw-Hill) London 1954, 20–55/20–77 (see also 20–16)
  49. (1) Kendall, M. G.: A new measure of rank correlation. Biometrika 30 (1938), 81–93
  50. (2) Kendall, M. G.: Multivariate Analysis. (Griffin; pp. 210) London 1975
  51. (3) Kendall, M. G.: Rank Correlation Methods, 3rd ed. London 1962, pp. 38–41 (4th ed. 1970)
  52. (4) Kendall, M. G.: Ronald Aylmer Fisher, 1890–1962. Biometrika 50 (1963), 1–15
  53. (5) Kendall, M. G.: Time Series. (Griffin; pp. 197) London 1973
  54. Kerrich, J. E.: Fitting the line y = ax when errors of observation are present in both variables. The American Statistician 20 (February 1966), 24
  55. (1) Koller, S.: Statistische Auswertung der Versuchsergebnisse. In Hoppe-Seyler/Thierfelder’s Handb. d. physiologisch- und pathologisch-chemischen Analyse, 10th edition, vol. II, pp. 931–1036, Berlin-Göttingen-Heidelberg 1955, pp. 1002–1004
  56. (2) Koller, S.: Typisierung korrelativer Zusammenhänge. Metrika 6 (1963), 65–75 [see also 17 (1971), 30–42]
  57. (3) Koller, S.: Systematik der statistischen Schlußfehler. Method. Inform. Med. 3 (1964), 113–117
  58. (4) Koller, S.: Graphische Tafeln zur Beurteilung statistischer Zahlen. 3rd edition. Darmstadt 1953 (4th edition 1969)
  59. Konijn, H. S.: On the power of certain tests for independence in bivariate populations. Ann. Math. Statist. 27 (1956), 300–323
  60. Kramer, C. Y.: A First Course in Methods of Multivariate Analysis. (Virginia Polytech. Inst.; pp. 353) Blacksburg, Virginia 1972
  61. Kramer, C. Y., and Jensen, D. R.: Fundamentals of multivariate analysis. Part I–IV. Journal of Quality Technology 1 (1969), 120–133, 189–204, 264–276, 2 (1970), 32–40 and 4 (1972), 177–180
  62. Kres, H.: Statistische Tafeln zur Multivariaten Analysis. (Springer; pp. 431) New York 1975
  63. Krishnaiah, P. R. (Ed.): Multivariate Analysis and Multivariate Analysis II, III. (Academic Press; pp. 592 and 696, 450), New York and London 1966 and 1969, 1973
  64. Kymn, K. O.: The distribution of the sample correlation coefficient under the null hypothesis. Econometrica 36 (1968), 187–189
  65. (1) Lees, Ruth, W., and Lord, F. M.: Nomograph for computing partial correlation coefficients. J. Amer. Statist. Assoc. 56 (1961), 995–997
  66. (2) Lees, Ruth, W., and Lord, F. M.: Corrigenda 57 (1962), 917–918
  67. Lieberson, S.: Non-graphic computation of Kendall’s tau. Amer. Statist. 17 (Oct. 1961), 20–21
  68. (1) Linder, A.: Statistische Methoden für Naturwissenschaftler, Mediziner und Ingenieure. 3rd edition. Basel 1960, page 172
  69. (2) Linder, A.: Anschauliche Deutung und Begründung des Trennverfahrens. Method. Inform. Med. 2 (1963), 30–33
  70. (3) Linder, A.: Trennverfahren bei qualitativen Merkmalen. Metrika 6 (1963), 76–83
  71. Lord, F. M.: Nomograph for computing multiple correlation coefficients. J. Amer. Statist. Assoc. 50 (1955), 1073–1077 [see also Biometrika 59 (1972), 175–189]
  72. Ludwig, R.: Nomogramm zur Prüfung des Produkt-Moment-Korrelationskoeffizienten r. Biometrische Zeitschr. 7 (1965), 94–95
  73. Madansky, A.: The fitting of straight lines when both variables are subject to error. J. Amer. Statist. Assoc. 54 (1959), 173–205 [see also 66 (1971), 587–589 and 77 (1982), 71–79]
  74. (1) Mandel, J.: Fitting a straight line to certain types of cumulative data. J. Amer. Statist. Assoc. 52 (1957), 552–566
  75. (2) Mandel, J.: Estimation of weighting factors in linear regression and analysis of variance. Technometrics 6 (1964), 1–25
  76. Mandel, J., and Linning, F. J.: Study of accuracy in chemical analysis using linear calibration curves. Analyt. Chem. 29 (1957), 743–749
  77. Meyer-Bahlburg, H. F. L.: Spearmans rho als punktbiserialer Korrelationskoeffizient. Biometrische Zeitschr. 11 (1969), 60–66
  78. Miller, R. G.: Simultaneous Statistical Inference. (McGraw-Hill, pp. 272), New York 1966 (Chapter 5, pp. 189–210)
  79. Morrison, D. F.: Multivariate Statistical Methods. 2nd ed. (McGraw-Hill, pp. 425), New York 1979
  80. Natrella, M. G.: Experimental Statistics, National Bureau of Standards Handbook 91, U.S. Govt. Printing Office, Washington, D.C., 1963, pp. 5–31
  81. Neter, J., and Wasserman, W.: Applied Linear Statistical Models. R. D. Irwin, Homewood, IL, 1974
  82. Nowak, S.: in Blalock, H. M., et al.: Quantitative Sociology. Academic Press, New York, 1975, Chapter 3 (pp. 79–132)
  83. Olkin, I., and Pratt, J. W.: Unbiased estimation of certain correlation coefficients. Ann. Math. Statist. 29 (1958), 201–211
  84. Olmstead, P. S., and Tukey, J. W.: A corner test of association. Ann. Math. Statist. 18 (1947), 495–513
  85. Ostle, B., and Mensing, R. W.: Statistics in Research. 3rd edition. (Iowa Univ. Press; pp. 596), Ames, Iowa 1975
  86. Pfanzagl, J.: Über die Parallelität von Zeitreihen. Metrika 6 (1963), 100–113
  87. Plackett, R. L.: Principles of Regression Analysis. Oxford 1960
  88. Potthoff, R. F.: Some Scheffé-type tests for some Behrens-Fisher type regression problems. J. Amer. Statist. Assoc. 60 (1965), 1163–1190
  89. Press, S. J.: Applied Multivariate Analysis. (Holt, Rinehart and Winston; pp. 521) New York 1972
  90. Prince, B. M., and Tate, R. F.: The accuracy of maximum likelihood estimates of correlation for a biserial model. Psychometrika 31 (1966), 85–92
  91. Puri, M. L., and Sen, P. K.: Nonparametric Methods in Multivariate Analysis. (Wiley, pp. 450) London 1971
  92. Quenouille, M. H.: Rapid Statistical Calculations. Griffin, London 1959
  93. Raatz, U.: Die Berechnung des SPEARMANschen Rangkorrelationskoeffizienten aus einer bivariaten Häufigkeitstabelle. Biom. Z. 13 (1971), 208–214
  94. Radhakrishna, S.: Discrimination analysis in medicine. The Statistician 14 (1964), 147–167
  95. (1) Rao, C. R.: Multivariate analysis: an indispensable aid in applied research (with an 81 reference bibliography). Sankhya 22 (1960), 317–338
  96. (2) Rao, C. R.: Linear Statistical Inference and Its Applications. New York 1965 (2nd ed. 1973)
  97. (3) Rao, C. R.: Recent trends of research work in multivariate analysis. Biometrics 28 (1972), 3–22
  98. Robson, D. S.: A simple method for constructing orthogonal polynomials when the independent variable is unequally spaced. Biometrics 15 (1959), 187–191 [see Int. Statist. Rev. 47 (1979), 31–36]
  99. Roos, C. F.: Survey of economic forecasting techniques. Econometrica 23 (1955), 363–395
  100. Roy, S. N.: Some Aspects of Multivariate Analysis. New York and Calcutta 1957
  101. Sachs, L.: Statistische Methoden. 6th revised edition. (Springer, 133 pages) Berlin, Heidelberg, New York 1984, pages 92–94
  102. Sahai, H.: A bibliography on variance components. Int. Statist. Rev. 47 (1979), 177–222
  103. Salzer, H. E., Richards, Ch. H., and Arsham, Isabelle: Table for the Solution of Cubic Equations. New York 1958
  104. Samiuddin, M.: On a test for an assigned value of correlation in a bivariate normal distribution. Biometrika 57 (1970), 461–464
  105. Cf. 65 (1978), 654–656 and K. Stange: Statist. Hefte 14 (1973), 206–236
  106. Saxena, A. K.: Complex multivariate statistical analysis: an annotated bibliography. International Statistical Review 46 (1978), 209–214
  107. Saxena, H. C., and Surendran, P. U.: Statistical Inference. (Chand, pp. 396), Delhi, Bombay, Calcutta 1967 (Chapter 6, 258–342), (2nd ed. 1973)
  108. Schaeffer, M. S., and Levitt, E. E.: Concerning Kendall’s tau, a nonparametric correlation coefficient. Psychol. Bull. 53 (1956), 338–346
  109. Scharf, J.-H.: Was ist Wachstum? Nova Acta Leopoldina NF (Nr. 214) 40 (1974), 9–75 [see also Biom. Z. 16 (1974), 383–399; 23 (1981), 41–54
  110. Kowalski, Ch. J. and K. E. Guire, Growth 38 (1974), 131–169
  111. Peil, J., Gegenbaurs morph. Jb. 120 (1974), 832–853, 862–880; 121 (1975), 163–173, 389–420; 122 (1976), 344–390; 123 (1977), 236–259; 124 (1978), 525–545, 690–714; 125 (1979), 625–660 and Biometrics 35 (1979), 255–271, 835–848; 37 (1981), 383–390]
  112. Seal, H.: Multivariate Statistical Analysis for Biologists. London 1964
  113. Searle, S. R.: Linear Models. (Wiley, pp. 532) New York 1971
  114. (1) Spearman, C.: The proof and measurement of association between two things. Amer. J. Psychol. 15 (1904), 72–101
  115. (2) Spearman, C.: The method “of right and wrong cases” (“constant stimuli”) without Gauss’ formulae. Brit. J. Psychol. 2 (1908), 227–242
  116. Stammberger, A.: Ein Nomogramm zur Beurteilung von Korrelationskoeffizienten. Biometrische Zeitschr. 10 (1968), 80–83
  117. Stilson, D. W., and Campbell, V. N.: A note on calculating tau and average tau on the sampling distribution of average tau with a criterion ranking. J. Amer. Statist. Assoc. 57 (1962), 567–571
  118. Stuart, A.: Calculation of Spearman’s rho for ordered two-way classifications. American Statistician 17 (Oct. 1963), 23–24
  119. Student: Probable error of a correlation coefficient. Biometrika 6 (1908), 302–310
  120. Swanson, P., Leverton, R., Gram, M. R., Roberts, H., and Pesek, I.: Blood values of women: cholesterol. Journal of Gerontology 10 (1955), 41–47
  121. Cited by Snedecor, G. W., Statistical Methods, 5th ed., Ames 1959, p. 430
  122. (1) Tate, R. F.: Correlation between a discrete and a continuous variable. Point-biserial correlation. Ann. Math. Statist. 25 (1954), 603–607
  123. (2) Tate, R. F.: The theory of correlation between two continuous variables when one is dichotomized. Biometrika 42 (1955), 205–216
  124. (3) Tate, R. F.: Applications of correlation models for biserial data. J. Amer. Statist. Assoc. 50 (1955), 1078–1095
  125. (4) Tate, R. F.: Conditional-normal regression models. J. Amer. Statist. Assoc. 61 (1966), 477–489
  126. Thöni, H.: Die nomographische Bestimmung des logarithmischen Durchschnittes von Versuchsdaten und die graphische Ermittlung von Regressionswerten. Experientia 19 (1963), 1–4
  127. Tukey, J. W.: Components in regression. Biometrics 7 (1951), 33–70
  128. Waerden, B. L. van der: Mathematische Statistik. 2nd edition. (Springer, 360 pages), Berlin 1965, page 324
  129. Wagner, G.: Zur Methodik des Vergleichs altersabhängiger Dermatosen. (Zugleich korrelationsstatistische Kritik am sogenannten „Status varicosus“). Zschr. menschl. Vererb.-Konstit.-Lehre 53 (1955), 57–84
  130. Walter, E.: Rangkorrelation und Quadrantenkorrelation. Züchter Sonderh. 6, Die Frühdiagnose in der Züchtung und Züchtungsforschung II (1963), 7–11
  131. Weber, Erna: Grundriß der biologischen Statistik. 7th revised edition. (Fischer, 706 pages), Stuttgart 1972, pages 550–578 [Discr. Anal.: see also Technometrics 17 (1975), 103–109] (8th revised edition 1980)
  132. Williams, E. J.: Regression Analysis. New York 1959
  133. Yule, G. U., and Kendall, M. G.: Introduction to the Theory of Statistics. London 1965, pp. 264–266

[8:5a] Factor analysis

  1. Adam, J., and Enke, H.: Zur Anwendung der Faktorenanalyse als Trennverfahren. Biometr. Zeitschr. 12 (1970), 395–411
  2. Bartholomew, D. J.: Factor analysis for categorical data. J. Roy. Statist. Soc. B 42 (1980), 293–321
  3. Browne, M. W.: A comparison of factor analytic techniques. Psychometrika 33 (1968), 267–334
  4. Corballis, M. C., and Traub, R. E.: Longitudinal factor analysis. Psychometrika 35 (1970), 79–98 [see also 36 (1971), 243–249 and Brit. J. Math. Statist. Psychol. 26 (1973), 90–97]
  5. Derflinger, G.: Neue Iterationsmethoden in der Faktorenanalyse. Biometr. Z. 10 (1968), 58–75
  6. Gollob, H. F.: A statistical model which combines features of factor analytic and analysis of variance techniques. Psychometrika 33 (1968), 73–115
  7. Harman, H. H.: Modern Factor Analysis. 2nd rev. ed. (Univ. of Chicago, pp. 474), Chicago 1967
  8. Jöreskog, K. G.: A general approach to confirmatory maximum likelihood factor analysis. Psychometrika 34 (1969), 183–202 [see also 36 (1971), 109–133, 409–426 and 37 (1972), 243–260, 425–440 as well as Psychol. Bull. 75 (1971), 416–423]
  9. Lawley, D. N., and Maxwell, A. E.: Factor Analysis as a Statistical Method. 2nd ed. (Butterworths; pp. 153) London 1971 [see also Biometrika 60 (1973), 331–338]
  10. McDonald, R. P.: Three common factor models for groups of variables. Psychometrika 35 (1970), 111–128 [see also 401–415 and 39 (1974), 429–444]
  11. Rummel, R. J.: Applied Factor Analysis. (Northwestern Univ. Press, pp. 617) Evanston, Ill. 1970
  12. Sheth, J. N.: Using factor analysis to estimate parameters. J. Amer. Statist. Assoc. 64 (1969), 808–822
  13. Überla, K.: Faktorenanalyse. Eine systematische Einführung in Theorie und Praxis für Psychologen, Mediziner, Wirtschafts- und Sozialwissenschaftler. 2nd edition. (Springer, 399 pages), Berlin-Heidelberg-New York 1971 (see in particular pages 355–363)
  14. Weber, Erna: Einführung in die Faktorenanalyse. (Fischer, 224 pages), Stuttgart 1974

[8:5b] Multiple regression analysis

  1. Abt, K.: On the identification of the significant independent variables in linear models. Metrika 12 (1967), 1–15, 81–96
  2. Anscombe, F. J.: Topics in the investigation of linear relations fitted by the method of least squares. With discussion. J. Roy. Statist. Soc. B 29 (1967), 1–52 [see also A 131 (1968), 265–329]
  3. Beale, E. M. L.: Note on procedures for variable selection in multiple regression. Technometrics 12 (1970), 909–914 [see also 16 (1974), 221–227, 317–320 and Biometrika 54 (1967), 357–366 (see J. Amer. Statist. Assoc. 71 (1976), 249)]
  4. Bliss, C. I.: Statistics in Biology. Vol. 2. (McGraw-Hill, pp. 639), New York 1970, Chapter 18
  5. Cochran, W. G.: Some effects of errors of measurement on multiple correlation. J. Amer. Statist. Assoc. 65 (1970), 22–34
  6. Cramer, E. M.: Significance tests and tests of models in multiple regression. The American Statistician 26 (Oct. 1972), 26–30 [see also 25 (Oct. 1971), 32–34, 25 (Dec. 1971), 37–39 and 26 (April 1972), 31–33 as well as 30 (1976), 85–87]
  7. Darlington, R. B.: Multiple regression in psychological research and practice. Psychological Bulletin 69 (1968), 161–182 [see also 75 (1971), 430–431]
  8. Donner, A.: The relative effectiveness of procedures commonly used in multiple regression analysis for dealing with missing values. Amer. Statist. 36 (1982), 378–381
  9. Draper, N. R., and Smith, H.: Applied Regression Analysis. (Wiley, pp. 407), New York 1966 [2nd edition, pp. 709, 1981]
  10. Dubois, P. H.: Multivariate Correlational Analysis. (Harper and Brothers, pp. 202), New York 1957
  11. Enderlein, G.: Kriterien zur Wahl des Modellansatzes in der Regressionsanalyse mit dem Ziel der optimalen Vorhersage. Biometr. Zeitschr. 12 (1970), 285–308 [see also 13 (1971), 130–156]
  12. Enderlein, G., Reiher, W., and Trommer, R.: Mehrfache lineare Regression, polynomiale Regression und Nichtlinearitätstests. In: Regressionsanalyse und ihre Anwendungen in der Agrarwissenschaft. Vorträge des 2. Biometr. Seminars d. Deutsch. Akad. d. Landwirtschaftswissensch. Berlin, März 1965. Tagungsber. Nr. 87, Berlin 1967, pages 49–78
  13. Folks, J. L., and Antle, C. E.: Straight line confidence regions for linear models. J. Amer. Statist. Assoc. 62 (1967), 1365–1374
  14. Goldberger, A. S.: Topics in Regression Analysis. (Macmillan, pp. 144), New York 1968
  15. Graybill, F. A., and Bowden, D. C.: Linear segment confidence bands for simple linear models. J. Amer. Statist. Assoc. 62 (1967), 403–408
  16. Hahn, G. J., and Shapiro, S. S.: The use and misuse of multiple regression. Industrial Quality Control 23 (1966), 184–189 [see also Applied Statistics 14 (1965), 196–200; 16 (1967), 51–64, 165–172; 23 (1974), 51–59]
  17. Herne, H.: How to cook relationships. The Statistician 17 (1967), 357–370
  18. Hinchen, J. D.: Multiple regression with unbalanced data. J. Qual. Technol. 2 (1970), 1, 22–29
  19. Hocking, R. R.: The analysis and selection of variables in linear regression. Biometrics 32 (1976), 1–49
  20. Huang, D. S.: Regression and Econometric Methods. (Wiley, pp. 274), New York 1970
  21. La Motte, L. R., and Hocking, R. R.: Computational efficiency in the selection of regression variables. Technometrics 12 (1970), 83–93 [see also 13 (1971), 403–408 and 14 (1972), 317–325, 326–340]
  22. Madansky, A.: The fitting of straight lines when both variables are subject to error. J. Amer. Statist. Assoc. 54 (1959), 173–205
  23. Robinson, E. A.: Applied Regression Analysis. (Holden-Day, pp. 250), San Francisco 1969
  24. Rutemiller, H. C., and Bowers, D. A.: Estimation in a heteroscedastic regression model. J. Amer. Statist. Assoc. 63 (1968), 552–557
  25. Schatzoff, M., Tsao, R., and Fienberg, S.: Efficient calculation of all possible regressions. Technometrics 10 (1968), 769–779 [see also Mandel, J. (1972), 317–325]
  26. Seber, G. A. F.: The Linear Hypothesis. A General Theory. (No. 19 of Griffin’s Statistical Monographs and Courses. Ch. Griffin, pp. 120), London 1966
  27. Smillie, K. W.: An Introduction to Regression and Correlation. (Acad. Pr., pp. 168), N.Y. 1966
  28. Thompson, M. L.: Selection of variables in multiple regression. Part I. A review and evaluation. Part II. Chosen procedures, computations and examples. International Statistical Review 46 (1978), 1–19 and 129–146
  29. Toro-Vizcarrondo, C., and Wallace, T. D.: A test of the mean square error criterion for restrictions in linear regression. J. Amer. Statist. Assoc. 63 (1968), 558–572
  30. Ulmo, J.: Problèmes et programmes de regression. Revue de Statistique Appliquée 19 (1971), No. 1, 27–39
  31. Väliaho, H.: A synthetic approach to stepwise regression analysis. Commentationes Physico-Mathematicae 34 (1969), 91–131 [supplemented by 41 (1971), 9–18 and 63–72]
  32. Wiezorke, B.: Auswahlverfahren in der Regressionsanalyse. Metrika 12 (1967), 68–79
  33. Wiorkowski, J. J.: Estimation of the proportion of the variance explained by regression, when the number of parameters in the model may depend on the sample size. Technometrics 12 (1970), 915–919

Copyright information

© Springer-Verlag New York Inc. 1984

Authors and Affiliations

  • Lothar Sachs
  1. Abteilung Medizinische Statistik und Dokumentation im Klinikum der Universität Kiel, Kiel 1, Federal Republic of Germany