How much variance can be explained by ecologists and evolutionary biologists?
The average amount of variance explained by the main factor of interest in ecological and evolutionary studies is an important quantity because it allows evaluation of the general strength of research findings. It also has important implications for the planning of studies. In theory we should be able to explain 100% of the variance in data, but randomness and noise may reduce this amount considerably in biological studies. We performed a meta-analysis using data from 43 published meta-analyses in ecology and evolution, comprising 93 estimates of mean effect size using Pearson's r and 136 estimates using Hedges' d or g. This revealed that (depending on the exact analysis) the mean amount of variance (r2) explained was 2.51–5.42%. The various 95% confidence intervals fell between 1.99 and 7.05%. There was a strong positive relationship between the fail-safe number (the number of null results needed to nullify an effect) and the coefficient of determination (r2) or effect size. Analysis at the level of individual tests of null hypotheses showed that the amount of variance explained by key factors differed among fields, with the largest amount in physiological ecology, lower amounts in ecology and the lowest in evolutionary studies. In all fields, though, the hypothesized relationship (e.g. the main effect of a fixed treatment) explained little of the variation in the trait of interest. Our findings have important implications for the interpretation of scientific studies. Across studies, the average effect size reported was Pearson r=0.180–0.193 and Hedges' d=0.631–0.721. Thus the average sample sizes needed to conclude that a particular relationship is absent, with a power of 80% and α=0.05 (two-tailed), are considerably larger than those usually recorded in studies of evolution and ecology. For example, to detect r=0.193, the required sample size is 207.
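The closing sample-size figure can be roughly reproduced with the standard Fisher z-transform power approximation for a Pearson correlation. The sketch below is an illustration under that assumption, not necessarily the authors' exact method; the approximation gives a value near, though not identical to, the 207 reported for r=0.193.

```python
from math import atanh, ceil
from statistics import NormalDist

def sample_size_for_r(r, alpha=0.05, power=0.80):
    """Approximate two-tailed sample size needed to detect a Pearson
    correlation r, via Fisher's z transform (a textbook approximation,
    assumed here for illustration):
        n = ((z_{alpha/2} + z_{power}) / atanh(r))**2 + 3
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = nd.inv_cdf(power)           # ~0.84 for power = 0.80
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

# Mean effect sizes reported in the abstract, with the variance
# explained (r2) each implies and the approximate n required
for r in (0.180, 0.193):
    print(f"r = {r}: r2 = {r * r:.3f}, n needed ~ {sample_size_for_r(r)}")
```

Note that the smaller mean effect size (r=0.180) requires a noticeably larger sample, and that squaring either r gives a variance explained of 3–4%, consistent with the 2.51–5.42% range reported above.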