Each study that is published about the effect of an independent variable on a dependent one, or the strength of the relationship between two variables, constitutes only a single piece in a constantly growing body of evidence. For example, over one hundred studies have been carried out to determine the relationship between employment interview performance and job performance (McDaniel, Whetzel, Schmidt, & Maurer, 1994). Each study yields a measure of the strength and direction of the association, typically in the form of a correlation coefficient. In some studies, the correlation coefficient is statistically significant; in others, it is not. To make sense of the often-conflicting results found in the literature, one can conduct a meta-analysis (e.g., Cooper, 1998; Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunter & Schmidt, 2004; Lipsey & Wilson, 2001; Rosenthal, 1991). In essence, the correlation coefficients extracted from the various studies then become the data for further analysis. For example, if we assume that the observed correlation coefficients differ from each other only because of sampling variability, then the average of the correlation coefficients provides an estimate of the overall validity of employment interviews.
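
To make the basic idea concrete, the following minimal sketch (in Python, with invented correlations and sample sizes) pools study-level correlations the way a simple fixed-effect meta-analysis would: each coefficient is Fisher-z-transformed and averaged with inverse-variance weights, which in the z metric are approximately n - 3. This is only an illustration of the general principle, not the procedure of any particular study mentioned above.

```python
import math

# Hypothetical (correlation, sample size) pairs, one per study.
studies = [(0.25, 120), (0.31, 85), (0.18, 200), (0.40, 60)]

# Fisher's z-transform stabilizes the variance of r: Var(z) ~ 1 / (n - 3).
def fisher_z(r):
    return 0.5 * math.log((1 + r) / (1 - r))

# Fixed-effect pooling: inverse-variance weighted mean in the z metric.
numerator = sum((n - 3) * fisher_z(r) for r, n in studies)
denominator = sum(n - 3 for r, n in studies)
z_pooled = numerator / denominator

# Back-transform the pooled z to the correlation metric.
r_pooled = math.tanh(z_pooled)
print(f"Pooled correlation estimate: {r_pooled:.3f}")
```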

However, if the correlation coefficients of published studies differ systematically from those of unpublished studies, then this estimate may be biased (e.g., if studies with statistically significant results are more likely to be published, then the true correlation may be overestimated). In fact, regardless of whether we use meta-analysis or simply conduct a narrative review to synthesize the relevant literature, the conclusions that we draw may be wrong if the accessible studies (and these typically coincide to a great extent with the ones we find in the published literature) differ systematically from the population of completed studies.
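
A small simulation can make this mechanism visible. In the hypothetical sketch below, a large number of studies all estimate the same true correlation, but nonsignificant results reach the literature only a fraction of the time; the mean of the "published" correlations then overshoots the truth. All values (the true correlation, sample size, and publication rate) are invented for illustration.

```python
import math
import random

random.seed(1)

def mean_published_r(true_r=0.10, n=40, n_studies=2000, publish_nonsig=0.2):
    # Approximate each study's estimate in the Fisher-z metric:
    # z ~ Normal(atanh(true_r), 1 / (n - 3)).
    published = []
    for _ in range(n_studies):
        z = random.gauss(math.atanh(true_r), 1 / math.sqrt(n - 3))
        # Two-sided test of r = 0 at alpha = .05 using the z statistic.
        significant = abs(z) * math.sqrt(n - 3) > 1.96
        # Significant results are always published; nonsignificant
        # results are published only with probability publish_nonsig.
        if significant or random.random() < publish_nonsig:
            published.append(math.tanh(z))
    return sum(published) / len(published)

print(f"True r = 0.10; mean published r = {mean_published_r():.3f}")
```

The mean of the published correlations comes out noticeably above the true value, which is exactly the overestimation described above.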

This is known as the publication bias problem and constitutes the topic of the book Publication bias in meta-analysis: Prevention, assessment and adjustments, edited by Rothstein, Sutton, and Borenstein (2005). This is the first book to address this issue in such detail and is likely to become a standard reference for those who carry out systematic literature reviews. The chapters, which were written by leading experts in the field of research synthesis, summarize a substantial amount of research that has been conducted on the issue of publication bias. The following topics are addressed in the chapters:

  • various forms of publication bias, evidence of its existence, the extent of its influence, and potential causes and consequences of publication bias;

  • statistical techniques for detecting publication bias, for assessing the sensitivity of conclusions to the possible presence of publication bias, and for adjusting meta-analytic estimates for publication bias (one such detection technique is sketched after this list);

  • capabilities of various software packages with respect to these techniques;

  • strategies for eliminating or at least minimizing the influence of publication bias;

  • other forms of missing data or data suppression mechanisms besides publication bias that may bias the conclusions from systematic literature reviews; and

  • other factors that may mimic the appearance of publication bias, but should not be confused with it.
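
As an illustration of the detection techniques listed above, the sketch below implements one widely used approach, Egger's regression test for funnel-plot asymmetry. The book's chapters cover a range of such methods; this particular test is chosen here only as a representative example, and the effect sizes and standard errors are hypothetical.

```python
import statistics

# Hypothetical Fisher-z effect sizes and their standard errors.
effects = [0.35, 0.28, 0.42, 0.15, 0.22, 0.48, 0.31, 0.10]
ses = [0.20, 0.12, 0.25, 0.08, 0.10, 0.30, 0.15, 0.07]

# Egger's test regresses the standardized effect (effect / se) on
# precision (1 / se); an intercept far from zero indicates asymmetry
# in the funnel plot, one possible symptom of publication bias.
y = [e / s for e, s in zip(effects, ses)]
x = [1 / s for s in ses]
x_bar, y_bar = statistics.fmean(x), statistics.fmean(y)
slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
intercept = y_bar - slope * x_bar
print(f"Egger intercept: {intercept:.2f} (far from 0 suggests asymmetry)")
```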

Many of the chapters (especially those dealing with statistical techniques) assume some basic familiarity with meta-analytic methodology. As prior readings, I would therefore suggest The handbook of research synthesis (Cooper & Hedges, 1994), Statistical methods for meta-analysis (Hedges & Olkin, 1985), or Practical meta-analysis (Lipsey & Wilson, 2001). Unfortunately, the meta-analytic approach proposed by Hunter and Schmidt (2004) is not directly addressed in the book. Nevertheless, it must be stressed that researchers using any method of research synthesis need to be aware of the potential influence of publication bias on their conclusions and would therefore benefit greatly from reading this book.

The chapters dealing with the statistical methods are generally accessible to those with undergraduate-level training in statistics. Technical details are largely omitted, so those interested in the more technical aspects of particular methods will have to turn to the original sources (ample references are provided). Most notable for those with a methodological interest is probably the chapter on selection models (written by Larry Hedges and Jack Vevea, who have conducted a substantial amount of research in this area); selection models may be considered the most sophisticated approach for detecting and adjusting for publication bias.
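
To give a flavor of the selection-model idea, here is a deliberately simplified, hypothetical sketch (not the Hedges-Vevea procedure itself): assume that statistically significant results are always published while nonsignificant ones are published only with some relative probability w, and then estimate the underlying effect and w jointly by maximum likelihood over the published studies.

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical published studies: (effect in Fisher-z units, standard error).
studies = [(0.45, 0.20), (0.38, 0.15), (0.52, 0.25), (0.20, 0.08)]

def log_lik(theta, w, crit=1.96):
    # One-step selection model: significant results are published with
    # probability 1, nonsignificant ones with relative probability w.
    ll = 0.0
    for z, se in studies:
        weight = 1.0 if abs(z / se) > crit else w
        # Probability that a study drawn from N(theta, se^2) is significant.
        p_sig = (1 - norm_cdf((crit * se - theta) / se)
                 + norm_cdf((-crit * se - theta) / se))
        denom = p_sig + w * (1 - p_sig)  # normalizes the observed density
        ll += math.log(weight * norm_pdf((z - theta) / se) / se / denom)
    return ll

# Crude grid search for the maximum-likelihood estimates of theta and w.
theta_hat, w_hat = max(
    ((t / 100, k / 20) for t in range(0, 81) for k in range(1, 21)),
    key=lambda p: log_lik(*p),
)
print(f"Bias-adjusted effect: {theta_hat:.2f}, selection weight: {w_hat:.2f}")
```

Because all four invented studies happen to be significant, the fitted model infers strong selection and shrinks the effect estimate relative to a naive weighted average; this conveys the logic of the approach, though none of the refinements discussed in the chapter.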

The use of a standardized notation across the chapters greatly facilitates their readability. Another noteworthy feature is the use of three common examples throughout the book to illustrate the various methods (the examples deal with the effects of teacher expectancy on student intelligence, the relationship between second-hand tobacco smoke and lung cancer, and the validity of employment interviews for predicting job performance). Since the datasets for these examples are provided in the appendix along with some background information, readers can replicate many of the results presented in the chapters.

Besides the datasets, the appendix also includes an annotated bibliography of key articles on publication bias research. The bibliography is given in chronological order, allowing readers to trace methodological developments and the emergence of empirical evidence of publication bias over time. Finally, recognizing that the intended audience comes from various disciplines, a short glossary is included at the end of the appendix, providing definitions of terms and concepts frequently used in the book.

Although publication bias is a concern in all disciplines, most of the empirical evidence regarding its existence and influence comes from the medical literature. This emphasis is noticeable throughout the book. For example, some of the proposed strategies for minimizing the influence of publication bias, such as the registration of clinical trials at their inception, may not transfer easily to other disciplines. Nevertheless, it is apparent that an effort has been made to address various disciplines.

A website has been created to supplement the book, but the address given in the book is incorrect (the actual address is http://www.meta-analysis.com/pages/pub_bias.html). Chapter 1 (which provides a short introduction to the publication bias problem and the contents of the book) and Chapter 11 (on software) are freely available for download at this website. Unfortunately, the website still appears to be under construction, as several links are not working at this point.

In conclusion, the book definitely succeeds in raising readers' awareness of an issue that unfortunately remains underappreciated by those who conduct systematic literature reviews and by researchers in general. Although the book does not delve into issues of epistemology or the philosophy of science, one can easily recognize how publication bias, in its various forms, may seriously threaten the entire scientific method, which involves the acquisition of knowledge through the accumulation of research findings. Sir Isaac Newton once wrote: “If I have seen further, it is by standing on the shoulders of giants.” However, only an unobstructed view will allow us to benefit from the work of those who came before us. Hopefully, this book will be an impetus for all researchers to ensure that the view is as unobstructed as possible.