Society, Volume 52, Issue 1, pp 28–34

Why So Few Conservatives and Should We Care?

Symposium: Liberals and Conservatives in Academia

Abstract

We take mild issue with some of the conclusions Gross draws from his research into the political commitments of academics, and we draw attention to other research that suggests there are epistemic costs associated with the political imbalance that Gross observes. We question whether incentives and controls currently existing within the social sciences are sufficient to counter these epistemic costs.

Keywords

Sociology of science; Mertonian norms; Marketplace of ideas; Bias

Professions are inherently pretentious: they lay claim to special knowledge not available to the mass public, and they portray properly certified professionals as possessing the cognitive and personal-integrity prerequisites for applying that knowledge (in the case of medicine or law) or even for advancing that knowledge (in the case of science) (Bell 1973; Wilensky 1964). Occasionally, scandals erupt and puncture the pretensions. The Roman Catholic priesthood’s commitment to celibacy is supposed to signal that priests are committed solely to the spiritual care of their parishioners (Daly 2009; Schmidt 2013), but it turns out that priests have the same libidinal needs as regular mortals. Lawyers, medical doctors and accountants make less grandiose claims for themselves, but they too have been beset by reputation-tarnishing scandals that imply even high-status members of the profession can be lured to compromise client or patient interests for personal gain (Dubois et al. 2012; Roiphe 2006, 2012).

Behavioral and social scientists cultivate their own pretensions. When representatives of the field go before Congressional committees to argue (dare we say “lobby”?) for public funding for research, they claim to represent an intellectual priesthood of sorts. They are 100 % committed to pursuit of the truth—and to letting the chips fall where they may. And they recognize that each individual scientist may be fallible, so they organize themselves into self-correcting epistemic communities that supposedly embrace Merton’s (1942) “CUDOS” norms (Bell 1973; Mahoney 1976; Ziman 2000): Communism (data are public property as soon as published); Universalism (the race or gender or politics of the investigator are irrelevant to judging the truth claims of the investigator); Disinterestedness (investigators are supposed to apply the same standards of evidence and proof to competing hypotheses); and Organized Skepticism (scientists hold each other accountable for observing these demanding norms—via peer review processes in hiring, publication and grant awards). In short, our representatives tell Congressional committees, “You can trust us—collectively, even if not always individually.”

In our experience, behavioral and social scientists respond much like other professionals to critiques that cut to the core of their professional self-images. It is tacky, inappropriately ad hominem, to challenge the disinterested truth-seeker persona—to suggest that researcher x may have found effect y because he or she has a political ax to grind.

But is it? Most behavioral and social scientists we know are no longer orthodox positivists — at least not when considering the work of others. We live in a post-Kuhnian/post-Lakatosian world, in which it is generally conceded that hypothesis testing is a complex process and that it is not unusual for investigators committed to a research program to display remarkable ingenuity in neutralizing dissonant findings and finding reasons to turn a blind eye to weaknesses in confirmatory evidence (Slife and Williams 1995; Ziman 1995, 2000) — a concession reinforced by the now-large bodies of research in social psychology and micro-sociology on motivated reasoning, which imply that biased information processing is wired deeply into human nature and organizational practices (Lilienfeld 2010; Lilienfeld et al. 2009; Nickerson 1998). It seems odd for social scientists to take offense when questions are raised about political motives, given that they subscribe to philosophies of science in which extra-scientific assumptions often guide inquiry and endorse research findings that highlight flaws in human information processing and cast doubt on the impartiality of science (Faust 1984; Hancock 2012; Greenwald 2012; Lilienfeld 2010, 2012; Mahoney 1976; Proctor and Capaldi 2006).

We suspect that many of our colleagues are suffering from a yet-to-be-fully-acknowledged cognitive dissonance between their professional pretensions and a less-than-pristine epistemological reality (Boring 1964). It would be embarrassing if important segments of the behavioral and social sciences were an ideologically rigged game in which overwhelmingly liberal investigators feel free to pursue research programs that dovetail suspiciously smoothly with their political agenda. It would undercut our idealized self-image as an intellectual priesthood. And it would be awfully awkward to explain before those check-dispensing Congressional committees.

Gross (2013) ventures into taboo territory in writing a book that acknowledges the ideological lopsidedness of the behavioral and social sciences and that explores possible explanations and implications. The picture that emerges is moderately embarrassing, but that picture, as painted by Gross, is not epistemologically damning and leaves intact many of the pretensions of the scientific professions. By choosing to place great weight on shaky evidence about the sources and effects of academia’s ideological imbalance, Gross can tell a story in which conservatives primarily choose not to enter elite classrooms or not to study certain fields of research, while the liberals who control both the classrooms and the laboratories strive to be fair and objective. Gross gives voice to some dissenters within his sample, acknowledges that his evidence is limited, and concedes that there may be epistemic costs from the ideological lopsidedness of the professoriate, but he does not – in our view – explore the full dimensions of the problem revealed by his survey of the academy’s ideological terrain.

We divide our reaction to Gross’s book into three parts. First, we revisit the explanations Gross considers for why conservatives appear to be so dramatically underrepresented in the social sciences; we take no position on the correct explanation, but we do explain why the evidence Gross offers for discounting the discrimination explanation is weak. Second, we discuss how ideological lopsidedness affects what research gets performed, published and funded. Finally, we consider whether the epistemic costs associated with political bias are net positive, neutral, or negative. Liberals often wisely doubt the ability of unregulated markets to produce socially beneficial results, and we posit that the same skepticism should be applied to the market for social scientific evidence.

Are Liberal Academics Really So Fair-Minded?

Gross (2013) does an admirable job of considering possible explanations for why conservatives constitute such a small percentage of professors in many academic departments, but these hypotheses are difficult to test, and Gross’s evidence for and against them is often limited and weak. Gross’s treatment of the discrimination explanation provides a good illustration of why a diversity of viewpoints can be so helpful in the development of social scientific knowledge.

In discounting the role of discrimination as an explanation for the ideological skew he finds within the academy, Gross relies heavily on his audit study of information requests sent to directors of graduate studies by students who had purportedly worked for the 2008 presidential campaign of McCain or Obama. What is so surprising about this study, from the perspective of two social psychologists, is not its results but that Gross would believe he had conducted a good test for ideological discrimination. The e-mail messages used by Gross (which he helpfully supplies in the notes to his book) contain large amounts of individuating information, information that personalizes and ingratiates the message sender to the recipient and thus greatly reduces the likelihood of stereotyping and discrimination (see, e.g., Kunda and Thagard 1996; Singletary and Hebl 2009). Moreover, much of the information conveyed in the message fits a stereotype of a liberal rather than a conservative, or at least cuts against a stereotype of the sender as a socially conservative, unthinking right-wing ideologue: the McCain volunteer graduated from UCSB as a sociology major, has an interest in studying the sociology of culture, performed volunteer work to be “well-rounded,” and has the transcript of a “serious student.” To make the odds of observing discrimination even smaller, the recipients/potential discriminators were not anonymous and were potentially accountable for their reactions to both the sender and higher-ups in their own departments and universities—conditions that further reduce the likelihood of observing discrimination by the directors of graduate studies (see, e.g., Self et al. 2014). It would have been quite surprising and disturbing if, under these conditions, Gross had found much evidence of discrimination against the student who had volunteered for McCain’s campaign. We mention these weaknesses in Gross’s study not to argue that alternative methods would have yielded greater evidence of discrimination against conservatives, but to illustrate how studies can be accidentally or intentionally designed to obtain results that confirm a pleasing hypothesis or contradict a disturbing one.

We see a similar lack of self-critical review in Gross’s heavy reliance on interview data and cautious acceptance of responses professing objectivity in the treatment of ideas and fairness in the treatment of conservatives. Had Gross conducted interviews of a small sample of employers in a highly segregated industry, asking them to disclose how biased they were against minorities, and then submitted an article to a psychology journal claiming on the basis of these data that bias plays only a small part in the under-representation problem, we are confident that one or more of the reviewers would recommend rejection on grounds that self-reports are unreliable measures of bias and discrimination. Psychologists and sociologists often distrust explicit measures of bias, such as interview questions, on grounds that they suffer from social desirability bias and are incapable of detecting subtle and unconscious forms of bias (e.g., Greenwald and Krieger 2006; Quillian 2006). It would be odd if, on the basis of the data provided, it were possible to obtain a professional consensus in support of both Gross’s preferred self-selection explanation for the under-representation of conservatives in social science and unconscious-bias explanations for the under-representation of minorities in other lines of work. One would need to argue that social scientists are more honest, more aware of their own biases, or somehow immune to the effects of unconscious bias, or concede that unconscious bias is not as inexorable as it is sometimes portrayed. It should be worrisome when the thresholds of proof that social scientists use in evaluating claims of bias and discrimination depend on “whose ox is being gored” (Tetlock and Mitchell 2009).

Do Ideological Imbalances Affect Scientific Production?

Putting aside causal questions, and how embarrassing conservative under-representation should be to disciplines that claim to be driven by purely epistemic goals when they lobby for public support for research, there is the more pressing question of whether this under-representation has negative implications for the development of scientific knowledge, and for public policy derived from that knowledge. In Chapter 5 of his book, Gross discusses the “knowledge-politics problem,” but he focuses primarily on its implications for teaching rather than research (which is understandable given right-wing pundits’ attacks on the “liberal classroom”). Gross devotes most of Chapter 5 to describing the mindsets of his interviewees and how they deal with the risk that their politics might intrude on the knowledge they impart in the classroom or develop in the laboratory; he does not critically compare what his interviewees believe with what the empirical literature on the objectivity of scientific research and review processes demonstrates.

Many studies have documented that the characteristics of researchers and reviewers are associated with the outputs of the scientific process – although the prevalence, size and causes of these associations are often debated. For instance, Eagly and Carli (1981) found that male investigators reported larger differences between men and women in persuasibility and conformity than did female investigators; Sherwood and Nataupsky (1968) found an association between investigator socioeconomic background and the reported size of racial differences in intelligence; and Russell et al. (1994) found that industrial-organizational (I-O) psychologists who worked for academic institutions were more likely to publish results critical of personnel selection instruments (i.e., they reported lower criterion-related validities) than I-O psychologists working for private-sector companies. Debates are common about whether women, minorities, junior scholars, and professors at lower-prestige institutions face greater obstacles in publishing (e.g., Budden et al. 2008; Ceci and Peters 1982, 1984; Primack et al. 2009; Ross et al. 2006) or in receiving funding (e.g., Viner et al. 2004; Wennerås and Wold 1997; Wessely 1998; but see Marsh et al. 2009). Commitment to a particular theory or to an overarching theoretical perspective has been found to predict the level of scrutiny applied to the results of submitted manuscripts (e.g., Koehler 1993; Mahoney 1987), and scientific contributions are surely not the only determinant of obtaining a place on the editorial boards of many journals (e.g., Bedeian et al. 2009).

Although a number of the individual characteristics of scientists that have been studied correlate with scientists’ political orientation, only a few studies have directly examined how the politics of a researcher or the political valence of a research project affects scientific outcomes. Ceci et al. (1985) submitted matched research proposals to institutional review boards, with one set aimed at examining discrimination against women or racial minorities and one set aimed at examining reverse discrimination against White males. The reverse-discrimination proposals were approved at lower rates, even though they contained no more defects than the discrimination proposals, and deliberations about the reverse-discrimination proposals elicited more explicit political criticism by IRB members. Ceci et al. (1985) concluded that “IRB deliberations reflect the sociopolitical ideologies of their members in ways not entirely congruent with the federal mandate” (p. 1000). Abramowitz et al. (1975) submitted manuscripts that were identical except that half presented political activists as psychologically healthier than non-activists and half presented the reverse pattern of results. The version of the manuscript in line with reviewer values was rated higher on publishability and scientific merit. More recently, Inbar and Lammers (2012) surveyed social and personality psychologists about the role of politics in their field: over 18 % reported being at least somewhat willing to discriminate against conservatives in their manuscript reviews, and over 23 % were at least somewhat willing to do so in their grant reviews (but see Skitka 2012, for a critique of the survey). Hunt (1999) and Lilienfeld (2002) provide case studies of research greatly affected by political forces, from both inside and outside the profession and from both the left and the right.

These studies indicate that political bias on the part of science’s gatekeepers can have a homogenizing effect on scientific production, and an investigator’s own politics are also likely to affect what questions get studied and how. Many years ago, we discussed how psychologists who study social justice and inequality tend to frame research questions in ways that favor liberal conceptions of justice and portray conservative counter-views as the product of defective cognitions and selfish motivations (Tetlock and Mitchell 1993; for some signs of positive change on this front, see Haidt and Graham 2009; Wetherell et al. 2013). Redding (2001) provides additional examples of areas of psychological research where the predominance of liberal views appears to have affected research design and results. That investigator interests and commitments—both theoretical and ideological—affect the content of social scientific output is surely not controversial, given the socially constructed nature of many social scientific constructs, but there is ample room for politics to intrude on the hard sciences as well (witness debates over the role and impartiality of science with respect to global warming and the safety of the morning-after pill), notwithstanding the claims of disinterest and objectivity by many of Gross’s interviewees from the hard sciences.

To see how these biases can play out, consider the case of Gross’s audit study. Gross probably can offer good reasons for the methodological choices he made, and conscious bias in favor of the null hypothesis was probably not at work, but his study provides at best weak evidence against the hypothesis that conservative students face discrimination when applying to graduate programs. Nonetheless, we predict that his audit study, with its soothing message for liberal academics, will find a home in a respected sociology or psychology journal, because reviewers in those fields are likely to be sympathetic to the results and are unlikely to detect or emphasize flaws in the study. But a biased editorial review process is just one part of a system of self-reinforcing bias. Gross’s study, despite its limitations, will be tendentiously cited by liberal academics for the broad proposition that ideological discrimination plays little role in the under-representation of conservatives in the academy, and the biased editorial review process will ensure that other papers confirming the results are published but disconfirming results are subjected to rigorous review. And the journal’s conventions, such as a bias in favor of statistically significant results, a preference for original research over replications, and a policy against publishing comments on prior studies, will further serve to protect the empirical status quo. With the uphill battle facing researchers who doubt Gross’s findings, not only from journals but also from funding agencies and from tenure and promotion committees likely to be dominated by liberals, few skeptical researchers will conclude that this is an area of study worth their time.

Perhaps this imagined future is too optimistic about the prospects for Gross’s study and too pessimistic about the fate of contrary results. We know from our own experiences that, with enough determination, research findings questioning dominant paradigms (paradigms whose results just happen to coincide with egalitarian worldviews) can be published in prominent psychology journals (e.g., Oswald et al. 2013). Perhaps scientists are exceptional not only in their ability to avoid subtle and unconscious biases but also in their ability to find and apply measures of merit. Perhaps scientific norms and practices work as intended, forcing extraneous influences out of the scientific process in most cases and leaving the occasional outlier to be detected by the disgruntled author determined to show that the review process that rejected his paper was corrupt. Some will say, in short, that Tetlock and Mitchell are making mountains out of molehills, and are providing ammunition to conservatives who cynically want to exploit the political imbalance within academia to cast doubt on scientific findings that conflict with the conservative agenda. It is to this question of the aggregate epistemic costs associated with liberal dominance of academic research institutions that we now turn.

But in the End, Does It Really Matter?

We readily admit that estimating the epistemic costs of scientists’ political biases is nearly impossible, but it is useful to contrast three alternative views of the overall impact of this bias on the scientific community, to help put in perspective the upsides and downsides of taking no action to correct the imbalance documented by Gross.

Net Gain: Liberal Bias in the Scientific Community Increases the Reliability of Scientific Knowledge

More conservative researchers, or more testing of conservative-backed ideas, would not increase the accuracy or integrity of science. Conservatives raise frivolous objections that slow the advance of knowledge, making it harder to discover, for instance, covert forms of prejudice such as symbolic or unconscious racism and the institutionalized forms of racism that perpetuate societal inequalities. Conservative ideas about the market and human nature have repeatedly been falsified, rendering the promotion of “conservative science” not only oxymoronic but also nothing more than an attempt to justify the socioeconomic status quo (Fox 2011; Mooney 2006). Recent experience with economists, who tend to be much more conservative than the average social scientist, demonstrates the truth of this net-gain view: after it was revealed that the economists Rogoff and Reinhart had relied on shoddy work to justify austerity measures in response to the financial crisis of 2008, they continued to cling stubbornly to the view that high public debt-to-GDP ratios greatly hinder economic growth and that austerity was thus called for despite its costs to the middle and lower classes (Herndon et al. 2013; Pollin and Ash 2013). The conduct of Rogoff and Reinhart demonstrates the fundamental mismatch between conservative psychology and the requirements of open scientific inquiry: liberals are cognitively flexible and open to new ideas and evidence, are comfortable with uncertainty, and are tolerant of competing views; conservatives are none of those things (e.g., Jost et al. 2003, 2007; Thórisdóttir and Jost 2011).

Over time, science has become dominated by liberals because truth and the pursuit of truth favor liberal beliefs and values; those who cling to conservative beliefs and values get penalized in a community that prizes truth. This view is consistent with Gross’s favored self-selection explanation for conservative under-representation: conservatives select out of academia because they realize the scientific endeavor is not congenial to their preferences, and because they simply do not have what it takes to be good scientists. We should not worry that this opting out leads to liberal dominance of academic institutions, however, because liberals, given their psychological make-up, exercise a benevolent dictatorship over the kingdom of knowledge.

As should be apparent, this net-gain view has a circular character, begging the very question it supposedly answers: can we really count on a commitment to abstract liberal values to overcome the effects of localized political bias when the public policy consequences of a particular scientific result become apparent? No doubt right-wing attacks on science occur, and a history can be told in which conservatives are hostile to science, but a counter-history can be told as well in which the left and left-leaning professors take aim at scientific research that threatens progressive values and beliefs (e.g., Hunt 1999; Kabat 2008; Medvedev 1978): research on extrinsic versus intrinsic motivation, research on the role of genetics in intelligence and behavior, research into whether criticisms of high-achieving Blacks for “acting White” have adverse consequences, research on second-hand smoke and environmental causes of cancer, and any research traveling under the banner of evolutionary psychology. It is useful to keep in mind that, before the Bush II presidency and Mooney’s (2006) book about the “Republican War on Science,” many scientists would have identified the radical left, not Republicans, as the main source of know-nothing attacks on science (see, e.g., Koertge 1998).

We thus ultimately find the net-gain position implausible because it relies too heavily on discredited views of scientific rationality and requires one to accept that nature just happens to produce scientific results that more often than not support a liberal political agenda (for a strong version of this claim in the domain of morality, see Harris 2011). The more interesting question, once one concedes that both left-wing and right-wing politics influence scientific research, is whether there is adequate competition among the funders and producers of left-wing-friendly and right-wing-friendly science to ensure that external measures of accuracy and utility, rather than politics, select scientific winners. This view, that scientific reliability depends not on investigator disinterest and rationality but on vigorous viewpoint competition, has important proponents.

Net Neutral: The Scientific Process Is in the Long Run Self-Correcting

Scientific competition often occurs both within and across academic disciplines and extends outside the academy. Although psychology and sociology are dominated by liberal viewpoints, conservatives have much greater say in economics (which has a disproportionate influence on public policy among the behavioral and social sciences), and all of these disciplines often attack the same problems from competing theoretical and methodological perspectives. Particularly within the natural and physical sciences, much important research occurs in the private sector, and scientists in these settings tend to be much more open to fiscally conservative ideas. As information has become freer, inter-disciplinary communication and competition have become more common, as reflected, for example, in the growing influence of experimental methods and psychological research within economics, where both experimental and behavioral economics are now thriving sub-fields.

This broader understanding of science and its sources is important to those who believe that science advances through idea advocacy and competition, not through the dispassionate pursuit of truth (e.g., Hull 1988; Mitroff 1974, 1980). Mitroff (1974), for instance, would replace Merton’s universalism and disinterestedness norms with norms of emotional commitment, dogmatism and identity-based rejoinder in the face of challenges to one’s scientific claims and evidence, for he believes that committed advocacy is necessary to ensure that controversial but correct ideas are not discarded too soon. A proponent of a competition model of science does not embrace Gross’s findings the way a proponent of the net-gain view does, but neither is she terribly worried by them.

Net Loss: Liberal Dominance of Many Scientific Fields Has Adverse Effects on the Reliability of Scientific Knowledge

The competition model of science, in which political biases wash out, is an appealing solution to the problem of theory-laden observations (few conservatives would see “system-justifying symbolic racists” as a fair characterization of their views). But this model can work only where there is some minimal diversity of viewpoints to give rise to competition. As Solomon (1992), who is sympathetic to the advocacy model, notes, cognitive and motivational biases can be counted on to produce a distribution of research effort that drives scientific competition only where “differences in individual experience and prior belief arise” (p. 452). In some areas of the social sciences, it is easier to find an anarchist than a conservative. It is not the personal political values of researchers that matter so much as the willingness of researchers to challenge orthodox ideas within a field, but if the costs of dissent outweigh its benefits, then scientific competition can never drive out spurious results produced by political bias rather than by true empirical causes and effects.

The state of the scientific world one believes we inhabit will likely determine whether one believes that Gross’s findings call for no response or an active response. If the effects of the ideological skew that Gross identifies are net beneficial, then ideological purification should continue until its natural end point: the de facto extinction of conservatism among the disciplines currently dominated by liberals. If the effects of ideological skew are net harmful, and if Gross’s relatively benign self-selection explanation for the skew is correct, then the trend toward the ideological purification of different disciplines should not be irreversible. For if the effects are harmful enough and avenues of funding and publication are not foreclosed to those who challenge liberal orthodoxy, then the epistemic opportunity costs of imbalance should eventually become painfully obvious to moderates in the field who will see opportunities for advancement by making discoveries that most of the field would fail to recognize or be unwilling to pursue.

We worry, however, that the cross-disciplinary pressures are not sufficient and that ideological purification has already advanced so far in some fields that the competition model of science, in which political biases from the left and right eventually wash out, cannot function effectively. If that worry is well-founded, then what is needed is top-down regulation of science production within these fields, with affirmative actions taken not to hire more conservatives but to encourage and reward theoretical and methodological pluralism and to ensure that the gatekeepers of science make decisions on the basis of identity-blind procedures. Exactly what those procedures should look like, and how they can be imposed, would require a much longer paper and another debate (for some initial thoughts, see Tetlock and Mitchell 2009). Suffice it to say here that, in our view, government organizations that fund and regulate science should be conducting tournaments that require transparency in predictions, methods, data and results, that require researchers to declare ex ante their priors and to state how surprising different results would be, and that impose external, objective measures of success. Building a collective Bayesian element into such tournaments would ensure that researchers who demonstrate the success of unpopular ideas, or who disconfirm widely-held beliefs, receive a boost in their rankings, while researchers who merely confirm received wisdom receive little in return. An incentive system such as this might even induce conservatives to re-enter fields that Gross tells us they have been abandoning.
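
To make the incentive logic concrete, the following is a minimal sketch, in Python, of one way such a surprise-weighted scoring scheme could work. It is an illustrative assumption on our part, not a mechanism specified in this essay or in Gross’s book: the function names, the logarithmic scoring rule, and the particular surprise weighting are all choices made for the example.

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Logarithmic proper scoring rule: rewards well-calibrated probabilities."""
    p = min(max(p, 1e-6), 1 - 1e-6)  # guard against log(0)
    return math.log(p if outcome else 1 - p)

def tournament_payoffs(declared_priors, consensus_prior, outcome):
    """Pay each researcher for an ex ante prediction, weighted by how
    surprising the observed outcome was to the field's consensus.
    (Hypothetical scheme sketched for illustration only.)"""
    # Community surprise is large when the consensus assigned low
    # probability to what actually happened.
    surprise = -log_score(consensus_prior, outcome)
    # exp(log score) recovers the probability each researcher assigned
    # to the actual outcome, so payoffs also scale with accuracy.
    return {name: surprise * math.exp(log_score(p, outcome))
            for name, p in declared_priors.items()}

# Hypothetical tournament question: "Will effect X replicate?"
declared = {"orthodox": 0.90, "skeptic": 0.20}  # declared ex ante priors
consensus = 0.85  # field-wide prior that the effect replicates
print(tournament_payoffs(declared, consensus, outcome=False))
# Approximately {'orthodox': 0.19, 'skeptic': 1.52}: the skeptic who
# correctly doubted received wisdom receives the far larger boost.
```

The design choice doing the work here is the surprise multiplier: when a result confirms the consensus, the multiplier is small for everyone, so even a perfectly confident conformist earns little; when a result upends the consensus, the researchers who declared dissenting priors ex ante capture most of the reward.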

Further Reading

  1. Abramowitz, S. I., Gomes, B., & Abramowitz, C. V. 1975. Publish or politic: Referee bias in manuscript review. Journal of Applied Social Psychology, 5, 187–200.
  2. Bedeian, A. G., Van Fleet, D. D., & Hyman, H. H. 2009. “Circle the wagons and defend the faith”: Slicing and dicing the data. Organizational Research Methods, 12, 276–295.
  3. Bell, D. 1973. The coming of post-industrial society. New York: Basic Books.
  4. Boring, E. G. 1964. Cognitive dissonance: Its use in science. Science, 145(3633), 680–685.
  5. Budden, A. E., Tregenza, T., Aarssen, L. W., Koricheva, J., Leimu, R., & Lortie, C. J. 2008. Double-blind review favours increased representation of female authors. Trends in Ecology & Evolution, 23, 4–6.
  6. Ceci, S. J., & Peters, D. 1982. Peer review: A study of reliability. Change: The Magazine of Higher Learning, 14, 44–48.
  7. Ceci, S. J., & Peters, D. 1984. How blind is blind review? American Psychologist, 39, 1491–1494.
  8. Ceci, S. J., Peters, D., & Plotkin, J. 1985. Human subjects review, personal values, and the regulation of social science research. American Psychologist, 40, 994–1002.
  9. Daly, B. 2009. Priestly celibacy: The obligations of continence and celibacy for priests. COMPASS: A Review of Topical Theology, 33, 20–33.
  10. Dubois, J. M., Anderson, E. E., Gibb, T., Carroll, K., Kraus, E., Rubbelke, T., & Vasher, M. 2012. Environmental factors contributing to wrongdoing in medicine: A criterion-based review of studies and cases. Ethics & Behavior, 22, 163–188.
  11. Eagly, A. H., & Carli, L. L. 1981. Sex of researchers and sex-typed communications as determinants of sex differences in influenceability: A meta-analysis of social influence studies. Psychological Bulletin, 90, 1–20.
  12. Faust, D. 1984. The limits of scientific reasoning. Minneapolis, MN: University of Minnesota Press.
  13. Fox, J. 2011. The myth of the rational market. New York: HarperBusiness.
  14. Greenwald, A. G. 2012. Scientists are human: Implicit cognition and researcher conflict of interest. In R. W. Proctor & E. J. Capaldi (Eds.), Psychology of science: Implicit and explicit processes (pp. 255–266). Oxford: Oxford University Press.
  15. Greenwald, A. G., & Krieger, L. H. 2006. Implicit bias: Scientific foundations. California Law Review, 94, 945–967.
  16. Gross, N. 2013. Why are professors liberal and why do conservatives care? Cambridge, MA: Harvard University Press.
  17. Haidt, J., & Graham, J. 2009. The planet of the Durkheimians, where community, authority and sacredness are foundations of morality. In J. T. Jost, A. C. Kay, & H. Thorisdottir (Eds.), Social and psychological bases of ideology and system justification (pp. 371–401). New York: Oxford University Press.
  18. Hancock, P. A. 2012. Notre trahison des clercs: Implicit aspirations—explicit explorations. In R. W. Proctor & E. J. Capaldi (Eds.), Psychology of science: Implicit and explicit processes (pp. 479–495). Oxford: Oxford University Press.
  19. Harris, S. 2011. The moral landscape: How science can determine human values. New York: Free Press.
  20. Herndon, T., Ash, M., & Pollin, R. 2013. Does high public debt consistently stifle economic growth? A critique of Reinhart and Rogoff. Political Economy Research Institute. Available at: http://www.peri.umass.edu/fileadmin/pdf/working_papers/working_papers_301-350/WP322.pdf
  21. Hull, D. L. 1988. Science as a process: An evolutionary account of the social and conceptual development of science. Chicago, IL: University of Chicago Press.
  22. Hunt, M. 1999. The new know-nothings: The political foes of the scientific study of human nature. Piscataway, NJ: Transaction Publishers.
  23. Inbar, Y., & Lammers, J. 2012. Political diversity in social and personality psychology. Perspectives on Psychological Science, 7, 496–503.
  24. Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. 2003. Political conservatism as motivated social cognition. Psychological Bulletin, 129, 339–375.
  25. Jost, J. T., Napier, J. L., Thorisdottir, H., Gosling, S. D., Palfai, T. P., & Ostafin, B. 2007. Are needs to manage uncertainty and threat associated with political conservatism or ideological extremity? Personality and Social Psychology Bulletin, 33, 989–1007.
  26. Kabat, G. C. 2008. Hyping health risks: Environmental hazards in daily life and the science of epidemiology. New York: Columbia University Press.
  27. Koehler, J. 1993. The influence of prior beliefs on scientific judgments of evidence quality. Organizational Behavior & Human Decision Processes, 56, 28.
  28. Koertge, N. (Ed.). 1998. A house built on sand: Exposing postmodernist myths about science. Oxford: Oxford University Press.
  29. Kunda, Z., & Thagard, P. 1996. Forming impressions from stereotypes, traits, and behaviors: A parallel-constraint-satisfaction theory. Psychological Review, 103, 284.
  30. Lilienfeld, S. O. 2002. When worlds collide: Social science, politics, and the Rind et al. (1998) child sexual abuse meta-analysis. American Psychologist, 57, 176–188.
  31. Lilienfeld, S. O. 2010. Can psychology become a science? Personality and Individual Differences, 49, 281–288.
  32. Lilienfeld, S. O. 2012. Public skepticism of psychology: Why many people perceive the study of human behavior as unscientific. American Psychologist, 67, 111–129.
  33. Lilienfeld, S. O., Ammirati, R., & Landfield, K. 2009. Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4, 390–398.
  34. Mahoney, M. J. 1976. Scientist as subject: The psychological imperative. Cambridge, MA: Ballinger Publishing Co.
  35. Mahoney, M. J. 1987. Scientific publication and knowledge politics. Journal of Social Behavior & Personality, 2, 165–176.
  36. Marsh, H. W., Bornmann, L., Mutz, R., Daniel, H. D., & O’Mara, A. 2009. Gender effects in the peer reviews of grant proposals: A comprehensive meta-analysis comparing traditional and multilevel approaches. Review of Educational Research, 79, 1290–1326.
  37. Medvedev, Z. A. 1978. Soviet science. New York: W.W. Norton & Co.
  38. Merton, R. K. 1942. Science and technology in a democratic order. Journal of Legal and Political Sociology, 1, 115–126.
  39. Mitroff, I. I. 1974. The subjective side of science: A philosophical inquiry into the psychology of the Apollo moon scientists. New York: American Elsevier Pub. Co.
  40. Mitroff, I. I. 1980. Reality as a scientific strategy: Revising our concepts of science. Academy of Management Review, 5, 513–515.
  41. Mooney, C. 2006. The Republican war on science. New York: Basic Books.
  42. Nickerson, R. S. 1998. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175.
  43. Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E. 2013. Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality and Social Psychology, 105, 171–192.
  44. Pollin, R., & Ash, M. 2013. Debt and growth: A response to Reinhart and Rogoff. New York Times, April 29. Available at: http://www.nytimes.com/2013/04/30/opinion/debt-and-growth-a-response-to-reinhart-and-rogoff.html?_r=0
  45. Primack, R. B., Ellwood, E., Miller-Rushing, A. J., Marrs, R., & Mulligan, A. 2009. Do gender, nationality, or academic age affect review decisions? An analysis of submissions to the journal Biological Conservation. Biological Conservation, 142, 2415–2418.
  46. Proctor, R. W., & Capaldi, E. J. 2006. Why science matters: Understanding the methods of psychological research. Malden, MA: Blackwell Publishing.
  47. Quillian, L. 2006. New approaches to understanding racial prejudice and discrimination. Annual Review of Sociology, 32, 299–328.
  48. Redding, R. E. 2001. Sociopolitical diversity in psychology: The case for pluralism. American Psychologist, 56, 205.
  49. Roiphe, R. 2006. The most dangerous profession. Connecticut Law Review, 39, 603–665.
  50. Ross, J. S., Gross, C. P., Desai, M. M., Hong, Y., Grant, A. O., Daniels, S. R., & Krumholz, H. M. 2006. Effect of blinded peer review on abstract acceptance. JAMA: The Journal of the American Medical Association, 295, 1675–1680.
  51. Russell, C. J., Settoon, R. P., McGrath, R. N., Blanton, A. E., Kidwell, R. E., Lohrke, F. T., & Danforth, G. W. 1994. Investigator characteristics as moderators of personnel selection research: A meta-analysis. Journal of Applied Psychology, 79, 163–170.
  52. Schmidt, K. W. 2013. Thoughts about celibacy. The Priest.
  53. Self, W. T., Mitchell, G., Tetlock, P. E., Mellers, B. A., & Hildreth, A. D. 2014. Calibrating process and outcome accountability systems to workplaces. Unpublished manuscript.
  54. Sherwood, J. J., & Nataupsky, M. 1968. Predicting the conclusions of Negro-white intelligence research from biographical characteristics of the investigator. Journal of Personality and Social Psychology, 8, 53–58.
  55. Singletary, S. L., & Hebl, M. R. 2009. Compensatory strategies for reducing interpersonal discrimination: The effectiveness of acknowledgments, increased positivity, and individuating information. Journal of Applied Psychology, 94, 797.
  56. Skitka, L. J. 2012. Multifaceted problems: Liberal bias and the need for scientific rigor in self-critical research. Perspectives on Psychological Science, 7, 508–511.
  57. Slife, B. D., & Williams, R. N. 1995. What’s behind the research? Discovering hidden assumptions in the behavioral sciences. Thousand Oaks, CA: Sage Publications.
  58. Solomon, M. 1992. Scientific rationality and human reasoning. Philosophy of Science, 59, 439–455.
  59. Tetlock, P. E., & Mitchell, G. 1993. Liberal and conservative approaches to justice: Conflicting psychological portraits. In B. A. Mellers & J. Baron (Eds.), Psychological perspectives on justice: Theory and applications (pp. 234–255). New York, NY: Cambridge University Press.
  60. Tetlock, P. E., & Mitchell, G. 2009. Implicit bias and accountability systems: What must organizations do to prevent discrimination? Research in Organizational Behavior, 29, 3–38.
  61. Thórisdóttir, H., & Jost, J. T. 2011. Motivated closed-mindedness mediates the effect of threat on political conservatism. Political Psychology, 32, 785–811.
  62. Viner, N., Powell, P., & Green, R. 2004. Institutionalized biases in the award of research grants: A preliminary analysis revisiting the principle of accumulative advantage. Research Policy, 33, 443–454.
  63. Wennerås, C., & Wold, A. 1997. Nepotism and sexism in peer-review. Nature, 387(6631), 341–343.
  64. Wessely, S. 1998. Peer review of grant applications: What do we know? The Lancet, 352(9124), 301–305.
  65. Wetherell, G. A., Brandt, M. J., & Reyna, C. 2013. Discrimination across the ideological divide: The role of value violations and abstract values in discrimination by liberals and conservatives. Social Psychological and Personality Science, 4, 658–667.
  66. Wilensky, H. L. 1964. The professionalization of everyone? American Journal of Sociology, 70, 137–158.
  67. Ziman, J. 1995. Of one mind: The collectivization of science. New York: Springer.
  68. Ziman, J. 2000. Real science: What it is and what it means. Cambridge: Cambridge University Press.

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

Solomon Labs, University of Pennsylvania, Philadelphia, USA
