In our last editorial (Lederman & Lederman, 2015) we provided the annual publication statistics for the Journal of Science Teacher Education (JSTE) and outlined the peer review process utilized by the journal. In addition, we assured readers that their manuscripts would be reviewed as fairly and with as little bias as possible. That is, our Associate Editors, Editorial Review Board, and we make a concerted effort to ensure that professional biases do not impact the review of manuscripts. However, there appears to be an emerging trend/awareness of ways in which the peer review process can be circumvented to the detriment of the academic community. Most recently, Science magazine retracted another published article (LaCour & Green, 2014) for reasons related to misrepresented or fictitious data. This particular article was related to gay equality, but there is a growing list of scientific papers retracted from other prestigious science journals (e.g., Lancet, Nature) on topics ranging from vaccines and autism to stem cell production, heart research, physics discoveries at Bell Labs, and cognition research (Roston, 2015). These unfortunate situations could be blamed on “cracks” in the peer review process, but how realistic is this claim? Reviewers of manuscripts are not privy to the raw data from investigations, but rather to summarized or aggregated data sets. And, if authors were required to submit raw data, virtually no reviewers would have the time to carefully sift through the data and re-analyze them. Such a process would not only tax the time of volunteer reviewers, but also exorbitantly increase the already extensive turnaround time from submission to publication. Many science journals appear weekly; imagine what would happen to the turnaround time for educational research, which typically appears in journals published monthly or less frequently.

The Editorial Board of the New York Times has written about “Scientists Who Cheat” (2015), but perhaps a more forgiving characterization is the one provided by Stephen Jay Gould in his seminal work The Mismeasure of Man (1981). In the early days of research on intelligence, it was assumed that brain size correlated directly with intelligence. Since human skulls were widely available, scientists filled skulls with lead shot as an approximate measure of brain size. The more lead shot a skull held, the greater the brain size. Unfortunately, scientists were aware of the origin (e.g., sex, race) of the skull they were measuring, and at the time it was believed that white, western European males possessed the most superior intellect. Gould imagined scientists subconsciously packing the lead shot into the skulls of white, western European males, while loosely pouring lead shot into the other skulls. The scientists, in his interpretation, were producing the results they expected to find. In our continuing work with teachers and students, we stress the idea that science is done by humans and that their biases and subjectivity unavoidably influence their work. Teachers and students quickly realize the influence of bias and subjectivity as they reflect upon the research questions they asked, how they collected and interpreted data, and how they operationalized variables during various science investigations.

We sincerely hope that what we see emerging from investigations of academic integrity in the sciences is not also true in educational research. But it does give one pause. There is the general impression that “fraud” in science research is somehow more damaging than “fraud” in educational research, but we beg to differ. You may also be asking yourself what any of this has to do with science teacher education. As teacher educators, we are constantly reading the research provided in journals such as JSTE. And there is now an emerging base of publications offering practical advice for science teacher educators. Hopefully, our practice has been influenced by what we have read, as well as by the results of our own research. However, we must be more vigilant in our scrutiny of the research we read and the advice we are given. We must, as good scientists do, constantly question the findings of research and the data provided as support for the recommendations offered. Are the recommendations truly supported by the data provided? As for the practical recommendations derived from the “wisdom of practice,” we should endeavor to collect systematic data to investigate the veracity of the advocated instructional techniques. Nobel Laureate Richard Feynman was continuously driven by his “belief in the ignorance of experts” (British Broadcasting Corporation, 1981). Inertia is prevalent, and it is much easier to continue business as usual. We all know this from the professional development experiences we have had with teachers.

We all work with teachers (preservice and/or inservice), and many of us provide experiences or teach courses in action research. The belief is that having teachers systematically study their own teaching or their colleagues’ teaching will improve science teaching and learning. The belief certainly has intuitive appeal, but whether there is an extant preponderance of research supporting it is a question for another time. More to the point of this editorial, a part of any action research experience/course (or any research course, for that matter) is an emphasis on what one should attend to while critically reviewing a research article (e.g., assumptions, sampling, data collection, data analysis, conclusions). In essence, we want our teachers to value research and to be critical consumers of research. We all know that most teachers do not routinely read research journals, even though they are often subjected to policies that are supposedly research-based. While at Oregon State University, Norman worked with many teachers whose primary evaluation came from “on task” percentages gathered by their principals. This was in the 1980s and 1990s, even though the purported relationship between raw “on task” percentages and student achievement had already been debunked in the 1970s. Similarly, Judith worked in a professional development center in Rhode Island where the validity of educational research was decided by a collaborative of 10 district superintendents, and teachers were never privy to the research details and implications.

What we hope to develop in our teachers is an approximation of the same critical eye that we expect of reviewers of manuscripts submitted for publication. There is no obvious solution to the possible problem of some individuals misrepresenting or fabricating data. But, as teacher educators, we need to be more critical in our evaluation of research findings and not be too quick to jump on any “bandwagons.” We should educate teachers using approaches that are well founded in the research literature, and we must always keep our eyes open to change when new insights arise or old ones are validly challenged. As for the teachers we work with, they too need to be more critical of what they read. They need to demand empirical support for what they are told to do, and they should also demand of themselves supporting evidence for their own practices. In the end, this cannot undo the damage done by some less than honest individuals, but it can only serve to improve the teaching and learning of science.