The 2010s have been exceptionally good for Behavior Research Methods (BRM). The number of papers and submissions almost doubled from 2010 to 2019, and the number of article downloads grew exponentially. In the first 6 months of 2020 alone, there were 800,000 article downloads, a number that was unimaginable at the start of the decade. The journal’s success is partly due to the good stewardship of the previous editors, who leave big shoes to fill, and partly to the fact that all 5989 papers published from the journal’s start in 1968 through the end of 2019 are freely available for download at the BRM website. Indeed, the Psychonomic Society takes pride in the fact that all articles become open access 1 year after publication. Even before articles become open access, authors can share them in view-only form via the ‘share this article’ link at the BRM website, or make post-prints available through institutional repositories. The Society values open access to research more than the money it could make by keeping findings behind a paywall. For authors, this is appealing: their findings become freely available without payment of article processing charges.
Another reason for the journal’s success is that it fills an important niche. The founders were right when they decided that cognitive psychology needed a journal for research methods in addition to theory-oriented journals. It is sometimes overlooked that good stimulus materials and methods for stimulus presentation and data analysis are the bricks and mortar of the work we do. You cannot interpret the results of a test if you do not have good stimuli and good ways to measure your variables. For those tools to be available, the research community must reward peers for developing them by providing a dedicated outlet; otherwise, there is little incentive (apart from idealism) for doing so. BRM offers that outlet. The journal is the premier place researchers turn to for advice on which stimuli to use and how to present them, on how to measure responses, and on how to analyze data properly. The best way to guarantee this is to make sure that our articles address the questions and needs researchers have. Below we list a few features that we have come across in our handling of manuscripts so far and that are important to consider in future submissions to BRM.
Articles we are looking for
BRM wants to improve cognitive-psychology research by making it more effective, less error-prone, and easier to run. Therefore, we publish articles with new or improved tools. We publish tutorials alerting readers to avoidable mistakes that are made time and time again. We also publish articles and reviews that make existing practices more efficient. The best way to know which type of articles we are looking for is to think of yourself as a BRM reader. What do you expect from a BRM article? What kind of article would excite you? What kind would disappoint you? The following are some elements you may want to consider.
Provide data on reliability and validity
The main reason why readers want to use materials published in BRM is that the quality of these materials has been verified. Such verification requires information about reliability and validity. Reliability refers to the internal consistency or test–retest consistency of the materials presented. There is no point in using materials that do not lead to consistent results. This facet is particularly important for correlational research, which is becoming increasingly popular in cognitive science now that more researchers are looking at individual differences and differences between stimuli (e.g., Ackerman & Hambrick, 2020; Hedge et al., 2018).
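To make these notions concrete, here is a minimal sketch (our own illustration with invented data, not a prescribed procedure) of a split-half reliability estimate with the Spearman–Brown correction, applied to a hypothetical participants-by-stimuli ratings matrix:

```python
import numpy as np

def split_half_reliability(ratings, seed=0):
    """Spearman-Brown corrected split-half reliability of stimulus norms.

    ratings: 2-D array with rows = participants, columns = stimuli.
    """
    rng = np.random.default_rng(seed)
    n = ratings.shape[0]
    order = rng.permutation(n)                      # random split of participants
    half_a = ratings[order[: n // 2]].mean(axis=0)  # per-stimulus means, half A
    half_b = ratings[order[n // 2 :]].mean(axis=0)  # per-stimulus means, half B
    r = np.corrcoef(half_a, half_b)[0, 1]           # consistency across halves
    return 2 * r / (1 + r)                          # Spearman-Brown correction

# Hypothetical norming study: 60 participants rate 200 stimuli.
rng = np.random.default_rng(42)
latent = rng.normal(size=200)                       # invented "true" stimulus values
ratings = latent + rng.normal(scale=1.0, size=(60, 200))
print(f"split-half reliability: {split_half_reliability(ratings):.2f}")
```

A test–retest estimate works the same way, except that the two vectors come from two sessions rather than from two random halves of the sample.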
Validity refers to whether we are measuring what we claim to measure, which is crucial for the accurate interpretation of measurements. For research tools, this is often ascertained by correlating the measure with an external criterion. If, for instance, we see a new measure of word frequency, we want to know how well it predicts an important criterion, such as the processing speed of words. Otherwise, it is possible that the new norm measures something else (e.g., because a calculation error was made). Another way to collect evidence of validity is to compare the new measure with an existing measure (convergent validity). If the new measure is useful, it will correlate well with the existing measure while improving on it in interesting ways. The same is true when a new statistical analysis is proposed: we want to see it applied to a relevant dataset, to show that it analyzes what it claims to analyze and is superior to what is already available. Information about reliability and validity is central to BRM articles.
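As an illustration of both forms of validation (all data below are simulated for the example, and the variable names are ours), the following sketch correlates a hypothetical new word-frequency measure with lexical decision response times (criterion validity) and with an established norm (convergent validity):

```python
import numpy as np
from scipy import stats

# Simulated validation data: 500 words with a new log-frequency measure,
# an existing norm, and lexical decision response times (the criterion).
rng = np.random.default_rng(1)
new_freq = rng.normal(loc=3.0, scale=1.0, size=500)        # new measure
old_freq = new_freq + rng.normal(scale=0.5, size=500)      # established norm
rt = 700 - 40 * new_freq + rng.normal(scale=50, size=500)  # criterion (ms)

r_criterion, p = stats.pearsonr(new_freq, rt)         # predicts processing speed?
r_convergent, _ = stats.pearsonr(new_freq, old_freq)  # agrees with the old norm?

print(f"criterion validity (RT):        r = {r_criterion:.2f}, p = {p:.2g}")
print(f"convergent validity (old norm): r = {r_convergent:.2f}")
```

In a real submission, the criterion correlation would of course be computed on behavioral data such as megastudy response times, not on simulated values.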
Give access to your materials
For journal readers, nothing is more frustrating than reading an article about an interesting new method, only to discover at the end that the authors are not sharing their materials. Such a practice might be acceptable for theory-oriented journals (although it violates the transparency principle), but not for BRM. If authors aim for a BRM article, it is because they want to share their materials with the research community and to get credit for it. We do not give badges for open data and materials, because we consider such openness self-evident. The norm is that the information described is freely available in an appendix (if short enough) or in a repository linked to in the article. This practice also allows the reviewers to check the materials.
Authors of submissions often write that the materials will be made available upon reasonable request, but this practice is not acceptable for BRM, because it creates too high a threshold for readers who may simply want to check whether the materials could be of interest before deciding whether they can use them. In addition, authors move or leave the academic world, and it is our experience that many authors no longer respond to requests after the first few years (see also Vanpaemel et al., 2015; Vines et al., 2014). Therefore, BRM requires the information to be readily available in a repository or in an appendix. This requirement entails that authors hold the copyright on the materials. Authors validating stimulus materials collected by someone else must make sure that they can make the materials available if they are aiming for a BRM publication. We are also willing to consider manuscripts validating commercial materials if there are no free alternatives (e.g., because the materials are too expensive to produce or the technique involves specialized hardware). In that case, however, we want a clear statement about a possible conflict of interest related to the company selling the equipment. Notice that unrestricted access also applies to surveys and questionnaires: we are not interested in publishing papers about instruments whose contents are not made available.
Give computer code and provide a working example
BRM readers search for practical answers to technical questions. It is much more rewarding to find an article about a statistical analysis when it also includes computer code and working examples. We all cherish the few articles where this information is available, because it allows us to apply the solution seamlessly and to know we are doing it correctly. Authors sometimes object to a working example because the data cannot be made available for privacy or copyright reasons. In that case, it is possible to create a synthetic dataset that allows readers to try out the analysis (e.g., Quintana, 2020). Again, the best way of making this information available is by putting the required files in a repository to which the article links.
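To sketch the idea (a deliberately minimal parametric illustration of ours, not the procedure from Quintana, 2020, which relies on dedicated synthesis tools), one can fit a multivariate normal distribution to the private data and sample new rows from it, preserving the means and covariance structure without exposing any real participant:

```python
import numpy as np
import pandas as pd

def synthesize(df, n=None, seed=0):
    """Sample a synthetic dataset from a multivariate normal fitted to df.

    Preserves means and covariances of the numeric columns; no row
    corresponds to an actual participant.
    """
    rng = np.random.default_rng(seed)
    values = df.to_numpy(dtype=float)
    mean = values.mean(axis=0)
    cov = np.cov(values, rowvar=False)
    sample = rng.multivariate_normal(mean, cov, size=n or len(df))
    return pd.DataFrame(sample, columns=df.columns)

# Example with an invented "private" dataset of two correlated scores.
rng = np.random.default_rng(7)
score_a = rng.normal(100, 15, size=200)
private = pd.DataFrame({"score_a": score_a,
                        "score_b": 0.6 * score_a + rng.normal(0, 10, size=200)})
synthetic = synthesize(private)
print(synthetic.describe().round(1))   # similar means/SDs, no real records
```

Readers can then run the published analysis code on the synthetic file and verify that they obtain results comparable to those reported.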
Make sure that your manuscript is more than a method section or a supplementary analysis section
Sometimes we receive a manuscript that looks very much like the method section of a larger article (to be sent to a theoretical journal) or that presents a supplementary analysis of data already published. Unless such a manuscript contains important new information, it is of little value to BRM readers.
Collect norms for many stimuli based on many participants
With respect to stimulus norms, we regularly receive new norms for only some 100–300 stimuli. Unfortunately, such norms are of little use if they cover only a small part of the stimulus space (e.g., the words in a language) and can be collected easily. On the other hand, information about a small stimulus set can be important if it addresses a limited stimulus population and requires a lot of effort and/or expertise to compile (e.g., carefully matched groups of sentences, pictures, or videos). It is difficult to give clear guidelines about the fuzzy border between acceptable and unacceptable samples, but the best way to avoid discussion is to aim well above the border. If few people are able to collect the information presented and the dataset covers a large part of the stimulus space, everyone will be excited about it and want to see it published. The same is true for validation studies: make sure they are adequately powered, as illustrated below. The last thing anyone wants is a method published because it passed a small pilot study, only to fail upon proper testing. A look at recently published articles in BRM will give you a sense of what is likely to be acceptable.
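As a rough guide to what “adequately powered” means for a validation study (a back-of-the-envelope sketch using the standard Fisher z approximation; the effect size and power level below are illustrative choices, not journal requirements), the required sample size for detecting a correlation can be computed as follows:

```python
import numpy as np
from scipy import stats

def n_for_correlation(r, alpha=0.05, power=0.90):
    """Approximate N needed to detect a correlation r (two-sided test),
    using the Fisher z transformation."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_power = stats.norm.ppf(power)
    return int(np.ceil(((z_alpha + z_power) / np.arctanh(r)) ** 2 + 3))

print(n_for_correlation(0.30))   # about 113 participants for r = .30
```

For a validity correlation of .30 at 90% power, this gives roughly 113 participants; a pilot study with 20 or 30 participants falls far short.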
Editorial decisions
When making editorial decisions, action editors, guided by reviewers, try to distinguish manuscripts that mainly contain noise from manuscripts that contain a signal (and noise). In other words, editorial decisions are signal-detection situations. Ideally, action editors would have the means to devote all their time to handling your manuscript and would have access to five or more knowledgeable reviewers. In this ideal world, we could aim for maximum sensitivity and a stable response criterion across all submissions. Unfortunately, this ideal world does not exist. For action editors, handling your manuscript is only one of many tasks that demand their attention, and fewer than one out of three invited reviewers respond positively (think of this the next time you are asked to review a manuscript!). Therefore, we are bound to have a suboptimal system with rather large standard deviations (reviewers and action editors do not always agree about a manuscript’s quality) and some noise in the criterion (action editors do not always use the exact same threshold for acceptance). In such a situation, it is unrealistic to expect all decisions to be correct (true positives and true negatives). It is much more realistic to devise a strategy for handling false positives and false negatives.
A false positive is a manuscript that has been accepted for publication even though it has no signal or, worse, conveys a wrong signal. Such publications go against the aims of the journal, and we would very much appreciate it if our readers alerted us to such articles. If you see something published in BRM and you know there is a much better solution, by all means let us know by submitting a commentary. We will carefully consider your commentary, have it reviewed if necessary, and publish it if we feel that it improves the situation. When writing such a commentary, please use a constructive and collegial tone. We all want to improve research, and this is achieved more efficiently by helping each other than by blaming each other.
A false negative is a submission that has been rejected even though it contains an important signal. As authors, we know how upsetting this can be. However, from the journal’s point of view, it is better not to reconsider negative decisions, because objections from authors are one-sided: they come only when a paper has been rejected, never when a paper has been accepted. Although authors hope to increase sensitivity, in reality appeals are more likely to put pressure on the criterion, and that pressure is always downward, reducing the average quality of accepted manuscripts. This is not offset by the occasional good paper we would manage to salvage. Therefore, we want to be up front that we will not reconsider rejections. We will try to prevent false negatives, because it is in the journal’s interest to publish all good articles that advance its goals, but when we make them, we will accept our loss. We take comfort in the knowledge that BRM does not have a monopoly on the publication of articles and that authors can prove us wrong by submitting their manuscript to another journal, whose selection process will act as an independent replication.
For the reasons just outlined, our editorial policy is thus twofold: we welcome critical but constructive commentaries on articles published in BRM, and we do not reconsider manuscript rejections.
Conclusion
As a new editorial team, we are aware that we have an important task ahead of us: to make sure that BRM remains the premier outlet for new, exciting solutions to problems encountered by cognitive psychologists. Psychology will flourish only if we have good tools to work with and good ways of analyzing data. It is BRM’s task to make sure that the research community has access to the latest and finest developments. We therefore look forward to receiving many interesting submissions, and we will do our utmost to serve you well.
References
Ackerman, P. L., & Hambrick, D. Z. (2020). A primer on assessing intelligence in laboratory studies. Intelligence, 80, 101440.
Hedge, C., Powell, G., & Sumner, P. (2018). The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods, 50(3), 1166–1186.
Quintana, D. S. (2020). A synthetic dataset primer for the biobehavioural sciences to promote reproducibility and hypothesis-generation. eLife, 9, e53275. https://doi.org/10.7554/eLife.53275
Vanpaemel, W., Vermorgen, M., Deriemaecker, L., & Storms, G. (2015). Are we wasting a good crisis? The availability of psychological research data after the storm. Collabra, 1(1), Art. 3. https://doi.org/10.1525/collabra.13
Vines, T. H., Albert, A. Y., Andrew, R. L., Débarre, F., Bock, D. G., Franklin, M. T., … Rennison, D. J. (2014). The availability of research data declines rapidly with article age. Current Biology, 24(1), 94–97.