Overview

Within the disciplines of business and psychology, the evaluation of research team productivity has been explored for some time (e.g., Stahl et al. 1988; Gordon and Smith 1989; Green and Bauer 1995; Lee and Bozeman 2005). In contrast to our current focus on manuscript quality, many of these studies evaluated the quantity of publications produced. For example, researchers have reported positive relationships between collaboration and productivity (Katz and Martin 1997; Lee and Bozeman 2005). Other studies have sought to explain research impact by examining specific factors, such as aspects of the published article, the journal in which it was published, or the characteristics of the author(s), that relate to the total number of citations received (e.g., Judge et al. 2007; Starbuck 2005). However, studies assessing research impact using citations suffer to some extent from range restriction, in that they evaluate only final, published works.

Taking a different approach, our sample includes a wide variety of manuscripts submitted to the Journal of Business and Psychology. In addition to accepted, published articles, this study incorporates submitted revisions as well as papers that were rejected following peer review. Beyond including a more comprehensive sample than has been incorporated in past research, this study links reviewer ratings and editorial decisions to specific team compositional factors and experiential resources, methods of communication, and publication-related development activities as reported by the first author. This approach allows us to identify potential factors that can aid research teams in selecting impactful practices that contribute to manuscript quality, while avoiding those that may impede ultimate publication success.

Approach

Our research team began this process by posing a broad research question: what do research teams submitting their work to a journal do that makes them more (or less) successful in their publication efforts? To identify the ways in which manuscript development differs across teams, we engaged in a comprehensive review of the extant research, reflected on our own experiences working in authorship teams, consulted with our Journal of Management collaborators, and conducted in-depth interviews with ten experienced scholars who had engaged in many team research projects. For this editorial, we focus on the following categories for differentiating among authorship teams:

  • Team composition and experiential resources: Authors’ disciplinary background, geographic location, and employment status (e.g., graduate student, associate professor, applied).

  • Team communication: Frequency of the use of different forms of communication throughout different phases of the research process.

  • Team publication-related development activities: Use of friendly reviews and prior submission to conferences.

These categories map well onto current scholarly reviews of the extant team literature. For example, they are subsumed within McGrath’s (1964) input-process-output (IPO) framework, a principal framework for team-based research that includes team composition, team structure, and team processes.

Method

To assess the full range of authorship team and publication factors mentioned above, we invited first authors who submitted manuscripts to the Journal of Business and Psychology (JBP) between 2009 and 2012 to participate in our study. In order to match authorship team and publication data with reviewer ratings, only papers that proceeded to full review were included in our sample; first authors of papers that were “desk rejected” were not invited to participate. In the fall of 2012, we emailed 240 first authors a link to our online survey, and 104 completed it. This yielded a 65 % response rate for accepted papers and a 26 % response rate for rejected papers. The higher response rate from authors of accepted papers was anticipated: because a favorable outcome (i.e., paper acceptance) likely yielded a more positive association with the survey sponsor (i.e., JBP), these authors were expected to have a greater propensity to respond (Anseel et al. 2010; Spitzmuller et al. 2006).
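As a quick consistency check (ours, not part of the original study), the two stratum response rates, combined with the totals above, imply an approximate split of the 240 invitees between accepted and rejected papers. A minimal sketch in Python, treating the reported rates as rounded:

```python
# Consistency check of the reported response rates (our arithmetic, not the
# authors'). Known from the text: 240 invitees, 104 completed surveys, a
# 65 % response rate among accepted papers and 26 % among rejected papers.
total_invited = 240
total_responded = 104
rate_accepted, rate_rejected = 0.65, 0.26

# Solve a + r = 240 and 0.65*a + 0.26*r = 104 for the implied invitee split
# (approximate, because the published rates are rounded).
accepted_invited = (total_responded - rate_rejected * total_invited) / (
    rate_accepted - rate_rejected
)
rejected_invited = total_invited - accepted_invited
print(f"~{accepted_invited:.0f} accepted-paper invitees, "
      f"~{rejected_invited:.0f} rejected-paper invitees")
# -> ~107 accepted-paper invitees, ~133 rejected-paper invitees
```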

After reviewing the survey data, we found that the majority (85 %) of authorship teams in our sample were smaller groups of two, three, or four members. While future exploration of the differences between small and large authorship teams may yield fruitful findings, the number of research teams with more than four members in our sample (N = 16) was too small to support comparisons between these two types of teams. Consequently, for purposes of this study, we focused solely on small research teams of 2–4 members (N = 88).

Criteria Variables

To assess manuscript quality, we used two primary indicators: editorial decision and reviewer recommendations. The former is a dichotomous categorical variable that assigns papers to either the accept or the reject category based on the decision made by a JBP action editor. The latter represents the average overall recommendation made by two or three reviewers following their initial manuscript review. Reviewer recommendations were measured on a 7-point scale (1 = reject to 7 = accept) whose intermediate points reflected varying degrees of risk associated with accepting a manuscript.
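To make the two criteria concrete, the following sketch (our illustration, using made-up data rather than the journal's records) shows how they could be derived from a table of individual reviews:

```python
import pandas as pd

# Hypothetical reviews, one row per reviewer per manuscript (not real data).
reviews = pd.DataFrame({
    "manuscript_id":  [1, 1, 2, 2, 2, 3, 3],
    "recommendation": [5, 6, 2, 3, 2, 7, 6],  # 1 = reject ... 7 = accept
})
# The action editor's dichotomous call for each manuscript.
decisions = pd.DataFrame({
    "manuscript_id": [1, 2, 3],
    "editorial_decision": ["accept", "reject", "accept"],
})

# Criterion 1: editorial decision coded 1 = accept, 0 = reject.
decisions["accepted"] = (decisions["editorial_decision"] == "accept").astype(int)

# Criterion 2: mean overall recommendation across each manuscript's reviewers.
mean_rec = reviews.groupby("manuscript_id")["recommendation"].mean()

criteria = decisions.set_index("manuscript_id").join(mean_rec.rename("mean_recommendation"))
print(criteria)
```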

To assess reviewer agreement, we examined the degree to which reviewers differed in their recommendation ratings. We limited this comparison to the first and second reviewers, as it was rare for a manuscript to have a third reviewer. Overall, reviewers were well aligned in their recommendations: the large majority of reviewer rating pairs (89 %) fell within two points of each other on the 7-point scale.
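A minimal sketch of this agreement check, again with illustrative ratings rather than the actual review data:

```python
import numpy as np

# Illustrative first- and second-reviewer recommendations (not real data).
reviewer_1 = np.array([5, 2, 7, 4, 3])
reviewer_2 = np.array([6, 3, 6, 1, 3])

# Absolute gap between the two ratings; the study reports the share of
# pairs falling within two points of each other on the 7-point scale.
gaps = np.abs(reviewer_1 - reviewer_2)
share_within_two = (gaps <= 2).mean()
print(f"{share_within_two:.0%} of reviewer pairs within 2 scale points")
```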

Findings

For this study, we examined correlations between team factors (i.e., team composition and experiential resources, team communication, and team publication-related development activities) and the manuscript quality criteria (i.e., editorial decision and reviewer recommendations). We restricted our analyses to correlations because our small sample did not provide the statistical power needed for more sophisticated statistical analyses.
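For readers who wish to run a similar analysis on their own data, a sketch follows. With a dichotomous criterion such as editorial decision, the Pearson correlation is equivalent to the point-biserial correlation; all variable names and data below are our own illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Illustrative data only -- not the study's data. 88 small-team manuscripts,
# a 1-5 frequency rating for one communication method, a 0/1 editorial
# decision, and a mean reviewer recommendation on the 7-point scale.
rng = np.random.default_rng(0)
face_to_face = rng.integers(1, 6, size=88)        # frequency of use, 1-5
accepted = rng.integers(0, 2, size=88)            # 1 = accept, 0 = reject
mean_recommendation = rng.uniform(1, 7, size=88)  # 1 = reject ... 7 = accept

# Dichotomous criterion: point-biserial correlation (equivalent to Pearson).
r_pb, p_pb = stats.pointbiserialr(accepted, face_to_face)
# Continuous criterion: ordinary Pearson correlation.
r, p = stats.pearsonr(face_to_face, mean_recommendation)
print(f"editorial decision:       r = {r_pb:.2f}, p = {p_pb:.3f}")
print(f"reviewer recommendation:  r = {r:.2f}, p = {p:.3f}")
```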

Overall, we found that the manuscript quality criteria were significantly related to communication methods (see Table 1), friendly reviews, conference submissions, adding co-authors, and components of research team diversity (see Table 2). Below, we highlight and discuss each of these findings.

Table 1 Correlations of communication variables and outcomes (manuscript quality)
Table 2 Means, standard deviations, and correlations

Team Communication

To capture team communication, we asked survey respondents to indicate how frequently their team used different communication methods (i.e., face-to-face communication, telephone/internet-based audio, and email) during five stages of their research project: developing the research idea, designing the study, collecting data, analyzing data, and writing the manuscript.

We found that face-to-face communication when designing the study correlated with editorial decision (r = 0.27, p < .05), face-to-face communication during data collection correlated with reviewer recommendation (r = 0.32, p < .05), and face-to-face communication while writing the paper correlated with both editorial decision (r = 0.30, p < .05) and reviewer recommendation (r = 0.27, p < .05). Communication via telephone during data analysis had a positive relationship with editorial decision (r = 0.46, p < .05). Email as the primary communication medium had a significant negative relationship with editorial decision when it took place during data analysis (r = −0.27, p < .05) and when authors were writing the paper (r = −0.25, p < .05). We also collected data on whether authors used web-based video chat (e.g., Skype) for communication, but this method was reported too infrequently to test for relationships with manuscript quality.

Our results are in line with media richness theory (Daft and Lengel 1984, 1986): research teams using richer communication media (e.g., face-to-face) tended to experience more positive outcomes than research teams using leaner media (e.g., email). Existing team research highlights the importance of face-to-face meetings early in a group's life for building trust and rapport (Hill et al. 2009). In addition, face-to-face meetings often promote greater inclusion across team members and may therefore help maintain active participation among all team members in a given project (Triana et al. 2012). While existing team research has focused on the benefits of rich communication early on, scholarly research endeavors are somewhat unique in that they often span long periods of time and may include phases that vary substantially in the work required from teammates. Within the research context, our preliminary findings suggest that continued face-to-face meetings throughout the phases of a research project might promote clarity of communication, integration of ideas, and sense-making, despite the ease and convenience of email.

Friendly Reviews

Participants were asked to report the number of friendly reviews that their manuscript received prior to submission to the Journal of Business and Psychology. The use of informal friendly reviews prior to submission, in which scholars outside the research team are asked to read and provide feedback on the manuscript, was negatively related to reviewer recommendations (r = −0.29, p < .01), such that greater numbers of friendly reviews were associated with lower ratings of manuscript quality. To further explore this finding, we tested for a curvilinear relationship, assessing whether a small number of friendly reviews was helpful while an overabundance became problematic. Our analyses showed no sign of a curvilinear relationship. Unfortunately, a lack of variability and a small sample size prevented us from examining a host of potentially meaningful moderators, for example, team size, job/career level of the primary author, and international authorship (see Footnote 1).
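The exact curvilinearity test is not specified in the text, but a common approach is to add a quadratic term to a regression and examine its significance. A sketch under that assumption, with simulated data:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data only -- not the study's data.
rng = np.random.default_rng(1)
n_friendly_reviews = rng.integers(0, 6, size=88).astype(float)
mean_recommendation = rng.uniform(1, 7, size=88)

# Center the predictor to reduce collinearity with its square, then
# regress the criterion on both the linear and quadratic terms.
centered = n_friendly_reviews - n_friendly_reviews.mean()
X = sm.add_constant(np.column_stack([centered, centered ** 2]))
model = sm.OLS(mean_recommendation, X).fit()

# A significant coefficient on the squared term (index 2) would signal
# a curvilinear relationship; here there is none by construction.
print(f"quadratic term: b = {model.params[2]:.3f}, p = {model.pvalues[2]:.3f}")
```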

What should we take from this surprising and counterintuitive finding regarding friendly reviews? First and foremost, we are certainly not advocating that papers skip the friendly review process. However, these findings may advise research teams to be particularly thoughtful when soliciting and implementing feedback from friendly reviewers. Friendly reviews are not anonymous, and a relationship with the author typically exists; this could generate feedback that is overly positive, incomplete, or otherwise unrepresentative of formal review. For example, it is possible that, based on positive friendly reviews, an author team may develop a false sense of security about their paper and miss key flaws that need to be addressed. Authors should remain aware that feedback from friendly reviews is likely not equivalent to the actual review process associated with academic journal submissions. As blind reviews are difficult to simulate, authors may look for other opportunities to submit their manuscript for anonymous feedback. One such avenue is conference submission.

Conference Submissions

Participants were asked to disclose whether their manuscript had been accepted to a conference prior to submission to the Journal of Business and Psychology. Conference submissions had a positive relationship with reviewer recommendations (r = 0.22, p < .05), a relationship that highlights many of the potential benefits of submitting a paper to an academic conference. We postulate this is the case for two reasons. First, most conference submissions go through a formal review process, like journal submissions, in which reviewers evaluate all aspects of the paper and highlight weaker areas. Unlike friendly reviews, conference submissions are blind-reviewed, which may provide more objective information for the authors to improve the paper. Second, and perhaps most importantly, the feedback that presenters receive during or after the actual conference presentation is another potential benefit. Many scholars already versed in the specific research area are likely to attend the presentation, where they can offer ideas and insights to help improve the research. These initial connections often pave the way for future insights regarding both current and future research endeavors. In addition, many meaningful conversations and connections occur at conferences: in our sample, 10 % of first authors reported meeting their second author at a conference, and an additional 7 % met their third author at a conference. Although we believe it likely that conference submission and presentation processes aid in the improvement of manuscripts, we must point out an obvious potential confound to this conclusion: the best papers submitted to conferences are also the ones most likely to be accepted for presentation.

Team Composition and Co-authors

We investigated a number of team compositional factors. All first authors were asked to indicate the disciplinary background of their co-authors (see Footnote 2), whether each co-author was located at the same institution as the first author, and the employment status of each co-author (e.g., graduate student, assistant professor, associate professor, full professor, or applied).

Having co-authors located at the same institution as the first author was negatively related to editorial decision (r = −0.23, p < .05), suggesting that authorship team heterogeneity in terms of affiliation may be a positive indicator of ultimate research quality, perhaps due to increased access to resources or a greater diversity of ideas.

Although no significant correlations were found between the first author's level within academia (e.g., graduate student, assistant professor, associate professor, or full professor) and the criteria, having an assistant professor in either the second or third authorship position was positively related to both editorial decision (r = 0.31, p < .05) and reviewer recommendations (r = 0.29, p < .01). It may be that assistant professors are highly motivated to produce and publish impactful research and therefore contribute to a greater extent in these positions than both less experienced graduate students and tenured faculty members, thereby improving the quality of the paper.

In contrast, having a first author who primarily works outside academia (e.g., in applied settings) was negatively related to reviewer recommendations (r = −0.33, p < .01). We also found negative relationships between applied team members in the second author position and both editorial decision (r = −0.21, p < .05) and reviewer recommendations (r = −0.26, p < .05). While practitioners, like academics, bring invaluable knowledge and resources to an authorship team, those working in the applied sector typically have different skill sets from those working within academia. Just as a consultant needs to know the culture of an organization before pitching a new change effort, manuscript authors must be similarly familiar with the trends and norms of publishing, dealing with reviewers, and so on. It may be that practitioners are less versed in this process.

Finally, as JBP is a journal focusing on research in management and psychology, we compared teams with members outside those disciplines to teams whose authors were all housed within either business or psychology. A negative relationship was found between having co-authors outside the disciplines of management and psychology and editorial decision (r = −0.28, p < .01), such that papers with lower quality ratings were more likely than others to contain a greater number of authors from outside these two core disciplines. As with the practitioners discussed above, it may be that authors from outside the core disciplines lack a sense of the nuance and approach necessary when publishing in a psychology or management journal. Stated differently, journal standards and approaches vary by discipline, and individuals in applied settings or outside a target journal's main discipline(s) may be disadvantaged by not having a strong sense of this prior to submission.

Caveats and Conclusion

It is important to remember that the above findings are based on a small sample. Although we believe much can still be learned from them, this should be kept in mind from a generalizability perspective; in particular, null findings should not be interpreted as evidence of no relationship, given the limited statistical power. The entire sample was also drawn from papers submitted to the Journal of Business and Psychology, and future studies may want to combine samples across journals to parse out similarities and differences. Additionally, because we focused only on small research teams of two to four members, the applicability of these findings to larger research teams is unknown. Our small sample also precluded a comparison of domestic and international teams; we feel such a study would be a valuable contribution, as team structures and processes may vary in effectiveness across these settings, and additional practices may be needed to overcome communication and organizational constraints.

Many excellent papers have been published that provide authors guidance on building theory (e.g., Colquitt and Zapata-Phelan 2007; Corley and Gioia 2011; Fulmer 2012; Hillman 2011; Mayer and Sparrowe 2013; Okhuysen and Bonardi 2011), writing review papers (e.g., Bauer 2009; Cropanzano 2009; LePine and Wilcox-King 2010; Short 2009), and developing quality, high-impact research (e.g., Aguinis et al. 2011; Bono and McNamara 2011; Feldman 2004; George 2012; MacKinnon et al. 2012; Ployhart and Ward 2011; Rogelberg et al. 2009). This paper took a different approach, seeking to identify team compositional factors, aspects of team communication, and publication-related development activities that relate to manuscript quality. This approach yielded some potential tips for authorship teams beginning research projects and developing manuscripts. We identified significant correlations between communication methods, friendly reviews, conference submissions, team diversity, author background and employment status, and both editorial decision and reviewer recommendations. We noted, for example, the importance of face-to-face communication throughout the research process, the positive relationship between conference presentations and reviewer ratings, and the beneficial outcomes associated with co-authors located at different institutions. In addition to these potential practical implications, we hope this editorial prompts additional research on the science of organizational science, something the Journal of Business and Psychology encourages and hopes to publish more of in the future.