Abstract
The growing emphasis on academic impact requires researchers to develop and address important ideas. In this paper, we analyze how theory has been framed and operationalized within international business scholarship, which has a long tradition of producing research that accounts jointly for multiple research contexts and levels of analysis. We focus on two key aspects of published articles: the complexity of their research questions and how the research questions are translated into testable hypotheses. We further assess how the complexity and operationalization of research questions have been received by business/management, interdisciplinary, and practice-oriented research audiences. To achieve this, we examine a sample of 423 quantitative articles published in the Journal of International Business Studies between 2005 and 2015, and consider the articles’ citations during 2010–2020. Our paper provides suggestions about how authors might better frame research questions that are both important and impactful.
There is no shortage of articles that aim to help scholars to publish impactful papers in the social sciences. Following the publication of Davis’ (1971) well-known “That’s Interesting!” paper, the literature addressing what constitutes a “good” paper has flourished. To date, the focus of this literature has been largely on what theory is and how authors can make important contributions (e.g., Byron & Thatcher, 2016; Morrison, 2010; Murmann, 2017; Okhuysen & Bonardi, 2011). What is still missing, though, is a comprehensive treatment of how scholars can approach the task of developing, in a practical sense, research ideas that can yield strong theoretical contributions and impactful papers, while addressing the requests for impact that come from a multitude of stakeholders (e.g., Hicks, 2012; Pölönen & Auranen, 2022). In this paper, our focus is on how theory can be framed and operationalized, and how doing so more effectively can help authors to generate research questions that are important and impactful, and that contribute to the development and testing of theory. For our analysis, we operationalize “important and impactful” using citations of the article. While this proxy is imperfect and captures only one aspect of a work’s value, it has the benefit of being consistent with the current assessment focus in business schools and social science departments around the world.
In essence, our purpose is to identify the anatomy of an “important” research question based on two key attributes: its complexity and how it is translated into testable hypotheses. We use content analysis to study the research questions specified in a sample of quantitative articles published in the Journal of International Business Studies (JIBS) from 2005 to 2015. Based on each paper’s citations during the five years following its publication (2010–2020), we assess how the complexity of the research question, and how it is translated into testable hypotheses, are related to its impact. In this way, we aim to address the following research questions:
- What is the relationship between the complexity of the research question and the impact of the paper, as measured by citations? Are important papers cited differently across business/management, interdisciplinary, and practice-oriented outlets?
- Is there evidence that a clearer operationalization of research questions attracts more citations?
Background
Davis (1971) argued that one of the most important characteristics of a scholarly work is its ability to disconfirm some, but not all, of the assumptions that are held by its audience. The partial nature of the denial of underlying assumptions is crucial. If an article attempts to negate all of a reader’s fundamental expectations about a topic, the reader is likely to discount the findings, treating them as absurd. In contrast, if an article is completely consistent with all of a reader’s assumptions, the argument is likely to be viewed as obvious—and, thus, not important.
Davis’s argument is much broader: scholarship must “stand out” in some way. Barley (2006: 16–17) elaborated on the meaning of this by identifying papers whose characteristics “differed in some significant and striking way from most of the other papers in academic journals”, arguing that three key features—the subject, the methodology, and the theory—are what make a paper memorable. In particular, papers that address fresh topics, use theories or methodologies that are novel, or depart substantially from the norms of the discipline are viewed as being important. However, this perspective is not universal. Bartunek et al. (2006: 14) concluded that “no one single factor makes an empirical research project interesting”. (Note that, for this paper, we assume that “interesting” and “important” are strongly related in the minds of most researchers.)
Colquitt and Zapata-Phelan (2007) adopted a different approach to understanding what makes an article important by introducing a taxonomy of research based on the building and testing of theory. They argued that articles that build on theory and/or test it (they used the terms “builders”, “testers”, and “expanders” in their taxonomy) are cited more widely and are considered more important. While Judge et al. (2007) contended that the quality of the idea itself is one of the strongest predictors of citations, others have noted that the vast majority of articles that are deemed to be “important” enough to be published receive almost no citations (e.g., Boyd et al., 2005). Ladik and Stewart (2008) and Corley and Gioia (2011) assessed theoretical contribution based on two dimensions—originality (incremental or revelatory) and utility (scientific or practical)—and indicated that contribution rests largely on the ability of the research to provide original insights regarding a phenomenon by advancing knowledge in a way that is perceived to be useful.
The debate continues with respect to what represents an “important” article. Advice for developing stronger theoretical contributions includes, for example, assessing the role of assumptions (Foss & Hallberg, 2014) and the nature (e.g., linear, quadratic, etc.) of relationships (Haans et al., 2016; Pierce & Aguinis, 2013). Shepherd and Suddaby (2017) summarized this literature in a review of theory building and identified commonly-employed approaches to developing new and important ideas, such as problematizing (Alvesson & Sandberg, 2011) and identifying paradoxes (Poole & Van de Ven, 1989).
In this paper, we add to the debate on what constitutes an important article by considering the complexity of research questions and their translation into testable hypotheses, and how these are related to citation numbers. Our intent is to offer suggestions on how to turn important papers into impactful ones. We further elaborate on the debate by considering two dimensions that are relevant to multidisciplinary fields such as international business: the extent to which the research questions and their operationalization reach beyond the core international business literature, in terms of their impact on practice and their impact on other academic disciplines.
Methodology
Our sample consists of 423 of the 428 quantitative articles published in JIBS during 2005–2015; citation numbers for the five excluded JIBS articles were not available in Scopus. We focus on international business because it is an interdisciplinary field that welcomes articles from business, psychology, economics, and other disciplines, and because it is inherently complex due to the joint consideration of multiple countries and multiple levels of analysis. Given the nature of our study, with its emphasis on testable hypotheses, we do not consider theoretical, methodological, or qualitative papers. For each quantitative article, we identify and code the research questions and how they are translated into hypotheses. All three authors participated in the coding, which took place in two rounds. First, two authors coded all 423 papers, with an interrater reliability of Cohen’s κ = 0.79; all disagreements were discussed and resolved. Subsequently, the third author coded the subset of articles on which the first round of coding had produced differences; the interrater reliability for this second round was κ = 1.00.
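To make the interrater-reliability check concrete, the following is a minimal sketch in Python, using hypothetical coder labels and the scikit-learn implementation of Cohen’s κ; the actual coding was performed manually on the 423 articles, and the category labels and data below are illustrative assumptions rather than the study’s coding sheet.

```python
# Minimal sketch of the interrater-reliability check described above.
# The coder labels are hypothetical; in the study, two coders assigned each
# of the 423 articles to a complexity/alignment category.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["main_single", "moderation_single", "mediation_cross", "main_single"]
coder_2 = ["main_single", "moderation_single", "mediation_single", "main_single"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")

# Articles on which the first two coders disagreed are flagged for the second round.
second_round = [i for i, (a, b) in enumerate(zip(coder_1, coder_2)) if a != b]
print("Articles requiring a second coding round:", second_round)
```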
We evaluate the complexity of each research question along two dimensions. The first pertains to whether the study assesses solely a main effect or also includes some contingency analysis, mediation analysis, or a combination of moderation and mediation. The second dimension refers to whether the theorization is same-level or cross-level. The resulting seven complexity categories are: (1) main effect on a single level of analysis, (2) main effect across levels, (3) moderation on a single level, (4) moderation across levels, (5) mediation on a single level, (6) mediation across levels, and (7) a combination of moderation and mediation. To determine the level of complexity, we examined the phrasing of the hypotheses and, in cases of doubt, also checked the empirical analysis of the paper (see Note 1). Table 1 reflects our characterization of complexity via sample research questions.
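As an illustration only, the two coding dimensions and the resulting seven categories can be expressed as a simple mapping; the function and labels below are our own shorthand, not part of the original coding protocol.

```python
# Illustrative mapping from the two coding dimensions (effect type and
# same- vs cross-level theorization) to the seven complexity categories,
# numbered as in the text. Names are shorthand, not the authors' labels.
def complexity_category(effect: str, cross_level: bool) -> int:
    """effect is 'main', 'moderation', 'mediation', or 'moderation+mediation'."""
    if effect == "moderation+mediation":
        return 7  # combined moderation and mediation forms a single category
    base = {"main": 1, "moderation": 3, "mediation": 5}[effect]
    return base + (1 if cross_level else 0)

assert complexity_category("main", cross_level=False) == 1       # single-level main effect
assert complexity_category("moderation", cross_level=True) == 4  # cross-level moderation
assert complexity_category("mediation", cross_level=True) == 6   # cross-level mediation
```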
We also consider the match between the research questions and the hypotheses used to test them. A binary variable is used to operationalize the alignment between the research question(s) and the hypotheses, with 1 representing close alignment and 0 otherwise. We coded as 1 those articles for which the complexity expressed in the hypotheses could be inferred from the research question as stated by the authors. For instance, when a research question states that a relationship varies depending on another construct and the authors test for moderation, we coded the research question and the hypothesis as matching (see Table 2 for examples of alignment and the lack thereof). We recognize that this is a relatively blunt measure; following Colquitt and Zapata-Phelan (2007), our intention is not to capture every nuance of theory building and testing, but rather to create a parsimonious conceptual analysis that can help scholars in developing their thinking.
To assess the impact of the published articles, we consider four aspects of citations: the overall count, citations in journals within the broad discipline of business, interdisciplinary citations (in non-business journals), and citations in journals with more of a practitioner orientation. The citation data, from Scopus, are based on a five-year window following the publication of each article. Citing articles are coded as “business” if published in business or management, according to the Harzing Journal Quality List (i.e., classified as business history, economics, entrepreneurship, finance and accounting, general and strategy, innovation, international business, management information systems, knowledge management, marketing, operations research, management science, production and operations management, organization behavior/studies, human resource management, or industrial relations); citations from all other academically-oriented journals are coded as “interdisciplinary”. The “practice-oriented” journals are adapted from Lowry et al. (2004): Academy of Management Executive, Business Horizons, Business Week, California Management Review, Communications of the ACM, Fortune, Harvard Business Review, IEEE Engineering Management Review, Industry Week, Interfaces, Journal of Knowledge Management, Management (France), Management Revue, McKinsey Quarterly, MIS Quarterly Executive, (MIT) Sloan Management Review, T + D, The Economist, TQM Magazine, and Singapore Management Review.
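A minimal sketch of this three-way classification of citing journals follows. The journal and field sets are abbreviated, the function name and data structures are our own, and the priority ordering (practice-oriented outlets checked first) is an assumption made for illustration.

```python
# Sketch of the classification of citing journals into the three outlet
# categories. The sets are abbreviated: the full "business" set follows the
# Harzing Journal Quality List categories, and the full "practice-oriented"
# set is adapted from Lowry et al. (2004).
PRACTICE_ORIENTED = {
    "Harvard Business Review", "California Management Review", "Business Horizons",
    "(MIT) Sloan Management Review", "The Economist", "McKinsey Quarterly",
    # ... remaining outlets from the Lowry et al. (2004)-based list
}
BUSINESS_FIELDS = {
    "international business", "general and strategy", "marketing",
    "finance and accounting", "organization behavior/studies",
    # ... remaining Harzing subject categories listed above
}

def classify_citing_journal(journal: str, harzing_field: str) -> str:
    """Assign a citing journal to 'practice-oriented', 'business', or 'interdisciplinary'."""
    if journal in PRACTICE_ORIENTED:
        return "practice-oriented"
    if harzing_field.lower() in BUSINESS_FIELDS:
        return "business"
    return "interdisciplinary"

print(classify_citing_journal("Harvard Business Review", "general and strategy"))
```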
We estimate the relationships of interest using associative modelling. The overall and business journal citation data have sufficient spread to be modelled as continuous variables with ordinary least squares (OLS) regression; we use ln(citations + 1) as the dependent variable to accommodate skewness and observations with zero citations. Residual analysis for OLS modelling of the interdisciplinary citations displays patterns suggesting that an alternative approach would be preferable, and the citation counts for practitioner journals are small (maximum value of 3). Therefore, the models for the interdisciplinary and practitioner citations are estimated with negative binomial regression, using the untransformed count data (we also report the OLS regressions for comparison, although it is the negative binomial results, rather than the OLS, that should be interpreted). For all of the models, multicollinearity is assessed using variance inflation factor (VIF) values; the maximum VIF values are all well below 3.0, suggesting the absence of problematic multicollinearity. We include control variables in all of the models, including dummy variables to represent whether or not the study is a meta-analysis (as meta-analyses tend to be cited more often than other articles) and the year of publication. We also include the number of authors and the length of the paper (in pages), as these variables have been shown to influence citation numbers (Abramo & D’Angelo, 2015; Acedo et al., 2006; Tahamtan et al., 2016; Xie et al., 2019).
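The estimation approach can be sketched as follows, assuming a pandas DataFrame `df` of the coded articles with hypothetical column names (e.g., `citations_total`, `citations_interdisciplinary`, `match`, `meta_analysis`, `year`, `n_authors`, `pages`); this is an illustration of the modelling strategy, not the authors’ code.

```python
# Sketch of the modelling strategy: OLS on ln(citations + 1) for the overall
# and business citation counts, negative binomial on the raw counts for the
# interdisciplinary and practitioner citations, plus a VIF check.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# df is assumed to be an existing pandas DataFrame with the (hypothetical) columns above.
df["ln_citations_total"] = np.log(df["citations_total"] + 1)

controls = "meta_analysis + C(year) + n_authors + pages"

# OLS with the log-transformed dependent variable (overall/business citations).
ols_fit = smf.ols(f"ln_citations_total ~ match + {controls}", data=df).fit()

# Negative binomial regression on the untransformed counts
# (interdisciplinary/practitioner citations).
nb_fit = smf.negativebinomial(f"citations_interdisciplinary ~ match + {controls}",
                              data=df).fit()

# Variance inflation factors for the non-categorical explanatory variables.
X = sm.add_constant(df[["match", "meta_analysis", "n_authors", "pages"]])
vif = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
print(ols_fit.summary(), nb_fit.summary(), vif)
```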
Results
Table 3 provides information regarding citation numbers for the various categories that we consider, and Fig. 1 shows the distribution of the complexity of the research questions by year. The descriptive statistics for the variables used in the modelling are shown in Table 4.
The results for modelling the relationship between citation numbers and the match between the research question(s) and the hypotheses are presented in Table 5. Model 1, which considers citations in any of the three types of outlets (business, interdisciplinary, and practice-oriented), provides evidence that a match between research questions and hypotheses is associated with more citations (p < 0.01). This finding is consistent for citations in both business and interdisciplinary journals (p < 0.05; models 2 and 3) and for practitioner-oriented journals (p < 0.10; model 4). With regard to our key control variables, meta-analysis articles and longer papers are associated with larger numbers of citations in all but practice-oriented journals. In models 3bis and 4bis, we replicate the analysis in models 3 and 4 using OLS regression rather than negative binomial, for comparison; the results are consistent (see Note 2).
We analyze the relationship between citation numbers and research question complexity using three model specifications. The first, shown in Table 6, considers single- vs cross-level theorization, operationalized using a dummy variable. We find evidence that the level of theorization is associated with a difference in the number of citations for the combined journal categories (model 5), such that articles engaging in cross-level research tend to receive fewer citations, holding the other variables in the model constant. This effect is driven by business journals (model 6), as the estimated coefficients do not differ significantly from zero for interdisciplinary or practice-oriented journals (models 7 and 8). The findings pertaining to the control variables, which account for most of the explained variance, remain similar to those discussed above. In models 7bis and 8bis, we replicate the analysis in models 7 and 8 using OLS regression rather than negative binomial, for comparison; the results are consistent (see Note 3).
The models shown in Table 7 consider another aspect of complexity: whether the hypotheses are specified using main effects, moderation, mediation, or a combination of moderation and mediation. With main effects as the baseline, models 9–11 provide evidence that simpler conceptualization is associated with more citations, given the negative coefficients (p < 0.01) associated with the moderation dummy variable for the collection of all journal types, driven by business and interdisciplinary journals. The results for the control variables are quite consistent with those of previous models. In models 11bis and 12bis, we replicate the analysis in models 11 and 12 using OLS regression rather than negative binomial, for comparison; the results are consistent (see Note 4).
Finally, Table 8 shows the results of a more granular approach to complexity, considering whether the reported analysis addresses main effects, moderation, mediation, or a combination of moderation and mediation, along with same- vs. cross-level effects. The estimated coefficients for the explanatory variables represent the expected increase in the number of citations for each category relative to the baseline of same-level main effects. The results are broadly consistent with the earlier findings that simpler studies tend to be cited more often in business and interdisciplinary journals (models 14 and 15). The control variables pertaining to meta-analysis and article length provide consistent results. In models 15bis and 16bis, we replicate the analysis in models 15 and 16 using OLS regression rather than negative binomial, for comparison; the results are consistent (see Note 5).
To assess the risk that omitted variables could bias our inferences or invalidate our results, we applied the “impact threshold of a confounding variable” approach (Busenbark et al., 2022; Xu et al., 2019). This analysis indicated that 40–60% of the sample would need to be replaced in order to render our significant results non-significant. Our results therefore appear to be robust to omitted variables.
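As a rough illustration of the logic behind this robustness check (the actual analysis used the ITCV/konfound framework of Busenbark et al. (2022) and Xu et al. (2019)), the share of an estimate that would have to be attributable to bias before the inference became non-significant can be computed as in the sketch below; the numerical inputs are hypothetical, not our model estimates.

```python
# Hedged sketch of the "percent bias to invalidate an inference" calculation
# that underlies the konfound/ITCV framework. Inputs are illustrative only.
from scipy import stats

def percent_bias_to_invalidate(estimate: float, std_error: float,
                               n_obs: int, n_predictors: int,
                               alpha: float = 0.05) -> float:
    """Fraction of the estimate that would need to be due to bias for the
    coefficient to fall below the two-tailed significance threshold."""
    dof = n_obs - n_predictors - 1
    t_crit = stats.t.ppf(1 - alpha / 2, dof)
    threshold = t_crit * std_error  # smallest estimate that remains significant
    return max(0.0, 1 - threshold / abs(estimate))

# Hypothetical coefficient of 0.30 with standard error 0.10, 423 observations,
# and 15 model terms; roughly, this fraction of cases would need to be replaced
# with zero-effect cases to overturn the inference.
bias = percent_bias_to_invalidate(0.30, 0.10, n_obs=423, n_predictors=15)
print(f"Percent bias to invalidate: {bias:.0%}")
```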
Discussion
Our findings suggest that, first, articles whose research questions are less complex tend to be seen as more important and impactful, based on citation numbers in the five years after publication, particularly among business journals and outlets with stronger interdisciplinary orientations. In addition, research questions that address only the main effects have, on average, more citations. These findings are somewhat counterintuitive, as we are typically trained to investigate boundary conditions and to identify moderators and mediators of established relationships. Arguably, the bulk of the research currently published in the key business and social science journals focuses on research questions that would be characterized as having moderate complexity in our coding system, where contingency models and mediation models dominate (Boyd et al., 2017; Casper et al., 2007; Colquitt & Zapata-Phelan, 2007).
Second, our analysis highlights the different nature of what is viewed as important by social science researchers, compared to what matters to practitioners and possibly funders. In academic research, theoretically-driven "importance" is valued (Davis, 1971); this is consistent with the premium that academics put on the development of theory (Suddaby, 2014). However, practitioners and funders often do not share academics’ views with respect to what constitutes important work. They are typically more interested in how to solve a problem, or how to make organizations more competitive and successful. Practitioners also seek “evidence-based” suggestions (Rousseau, 2006) that require a different type of knowledge accumulation. Most of the papers submitted to top journals aim to develop new theories, rather than testing existing ones (Kacmar & Whitfield, 2000); this arguably reduces the value of the work for practical use. In this regard, the fact that replicability has been gaining momentum in the broad business literature may help to increase the practical relevance of international business studies in the future, as replication will allow better assessment of which theories have greater predictive power and produce consistent results (Aguinis & Solarino, 2019; Aguinis et al., 2017; Bettis et al., 2016), and thus can better inform practitioners' decisions.
There is also the potential for a mismatch within academia. While complexity is presumed to be beneficial in terms of achieving publication, it may limit the article’s impact, especially if there is not a good match between the complexity of the work and the intended audience. While complex and more narrow studies may be accessible to specialists in a particular field, this approach may alienate the broader and more generalist audience that might otherwise facilitate wider utilization of the research. More complex studies that test boundary conditions, while important for expanding the knowledge base of a specific field, may be of less interest to non-specialists, yielding lower impact, and potentially reducing the chances of obtaining grants from funders that value impact (Pölönen & Auranen, 2022). Adding a new boundary condition does not necessarily enhance applicability or help managers to find solutions to their business problems.
Thus, our study emphasizes the importance of researchers’ thinking, early in the project, about the audience with whom a paper is meant to engage. Studies that originate in business and management and are cited by authors in other fields will tend to offer solutions to broader problems through methodological advances or intuition that can be extended to other contexts, making the work understandable by non-specialist readers. For example, Berry et al. (2010) offered a novel way of conceptualizing, measuring, and examining the influence of cross-national distance, extending the notion of distance into nine dimensions: economic, financial, political, administrative, cultural, demographic, knowledge, connectedness, and geographic. In doing so, the authors made the construct of cross-national distance more accessible to a broader audience; this contribution has found application in education (Fang et al., 2013), policy (Rivera et al., 2013), and sociology (Alcacer et al., 2013).
Looking qualitatively at articles that are cited by more practice-oriented journals, we noted that they share some characteristics. One is that they tend to make a contribution that involves organizing and extending existing knowledge. In addition, these studies often identify causal mechanisms that are clearly applicable to organizations. As an example, Luo and Tung (2007) presented a framework on the spring-boarding activities of emerging-market multinational firms. The framework is accessible to managers, and practitioner-oriented outlets have picked up on its utility. The article has been discussed in outlets including Business Horizons, California Management Review, and The Economist.
The upshot is that researchers in the field of international business have a great deal to juggle. On the one hand, they need to be productive and publish, and there is a general understanding that complexity can help in achieving publication in top-level journals. On the other hand, many business schools and funders are now placing a high premium on "impact", which pertains to the demonstrated use of research by organizations (Tihanyi, 2020), including firms and governments. Therefore, researchers in the social sciences and in business need to find a balance between being able to achieve the expected volume and quality of publications for their schools and career progression, while achieving impact. Our analyses suggest that one potential approach to achieving both goals is to design simpler studies.
We also looked for qualitative commonalities among studies that received few citations. One notable characteristic is that these papers tend to have many hypotheses, which may be a consequence of the authors' approach to problematizing the issue (Alvesson & Sandberg, 2011). Authors have options with respect to problematization; these include revisiting an idea to offer new perspectives, making deliberate efforts to challenge the assumptions underlying existing theories, and delving more deeply by increasing the complexity with which the topic is considered. A larger number of hypotheses is consistent with problematization via complexity. By developing more complicated models to test, authors are able to examine more fine-grained aspects of theories or phenomena; however, this may come at the expense of the importance of the paper, as reflected in citation counts.
This is broadly consistent with our quantitative finding that a stronger match between research question(s) and hypotheses is associated with higher citations among academically-oriented journals. It may be that this alignment allows the reader to better follow and engage with the flow of the article and to understand its results more clearly. Such clarity may make the article easier to absorb and more memorable, leading to more citations.
Finally, the need to account for the impact of different levels of data (e.g., investigating both country- and industry-level effects on internationalization strategy decisions) is widely recognized in the literature (e.g., Aguinis & Molina-Azorín, 2015). Unexpectedly, our findings did not support the idea that cross-level research yields articles that are more important or impactful. We certainly do not dispute the theoretical value of doing multilevel research. It is possible, however, that this value is not as well recognized in the community because multilevel research is more difficult to interpret, both conceptually and empirically, making it less accessible to readers who lack deep familiarity with either the context or the methodology, and thereby yielding fewer citations.
What’s in it for authors?
Our analysis of the citations generated by quantitative articles published in JIBS during 2005–2015 has led us to identify some key recommendations for researchers, especially those early in their careers. Alvesson and Sandberg (2011: p. 247) discussed the role of problematizing in theory building: “It is increasingly recognized that what makes a theory important and influential is that it challenges our assumptions in some significant way”. Problematizing is important in social science research because it encourages scholars to go beyond identifying and filling a gap in the literature (which, per Colquitt and Zapata-Phelan (2007), often takes the form of the testing of a new moderator). The gap-filling approach involves identifying aspects of existing knowledge that are incomplete, inconclusive, or underdeveloped in some respect, and may result in very complex theoretical models, with potentially adverse effects on citations. Problematization, when taken seriously (i.e., not as a cover for gap filling), may yield deeper insights that are less cluttered by complexity, offering greater potential for interest and impact. The advice here is that simpler research questions may yield articles that are considered to be more important.
Second, the manner in which research questions are framed matters. Framing a research question is not the same as posing a question. The research question is a core part of a paper's underlying framework and has implications for its goals, design, and methods. Maxwell (2012) advised that research questions should have coherence with the conceptual framework. Flick (2006) further suggested that research questions should drive the methodology of the paper. Our analysis yields another piece of advice: hypotheses should be closely aligned with the research question(s). Our empirical findings regarding citation numbers may reflect the potential for reader confusion when research questions presented in the introduction are not consistent with the actual operationalization of the hypotheses, leading the reader to wonder what questions have actually been answered in the article. Such papers may be less memorable or credible, and therefore cited less frequently in academic journals. Aligning the hypotheses closely with the research questions may help to generate greater academic impact.
Authors should also be aware of the debate regarding “important for academia” versus “important for practice”, which has been going on for quite some time and has recently experienced a revival (Tihanyi, 2020). This debate concerns whether researchers should trade doing theoretically important research for doing work that is “useful” (i.e., practical). While there is certainly the potential for academically valuable articles to be important to the managers who are presumably readers of practitioner-oriented journals, the issue might reside in the accessibility of the ideas to a wider audience.
We are not advocating for the “science of success” (Barabási & Musciotto, 2019), in the sense that authors should take specific actions in order to obtain higher impact. The aim of scientific research is to advance knowledge by discovering new relationships between constructs and by finding evidence of whether, and under what circumstances, these relationships hold. Our evidence suggests that the manner in which authors design studies and align research questions and hypotheses may affect how important an article is perceived to be, thereby influencing the “success” of the authors along specific dimensions.
Limitations
Our paper has some limitations that should be noted. First, there are many factors that contribute to explaining the number of citations that a paper receives; this is reflected in the relatively limited explanatory power of our models. For example, in the interest of parsimony, our taxonomy does not capture the full range of research designs. Certainly, there is also variation in the goals of different articles—for example, some aim to test the relationships among constructs, while others aim to develop new measures—and there are differences related to the perceived importance of specific relationships among constructs, which we have not captured in our coding. Second, our study does not incorporate a consideration of clarity in terms of writing and theoretical explanation. Some authors have a talent for writing that elevates the contributions of their articles, and we have not operationalized this attribute. These are all elements that might affect how widely a study will be cited. Third, we have only considered publications in one journal. While JIBS is certainly an influential journal, it is not the sole publication outlet of interest for international business and social science researchers; future work will need to consider a larger collection of source publications. Finally, our measure of “important” is a key limitation. We have treated each citation equally, regardless of the importance of the cited article to the manuscript (Kacmar & Whitfield, 2000; Li et al., 2022). Despite these limitations, we believe that our study provides insights that can begin to help scholars in framing their research more effectively, to support the goal of achieving greater impact.
Notes
1. The full dataset is available at https://collections.durham.ac.uk/files/r2m613mx574.
2. Note that the OLS results should not be interpreted on their own, given the nature of the dependent variables in models 3 and 4, which create inconsistencies between the residuals and the OLS assumptions.
3. Note that the OLS results should not be interpreted on their own, given the nature of the dependent variables in models 7 and 8, which create inconsistencies between the residuals and the OLS assumptions.
4. Note that the OLS results should not be interpreted on their own, given the nature of the dependent variables in models 11 and 12, which create inconsistencies between the residuals and the OLS assumptions.
5. Note that the OLS results should not be interpreted on their own, given the nature of the dependent variables in models 15 and 16, which create inconsistencies between the residuals and the OLS assumptions.
References
Abramo, G., & D’Angelo, C. A. (2015). The relationship between the number of authors of a publication, its citations and the impact factor of the publishing journal: Evidence from Italy. Journal of Informetrics, 9(4), 746–761.
Acedo, F. J., Barroso, C., Casanueva, C., & Galán, J. L. (2006). Co-authorship in management and organizational studies: An empirical and network analysis. Journal of Management Studies, 43(5), 957–983.
Aguinis, H., Cascio, W. F., & Ramani, R. S. (2017). Science’s reproducibility and replicability crisis: International business is not immune. Journal of International Business Studies, 48, 653–663.
Aguinis, H., & Molina-Azorín, J. F. (2015). Using multilevel modeling and mixed methods to make theoretical progress in microfoundations for strategy research. Strategic Organization, 13(4), 353–364.
Aguinis, H., & Solarino, A. M. (2019). Transparency and replicability in qualitative research: The case of interviews with elite informants. Strategic Management Journal, 40(8), 1291–1315.
Alcacer, J., & Ingram, P. (2013). Spanning the institutional abyss: The intergovernmental network and the governance of foreign direct investment. American Journal of Sociology, 118(4), 1055–1098.
Alvesson, M., & Sandberg, J. (2011). Generating research questions through problematization. Academy of Management Review, 36(2), 247–271.
Barabási, A. L., & Musciotto, F. (2019). Science of success: An introduction. Computational Social Science and Complex Systems, 203, 57–72.
Barley, S. R. (2006). When I write my masterpiece: Thoughts on what makes a paper interesting. Academy of Management Journal, 49(1), 16–20.
Bartunek, J. M., Rynes, S. L., & Ireland, R. D. (2006). What makes management research interesting, and why does it matter? Academy of Management Journal, 49(1), 9–15.
Berry, H., Guillén, M. F., & Zhou, N. (2010). An institutional approach to cross-national distance. Journal of International Business Studies, 41(9), 1460–1480.
Bettis, R. A., Helfat, C. E., & Shaver, J. M. (2016). The necessity, logic, and forms of replication. Strategic Management Journal, 37(11), 2193–2203.
Boyd, B. K., Finkelstein, S., & Gove, S. (2005). How advanced is the strategy paradigm? The role of particularism and universalism in shaping research outcomes. Strategic Management Journal, 26(9), 841–854.
Boyd, B. K., Gove, S., & Solarino, A. M. (2017). Methodological rigor of corporate governance studies: A review and recommendations for future studies. Corporate Governance: An International Review, 25(6), 384–396.
Busenbark, J. R., Yoon, H., Gamache, D. L., & Withers, M. C. (2022). Omitted variable bias: Examining management research with the impact threshold of a confounding variable (ITCV). Journal of Management, 48(1), 17–48.
Byron, K., & Thatcher, S. M. (2016). What I know now that I wish I knew then—teaching theory and theory building. Academy of Management Review, 41(1), 1–8.
Campbell, J. T., Eden, L., & Miller, S. R. (2012). Multinationals and corporate social responsibility in host countries: Does distance matter? Journal of International Business Studies, 43(1), 84–106.
Casper, W. J., Eby, L. T., Bordeaux, C., Lockwood, A., & Lambert, D. (2007). A review of research methods in IO/OB work-family research. Journal of Applied Psychology, 92(1), 28–43.
Colquitt, J. A., & Zapata-Phelan, C. P. (2007). Trends in theory building and theory testing: A five-decade study of the Academy of Management Journal. Academy of Management Journal, 50(6), 1281–1303.
Corley, K. G., & Gioia, D. A. (2011). Building theory about theory building: What constitutes a theoretical contribution? Academy of Management Review, 36(1), 12–32.
Davis, M. S. (1971). That’s interesting! Towards a phenomenology of sociology and a sociology of phenomenology. Philosophy of the Social Sciences, 1(2), 309–344.
Fang, E., & Zou, S. (2009). Antecedents and consequences of marketing dynamic capabilities in international joint ventures. Journal of International Business Studies, 40, 742–761.
Fang, Z., Grant, L. W., Xu, X., Stronge, J. H., & Ward, T. J. (2013). An international comparison investigating the relationship between national culture and student achievement. Educational Assessment, Evaluation and Accountability, 25(3), 159–177.
Flick, U. (2006). An introduction to qualitative research. Sage.
Foss, N. J., & Hallberg, N. L. (2014). How symmetrical assumptions advance strategic management research. Strategic Management Journal, 35(6), 903–913.
Gao, G. Y., & Pan, Y. (2010). The pace of MNEs’ sequential entries: Cumulative entry experience and the dynamic process. Journal of International Business Studies, 41(9), 1572–1580.
Haans, R. F., Pieters, C., & He, Z. L. (2016). Thinking about U: Theorizing and testing U- and inverted U-shaped relationships in strategy research. Strategic Management Journal, 37(7), 1177–1195.
Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41, 251–261.
Ivus, O. (2015). Does stronger patent protection increase export variety? Evidence from US product-level data. Journal of International Business Studies, 46, 724–731.
Judge, T. A., Cable, D. M., Colbert, A. E., & Rynes, S. L. (2007). What causes a management article to be cited—article, author, or journal? Academy of Management Journal, 50(3), 491–506.
Kacmar, K. M., & Whitfield, J. M. (2000). An additional rating method for journal articles in the field of management. Organizational Research Methods, 3(2), 392–406.
Katsikeas, C. S., Skarmeas, D., & Bello, D. C. (2009). Developing successful trust-based international exchange relationships. Journal of International Business Studies, 40(1), 132–155.
Ladik, D. M., & Stewart, D. W. (2008). The contribution continuum. Journal of the Academy of Marketing Science, 36(2), 157–165.
Lam, S. K., Ahearne, M., & Schillewaert, N. (2012). A multinational examination of the symbolic–instrumental framework of consumer–brand identification. Journal of International Business Studies, 43, 306–331.
Li, X., Zhao, C., Hu, Z., Yu, C., & Duan, X. (2022). Revealing the character of journals in higher-order citation networks. Scientometrics, 127(11), 6315–6338.
Lowry, P. B., Romans, D., & Curtis, A. M. (2004). Global journal prestige and supporting disciplines: A scientometric study of information systems journals. Journal of the Association for Information Systems, 5(2), 29–80.
Lu, J., Liu, X., Wright, M., & Filatotchev, I. (2014). International experience and FDI location choices of Chinese firms: The moderating effects of home country government support and host country institutions. Journal of International Business Studies, 45(4), 428–449.
Luo, Y., & Tung, R. L. (2007). International expansion of emerging market enterprises: A springboard perspective. Journal of International Business Studies, 38(4), 481–498.
Maxwell, J. A. (2012). Qualitative research design: An interactive approach. Sage.
Morrison, E. (2010). OB in AMJ: What is hot and what is not? Academy of Management Journal, 53(5), 932–936.
Murmann, J. P. (2017). More exploration and less exploitation: Cultivating blockbuster papers for MOR. Management and Organization Review, 13(1), 5–13.
Obadia, C., Bello, D. C., & Gilliland, D. I. (2015). Effect of exporter’s incentives on foreign distributor’s role performance. Journal of International Business Studies, 46, 960–983.
Okhuysen, G., & Bonardi, J. P. (2011). The challenges of building theory by combining lenses. Academy of Management Review, 36(1), 6–11.
Pierce, J. R., & Aguinis, H. (2013). The too-much-of-a-good-thing effect in management. Journal of Management, 39(2), 313–338.
Pölönen, J., & Auranen, O. (2022). Research performance and scholarly communication profile of competitive research funding: The case of Academy of Finland. Scientometrics, 127(12), 7415–7433.
Poole, M., & Van de Ven, A. (1989). Using paradox to build management and organization theories. Academy of Management Review, 14, 562–578.
Ravlin, E. C., Liao, Y., Morrell, D. L., Au, K., & Thomas, D. C. (2012). Collectivist orientation and the psychological contract: Mediating effects of creditor exchange ideology. Journal of International Business Studies, 43, 772–782.
Reiche, B. S., Harzing, A. W., & Pudelko, M. (2015). Why and how does shared language affect subsidiary knowledge inflows? A social identity perspective. Journal of International Business Studies, 46(5), 528–551.
Rivera, J., & Oh, C. H. (2013). Environmental regulations and multinational corporations’ foreign market entry investments. Policy Studies Journal, 41(2), 243–272.
Rousseau, D. (2006). Is there such a thing as “evidence-based management”? Academy of Management Review, 31(2), 256–269.
Salomon, R., & Wu, Z. (2012). Institutional distance and local isomorphism strategy. Journal of International Business Studies, 43, 343–367.
Samiee, S., Shimp, T. A., & Sharma, S. (2005). Brand origin recognition accuracy: Its antecedents and consumers’ cognitive limitations. Journal of International Business Studies, 36, 379–397.
Shepherd, D. A., & Suddaby, R. (2017). Theory building: A review and integration. Journal of Management, 43(1), 59–86.
Suddaby, R. (2014). Why theory? Academy of Management Review, 39(4), 407–411.
Tahamtan, I., Safipour Afshar, A., & Ahamdzadeh, K. (2016). Factors affecting number of citations: A comprehensive review of the literature. Scientometrics, 107, 1195–1225.
Tihanyi, L. (2020). From “that’s interesting” to “that’s important.” Academy of Management Journal, 63(2), 329–331.
Tröster, C., & Van Knippenberg, D. (2012). Leader openness, nationality dissimilarity, and voice in multinational management teams. Journal of International Business Studies, 43, 591–613.
Van Essen, M., Heugens, P. P., Otten, J., & van Oosterhout, J. H. (2012). An institution-based view of executive compensation: A multilevel meta-analytic test. Journal of International Business Studies, 43(4), 396–423.
Venaik, S., Midgley, D. F., & Devinney, T. M. (2005). Dual paths to performance: The impact of global pressures on MNC subsidiary conduct and performance. Journal of International Business Studies, 36(6), 655–675.
Weitzel, U., & Berns, S. (2006). Cross-border takeovers, corruption, and related aspects of governance. Journal of International Business Studies, 37, 786–806.
Xie, J., Gong, K., Cheng, Y., & Ke, Q. (2019). The correlation between paper length and citations: A meta-analysis. Scientometrics, 118(3), 763–786.
Xu, D., Zhou, C., & Phan, P. H. (2010). A real options perspective on sequential acquisitions in China. Journal of International Business Studies, 41(1), 166–174.
Xu, R., Frank, K. A., Maroulis, S. J., & Rosenberg, J. M. (2019). konfound: Command to quantify robustness of causal inferences. The Stata Journal, 19(3), 523–550.
Zhou, C., & Li, J. (2008). Product innovation in emerging market-based international joint ventures: An organizational ecology perspective. Journal of International Business Studies, 39(7), 1114–1132.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Acknowledgements
We thank Herman Aguinis (George Washington U.) and Jason Colquitt (U. of Notre Dame) for comments and suggestions that helped to improve this work.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.