The goal of JIBS is to publish insightful, innovative and impactful research on international business.


The opening quote, taken from the “JIBS statement of editorial policy,” sets high expectations for articles published in JIBS. Making a theoretical contribution is central, and the available evidence supporting the causal relationship(s) proposed in a theory is a significant component when judging the level of insight and impact a study’s conclusions warrant. Causal ambiguity is at best a significant limitation, and is often a fatal flaw in research studies. Evidence for causal relationships should be a concern for IB scholars because our theories, and the empirical evidence supporting them, are used to advise founders, leaders, organizational members, and other stakeholders about policies, interventions, strategies, and more that “have a profound impact on the lives and wellbeing of people all over the world” (Rousseau, 2006; Rynes, Rousseau, & Barends, 2014: 319). Offering such advice becomes uncomfortable when the evidence supporting any given theory is limited or suspect. In this article we point out the limited application of experimental designs in IB research, highlight the value (and limitations) of experimental methods for IB research, provide a reminder of the basic tenets of experimental design, and encourage more papers using experimental designs to be submitted to JIBS. The purpose of this essay is to restate that JIBS welcomes (and encourages) experimental research, and to reinforce the view that IB research could benefit from more widespread application of experimental methods, when possible, to evaluate the internal validity of IB theories. We draw on recent examples to illustrate how experimental methods can be used to make strong theoretical contributions to IB, and we suggest that by thinking experimentally both JIBS authors and reviewers can better evaluate the origins of constructs and the robustness of the causal relationships in theory.
Our goal is to highlight one avenue to elevate the quality of evidence we accumulate for IB theories.

Research Methods and Evidence

Choices about research methods have important implications for the accumulation of IB knowledge. There are many ways to accumulate strong evidence, for example through meta-analyses of large numbers of high-quality correlational studies. However, because they provide the only unequivocal method for demonstrating causality, many consider random-assignment, controlled experiments the gold standard for evidence (Pfeffer & Sutton, 2006). Controlled experiments isolate causal variables and enable a strong test of the robustness of a theory: they provide convincing evidence for theories, especially when followed by field studies. In describing the usefulness of experimentation in economics research, Croson, Anand, and Agarwal (2007: 176) noted that experiments can “be designed to capture what researchers believe are the relevant dimensions of the field and to replicate the regularity in controlled conditions. Then, one by one, the real-world features can be eliminated (or relaxed or changed) until the regularity observed disappears or significantly weakens. This exercise identifies the cause (or causes) of the empirically observed regularity, and can result in theory construction.”

There is considerable debate about the usefulness of ranking the quality of evidence in management research (e.g., Learmonth & Harding, 2006; Morrell, 2012). Our intention here is not to argue for the pre-eminence of one method over another, but to highlight the importance of method choice, and the need for so-called triangulation of methods (McGrath, 1982) to evaluate internal, external, construct, and statistical conclusion validity of our work (Cook & Campbell, 1976). For readers without a deep grounding in experimental methods we summarize the basics in Box 1.

Experimentation in IB Research

Compared with other management areas, experiments are rare in IB, and correspondingly in JIBS. A search of the past twenty years of empirical publications in JIBS illustrates this point. Of 900 empirical research articles published, a mere eight (less than 1%) used an experimental design. In these eight papers the topics ranged across consumer behavior in marketing/advertising, sales communication, venture capitalists’ (VCs’) decision-making, cultural differences in decision-making, and empowerment and job satisfaction. Furthermore, most of these studies would be considered quasi-experiments rather than true experiments because of their sampling approaches.

There may be a number of reasons for the paucity of true experimental studies, including that much of the research published in JIBS is multidisciplinary and examines macro, often long-term, phenomena. Many IB phenomena do not permit random assignment into experimental or control groups; assigning countries to political economies, companies to globalization strategies, or countries of origin to individuals would be impossible, to say the least. In cases like these, we do not have IB experiments because random assignment is not possible. Yet while not right for all areas of IB, there are some IB theories and applications that would benefit from experimental designs, from quasi-experimental designs, and from experimental thinking.

Another reason we may see few experiments published in JIBS is the nature of the samples used in many experimental studies. In other disciplines, such as psychology, experiments are commonly conducted using student samples, and there may be a false perception among authors that JIBS does not publish research using student samples. For some IB topics, such as the study of cross-cultural perceptions of advertising strategies or global career choices, student samples may be quite appropriate (Bello et al., 2009). The choice of sample alone should not discourage IB scholars from using the experimental method. The real question, as with all other studies, is whether the results found from a given sample generalize to the broader population of interest.

Beyond the issue of student samples, a key challenge for IB scholars is recruiting participants from varying cultural and institutional backgrounds and conducting experiments with people in different geographic locations. For many questions, it may be important for participants to be located in their home context. For example, a researcher may wish to evaluate differences between Chinese participants and German participants, but the expected effects may vary importantly if the study were conducted in the United States rather than with the Chinese participants in China and the German participants in Germany. The cognitive and behavioral manifestations and effects of culture vary based on one’s location and the associated contextual cues (e.g., Chiu, Gelfand, Yamagishi, Shteynberg, & Wan, 2010; Hong, Morris, Chiu, & Benet-Martinez, 2000). For instance, the effects of requiring English as the common language of communication may look quite different if the experiment were conducted in the US rather than with the participants located in their native language environments (e.g., Ji, Zhang, & Nisbett, 2004). While participant recruiting and location remain a barrier to IB experiments, advances in technology, combined with creative study designs, allow new ways to conduct international experiments. For example, subjects can be recruited globally, and experiments can be conducted virtually through synchronous interaction (e.g., Jang, 2014). These developments allow experimental applications in areas previously not considered.

Exemplars in IB research

While experimental designs are somewhat rare in IB research, and particularly in JIBS, there are subfields within IB where experiments are more common, especially those examining individual-level or team-level outcomes. These areas are good places to start when looking at how experiments can be conducted on IB topics. Consider the following examples within international business subfields where controlled experiments have made contributions that would have been impossible with other methods, or where other methods would have provided weaker evidence.

International marketing

Marketing, and by extension international marketing, has a long history of experimental research, including areas such as consumer behavior and advertising (e.g., Bazerman, 2001; Peterson, Albaum, & Beltramini, 1985; Torelli, Monga, & Kaikati, 2012), decisions about retail store environments (e.g., Baker, Levy, & Grewal, 1992), sales and influence tactics (e.g., Busch & Wilson, 1976; Griskevicius, Goldstein, Mortensen, Sundie, Cialdini, & Kenrick, 2009), and even partner satisfaction in marketing alliances (e.g., Shamdasani & Sheth, 1995). Of the few experimental studies published in JIBS, marketing has been a key focus. For instance, in a 1999 JIBS article, Chanthika Pornpitakpan used an experimental design to conclude that Americans who adapt to Thai and Japanese language and behavioral norms have more positive sales outcomes when working in those national contexts (Pornpitakpan, 1999). More recently, Wan and colleagues (Wan, Luk, & Chow, 2014) published a paper based on an experiment in six Chinese cities examining consumer responses to sexual advertising, and found support for their expectation that men’s responses to nudity in advertising were less affected by modernization than were women’s responses. When addressing potentially sensitive topics such as sexuality and arousal, norms for which are likely to vary importantly across cultures, survey methods may be biased (Hui & Triandis, 1985); such potential for response bias is common in IB research. Under these conditions, experimental manipulation may provide much clearer results. Furthermore, intersubjective theories of culture indicate that people may not act in accordance with stated values or beliefs (Chiu et al., 2010; Hong et al., 2000). As a result, insights such as those found in Wan and colleagues’ work may not be possible without an experimental design.
Together these examples illustrate experimental research published in JIBS, and the rich tradition of experimental research in marketing suggests an opportunity for more such studies to be submitted to JIBS.

International management

Experimental research is very common in many subfields of management. There is a particularly strong tradition of experimental research in cross-cultural management, cross-cultural psychology, and cross-cultural HRM, with many experimental studies conducted each year. However, few of these studies are finding their way to JIBS. One example in international management, identified in our literature review, is an experiment published in JIBS (Zacharakis, McMullen, & Shepherd, 2007) that used a policy-capturing design involving 119 VCs in three countries to examine the influence of economic institutions on VC decision-making. Each VC was presented with 50 ventures in randomized order and evaluated them on eight decision factors. Results indicated cross-national differences in the type of information emphasized in investment decisions.

Another example of experimental work in this domain, though one that did not appear in JIBS, is Caligiuri and Phillips (2003), who conducted a true experiment in the context of expatriate candidates’ decisions about whether to accept an international assignment, randomly assigning actual candidates for expatriate assignments to receive (or not) a self-assessment decision-making tool. Compared with the control group, participants receiving the realistic job preview (RJP) and self-assessment tool reported greater self-efficacy for success on the international assignment and greater confidence in their decision to accept one. If this study had employed a correlational design, one which examined the level of use of the tool against the outcome measures, the authors could only have inferred the influence of the tool itself. With that type of design, there would have been an alternative explanation for the results: the expatriate candidates with higher efficacy in the first place might have been those more likely to seek out and use the tool. Random assignment and the pre-test allowed the researchers to disentangle baseline individual differences and isolate causality: the RJP self-assessment tool caused the expatriates to have greater efficacy and greater confidence in their ability to make an effective decision about an international assignment. We encourage international management scholars to consider experimental approaches, and those who are conducting work of this type to consider JIBS as an outlet.

International economics

Economics scholars have increasingly embraced experimental methods, and this has extended to research in international economics (e.g., Hey, 1998; Roth, 1995). Again, however, little of this work has found its way to JIBS. Most applications of experiments within economics have been to test theories of individual choice, game theory, and the organization and functioning of markets (Buchan, 2003). An example is a study by Roth and colleagues (Roth, Prasnikar, Okuno-Fujiwara, & Zamir, 1991), a multicountry comparison of bargaining behavior. The authors compared two-party bargaining games with multiparty bargaining games in four countries, and interpreted the observed differences as illustrating cross-country differences in what is seen as an acceptable offer. In a follow-up study, Buchan and her colleagues (Buchan, Johnson, & Croson, 2006) manipulated the balance of power among players in the bargaining games, and found that power had a more differential influence on offers made in Japan than in the US. These two examples illustrate a history of experimentation in international economics, at least in the areas of individual choice and game theory. Experimental approaches to cultural differences in ultimatum games have proliferated enough to allow meta-analyses (Oosterbeek, Sloof, & Van De Kuilen, 2004). Finally, with respect to understanding markets, experimental international economics research has also been proposed as a way to test proposed policy interventions, in the hope that conducting experiments can reveal unintended consequences of policy decisions (Camerer, 2003; Friedman & Sunder, 1994).

Opportunities for Experimental Approaches in IB

One obvious opportunity for additional experimental studies in IB is to encourage more experimental work in the three domains identified above. However, in keeping with improving the evidence for causal relationships more broadly across IB, there are several other opportunities to apply experimental approaches: controlling possible alternative causes, designing longitudinal field experiments, employing experiments as part of a multi-method approach, and thinking experimentally.

Controlling nuisance variables

A number of experimental design methods can be used to control nuisance variables, or alternative causal explanations. Two examples are matching cases, and identifying comparable groups of cases and then varying the treatment level across the groups. Matching equates cases on a number of dimensions (determined by theory). Done correctly, matching on nuisance variables can strongly rule out alternative causal explanations. For example, Earley (1989) matched an American sample to a harder-to-obtain sample from the P.R.C. on age, gender, job tenure, education, job duties, and career aspirations in a study of the effect of social loafing across cultures. Matching can be effective but becomes difficult to execute correctly as the number of cases required increases and/or as the number of nuisance variables needing control increases (Schwab, 2013). An alternative method of control involves identifying comparable groups of cases for each level of the independent variable. A recent example is provided in the previously mentioned JIBS article by Wan et al. (2014), which tests the effect of modernization on consumer responses to advertising. In that study, several Chinese cities at various stages of modernization served as levels of the independent variable, while the single-country context controlled for a wide range of societal-level variables. Similarly, Meyer and Peng (2005) suggest that changes in Central and Eastern Europe since the 1990s provide unique societal quasi-experiments that offer an opportunity to test the applicability of existing theories and to develop new ones in the areas of (1) organizational economics, (2) resource-based theories, and (3) institutional theories.
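To make the matching logic concrete, the following minimal sketch pairs cases from two samples that are identical on theory-determined nuisance variables, in the spirit of Earley's (1989) matched US–P.R.C. design. The records, field names, and values are hypothetical, invented purely for illustration.

```python
# A minimal sketch of exact matching on nuisance variables.
# All data records and field names below are hypothetical.
from collections import defaultdict

def exact_match(treated, controls, keys):
    """Pair each treated case with one control identical on all matching keys."""
    pool = defaultdict(list)
    for c in controls:
        pool[tuple(c[k] for k in keys)].append(c)
    pairs = []
    for t in treated:
        bucket = pool[tuple(t[k] for k in keys)]
        if bucket:  # a match exists; pop so each control is used only once
            pairs.append((t, bucket.pop()))
    return pairs

us_sample = [{"id": 1, "gender": "F", "tenure": 5, "loafing": 2.1}]
prc_sample = [{"id": 9, "gender": "F", "tenure": 5, "loafing": 3.4},
              {"id": 10, "gender": "M", "tenure": 2, "loafing": 2.9}]
pairs = exact_match(us_sample, prc_sample, keys=["gender", "tenure"])
```

Note that cases without an exact counterpart are simply dropped, which is one reason matching becomes harder to execute as the number of nuisance variables (and hence the number of distinct matching profiles) grows.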

Longitudinal field experiments

Longitudinal field experiments are another quasi-experimental method that could work well in international business research. Once pretest or baseline levels of the outcome measure are established, the independent variable is introduced – consecutively – to comparable units. At each point in time, change in the outcome is assessed. If the independent variable affects the dependent variable only at the point in time when it is introduced to a given group, a stronger cause-effect case can be made. This design is attractive because organizations often run pilot programs in certain subsidiaries or introduce practices across subsidiaries consecutively (rather than introducing a new technology platform or training practice organization-wide). While desirable because it is naturally occurring, this longitudinal field design can present a problem if the groups being compared are not similar from the outset, or if some other concurrent occurrence, such as a currency fluctuation or a geopolitical crisis, affects the outcomes tested over time.
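The logic of this design can be sketched in a toy simulation: hypothetical subsidiaries adopt a practice at different times, and the causal case rests on each unit's outcome shifting only at its own introduction point. The unit names, rollout schedule, and constant effect size are invented for illustration; real field data would also contain noise, trends, and the concurrent shocks noted above.

```python
# Toy simulation of a staggered (consecutive) rollout across comparable units.
# Units, schedule, and effect size are hypothetical; no noise is modeled.
def outcome(unit_start, t, baseline=10.0, effect=2.0):
    """Outcome for a unit treated from period unit_start, observed at period t."""
    return baseline + (effect if t >= unit_start else 0.0)

rollout = {"subsidiary_A": 1, "subsidiary_B": 3, "subsidiary_C": 5}
series = {u: [outcome(start, t) for t in range(7)]
          for u, start in rollout.items()}

# The cause-effect case is stronger when each unit's outcome shifts only
# at its own introduction point, not at the others'.
shift_points = {u: next(t for t, y in enumerate(ys) if y > ys[0])
                for u, ys in series.items()}
```

In this idealized case the detected shift points coincide exactly with the rollout schedule; with real data the comparison would be statistical rather than exact.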

Experiments and multiple methods

The nature of international business questions very often means considering business problems in rich multicountry or multicultural contexts. Satisfying the competing desiderata of strong evidence in these contexts may require multiple methods. As part of a multi-method approach, experiments present the opportunity to make a stronger case for internal validity in studies set in a context rich in generalizability. For example, extracting the essential elements of the relationship between key variables in a correlational study and bringing them into a controlled environment can demonstrate the robustness of the relationship. That is, combining an experiment with a survey offers evidence of both internal and external validity (Scandura & Williams, 2000). For example, Grinstein and Riefler’s (2015) JIBS article reports the results of four studies, combining correlational studies and experiments conducted in three countries, to show that highly cosmopolitan consumers demonstrate environmental concern and engage in sustainable behavior. When possible, the results of an experimental design could be coupled with a correlational design: the former provides evidence of causality while the latter addresses threats to external validity.

Thinking experimentally

Perhaps the approach with the most universal applicability to IB is to think experimentally. In evaluating research and reflecting on our own research designs we can benefit from thinking experimentally, even if we are unable to implement true experimental designs. Thinking experimentally involves, among other things, critical thinking to rule out plausible alternatives, better understanding of our theoretical constructs by considering the research context, and thoughtful effort to enhance conclusions about covariation, causal order, and alternative explanations through research design.

Thinking experimentally can help researchers better understand the nature of their constructs by separating the function of the construct from the context in which it is embedded. The window in someone’s living quarters provides an example. In an American farmhouse we might find several small openings in the walls consisting of segmented frames, while a traditional Native American tepee has an opening only at the top, and a medieval castle has a number of very small trapezoidal openings. It can be argued that each of these openings is unique, having developed within its specific context. The tepee opening is basically a vent for fresh air, the farmhouse window also acts as a viewing port, and the castle window serves both these functions but is designed as a defensive structure. However, if we focus on their function (letting in light, ventilation) we discover their similarity. By thinking experimentally we evaluate constructs in terms of their relationships to other constructs, while controlling for the effects of specific contexts. Because international business research is typically embedded in different societal contexts, it is essential that the universal vs country- or culture-specific aspects of constructs are clearly specified, no matter what method is applied.

Thinking experimentally also helps us to answer the questions of whether a study’s results are an artifact of the sample or sampling approach, and whether the change in the dependent variable can be explained by endogenous variables beyond those proposed in the theory (Reeb, Sakakibara, & Mahmood, 2012). In the peer review process, these are often labeled fatal flaws: something other than the proposed theory is explaining the results present in the data. An example provided by Reeb et al. (2012) concerns how firm-level internationalization affects corporate decision-making. As they note, an experimental design would assign firms randomly to conditions such as multinational and domestic, and then evaluate decision-making. In practice, firms are not randomly distributed across levels of internationalization, and the threat to causal conclusions about the relationship between level of internationalization and corporate decision-making is that both internationalization and decision-making could be driven by a third, unobserved cause. In such a case, we say that internationalization is endogenous. Another way to look at this is to see the independent variable as a “non-random treatment.” The presence of endogeneity creates inconsistent regression estimates and biased test statistics (Wooldridge, 2010). A full consideration of approaches to address endogeneity is beyond the scope of this article (see Reeb et al., 2012 for more discussion); however, from the standpoint of encouraging experimental thinking, it is important for scholars both to think through and to apply available tests to evaluate the risks of endogeneity to causal inference and statistical conclusion validity in their studies.
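The bias from a non-random treatment can be seen in a toy simulation. Here a hypothetical unobserved factor drives both the "treatment" (say, internationalization) and the outcome, so the naive regression slope on observational data is inflated well above the true effect, while the slope under random assignment recovers it. All variables, effect sizes, and the data-generating process are invented for illustration.

```python
# Toy illustration of endogeneity: an unobserved factor z drives both the
# treatment x and the outcome y, biasing the observational regression slope.
# All variables and coefficients are hypothetical.
import random

random.seed(0)

def slope(xs, ys):
    """Simple bivariate OLS slope: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

true_effect = 1.0
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]  # unobserved third cause

# Observational data: x depends on z (a "non-random treatment")
x_obs = [zi + random.gauss(0, 1) for zi in z]
y_obs = [true_effect * x + 2.0 * zi + random.gauss(0, 1)
         for x, zi in zip(x_obs, z)]

# Experimental data: x assigned at random, independent of z
x_exp = [random.gauss(0, 1) for _ in range(n)]
y_exp = [true_effect * x + 2.0 * zi + random.gauss(0, 1)
         for x, zi in zip(x_exp, z)]

biased = slope(x_obs, y_obs)      # inflated by the confounder z
unbiased = slope(x_exp, y_exp)    # close to the true effect of 1.0
```

Under this data-generating process the observational slope converges to roughly twice the true effect, while random assignment breaks the link between treatment and the unobserved cause, which is precisely the argument for experiments (or, failing that, for endogeneity tests and instruments).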

Experimental thinking can help in evaluating evidence for cause and effect, and can reinforce design choices and analytical approaches that allow stronger causal tests when true experiments are not possible. Analytical approaches such as identifying natural experiments (see Choudhury & Khanna, 2014 for a recent example of a natural experiment in JIBS; Khanna, 2009), instrumental variables regression, regression discontinuity, and matched sample designs can aid in such efforts. However, see Thomas, Cuervo-Cazurra, and Brannen (2011) as a caution against substituting analytical crutches for critical thinking.

Case studies as natural experiments

Finally, experimental thinking can be applied in qualitative research. For instance, case studies, often regarded for their utility in inducing new theory from empirical data (Eisenhardt, 1989), can be a vehicle for experimental thinking. Yin (2013) suggests that case studies are best suited to addressing “how” and “why” questions because of their high degree of internal validity. Welch, Piekkari, and Plakoyiannaki (2011) reiterate this position by suggesting that the case study is a natural experiment. Considering the multiple and complex causal links often revealed in a case is an example of thinking experimentally, even if the case itself is not used to explain these relationships. For instance, Wong and Ellis published a paper in JIBS (2002) in which they applied an “experiment-like replication logic” to select cases for inclusion.

These examples offer some insight into how thinking experimentally can lead us to conceptual, design, and analytical approaches that improve the internal validity of our research when true experiments are not possible. The need to think experimentally extends beyond academe. For example, Davenport’s (2009) Harvard Business Review article, “How to Design Smart Business Experiments,” highlights the many ways organizations are investing in training and software to conduct experiments before making strategic decisions in marketing, advertising, operations, and the like. Across a range of organizations and functional areas, “these organizations are finding out whether supposedly better ways of doing business are actually better. Once they learn from their tests, they can spread confirmed better practices throughout their business” (Davenport, 2009).

Future of Experimentation in JIBS

We emphasize that JIBS does not privilege one type of research (i.e., experiments) over others. In fact, our current editorial team has led the way to encourage more qualitative studies in IB and JIBS alike (Birkinshaw, Brannen, & Tung, 2011). As stated in the JIBS editorial policy:

JIBS is a methodologically pluralistic journal. Quantitative and qualitative research methodologies are both encouraged, as long as the studies are methodologically rigorous. Conceptual and theory-development papers, empirical hypothesis-testing papers, and case-based studies are all welcome. Mathematical modeling papers are welcome if the modeling is appropriate and the intuition explained carefully.

This statement clearly indicates that experiments are welcome at JIBS. The impact of such a statement is reduced, however, if potential contributors look through the journal and see little evidence that this methodological pluralism extends to experiments. One of the evaluation criteria applied to manuscripts is appropriateness for JIBS. We hope that our essay highlights that experiments are indeed appropriate for JIBS, and dispels any view among potential contributors that there is a bias against experiments that might discourage them from submitting their research to the journal.

Experiments are just one arrow in our quiver of methods, and any method chosen needs to be appropriate for the research question(s) pursued. No single method can address all the necessary elements of validity (internal, construct, external, and statistical conclusion). We encourage mixed methods. Ultimately, improving our justification for the methods chosen, and our description of the evidence and limitations produced by those methods, will add value to the IB research published in JIBS. We encourage more experimental research, where appropriate. Experiments are notably underrepresented in JIBS, and they offer an opportunity to improve the evidence for causal relationships in international business research.