From the Editors: Experimental designs in international business research
The question that motivated this editorial was, “where are the IB experiments?” The short answer is that experiments are largely absent from the IB literature; and we argue that they shouldn’t be. Experimental methods offer the opportunity to significantly improve the evidence for the causal relationships in international business research in a variety of ways. In this article we highlight the value and limitations of experiments in IB research, and explain the basic tenets of experimental design and thinking with the goal of encouraging the submission of more papers with an experimental design to JIBS.
Keywords: experimental methods; international business research; research design
The goal of JIBS is to publish insightful, innovative and impactful research on international business.
The opening quote, taken from the “JIBS statement of editorial policy,” sets high expectations for articles published in JIBS. Making a theoretical contribution is central, and the evidence available supporting the causal relationship(s) proposed in the theory is a significant component when judging the level of insight and impact the study’s conclusions warrant. Causal ambiguity can be a significant limitation at best, and is often a fatal flaw in research studies. Evidence for causal relationships should be a concern for IB scholars because our theories, and the empirical evidence supporting them, are used to advise founders, leaders, organizational members, and other stakeholders about policies, interventions, strategies and more, that “have a profound impact on the lives and wellbeing of people all over the world” (Rousseau, 2006; Rynes, Rousseau, & Barends, 2014: 319). Offering such advice becomes uncomfortable when the evidence supporting any given theory is limited or suspect. In this article we point out the limited application of experimental designs in IB research, highlight the value (and limitations) of experimental methods for IB research, provide a reminder of the basic tenets of experimental design, and encourage more papers using experimental designs to be submitted to JIBS. The purpose of this essay is to restate the fact that JIBS welcomes (and encourages) experimental research and reinforce the view that IB research could benefit from more widespread application of experimental methods, when possible, to evaluate internal validity of IB theories. We draw on recent examples to illustrate how experimental methods can be used to make strong theoretical contributions to IB, and we suggest that by thinking experimentally both JIBS authors and reviewers can better evaluate the origins of constructs and the robustness of the causal relationships in theory. 
Our goal is to highlight one avenue to elevate the quality of evidence we accumulate for IB theories.
Research Methods and Evidence
Choices about research methods have important implications for the accumulation of IB knowledge. There are many ways to accumulate strong evidence, for example through meta-analysis of large numbers of high-quality correlational studies. However, because they provide the only unequivocal method for demonstrating causality, many consider random-assignment, controlled experiments the gold standard for evidence (Pfeffer & Sutton, 2006). Controlled experiments isolate causal variables and enable a strong test of the robustness of a theory: they provide convincing evidence for theories, especially when followed by field studies. In describing the usefulness of experimentation in economics research, Croson, Anand, and Agarwal (2007: 176) noted that experiments can “be designed to capture what researchers believe are the relevant dimensions of the field and to replicate the regularity in controlled conditions. Then, one by one, the real-world features can be eliminated (or relaxed or changed) until the regularity observed disappears or significantly weakens. This exercise identifies the cause (or causes) of the originally observed regularity, and can result in theory construction.”
There is considerable debate about the usefulness of ranking the quality of evidence in management research (e.g., Learmonth & Harding, 2006; Morrell, 2012). Our intention here is not to argue for the pre-eminence of one method over another, but to highlight the importance of method choice, and the need for so-called triangulation of methods (McGrath, 1982) to evaluate internal, external, construct, and statistical conclusion validity of our work (Cook & Campbell, 1976). For readers without a deep grounding in experimental methods we summarize the basics in Box 1.
Basics of Experiments and Quasi-Experiments
Experimentation offers strong tests of internal validity. Internal validity concerns causality (Cook & Campbell, 1976: Chapter 1) – the assessment of a cause and effect relationship between two variables. For causation to be determined there must be (1) true covariation between the two variables, (2) demonstration that the cause preceded the effect in time, and (3) elimination of alternative explanations (Sackett & Larson, 1990; Schwab, 2013). The ability of experiments to provide such strong inference (Platt, 1964) varies based on design and execution.
The hallmarks of experiments are manipulation of independent variables or trials, which are potential causes, and control of extraneous variables (Cook & Campbell, 1976: Chapter 1). Thus to be able to conduct an experimental test of a theory, the researcher must be able to control (manipulate) the level(s) of the independent variable under study, or the independent variable must vary due to an exogenous event outside the control or influence of the cases studied. Manipulation (or exogeneity) is necessary to establish covariation and demonstrate that the cause preceded the effect – that is, the dependent variable cannot be responsible for variation in the independent variable. Control is required to establish covariation and rule out alternative explanations. True experiments involve independent variable control through random assignment of cases to treatment conditions, and the true experiment is typically considered the only research method that can assess a cause and effect relationship.
Random assignment controls for unobserved (extraneous) variables by equalizing treatment groups in order to rule out alternative explanations and enhance the determination of covariation between two variables. Researchers decide which cases are assigned to a condition using a random number table or generator, a roll of a die, or a comparable technique. Random assignment serves to average out the effect of nuisance variables across cases in a study. This means that because the choice to assign a case to a condition lacks any particular pattern (it is random), unmeasured variables should not be meaningfully correlated with the independent or treatment variable. It is particularly beneficial over other means of statistical control used in non-experimental work because researchers do not need to measure the nuisance variables, and it controls such nuisance variables “whether or not the researchers are aware of them” (Schwab, 2013: 87). Random assignment becomes more effective at averaging out the effects of nuisance variables as sample sizes become larger.
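The balancing logic of random assignment can be illustrated with a small simulation (a hypothetical sketch in Python; the variable names and distributions are our own illustration, not drawn from any study). Each case carries an unmeasured nuisance variable; cases are randomly assigned to conditions; and the between-group difference on the nuisance variable tends to shrink as the sample grows, even though the nuisance variable is never measured by the "researcher":

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def assignment_imbalance(n):
    """Simulate n cases with an unmeasured nuisance variable, randomly
    assign each to treatment or control, and return the absolute
    difference in nuisance-variable means between the two groups."""
    nuisance = [random.gauss(0, 1) for _ in range(n)]
    groups = [random.choice(("treatment", "control")) for _ in range(n)]

    def group_mean(condition):
        values = [x for x, g in zip(nuisance, groups) if g == condition]
        return sum(values) / len(values)

    return abs(group_mean("treatment") - group_mean("control"))

# On average, imbalance on the unmeasured variable shrinks as n grows,
# which is why random assignment controls nuisance variables "whether or
# not the researchers are aware of them."
small_sample = assignment_imbalance(20)
large_sample = assignment_imbalance(20000)
print(small_sample, large_sample)
```

Note that a single small-sample draw can still be badly imbalanced, which is exactly the selection threat discussed below; the guarantee is statistical, not case-by-case.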
Limits of experimentation
Experiments, even when involving manipulation of the treatment and random assignment to conditions, remain vulnerable to a variety of threats to internal validity. For instance, despite random assignment across treatment conditions, it is possible that the groups obtained had pre-existing differences (which, as noted previously, can result from random assignment with small sample sizes) on a quality that systematically altered the response of one group to the treatment as compared with the other. In this case, the observed XY relationship was spuriously generated by the unobserved difference between the groups. This problem, known as a selection threat, can be evaluated (and ruled out) by implementing a research design with a pretest of the dependent variable. Likewise, there are times when laboratory results can be misleading. For instance, lab studies bring participants into an artificial setting and may seriously reduce realism and limit generalizability of the results (Meltzoff, 1998). Field experiments also involve the manipulation of one or more independent variables but are conducted in a realistic or natural situation, with conditions controlled as well as the situation permits. While the more realistic context of these types of experiments improves the external validity of the results, they have many of the same limitations as studies conducted in laboratory environments.
Another limitation of all experiments is that the observed covariation between independent and dependent variables may be disturbed by the research environment itself, such as demand characteristics or researcher expectations. It is beyond the scope of this editorial to describe the numerous experimental designs created to handle various threats, but when planning and designing experiments extreme care must be taken to choose and correctly implement the design elements that best manage potential threats to internal validity.
Beyond limits to internal validity, experiments also face threats to external validity. External validity concerns the generalizability of research results. The artificial qualities of an experiment often involve an unrealistic setting, and results obtained in such a controlled and different environment may not accurately reflect the relationship in an organizational setting. While this critique may be valid under some circumstances, at least one study, utilizing data from many meta-analyses, compared the results of research done in laboratory experiments with research done in field settings. The researchers discovered that the effect sizes from the laboratory studies correlated 0.73 with the effect sizes from the field studies (Anderson, Lindsay, & Bushman, 1999). These results suggest that the external validity problem may not always be as significant as some might automatically think; however, as Colquitt (2008) noted, the correspondence between results of field studies and experiments probably varies across research streams and theories. Experiments may not be well suited for studying “complex, multicomponent, nonlinear” phenomena (Rynes et al., 2014: 313). To determine the appropriateness of an experimental design, scholars must think through the ways in which constructs may vary between the contrived setting of the laboratory and an organizational, industry, or country context, for example.
There are practical realities in designing experiments. Not all independent variables can be randomly assigned to conditions. For example, if a researcher wanted to understand the effect of pre-departure training on a short-term international assignment for R&D engineers being sent to help set up a new lab in another part of the world, the organization may be unwilling to randomly assign engineers to different training conditions. We can assign individuals, teams, units, and even organizations to conditions, but we cannot assign countries to conditions. And it may be impossible, impractical, or unethical to withhold an independent variable. In such circumstances, researchers may be able to conduct a quasi-experiment.
Quasi-experiments (sometimes called natural experiments) are often thought of as a special case of field experiments. Like true experiments, quasi-experiments involve research where the independent variable (treatment) is not determined or controlled by the cases being studied. In a quasi-experiment, the independent variable might (1) be controlled by the experimenter, (2) be produced by an exogenous event, or (3) vary exogenously across groups. The key distinction between true experiments and quasi-experiments is that in quasi-experiments assignment to the treatment condition is not random. In quasi-experiments, researchers rely on design features other than random assignment to improve evidence of internal validity (Schwab, 2013).
Experimentation in IB Research
Compared with other management areas, experiments in IB, and correspondingly in JIBS, are rare. A search of the past twenty years of empirical publications in JIBS illustrates this point. Of 900 empirical research articles published, a mere eight (less than 1%) used an experimental design. In these eight papers the topics ranged across marketing/advertising and consumer behavior, sales communication, venture capitalists’ (VCs’) decision-making, cultural differences in decision-making, and empowerment and job satisfaction. Furthermore, most of these studies would be considered quasi-experiments rather than true experiments because of their sampling approaches.
There may be a number of reasons for the paucity of true experimental studies, including that much of the research published in JIBS is multidisciplinary and examines macro, often long-term phenomena. Researchers studying many IB phenomena cannot engage in random assignment into experimental or control groups; assigning countries to political economies, companies to globalization strategies, or countries of origin to individuals would be impossible, to say the least. In cases like these, we do not have IB experiments because random assignment is not possible. Yet while not right for all areas of IB, there are some IB theories and applications that would benefit from experimental designs, from quasi-experimental designs, and from experimental thinking.
Another reason we may see few experiments published in JIBS is the nature of the samples used in many experimental studies. In other disciplines, such as psychology, experiments are commonly conducted using student samples, and there may be a false perception among authors that JIBS does not publish research using student samples. For some IB topics, such as the study of cross-cultural perceptions of advertising strategies or global career choices, student samples may be quite appropriate (Bello et al., 2009). The choice of sample alone should not discourage IB scholars from using the experimental method. The real question should be, as with all other studies, whether the results found from a given sample can generalize to the broader population.
Beyond the issue of student samples, a key challenge for IB scholars is recruiting participants from varying cultural and institutional backgrounds and conducting experiments with people from different geographic locations. For many questions, it may be important for participants to be located in their home context – for example, a researcher may wish to evaluate a question about differences between Chinese participants and German participants, but the expected effects may vary importantly if the study were conducted in the United States vs conducting the experiment with Chinese participants in China and with Germans in Germany. The cognitive and behavioral manifestations and effects of culture vary based on one’s location and associated contextual cues (e.g., Chiu, Gelfand, Yamagishi, Shteynberg, & Wan, 2010; Hong, Morris, Chiu, & Benet-Martinez, 2000). For instance, the effects of requiring English as the common language of communication may look quite different if the experiment were conducted in the US vs with the participants located in their native language environments (e.g., Ji, Zhang, & Nisbett, 2004). While participant recruiting and location remain barriers to IB experiments, advances in technology, combined with creative study designs, allow new ways to conduct international experiments. For example, subjects can be recruited globally, and experiments can be conducted virtually – through synchronous interaction (e.g., Jang, 2014). These developments allow experimental applications in areas previously not considered.
Exemplars in IB research
While experimental designs are somewhat rare in IB research, and particularly in JIBS, there are subfields within IB where experiments are more common, especially those examining individual-level or team-level outcomes. These areas are good places to start when looking for how experiments can be conducted on IB topics. Consider the following examples from international business subfields where controlled experiments have made contributions that would otherwise have been impossible or where other methods would have provided weaker evidence.
Marketing, and by extension international marketing, has a long history of experimental research, including areas such as consumer behavior and advertising (e.g., Bazerman, 2001; Peterson, Albaum, & Beltramini, 1985; Torelli, Monga, & Kaikati, 2012), decisions about retail store environments (e.g., Baker, Levy, & Grewal, 1992), sales and influence tactics (e.g., Busch & Wilson, 1976; Griskevicius, Goldstein, Mortensen, Sundie, Cialdini, & Kenrick, 2009), and even partner satisfaction in marketing alliances (e.g., Shamdasani & Sheth, 1995). Of the few experimental studies published in JIBS, marketing has been a key focus. For instance, in a 1999 JIBS article, Chanthika Pornpitakpan used an experimental design to conclude that Americans who adapt to Thai and Japanese language and behavioral norms will have more positive sales outcomes when working in those national contexts (Pornpitakpan, 1999). More recently, Wan and colleagues (Wan, Luk, & Chow, 2014) published a paper based on an experiment in six Chinese cities examining consumer responses to sexual advertising, and found support for their expectation that men’s responses to nudity in advertising were less affected by modernization than were women’s responses. When addressing potentially sensitive topics such as sexuality and arousal, norms for which are likely to vary importantly across cultures, survey methods may be biased (Hui & Triandis, 1985). Under such conditions, experimental manipulation may provide much clearer results. Such potential for response bias is common in IB research (Hui & Triandis, 1985). Furthermore, intersubjective theories of culture indicate that people may not act in accordance with stated values or beliefs (Chiu et al., 2010; Hong et al., 2000). As a result, insights such as those found in Wan and colleagues’ work may not be possible without an experimental design.
Together these examples illustrate experimental research published in JIBS, and the rich tradition of experimental research in marketing suggests an opportunity for more such studies to be submitted to JIBS.
Experimental research is very common in many subfields of management. There is a particularly strong tradition of experimental research in cross-cultural management, cross-cultural psychology, and cross-cultural HRM, with many experimental studies conducted each year. However, few of these studies are finding their way to JIBS. One example from international management, identified in our literature review, is an experiment published in JIBS (Zacharakis, McMullen, & Shepherd, 2007) that used a policy-capturing design involving 119 VCs in three countries to examine the influence of economic institutions on VC decision-making. Each VC was provided with a randomized set of 50 ventures and evaluated them based on eight decision factors. Results indicated cross-national differences in the type of information emphasized in investment decisions.
Another example of experimental work in this domain, though one that did not appear in JIBS, is by Caligiuri and Phillips (2003), who, in the context of expatriate candidates’ decisions about whether to accept an international assignment, conducted a true experiment, randomly assigning actual candidates for expatriate assignments to either receive or not receive a self-assessment decision-making tool. Compared with the control group, participants receiving a realistic job preview (RJP) and self-assessment tool reported greater self-efficacy for success on the international assignment and had greater confidence in their decision to accept an international assignment. If this study had employed a correlational design, one which examined the level of use of the tool against the outcome measures, the authors could only have inferred the influence of the tool itself. With that type of design, there would have been an alternative explanation for the results: the expatriate candidates with higher efficacy (in the first place) might have been those more likely to seek out and use the tool. Random assignment and the pretest allowed the researchers to disentangle baseline individual differences and isolate causality: the RJP self-assessment tool caused the expatriates to have greater efficacy and greater confidence in their ability to make an effective decision about an international assignment. We encourage international management scholars to consider experimental approaches, and those who are conducting work of this type to consider JIBS as an outlet.
Economics scholars have increasingly embraced experimental methods, and this has extended to research in international economics (e.g., Hey, 1998; Roth, 1995). Again, however, little of this work has found its way to JIBS. Most applications of experiments within economics have been to test theories of individual choice, game theory, and the organization and functioning of markets (Buchan, 2003). An example is a study by Roth and colleagues (Roth, Prasnikar, Okuno-Fujiwara, & Zamir, 1991) in which they conducted a multicountry comparison of bargaining behavior. The authors compared two-party bargaining games with multiparty bargaining games in four countries. The observed differences were interpreted as illustrating cross-country differences in what is seen as an acceptable offer. In a follow-up study, Buchan and her colleagues (Buchan, Johnson, & Croson, 2006) manipulated the balance of power among players in the bargaining games, and found that power had a more differential influence on offers made in Japan than in the US. These two examples illustrate a history of experimentation in international economics, at least in the areas of individual choice and game theory. Experimental approaches to cultural differences in ultimatum games have proliferated enough to allow meta-analyses (Oosterbeek, Sloof, & Van De Kuilen, 2004). Finally, with respect to understanding markets, experimental international economics research has also been proposed as beneficial for testing proposed interventional policies – in the hope that conducting experiments can reveal unintended consequences of policy decisions (Camerer, 2003; Friedman & Sunder, 1994).
Opportunities for Experimental Approaches in IB
One obvious opportunity for additional experimental studies in IB is to encourage more experimental work in the three domains identified above. However, in keeping with improving the evidence for causal relationships more broadly across IB, there are several other opportunities to apply experimental approaches. These are the control of possible alternative causes, designing longitudinal field experiments, employing experiments as part of a multiple methods approach, and thinking experimentally.
Controlling nuisance variables
A number of experimental design methods can be used to control nuisance variables, or alternative causal explanations. Two examples are matching, and identifying comparable groups of cases and then varying the treatment level across the groups. Matching equates cases on a number of dimensions (determined by theory). Done correctly, matching on nuisance variables can strongly rule out alternative causal explanations. For example, Earley (1989) matched an American sample to a harder-to-obtain sample from the P.R.C. on age, gender, job tenure, education, job duties, and career aspirations in a study of the effect of social loafing across cultures. Matching can be effective but becomes difficult to execute correctly as the number of cases required increases and/or as the number of nuisance variables needing control increases (Schwab, 2013). An alternative method of control involves identifying comparable groups of cases for each level of the independent variable. A recent example is provided in the previously mentioned JIBS article by Wan et al. (2014), which tests the effect of modernization on consumer responses to advertising. In that study, several Chinese cities at various stages of modernization served as the independent variable, while the context of a single country controlled for a wide range of societal-level variables. Similarly, Meyer and Peng (2005) suggest that changes in Central and Eastern Europe since the 1990s provide unique societal quasi-experiments that offer an opportunity to test the applicability of existing theories and develop new ones in the areas of (1) organizational economics, (2) resource-based theories, and (3) institutional theories.
Longitudinal field experiments
Longitudinal field experiments are another quasi-experimental method that could work well in international business research. Once pretest or baseline levels of the outcome measure are established, the independent variable is introduced – consecutively – to comparable units. At each point in time, change in the outcome is assessed. If the independent variable affects the dependent variable only at the point in time when it is introduced to the group, a stronger cause-effect case can be made. This design is desirable because organizations often run pilot programs in certain subsidiaries or introduce practices across subsidiaries consecutively (rather than introducing a new technology platform or training practice organization-wide). While desirable because it is naturally occurring, this longitudinal field design can present a problem if the groups being compared are not similar from the onset, or if some other concurrent occurrence, such as a currency fluctuation or a geopolitical crisis, affects the outcomes tested over time.
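The staggered-introduction logic above can be sketched in a few lines of Python (a hypothetical illustration with made-up subsidiary names, periods, and effect sizes, not data from any study). Each of three comparable subsidiaries adopts a practice in a different period, and the causal case is strengthened if each unit's outcome shifts only from its own adoption period onward:

```python
# Hypothetical staggered rollout: each subsidiary adopts the new practice
# in a different period (all names and numbers are illustrative).
ADOPTION_PERIOD = {"subsidiary_A": 2, "subsidiary_B": 4, "subsidiary_C": 6}
BASELINE = 10.0          # pretest level of the outcome measure
TREATMENT_EFFECT = 3.0   # assumed shift once the practice is adopted
PERIODS = 8

def simulated_outcome(unit, period):
    """Outcome stays at baseline until the unit adopts, then shifts."""
    adopted = period >= ADOPTION_PERIOD[unit]
    return BASELINE + (TREATMENT_EFFECT if adopted else 0.0)

# The cause-effect case is stronger if each unit's outcome changes only
# at its own adoption period, not when the other units adopt.
for unit in ADOPTION_PERIOD:
    change_points = [p for p in range(1, PERIODS)
                     if simulated_outcome(unit, p) != simulated_outcome(unit, p - 1)]
    print(unit, change_points)
```

In a real study the outcome would be noisy and the comparison would be statistical rather than exact, but the diagnostic pattern (change coinciding with each unit's own introduction) is the same.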
Experiments and multiple methods
The nature of international business questions very often means considering business problems in rich multicountry or multicultural contexts. Satisfying the competing desiderata of strong evidence in these contexts may require multiple methods. As part of a multimethod approach, experiments present the opportunity to make a stronger case for internal validity for studies set in a context that is rich in generalizability. For example, extracting the essential elements of the relationship between key variables in a correlational study and bringing them into a controlled environment can demonstrate the robustness of the relationship. That is, combining an experiment with a survey offers evidence of both internal and external validity (Scandura & Williams, 2000). For example, Grinstein and Riefler’s (2015) JIBS article reports the results of four studies, combining correlational studies and experiments conducted in three countries, to show that highly cosmopolitan consumers demonstrate environmental concern and engage in sustainable behavior. When possible, the results of an experimental design could be coupled with a correlational design: the former provides evidence of causality, while the latter addresses threats to external validity.
Perhaps the approach with the most universal applicability to IB is to think experimentally. In evaluating research and reflecting on our own research designs we can benefit from thinking experimentally, even if we are unable to implement true experimental designs. Thinking experimentally involves, among other things, critical thinking to rule out plausible alternatives, better understanding of our theoretical constructs by considering the research context, and thoughtful effort to enhance conclusions about covariation, causal order, and alternative explanations through research design.
Thinking experimentally can help researchers better understand the nature of their constructs by separating the function of the construct from the context in which it is embedded. The window in someone’s living quarters provides an example.1 In an American farmhouse we might find several small openings in the walls consisting of segmented frames, while in a traditional Native American tepee there is an opening only at the top, and in a medieval castle there are a number of very small trapezoidal orifices. It can be argued that each of these openings is unique, having developed within its specific context. The tepee opening is basically a vent for fresh air, while the farmhouse window also acts as a viewing port, and the castle window serves both these functions but is designed as a defensive structure. However, if we focus on their function (letting in light, ventilation) we discover their similarity. By thinking experimentally we evaluate constructs in terms of their relationships to other constructs, while controlling for the effects of specific contexts. Because international business research is typically embedded in different societal contexts, it is essential that the universal vs country- or culture-specific aspects of constructs are clearly specified, no matter what method is applied.
Thinking experimentally also helps us to answer the questions of whether a study’s results are an artifact of the sample or sampling approach, and whether the change in the dependent variable can be explained by endogenous variables beyond those proposed in the theory (Reeb, Sakakibara, & Mahmood, 2012). In the peer review process, these are often labeled fatal flaws – something other than the proposed theory is explaining the results present in the data. An example provided by Reeb et al. (2012) concerns how firm-level internationalization affects corporate decision-making. As they note, an experimental design would assign firms randomly to conditions such as multinational and domestic, and then evaluate decision-making. In practice, firms are not randomly distributed across levels of internationalization, and the threat to any causal conclusion about the relationship between level of internationalization and corporate decision-making is that both internationalization and decision-making could be driven by a third, unobserved cause. In such a case, we say that internationalization is endogenous. Another way to look at this is to see the independent variable as a “non-random treatment.” The presence of endogeneity creates inconsistent regression estimates and biased test statistics (Wooldridge, 2010). A consideration of approaches to address endogeneity is beyond the scope of this article (see Reeb et al., 2012 for more discussion); however, from the standpoint of encouraging experimental thinking, it is important for scholars both to think through and to apply available tests to evaluate the risks of endogeneity to causal inference and statistical conclusion validity in their studies.
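The bias that endogeneity introduces can be made concrete with a small simulation (a hypothetical Python sketch; the variable names, the zero true effect, and the confounding structure are our own illustration, not Reeb et al.'s analysis). An unobserved factor drives both the "treatment" and the outcome, so a naive regression finds a substantial slope even though the true causal effect is zero:

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Unobserved confounder (e.g., a hypothetical "managerial quality") drives
# both internationalization and decision quality. The TRUE causal effect of
# internationalization on decision quality here is zero by construction.
n = 50_000
confounder = [random.gauss(0, 1) for _ in range(n)]
internationalization = [c + random.gauss(0, 1) for c in confounder]  # endogenous
decision_quality = [c + random.gauss(0, 1) for c in confounder]      # no direct effect

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    mean_x, mean_y = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    return cov / var

naive_slope = ols_slope(internationalization, decision_quality)
print(round(naive_slope, 2))  # near 0.5, despite a true causal effect of 0.0
```

With this confounding structure the naive slope converges to 0.5 rather than 0.0, which is the sense in which a non-random treatment yields inconsistent regression estimates; randomly assigning the treatment would break its link to the confounder and drive the slope toward zero.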
Experimental thinking can help in evaluating evidence for cause and effect, and can reinforce design choices and analytical approaches that allow stronger causal tests when true experiments are not possible. Analytical approaches such as identifying natural experiments (see Choudhury & Khanna, 2014 for a recent example of a natural experiment application in JIBS; Khanna, 2009), instrumental variables regression, regression discontinuity, and matched sample designs can aid in such efforts. However, see Thomas, Cuervo-Cazurra, and Brannen (2011) as a caution against substituting analytical crutches for critical thinking.
Case studies as natural experiments
Finally, experimental thinking can be applied in qualitative research. For instance, case studies, often regarded for their utility in inducing new theory from empirical data (Eisenhardt, 1989), can be a vehicle for experimental thinking. Yin (2013) suggests that case studies are best suited to addressing how and why questions because of their high degree of internal validity. Welch, Piekkari, and Plakoyiannaki (2011) reiterate this position by suggesting the case study is a natural experiment. Considering the multiple and complex causal links often revealed in a case is an example of thinking experimentally, even if the case itself is not used to explain these relationships. For instance, Wong and Ellis (2002) published a paper in JIBS in which they applied an “experiment-like replication logic” to select cases for inclusion.
These examples offer some insight into how thinking experimentally can lead us to conceptual, design, and analytical approaches that improve the internal validity of our research when true experiments are not possible. The need to think experimentally extends beyond academe. For example, Davenport’s (2009) Harvard Business Review article, “How to Design Smart Business Experiments,” highlights the many ways organizations are investing in training and software to conduct experiments before making strategic decisions in marketing, advertising, operations, and the like. Across a range of organizations and functional areas, “these organizations are finding out whether supposedly better ways of doing business are actually better. Once they learn from their tests, they can spread confirmed better practices throughout their business” (Davenport, 2009).
Future of Experimentation in JIBS
JIBS is a methodologically pluralistic journal. Quantitative and qualitative research methodologies are both encouraged, as long as the studies are methodologically rigorous. Conceptual and theory-development papers, empirical hypothesis-testing papers, and case-based studies are all welcome. Mathematical modeling papers are welcome if the modeling is appropriate and the intuition explained carefully.
This statement clearly indicates that experiments are welcome at JIBS. The impact of such a statement is reduced, however, if potential contributors look through the journal and see little evidence that this methodological pluralism extends to experiments. One of the evaluation criteria applied to manuscripts is appropriateness for JIBS. We hope that our essay highlights that experiments are indeed appropriate for JIBS, and dispels any perception of a bias against experiments that may discourage potential contributors from submitting their research to the journal.
Experiments are just one arrow in our quiver of methods, and any method chosen needs to be appropriate for the research question(s) pursued. No single method can address all the necessary elements of validity (internal, construct, external, statistical conclusion). We encourage mixed methods. Ultimately, improving our justification of chosen methods, and our description of the evidence and limitations those methods produce, will add value to the IB research published in JIBS. We encourage more experimental research, where appropriate. Experiments are notably underrepresented in JIBS, and they offer an opportunity to improve the evidence for causal relationships in international business research.
- Baker, J., Levy, M., & Grewal, D. 1992. An experimental approach to making retail store environmental decisions. Journal of Retailing, 68 (4): 445.
- Buchan, N. R. 2003. Experimental economic approaches to international marketing research. In Handbook of research in international marketing: 190–208. Cheltenham: Edward Elgar Publishing.
- Camerer, C. 2003. Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton University Press.
- Colquitt, J. A. 2008. From the editors: Publishing laboratory research in AMJ: A question of when, not if. Academy of Management Journal, 51 (4): 616–620.
- Cook, T. D., & Campbell, D. T. 1976. The design and conduct of quasi-experiments and true experiments in field settings. In M. D. Dunnette (Ed), Handbook of industrial and organizational psychology: 223–336. Chicago, IL: Rand-McNally.
- Davenport, T. 2009. Make better decisions. Harvard Business Review, 87 (11): 117–123.
- Earley, P. C., & Mosakowski, E. 1996. Experimental international management research. In B. J. Punnett, & O. Shenkar (Eds), Handbook of international management research: 83–114. London: Blackwell Publishers.
- Eisenhardt, K. M. 1989. Building theories from case study research. Academy of Management Review, 14 (4): 532–550.
- Jang, S. 2014. Bringing worlds together: Cultural brokerage in multicultural teams. Doctoral dissertation, Harvard University.
- McGrath, J. 1982. Dilemmatics: The study of research choices and dilemmas. In J. E. McGrath, J. Martin, & R. A. Kulka (Eds), Judgment calls in research: 69–102. Newbury Park, CA: Sage.
- Meltzoff, J. 1998. Critical thinking about research: Psychology and related fields. Washington, DC: American Psychological Association.
- Pfeffer, J., & Sutton, R. I. 2006. Hard facts, dangerous half-truths, and total nonsense: Profiting from evidence-based management. Cambridge, MA: Harvard Business Press.
- Roth, A. E., Prasnikar, V., Okuno-Fujiwara, M., & Zamir, S. 1991. Bargaining and market behavior in Jerusalem, Ljubljana, Pittsburgh, and Tokyo: An experimental study. The American Economic Review, 81 (5): 1068–1095.
- Sackett, P. R., & Larson, Jr., J. R. 1990. Research strategies and tactics in industrial and organizational psychology.
- Schwab, D. P. 2013. Research methods for organizational studies. Abingdon: Psychology Press.
- Wooldridge, J. M. 2010. Econometric analysis of cross section and panel data, 2nd edn. Cambridge, MA: MIT Press.
- Yin, R. K. 2013. Case study research: Design and methods. Thousand Oaks, CA: Sage.