Marketing Letters, Volume 27, Issue 3, pp 437–447

A method for evaluating and selecting field experiment locations

  • David Trafimow
  • James M. Leonhardt
  • Mihai Niculescu
  • Collin Payne


When marketing researchers perform field experiments, it is crucial that the experimental location and the control location are comparable. At present, comparability is difficult to assess because there is no way to distinguish differences between locations that are due to random factors from those due to systematic factors. To address this problem, we propose a methodology that enables field researchers to evaluate and select optimal field locations by parsing random from systematic effects. To assess the accuracy of the proposed methodology, we performed computer simulations with 10,000 cases per simulation. The simulations demonstrate that accuracy increases as the number of data points increases and as consistency increases.
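The simulation logic described above can be sketched in miniature as follows. This is an illustrative reconstruction, not the authors' actual code: the noise model (random error shrinking as consistency rises), the parameter names, and the smaller case count are all assumptions made for the sketch.

```python
import random
import statistics

def simulate_accuracy(n_points, consistency, n_cases=2000,
                      true_diff=0.2, seed=42):
    """Monte Carlo sketch: fraction of simulated cases in which the
    sample mean difference between two locations recovers the sign of
    a true systematic difference.  `consistency` is assumed to control
    how small the random (unsystematic) noise is."""
    rng = random.Random(seed)
    noise_sd = 1.0 - consistency  # assumption: less noise when consistency is high
    hits = 0
    for _ in range(n_cases):
        # Location A carries the systematic effect; location B does not.
        a = [true_diff + rng.gauss(0, noise_sd) for _ in range(n_points)]
        b = [rng.gauss(0, noise_sd) for _ in range(n_points)]
        if statistics.mean(a) > statistics.mean(b):
            hits += 1
    return hits / n_cases
```

Running the sketch with more data points per location, or with higher consistency, yields a higher hit rate, mirroring the qualitative pattern the abstract reports for the full 10,000-case simulations.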


Keywords: Marketing research · Field experiments · Experimental methodology and design · Potential performance theory



Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Department of Psychology, New Mexico State University, Las Cruces, USA
  2. Department of Marketing, New Mexico State University, Las Cruces, USA
