
Experiments in Market Research

  • Torsten Bornemann
  • Stefan Hattula
Living reference work entry

Abstract

The question of how a certain activity (e.g., the intensity of communication activities during the launch of a new product) influences important outcomes (e.g., sales, preferences) is one of the key questions in applied as well as academic marketing research. While such questions may be answered by relating observed values of activities to the respective outcomes using survey and/or archival data, it is often not possible to claim that the particular activity actually caused the observed changes in the outcomes. To demonstrate cause-effect relationships, experiments take a different route: instead of merely observing activities, experimentation involves the systematic variation of an independent variable (the factor) while only the outcome is observed. The goal of this chapter is to discuss the parameters relevant to the proper execution of experimental studies. Among others, this involves decisions regarding the number of factors to be manipulated, the measurement of the outcome variable, the environment in which to conduct the experiment, and the recruitment of participants.
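To make this logic concrete, here is a minimal sketch in Python of the simplest such design: a single factor with two levels, varied between subjects. All numbers are simulated for illustration and are not taken from the chapter; the condition labels and effect magnitude are hypothetical assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical one-factor, two-level between-subjects design.
# The manipulated factor is communication intensity at launch
# (low vs. high); the outcome is a preference rating.
n_per_cell = 100

# Random assignment removes systematic differences between groups,
# so a difference in the outcome can be attributed to the factor.
low = rng.normal(loc=4.0, scale=1.2, size=n_per_cell)   # control condition
high = rng.normal(loc=4.6, scale=1.2, size=n_per_cell)  # treatment condition

# Compare condition means; Welch's t-test avoids assuming equal variances.
t, p = stats.ttest_ind(high, low, equal_var=False)

# Cohen's d as a standardized effect size (equal cell sizes assumed).
pooled_sd = np.sqrt((low.var(ddof=1) + high.var(ddof=1)) / 2)
d = (high.mean() - low.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```

Because participants are assigned to conditions at random and everything other than the factor is held constant, the difference in condition means estimates the causal effect of the manipulation rather than a mere association.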

Keywords

Experimental design · Laboratory experiment · Data collection · Cause-effect relationship · Manipulation · Experimental units


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Marketing, Goethe University Frankfurt, Frankfurt, Germany
