Experimental Economics, Volume 8, Issue 1, pp. 21–33

The Limitations of Experimental Design: A Case Study Involving Monetary Incentive Effects in Laboratory Markets

Abstract

We replicate an influential study of monetary incentive effects by Jamal and Sunder (1991) to illustrate the difficulties of drawing causal inferences from a treatment manipulation when other features of the experimental design vary simultaneously. We first show that the Jamal and Sunder (1991) conclusions hinge on one of their laboratory market sessions, conducted only within their fixed-pay condition, one characterized by a thin market and asymmetric supply and demand curves. When we replicate this structure multiple times under both fixed pay and pay tied to performance, our findings do not support Jamal and Sunder’s (1991) conclusion about the incremental effects of performance-based compensation, suggesting that other features that varied in that study likely account for their observed difference. Our ceteris paribus replication leaves us unable to offer generalized conclusions about the effects of monetary incentives in other market structures, but the broader point is to illustrate that experimental designs that attempt to generalize effects by varying multiple features simultaneously can jeopardize the ability to draw causal inferences about the primary treatment manipulation.
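To see the identification problem concretely, consider a minimal simulation, written here in Python. This is our illustration with entirely hypothetical effect sizes, not an analysis from the paper. When the pay manipulation and a market-structure feature vary in lockstep, the naive treatment contrast absorbs the structure effect; holding structure fixed, as in a ceteris paribus replication, recovers the true (here, null) incentive effect.

```python
# Illustrative sketch: confounding when a nuisance feature varies with the treatment.
# All effect sizes and session counts below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # simulated market sessions per cell

def confounded_estimate(pay_effect=0.0, structure_effect=2.0):
    # Confounded design: the thin/asymmetric structure (structure = 1) appears
    # only in the fixed-pay condition (pay = 0), so structure tracks pay exactly.
    pay = np.repeat([0, 1], n)          # 0 = fixed pay, 1 = performance pay
    structure = 1 - pay                 # nuisance feature perfectly confounded
    noise = rng.normal(0.0, 1.0, 2 * n)
    outcome = pay_effect * pay + structure_effect * structure + noise
    return outcome[pay == 1].mean() - outcome[pay == 0].mean()

# True incentive effect is zero, yet the confounded contrast comes out near -2,
# driven entirely by the market-structure difference.
print(f"confounded estimate:      {confounded_estimate():+.2f}")

# Ceteris paribus replication: hold structure fixed, vary only the pay scheme.
pay = np.repeat([0, 1], n)
outcome = 0.0 * pay + rng.normal(0.0, 1.0, 2 * n)  # structure constant, null pay effect
print(f"ceteris paribus estimate: {outcome[pay == 1].mean() - outcome[pay == 0].mean():+.2f}")
```

Because the incentive indicator and the structure indicator are perfectly collinear in the confounded design, no analysis of those data alone can separate the two effects; only a design that holds the nuisance feature constant identifies the treatment effect.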

Keywords

experimental design, monetary incentives, market power

References

  1. Anderson, M.J. and Sunder, S. (1995). “Professional Traders as Intuitive Bayesians.” Organizational Behavior and Human Decision Processes. 64, 185–202.
  2. Arkes, H.R. (1991). “Costs and Benefits of Judgment Errors: Implications for Debiasing.” Psychological Bulletin. 110, 486–498.
  3. Bonner, S.E. and Sprinkle, G.B. (2002). “The Effects of Monetary Incentives on Effort and Task Performance: Theories, Evidence, and a Framework for Research.” Accounting, Organizations and Society. 27, 303–345.
  4. Brandouy, O. (2001). “Laboratory Incentive Structure and Control-Test Design in an Experimental Asset Market.” Journal of Economic Psychology. 22, 1–26.
  5. Camerer, C.F. and Hogarth, R.M. (1999). “The Effects of Financial Incentives in Experiments: A Review and Capital-Labor-Production Framework.” Journal of Risk and Uncertainty. 19, 7–42.
  6. Friedman, D. and Sunder, S. (1994). Experimental Methods: A Primer for Economists. Cambridge: Cambridge University Press.
  7. Glantz, S.A. and Slinker, B.K. (1990). Primer of Applied Regression and Analysis of Variance. New York: McGraw-Hill.
  8. Glass, G.V., Willson, V.L., and Gottman, J.M. (1975). Design and Analysis of Time Series Experiments. Boulder, CO: Colorado Associated University Press.
  9. Hertwig, R. and Ortmann, A. (2001). “Experimental Practices in Economics: A Methodological Challenge for Psychologists?” Behavioral and Brain Sciences. 24, 383–451.
  10. Holt, C.A., Langan, L.W., and Villamil, A.P. (1986). “Market Power in Oral Double Auctions.” Economic Inquiry. 24(1), 107–123.
  11. Huynh, H. and Feldt, L.S. (1976). “Estimation of the Box Correction for Degrees of Freedom from Sample Data in the Randomized Block and Split Plot Designs.” Journal of Educational Statistics. 1, 69–82.
  12. Jamal, K. and Sunder, S. (1991). “Money vs. Gaming: Effects of Salient Monetary Payments in Double Oral Auctions.” Organizational Behavior and Human Decision Processes. 49, 151–166.
  13. Jamal, K. and Sunder, S. (1996). “Bayesian Equilibrium in Double Auctions Populated by Biased Heuristic Traders.” Journal of Economic Behavior and Organization. 31, 273–291.
  14. Krahnen, J.P. and Weber, M. (2001). “Marketmaking in the Laboratory: Does Competition Matter?” Experimental Economics. 4, 55–85.
  15. Kuehl, R.O. (1994). Statistical Principles of Research Design and Analysis. Belmont, CA: Duxbury Press.
  16. McCloskey, D.N. and Ziliak, S.T. (1996). “The Standard Error of Regressions.” Journal of Economic Literature. 34, 97–114.
  17. Plott, C.R. (1991). “A Computerized Laboratory Market System and Research Support Systems for the Multiple Unit Double Auction.” Social Science Working Paper 783. Pasadena: California Institute of Technology.
  18. Shadish, W.R., Cook, T.D., and Campbell, D.T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.
  19. Smith, V.L. and Walker, J.M. (1993). “Monetary Rewards and Decision Cost in Experimental Economics.” Economic Inquiry. 31, 245–261.
  20. Tung, Y.A. and Marsden, J.R. (2000). “Trading Volumes with and without Private Information: A Study Using Computerized Market Experiments.” Journal of Management Information Systems. 17, 31–57.

Copyright information

© Springer Science + Business Media, Inc. 2005

Authors and Affiliations

  1. McCombs School of Business, University of Texas at Austin, Austin, USA
  2. Goizueta Business School, Emory University, Atlanta, USA
