The Limitations of Experimental Design: A Case Study Involving Monetary Incentive Effects in Laboratory Markets
We replicate an influential study of monetary incentive effects by Jamal and Sunder (1991) to illustrate the difficulties of drawing causal inferences from a treatment manipulation when other features of the experimental design vary simultaneously. We first show that the Jamal and Sunder (1991) conclusions hinge on one of their laboratory market sessions, conducted only within their fixed-pay condition, that is characterized by a thin market and asymmetric supply and demand curves. When we replicate this structure multiple times under both fixed pay and pay tied to performance, our findings do not support Jamal and Sunder's (1991) conclusion about the incremental effects of performance-based compensation, suggesting that other features that varied in that study likely account for the difference they observed. Our ceteris paribus replication leaves us unable to offer any generalized conclusions about the effects of monetary incentives in other market structures, but the broader point is to illustrate that experimental designs that attempt to generalize effects by varying multiple features simultaneously can jeopardize the ability to draw causal inferences about the primary treatment manipulation.
Keywords: experimental design, monetary incentives, market power