Situational Bias

  • Gideon J. Mellenbergh
Chapter

Abstract

Situational bias is a systematic error caused by the research situation and participants’ reactions to it. Situational factors that affect E- and C-participants equally do not cause spurious differences between conditions, but factors that affect E- and C-participants differentially cause bias. Standardization of the research situation equalizes the situation for all participants, and calibration equalizes the research situation across time. Participants may react differentially to conditions, and experimenters and data analysts may affect conditions differentially. Blinding of these persons prevents such differential influence. Random assignment of research persons (e.g., experimenters and interviewers) to conditions turns their systematic influence on participants into random error. Necessary conditions for a causal effect of an IV on a DV are that the conditions are correctly implemented and that conditions are not contaminated. A manipulation check is a procedure to verify the implementation of conditions. Contamination of conditions is prevented by separating conditions in location or time. Random assignment of participants counteracts selection bias, but it may induce randomization bias, for example, if participants dislike their assigned condition. Randomization bias usually cannot be prevented in randomized experiments, but it can be assessed by applying a doubly randomized preference design. Pretest-posttest studies are threatened by pretest effects, that is, the effects of administering a pretest on participants’ behavior. A pretest effect can be prevented by, for example, replacing the pretest with an unobtrusive proxy pretest, and it can be assessed by using Solomon’s four-group design. In addition to pretest effects, studies that use a self-report pretest may show a response shift, which is a change in the meaning of a participant’s self-evaluation from pretest to posttest. It can be assessed by administering a retrospective pretest.
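The logic of Solomon’s four-group design can be illustrated with a small simulation. The sketch below is not from the chapter; it assumes a simple additive model with hypothetical parameter values (a true treatment effect of 5 and a true pretest effect of 3) and shows how comparing pretested with unpretested groups separates the two effects.

```python
import random
import statistics

random.seed(42)

def simulate_group(n, treated, pretested,
                   baseline=50.0, treatment_effect=5.0,
                   pretest_effect=3.0, sd=10.0):
    """Posttest scores under a simple additive model (hypothetical values)."""
    return [baseline
            + (treatment_effect if treated else 0.0)
            + (pretest_effect if pretested else 0.0)
            + random.gauss(0.0, sd)
            for _ in range(n)]

n = 5000
# Solomon's four groups: E and C, each with and without a pretest.
g1 = simulate_group(n, treated=True,  pretested=True)   # E, pretested
g2 = simulate_group(n, treated=False, pretested=True)   # C, pretested
g3 = simulate_group(n, treated=True,  pretested=False)  # E, no pretest
g4 = simulate_group(n, treated=False, pretested=False)  # C, no pretest

m1, m2, m3, m4 = (statistics.mean(g) for g in (g1, g2, g3, g4))

# Treatment effect: E minus C, averaged over pretested and unpretested groups.
treatment_est = ((m1 - m2) + (m3 - m4)) / 2
# Pretest effect: pretested minus unpretested, averaged over E and C.
pretest_est = ((m1 - m3) + (m2 - m4)) / 2

print(f"estimated treatment effect: {treatment_est:.2f}")
print(f"estimated pretest effect:   {pretest_est:.2f}")
```

With large groups, the two estimates recover the simulated treatment and pretest effects; a two-group pretest-posttest design would confound them.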

Keywords

Blinding · Calibration · Manipulation checks · Pretest effects · Proxy pretest · Randomization bias · Response shift · Retrospective pretest · Standardization

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Emeritus Professor of Psychological Methods, Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands