
Modeling Motivated Misreports to Sensitive Survey Questions

Abstract

Asking sensitive or personal questions in surveys or experimental studies can both lower response rates and increase item non-response and misreports. Although non-response is easily diagnosed, misreports are not. However, misreports cannot be ignored because they give rise to systematic bias. The purpose of this paper is to present a modeling approach that identifies misreports and corrects for them. Misreports are conceptualized as a motivated process under which respondents edit their answers before they report them. For example, systematic bias introduced by overreports of socially desirable behaviors or underreports of less socially desirable ones can be modeled, leading to more-valid inferences. The proposed approach is applied to a large-scale experimental study and shows that respondents who feel powerful tend to overclaim their knowledge.

Acknowledgements

This research was supported in part by grants from the Social Sciences and Humanities Research Council of Canada and the Canada Foundation for Innovation.

Author information

Correspondence to Ulf Böckenholt.

Appendices

Appendix A. Simulation Studies

To assess the estimation bias of the RES model, a number of simulation studies were performed. Here, we present results for two versions of the RES model with five items each, one with three and one with four response categories.

A.1 RES Model with Three Response Categories

The parameter values for the RES model with three response categories are reported in Table A.1. The random effects \(\theta_{i}^{(R)}\), \(\theta_{i}^{(E)}\), and \(\theta_{i}^{(S)}\) were specified to be equally correlated with

$$\begin{aligned} \boldsymbol{\Sigma} = \left ( \begin{array}{c@{\quad}c@{\quad}c} \sigma_1+ \sigma_2 & \sigma_2 & \sigma_2\\ \sigma_2 & \sigma_1 + \sigma_2 & \sigma_2\\ \sigma_2 & \sigma_2 & \sigma_1 + \sigma_2 \end{array} \right ), \end{aligned}$$
(A.1)

and \(\sigma_1=\sigma_2=0.5\). Table A.1 summarizes the estimation results for the three sample sizes n=5,000, n=1,000, and n=500, based on 500 replications each. We report the estimated mean parameter values, the mean standard errors, and the ratio of the mean standard error to the standard deviation of the estimated parameter values. For n=5,000, the estimated bias is small and the mean standard errors agree well with the standard deviations of the estimated parameters. For the smaller sample sizes n=1,000 and n=500, the bias of the item parameters remains small, but the bias in the standard errors increases: they appear to be systematically smaller than the standard deviations of the estimated parameter values. For each of the fitted models, we also computed the expected a posteriori (EAP) person scores. The results of these analyses are reported in the section “Recovery of Item and Person Parameters”.
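As a minimal numerical sketch (assuming Python with NumPy, neither of which the paper prescribes), the equicorrelated structure of Eq. (A.1) and the draws of the person-level random effects can be reproduced as follows; the RES measurement model itself is not implemented here.

```python
import numpy as np

# Equicorrelated covariance of Eq. (A.1): Sigma = sigma_1 * I + sigma_2 * J,
# with sigma_1 = sigma_2 = 0.5 as in the simulation design.
sigma_1, sigma_2 = 0.5, 0.5
Sigma = sigma_1 * np.eye(3) + sigma_2 * np.ones((3, 3))

rng = np.random.default_rng(2014)
n = 5_000  # largest of the three simulated sample sizes

# Person-level random effects (theta_R, theta_E, theta_S) for n respondents.
theta = rng.multivariate_normal(mean=np.zeros(3), cov=Sigma, size=n)

# Implied variances are sigma_1 + sigma_2 = 1; covariances are sigma_2 = 0.5.
print(np.round(np.cov(theta, rowvar=False), 2))
```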

Table A.1. Simulation results of RES model with three item categories.

A.2 RES Model with Four Response Categories

The setup of the RES model with four response categories differed from the previous simulation study in two ways. First, covariates were included at the R and E stages of the model. Second, although the elements of the covariance matrix took the same values as in the previous simulation study, their estimation was unconstrained. Specifically, we set \(\boldsymbol{\Sigma} = \begin{pmatrix} 1 & 0.5 & 0.5\\ 0.5 & 1 & 0.5\\ 0.5 & 0.5 & 1 \end{pmatrix}\) and estimated the corresponding elements of the Cholesky factor \(\boldsymbol{\Lambda}\) of \(\boldsymbol{\Sigma}=\boldsymbol{\Lambda}\boldsymbol{\Lambda}'\), with \(\boldsymbol{\Lambda} = \begin{pmatrix} \lambda_{11} & 0 & 0\\ \lambda_{21} & \lambda_{22} & 0\\ \lambda_{31} & \lambda_{32} & \lambda_{33} \end{pmatrix}\).
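This Cholesky parameterization can be checked numerically. The sketch below (again a NumPy illustration, not the author's estimation code) factors the specified Σ and prints the lower-triangular Λ whose nonzero elements are the quantities estimated in this simulation.

```python
import numpy as np

# Covariance specified for the four-category study: unit variances and
# correlations of 0.5 among theta_R, theta_E, and theta_S.
Sigma = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])

# Lower-triangular Cholesky factor Lambda with Sigma = Lambda @ Lambda.T.
Lambda = np.linalg.cholesky(Sigma)
assert np.allclose(Lambda @ Lambda.T, Sigma)
print(np.round(Lambda, 3))  # lambda_11, lambda_21, ..., lambda_33
```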

The first two columns of Table A.2 list the effects and the chosen parameter values for the five items at both the response-formation and editing stages, the elements of the Cholesky factor, the threshold values of the response-formation stage, and the category-attractiveness values of the editing stage. Four groups are specified that differ in the item parameters of the R and E stages. Specifically, for Group 1 the item effects are \(\boldsymbol{\gamma}^{(R)}=(1,0.5,0,0.5,1)\) and \(\boldsymbol{\gamma}^{(E)}=(0.3,0.15,0,0.15,0.3)\). The corresponding item effects are \(\boldsymbol{\gamma}^{(R)}\) and \(\boldsymbol{\gamma}^{(E)}+\varphi_2\) for Group 2, \(\boldsymbol{\gamma}^{(R)}+\varphi_1\) and \(\boldsymbol{\gamma}^{(E)}\) for Group 3, and \(\boldsymbol{\gamma}^{(R)}+\varphi_1\) and \(\boldsymbol{\gamma}^{(E)}+\varphi_2\) for Group 4, where \(\varphi_1=0.5\) and \(\varphi_2=-0.5\). The sample sizes of the four groups were specified to be equal. The remaining columns of Table A.2 report the estimated parameters, the mean standard errors, and the ratio of the mean standard error to the standard deviation of the estimated parameter values for the two sample sizes n=500 and n=1,000. These values are based on 500 replications. As in the previous simulation study, we find that the estimation bias is small for both sample sizes and that, for n=500, the standard errors appear systematically smaller than the standard deviations of the estimated parameter values. Likelihood-ratio tests may provide more accurate inferences at this sample size.
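The group structure described above amounts to simple shifts of the baseline item-effect vectors. A small illustrative sketch (NumPy again; the vectors and shifts are taken from the text, the surrounding scaffolding is hypothetical):

```python
import numpy as np

# Baseline item effects for the response-formation (R) and editing (E) stages.
gamma_R = np.array([1.0, 0.5, 0.0, 0.5, 1.0])
gamma_E = np.array([0.3, 0.15, 0.0, 0.15, 0.3])
phi_1, phi_2 = 0.5, -0.5  # group shifts applied to the R and E effects

# Group 1: baseline; Group 2: shifted E effects; Group 3: shifted R effects;
# Group 4: both shifted.
group_effects = {
    1: (gamma_R,         gamma_E),
    2: (gamma_R,         gamma_E + phi_2),
    3: (gamma_R + phi_1, gamma_E),
    4: (gamma_R + phi_1, gamma_E + phi_2),
}
for g, (gr, ge) in group_effects.items():
    print(f"Group {g}: gamma_R = {gr}, gamma_E = {ge}")
```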

Table A.2. Simulation results of RES model with four item categories.

Appendix B. Item Questionnaire

1. Sciatica is:

    • an anxiety-reducing drug

    • caused by the compression of nerves

    • a hormone

    • a protein

    • none of the above

2. Meiosis is:

    • a chromosome

    • a hormone

    • a type of cell division

    • a skin disease

    • none of the above

3. Antigen is:

    • a hormone

    • a protein

    • a disease

    • a virus

    • none of the above

4. Meta-toxins are:

    • produced by cancer cells

    • pain relievers

    • chemical agents

    • used to develop vaccines

    • none of the above

5. Bio-sexual

    • refers to the reproduction of plants

    • refers to non-chemical birth-control methods

    • refers to the passion for biology

    • refers to an account of someone’s sexual life

    • none of the above

6. Retroplex is:

    • a part of cell structures

    • a neck muscle

    • an involuntary movement

    • the inability to recall past events

    • none of the above

Cite this article

Böckenholt, U. Modeling Motivated Misreports to Sensitive Survey Questions. Psychometrika 79, 515–537 (2014). https://doi.org/10.1007/s11336-013-9390-9
