Abstract
Previous research has demonstrated that selection decision making is improved with the use of valid pre-employment assessments, but these assessments can often engender negative reactions on the part of job candidates. Reactions to personality assessments tend to be particularly negative, and these reactions are even worse for forced-choice personality assessments. This latter issue is particularly troubling given the evidence showing that forced-choice measurement is quite effective at reducing deliberate response distortion (i.e., faking). Given the importance organizations place on candidate experience during recruitment and selection, improving applicants’ reactions to valid selection assessments is important. Previous research has not, however, examined the mechanisms underlying test takers’ negative reactions to forced-choice assessments in particular. Here, we propose that forced-choice measurement threatens elements of the test taker’s self-concept, thereby engendering negative reactions to the assessment. Based on these theoretical arguments, we develop and test the efficacy of four format variations to a forced-choice assessment intended to improve test taker reactions. Results suggest that, compared to a traditional forced-choice assessment, test takers reacted more positively to formats that (1) used a graded, as opposed to dichotomous, response scale (i.e., allowing for slightly (dis)agree responses); (2) included post-assessment performance feedback; and (3) removed the most socially undesirable items from the test. The theoretical and practical implications of these results are discussed.
Notes
We note that there is a debate regarding the prevalence and implications of applicant faking behavior in general (e.g., Griffith et al., 2011) and regarding personality specifically (e.g., Hough & Oswald, 2008). A full review of applicant faking behavior is beyond the scope of this article; interested readers are directed to Griffith et al. (2011).
As noted in the “Method” section, the measure used in this study is computer adaptive. Therefore, removing undesirable items in this case means reducing the item pool, not the number of items test takers completed. In instances wherein the forced-choice measure is not computer adaptive, this format variation may be inappropriate as it will shorten the length of an existing test. As such, if one wishes to remove undesirable items from a non-computer adaptive forced-choice measure, additional items may need to be written.
We note that this intervention, as well as the next, is not specific to forced-choice personality measurement and may be valuable for improving test taker reactions to other types of pre-employment testing.
Employment status was not collected in this sample. However, given the similar sampling strategies between studies 1 and 2, it is reasonable to assume that employment rates are similar across the two samples. As such, we estimate that over 70% of the respondents in study 1 are likely employed.
Repeated testing was used to allow test developers to assess the psychometric properties of the measure.
The reactions to feedback measure was not included in the CFA because only about half of the respondents completed this measure (i.e., those in the feedback-provided conditions).
Full CFA results are available from the corresponding author upon request.
Full results for these analyses are available from the corresponding author.
For perceptions of appropriateness, a significant instruction formality × feedback interaction was observed. When explored further, instruction type did not significantly influence perceptions within either the feedback, F(2, 381) = 1.23, n.s., or the no-feedback, F(2, 362) = 2.52, n.s., conditions.
The authors would like to thank an anonymous reviewer for bringing this point to our attention.
References
Alicke, M. D., & Sedikides, C. (2009). Self-enhancement and self-protection: What they are and what they do. European Review of Social Psychology, 20, 1–48.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Anderson, N., Salgado, J. F., & Hulsheger, U. R. (2010). Applicant reactions in selection: Comprehensive meta-analysis into reaction generalization versus situational specificity. International Journal of Selection and Assessment, 18, 291–304.
Barrick, M. R., & Mount, M. K. (1991). The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26.
Bauer, T. N., Truxillo, D. M., Sanchez, R. J., Craig, J. M., Ferrara, P., & Campion, M. A. (2001). Applicant reactions to selection: Development of the selection procedural justice scale (SPJS). Personnel Psychology, 54, 387–419.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5, 323–370.
Borman, W. C., Buck, D. E., Hanson, M. A., Motowidlo, S. J., Stark, S., & Drasgow, F. (2001). An examination of the comparative reliability, validity, and accuracy of performance ratings made using computerized adaptive rating scales. Journal of Applied Psychology, 86, 965–973.
Boyce, A. S., & Capman, J. F. (2017). ADEPT-15 technical report. New York, NY: Aon Assessment Solutions.
Brehm, J. W. (1966). A theory of psychological reactance. New York, NY: Academic Press.
Brooks, M. E., Dalal, D. K., & Nolan, K. P. (2014). Are common language effect size indicators easier to understand? Journal of Applied Psychology, 99, 332–340.
Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. New York, NY: Cambridge University Press.
Brown, A., & Maydeu-Olivares, A. (2011). Item response modeling of forced-choice questionnaires. Educational and Psychological Measurement, 71, 460–502.
Campion, M. C., Campion, E. D., & Campion, M. A. (2019). Using practice employment tests to improve recruitment and personnel selection outcomes for organizations and job seekers. Journal of Applied Psychology, 104, 1089–1102.
Cao, M., & Drasgow, F. (2019). Does forcing reduce faking? A meta-analytic review of forced-choice personality measures in high-stakes situations. Journal of Applied Psychology, 104, 1347–1368.
Chapman, D. S., Uggerslev, K. L., Carroll, S. A., Piasentin, K. A., & Jones, D. A. (2005). Applicant attraction to organizations and job choice: A meta-analytic review of the correlates of recruiting outcomes. Journal of Applied Psychology, 90, 928–944.
Christiansen, N. D., Burns, G. N., & Montgomery, G. E. (2005). Reconsidering forced-choice item formats for applicant personality assessment. Human Performance, 18, 267–307.
Christiansen, N. D., Edelstein, S., & Fleming, B. (1998). Reconsidering forced-choice formats for applicant personality assessment. Paper presented at the annual meeting of the Society for Industrial/Organizational Psychology, Dallas, TX.
Connelly, B. L., Certo, S. T., Ireland, R. D., & Reutzel, C. R. (2011). Signaling theory: A review and assessment. Journal of Management, 37, 39–67.
Converse, P. D., Oswald, F. L., Imus, A., Hedricks, C., Roy, R., & Butera, H. (2008). Comparing personality test formats and warnings: Effects on criterion-related validity and test-taker reactions. International Journal of Selection and Assessment, 16, 155–169.
Conway, J. S., Boyce, A. S., Caputo, P. M., & Huber, C. (2015). Development of a computer adaptive forced-choice personality test. Poster presented at the 30th Annual Meeting of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Cortina, J. M., & Folger, R. G. (1998). When is it acceptable to accept a null hypothesis: No way, Jose? Organizational Research Methods, 1, 334–350.
Cortina, J. M., & Landis, R. S. (2009). When small effect sizes tell a big story, and when large effect sizes don’t. In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences (pp. 287–308). New York, NY: Routledge.
Cunningham, M. R. (1989). Test-taking motivations and outcomes on a standardized measure of on-the-job integrity. Journal of Business and Psychology, 4, 119–127.
Dilchert, S., Ones, D. S., Viswesvaran, C., & Deller, J. (2006). Response distortion in personality measurement: Born to deceive, yet capable of providing valid self-assessments? Psychology Science, 48, 209–225.
Foldes, H. J., Duehr, E. D., & Ones, D. S. (2008). Group differences in personality: Meta-analyses comparing five U.S. racial groups. Personnel Psychology, 61, 579–616.
Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694–734.
Goffin, R. D., & Christiansen, N. D. (2003). Correcting personality tests for faking: A review of popular personality tests and an initial survey of researchers. International Journal of Selection and Assessment, 11, 340–344.
Griffith, R. L., Lee, L. M., Peterson, M. H., & Zickar, M. J. (2011). First dates and little white lies: A trait contract classification theory of applicant faking behavior. Human Performance, 24, 338–357.
Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57, 639–683.
Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology, 1, 333–342.
Highhouse, S. (2009). Designing experiments that generalize. Organizational Research Methods, 12, 554–566.
Highhouse, S., & Gillespie, J. Z. (2009). Do samples really matter that much? In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences (pp. 249–268). New York, NY: Routledge.
Huber, C., Conway, J. S., & Boyce, A. S. (2015). Convergent validity of a MDPP CAT for high-stakes personality testing. Poster presented at the 30th Annual Meeting of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Huber, C. R., Capman, J., Boyce, A. S., & Lobene, E. V. (2017). Cross-cultural generalization of a multidimensional pairwise preference personality inventory. Presented at the 32nd Annual Meeting of the Society for Industrial and Organizational Psychology, Orlando, FL.
Jackson, D. N., Wroblewski, V. R., & Ashton, M. C. (2000). The impact of faking on employment tests: Does forced choice offer a solution? Human Performance, 13, 371–388.
Jeong, Y. R., Christiansen, N. D., Robie, C., Kung, M. C., & Kinney, T. B. (2017). Comparing applicants and incumbents: Effects of response distortion on mean scores and validity of personality measures. International Journal of Selection and Assessment, 25, 311–315.
McCarthy, J. M., Bauer, T. N., Truxillo, D. M., Anderson, N. R., Costa, A. C., & Ahmed, S. M. (2017). Applicant perspectives during selection: A review addressing “So what?,” “What’s new?,” and “Where to next?”. Journal of Management, 43, 1693–1725.
McCloy, R. A., Heggestad, E. D., & Reeve, C. L. (2005). A silk purse from the sow’s ear: Retrieving normative information from multidimensional forced-choice items. Organizational Research Methods, 8, 222–248.
McFarland, L. A. (2013). Applicant reactions to personality tests: Why do applicants hate them? In N. Christiansen & R. Tett (Eds.), Handbook of personality at work (pp. 281–298). New York, NY: Routledge.
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17, 437–455.
Mount, M. K., Barrick, M. R., Scullen, S. M., & Rounds, J. (2005). Higher-order dimensions of the big five personality traits and the big six vocational interest types. Personnel Psychology, 58, 447–478.
Murphy, K. R. (1986). When your top choice turns you down: Effect of rejected offers on the utility of selection tests. Psychological Bulletin, 99, 133–138.
Murphy, K. R., & Davidshofer, C. O. (2005). Psychological testing: Principles and applications (6th edition). Upper Saddle River, New Jersey: Pearson.
Noels, K. A., Giles, H., & Le Poire, B. (2003). Language and communication processes. In M. A. Hogg & J. Cooper (Eds.), The SAGE handbook of social psychology (pp. 232–257). Thousand Oaks, CA: SAGE Publications.
O'Neill, T. A., Lewis, R. J., Law, S. J., Larson, N., Hancock, S., Radan, J., Lee, N., & Carswell, J. J. (2017). Forced-choice pre-employment personality assessment: Construct validity and resistance to faking. Personality and Individual Differences, 115, 120–127.
Oostrom, J. K., Born, M. P., Serlie, A. W., & van der Molen, H. T. (2010). Effects of individual differences on the perceived job relatedness of a cognitive ability test and a multimedia situational judgment test. International Journal of Selection and Assessment, 18, 394–406.
Ployhart, R. E., & Harold, C. M. (2004). The applicant attribution-reaction theory (AART): An integrative theory of applicant attributional processing. International Journal of Selection and Assessment, 12, 84–98.
Ployhart, R. E., McFarland, L. A., & Ryan, A. M. (2002). Examining applicants’ attributions for withdrawal from a selection procedure. Journal of Applied Social Psychology, 32, 2228–2252.
R Core Team. (2013). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. URL http://www.R-project.org/.
Revelle, W. (2019). psych: Procedures for psychological, psychometric, and personality research. R package version 1.9.7.
Roberts, J. S., Donoghue, J. R., & Laughlin, J. E. (2000). A generalized item response theory model for unfolding unidimensional polytomous responses. Applied Psychological Measurement, 24, 3–32.
Rosse, J. G., Miller, J. L., & Stecher, M. D. (1994). A field study of job applicants' reactions to personality and cognitive ability testing. Journal of Applied Psychology, 79, 987–992.
Rosse, J. G., Stecher, M. D., Miller, J. L., & Levin, R. A. (1998). The impact of response distortion on preemployment personality testing and hiring decisions. Journal of Applied Psychology, 83, 634–644.
Rothstein, M. G., & Goffin, R. D. (2006). The use of personality measures in personnel selection: What does current research support? Human Resource Management Review, 16, 155–180.
Ryan, A. M., & Ployhart, R. E. (2000). Applicants’ perceptions of selection procedures and decisions: A critical review and agenda for the future. Journal of Management, 26, 565–606.
Rynes, S. L., Bretz Jr., R. D., & Gerhart, B. (1991). The importance of recruitment in job choice: A different way of looking. Personnel Psychology, 44, 487–521.
Sackett, P. R., & Lievens, F. (2008). Personnel selection. Annual Review of Psychology, 59, 419–450.
Schmit, M. J., & Ryan, A. M. (1993). The Big Five in personnel selection: Factor structure in applicant and nonapplicant populations. Journal of Applied Psychology, 78, 966–974.
Schwarz, N. (1996). Cognition and communication: Judgmental biases, research methods and the logic of conversation. Hillsdale, NJ: Erlbaum.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93–105.
Sedikides, C., & Gregg, A. P. (2008). Self-enhancement: Food for thought. Perspectives on Psychological Science, 3, 102–116.
Smither, J. W., Reilly, R. R., Millsap, R. E., Pearlman, K., & Stoffey, R. W. (1993). Applicant reactions to selection procedures. Personnel Psychology, 46, 49–76.
Stark, S., Chernyshenko, O. S., & Drasgow, F. (2005). An IRT approach to constructing and scoring pairwise preference items involving stimuli on different dimensions: The multi-unidimensional pairwise-preference model. Applied Psychological Measurement, 29, 184–203.
Stark, S., Chernyshenko, O. S., & Drasgow, F. (2006). Detecting differential item functioning with confirmatory factor analysis and item response theory: Toward a unified strategy. Journal of Applied Psychology, 91, 1292–1306.
Stark, S., Chernyshenko, O. S., & Drasgow, F. (2010). Tailored Adaptive Personality Assessment System (TAPAS-95s). Expanded enlistment eligibility metrics (EEEM): Recommendations on a non-cognitive screen for new soldier selection.
Stark, S., Chernyshenko, O. S., Drasgow, F., & White, L. A. (2012). Adaptive testing with multidimensional pairwise preference items: Improving the efficiency of personality and other noncognitive assessments. Organizational Research Methods, 15, 463–487.
Steiger, J. H. (1980). Testing pattern hypotheses on correlation matrices: Alternative statistics and some empirical results. Multivariate Behavioral Research, 15, 335–352.
Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
Swann Jr., W. B. (1983). Self-verification: Bringing social reality into harmony with the self. In J. Suls & A. G. Greenwald (Eds.), Social psychological perspectives on the self (Vol. 2, pp. 33–66). Hillsdale, NJ: Erlbaum.
Swann Jr., W. B. (1984). Quest for accuracy in person perception: A matter of pragmatics. Psychological Review, 91, 457–477.
Swann, W. B. (1987). Identity negotiation: Where two roads meet. Journal of Personality and Social Psychology, 53, 1038–1051.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703–742.
The Talent Board (2018a). CANDE Winners. Retrieved July 4, 2018, from http://thetalentboard.org/cande-awards/cande-winners/
The Talent Board (2018b). The 2018 Talent Board Candidate Experience Award and Benchmark Program. Retrieved July 4, 2018, from http://thetalentboard.org/cande-awards/how-to-apply/north-america/
Van Hoye, G., & Lievens, F. (2007). Social influences on organizational attractiveness: Investigating if and when word of mouth matters. Journal of Applied Social Psychology, 37, 2024–2047.
Van Hoye, G., & Lievens, F. (2009). Tapping the grapevine: A closer look at word-of-mouth as a recruitment source. Journal of Applied Psychology, 94, 341–352.
Williams, M. L., & Bauer, T. N. (1994). The effect of a managing diversity policy on organizational attractiveness. Group & Organization Management, 19, 295–308.
Wortman, C. B., Brehm, J. W., & Berkowitz, L. (1975). Responses to uncontrollable outcomes. Advanced Experimental Social Psychology, 8, 277–336.
Zhu, X., Barnes-Farrell, J., & Dalal, D. K. (2015). Stop apologizing for your samples, start embracing them. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8, 228–232.
Acknowledgments
The authors would like to thank the editor and the anonymous reviewers for their assistance with improving this manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Dalal, D. K., Zhu, X., Rangel, B., et al. Improving Applicant Reactions to Forced-Choice Personality Measurement: Interventions to Reduce Threats to Test Takers’ Self-Concepts. J Bus Psychol 36, 55–70 (2021). https://doi.org/10.1007/s10869-019-09655-6