Abstract
Polynomial regression with response surface analysis is a sophisticated statistical approach that has become increasingly popular in multisource feedback research (e.g., self-observer rating discrepancy). The approach allows researchers to examine the extent to which combinations of two predictor variables relate to an outcome variable, particularly when the discrepancy (difference) between the two predictor variables is a central consideration. We believe this approach has potential for application to a wide variety of research questions. To encourage interest in and use of this technique, we provide ideas for future research directions that might benefit from the application of this analytic tool. We also walk through a step-by-step example of how to conduct polynomial regression and response surface analysis and provide all the tools you will need to conduct the analyses and graph the results (including SPSS syntax, formulas, and a downloadable Excel spreadsheet). Our example examines how discrepancies between perceived supervisor support and perceived organizational support relate to affective commitment. Finally, we discuss how this approach is a better, more informative alternative to difference scores and can be applied to the examination of two-way interactions in moderated regression.
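For readers who want to see the model in a scripted form, the fragment below is a minimal sketch of the quadratic polynomial regression described above, written in Python with statsmodels on simulated 5-point data. It is illustrative only; the variable names and data are placeholders, and it is not the authors' SPSS syntax or Excel workflow.

```python
# Minimal, hypothetical sketch (not the authors' SPSS syntax): fitting the
# quadratic polynomial regression
#   AC = b0 + b1*PSS + b2*POS + b3*PSS^2 + b4*(PSS*POS) + b5*POS^2
# on simulated 5-point data. Variable names and data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pss": rng.integers(1, 6, n).astype(float),  # perceived supervisor support, 1-5
    "pos": rng.integers(1, 6, n).astype(float),  # perceived organizational support, 1-5
})
df["ac"] = 3 + 0.3 * (df["pss"] - 3) + 0.5 * (df["pos"] - 3) + rng.normal(0, 0.5, n)

# Center the predictors at the scale midpoint, then form the higher-order terms
df["pss_c"] = df["pss"] - 3
df["pos_c"] = df["pos"] - 3
df["pss_sq"] = df["pss_c"] ** 2
df["pss_x_pos"] = df["pss_c"] * df["pos_c"]
df["pos_sq"] = df["pos_c"] ** 2

X = sm.add_constant(df[["pss_c", "pos_c", "pss_sq", "pss_x_pos", "pos_sq"]])
fit = sm.OLS(df["ac"], X).fit()
print(fit.summary())
print(fit.cov_params())  # coefficient covariance matrix (analogue of SPSS's BCOV output)
```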
Change history
09 August 2013
An Erratum to this paper has been published: https://doi.org/10.1007/s10869-013-9317-6
Notes
There are times when one would want to examine variables that predict agreement (i.e., to treat agreement as a "dependent" or "outcome" variable). Edwards (1995) discusses how to use multivariate procedures to examine ratings considered jointly as outcome variables. Ostroff et al. (2004), Gentry et al. (2007), and Gentry et al. (in press) provide recent empirical examples.
References
Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage.
Aselage, J., & Eisenberger, R. (2003). Perceived organizational support and psychological contracts: A theoretical integration. Journal of Organizational Behavior, 24, 491–509.
Atwater, L. E., Ostroff, C., Yammarino, F. J., & Fleenor, J. W. (1998). Self-other agreement: Does it really matter? Personnel Psychology, 51, 577–598.
Box, G. E. P., & Draper, N. R. (1987). Empirical model-building and response surfaces. New York: Wiley.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
Edwards, J. R. (1994). The study of congruence in organizational behavior research: Critique and proposed alternative. Organizational Behavior and Human Decision Processes, 58, 51–100.
Edwards, J. R. (1995). Alternatives to difference scores as dependent variables in the study of congruence in organizational research. Organizational Behavior and Human Decision Processes, 64, 307–324.
Edwards, J. R. (2007). Polynomial regression and response surface methodology. In C. Ostroff & T. A. Judge (Eds.), Perspectives on organizational fit (pp. 361–372). San Francisco: Jossey-Bass.
Edwards, J. R., & Parry, M. E. (1993). On the use of polynomial regression equations as an alternative to difference scores in organizational research. Academy of Management Journal, 36, 1577–1613.
Eisenberger, R., Huntington, R., Hutchison, S., & Sowa, D. (1986). Perceived organizational support. Journal of Applied Psychology, 71, 500–507.
Fleenor, J. W., McCauley, C. D., & Brutus, S. (1996). Self-other rating agreement and leader effectiveness. Leadership Quarterly, 7, 487–506.
Gentry, W. A., Hannum, K. M., Ekelund, B. Z., & de Jong, A. (2007). A study of the discrepancy between self- and observer-ratings on managerial derailment characteristics of European managers. European Journal of Work and Organizational Psychology, 16, 295–325.
Gentry, W. A., Yip, J., & Hannum, K. M. (in press). Self-observer rating discrepancies of managers in Asia: A study of derailment characteristics and behaviors in Southern and Confucian Asia. International Journal of Selection and Assessment.
Gibson, C. B., Cooper, C. D., & Conger, J. A. (2009). Do you see what we see? The complex effects of perceptual distance between leaders and teams. Journal of Applied Psychology, 94, 62–76.
Harris, M. M., Anseel, F., & Lievens, F. (2008). Keeping up with the Joneses: A field study of the relationships among upward, lateral, and downward comparisons and pay level satisfaction. Journal of Applied Psychology, 93, 665–673.
Kottke, J. L., & Sharafinski, C. E. (1988). Measuring perceived supervisory and organizational support. Educational and Psychological Measurement, 48, 1075–1079.
Meyer, J. P., & Allen, N. J. (1997). Commitment in the workplace: Theory, research, and application. Thousand Oaks, CA: Sage.
Ostroff, C., Atwater, L. E., & Feinberg, B. J. (2004). Understanding self-other agreement: A look at rater and ratee characteristics, context, and outcomes. Personnel Psychology, 57, 333–375.
Rhoades, L., & Eisenberger, R. (2002). Perceived organizational support: A review of the literature. Journal of Applied Psychology, 87, 698–714.
Rhoades, L., Eisenberger, R., & Armeli, S. (2001). Affective commitment to the organization: The contribution of perceived organizational support. Journal of Applied Psychology, 86, 825–836.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Allyn and Bacon.
Yammarino, F. J., & Atwater, L. E. (1997). Do managers see themselves as others see them? Implications of self-other rating agreement for human resources management. Organizational Dynamics, 25, 35–44.
References to Consult for More Information about Response Surface Analysis
Edwards, J. R. (2002). Alternatives to difference scores: Polynomial regression analysis and response surface methodology. In F. Drasgow & N. W. Schmitt (Eds.), Advances in measurement and data analysis (pp. 350–400). San Francisco: Jossey-Bass.
Atwater, L., Waldman, D., Ostroff, C., Robie, C., & Johnson, K. M. (2005). Self-other agreement: Comparing its relationship with performance in the U.S. and Europe. International Journal of Selection and Assessment, 13, 25–40.
Acknowledgment
This research was supported, in part, by funds from the University of North Carolina at Charlotte.
Appendices
Appendix 1
Appendix 2: Sample Excel Screen Shot for Step 3 (Graphing the Results)
Instructions to go with the Excel spreadsheet
1. To download a copy of the Excel spreadsheet for your use, please visit: www.springer.com/psychology/community+psychology/journal/10869.
2. Enter the unstandardized regression coefficients and their associated standard errors from your polynomial regression run in SPSS into the 'Data Entry Area' at the top left of the spreadsheet. Also enter the sample size of your data set in the space provided.
3. Enter the covariances for your regression coefficients in the right-hand column of the 'Data Entry Area'. You will get these from the SPSS output of the polynomial regression run (remember, as Table 3 in Appendix 1 indicates, to include the 'bcov' subcommand in SPSS so that you obtain the covariance matrix of the regression coefficients).
4. The 'Testing Slopes and Curves' box to the right of the spreadsheet calculates the surface values a1 through a4 and assesses their significance. These calculations occur automatically once you have entered your data into the 'Data Entry Area'. The formulas for calculating a1 through a4 also appear in Table 4 in Appendix 1 in case you want to calculate them by hand or choose not to download the Excel spreadsheet.
5. The 'Points to Plot' box shows the predicted values of the outcome variable (affective commitment) for each combination of the two predictor variables, based on the polynomial regression equation and its unstandardized regression weights. The values −2 to 2 in the gray area represent the points along the X- and Y-axes and are based on centered scores of a 5-point Likert-type scale ranging from 1 to 5 (you can change these values to fit the parameters of your measures, keeping five plot points; e.g., if you have a 7-point scale you could use −4, −2, 0, 2, and 4 as your values). The values depend on the original metric of the scale. PSS values run across the top; POS values run down the left-hand side.
The polynomial regression formula from our data was:
$$ \text{AC} = 3.10 - 0.23(\text{PSS}) + 0.77(\text{POS}) - 0.07(\text{PSS}^{2}) + 0.27(\text{PSS} \times \text{POS}) - 0.10(\text{POS}^{2}). $$
The Excel sheet calculates the predicted values of AC in the 'Points to Plot' box automatically, but as an example, to obtain the predicted value for the cell at −2 PSS and 2 POS, plug −2 and 2 into the polynomial regression formula as follows:
$$ 3.10 - 0.23(-2) + 0.77(2) - 0.07(-2)^{2} + 0.27(-2 \times 2) - 0.10(2)^{2} = 3.34. $$
6. The Excel spreadsheet automatically creates the graph. But if you want to do this on your own, create a table like the 'Points to Plot' table. Then highlight the table, including the −2 to 2 values on both sides, choose "insert chart," and then choose "surface chart." If using the table set up as shown above, choose "series in columns" and continue with the chart wizard. The minimum and maximum on the Z-axis (the axis for the outcome variable, in our case AC) should span the minimum and maximum values of the original outcome scale (in our case, 1 is the minimum and 5 is the maximum). Values on the diagonal represent the line of perfect agreement; values below the diagonal represent cases where PSS > POS, and values above the diagonal represent cases where PSS < POS. (A scripted alternative to Steps 4 through 6 appears in the sketches following these instructions.)
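For readers who prefer a scripted alternative to the spreadsheet, the following minimal Python sketch covers Step 4: it applies the standard surface-value formulas (a1 = b1 + b2, a2 = b3 + b4 + b5, a3 = b1 - b2, a4 = b3 - b4 + b5; cf. Table 4 in Appendix 1 and Edwards & Parry 1993) to the coefficients reported above. The variances and covariances in the sketch are placeholders only; substitute the values from your own BCOV output.

```python
# Sketch of Step 4: surface values a1-a4 and their standard errors.
# Coefficients are those reported in the instructions above; the variance/
# covariance numbers are PLACEHOLDERS, not values from the article.
import math

b1, b2, b3, b4, b5 = -0.23, 0.77, -0.07, 0.27, -0.10

# Surface values (Edwards & Parry, 1993; cf. Table 4 in Appendix 1)
a1 = b1 + b2        # slope along the line of perfect agreement (PSS = POS)
a2 = b3 + b4 + b5   # curvature along the line of perfect agreement
a3 = b1 - b2        # slope along the line of incongruence (PSS = -POS)
a4 = b3 - b4 + b5   # curvature along the line of incongruence

# Placeholder variances/covariances of b1..b5 taken from the SPSS BCOV matrix
v1, v2, v3, v4, v5 = 0.010, 0.010, 0.005, 0.006, 0.005
c12, c34, c35, c45 = 0.002, 0.001, 0.001, 0.001

se_a1 = math.sqrt(v1 + v2 + 2 * c12)
se_a2 = math.sqrt(v3 + v4 + v5 + 2 * (c34 + c35 + c45))
se_a3 = math.sqrt(v1 + v2 - 2 * c12)
se_a4 = math.sqrt(v3 + v4 + v5 + 2 * (-c34 + c35 - c45))

for name, a, se in [("a1", a1, se_a1), ("a2", a2, se_a2),
                    ("a3", a3, se_a3), ("a4", a4, se_a4)]:
    print(f"{name} = {a:5.2f}, SE = {se:.3f}, t = {a / se:.2f}")
```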
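The next sketch covers Steps 5 and 6: it fills the 'Points to Plot' grid from the reported equation and draws a surface chart with matplotlib in place of Excel's chart wizard. It is an illustrative alternative under the same coefficients, not the spreadsheet itself.

```python
# Sketch of Steps 5-6: predicted AC over the centered PSS/POS grid
# and a surface chart whose Z-axis spans the original 1-5 scale.
import numpy as np
import matplotlib.pyplot as plt

b0, b1, b2, b3, b4, b5 = 3.10, -0.23, 0.77, -0.07, 0.27, -0.10

def predicted_ac(pss, pos):
    """Predicted affective commitment from the polynomial regression equation."""
    return b0 + b1 * pss + b2 * pos + b3 * pss**2 + b4 * pss * pos + b5 * pos**2

grid = np.arange(-2, 3)                 # centered values -2, -1, 0, 1, 2
PSS, POS = np.meshgrid(grid, grid)
AC = predicted_ac(PSS, POS)             # the 'Points to Plot' table

print(round(predicted_ac(-2, 2), 2))    # 3.34, matching the worked example in Step 5

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(PSS, POS, AC, cmap="viridis")
ax.set_xlabel("PSS (centered)")
ax.set_ylabel("POS (centered)")
ax.set_zlabel("Affective commitment")
ax.set_zlim(1, 5)                       # Z-axis spans the original outcome scale
plt.show()
```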
Cite this article
Shanock, L.R., Baran, B.E., Gentry, W.A. et al. Polynomial Regression with Response Surface Analysis: A Powerful Approach for Examining Moderation and Overcoming Limitations of Difference Scores. J Bus Psychol 25, 543–554 (2010). https://doi.org/10.1007/s10869-010-9183-4