
Outcomes of Peak, Typical, and Variability in Performance of College Football Teams

Journal of Business and Psychology

Abstract

Purpose

The purpose of this study was to investigate how the typical performance, peak performance, and performance variability of the offensive and defensive units of college football teams over the course of a season predict three objective team-level outcomes: win percentage, fan home game attendance, and bowl game payout.

Design/Methodology/Approach

Data were obtained from an archival sports database for 193 Bowl Subdivision college football teams for three separate seasons.

Findings

When all three types of performance were considered simultaneously, only typical performance significantly predicted win percentage and bowl game payout, explaining between 19% of the variance (for bowl game payout) and 49% (for win percentage). All interactions between typical performance and performance variability were non-significant.
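As a rough illustration of the kind of moderated regression implied by these findings (not the authors' exact analysis), the sketch below regresses a hypothetical win-percentage column on mean-centered typical, peak, and variability scores plus a typical × variability interaction; all column names are assumptions made for the example.

```python
# Minimal sketch of a moderated multiple regression, assuming a pandas
# DataFrame `df` with hypothetical columns: win_pct, typical, peak, variability.
# Illustrates the general technique only, not the authors' analysis.
import pandas as pd
import statsmodels.formula.api as smf


def moderated_regression(df: pd.DataFrame):
    # Mean-center the predictors so the interaction term is easier to interpret.
    for col in ["typical", "peak", "variability"]:
        df[col + "_c"] = df[col] - df[col].mean()

    # Simultaneous entry of all three performance indices plus the
    # typical x variability interaction term.
    model = smf.ols(
        "win_pct ~ typical_c + peak_c + variability_c + typical_c:variability_c",
        data=df,
    ).fit()
    return model


# Example usage (with a real data set):
# results = moderated_regression(df)
# print(results.summary())        # coefficients and R-squared (variance explained)
```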

Implications

These null results point to a boundary condition in the relationship between performance variability and outcomes: whether the outcome is subject to evaluator attributional processes (e.g., raises, performance evaluations) or is more objective in nature. Although null, the present results question a sometimes implicit assumption that performance inconsistency is detrimental to organizational functioning.

Originality/Value

This is one of the first studies to examine outcomes of peak performance, typical performance, and performance variability at the team level. Additionally, most studies examining the outcomes of such performance use subjective outcomes such as performance ratings, whereas this study provides one of the first examinations using objective outcomes such as bowl game payout.


Notes

  1. The core results did not differ when offensive and defensive performance were both included in the regression analyses.

  2. We acknowledge the potential non-independence of the data arising from differences across conferences and seasons. However, the observed group effects were small: intraclass correlation coefficients for record, bowl game payout, and attendance were .00, .00, and .15, respectively, when teams were nested within conference, and .00, .00, and .05, respectively, when teams were nested within season. Moreover, regardless of whether teams were nested within conference or season, the results of the hierarchical linear models did not differ substantively from the ordinary least squares regression results (a sketch of this type of intraclass correlation check appears after these notes). These analyses are available from the primary author upon request.

  3. The direction, pattern, and magnitude of the regression results did not differ when variables without controls for opponent quality were used.
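For readers unfamiliar with the nesting check described in Note 2, the sketch below shows one common way to estimate an ICC(1) from an intercept-only random-effects model, here using statsmodels MixedLM; the DataFrame and the column names (record, conference) are assumptions for illustration, not the authors' code.

```python
# Minimal sketch: ICC(1) from a null (intercept-only) random-intercept model,
# assuming a pandas DataFrame `df` with hypothetical columns
# `record` (the outcome) and `conference` (the grouping factor).
import pandas as pd
import statsmodels.formula.api as smf


def icc_from_null_model(df: pd.DataFrame, outcome: str, group: str) -> float:
    # Fit an intercept-only model with a random intercept for each group.
    model = smf.mixedlm(f"{outcome} ~ 1", data=df, groups=df[group]).fit()
    between_var = float(model.cov_re.iloc[0, 0])  # between-group variance
    within_var = float(model.scale)               # residual (within-group) variance
    return between_var / (between_var + within_var)


# Example usage (with a real data set):
# print(icc_from_null_model(df, outcome="record", group="conference"))
```

An ICC near zero, as reported in Note 2 for record and bowl game payout, indicates that little outcome variance lies between conferences or seasons, which is why the single-level regression results held up.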


Author information

Corresponding author

Correspondence to Brian J. Hoffman.

Additional information

A version of this manuscript was Benjamin L. Overstreet's master's thesis at the University of Georgia.

Cite this article

LoPilato, A.C., Hoffman, B.J. & Overstreet, B.L. Outcomes of Peak, Typical, and Variability in Performance of College Football Teams. J Bus Psychol 29, 221–233 (2014). https://doi.org/10.1007/s10869-013-9336-3
