Science and Engineering Ethics, Volume 24, Issue 2, pp 727–754

Continuous Evaluation in Ethics Education: A Case Study

  • Tristan McIntosh
  • Cory Higgs
  • Michael Mumford
  • Shane Connelly
  • James DuBois
Original Paper


There is a great need for systematic evaluation of ethics training programs, yet those tasked with developing such programs may be quick to dismiss the value of training evaluation for continuous process improvement. In the present effort, we use a case study approach to delineate how formative and summative evaluation measures can be leveraged to create a high-quality ethics education program. With regard to formative evaluation, trainee reactions, qualitative data from trainee comments, and empirical findings can ensure that the training program operates smoothly. With regard to summative evaluation, measures examining trainee cognition, behavior, and organization-level results indicate how much trainees have changed as a result of taking the ethics training. The implications of effective training program evaluation are discussed.


Keywords: Evaluation · Ethics training · Ethical decision making · Process improvement



We would like to thank Tyler Mulhearn, Logan Steele, Megan Turner, and Alison Antes for their contributions to the present effort.



Copyright information

© Springer Science+Business Media Dordrecht 2017

Authors and Affiliations

Tristan McIntosh¹ · Cory Higgs¹ · Michael Mumford¹ · Shane Connelly¹ · James DuBois²

  1. Department of Psychology, Center for Applied Social Research, The University of Oklahoma, Norman, USA
  2. Washington University School of Medicine in St. Louis, St. Louis, USA
