Evaluating Peer Review

  • Lee Sechrest
  • Abram Rosenblatt

Abstract

To be effective, any quality assurance program in psychology must be based on systematic research designed to inform both the development of the program and the assessment of its overall effectiveness. Such programs must be built on systematic data collection, experimental research, and program evaluation, and once in place they must be judged by these same methods. Although it is debatable, and perhaps even unlikely, that the quality of mental health care can ever be assured (enhanced may be the preferable term), any given program will make major progress toward assuring quality of care only if it is based on high-quality scientific information.

Keywords

Quality Assurance Program, Peer Review, Psychodynamic Psychotherapy, Professional Psychology, Peer Review System

Copyright information

© Springer Science+Business Media New York 1988

Authors and Affiliations

  • Lee Sechrest (1)
  • Abram Rosenblatt (1)

  1. Department of Psychology, University of Arizona, Tucson, USA
