Some Considerations in Evaluating School Consultation Programs
Evaluation of social service programs is difficult under the best of circumstances. Program variation, lack of control over the context in which evaluation is conducted, and the difficulty of defining a single set of objectives and criteria applicable across programs all pose special challenges. An input-process-output model often proves inadequate because its components are hard to standardize. In an effort to move beyond reliance on case studies, evaluators are urged to employ experimental and quasi-experimental designs.1 Of course, the values that govern service delivery have generally militated against the random assignment of clients to groups from which services are withheld, making true experimental designs virtually out of the question. The use of quasi-experimental designs, with "comparison groups" rather than true control groups, provides a partial answer to this dilemma, but one far from satisfactory.
Keywords: Juvenile Delinquency · School Mental Health · Mental Health Consultation · Social Service Program · Benefit Category
- 1. Wholey, Joseph S., John W. Scanlon, Hugh G. Duffy, James S. Fukumoto, and Leona M. Vogt. Federal evaluation policy. Washington, D.C.: Urban Institute, 1970.
- 3. Haylett, Clarice H., and L. Rapport. "Mental health consultation." In Handbook of community psychiatry and community mental health. New York: Grune and Stratton, 1964.
- 4. Readers are referred to Donald T. Campbell and Julian C. Stanley, Experimental and quasi-experimental designs for research and teaching. Chicago: Rand McNally, 1963, for a systematic discussion of such designs.
- 5. Montague, Ernest K., and Elaine N. Taylor. Preliminary handbook on evaluating mental health indirect service programs in schools. Alexandria, Va.: Human Resources Research Office (HumRRO), 1971. (HumRRO Technical Report 71–18).