Recent United States Experiences in Evaluation Research with Implications for Latin America

  • Thomas D. Cook
  • Emile G. McAnany
  • Robinson G. Hollister


This chapter is intended to define and illustrate “evaluation” and to describe some lessons that have been learned over the last ten years by persons conducting evaluation research in the United States. Our hope is that these lessons will prove useful for those who commission or conduct evaluations of nutrition-related projects in lesser developed countries, especially Latin America. The lessons are relevant to:
  a. Deciding which projects are or are not worth evaluating;

  b. Determining who should ask the evaluation questions;

  c. Deciding who should conduct the evaluations;

  d. Determining whether random assignment to treatments is possible;

  e. Where random assignment is not possible, ascertaining which quasi-experimental or nonexperimental designs can be implemented;

  f. Deciding upon the measures of project or program impact;

  g. Measuring the extent to which a promised treatment has actually been delivered;

  h. Ascertaining the extent to which the findings from evaluating a single project or program can be generalized to other settings, times, service providers, and service recipients; and

  i. Determining the relative emphasis to be given to “summative” and “formative” goals.








Copyright information

© Plenum Press, New York 1979

Authors and Affiliations

  • Thomas D. Cook (1)
  • Emile G. McAnany (2)
  • Robinson G. Hollister (3)

  1. Northwestern University, Evanston, USA
  2. Stanford University, Palo Alto, USA
  3. Swarthmore College, Swarthmore, USA
