
Outcome-Based Evaluation Data Collection

  • Robert L. Schalock

Overview

Believe me, there is no lack of data floating around most education and social programs! The problem is that much of it is either not usable for meaningful outcome-based evaluation or not retrievable. An unfortunate truism is that many programs are data rich but information poor. Part of the reason for this situation is that program administrators are bombarded with requests for data from funding, licensing, or accrediting bodies that frequently ask for different data sets; but another part is that most programs' data systems have evolved over time, with little forethought given to developing an ongoing data-based management system that can provide data for multiple purposes, including outcome-based evaluation. I hope that this chapter will help that situation, but we also need to realize that data collection is neither easy nor inexpensive. Thus, our sixth guiding principle:

Guiding Principle 6: Choose your data sets very carefully, because data collection is neither easy nor inexpensive.

Choosing one's data sets very carefully is done partly by asking clear questions, but it also involves simplifying your data needs. My strong bias and recommendation is that a small number of reliable and valid core data sets, used consistently, is better than a larger number used haphazardly. As we all know, there is an important distinction between nice-to-know and need-to-know data. Thus, in approaching the topic of OBE data collection, you might want to keep the following data-simplification techniques in mind (Johnston, 1987); a brief illustrative sketch follows the list:
  • Measure at the level of functional impairment and its amelioration, rather than at the level of the pathology or limitation.

  • Group many separate measures into a smaller number.

  • Group many categorical items (such as diagnoses) into a smaller number of groups according to an index of similarity (e.g., all persons with mental retardation).

  • Convert items with market prices to dollar values (e.g., wages) rather than leaving them in terms such as number of treatments.

  • Reduce several nonmonetary outcome measures into a single composite index of effectiveness using judgmental techniques (e.g., enhanced quality of life).
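To make these techniques concrete, the following minimal sketch (in Python) applies three of them to a single hypothetical recipient record: collapsing diagnoses into broader groups, converting hours worked into dollar earnings at an assumed wage, and combining several nonmonetary ratings into one composite quality-of-life index using judgmental weights. The field names, wage figure, and weights are illustrative assumptions, not values from the text.

```python
# Illustrative sketch of three data-simplification techniques.
# All field names, the wage rate, and the weights are assumptions.

DIAGNOSIS_GROUPS = {
    "mild mental retardation": "mental retardation",
    "moderate mental retardation": "mental retardation",
    "cerebral palsy": "developmental disability",
}

HOURLY_WAGE = 4.25  # assumed market price used to convert hours worked to dollars

# Judgmental weights for a composite quality-of-life index (assumed values).
QOL_WEIGHTS = {"independence": 0.4, "productivity": 0.3, "community_integration": 0.3}


def simplify(record: dict) -> dict:
    """Collapse one recipient's raw data into a few need-to-know core measures."""
    grouped_diagnosis = DIAGNOSIS_GROUPS.get(record["diagnosis"], "other")
    monthly_earnings = record["hours_worked_per_month"] * HOURLY_WAGE
    # Weighted composite of 0-10 ratings; weights sum to 1.0, so the index stays on a 0-10 scale.
    qol_index = sum(QOL_WEIGHTS[k] * record[k] for k in QOL_WEIGHTS)
    return {
        "diagnosis_group": grouped_diagnosis,
        "monthly_earnings": monthly_earnings,
        "qol_index": round(qol_index, 1),
    }


if __name__ == "__main__":
    example = {
        "diagnosis": "mild mental retardation",
        "hours_worked_per_month": 80,
        "independence": 7,
        "productivity": 6,
        "community_integration": 8,
    }
    print(simplify(example))
```

The point is not the particular numbers but the reduction: many raw items collapse into a handful of need-to-know measures that can be stored, retrieved, and analyzed consistently.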

The process of data collection cannot be separated from the other processes involved in outcome-based evaluation, including knowing how to store and retrieve data and which evaluation designs and data analyses to use. For example, whereas Chapter 6 discusses data collection, Chapter 7 presents the actual formats by which data are collected; the formats appear in Chapter 7 because of the close interface between data collection and data management. Chapter 8 then discusses how those data can be analyzed, based on the evaluation design you have used. The discerning reader will note that, logically, an evaluation design precedes data collection, because the design selected is the one that will best test your hypothesis or answer your evaluation question. However, I feel that evaluation designs fit best in Chapter 8 because of the close connection between evaluation designs and data analysis.

The interrelationship among these processes is shown clearly in Figure 6.1. Note in the figure that the process begins with the questions asked and ends with outcome-based evaluation analyses. I will refer to this figure repeatedly throughout the remainder of the text.

This chapter contains six sections dealing with the interrogatories of data collection. Four sections deal with collecting information about the four core data sets (recipient characteristics, core-service functions, cost estimates, and person-referenced outcomes). The remaining two sections discuss a conceptual approach to measurement and a number of guidelines regarding data collection.
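
To give the four core data sets a concrete shape, the following minimal sketch shows one way a program might organize a single recipient's record around them; the field names and example values are assumptions for illustration, not the book's own recording scheme.

```python
# Illustrative only: one way to organize a recipient's record around the
# four core data sets named above. Field names are assumptions.
from dataclasses import dataclass, field


@dataclass
class RecipientRecord:
    # 1. Recipient characteristics
    recipient_id: str
    age: int
    diagnosis_group: str
    adaptive_skill_level: str

    # 2. Core-service functions received
    services: list[str] = field(default_factory=list)

    # 3. Cost estimates (dollars per person per year)
    average_cost: float = 0.0

    # 4. Person-referenced outcomes
    role_status: str = ""      # e.g., "employed", "independent living"
    qol_index: float = 0.0     # composite quality-of-life index


record = RecipientRecord(
    recipient_id="A-101",
    age=34,
    diagnosis_group="mental retardation",
    adaptive_skill_level="moderate support",
    services=["supported employment", "case management"],
    average_cost=12500.0,
    role_status="employed",
    qol_index=7.1,
)
print(record)
```

Keeping the record this small reflects Guiding Principle 6: a handful of reliable, need-to-know fields rather than an open-ended inventory.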

Keywords

Mental Retardation; Cost Estimate; Average Cost; Role Status; Adaptive Skill


Additional Readings

  1. Burstein, L., Freeman, H., Sirotnik, K., Delandshere, G., & Hollis, M. (1985). Data collection: The Achilles' heel of evaluation research. Sociological Methods and Research, 14, 65–80.
  2. Kaplan, R. M. (1990). Behavior as the central outcome in health care. American Psychologist, 45(11), 1211–1220.
  3. Killough, L. N., & Leininger, W. E. (1987). Cost accounting: Concepts and techniques for management (2nd ed.). New York: West.
  4. Martin, P., & Bateson, P. (1993). Measuring behavior: An introductory guide (2nd ed.). Cambridge, UK: Cambridge University Press.
  5. McGrew, K. S., & Bruininks, R. H. (1992). A multidimensional approach to the measurement of community adjustment. In M. Hayden & B. Avery (Eds.), Community living for persons with mental retardation and related conditions (pp. 124–142). Baltimore: Brookes.

Copyright information

© Springer Science+Business Media New York 1995

Authors and Affiliations

  • Robert L. Schalock
  1. Hastings College, Hastings, USA
