Politics in Program Evaluation

  • Kandyce Fernandez
  • Jenna Gonzales
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-31816-5_2517-1

Keywords

Evaluation Result · Program Evaluation · Evaluation Report · Stakeholder Engagement · Program Component

Introduction

Over the past 50 years, scholars have proposed various definitions of program evaluation, primarily from the standpoint of assessing the effects of social programs. Some definitions include the following:
  • Evaluation determines the merit or worth of a program, policy, or entity (Scriven 1967).

  • Evaluation determines what is needed in societal programs through the systematic collection and use of data (Mertens and Wilson 2012).

  • Evaluation objectively examines the outcomes of programs to determine whether program goals have been met (Weiss 1973).

Taken together, these definitions frame program evaluation as the systematic study of social programs in the public sector. Rooted firmly in social science research methodology, program evaluation determines the merit or worth of program components or the effectiveness of program processes as they relate to program goals intended to address societal ills. Evaluation as a formal, practical discipline gained prominence in the 1960s, when national programs related to the “Great Society” and the “War on Poverty” were rigorously evaluated to determine the return on investment (Datta 2011). Early evaluation practice emphasized the use of information: its ability to influence a decision and the likelihood that decision-makers would act on evaluators’ recommendations (Rossi et al. 2003). This concern with influencing decisions and promoting action led early professional evaluators to concentrate on research methodology and empirical data collection.

While the field continued to improve evaluation from a methodological perspective, its attention shifted in the 1970s to include not only the uses of evaluation findings but also the politics of the evaluation process. The politics of evaluation refers to the interactions of stakeholders involved in approving, funding, and implementing public programs, stakeholders who hold different positions of power, influence, and authority in relation to a specific program (Palumbo 1987). As social programming expanded, drawing in policymakers, administrators, program managers, clientele, and other stakeholders, evaluators experienced political pressure from interest groups within the political process with a stake in program resources (Datta 2011). Stakeholder involvement therefore contributes to the political context of evaluating public programs.

Political Context of Evaluation in the Public Sector

Whether programs are carried out by public agencies or nonprofits, they often originate from political processes and decisions within the context of public policymaking (Mohan and Sullivan 2006). From policy approval to program implementation and the subsequent assessment of policy or program components, evaluation is relied upon throughout the policymaking process. In 1973, Carol Weiss wrote Where Politics and Evaluation Research Meet (Weiss 1973), arguing for the importance of understanding the high-stakes environment in which programs operate. Weiss was one of the first professional evaluators to underscore the significance of being aware of stakeholder interests in evaluations (Azzam and Levine 2015; Datta 2011). Within a democracy, evaluation serves not only to provide governmental oversight and accountability in relation to policymaking (Chouinard 2013) but also to help determine program decisions, advance knowledge in society, and inform the public more broadly (Chelimsky 2006).

Stakeholders draw on the results of different evaluation approaches to inform, defend, support, or counter a proposed course of action (Mohan and Sullivan 2006). Since Weiss (1973) first wrote about politics in evaluation, others have echoed her observation (Chelimsky 1987; Cronbach et al. 1980; Palumbo 1987) that politics shapes which programs undergo evaluation, who is asked to conduct the evaluation, which stakeholders participate in the evaluation process, and how evaluation results are used. The politics of evaluation often begins with the choice to select certain programs over others for evaluation.

Selection of Programs to be Evaluated

Before programs are assessed, choices are made as to which programs, or which specific parts of a program, will undergo an evaluation. These choices typically fall to agency heads, program staff, funders, elected officials and their staff, or some combination of stakeholders referred to as the “evaluation sponsor.” As a result of stakeholder involvement, programs may be selected for evaluation based on different criteria (Mohan and Sullivan 2006). These include understanding the impact of a program on clients, determining whether program goals were met, or assessing whether the program fits the needs of the population being served. Nevertheless, evaluations can be commissioned for a myriad of reasons beyond understanding program outcomes. Given the politics that can surround public and sometimes controversial programming, particular evaluations may be commissioned over others based on the likelihood of positive or beneficial results, the timeframe in which evaluation results are needed, and whether programs may benefit from the increased attention or resources that the evaluation process can bring (Mertens and Wilson 2012).

Weiss (1973) called attention to the demands on evaluators to deal with competing interests and loosely defined goals by using political tact to decide what to evaluate (Chouinard and Cousins 2015). Political tact refers to an evaluator’s efforts to determine why a program or agency is seeking an evaluation, to identify the goals for undertaking the evaluation, and to determine how the information will subsequently be used (Patton 2008). Understanding why a program is chosen for evaluation helps reveal other influences on the evaluation process. These may include the need to impress funders with credible information on the program, to provide evidence justifying an already determined course of action in the policy process, or even to delay a decision in the face of criticism (Rossi et al. 2003). While the choice of programs to be evaluated may be driven by any number of justifications put forth by the evaluation sponsor(s), several competing interests may equally influence the choice of who will conduct the evaluation.

Selection of Evaluators

Evaluation professionals or teams of professionals can be found within or outside the organization or agency sponsoring the evaluation. A primary distinction in selecting an evaluator is the perceived and actual difference between an internal and an external evaluator (Mertens and Wilson 2012). The individual(s) assigned or hired to conduct an evaluation can influence the credibility of the evaluation process and results depending on their relationship to the program (Mohan and Sullivan 2006). For example, an internal evaluator works for the organization or agency that houses the program being evaluated and may have some stake in the outcome of the evaluation. An external evaluator, on the other hand, conducts the evaluation without being an employee of the organization or agency where the program resides. When seeking an external evaluator, the organization, agency, or program manager may weigh the benefits of gaining an outside perspective against the costs in time and resources required for an outsider to understand the internal dynamics of a program. While an external evaluator is often relied upon to be objective and independent in the evaluation process, and therefore less likely to influence the evaluation results in favor of a certain outcome, they may be at a disadvantage in not knowing the inner workings of the program or agency. Internal evaluators may overcome perceptions of potential bias in their work by designing a rigorous, valid, and detailed evaluation plan that is closely followed and well documented (Rossi et al. 2003).

Qualifying as an evaluator requires expertise in program content, knowledge of evaluations of comparable programs, and research norms and methods appropriate to the program context (Chouinard and Cousins 2015; Datta 2011; Vanlandingham 2010). Evaluation of public programs often occurs in targeted problem areas such as education, crime, health care, or substance use. As a result, evaluators may be asked to conduct lengthy and substantively complex evaluations, or more limited evaluations of a single program requiring less complex research methodologies and less time (Rossi et al. 2003). The field relies on the Guiding Principles for Evaluators published by the American Evaluation Association to inform evaluation practices and ethics. The principles acknowledge essential practices of evaluation that include involving stakeholders, maintaining independence, identifying conflicts of interest, protecting confidential and sensitive information, ensuring quality, presenting balanced results, and preserving evidence of accepted evaluation methods. After an evaluator is selected, the evaluation design is considered in light of the best approach for the subject matter, the extent to which stakeholders are expected to be involved, and possible future uses of evaluation results.

Evaluation Approach and Stakeholder Engagement

Program evaluation takes place within a broader context of policymaking. With a growing interest in determining the impact of public programs, evaluators acknowledge that satisfying stakeholder needs and ensuring legislative use are both high priorities (Azzam and Levine 2015; Vanlandingham 2010). In the past, an emphasis on the methods used in evaluation studies took priority over the involvement of clients or other stakeholders. A methodological orientation allowed evaluators to focus on quantitative research design over other priorities in determining a specific approach to evaluation. However, as studies on evaluation utilization concluded that evaluation has minimal impact on legislative decision-making, other studies correlated stakeholder engagement with increased evaluation use (Vanlandingham 2010). As a result, evaluation of public programs shifted attention to democratic values to ensure that stakeholder voices are heard and evaluation reports are ultimately used to inform public programs (Chouinard and Cousins 2015; Datta 2011). The history of evaluation research shows that interactions with local partners have become increasingly important for understanding the complexities and uncertainties involved in collecting data as evidence to build a case for recommendations (Chouinard 2013; Datta 2011).

Originating from theoretical frameworks that emerged during the 1970s, responsive evaluation became a chosen method for evaluators interested in a more participatory approach (Chouinard 2013). By empowering managers, clients, and others to gain knowledge through participation, responsive evaluation extended the scope of evaluation to focus on understanding shortcomings of the program in terms of responsiveness to stakeholders. Using this framework, evaluators factor in as many stakeholder interests as possible, moving beyond earlier methods that considered only the motivations of policymakers or decision-makers (Vanlandingham 2010). A participatory approach allows for understanding of how the program or policy affects the individuals who have contact with program components. It also supports a more democratic perspective on evaluation, as individuals beyond program administrators are able to engage with the evaluation process from beginning to end.

Participatory evaluation became more popular in the 1980s and involved the coordination of trained evaluators and community partners to increase knowledge about program decision-making (Chouinard and Cousins 2015). Emphasizing partnerships between evaluators and stakeholders, participatory evaluation seeks to not only serve as an approach for reporting but also to address power imbalances that exist in society (Chouinard and Cousins 2015). By opening up communication channels between evaluators and stakeholders, information can be gleaned about pressing needs and methods that deliver timely results, all within an atmosphere of organizational learning (Chouinard 2013).

Participatory approaches have paved the way for empowerment and social justice approaches to evaluation, which also focus on stakeholder involvement (Mertens and Wilson 2012). An evaluation design with a social justice perspective tends to focus on viewpoints and participation from marginalized groups, while empowerment evaluations allow often unheard voices and perspectives to be included in the evaluation process and results. Through these approaches, evaluators often rely on mixed methodology, including both quantitative and qualitative data, to inform and expand the uses of evaluation results. The decision to use approaches beyond a purely methodological one may be influenced by various stakeholders, with some consideration for the time the evaluation will take and how its results will be used.

Uses of Evaluation Reports and Research

While evaluation projects may be judged by their outcomes, it is more common that evaluation results are judged by their utility, or how useful they are to decision-makers and stakeholders (Rossi et al. 2003). Organization and agency stakeholders use evaluation information in different ways. When evaluation results, reports, or data are used directly by stakeholders, they provide information supporting decisions related to a specific course of action or policy; this is considered the highest form of utility in evaluation research. However, there is a tendency to use positive evaluation results to support program initiatives while negative evaluation results are disregarded (Palumbo 1987). Where evaluation results are used to inform thinking on specific subjects or contexts, information is shared in a way that supports a certain perspective or preconceived understanding of a problem. This may also include the use of evaluation results to sensitize society to the frequency, prevalence, or specific dimensions of a social problem (Rossi et al. 2003). Because evaluation results can identify nuances of a problem or issue, within the context of politics and elected officials they may also be used to persuade others to defend or challenge the status quo of different social programs. Even when a report is read or accessed by only a few elected officials, it may receive considerable media attention depending on the subject matter, extending the impact of the evaluation results beyond the specifics of the program or policy (Rossi et al. 2003).

Utilization of evaluation results may also be influenced by the credibility of the evaluator and evaluation methods documented in a report. For example, external evaluators with strong social science research backgrounds may be perceived as objective, qualified, and credible. Therefore, the reports they produce may gain greater support and validation from others. Likewise, where sound research methods are relied upon, evaluation reports may be referred to and even replicated in other studies of similar contexts. Evaluation reports are therefore judged not only by their content and substance but also by the source of the research.

Conclusion

As public programs originate from the policy process, any assessment or evaluation of their outcomes, processes, impacts, successes, or failures will have an inherently political component involving various stakeholders. As professional evaluators continue to develop their skills and approaches to meet the public demand for information in a democracy, so, too, must they consider the influence of politics on evaluation design, implementation, and reporting. A strong balance between methodological rigor and practical use must remain foremost when conducting evaluations of public programs.

References

  1. Azzam T, Levine B (2015) Politics in evaluation: politically responsive evaluation in high stakes environments. Eval Program Plann 53:44–56
  2. Chelimsky E (1987) The politics of program evaluation. Society 25(1):24–32
  3. Chelimsky E (2006) The purposes of evaluation in a democratic society. In: Shaw I, Greene J, Mark M (eds) The SAGE handbook of evaluation. Sage, Thousand Oaks, pp 33–55
  4. Chouinard JA (2013) The case for participatory evaluation in an era of accountability. Am J Eval 34(2):237–253
  5. Chouinard JA, Cousins JB (2015) The journey from rhetoric to reality: participatory evaluation in a development context. Educ Assess Eval Account 27(1):5–39
  6. Cronbach LJ, Ambron SR, Dornbusch SM, Hess RD, Hornik RC, Phillips DC, … Weiner SS (1980) Toward reform of program evaluation. Jossey-Bass, San Francisco
  7. Datta L-e (2011) Politics and evaluation: more than methodology. Am J Eval 32(2):273–294
  8. Mertens DM, Wilson AT (2012) Program evaluation theory and practice: a comprehensive guide. Guilford Press, New York
  9. Mohan R, Sullivan K (2006) Managing the politics of evaluation to achieve impact. N Dir Eval 2006(112):7–23
  10. Palumbo DJ (1987) The politics of program evaluation. Sage, Newbury Park
  11. Patton MQ (2008) Utilization-focused evaluation. Sage, Thousand Oaks
  12. Rossi PH, Lipsey MW, Freeman HE (2003) Evaluation: a systematic approach. Sage, Thousand Oaks
  13. Scriven M (1967) The logic of evaluation. In: Tyler RW, Gagné RM, Scriven M (eds) Perspectives of curriculum evaluation (AERA monograph series – curriculum evaluation). Rand McNally, Chicago
  14. Vanlandingham GR (2010) Escaping the dusty shelf: legislative evaluation offices’ efforts to promote utilization. Am J Eval 1–14
  15. Weiss CH (1973) Where politics and evaluation research meet. Evaluation 1(3):37–45

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. The University of Texas at San Antonio, San Antonio, USA