We briefly elaborate on and give examples of each of the seven activities identified through the SPR Task Force's progressive process. In Table 1, column 1, we identify an activity of prevention researchers and practitioners. In column 2, we identify both the public good likely to result from the activity and the ethical dilemmas that could arise. In column 3, we identify ethical principles from multiple fields that can be applied to guide ethical behavior.
Core ethical principles are interdependent and transcend specific disciplines. They are at the root of collective understanding and ethical decision-making, and they are uncontroversial in their generality. By definition, ethical challenges are dilemmas or hard choices that call into question how best to balance ethical principles and moral values in the context of specific actions. For example, the APA (http://www.apa.org/ethics/code/) provides definitions for the following five core aspirational principles that are at the foundation of ethical guidelines for psychologists: beneficence and non-maleficence, fidelity and responsibility, integrity, justice, and respect for people's rights and dignity. Given the trans-disciplinary nature of prevention science, the principles emphasized in the ethical codes of several scientific and professional organizations are also informative (see Table 2).
Table 2 Discipline-based professional ethics resources

Activity 1. Consulting With Communities, Institutions, and Public Agencies Regarding Selection of Evidence-Based Preventive Interventions for Implementation
Researchers who have a significant role in the development and evaluation of specific programs are frequently involved in consulting on the selection of EBPs for implementation and scale-up. Such dual roles are often complementary and can help to ensure implementation fidelity, as well as access to program resources, technical assistance, and evaluation tools. However, dual roles can also lead to conflicts of interest and ethical dilemmas when the scientist directly benefits from the implementation or scale-up (e.g., being compensated financially or in terms of enhanced status or reputation). Conflicts of interest can lead to overstating effects, minimizing limitations or negative findings of existing evaluations of the researcher's own program, understating the impact of alternative programs, or failing to identify other programs that may meet the needs of the communities, institutions, or agencies. Failure to clearly state conflicts of interest, in turn, undermines the autonomy of the stakeholders in decision-making. To make fully informed and free choices of best practices for their communities, stakeholders or their representatives need accurate and complete information about available choices, the adequacy of the evidence supporting a program, and initial and continuing costs before adopting or scaling up a specific preventive intervention.
Example
A prevention scientist is on an advisory board to a Governor's council that has been charged with improving the quality of state services for children and families that support mental health and prevent substance use. The prevention scientist is also the CEO of a non-profit agency that disseminates and provides technical assistance on a specific family-centered program that she developed, evaluated, and trademarked. Based on her long-term relationship with the board members, her familiarity with the intervention, and prior evidence of its effectiveness, she would like to enter into a contractual arrangement to scale up her program across the state. Despite the strong evidence in favor of her program, she is concerned that there is a potential conflict of interest, as she does not have the knowledge base to present other potential EBPs in detail.
In addition to the obvious ethical tension, the relationship between the program developer and the community can also present a powerful opportunity for community benefit. Questions for consideration here relate to the researcher's prior relationship to the board, conflict of interest, the power or authority derived from perceived expertise, and respect for the autonomy (free and informed choice) of the community stakeholders. For example, what benefits will the consulting scientist obtain? What other programs address the same problems, and what is the scientific evidence for their efficacy? Can another scientist present a review of available programs that can address the board's needs? How do the costs, accessibility of resources, appropriateness, and likelihood of positive effects compare across programs? It is possible to reduce the ethical tension of this situation by declaring the conflict of interest, directing community stakeholders to alternative information sources that can inform decisions leading to major investments, and disclosing the potential pitfalls and benefits of an EBP.
Activity 2. Forming Contractual or Collaborative Relationships With Communities or Public Agencies to Implement or Scale-Up EBPs
Prevention scientists in general and program developers in particular have invaluable expertise that can inform community stakeholders’ efforts to implement and scale-up EBPs. However, these relationships are potentially complex both in how they are defined initially and in how they change dynamically through the various stages of implementation of a chosen EBP.
Some collaborations are initiated by a community's request to solve a problem or address a need it has identified. In other cases, the relationships are initiated by scientists who are evaluating the scale-up or implementation of an evidence-based program with a track record of efficacy. In both cases, balancing community needs with sound methods for evaluation can be challenging. This challenge is particularly acute, for example, for selective and indicated interventions in which communities, schools, or individuals are recruited because of their high-risk status. Often, study participants are recruited by appealing to their need to address a particular problem or risk, and a specific program or practice is represented as likely to be effective for addressing it. If the program requested or offered has already been demonstrated in rigorous research to produce positive benefits, as has been implied during recruitment, the preferred research design of randomizing participants into a treatment or no-treatment control group may be unethical because it denies the known benefits to one group. If the program requested or offered has not yet been demonstrated to be effective for reducing the problem, a recruitment strategy that implies that benefits will accrue from study participation appears unethical. The challenge is to tailor both the participant recruitment strategy and the research design to the situation. If little is known about the effects of the program with the targeted population, a randomized experiment with a no-treatment control group is the preferred design, but the recruitment strategy must accurately describe the current state of knowledge about the program's effects. If positive effects have already been clearly demonstrated with the targeted population, a design involving a no-treatment control group would appear unnecessary and potentially unethical, and alternative designs could be considered. These may involve, for example, assessing different levels of the natural course of implementation, comparing different implementation strategies (e.g., standard training versus standard training plus ongoing coaching and feedback), or comparing standard and community-adapted versions of the same intervention.
In fact, ethical challenges can be encountered at any step or phase of the often lengthy course of implementation of EBPs. These include the early phases of consultation, relationship building, and program selection, as noted in the previous section, as well as later stages that focus on adapting, sustaining, and evaluating the impact of the intervention. Hence, not all concerns are likely to be recognized and solved at the beginning of the process; rather, ethical challenges need to be anticipated, and decisions that are made may need to be revisited as the implementation process unfolds. Again, the prevention scientist who is both program developer and implementer may encounter conflicts of interest if personal benefits accrue in the scale-up process. Conflicts can also arise when scientists are acting on behalf of both funders and communities. In the course of the Task Force's work, several potential ethical issues were raised in relation to ongoing consultation for the implementation of a specific program; we distill these into two examples.
Example 1
A prevention scientist is employed as a consultant by a state agency to assess the effectiveness of an intervention targeting mental illness and substance use in several communities. Findings are mixed: some agencies implemented the intervention with considerable fidelity, whereas others did not. One agency with particularly poor implementation and results acknowledges the problems but asks the consultant not to report the findings in the final report or in publications, because of concerns that the agency will lose future funding and support from the state. The consultant is challenged: he does not wish to harm the agency with poor implementation outcomes, but he endorses the conventional standards of integrity in reporting scientific results. The consultant is also concerned that the poor outcomes for one agency reflect poorly on the EBP he developed.
Discussions that articulate and anticipate partnership agreements at the outset of implementation partnerships may be vital to moving implementation efforts forward and avoiding costly impasses. Questions to consider include the following: How are the identity and confidentiality of specific communities protected? What are the limits of confidentiality? How will agreements be reached about the reporting of unexpected or negative findings? What lessons can be learned about implementation readiness or successes? Would other communities benefit from the knowledge gained by considering the successes and failures of implementation efforts? What procedures can be established to address conflicts between scientists and participating communities if they occur in the course of implementation?
Example 2
A partnership has been formed between a prevention scientist and a national non-profit agency to implement and scale up an EBP. Under the agreement, the prevention scientist trains agency staff to deliver the EBP to their clients. Funding is available to support the initial stages of the collaboration, including consulting on EBP adaptations, evaluating implementation outcomes, and training staff to fidelity. Results from the evaluation are positive, but the agency is struggling with staff turnover and leadership changes. The agency requests additional support from the prevention scientist to sustain the EBP and implementation fidelity, but it does not have adequate funds to support these consultations. The prevention scientist has mixed feelings about continued involvement. On the one hand, she sees the potential value of her continued involvement with the community for sustaining program uptake and fidelity; on the other hand, she is unable to continue these efforts without financial support for her work.
Questions for consideration in this scenario relate to the continued involvement of the scientist after an implementation trial. What are the best practices for implementation consultants with respect to enabling agencies to acquire the support they need to sustain positive change, without incurring unrealistic costs? What is the obligation of the implementation team to support the non-profit's continued efforts to train its staff and deliver the intervention? Could best practices involve proactive discussion and planning for sustainment of the program after support for the program developer ends? What is the implied obligation of the prevention scientist(s) to assist in the sustainment of the program? What is the ongoing obligation in the absence of financial remuneration? Should the implementation team expect payment for their continued involvement?
Activity 3. Implementation of EBPs That Involve Youth, Disadvantaged Groups, Minorities, Immigrants, and Aboriginal Peoples
A core mission of prevention science is to apply the knowledge gained to the benefit of vulnerable groups, to promote health and well-being, and to reduce disparities. Historically, there are many examples in which disadvantaged groups have either been excluded from research (and thus from its potential benefits) or been included but neither adequately informed about the risks to them nor compensated for their participation. Implementation of EBPs in partnerships between prevention scientists and organizations that speak for or provide services to disenfranchised or vulnerable populations (e.g., groups who are vulnerable due to age, poverty, disabilities, sexual orientation, or minority, immigrant, or aboriginal status) poses particular challenges. When prevention scientists consult with agencies that serve vulnerable populations, there is a heightened need for anticipating and articulating concerns (e.g., data ownership, reporting requirements) and benefits, and for adhering to ethical practices when forming collaborations with disadvantaged groups.
Example 3
A program developer is working closely with an aboriginal community advisory board to obtain funding to implement a family-centered intervention that has been extensively evaluated with non-aboriginal populations. After receiving funding, the advisory board revises and adapts much of the original program without directly consulting the program developer. Some core features of the original EBP are removed because of incongruence with the culture and values of the aboriginal community. After 2 years, the community reports high engagement rates using the revised program. The community advisory group again seeks support from the program developer in order to apply for ongoing funding. The community advisors are reluctant to agree to an evaluation of the adapted program because they believe this would attenuate the trusting relationship between families and the health agency serving the community. They fear that collecting data could violate the privacy of needy families and reduce the reach of the program. The program developer is concerned about the lack of evidence of effectiveness for the adapted program and the community's reliance on past evidence for the original program to support its claims of effectiveness. He is conflicted about the most ethical course of action in reapplying for funding.
There is a pressing need for greater expertise, in general, on how EBPs can best be adapted to fit diverse communities. The example reveals several issues. Who are the representatives of the targeted group (parents, elders, policy decision-makers)? Who speaks for the members of vulnerable communities? How can openness, transparency, reliability, accountability, and reciprocity be assured between prevention scientists and members of vulnerable communities? Can the positive or negative effects of the evaluation outcomes be anticipated? Whose responsibility is it to develop implementation and evaluation procedures that do not conflict with the core values of some cultural communities? What mechanisms can be put in place to manage conflicts in ongoing partnerships?
Activity 4. Balancing Implementation Fidelity and Adaptations With Community Needs and Resources
In the course of scientist-driven and funded implementation efforts, resources are often available to assess and monitor implementation to ensure that providers deliver interventions with adequate adherence and competence. Yet, many factors can affect fidelity in the implementation of interventions. Fidelity itself has many components (e.g., adherence, quality, dosage, participant engagement, differentiation from similar interventions, adaptations) that can each affect the outcomes of the intervention (Berkel et al. 2011; Hansen 2014). In the implementation and scale-up of interventions, prevention scientists may be particularly aware of the difficulties in maintaining the fidelity of the intervention needed to achieve desired outcomes. The effectiveness of evidence-based interventions typically depends on the quality of implementation and the delivery of core program components (which are often unspecified or unknown). Fidelity and user adaptations were once seen as opposing ends of effective implementation. However, acknowledging the limits of generalizability of a program developed in one community for use in another is also important (Chambers et al. 2013). Some adaptations, particularly those that do not detract from the delivery of core program elements, can enhance user buy-in and the local or cultural relevance of the intervention (Van der Kreeft et al. 2014; Zayas et al. 2012). However, communities may also incur opportunity costs when adaptations made due to cultural differences or lack of resources result in poor intervention effects. More prevention research is needed to examine the effects of cultural adaptation while controlling for other differences in the duration or intensity of the intervention.
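To make the component view of fidelity monitoring concrete, the following minimal sketch (in Python) shows how session-level observer ratings of the components named above might be summarized and flagged for corrective feedback. The component names, ratings, and the 0.80 acceptability threshold are illustrative assumptions for this sketch, not part of any specific EBP's protocol.

```python
# Minimal sketch: summarizing hypothetical session-level fidelity ratings.
# Component names, ratings, and the threshold are illustrative assumptions.

from statistics import mean

# Observer ratings (0-1) for each delivered session.
sessions = [
    {"adherence": 0.90, "quality": 0.80, "dosage": 1.00, "engagement": 0.70},
    {"adherence": 0.60, "quality": 0.70, "dosage": 0.50, "engagement": 0.65},
]

def fidelity_summary(sessions, threshold=0.80):
    """Average each fidelity component across sessions and flag
    components that fall below an (assumed) acceptability threshold."""
    components = sessions[0].keys()
    means = {c: mean(s[c] for s in sessions) for c in components}
    flags = [c for c, m in means.items() if m < threshold]
    return means, flags

means, flags = fidelity_summary(sessions)
print(means)  # per-component averages across sessions
print(flags)  # components that might warrant coaching or feedback
```

Even a simple summary of this kind presumes that someone is responsible for collecting the ratings and acting on the flags, which is precisely the resource and responsibility question raised below.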
Ethical issues can arise from the tension between the need for fidelity in implementation quality and real-world practice. For example, communities may not have the resources to scale up an intervention with fidelity, and the impact of the community's efforts may be negligible at best, or iatrogenic at worst, because of low implementation quality. Adaptations may be motivated by the implementer's desire to more closely align the intervention with consumers' needs and preferences or to increase its feasibility and thus the likelihood of its sustainability. However, local adaptations are rarely empirically informed, are almost never documented or assessed, and may be made without consultation with the developers. This raises questions about the obligations prevention scientists may have to train implementers in the core features of a program, and about implementers' latitude in adapting those features. Does the scientist have a responsibility to monitor local adaptations to ensure that the intervention consumers receive includes its core components? What are the limits of the scientist's responsibility?
Example 4
A prevention scientist is the director of a non-profit agency that is the purveyor of a specific family-centered intervention. The director of a community-based organization located in a poor, inner-city community approaches the prevention scientist about contracting with the purveyor agency to implement the intervention. The director explains that this particular intervention fits the needs and preferences of the community very well and that the providers employed at the organization are highly skilled and motivated. However, the organization plans to implement a shortened version of the program to increase the number of families who can receive it. The community organization's resources are very limited, so it cannot afford to monitor implementation or collect any implementation-related data. To support the community organization's positive intentions to promote the EBP, the prevention scientist considers consulting with the organization despite the lack of evidence for the shortened program and the lack of fidelity monitoring.
The example raises concerns related to the ethical principles of respect for the autonomy of stakeholders, accessibility of the program to disenfranchised populations, and beneficence versus the prevention of harm. The tension between implementation rigor, adaptations, and the data collection capacity of community-based organizations introduces several questions with ethical implications: What is the evidence for the core features of the EBP? Does the prevention scientist, who is also the program developer and purveyor of the intervention, have a responsibility to ensure that community organizations monitor the implementation so that it is delivered as intended, to enhance benefits and reduce negative or iatrogenic effects? Does the scientist have a responsibility to include feasible, efficient methods to help the organization monitor or evaluate an implementation before making programs available for dissemination? If the prevention scientist forms a contract with the community organization to deliver the intervention, does the prevention scientist or the community organization have an obligation to inform consumers that they may or may not receive the active intervention components? Should the prevention scientist make the intervention available only if the community organization agrees to implement it on a smaller scale and reallocate resources for a more robust assessment of implementation, even if this approach means that fewer families will have an opportunity to participate?
Activity 5. Linking or Accessing Publicly Available Data in the Absence of Consent
Increasingly, administrative data (e.g., medical, hospital, court, or education records; vital statistics) are being used in prevention science, particularly in evaluating and monitoring changes in public welfare that could be attributed to the scale-up of an EBP. Administrative data are typically collected as part of government documentation and service delivery in sectors such as education, health, welfare, justice, and labor. The quality of administrative data has historically been poor, but it has improved with advances in technology and demands for use. Examples include birth records, death records, education records, child protective service files, juvenile justice files, divorce records, hospital data, medical claims, and tax records. Such data can be a valuable source of information about individuals and their contexts. The data can also offer unparalleled advantages for including entire populations, often with little missing data, high accuracy, low bias and costs, and low participant burden. Policies such as the Health Insurance Portability and Accountability Act (HIPAA) and the Family Educational Rights and Privacy Act (FERPA) protect individuals to some extent, but they may not address the use of administrative data for implementation research despite the potential public benefits. Often, researchers must obtain a waiver of consent and a HIPAA waiver of authorization to gain access to the required data for surveillance or evaluation purposes.
Secondary analyses of existing administrative or research data can also be used to assess the outcomes of preventive interventions or to corroborate or critique published findings. Inclusion of administrative data can also extend the questions that can be answered from existing longitudinal data and evaluations. For example, administrative data can inform prevention scientists about the long-term outcomes of interventions and extend the value of longitudinal data (e.g., linking high school performance data to distinguish the youth most likely to benefit from post-secondary promotion programs). The use of administrative data can also limit the burdens of research on participants, be highly cost-effective, and improve generalizability to large, representative populations.
When the public health benefits are clear, ethical concerns related to secondary analyses of de-identified or limited data sets (obtained and protected through agency-established processes) are often minimal. However, some benefits accrue only when secondary analyses are linked to individuals' identifiers (for example, when public data are linked to existing longitudinal data sets). When consent was given for the original purposes of data collection but not for the use of the linked data, the potential benefits of the research must be balanced against respect for individuals' autonomy and self-determination. Privacy laws and data security standards that govern the use of existing data also need careful consideration. For some data, gaining individuals' consent may be possible. However, obtaining active consent can also be impractical or even harmful, for example, if the population is widely dispersed or deceased, when the resources and manpower to contact the individuals are costly, or when confidentiality or identity as a research participant would be compromised in the process of re-consenting individuals.
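One common way institutions attempt to reconcile linkage with privacy is to have a trusted "honest broker" link records through coded identifiers and release only a de-identified, merged file to the research team. The sketch below illustrates that idea with a keyed hash; the field names, key handling, and exact-match linkage rule are simplified assumptions for illustration, not a prescribed standard or any specific agency's procedure.

```python
# Hypothetical sketch of privacy-preserving record linkage. Assumes a
# trusted honest broker holds the secret key, performs the match, and
# releases only the de-identified linked file to researchers.

import hashlib
import hmac

SECRET_KEY = b"held-by-honest-broker-only"  # never shared with researchers

def pseudonym(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so the same person
    receives the same code in both data sets without exposing identity."""
    normalized = identifier.strip().lower()
    return hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# The broker applies the same transformation to both files...
trial_record = {"pid": pseudonym("jane q. participant 1999-01-02"),
                "assigned_group": "intervention"}
admin_record = {"pid": pseudonym("jane q. participant 1999-01-02"),
                "graduated": True}

# ...links on the pseudonym, then strips all remaining identifiers
# before releasing the merged record for analysis.
if trial_record["pid"] == admin_record["pid"]:
    linked = {"assigned_group": trial_record["assigned_group"],
              "graduated": admin_record["graduated"]}
    print(linked)
```

In practice, real linkage must also handle name variants and recording errors (often with probabilistic matching), and the broker, not the research team, retains the key and performs the match; the ethical questions below remain even when such technical safeguards are in place.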
Example 5
A prevention scientist who randomized children in a community and implemented an early childhood prevention program found the predicted short-term benefits for the children and families. Twelve years later, the prevention scientist would like to see whether the short-term benefits extend to long-term outcomes that are meaningful to the participants and to the community that supported the prevention study. Public data are available for the individual participants in the intervention and control groups (e.g., high school grades, achievement scores, and graduation dates). Recently, government employees agreed to link the earlier consented data with publicly available high school graduation records on file at the state level. Families consented to participate in the trial and to provide data for the target child up to age 3, but accessing publicly available data was not mentioned in the consent forms.
Several questions for discussion can be considered to illuminate potential conflicts between the benefits of research and the privacy of individuals in relation to the needed administrative data linkages. Policies and procedures for linking publicly available data with consented data may vary across universities, state-level government agencies, and national settings. While most existing guidelines are silent on this issue, the Canadian Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans (2014, p. 64) explicitly suggests considerations relevant to the use of administrative data. Is identifiable information essential to the research? What are the benefits of the information to public health and well-being, or to understanding the benefits or effects of the proposed linkages? Is the use of identifiable information without the participants' consent unlikely to adversely affect the welfare of the individuals to whom the information relates? How will the planned use of the data ensure individuals' anonymity? What measures protect the privacy of individuals and safeguard the identifiable information? Can institutions that safeguard administrative data provide de-identified data linked to existing data sets? Can the researchers comply with any known preferences previously expressed by individuals about the use of their information? Did the individuals opt out of the use of their data for research purposes when it was given? Is it impossible or impractical to seek consent from the individuals to whom the information relates (for example, if they have moved or died)? Would identifying individuals as members of the data set in the course of re-consenting create harm or violate their confidentiality? Who owns the linked data file and is responsible for maintaining data integrity and compliance with privacy laws? Who is responsible for establishing, monitoring, implementing, and revising data use agreements among all parties involved? Who has access to the linked data files versus limited access to aggregate data? Have the researchers obtained other necessary permissions for secondary use of the information for research purposes?
Activity 6. Building Capacity to Implement and Scale-Up EBPs Through Commercialization
A key strategy for building the capacity to support the dissemination and implementation of EBPs is to trademark and commercialize the practice for public use. A clear advantage of commercialization is that it can provide revenue from use of the intervention, training, assessment, and support services, growing the capacity to scale up the intervention. Although research funding often supports program development and evaluation, it is less often available for the dissemination of public health or preventive interventions. Thus, commercialization can also generate the funding needed to market the resource to users, make training available, and enhance the capacity to monitor fidelity and outcomes.
At the same time, commercialization can create conflicts of interest for prevention scientists working with communities and organizations (Caulfield & Ogbogu 2015). When charges for intervention protocols, assessments, and support services exceed the budgets of schools or community agencies, access to programs by disadvantaged groups may be restricted. Restrictions due to intellectual property and copyright issues may further limit access to and use of EBPs in community settings with low resources. Commercialization of an intervention can also create marketing pressures, which can be seen in claims that overstate the scope of an intervention's findings or in incomplete disclosure of its effects compared with competing programs (Caulfield & Ogbogu 2015). Premature commercialization can erode public confidence if expectations are not realized. Replication research can also be limited when commercialization makes an intervention a proprietary product that others cannot access without cost and the consent of the owner. Scientific study of the preventive intervention can be restricted if the owner does not allow others to independently replicate the effects or to study program effects in comparative effectiveness trials.
Example 6
A community agency hired a prevention practitioner with extensive training and certification in a commercially available evidence-based preventive intervention for parents of children with behavior problems. Over five years, the practitioner trained most of the agency's supervisors to implement the intervention. Parent reports indicate some success in reducing child behavior problems. The agency is interested in expanding its services, but given its financial constraints, it would be cost-effective to expand the supervisors' role to training new providers and monitoring fidelity, rather than paying the costs of the program-certified practitioner. The prevention practitioner is consulted about the plan and advises the agency that it may be infringing on the program's copyright. She is conflicted about whether the program developer should be informed of the potential copyright violation and wonders whether the program as delivered would continue to meet the developer's certification standards.
Several questions arise about the developer's conflict of interest, the autonomy of the users, and social justice or the accessibility of the intervention across socio-economic groups. What are the limits of the users' obligation to pay for the most effective program? The user agency is on a very limited budget. Can the agency continue to use the program independently and train new providers, or do these actions infringe on copyright if the agency does not renew its site license annually, as the commercialized program requires? What are reasonable costs for renewing the license? How can the commercial value of an intervention be assessed if it is directed at users who have difficulty paying? If the developer used public funding to create and evaluate the program, is there an obligation to provide free access to the resources? What are the consequences of open access for promoting implementation fidelity? What role can the prevention scientist play in helping the agency to consider its options?
Activity 7. Facilitating Replication by Independent Research Groups
Due to the complexities of procedures, training, and assessment protocols, replication studies of EBPs with new providers or in diverse communities often involve the program developer in a variety of roles. Developers can enhance the rigor of a replication study by mapping precise research procedures, providing access to program resources, training to ensure implementation fidelity, and sharing data. Independent replication studies conducted by new research teams that do not involve the program developer can also add important information about the conditions under which an EBP will be effective, about what is needed to implement the program with fidelity in real-world settings, or about what can enhance the generalizability of the outcome effects shown in developer-driven evaluations (see Gottfredson et al. 2015). Replication studies can also provide estimates of developers' tendencies to emphasize positive effects to the neglect of null effects, or of potential iatrogenic effects under some conditions. On the other hand, when EBPs are not well understood, training is inadequate, implementation is poor, or the study lacks adequate controls, the outcomes of the replication can confuse the field and potentially set back progress in the dissemination of EBPs.
Example 7
The developer of a school-based EBP is asked to provide the training manuals, fidelity measures, and procedures for an independent research team to conduct a replication study. The research team is interested in an independent evaluation and is not seeking assistance from the developer. The team is well trained in prevention and implementation science and has published prevention research involving school-based prevention strategies. The program developer values independent replication but is concerned that, without extensive support and consultation, the research team will not implement the EBP with fidelity. He doubts the schools' readiness to implement the program and is concerned about program adaptations that may be needed for the population to be recruited. He fears that a lack of effects resulting from poor implementation will harm the EBP's reputation, and he wonders whether it would be best for the EBP if he withdrew support for the replication effort.
A solid replication requires harmony among an array of procedures and ecological conditions. A single prevention study often represents years of a prevention scientist's career and considerable effort to maintain funding for development and evaluation. Thus, caution is natural in consenting to independent replications that may involve investigators who are less invested in the details of the prevention protocol or who have less control over how it is implemented. Ethical questions also arise concerning who controls the intellectual property that emerges from prevention science research. How do we promote replication studies if core EBPs are unavailable or too costly to use? What conditions on the collaboration for replication are justified? What costs could be incurred, and who will pay for them? Is the program adequately developed to support implementation fidelity in the absence of the developer? Can the practitioner group's readiness to implement the intervention be assessed? Are tools for fidelity monitoring available and clear? Will adaptations and the engagement of populations not previously involved be documented?