The aforementioned discussions are pertinent to the applicability of conclusions drawn from randomized trials to healthcare decisions made under practical conditions. However, they do not address the broader issue of when randomization and blinding do, and do not, constitute a study design appropriate to the objectives of pragmatic trials. Specifically, patient expectation, patient preference and the so-called placebo effect may in many situations play a decisive role in determining the effectiveness of an intervention aimed at improving patient outcomes in actual care settings. The treatment preferences of doctors, nurses and patients differ and are likely to have differing impacts on the final treatment decision. Such preferences can in fact be measured by methods such as conjoint analysis, but this requires complex and intensive studies (Porzsolt et al. 2010a). Such studies are common in market research but are rarely conducted in healthcare research. Moreover, even when preferences can be elicited in dedicated preference studies, one cannot assume that these stated, theoretical preferences will match the options actually selected under real conditions. More important than identifying the hypothetical preferences of patients, doctors and nurses may be the option that is finally selected. Hence, research aiming to illuminate the practical effectiveness of such interventions must choose a design that incorporates these ‘co-factors’. In such situations, randomization and blinding must yield to an alternative design.
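To make the conjoint-analysis approach mentioned above concrete, the following is a minimal, hypothetical sketch (the attribute names, levels and ratings are invented for illustration and are not taken from the cited studies): a respondent rates a small set of treatment profiles, and main-effect part-worth utilities are estimated as the mean rating at each attribute level minus the grand mean.

```python
# Hypothetical conjoint-analysis sketch: ratings of four treatment profiles
# defined by two attributes (route of administration, dosing frequency).
profiles = [
    {"route": "oral",      "frequency": "daily",  "rating": 7},
    {"route": "oral",      "frequency": "weekly", "rating": 9},
    {"route": "injection", "frequency": "daily",  "rating": 3},
    {"route": "injection", "frequency": "weekly", "rating": 5},
]

grand_mean = sum(p["rating"] for p in profiles) / len(profiles)

def part_worth(attribute, level):
    """Main-effect part-worth utility: mean rating of the profiles containing
    this attribute level, minus the grand mean over all profiles."""
    ratings = [p["rating"] for p in profiles if p[attribute] == level]
    return sum(ratings) / len(ratings) - grand_mean
```

In this toy example the oral route carries a part-worth of +2 and weekly dosing of +1, i.e., the route dominates the stated preference. Real conjoint studies use many more profiles, fractional factorial designs and regression-based estimation, which is what makes them complex and intensive.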
Waber et al. (2008) elegantly demonstrated the potential role of patient expectation in determining outcomes, using 82 medical students as subjects. The students were informed that they would receive different painkillers to control pain induced in their feet by electric shocks. The participants in fact all received the same placebo but reported different degrees of pain relief depending on the type of information provided. This experiment demonstrated that expectations induced by information can influence the comparative effectiveness of a pain treatment; for ethical reasons, however, it cannot be replicated in patients. More broadly, Brewin and Bradley (1989) compellingly described the circumstances under which patient preferences are inherently bound to potential effectiveness. In particular, for counselling interventions and other interventions requiring active participation of the patient, it is well established that patient preference and willingness constitute a necessary condition for effectiveness (Brewin and Bradley 1989). Hence, a design randomizing patients to a treatment mode irrespective of their preferences could not, by definition, yield results relevant to real-world effectiveness. Finally, Kaptchuk et al. (2008), in a three-arm randomized trial, demonstrated a ‘dose-response’ effect on quality-of-life outcomes of what might otherwise be called the ‘placebo’ effect. While their design did not directly address patient or practitioner preferences, their observations suggest that anticipation, expectation and clinical interaction are strong determinants of real-world effectiveness; that is, a shared preference between practitioner and patient for a treatment may well influence the magnitude of the ‘placebo’ effect achieved in association with that treatment choice.
Practitioners and patients with strong preferences may refuse participation in a randomized trial, thereby increasing the risk of sampling bias. Patients with weak preferences increase the risk of performance bias, because those who receive their preferred treatment will benefit more from it than those who receive the non-preferred treatment. Figure 1 illustrates that these ‘preference-based effects’ can only be avoided if either equal proportions of patients prefer each of the choices or the treatments can be successfully blinded. It is noteworthy that systematic reviews which explicitly searched for ‘preference-based effects’ could not confirm the postulated results (Stengel et al. 2006; King et al. 2005). This is likely attributable to the fact that most preference-based trials have followed a design in which allocation by preference occurs only after the initial acceptance or rejection of randomization (Stengel et al. 2006; Porzsolt and Stengel 2006). Hence, patients with a strong preference for one of the therapeutic options were likely to reject randomization, thereby masking the preference effect across the study arms. These findings confirm that it is difficult to quantify ‘preference-based effects’.
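The masking mechanism described above can be illustrated with a toy simulation (the numbers are entirely hypothetical and not drawn from any of the cited studies): suppose patients who happen to receive their preferred treatment gain an outcome bonus proportional to their preference strength. If patients with strong preferences opt out of randomization, the preference effect observable among the remaining randomized patients shrinks.

```python
import random

random.seed(42)

BONUS = 2.0  # hypothetical outcome boost when a patient receives the preferred treatment

def observed_preference_effect(n=10_000, strong_refusers_opt_out=True):
    """Mean outcome difference, among randomized patients, between those who
    happened to receive their preferred treatment and those who did not."""
    got_preferred, got_other = [], []
    for _ in range(n):
        strength = random.random()  # preference strength in [0, 1)
        if strong_refusers_opt_out and strength > 0.7:
            continue  # strong-preference patients refuse randomization
        receives_preferred = random.random() < 0.5  # 1:1 randomization
        outcome = 5.0 + BONUS * strength * receives_preferred + random.gauss(0, 1)
        (got_preferred if receives_preferred else got_other).append(outcome)
    return sum(got_preferred) / len(got_preferred) - sum(got_other) / len(got_other)

# Self-selection out of randomization attenuates the measurable effect:
attenuated = observed_preference_effect(strong_refusers_opt_out=True)
full = observed_preference_effect(strong_refusers_opt_out=False)
```

Under these assumptions the effect estimated after self-selection is markedly smaller than the effect present in the full population, which is consistent with systematic reviews failing to confirm preference-based effects in trials where allocation by preference follows refusal of randomization.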
In summary, the conventional trial design utilizing randomization, concealment of allocation and blinding to treatment arm, with the objective of testing the efficacy of interventions, is likely to fail to accurately assess real-world effectiveness because it cannot capture the multiple mechanisms contributing to clinical outcomes. Although such trials may incorporate some features of a pragmatic or ‘real world’ orientation, such as broad patient selection and relaxed controls on compliance and intensity of monitoring, preference-based effects will be distorted or obscured. Although a shared preference for a therapeutic option will generally increase real-world effectiveness, other practical issues such as co-morbidity and patient and practitioner compliance may decrease it. Accordingly, a trial design is required that is capable of capturing most real-world issues, including preference-based effects, in order to fully assess effectiveness. Here we outline a three-arm, non-randomized, pragmatic controlled trial design.