Dear Sir,

We read with interest the article by Bednarska et al. in which they assess the willingness of surgeons to participate in an expertise-based randomized controlled trial (RCT) to compare the effectiveness of high tibial osteotomy (HTO) with unicompartmental knee arthroplasty (UKA) for treating isolated medial compartment osteoarthritis [2]. There are, however, certain issues that require discussion and consideration before one chooses this design.

The first issue regards the validity of expertise-based RCTs. It is claimed that expertise-based RCTs are less biased than conventional RCTs because of the risk of differential expertise bias in conventional RCTs [5]. Devereaux et al. argue that if, in a conventional RCT, more surgeons are experienced with one of the procedures (which may be expected in practice), then the estimate of the treatment effect will be biased in favor of that procedure [2]. We have several comments.
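To make this mechanism explicit, consider an illustrative additive outcome model (our own simplification for this letter, not the formulation of Devereaux et al.), in which $T_i$ indicates the procedure received by patient $i$ and $E_i$ indicates whether the operating surgeon is experienced with that procedure:

$$ y_i = \beta_0 + \tau T_i + \gamma E_i + \varepsilon_i. $$

Under this model, the expected unadjusted contrast between the arms of a conventional RCT is

$$ \mathbb{E}\left[\bar{y}_{T=1} - \bar{y}_{T=0}\right] = \tau + \gamma\,(p_1 - p_0), $$

where $p_1$ and $p_0$ are the proportions of patients in each arm operated on by a surgeon experienced with the assigned procedure. The term $\gamma(p_1 - p_0)$ is the differential expertise bias: it vanishes only when expertise is balanced across the arms ($p_1 = p_0$) or when expertise does not affect the outcome ($\gamma = 0$).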

Foremost, the primary reason to perform an RCT is to distribute known and unknown confounders at random between the treatment arms, so that the difference measured reflects the effect of treatment only. An expertise-based RCT breaks that rule from the outset by assigning a different group of selected surgeons to each treatment arm. Therefore, if there are differences between the surgeons in the two groups, which is likely, this difference will systematically create a bias. In fact, the low likelihood of differential expertise bias in an expertise-based RCT assumes there is no interaction between experience and treatment; in other words, that experience has the same effect on both treatments. This assumption, however, may not hold. For instance, in a recent analysis [3] of 496 patients from a previously published trial [4] investigating the results of three different uncemented total hip prosthesis systems, we showed that an interaction between volume (a surrogate for experience) and treatment occurred. What if surgeons experienced in UKA were on average more skilled than those experienced in HTO, or the opposite? We believe that in this case the conventional RCT would yield a better estimate of the treatment effect. Second, we agree that a differential expertise effect may occur in a conventional RCT, but statistical adjustment can account for and report that effect rather than hide it. In a comparative simulation of 1000 conventional and expertise-based RCTs, we found that in a conventional RCT: (1) not accounting for expertise yields a biased estimate of the treatment effect (which is the authors' point [2]); (2) adjusting the analysis for expertise allows an unbiased estimation of the treatment effect and of the effect of expertise; and (3) accounting for the interaction, when relevant, allows an unbiased estimation of the treatment effect. In an expertise-based RCT: (1) there is no need to account for expertise to obtain an unbiased estimate of the treatment effect if there is no interaction; and (2) if there is an interaction between expertise and treatment, its effect cannot be separated from the effect of treatment and the estimate of the treatment effect is biased (Table 1).

Table 1 Estimation of treatment effect in 1000 simulated conventional and expertise-based randomized controlled trials
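To illustrate the patterns summarized in Table 1, the following is a minimal simulation sketch in Python (an illustration only, not our original simulation code; the outcome model, effect sizes, and expertise proportions are assumptions chosen for this letter). It generates conventional and expertise-based trials under an outcome model that includes a treatment effect, an expertise effect, and an expertise-by-treatment interaction, and then compares the usual estimators of the treatment effect.

# Minimal simulation sketch (illustration only, not our original code).
# Assumed outcome model: y = TAU*T + GAMMA*E + DELTA*T*E + noise,
# where T is the treatment indicator and E indicates that the operating
# surgeon is experienced with the procedure actually performed.
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS, N_PATIENTS = 1000, 400
TAU, GAMMA, DELTA = 1.0, 0.5, 0.5          # assumed treatment, expertise, and interaction effects
P_EXPERT_HTO, P_EXPERT_UKA = 0.8, 0.3      # assumed differential expertise in a conventional RCT

def outcome(t, e):
    # Patient outcomes under the assumed additive model with interaction.
    return TAU * t + GAMMA * e + DELTA * t * e + rng.normal(0.0, 1.0, t.size)

def ols(y, *cols):
    # Least-squares fit with an intercept; returns the coefficients of cols.
    X = np.column_stack((np.ones(y.size),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

naive_conv, adj_conv, inter_conv, naive_exp = [], [], [], []
for _ in range(N_TRIALS):
    # Conventional RCT: treatment is randomized; expertise follows the surgeon pool,
    # so the two arms differ in the proportion of expert-performed operations.
    t = rng.integers(0, 2, N_PATIENTS)
    e = (rng.random(N_PATIENTS) < np.where(t == 1, P_EXPERT_UKA, P_EXPERT_HTO)).astype(float)
    y = outcome(t, e)
    naive_conv.append(y[t == 1].mean() - y[t == 0].mean())  # ignores expertise
    adj_conv.append(ols(y, t, e)[0])                        # adjusts for expertise only
    inter_conv.append(ols(y, t, e, t * e)[0])               # adjusts for expertise and interaction

    # Expertise-based RCT: every operation is performed by an expert (E = 1 in both arms).
    y_exp = outcome(t, np.ones(N_PATIENTS))
    naive_exp.append(y_exp[t == 1].mean() - y_exp[t == 0].mean())

print(f"true treatment effect among non-experts: {TAU}, among experts: {TAU + DELTA}")
print(f"conventional, unadjusted:         {np.mean(naive_conv):.2f} (differential expertise bias)")
print(f"conventional, expertise-adjusted: {np.mean(adj_conv):.2f} (residual bias when the interaction is ignored)")
print(f"conventional, with interaction:   {np.mean(inter_conv):.2f} (recovers the effect among non-experts)")
print(f"expertise-based, unadjusted:      {np.mean(naive_exp):.2f} (treatment and interaction effects are confounded)")

Setting DELTA to 0 reproduces the no-interaction case, in which both the expertise-adjusted conventional estimate and the unadjusted expertise-based estimate are unbiased; with a nonzero interaction, only the conventional design allows the treatment effect and the interaction to be separated.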

Another issue is the applicability of expertise-based RCTs. Bednarska et al. theorize that expertise-based RCTs are more representative of real life: because in real life surgeons prefer one treatment over another, it makes more sense that the trial asks surgeons to perform only their preferred operation. To whom, however, is it relevant that surgeons in the trial performed 100 UKAs or 50 HTOs a year? Because only experienced surgeons performed the treatment, the results of the trial are applicable only to surgeons who are experienced in that treatment. If we want the results to be applicable to others, they have to develop that same experience. But how does one become experienced in UKA while treating only a few patients for unicompartmental osteoarthritis? Assume that one has expertise in UKA, that the trial shows a slight superiority of HTO, and that one is therefore ready to change one's practice to HTO and accept the necessary learning curve. How can one be persuaded that this experience will eventually compare with that of the surgeons who chose to perform HTO routinely, given one's previous preference for routinely performing UKA? Differences between surgeons who perform HTOs and those who perform UKAs likely exist at all levels, such as preoperative care, surgical skill, and postoperative care, and could prevent one from obtaining the expected results. Allowing the trial to include surgeons with and without expertise (high- and low-volume surgeons, for example) is more pragmatic and truly representative of how surgical techniques are applied in real life.

Finally, there are ethical and practical issues with expertise-based RCTs. The most prominent problem we see with expertise-based RCTs is the patients', not the surgeons', willingness to participate. Large expertise-based RCTs have been reported when both treatments could not possibly be delivered by the same care providers, precluding a conventional RCT design, such as when comparing coronary angioplasty with coronary artery bypass surgery [1]. The four expertise-based RCTs in orthopaedics cited by Bednarska et al. were conducted in the emergency setting for the treatment of fractures [6–9]. In that setting, patients had not planned to go to the hospital and be operated on that day, and therefore probably few demanded to be operated on by a particular surgeon. In elective surgery, however, patients often see a surgeon to whom they have been referred and whom they are willing to trust. Patients who see a surgeon for medial compartment osteoarthritis probably would be reluctant to be operated on by his or her colleague, regardless of the treatment offered. Therefore, the main drawbacks of expertise-based RCTs limiting their feasibility and ethical integrity are that patients must accept not only that their treatment will be determined at random, but also that there is a one-in-two chance they will not be treated by the surgeon they came to see. This, we believe, is a major drawback of expertise-based RCTs for most treatments we would like to study. This consideration probably differs from one country to another; in some settings patients are placed on waiting lists for surgery and do not expect a specific surgeon to take care of them.

We agree with Bednarska et al. that researchers should consider using expertise-based RCTs, but only after carefully weighing their arguments and those above. There are situations in which expertise-based RCTs are more likely to yield what the researchers are looking for, but in the majority of cases, conventional RCTs will answer the question more easily and more precisely.