How did the illusion of the superiority of megatrials come about? There are probably three main reasons - historical, managerial, and methodological.
1. Historical - changes in the process of drug development
When large randomized controlled trials emerged from the mid-1960s, they did so as a methodology intended to come at the end of a long process of drug development. For instance, the tricyclic and monoamine-oxidase-inhibitor antidepressants were synthesized in the 1950s, and their toxicity, dosage, clinical properties, and side effects were elucidated almost wholly by means of clinical observation, animal studies, 'open' (uncontrolled) studies, and small, tightly controlled trials. Only after about a decade of worldwide clinical use was a large (by contemporary standards) randomized, placebo-controlled comparison trial carried out by the UK Medical Research Council (MRC), in 1965 - and even then, the dose chosen for the monoamine-oxidase inhibitor was too low. So a great deal was already known about antidepressants before a large RCT was planned. It was already known that antidepressants worked - the function of the trial was merely to estimate the size of the effect.
Nowadays, because of the widespread overvaluation of megatrials, the process of drug development has almost been turned on its head. Instead of megatrials coming at the end of a long process of drug development, after a great deal of scientific information and clinical experience has accumulated, it is sometimes argued that drugs should not even be made available to patients until megatrials have been completed. For instance, in 1999 the National Institute for Clinical Excellence (NICE) delayed the introduction of the anti-influenza agent Relenza® (zanamivir), with the excuse that there was insufficient evidence from RCTs to justify clinical use - thus preventing the kind of detailed, practical, clinical evaluation that is actually a prerequisite of rigorous trial design.
It is not sufficiently appreciated that one cannot design an appropriate megatrial until one already knows a great deal about the drug. This prior knowledge is required in order to select the right subjects, choose an optimal dose, and create a protocol that controls for distorting variables. If a megatrial is executed without such knowledge, then it will simplify where it ought to control: eg patients who are actually unsuitable for treatment will be recruited, the trial drug will be given in incorrect doses, patients taking interfering drugs will not be excluded, and so on. Consequently, such premature megatrials will tend systematically to underestimate the effect size of a new drug.
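To make the dilution mechanism concrete, here is a minimal sketch in Python (all numbers invented for illustration, not drawn from any real trial): a drug is assumed to improve a symptom score by 5 points in suitable patients and not at all in unsuitable ones, so a trial that recruits half-unsuitable subjects reports roughly half the true effect.

```python
# Illustrative sketch only - all figures are invented for the example.
import random

random.seed(1)

def estimated_benefit(n_per_arm, prop_unsuitable, true_effect=5.0, sd=8.0):
    """Mean placebo-minus-drug difference in a symptom score.

    The drug improves suitable patients by `true_effect` points and
    unsuitable patients not at all; both arms share the same noise.
    """
    drug, placebo = [], []
    for _ in range(n_per_arm):
        suitable = random.random() >= prop_unsuitable
        effect = true_effect if suitable else 0.0
        drug.append(random.gauss(50.0 - effect, sd))   # treated arm score
        placebo.append(random.gauss(50.0, sd))         # control arm score
    return sum(placebo) / n_per_arm - sum(drug) / n_per_arm

print(estimated_benefit(20_000, prop_unsuitable=0.0))  # ~5: well-targeted trial
print(estimated_benefit(20_000, prop_unsuitable=0.5))  # ~2.5: effect halved by dilution
```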
2. Managerial - changes in research personnel
Before megatrials could become so widely and profoundly misunderstood, it was necessary that the statistical aspects of research should become wildly overvalued. Properly, statistics is a means to the end of scientific understanding - and, when studying medical interventions, the relevant kind of scientific understanding could be termed 'clinical science', an enterprise for which the qualifications include knowledge of disease and experience of patients. People with such qualifications would properly provide the leadership for research into the effectiveness of drugs and other technologies.
Instead, recent decades have seen biostatisticians and epidemiologists rise to a position of primacy in the organization, funding, and refereeing of medical research - in other words, people whose knowledge of disease and patients in relation to any particular medical treatment is second-hand at best and nonexistent at worst.
The reason for this hegemony of the number-crunchers has nothing to do, of course, with scientific superiority, nor even with a track record of achievement; it has a great deal to do with the needs of managerialism - a topic that lies beyond the scope of this essay.
3. Methodological - masking of clinical inapplicability by statistical precision
There are also methodological reasons behind the aggrandizement of megatrials. As therapy has advanced, clinicians have come to expect incremental, quantitative improvements in already effective interventions, rather than qualitative 'breakthroughs' and wholly new methods of treatment. This has led to demands for ever-increasing precision in the measurement of therapeutic effectiveness, out of concern that the modest benefits of a new treatment could be obscured by random error. Furthermore, when expected effect sizes are relatively small, it becomes increasingly difficult to disentangle primary therapeutic effects from confounding factors. Of course, where confounders (such as age, sex, and severity of illness) are known, they can be controlled by selective recruitment. But selective recruitment tends to make trials small.
Megatrials appear to offer a way of dealing with both problems. Instead of controlling confounders by rigorous selection of subjects and tight protocols, megatrials deal with confounding by randomly allocating subjects between the comparison groups and recruiting sufficiently large numbers that any confounders (including unknown ones) may be expected to balance out. The large numbers of subjects also offer unprecedented discriminative power, allowing statistically precise measurement of the outcomes of treatment. Even modest, stepwise increments of therapeutic progress could, in principle, be resolved by sufficiently large studies.
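To illustrate the statistical logic (without endorsing it), the following sketch uses assumed numbers to show how chance imbalance in an unmeasured binary confounder shrinks roughly as 1/sqrt(n) as more subjects are randomized - the sense in which sheer size is held to substitute for protocol control.

```python
# Sketch with assumed numbers: how far apart the two arms can drift on an
# unmeasured binary confounder (prevalence 30%) under pure randomization.
import random

random.seed(2)

def worst_imbalance(n_per_arm, prevalence=0.3, reps=200):
    """Largest absolute between-arm difference in confounder prevalence
    observed over `reps` simulated randomizations."""
    worst = 0.0
    for _ in range(reps):
        arm_a = sum(random.random() < prevalence for _ in range(n_per_arm))
        arm_b = sum(random.random() < prevalence for _ in range(n_per_arm))
        worst = max(worst, abs(arm_a - arm_b) / n_per_arm)
    return worst

for n in (25, 250, 2500, 20_000):
    print(f"{n:>6} per arm: worst imbalance ~ {worst_imbalance(n):.3f}")
# The imbalance falls roughly as 1/sqrt(n): tens of percentage points with
# dozens of subjects, only a percentage point or so with tens of thousands.
```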
Resolving power, in a strictly statistical sense, is apparently limited only by the number of subjects in the trial - and very large numbers of patients can be recruited by using simple protocols in multiple research centres. Analysis of megatrials requires comparison of the average outcome in each allocation group (ie by 'intention to treat') rather than by treatment received; this is necessitated by the absolute dependence upon randomization, rather than rigorous protocols, to deal with confounding. So, in pursuit of precision, randomized trials have grown ever larger and simpler. More recently, there has been a fashion for pooling data from such trials to expand the number of subjects still further, in a process called meta-analysis - this can be considered an extension of the megatrial idea, with all its problems multiplied. For instance, the results of meta-analyses differ among themselves and in relation to the RCTs on which they draw, and may diverge from scientific and clinical knowledge of pharmacology and physiology.
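The purely statistical part of this claim is easy to verify: for a difference in mean outcomes, the 95% confidence-interval half-width shrinks as 1/sqrt(n), so precision is bought by recruitment alone, whatever the protocol does or does not control. A back-of-envelope sketch, assuming an arbitrary outcome standard deviation of 10, follows.

```python
# Back-of-envelope sketch: the smallest difference in mean outcomes that a
# two-arm trial can resolve (95% CI half-width), assuming outcome SD = 10.
import math

def ci_half_width(n_per_arm, sd=10.0, z=1.96):
    """95% CI half-width for the difference between two arm means."""
    return z * math.sqrt(2 * sd**2 / n_per_arm)

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} per arm: resolvable difference ~ {ci_half_width(n):.2f}")
# A hundred-fold increase in subjects buys a ten-fold gain in precision,
# regardless of whether the extra subjects are appropriate ones.
```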
The problem is that 'simplification' of protocol translates, in scientific terms, into a deliberate reduction in the level of experimental control. This is done with good intentions - in order to increase recruitment, consistency, and compliance - and is vital to the creation of huge databases of randomized subjects. However, as I have argued elsewhere, the strategy of expanding size by diminishing control is a methodological mistake. Reduced experimental control inevitably means less informational content in a trial. At the absurd extreme, the ultimate megatrial would recruit an unselected population of anybody at all and randomize subjects to a protocol that need not bear any relation to what actually happened to them from then on; so long as the outcomes were analysed according to the protocol to which each subject had originally been randomized, this would be statistically acceptable. The apparent basis for the mistake of deliberately reducing experimental rigour in megatrials is an imagined, but unreal, tradeoff between rigour and size - perhaps resulting from the observation that small, rigorous trials and large, simple trials may have similar 'confidence interval' statistics. Yet these methodologies are not equivalent: in science the protocol defines the experiment, and different protocols imply different studies, examining different questions in different populations.
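A final sketch (all parameters invented for illustration) shows why the tradeoff is illusory: a small, tightly controlled trial in selected, compliant patients and a large, simple trial analysed by intention to treat in a heterogeneous, partly non-compliant population can report virtually identical confidence-interval widths, yet their point estimates answer different questions about different populations.

```python
# Two hypothetical designs with matched CI width but different estimands.
import math
import random

random.seed(3)

def run_trial(n_per_arm, prop_unsuitable, compliance, sd, true_effect=5.0):
    """Return (intention-to-treat effect estimate, 95% CI half-width)."""
    drug, placebo = [], []
    for _ in range(n_per_arm):
        suitable = random.random() >= prop_unsuitable
        takes_drug = random.random() < compliance
        effect = true_effect if (suitable and takes_drug) else 0.0
        drug.append(random.gauss(50.0 - effect, sd))   # analysed as allocated
        placebo.append(random.gauss(50.0, sd))
    estimate = sum(placebo) / n_per_arm - sum(drug) / n_per_arm
    half_width = 1.96 * math.sqrt(2 * sd**2 / n_per_arm)
    return estimate, half_width

# Small, rigorous trial: selected patients, full compliance, low outcome noise.
print(run_trial(100, prop_unsuitable=0.0, compliance=1.0, sd=4.0))
# Large, simple trial: unselected patients, 70% compliance, heterogeneous outcomes.
print(run_trial(2_500, prop_unsuitable=0.5, compliance=0.7, sd=20.0))
# Both report a half-width near 1.1, but the first estimates the drug's effect
# in suitable, treated patients (~5) and the second a diluted average (~1.75).
```

The large trial is the more heavily populated, but its narrow confidence interval brackets a diluted quantity in a different population - similar 'confidence interval' statistics do not make the two studies interchangeable.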