Abstract
Cost effectiveness is becoming an increasingly important factor for stakeholders faced with decisions about adding a new vaccine into national immunization programmes versus alternative use of resources. Evaluating cost effectiveness, taking into account the relevant biological, clinical, epidemiological and economic factors of a vaccination programme, generally requires use of a model. This review examines the modelling approaches used in cost-effectiveness analyses (CEAs) of vaccination programmes.
After overviewing the key attributes of models used in CEAs, a framework for categorizing theoretical models is presented. Categories are based on three main attributes: static/dynamic; stochastic/deterministic; and aggregate/individual based. This framework was applied to a systematic review of CEAs of all currently available vaccines covering the period from 1976 to May 2007.
The systematic review identified 276 CEAs of vaccination programmes. The great majority (83%) of CEAs were conducted in the setting of high-income countries. Only a few vaccines were widely studied, with 57% of available CEAs focused on the varicella, influenza, hepatitis A, hepatitis B or pneumococcal vaccines. Several time trends were evident: the number of vaccine CEAs being published is increasing; the main health outcome measures are moving away from the number of cases prevented towards quality-adjusted and unadjusted life-years gained; and more complex models are beginning to be used.
The modelling approach was often not adequately described. Of the 208 CEAs that could be categorized according to the framework, around 90% were deterministic, aggregate-level static models. Although a dynamic transmission model is required to account for herd-immunity effects, only 23 of the CEAs were dynamic. None of the CEAs were individual based.
To improve communication about the cost effectiveness of vaccination programmes, we believe the first step is for analysts to be more transparent with each other. A clear description of the model type using consistent terminology and justification for the model choice must begin to accompany all CEAs. As a minimum, we urge modellers to provide an explicit statement about the following attributes: static/dynamic; stochastic/deterministic; aggregate/individual based; open/closed. Where relevant, time intervals (discrete/continuous) and (non)linearity should also be described. Enhanced methods of assessing model performance and validity are also required.
Our results emphasize the need to improve modelling methods for CEAs of vaccination programmes; specifically, model choice, construction, assessment and validation.
Stakeholders faced with the choice of adding a new vaccine product into a national immunization programme are increasingly requesting information on the relative value (i.e. cost effectiveness) of the vaccine compared with alternative uses of resources.[1,2] To assess the cost effectiveness of a vaccine programme, and to formally take into account the relevant epidemiological, socioeconomic and population-level data, some kind of model is generally necessary.
This review examines the various modelling approaches taken to evaluate the cost effectiveness of vaccination programmes. First we provide an overview of key attributes of general models used to conduct cost-effectiveness analyses (CEAs). We then describe the basic characteristics of the CEAs of currently available vaccines that we identified from a systematic search of the literature. We focus on the models used and their attributes. Based on our findings, we highlight the relevant implications for models used to evaluate the cost effectiveness of vaccines.
1. Background
1.1 Cost-Effectiveness Analysis (CEA) in Health
Economic evaluations differ in the manner in which health outcomes are valued and expressed.[3] CEA is one particular type of economic evaluation in which “costs and effects of a program and at least one alternative are calculated and presented in a ratio of incremental cost to incremental effect.”[4] The underlying principle guiding the valuation of resources in CEAs is opportunity cost, which reflects competing societal demands for limited resources. The implication is that resources should be used as efficiently as possible in order to maximize the health benefits to the population.[3,5]
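As a purely numerical illustration of the ratio just quoted, the sketch below computes an incremental cost-effectiveness ratio for a hypothetical programme against a comparator (all figures are invented and carry no empirical meaning):

```python
# Hypothetical illustration of an incremental cost-effectiveness ratio
# (ICER): a vaccination programme compared with a 'do nothing' alternative.
# All figures are invented for demonstration only.

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per incremental unit of health effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Programme: $500,000 total cost, 1,200 QALYs; comparator: $200,000, 1,150 QALYs
ratio = icer(500_000, 1_200, 200_000, 1_150)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # $6,000 per QALY gained
```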
CEA is distinguished from cost-benefit analysis (CBA) in that CEA measures a programme’s effects in health outcomes (e.g. deaths averted, life-years saved) while CBA measures the effects in monetary units.[3,4] Depending on whether the health outcomes are weighted by utilities (e.g. QALYs or disability-adjusted life-years [DALYs]), analyses may be further distinguished as cost-utility analyses (CUAs) or CEAs.[3,4] Here, we use the term CEA in a broad sense to be inclusive of both. The popularity of CEAs over CBAs has grown, partly because the valuation of health in monetary terms is difficult and has not been universally accepted in the public health community.[4,6] For example, according to our search results using the UK NHS Economic Evaluation Database (NHS EED), among 659 English-language full economic evaluations that were published in 2005 and abstracted in the database, there were 651 CEAs (including CUAs), six CBAs, and two studies that performed both CEA and CBA.
The majority of CEAs conducted for general health interventions have concentrated on questions of ‘technical efficiency’, in that they ask, “given several competing options for reducing morbidity and mortality due to a specific disease, which are the most cost effective to achieve the given objective.”[3] When CEA is used to inform ‘allocative efficiency’, the intention is to provide insight into an optimal set of interventions across multiple disease areas (i.e. within the healthcare sector); this type of analysis needs health outcomes to be expressed in a common unit to facilitate comparison (e.g. life-years saved, QALYs or DALYs). Considerations of allocative efficiency directly allow for broader questions of resource allocation, such as whether a Ministry of Health should invest in AIDS prevention as opposed to childhood vaccination.[3]
Our main focus in this paper is on modelling methods used for CEA of vaccines and, therefore, a comprehensive review of economic evaluation, and in particular CEA, is beyond the scope of this paper. However, we refer the reader to several excellent references, many of which were motivated by the need to improve the quality and comparability of CEAs for priority setting in health.[3,4,6–14] Although there is consensus on the need for standardized methods and assumptions for CEA, there are different areas of emphasis among sources focusing on developed countries,[3,4,10,11] developing countries[12–15] and context-specific decision making within a single country.[16–19] In addition to these general references, numerous sources provide information on specific methodological challenges in CEA, such as the measurement, valuation and transferability of costs.[12,15,20–34] Others have focused on tracking the quality of CEAs[35–38] and cataloguing published CEAs.[38–41]
1.2 Assessing the Cost Effectiveness of Vaccination
In general, vaccines are considered one of the most cost-effective health interventions in both developing and developed countries.[42,43] With new vaccines becoming available (e.g. rotavirus, pneumococcal, meningococcal and human papillomavirus [HPV]), the number of CEAs focused on vaccination has been growing rapidly.[44] This may reflect, at least to some extent, the increasing demand for information on cost effectiveness by those making immunization policy recommendations (e.g. the WHO and country policy makers), financing coordination mechanisms (e.g. the GAVI Alliance [formerly the Global Alliance for Vaccines and Immunisation]) and donors.[1,2] As the number of new vaccines (many of which are quite costly) grows, they will inevitably compete for a limited budget, particularly in developing countries.[45]
Although the controversial methodological issues surrounding all comparative economic evaluations also apply to CEAs of vaccination programmes, there are modelling challenges that are of particular relevance to vaccination programmes.[46,47] The representative example most often described relates to how vaccination programmes can lead to complex epidemiological consequences at the population level as the result of potential herd immunity effects.[48] An additional example relates to challenges associated with modelling certain vaccine-preventable diseases that involve a long lag time between the intervention and the ultimate health benefits.[33,47,49–51] For example, vaccines for hepatitis B virus (HBV) and HPV prevent cancers in adulthood several decades after vaccination.[52] Finally, in select cases, vaccines may prevent only a subset of serotypes (e.g. pneumococcal conjugate vaccine) or viral types (e.g. HPV vaccine), introducing additional challenges related to the modelling of potential type replacement (i.e. effects on the proportion of disease caused by serotypes not targeted by the vaccine).[49,50,53,54]
1.3 Modelling in CEA
Like most public health decisions, decisions related to vaccination policy have to be made in the context of incomplete evidence, imperfect data and uncertainty. Evaluating cost effectiveness, taking into account the relevant biological, clinical, epidemiological and economic factors of a vaccination programme, generally requires use of a model.[55,56]
‘Model’ is an umbrella term that denotes any theoretical construct describing behaviours of a system, and the term is widely used across different disciplines.[8,57] Models can be mathematical or non-mathematical, depending on whether mathematical languages are used in describing the system.[56] Broadly speaking, models in the healthcare context can also be categorized as either ‘empirical’ (in which all model inputs are obtained from a clinical trial or an observational study providing patient-level data) or ‘theoretical’ (in which model parameters are synthesized using mathematical formulae, statistical techniques or simulations).[56] Even when model inputs are directly obtained from a single trial or observational study, some use of mathematical language is required in CEAs to present the final model outcomes in the form of incremental cost-effectiveness ratios (ICERs).[8] Accordingly, both theoretical and empirical approaches to performing CEAs can be described as mathematical models. Because it is difficult to obtain all necessary model inputs from a single trial or observational study, purely empirical models are rare.[58]
Accordingly, in our review, we informally refer to ‘empirical’ CEA models as approaches in which the main inputs for estimating effectiveness are directly obtained from clinical trials or observational studies, and ‘theoretical’ models of CEAs as those in which the main inputs are synthesized to estimate effectiveness and cost using a mathematical formula or simulation technique. In practice, theoretical models may be further categorized into relatively complex ‘simulation models’ (in which simulation techniques of various degrees of complexity — such as a Markov model — are performed to estimate final outcomes) and simpler ‘non-simulation models’ (in which final outcomes are obtained using a simple calculation, mathematical function or statistical model). Figure 1 summarizes this set of concepts for classifying model types in a broad context.
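To make the notion of a simulation model concrete, the following is a minimal sketch of a Markov cohort model of the kind mentioned above; the three health states and all transition probabilities are invented for illustration:

```python
# Sketch of a simple Markov cohort simulation, a common 'theoretical'
# simulation model in the sense above. The three health states and all
# annual transition probabilities are invented for illustration.
states = ["well", "sick", "dead"]
# transition[i][j]: probability of moving from state i to state j per cycle
transition = [
    [0.90, 0.08, 0.02],  # well -> well / sick / dead
    [0.20, 0.65, 0.15],  # sick -> well / sick / dead
    [0.00, 0.00, 1.00],  # dead is an absorbing state
]
cohort = [1.0, 0.0, 0.0]  # the entire cohort starts in 'well'

for cycle in range(10):  # 10 annual cycles
    cohort = [sum(cohort[i] * transition[i][j] for i in range(3))
              for j in range(3)]

# Proportions of the cohort in each state after 10 cycles
print({s: round(p, 3) for s, p in zip(states, cohort)})
```

Costs and utilities would in practice be attached to each state and accumulated over cycles to yield the numerator and denominator of an ICER.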
Based on this set of classifications, models used in health economic evaluations could be described generally as falling into the category of theoretical models that use mathematical languages. However, models used in health economic evaluations (typically referred to as decision analytic models) often involve complex computer-based simulation modelling.[59,60] Accordingly, although we refer generally to mathematical theoretical models, in our review we focus on the subset of those models that involve more complex simulation techniques.
The mathematical models most commonly used for health economic evaluations have been described through the lens of health-decision science, epidemiology and, more recently, operations research and industrial engineering. In addition to general sources that describe models commonly used in decision analysis,[56,59–61] there is a vast literature on epidemiological models of infectious diseases,[62–67] and a growing body of literature on modelling techniques used more commonly in the field of operations research and industrial engineering, such as discrete-event simulation (DES).[68–70] Publications from multiple disciplines have described characteristics of specific types of models,[61,62,65,70–73] and have suggested general guidelines for developing and evaluating models.[37,44,74–76]
While a number of studies have discussed specific methodological issues surrounding modelling of vaccination programmes, such as the importance of including herd immunity,[44,65–67] and others have suggested guidelines for evaluating the cost effectiveness of vaccines,[77–79] most of these have focused on a review of a specific vaccine, although the issues raised generally have broad application. The Viral Hepatitis B Prevention Board (Antwerp, Belgium) facilitated a discussion on methodological aspects of economic evaluations as applied to vaccines, and from this recommendations were made for future economic evaluations of vaccination programmes.[77] The resulting statement[77] is the only formal statement specifically focused on vaccine modelling issues underwritten by a diverse, international group of academics active in the field. Beutels et al.[44] presented a general but comprehensive review on the economic evaluation of vaccines. This paper provided a brief overview of the published economic analyses of vaccines, showing the trend in the number of studies on vaccines and examining the distribution of studies across settings, and discussed a broad range of methodological issues, including the choice of model, outcome valuation and costing. Comprehensive reviews are available for several particular vaccines, including vaccines against rubella,[80] hepatitis A virus (HAV),[81,82] HBV,[83] meningococcal meningitis,[84,85] pneumococcal pneumonia,[86–88] Haemophilus influenzae type b (Hib),[89] influenza,[90] pertussis,[91] HPV,[92,93] varicella[94] and rotavirus.[95] Among these, articles by Beutels[83] and Thiry et al.[94] provide specific discussion about alternative types of models. In addition to two reviews of previously published models of HPV,[92,93] Goldie et al.[49] and Garnett et al.[50] provide general discussion of newer modelling approaches required for HPV vaccination.
Finally, Brisson and Edmunds[47] have recently described the impact of different sources of uncertainty (i.e. model, methodological and parameter uncertainty) in models of vaccines.
While the use of computer-based mathematical models to synthesize evidence as part of decision modelling for health and economic evaluation has grown dramatically, there has been a relative lack of transparency about analysts’ rationale for model choice. In part, communication about alternative modelling techniques has been hampered by a lack of universal language that describes different kinds of models and their features. The technical expertise for different types of models also often resides within different disciplines (e.g. decision analysis, operations research, statistics and epidemiology), and each has used modelling to address different kinds of questions, and has therefore emphasized different model attributes.[59,72] Presumably though, the choice of model type is a function of several considerations for analysts of any disciplinary background. Factors likely to drive most model type and structure decisions include the question at hand and the nature of the decision problem; the natural history and features of the particular disease of interest; data available to parameterize and/or calibrate the model; familiarity of the analyst with different modelling techniques; time requirements for model development; and ease and speed of simulation.[70,96]
2. A Framework Based on Model Attributes
Unfortunately, the taxonomy used to indicate model types often differs from study to study. There have been suggested taxonomies for models,[66,97,98] although not focused on vaccine-preventable diseases. In this section, we review key dimensions (or attributes) of modelling approaches, emphasizing those of particular significance to the modelling of vaccine-preventable infectious diseases. We then describe some of the main advantages and disadvantages of models that possess the different attributes described below. Finally, building upon the categorization of models suggested by others,[66,97] we suggest an informal framework that we have found helpful, and use this in our literature review of vaccine models for CEA.
Models can be categorized depending on whether they possess the following attributes:
1. the main features of the model change over time (dynamic) or not (static);
2. changes in the model occur randomly (stochastic or probabilistic) or according to pre-specified rules (deterministic);
3. the population’s behaviour in the model is simulated using aggregate variables whose values are population averages (aggregate) or the behaviour of each individual in the population is tracked (individual based);
4. events are assumed to occur at discrete time intervals (discrete) or at any point on a continuum (continuous);
5. the model allows individuals to enter the model over time (open) or not (closed); and
6. the model is expressed in equations that are linear in their parameters (linear) or not (non-linear).
2.1 Static versus Dynamic
In modelling an infectious disease, a key question is how to define the force of infection (i.e. the rate at which susceptible individuals become infected) in order to estimate incidence, which is calculated as the product of the force of infection and the proportion of uninfected (susceptible) individuals in a population.[62] In a static model, the force of infection is constant over time (or changes as a function of age or other individual-based factors). In a dynamic model, the probability of an individual acquiring an infection depends on the contact patterns of that individual (i.e. interaction between individuals), the transmissibility (e.g. infectiousness) of the infection and the distribution of the infection within the population over time. Models that reflect transmission dynamics are the appropriate choice when consideration of herd-immunity effects is important.[48,63,65,67] Dynamic models are usually expressed as a set of differential equations.[62,65]
When the parameter values depend on a time variable only (as opposed to other parameters defined within the model), ordinary differential equations are used. When the parameter values depend on another important variable (e.g. age) in addition to the time variable, partial differential equations are used.[63,67,73]
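The distinction can be sketched numerically. In the fragment below, a deterministic, aggregate SIR-type model is stepped forward using a simple Euler approximation of its ordinary differential equations; the force of infection is dynamic in that it depends on current prevalence, and all parameter values are invented:

```python
# Sketch of a deterministic, aggregate SIR-type model with a dynamic force
# of infection, stepped forward by an Euler approximation of its ordinary
# differential equations. All parameter values are invented.
beta, gamma = 0.3, 0.1      # daily transmission and recovery rates
S, I, R = 0.99, 0.01, 0.0   # proportions susceptible/infectious/recovered
dt = 0.1                    # time step (days)

for _ in range(int(100 / dt)):  # simulate 100 days
    lam = beta * I    # dynamic force of infection: depends on prevalence
    # (a static model would instead hold lam fixed over time)
    dS = -lam * S
    dI = lam * S - gamma * I
    dR = gamma * I
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt

print(round(R, 3))  # proportion recovered by day 100
```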
Of note, the term ‘dynamic’ is often used to refer to a model that is ‘open’ (see section 2.5). While dynamic transmission models are often open, these models may also be closed, assuming an epidemic in a closed population.[57,99] In addition, other kinds of models that do not reflect transmission dynamics can also be open. Therefore, we distinguish open from closed models as a separate model attribute.
2.2 Deterministic versus Stochastic
In designing a model for vaccination programmes (or health interventions in general), an important analytic decision is whether to incorporate the concept of chance into the model.[70] That is, to simulate the world realistically events can be set to occur by chance (randomly); this type of model is considered stochastic (also known as probabilistic).[60] In contrast, in a deterministic model, all events occur in a pre-specified way depending on the parameter values and initial conditions of the model. Both dynamic transmission and static models can be deterministic or stochastic.[97] A stochastic dynamic transmission model might be an appropriate choice when modelling an infectious disease epidemic in a small population; such epidemics can be highly dependent on chance, since infectious agents are transmitted with different transmission probabilities. A stochastic model, used within either a static or dynamic transmission model framework, adds complexity, but can allow for a more comprehensive evaluation of the impact of variability and uncertainty.[49,50,60,100,101]
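A minimal sketch of such a stochastic model is given below: a chain-binomial outbreak in a small closed population, in which repeated runs with identical parameters produce different outcomes, including chance extinction. All parameter values are invented:

```python
# Sketch of a stochastic (chain-binomial) transmission model in a small
# closed population. Epidemics here can fade out by chance, which a
# deterministic model cannot reproduce. All parameter values are invented.
import random

def stochastic_outbreak(n=100, beta=0.3, gamma=0.1, seed=None):
    """Return the total number ever infected in one simulated outbreak."""
    rng = random.Random(seed)
    s, i = n - 1, 1          # susceptibles and one initial infective
    total_infected = 1
    while i > 0:
        p_inf = 1 - (1 - beta / n) ** i   # per-susceptible risk this step
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        recovered = sum(rng.random() < gamma for _ in range(i))
        s, i = s - new_inf, i + new_inf - recovered
        total_infected += new_inf
    return total_infected

# Identical parameters, different chance outcomes across runs:
sizes = [stochastic_outbreak(seed=k) for k in range(5)]
print(sizes)
```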
Confusion between the terms ‘stochastic’ and ‘variable’ is often observed, as well as with the closely related terms ‘uncertainty’ and ‘variability’.[60] ‘Stochastic’ is usually combined with the term ‘uncertainty’ to indicate the uncertainty associated with chance outcomes (also known as first-order uncertainty).[60] When parameter values are changed stochastically (or probabilistically) in an aggregate or compartmental model (see section 2.3), the model is said to be simulating parameter uncertainty (also known as second-order uncertainty), which indicates measurement error of the parameter from various sources. In contrast, ‘variability’ refers to the often ‘known’ heterogeneity across subgroups or in a population (e.g. age or gender).[60] For example, variability across subgroups can be analysed using deterministic sensitivity analysis; variability in a population can be simulated using a Markov cohort model with varying initial states or microsimulation techniques (first-order Monte Carlo or DES).[60]
2.3 Aggregate (or Population Average) versus Individual Based
In an aggregate model, individuals are assigned to compartments (sometimes referred to as health states) depending on their health status or other relevant variables. Individuals in each compartment move according to parameter values at the aggregate level (i.e. averages of the individuals belonging to a compartment or the population as a whole), and the model records the number of individuals in each compartment over time. One limitation of this approach is that the number of compartments quickly increases with complex diseases, as more variables are used to stratify the population in the model. For example, if prognosis, utility, costs and outcomes differ based on individual-based factors, and/or if memory of an individual’s history is an important determinant of future probabilities, then the number of compartments rapidly increases. If the model of interest is a simple deterministic static model, the computational burden may remain quite tolerable even with a large number of compartments or states. If the model is dynamic, as the number of compartments increases, there is a substantial computational burden in capturing the interaction between individuals belonging to different compartments.
An individual-based model (or microsimulation model) can overcome this limitation by keeping track of each individual’s behaviour. Like Weinstein,[101] we use the term microsimulation to describe an analysis “in which individual instantiations of a system — such as a patient’s lifetime or the course of an epidemic — are generated using a random process to ‘draw’ from probability distributions a large number of times, in order to examine the central tendency and possibly the distribution of outcomes.” Since microsimulation randomly samples individuals with their own sets of assigned attributes, microsimulation models are naturally stochastic. As previously mentioned in section 2.2, the variance associated with the individual sampling in microsimulation (first-order uncertainty) is different from the uncertainty related to the parameter values.[60,99] Microsimulation models may or may not allow for interactions among individuals. If individuals are simulated one at a time without interaction, the model is classified as a static microsimulation model. If a model tracks individuals, allowing for interactions with other individuals or the environment, the model is categorized as a dynamic microsimulation model.
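The following sketch illustrates a static microsimulation in the sense just described: individual lifetimes are sampled one at a time, with no interaction between individuals, and outcomes are summarized across the simulated population. All probabilities and utility weights are invented:

```python
# Sketch of a static microsimulation: individual lifetimes are sampled one
# at a time with no interaction between individuals. All probabilities and
# utility weights are invented for illustration.
import random

def simulate_person(rng, vaccinated):
    """Return QALYs accrued by one person over a 10-cycle horizon."""
    p_infection = 0.02 if vaccinated else 0.05  # static per-cycle risk
    infected, qalys = False, 0.0
    for _ in range(10):
        if not infected and rng.random() < p_infection:
            infected = True
        qalys += 0.8 if infected else 1.0  # lower utility once infected
    return qalys

rng = random.Random(42)
cohort = [simulate_person(rng, vaccinated=True) for _ in range(10_000)]
print(round(sum(cohort) / len(cohort), 2))  # mean QALYs per person
```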
Dynamic microsimulation models can be further divided into three subtypes: (i) individual-level Markov models allowing for interaction; (ii) DES models; and (iii) agent-based models. Individual-level Markov models extend the notion of the static microsimulation model based on a Markov structure, allowing for interactions among individuals.[97] DES is a modelling technique that originated from the field of industrial engineering.[72] It can simulate a complex system involving interactions among individuals or between individuals and the environment (e.g. healthcare delivery facilities). The DES model samples the time to the next event and describes the life history of individuals progressing with various events over time. The DES model is known to be particularly useful in simulating a complex system involving queues and delays mainly due to resource constraints.[60,72] The agent-based model is considered one of the most flexible modelling techniques, and differs from other dynamic microsimulation models in that it allows agents (e.g. individuals) to act autonomously with their own behavioural rules.[102,103] The popularity of this technique has been increasing in a wide range of fields including ecology, engineering and management sciences. It has yet to reach its potential in the field of economic evaluation of vaccines or other health interventions.[104]
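As a minimal sketch of the DES logic described above, the fragment below samples the time to each event and processes events from a queue, using an invented single-server clinic as the resource-constrained system:

```python
# Sketch of discrete-event simulation (DES): instead of stepping through
# fixed time intervals, the model samples the time to the next event and
# jumps directly to it. The system here is an invented single-server
# clinic, chosen to show queues and delays due to a resource constraint.
import heapq
import random

rng = random.Random(1)

# Schedule 20 patient arrivals with exponential inter-arrival times.
events = []  # priority queue of (event_time, event_kind)
t = 0.0
for _ in range(20):
    t += rng.expovariate(1 / 5.0)  # mean inter-arrival time of 5 minutes
    heapq.heappush(events, (t, "arrival"))

clock, server_free_at, waits = 0.0, 0.0, []
while events:
    clock, kind = heapq.heappop(events)  # jump to the next event
    if kind == "arrival":
        start = max(clock, server_free_at)          # wait if server is busy
        waits.append(start - clock)
        server_free_at = start + rng.expovariate(1 / 4.0)  # service time

print(round(sum(waits) / len(waits), 2))  # mean waiting time (minutes)
```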
2.4 Discrete versus Continuous
The main difference between ‘discrete’ and ‘continuous’ models is the measurement of the timing of events. In a discrete model, an event is assumed to occur at one of the discrete time steps, while in a continuous model, an event can occur at any point in time on a continuum. An example of a continuous model is a dynamic model expressed as a set of differential equations. It is known that the time step used in a discrete-time model can affect the model’s results, partly because such models allow for only one event at each time step.[105] Accordingly, the choice of timeframe can be an important consideration in choosing a model type for a health intervention, particularly when there is a need to model multiple simultaneous events in one time step (e.g. presence of co-morbidities).[97] In this case, a discrete-time model with a small fixed interval, or a continuous-time model, is recommended.[97] However, there is a trade-off in choosing between the two timeframes: while a continuous model can provide more accurate results, at least in theory, it could involve a higher computational burden. In addition, since it is often difficult to obtain analytical solutions of a continuous model, for practical reasons, continuous differential equations are often approximated by discrete difference equations.
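The effect of the time step can be demonstrated with a small sketch: a continuous-time decay process (recovery at a constant rate) is approximated in discrete time with two different step sizes, and the coarser step visibly distorts the result. All values are invented:

```python
# Sketch showing that the step size of a discrete-time approximation can
# affect results. Recovery at constant rate gamma (di/dt = -gamma * i) is
# discretized with two step sizes and compared with the exact continuous
# solution. All values are invented for illustration.
import math

def infected_after(gamma=0.5, t_end=10.0, dt=1.0):
    i = 1.0
    for _ in range(int(t_end / dt)):
        i += -gamma * i * dt  # Euler update
    return i

exact = math.exp(-0.5 * 10.0)    # continuous-time solution
coarse = infected_after(dt=1.0)  # large step: visible discretization error
fine = infected_after(dt=0.01)   # small step: close to the exact value
print(exact, coarse, fine)
```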
2.5 Open versus Closed
An open model allows individuals to enter and exit the model over time, while a closed model does not allow for new entrances over time. An example of a closed model is a single birth cohort simulation using a Markov model. One example of an open model is a dynamic transmission model that allows for the entry of ‘susceptible’ (or uninfected) individuals into the model (e.g. via birth over time) replenishing the susceptible compartment.[52] While dynamic transmission models are often open, they can be closed. Similarly, individual-based microsimulation models may or may not be open.
Individual-based microsimulation models that are open but static are often referred to as ‘population-based models’, and have recently been used in CEAs of cancer prevention programmes.[106–109] An open-model population-based approach can be very useful in assessing past trends and forecasting future trends in chronic non-communicable diseases.[110] Open models can be more data intensive (if one wants to account for parameters that change over time, such as secular trends) and can be more computationally intensive, depending on the complexity of the underlying structure.
Since outputs of an open model generally vary by both calendar year and age, this is the appropriate modelling approach if one is trying to directly compare the results of a dynamic transmission model and a static microsimulation model, or if one is using output (e.g. incidence over time) from a dynamic transmission model to parameterize a static microsimulation model.
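The open/closed distinction can be sketched as follows: the same aggregate transmission model is run with and without births replenishing the susceptible compartment, so that infection can persist endemically only in the open version. All parameter values are invented:

```python
# Sketch contrasting an open and a closed aggregate transmission model.
# In the open version, births replenish the susceptible compartment (with
# matching deaths); the closed version admits no entries or exits.
# All parameter values are invented for illustration.

def run(open_model, years=500, dt=0.1):
    mu = 0.02               # annual entry (birth) and exit (death) rate
    beta, gamma = 0.3, 0.1  # annual transmission and recovery rates
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(years / dt)):
        births = mu if open_model else 0.0
        deaths = mu if open_model else 0.0
        dS = births - beta * S * I - deaths * S
        dI = beta * S * I - gamma * I - deaths * I
        dR = gamma * I - deaths * R
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    return I

# Infection persists endemically only when susceptibles are replenished:
print(run(open_model=True), run(open_model=False))
```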
2.6 Linear versus Non-Linear
Although ‘linear’ is often used to describe something with a straight-line feature, it is not straightforward to distinguish between a linear model and a non-linear one. In general, a model is defined as linear if all of its functions are linear (that is, functions are represented by linear equations) and is defined as non-linear otherwise.[111] In mathematics, a ‘linear equation’ refers to an equation whose terms are a constant or the product of a constant and a variable, and whose highest-degree term is of the first degree (e.g. y = a × x + b, where a and b are constants and y and x are variables). This kind of equation is presented as a straight line. Meanwhile, in statistics and econometrics, a linear function is expressed as Y = Xβ + u, where Y is the n × 1 vector of an observable dependent variable, X is the n × k matrix of observed variables, β is the k × 1 vector of parameters and u is an n × 1 vector of unobserved errors.[112] That is, even when the Y and X variables are obtained using a non-linear relationship among data, if the function is linear in the parameters to be estimated, the function is linear (e.g. although the equation Y = β0 + β1X + β2X^2 is a polynomial function and not a straight line, this function is considered linear).[112] Mathematical models used for CEA rarely describe considerations of linearity.
2.7 Classification of Model Types
It should be noted that, of the six dimensions addressed in this section, the first three (static/dynamic, deterministic/stochastic and aggregate/individual based) are considered particularly relevant in that they are more fundamental in characterizing model structures and they often appear in descriptions of CEA models in the literature. The other three dimensions relate more to how analysts measure or compute model outputs given a model structure, and their applications to CEA models are seldom described explicitly. Accordingly, we combine elements of each of the first three dimensions, yielding a total of eight (= 2 × 2 × 2) possible model types. However, although not impossible, the concepts of ‘individual based’ and ‘deterministic’ are often incompatible in the modelling context. Therefore, we considered the two resulting categories (individual-based deterministic static and dynamic models) to be unrealistic, and excluded them. We thus present a framework that classifies model types into six categories (see figure 2). Certain of these categories can then be further stratified based on consideration of two additional dimensions: open versus closed and discrete versus continuous.
Also of note, others have proposed alternative options for categorizing model types, many of which are more comprehensive in scope.[97,98] For example, Brennan et al.[97] propose 14 types of model structures in their taxonomy of model structures for health technology, by applying the dimensions of time and the Markovian property in addition to the three dimensions of our choice. Applying further dimensions to the three chosen here likewise yields more finely divided model types. For example, overlaying the dimension of openness on the three key dimensions dichotomizes each of the current six categories into open and closed subtypes, yielding a total of 12 (= 2 × 6) model types. Similarly, by superimposing the dimension of time, most model categories split into two subtypes (e.g. discrete-time DES and continuous-time DES).[97] Analogously, overlaying both the openness and time dimensions on the existing categories at the same time yields still more finely divided categories, unless the two dimensions are incompatible. Furthermore, it is also possible to have a model that is a hybrid of two categories with compatible and complementary features. For example, Coudeville et al.[113] combined the notions of a transmission dynamics model (type 2) and a decision tree (type 1) to evaluate the cost effectiveness of multiple strategies of varicella vaccination in Italy. However, in this paper, we purposefully elected to keep our categories as simple as possible, partly for ease of communication with non-modellers and between modellers of different backgrounds.
For each type of model, examples of applications and its advantages and disadvantages are presented in table I.
3. Considerations of Uncertainty
3.1 General Sources of Uncertainty
A comprehensive economic evaluation should consider many different kinds of uncertainty and, most importantly, should clearly relay to the target audience the impact of those uncertainties on the policy conclusions. In addition to uncertainty related to model parameters, there are analytic choices of model type and model structure, and a number of possible perspectives from which to conduct a CEA.[4,47] Although important, it is beyond the scope of this paper to comprehensively address uncertainty in CEA modelling results related specifically to cost-effectiveness methodology (e.g. choice of perspective, analytic and time horizons, costing methods, choice of baseline comparator, discount rate); these topics are discussed elsewhere, both generally[3,4,10–15,35–38,114] and in the context of specific CEAs for vaccine programmes.[49,77,80,81,83,84,86]
3.2 Parameter Uncertainty
Manning et al.[115] define parameter uncertainty as “uncertainty about the true numerical values of the parameters used as inputs,” and argue that such uncertainty can arise in the following cases: the values of key inputs are unknown or unobservable (e.g. the price of future vaccines); there is no consensus about the appropriate level of a parameter value (e.g. the discount rate); uncertainty remains about key components of the process (e.g. disease epidemiology); there is sampling variability in parameters (e.g. individual differences in response rates to treatment); or it is unclear whether estimates can be interpolated or extrapolated. Until recently, the overriding focus has been on using sensitivity analysis to assess how changes in model parameters, when varied over a plausible range, affect the ICER.[115–118]
Sensitivity analyses range from simple deterministic univariate analyses to probabilistic multivariate analyses.[60,100,119] In a probabilistic sensitivity analysis, key parameters are assigned distributional forms and, at each run, samples are randomly drawn from the distributions to generate a distribution of cost-effectiveness ratios.[117] Probabilistic sensitivity analysis can be performed on a Markov model analysed either as a cohort simulation or as a first-order Monte Carlo simulation.[60] Much has been written in the last few years about both conducting probabilistic sensitivity analysis and summarizing the results in the form of cost-effectiveness acceptability curves.[120–125] The acceptability curve was developed to overcome a limitation of the ICER as a summary statistic for presenting probabilistic sensitivity analysis results: the confidence interval of the ICER can have negative bounds, which are not straightforward to interpret.[121,122] Cost-effectiveness acceptability curves show the probability that an intervention will be cost effective as the threshold cost-effectiveness ratio is varied.[121,122]
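The mechanics of a probabilistic sensitivity analysis and the resulting acceptability curve can be sketched as follows. This is a deliberately minimal toy model, not drawn from any of the reviewed CEAs; the parameter distributions, values and willingness-to-pay thresholds are all invented for illustration:

```python
import random

random.seed(0)

def run_psa(n_runs=5000):
    """Draw parameter sets and return (incremental cost, incremental QALY) pairs.
    All distributions and values below are hypothetical."""
    results = []
    for _ in range(n_runs):
        efficacy = random.betavariate(80, 20)        # vaccine efficacy, mean ~0.8
        cost_per_course = random.gauss(50.0, 10.0)   # vaccination cost ($)
        cost_per_case = random.gauss(400.0, 80.0)    # treatment cost averted ($)
        qaly_loss_per_case = 0.02                    # QALYs lost per case
        risk = 0.10                                  # baseline attack risk
        cases_averted = risk * efficacy              # per person vaccinated
        d_cost = cost_per_course - cases_averted * cost_per_case
        d_qaly = cases_averted * qaly_loss_per_case
        results.append((d_cost, d_qaly))
    return results

def ceac(results, thresholds):
    """Cost-effectiveness acceptability curve: P(net monetary benefit > 0)
    at each willingness-to-pay threshold ($/QALY)."""
    curve = {}
    for wtp in thresholds:
        accept = sum(1 for dc, dq in results if wtp * dq - dc > 0)
        curve[wtp] = accept / len(results)
    return curve

results = run_psa()
curve = ceac(results, thresholds=[0, 10_000, 30_000, 50_000, 100_000])
for wtp, p in curve.items():
    print(f"WTP ${wtp:>7,}/QALY: P(cost effective) = {p:.2f}")
```

Because acceptance is defined through net monetary benefit rather than the ICER itself, negative incremental costs pose no interpretive problem, which is precisely the limitation of the ICER confidence interval noted above.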
An important area that is beyond the scope of this review is the growing field of calibration methods and how these relate to parameter uncertainty. Model calibration is increasingly being used, in part, because as models become more complex, the number of unobserved parameters quickly increases, and methods to estimate these model inputs are needed. These methods are discussed elsewhere.[47,49,50,101,126–128]
3.3 Model Uncertainty
Historically, less explicit attention has been paid to how the choice of model structure and type affects the approach to addressing parameter uncertainty. For example, different approaches would be appropriate for a simple decision tree assessing basic questions about rotavirus vaccination than for a stochastic microsimulation model empirically calibrated to data using likelihood-based methods. Conducting probabilistic sensitivity analysis is considerably more complicated and labour intensive with an individual-based stochastic simulation model (e.g. a first-order Monte Carlo model) than with a static deterministic cohort model.[100] In general, the more complex the analysis, the greater the challenge to achieve transparency and to relay information about the impact of uncertainty succinctly to decision makers.[125,129]
Manning et al.[115] subdivide modelling uncertainty into ‘model-structure uncertainty’ and ‘modelling-process uncertainty’. Model-structure uncertainty relates to the mathematical forms by which parameters are combined to estimate costs, effects and/or combinations of costs and effects. Modelling-process uncertainty arises from the combination of decisions made throughout the entire process of the analysis. It has been suggested that analysts vary the model structure to assess the stability of results, or that multiple analysts examine the same problem independently, although very little formal guidance exists on how best to do this.
4. Model Evaluation
Criteria for assessing the quality of models have been suggested,[44,47,56,130] and many authors comment on the importance of assessing model structure, model type choice and model validation.[56,70,75,76,101] Weinstein et al.[56] suggest that, in addition to building a transparent model with clearly stated assumptions and data sources, analysts should work towards demonstrating a model’s internal and external consistency, projective validity and convergent validity. Internal consistency is often ascertained through simple exercises ensuring that model output is consistent with the assumptions and data used to parameterize the model. External consistency, or verification of a model, involves demonstrating that the output of the model is consistent with known facts about the disease and with epidemiological data. Assessing projective validity entails demonstrating that the output of a model is consistent with observations that are independent of the data used as model inputs.[56] Convergent validity, or model corroboration, involves comparing model results with those from different models that address the same problem based on similar assumptions. Ultimately, models should be compared with empirical data that become available after the model was built.
5. Literature Search
To collect data on the number and types of models used in CEAs of vaccines, we performed a systematic search using three databases: MEDLINE, EMBASE and NHS EED. For a comprehensive search (e.g. for MEDLINE), we developed a general search strategy for vaccines, combining a text-string search in titles using [‘vaccine name’[TI] AND (‘vaccin*’[TI] OR ‘immuniz*’[TI] OR ‘immunis*’[TI]) AND (‘cost*’[TI] OR ‘economic*’[TI] OR ‘evaluat*’[TI])] and a Medical Subject Headings (MeSH) term search using [(‘a MeSH term corresponding to the disease name caused by an infectious pathogen’ OR ‘a MeSH term indicating the generic vaccine name’) AND (‘cost-benefit analysis’[MeSH] OR ‘decision support techniques’[MeSH]) AND (‘immunization’[MeSH] OR ‘immunization programs’[MeSH])]. We then applied this strategy to each of the currently available vaccines listed in table II. We restricted our search to English-language articles published between 1976 and May 2007. We further restricted our search by including a MeSH term for human subjects and specifying the publication types to be included (see the appendix for an example using varicella vaccines). We applied similar search strategies to the other two databases, EMBASE and NHS EED. We also examined the reference lists of several systematic reviews of vaccine CEAs.
The initial searches retrieved >3000 articles. By screening the abstracts (and the main texts where the abstracts did not explicitly reveal the form of economic evaluation) of the retrieved articles and applying the following inclusion/exclusion criteria, we finally identified 276 CEAs of vaccines (see the supplementary material [‘ArticlePlus’] at http://pharmacoeconomics.adisonline.com for a complete reference list): (i) we included studies on currently available monovalent or combined vaccines only, excluding any hypothetical vaccines under development (e.g. vaccines for malaria, AIDS and dengue fever); (ii) we included only CEAs (encompassing both CEAs and CUAs), excluding CBAs and partial economic evaluations such as cost analyses or outcome description studies, which estimate either effects or costs only, although we did include studies in which both a CEA and a CBA were performed, reviewing the CEA aspects of such studies; and (iii) we included original research only, excluding reviews, commentaries and editorials.
In order to provide a general overview of the CEAs of vaccines, we reviewed the final set of 276 articles, extracting data on the following elements for each vaccine: (i) the type of vaccine; (ii) study setting (i.e. country or region of study); and (iii) the year of publication. Next, in order to examine the trend and evolution of model types used in evaluating the cost effectiveness of vaccines, we specifically focused on the following elements: (i) model types; (ii) types of health outcome measures; and (iii) approaches for uncertainty analysis.
6. Findings of the Systematic Review
6.1 General Overview of CEAs of Currently Available Vaccines
6.1.1 Distribution of the Number of Published CEAs by Vaccine
Table II presents a list of the currently available monovalent and combined vaccines and the number of published CEAs for each vaccine by study setting. The distribution of the number of published CEAs is highly disproportionate; for example, of the 276 studies identified, 157 CEAs (~57%) concerned five vaccines (varicella, influenza, HAV, HBV and pneumococcal), while few or no CEAs have been conducted for monovalent mumps, tick-borne encephalitis virus and monovalent diphtheria vaccines, or for many of the combined vaccines. The number of CEAs of pneumococcal vaccines (41) was the highest for any single vaccine.
6.1.2 Distribution of the Number of Published Vaccine CEAs by Study Setting
Table II also shows that the distribution of study settings is disproportionate. Of the 276 studies, only 43 (16%, adjusted for studies that evaluated multiple vaccines) were conducted in low- to middle-income country settings, despite the fact that populations in these settings may benefit the most from the use of vaccines.
6.1.3 Trend in the Growth of Published Vaccine CEAs
Figure 3 shows that the number of published CEAs of vaccines has increased rapidly since the mid-1990s. Given the highly disproportionate distribution of published CEAs among vaccines, this trend may reflect the fact that most of the high-CEA-volume vaccines have been approved only since the mid-1990s. However, as stated previously, the trend may also reflect increased demand for cost-effectiveness data on recently approved vaccines, driven by the broader trend of policy makers increasingly accepting CEA as an aid to decision making.
6.2 Model Types and Related Features
6.2.1 Distribution of Model Types
The model type used for the CEAs of vaccines varied from study to study. Figure 4a presents the overall distribution of model categories. Of the 276 studies, 18 CEAs (~7%) were performed based on clinical trials or observational studies (‘empirical’ category); 14 studies (~5%) fell into an ‘other’ category, estimating health outcomes using mathematical functions, statistical models (linear or logistic regressions) or simple calculations; and 36 studies (13%) did not report model types, or described their models only vaguely. Of the remaining 208 CEAs, most (~89%) used static models. Twenty-three studies developed dynamic transmission models for eight vaccines (see Notes): one model for measles,A18 two for polio,A245,A246 seven for varicella,A14,A24,A25,A42,A43,A131,A135 three for HBV,A67,A71,A270 two for HAV,A57,A139 two for HPV,A63,A243 four for pertussisA61,A103,A104,A240 and two for meningitis.A51,A250
When classified using our suggested categories of model types, most studies fall into type 1 (184 of 208 studies; 88.5%). The numbers of studies belonging to types 2 and 3 were, respectively, 23 (11.1%) and 1 (0.5%). No studies included models that fell into types 4, 5 or 6. Thus, all of the CEAs used aggregate-level models; no microsimulation individual-level models were found. Of the 185 CEAs judged to be static models (types 1 and 3 combined), in 154 studies (83%) the model type was explicitly stated or was evident from the written description, i.e. the model structures were decision trees, Markov models or hybrid types (decision trees including Markov models). The remaining studies described their model types rather vaguely, for example as a ‘cohort model’, ‘cost-effectiveness model’, ‘Excel model’, ‘simulation model’, ‘epidemiological model’ or ‘economic model’, but, based on the details provided (e.g. parameter requirements, actual model inputs, types of model output and format of results), were consistent with static models. Others appeared to estimate effectiveness using models that implicitly followed the structure of a state-transition model.
Our findings demonstrated the evolution of model types over time. For the period 1976–90, all vaccine CEA models fell into the categories of ‘type 1’ or ‘other’. The majority of relatively sophisticated models that belong to types 2 or 3 were presented in CEAs published during the past 5 years.
6.2.2 Distribution of Health Outcome Measures
The types of health outcomes measured with the models were more varied than the model types. Health outcome measures included QALYs gained, life-years gained, DALYs averted, deaths averted (or lives saved), cases averted, illness days averted and hospitalizations averted. Figure 4b presents the distribution of main health outcome measures (note that some studies included more than one health outcome measure; when multiple measures were reported, we selected as the main measure the one most closely related to fatality or adjusted for quality of life). Overall, ‘cases prevented’ (29%) was the most frequently used health outcome measure. The second most prevalent was QALYs gained (26%).
Our findings also showed a time trend in the distribution of main effectiveness measures. For the period 1976–90, 69% of CEAs measured effectiveness in terms of the number of ‘cases prevented’, whereas only 14% of studies used ‘cases prevented’ as the main health outcome measure over the past 5 years. In the past 5 years, the most commonly used health outcome measure was DALYs averted in low- to middle-income country settings (11 of 25 CEAs) and QALYs gained in high-income country settings (46 of 110 CEAs); pooled across study settings, the most commonly used measure was QALYs gained.
6.2.3 Uncertainty Analysis
A majority of studies (~81%) either explored parameter uncertainty by conducting deterministic, univariate (or, in a small number of studies, multivariate) sensitivity analysis alone or did not report any sensitivity analysis results. The number of studies that performed probabilistic sensitivity analysis was relatively small, although such analyses appear to have become more frequent in recent years. For example, of the 53 studies reporting probabilistic sensitivity analysis results, approximately 85% were published in the past 5 years. Fewer than 5% of studies discussed model uncertainty, and no study formally and explicitly examined model uncertainty using structured methods (e.g. computing final outcome measures such as cost-effectiveness ratios under alternative model structures and exploring the magnitude of the differences qualitatively or quantitatively).[115]
7. Discussion
While there is a growing number of model-based analyses assessing the cost effectiveness of vaccination programmes, the distribution of studies is highly disproportionate, particularly in terms of study setting and vaccine of interest. Nearly 83% of studies have focused on developed countries, and over half of the analyses have addressed one of the five most studied vaccines, namely the varicella, influenza, HAV, HBV and pneumococcal vaccines. In addition, apparent time trends are observed in the number of CEAs published, the main health outcome measures and the model types used. The majority of the relatively sophisticated models belonging to types 2 or 3 were published in the past 5 years. However, it should be noted that these findings are subject to various types of bias, such as publication bias, English-language bias and local-literature bias, as commonly discussed in the systematic review literature.[131,132]
Modelling vaccination programmes might be expected to involve more complexity than modelling health interventions for chronic diseases, mainly because of the complex transmission modes, involving interactions between individuals, observed in many infectious diseases.[47,52] Accordingly, in theory, the model types used to project the cost effectiveness of vaccination programmes might be expected to be more diverse than those used for interventions targeting non-communicable diseases. Our findings show that this has not been the case thus far. For comparison, Cooper et al.[133] reviewed 61 modelling studies of coronary heart disease treatment (not restricted to CEAs) and found that only three subtypes of model structure were used: state-transition models including Markov models (32 studies), decision trees (24 studies) and DES models (five studies). In our suggested classification system, these correspond to types 1 and 6. The decision-analytic models used in CEAs of vaccines likewise fall into only three categories (types 1, 2 and 3), with approximately 89% of vaccine models classifiable as type 1.
In nearly 40% of studies, it was not straightforward for a reader to quickly ascertain the model type and techniques used. For example, vague model descriptions often made it difficult to subdivide the type 1 models into their three subtypes (decision tree, state-transition model including Markov models, and hybrid). Furthermore, even when the model types were explicitly stated or the model features were described in detail, the terminology used differed widely between studies. At a minimum, a clear description of the model type, and a justification for why the analyst chose it, must begin to accompany all analyses. While our suggested framework is only one possibility, we urge modellers to at least provide an explicit statement about each of the following four major model attributes and, where relevant, about the two additional attributes of time and linearity.
Is the model:
-
a dynamic transmission model or a static model, i.e. does the model consider interaction between groups in the population, can the model reflect herd immunity effects?
-
individual based or aggregate (population average), i.e. are individuals in the model assigned to compartments and moved between them based on averages, or does the model keep track of each individual’s behaviour and attributes?
-
deterministic or stochastic, i.e. do events in the model occur in a pre-specified way or by chance (randomly)?
-
open versus closed, i.e. does the model follow a single group of people over time with no new individuals entering the model, or does the model allow new individuals to enter and exit the model over time?
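The first of these distinctions can be illustrated with a toy deterministic, aggregate, closed model (all parameter values invented, not taken from any reviewed CEA). A static model applies a fixed attack rate, so vaccination protects only the vaccinated; a dynamic SIR-type transmission model lets vaccination lower the force of infection on everyone, capturing herd immunity:

```python
def static_cases(pop, attack_rate, coverage, efficacy):
    """Static model: the force of infection is fixed, so vaccination
    protects only the vaccinated (direct effect)."""
    vaccinated = pop * coverage
    unvaccinated = pop - vaccinated
    return attack_rate * unvaccinated + attack_rate * vaccinated * (1 - efficacy)

def dynamic_cases(pop, beta, gamma, coverage, efficacy,
                  i0=10.0, steps=50_000, dt=0.1):
    """Dynamic SIR-type transmission model (discrete-time Euler steps):
    vaccination shrinks the susceptible pool, lowering the force of
    infection on everyone, including the unvaccinated."""
    s = pop * (1 - coverage * efficacy) - i0   # effectively immunized start in R
    i = i0
    r = pop * coverage * efficacy
    cases = 0.0
    for _ in range(steps):
        new_inf = beta * s * i / pop * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
        r += gamma * i * dt
        cases += new_inf
    return cases

pop = 100_000
# R0 = beta/gamma = 2 here; all parameter values are hypothetical.
baseline = dynamic_cases(pop, beta=0.4, gamma=0.2, coverage=0.0, efficacy=0.9)
static_est = static_cases(pop, attack_rate=baseline / pop, coverage=0.5, efficacy=0.9)
dynamic_est = dynamic_cases(pop, beta=0.4, gamma=0.2, coverage=0.5, efficacy=0.9)
print(f"cases averted, static model:  {baseline - static_est:,.0f}")
print(f"cases averted, dynamic model: {baseline - dynamic_est:,.0f}")
```

With these invented parameters, the dynamic model averts substantially more cases than the static model predicts, because the indirect (herd) effect is absent by construction from the static calculation; a static CEA of the same programme would therefore understate its benefits.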
The findings of our review imply that, at least for certain types of vaccine, the choice of model structure for evaluating the cost effectiveness of vaccination programmes could be improved. For example, although it is widely recognized that vaccination programmes can have both direct and indirect effects, and that only dynamic models can capture the indirect effects, only 23 CEAs (8%) in our review developed dynamic models. Although herd immunity may be negligible for some vaccines (e.g. rotavirus), given the small proportion of dynamic models, a number of studies may be underestimating vaccine benefits. Extending this idea, in some cases factors beyond herd immunity may strongly influence the cost-effectiveness results and are also not being given adequate attention. While the lack of individual-based stochastic models is likely attributable, in part, to the time investment required to develop such a model, the increased complexity of evaluating model performance and the associated computational burden, this type of model may be an appropriate choice for certain CEAs of vaccines.
For example, Goldie et al.[126] recently developed an individual-based stochastic model to assess the impact and cost effectiveness of HPV-16/18 vaccination in countries with ongoing screening programmes. In part, this choice was made to permit the tracking of individual behaviour and to allow individual attributes (e.g. race, age, vaccination status) and individual history (e.g. prior clinical events) to affect future screening behaviour, prognosis and outcomes.[49,50] Furthermore, this model reflects multiple HPV types, both vaccine-targeted and non-targeted, allowing exploration of the potential increase in the proportion of disease expected from HPV types not targeted by the vaccine.[92,126,128,134–136] Kim et al.[134] then developed a companion dynamic transmission model of HPV that can appropriately represent herd-immunity effects, but this model reflects only the vaccine-targeted HPV types 16 and 18 and cannot be used to project the range of disease outcomes attributable to all HPV types under different vaccination strategies.[33,92,128]
Adopting a hybrid approach, estimates of HPV-16/18 incidence over time can be generated for different vaccination scenarios with the dynamic transmission model and used as an input to the individual-based stochastic model, which may be run as an open or a closed model. This ‘linkage’ allows one to capture herd-immunity effects, explore potential uncertainties such as type replacement, include outcomes for disease attributable to all HPV types, and assess type-specific vaccination in the context of complex screening strategies. The dynamic transmission model can also be used independently to project outcomes of different vaccination strategies in boys and girls, assess the cost effectiveness of vaccinating boys, and compare different age-based catch-up programmes. Other advantages of a hybrid approach are that features adding complexity can be activated or de-activated more easily, and that projected results can be compared across two independently structured models, which greatly enhances evaluation of the impact of model structure on cost-effectiveness results. Although, theoretically, an ideal solution might be to develop an individual-based stochastic model with interaction, considerations such as development time, available data (e.g. transmission parameters for other HPV types), the desire to apply the model to many countries with differing amounts of data, computational intensity and the need for timely results all made a hybrid approach a reasonable choice in this instance.
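The linkage pattern described above can be sketched schematically. This is a structural illustration only, not the Goldie/Kim HPV models: the function names, the incidence formula and every numeric value are hypothetical stand-ins:

```python
import random

random.seed(1)

def dynamic_incidence(vaccination_coverage, years=10):
    """Stand-in for a transmission model: returns annual infection risk,
    declining with coverage to mimic combined direct and herd effects
    (the dose-response form below is entirely invented)."""
    base_risk = 0.05
    herd_factor = 1 - 0.8 * vaccination_coverage
    return [base_risk * herd_factor * (0.98 ** y) for y in range(years)]

def microsimulation(incidence, n_people=10_000):
    """Individual-based stochastic model: each simulated person faces the
    year-specific risk supplied by the dynamic model; individual attributes
    and histories could be tracked here to drive screening and outcomes."""
    infections = 0
    for _ in range(n_people):
        for risk in incidence:
            if random.random() < risk:
                infections += 1
                break          # count first infection only, for simplicity
    return infections

no_vacc = microsimulation(dynamic_incidence(0.0))
with_vacc = microsimulation(dynamic_incidence(0.7))
print(f"infections without vaccination: {no_vacc}")
print(f"infections with 70% coverage:   {with_vacc}")
```

The design choice the linkage embodies is a one-way flow of information: the aggregate dynamic model supplies population-level infection risks that already reflect herd effects, while the microsimulation layers individual-level detail on top without feeding back into transmission.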
In the present review, we also found that model uncertainty was generally not addressed, or at least not explicitly. Relatively few studies justified the choice of model structure, and few assessed model uncertainty formally by conducting sensitivity analyses that vary model structure (i.e. comparing outputs from different model structures given each model’s specific assumptions and input parameter values). Some studies compared their results with those of other studies using a different model structure, but the assumptions and input data often differed, making it difficult to determine whether inconsistencies in results were attributable to the model types. Furthermore, only a few studies attempted to evaluate their models formally, with explicit mention of model verification, model corroboration or assessment of projective validity.
Weinstein[101] emphasizes that “model assumptions regarding causal structure and parameter estimates should be continually assessed against data, and models should be revised accordingly.” It is the responsibility of the analyst to assess model performance comprehensively, to be transparent about the conditional relationship between model inputs and outputs, to explain the rationale behind the choice of model structure, and to present a reasonable representation of how results would differ under an alternative structure. For example, Hammerschmidt et al.[137] describe their efforts to validate a model for evaluating varicella vaccination strategies in Germany against multiple criteria.
Our review has several limitations. First, we restricted our review to CEAs. CEA is only one type of economic evaluation, although it is increasingly the predominant choice, accounting for more than half of published economic evaluation studies. When extended to all economic evaluations of vaccines, the results may show different distributions of model types and other related characteristics. Second, our search strategy did not include all possible databases; thus, we might have missed some CEAs of vaccines conforming to our inclusion criteria but indexed elsewhere.[132] Third, we did not include any future vaccines under development, such as HIV/AIDS, tuberculosis and malaria vaccines. Fourth, we did not include the grey literature, defined as “a range of published and unpublished material which is not normally identifiable through conventional methods of bibliographic control.”[27] According to Batt et al.,[27] the grey literature on immunization covers more recent interventions and better represents low- to middle-income countries in West Africa and the Middle East. Fifth, we did not attempt to formally assess the performance and validity of the models in each study; our review was based solely on the information we could ascertain from the manuscripts. Finally, we did not address the important issue of communicating cost-effectiveness results, and more specifically the modelling approach and techniques, to decision makers. We believe the first step is for analysts to be more transparent with each other, beginning with more complete descriptions of the choices around modelling approaches as well as enhanced methods of assessing model performance and validity. This will greatly facilitate the development of effective communication tools to describe these issues to different stakeholders.
Economic studies of vaccines, including vaccine development and delivery issues, are conducted with the general intention of informing policy recommendations and programmatic decisions at an international, national and sub-national level.[138] Although the process by which results of CEAs play a role in shaping vaccination policy is not well understood,[139] stakeholders faced with decisions about whether to introduce a new and costly vaccine are requesting information on its comparative benefits, costs and cost effectiveness compared with alternative uses of resources.[1,2] For decision makers responsible for priority setting, this information must be based on analyses that are comparable, in that they use standard methodology, are of equivalent rigour and are transparent and clear in communication of methods and results.[85]
8. Conclusions
Our results from this focused review emphasize the need to improve modelling methods for CEAs for vaccination programmes, specifically model choice, construction, assessment and validation. Many of the CEAs identified did not adequately describe the modelling approach or explicitly provide the rationale for model choice. The model choice was often not optimal given the issues surrounding vaccination, such as the potential for effects arising from herd immunity. Inadequate attention was given to assessment of uncertainty, particularly model uncertainty.
We propose a framework where models are classified according to three main attributes: static/dynamic; stochastic/deterministic; aggregate/individual based. Models can then be further classified within these categories. Beyond this, models should be identified as open or closed and, where relevant, specify (non)linearity and time intervals (discrete/continuous).
As interest in conducting economic analyses of vaccines is shared by analysts with different disciplinary backgrounds, a common framework would be helpful to engage in dialogue and debate about optimal modelling approaches, inevitable tradeoffs between complexity and simplicity, and the need for innovative approaches to represent complex systems. While a prescriptive approach is unlikely to be successful given the nature of the many factors that should be considered in model choice, adhering to guidelines for good practice will enhance valid comparability of results. Increased transparency regarding analytic choices and improved assessment of model performance and validation will go far in moving the field forward.
Notes
Superscript numbers refer to the reference list that can be found in the supplementary material.
References
Cost effectiveness of rotavirus vaccines and other interventions for diarrhoeal diseases: meeting report 2006. Wkly Epidemiol Rec 2006; 81 (37): 350-3
World Health Organization. Vaccine introduction guidelines: adding a vaccine to a national immunization programme. Decision and implementation: report no. WHO/IVB/05.18. Geneva: WHO, 2005
Drummond MF, Sculpher MJ, Torrance GW, et al., editors. Methods for the economic evaluation of health care programmes. New York: Oxford University Press, 2005
Gold MR, Siegel JE, Russell LB, et al., editors. Cost-effectiveness in health and medicine. New York: Oxford University Press, 1996
Culyer AJ. The normative economics of health care finance and provision. Oxf Rev Econ Pol 1989; 5: 34–58
Sloan FA, editor. Valuing health care: costs, benefits, and effectiveness of pharmaceuticals and other medical technologies. Cambridge (UK): Cambridge University Press, 1995
Johannesson M. Theory and methods of economic evaluation of health care. Dordrecht: Kluwer Academic Publishers, 1996
Drummond MF, McGuire A. Economic evaluation in health care: merging theory with practice. New York: Oxford University Press, 2001
Donaldson C, Mugford M, Vale L, editors. Evidence-based health economics. London: BMJ Books, 2002
Muennig P. Designing and conducting cost-effectiveness analyses in medicine and health care. San Francisco (CA): Jossey-Bass, 2002
Haddix AC, Teutsch SM, Corso PS, editors. Prevention effectiveness: a guide to decision analysis and economic evaluation. 2nd ed. New York: Oxford University Press, 2003
Evans DB, Edejer TT, Adam T, et al. Methods to assess the costs and health effects of interventions for improving health in developing countries. BMJ 2005 Nov 12; 331: 1137–40
Brenzel L. Selecting an essential package of health services using cost-effectiveness analysis: a manual for professionals in developing countries. Data for Decision Making Project and PHN Department, The World Bank, 1993 [online]. Available from URL: http://www.hsph.harvard.edu/ihsg/publications/pdf/No-5.PDF [Accessed 2008 Feb 12]
Jamison DT, Breman JG, Measham AR, et al., editors. Disease control priorities in developing countries. 2nd ed. New York: Oxford University Press, 2006
Tan-Torres Edejer T, Baltussen R, Adam T, et al., editors. Making choices in health: WHO guide to cost-effectiveness analysis. Geneva: World Health Organization, 2003
National Institute for Health and Clinical Excellence. Guide to the methods of technology appraisal. London: NICE, 2004 Apr [online]. Available from URL: http://www.nice.org.uk/niceMedia/pdf/rAP_Methods.pdf [Accessed 2008 Jan 30]
Menon D, Schubert F, Torrance GW. Canada’s new guidelines for the economic evaluation of pharmaceuticals. Med Care 1996; 34: DS77–85
Canadian Coordinating Office for Health Technology Assessment. Guidelines for economic evaluation of pharmaceuticals: Canada. 2nd ed. Ottawa: Canadian Coordinating Office for Health Technology Assessment, 1997
Commonwealth of Australia. Guidelines for pharmaceutical industry and preparation of submissions to the Pharmaceuticals Benefits Advisory Committee: including economic analyses. Canberra: Department of Health and Community Services, 1995
Adam T, Evans DB, Murray CJL. Econometric estimation of country-specific hospital costs. Cost Eff Resour Alloc 2003; 1 (1): 3
Adam T, Koopmanschap MA, Evans DB. Cost-effectiveness analysis: can we reduce variability in costing methods? Int J Technol Assess Health Care 2003; 19 (2): 407–20
Hutton G, Baltussen R. Cost valuation in resource-poor settings. Health Policy Plan 2005; 20: 252–9
Johns B, Adam T, Evans DB. Enhancing the comparability of costing methods: cross-country variability in the prices of non-traded inputs to health programmes. Cost Eff Resour Alloc 2006; 4: 8
Johns B, Baltussen R, Hutubessy R. Programme costs in the economic evaluation of health interventions. Cost Eff Resour Alloc 2003; 1 (1): 1
Johns B, Torres TT, WHO-CHOICE. Costs of scaling up health interventions: a systematic review. Health Policy Plan 2005; 20: 1–13
Mulligan J, Fox-Rushby JA, Adam T, et al. Unit costs of health care inputs in low and middle income regions: DCPP working paper no. 9. Bethesda (MD): Disease Control Priorities Project, Fogarty International Center, National Institutes of Health, 2005
Batt K, Fox-Rushby JA, Castillo-Riquelme M. The costs, effects and cost-effectiveness of strategies to increase coverage of routine immunizations in low- and middle-income countries: systematic review of the grey literature. Bull World Health Organ 2004; 82: 689–96
Kou U. Guidelines for estimating costs of introducing new vaccines into the national immunization system. Geneva: WHO/Department of Vaccines and Biologicals, 2002
Wolfson LJ, Gasse F, Lee-Martin S, et al. Estimating the costs of achieving the WHO-UNICEF Global Immunization Vision and Strategy, 2006–2015. Bull World Health Organ 2008; 86: 27–39
Meltzer D. Accounting for future costs in medical cost-effectiveness analysis. J Health Econ 1997; 16 (1): 33–64
Elbasha EH, Messonnier ML. Cost-effectiveness analysis and health care resource allocation: decision rules under variable returns to scale. Health Econ 2004; 13 (1): 21–35
Poulos C, Whittington D. Time preferences for life-saving programs: evidence from six less developed countries. Environ Sci Technol 2000; 34 (8): 1445–55
Jamison DT, Jamison JS. Discounting: DCPP working paper no. 4. Bethesda (MD): Disease Control Priorities Project, Fogarty International Center, National Institutes of Health, 2003
Musgrove P, Fox-Rushby J. Cost-effectiveness for priority setting: disease control priorities in developing countries. 2nd ed. New York: Oxford University Press, 2006: 271–86
Chiou CF, Hay JW, Wallace JF, et al. Development and validation of a grading system for the quality of cost-effectiveness studies. Med Care 2003; 41 (1): 32–44
Walker D, Fox-Rushby JA. Economic evaluation of communicable disease interventions in developing countries: a critical review of the published literature. Health Econ 2000; 9 (8): 681–98
Sculpher M, Fenwick E, Claxton K. Assessing quality in decision analytic cost-effectiveness models: a suggested framework and example of application. Pharmacoeconomics 2000; 17: 461–77
Neumann PJ, Greenberg D, Olchanski NV, et al. Growth and quality of the cost-utility literature, 1976–2001. Value Health 2005; 8 (1): 3–9
Chapman RH, Stone PW, Sandberg EA, et al. A comprehensive league table of cost-utility ratios and a subtable of “panel-worthy” studies. Med Decis Making 2000; 20: 451–67
Bell CM, Chapman RH, Stone PW, et al. An off-the-shelf help list: a comprehensive catalog of preference scores from published cost-utility analyses. Med Decis Making 2001; 21: 288–94
Stone PW, Schackman B, Neukermans CP, et al. A synthesis of cost-utility analysis literature in infectious disease. Lancet Infect Dis 2005; 5: 383–91
Bloom DE, Canning D, Weston M. The value of vaccination. World Econ 2005; 6 (3): 15–39
Brenzel L, Wolfson LJ, Fox-Rushby J, et al. Vaccine-preventable diseases. In: Jamison DT, Breman JG, Measham AR, et al., editors. Disease control priorities in developing countries. 2nd ed. Oxford (UK): Oxford University Press and the World Bank, 2006: 389–411
Beutels P, Van Doorslaer E, Van Damme P, et al. Methodological issues and new developments in the economic evaluation of vaccines. Expert Rev Vaccines 2003; 2: 649–60
Kaddar M, Lydon P, Levine R. Financial challenges of immunization: a look at GAVI. Bull World Health Organ 2004; 82: 697–702
Beutels P. Economic evaluation of vaccination programmes in humans: a methodological exploration with applications to hepatitis B, varicella-zoster, measles, pertussis, hepatitis A and pneumococcal vaccination. Antwerpen: UA, UIA, Faculteit Biomedische Wetenschappen, 2002
Brisson M, Edmunds WJ. Impact of model, methodological, and parameter uncertainty in the economic analysis of vaccination programs. Med Decis Making 2006 Sep–Oct; 26 (5): 434–46
Brisson M, Edmunds WJ. Economic evaluation of vaccination programs: the impact of herd immunity. Med Decis Making 2003; 23: 76–82
Goldie SJ, Goldhaber-Fiebert JD, Garnett G. Public health policy for cervical cancer prevention: role of decision science, economic evaluation, and mathematical modeling. Vaccine 2006; 24 Suppl. 3: S155–63
Garnett G, Kim JJ, French K, et al. Modelling the impact of HPV vaccines on cervical cancer and screening programmes. Vaccine 2006; 24 Suppl. 3: S178–86
Bos JM, Beutels P, Annemans L, et al. Valuing prevention through economic evaluation: some considerations regarding the choice of discount model for health effects with focus on infectious diseases. Pharmacoeconomics 2004; 22 (18): 1171–9
Beutels P. Economic evaluations applied to HB vaccination: general observations. Vaccine 1998; 16 Suppl.: S84–92
Lipsitch M. Bacterial vaccines and serotype replacement: lessons from Haemophilus influenzae and prospects for Streptococcus pneumoniae. Emerg Infect Dis 1999; 5 (3): 336–45
Goldie SJ, Grima D, Kohli M, et al. A comprehensive natural history model of HPV infection and cervical cancer to estimate the clinical impact of a prophylactic HPV 16/18 vaccine. Int J Cancer 2003; 106: 896–904
Buxton MJ, Drummond MF, Van Hout BA, et al. Modelling in economic evaluation: an unavoidable fact of life. Health Econ 1997 May; 6 (3): 217–27
Weinstein MC, Toy EL, Sandberg EA, et al. Modeling for healthcare and other policy decisions: uses, roles, and validity. Value Health 2001 Sep; 4 (5): 348–61
Allman ES, Rhodes JA. Mathematical models in biology: an introduction. Cambridge (UK): Cambridge University Press, 2004
Office of Health Economics. From efficacy to cost-effectiveness [OHE briefing, no 37]. London: OHE, 1998 May
Briggs A, Sculpher M, Claxton K, editors. Decision modelling for health economic evaluation. New York: Oxford University Press, 2006
Hunink MGM, Glasziou P, Siegel J, et al. Decision making in health and medicine: integrating evidence and values. Cambridge (UK): Cambridge University Press, 2001
Kuntz KM, Weinstein MC. Modeling in economic evaluation. In: Drummond MF, McGuire A, editors. Economic evaluation in health care: merging theory with practice. New York: Oxford University Press, 2001: 141–71
Anderson RM, May RM. Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press, 1991
Anderson RM, Garnett GP. Mathematical models of the transmission and control of sexually transmitted diseases. Sex Transm Dis 2000; 27: 636–43
Halloran ME, Cochi SL, Lieu TA, et al. Theoretical epidemiologic and morbidity effects of routine varicella immunization of preschool children in the United States. Am J Epidemiol 1994; 140: 81–104
Edmunds WJ, Medley GF, Nokes DJ. Evaluating the cost-effectiveness of vaccination programmes: a dynamic perspective. Stat Med 1999; 18: 3263–82
Garnett GP. An introduction to mathematical models in sexually transmitted disease epidemiology. Sex Transm Inf 2002; 78: 7–12
Garnett GP. Role of herd immunity in determining the effect of vaccines against sexually transmitted disease. J Infect Dis 2005 Feb 1; 191 Suppl. 1: S97–106
Davies R, Davies HTO. Modelling patient flows and resource provision in health systems. Omega Int J Mngt Sci 1994; 22: 123–31
Karnon J. Alternative decision modeling techniques for the evaluation of health care technologies: Markov processes versus discrete event simulation. Health Econ 2003; 12: 837–48
Barton P, Bryan S, Robinson S. Modelling in the economic evaluation of health care: selecting the appropriate approach. J Health Serv Res Policy 2004; 9: 110–8
Sonnenberg FA, Beck JR. Markov models in medical decision making: a practical guide. Med Decis Making 1993; 13: 322–38
Law AM, Kelton WD. Simulation modeling and analysis. 3rd ed. Boston (MA): McGraw-Hill, 2000
Chick SE, Adams AL, Koopman JS. Analysis and simulation of a stochastic, discrete-individual model of STD transmission with partnership concurrency. Math Biosci 2000; 166: 45–68
Philips Z, Ginnelly L, Sculpher M, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technol Assess 2004; 8 (36): iii–iv, ix-xi, 158
Philips Z, Bojke L, Sculpher M, et al. Good practice guidelines for decision-analytic modeling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006; 24 (4): 355–71
Weinstein MC, O’Brien B, Hornberger J, et al., ISPOR Task Force on Good Research Practices-Modeling Studies. Principles of good practice for decision analytic modeling in healthcare evaluation: report of the ISPOR Task Force on Good Research Practices-Modeling Studies. Value Health 2003; 6: 9–17
Beutels P, Edmunds WJ, Antonanzas F, et al. Economic evaluation of vaccination programmes: a consensus statement focusing on viral hepatitis. Pharmacoeconomics 2002; 20: 1–7
Spier R, Jefferson T, Demicheli V. An editorial policy statement: submission of economic evaluations of vaccines. Vaccine 2002; 20: 1693–5
Szucs TD. Health economic research on vaccinations and immunization practices: an introductory primer. Vaccine 2005; 23: 2095–103
Hinman AR, Irons B, Lewis M, et al. Economic analysis of rubella and rubella vaccines: a global review. Bull World Health Organ 2002; 80: 264–70
Rosenthal P. Cost-effectiveness of hepatitis A vaccination in children, adolescents, and adults. Hepatology 2003; 37: 44–51
Postma MJ, Bos JM, Beutels P, et al. Pharmacoeconomic evaluation of targeted hepatitis A vaccination for children of ethnic minorities in Amsterdam (The Netherlands). Vaccine 2004; 22: 1862–7
Beutels P. Economic evaluations of hepatitis B immunization: a global review of recent studies (1994–2000). Health Econ 2001; 10: 751–74
Miller MA, Shahab CK. Review of the cost-effectiveness of immunisation strategies for the control of epidemic meningococcal meningitis. Pharmacoeconomics 2005; 23: 333–43
Welte R, Trotter C, Edmunds J, et al. The role of economic evaluation in vaccine decision making: focus on meningococcal C vaccination. Pharmacoeconomics 2005; 23 (9): 855–74
De Graeve D, Beutels P. Economic aspects of pneumococcal pneumonia: a review of the literature. Pharmacoeconomics 2004; 22: 719–40
Thiry N, Beutels P, Van Damme P. Economic evaluations of pneumococcal vaccination strategies in adults: a summary of the results. Acta Clin Belg 2005; 60 (6): 338–44
Beutels P, Thiry N, Van Damme P. Convincing or confusing? Economic evaluations of childhood pneumococcal conjugate vaccination: a review (2002–2006). Vaccine 2007; 25 (8): 1355–67
Brinsmead R, Hill S, Walker D. Are economic evaluations of vaccines useful to decision-makers? Case study of Haemophilus influenzae type b vaccines. Pediatr Infect Dis J 2004; 23: 32–7
Burls A, Jordan R, Barton P, et al. Vaccinating healthcare workers against influenza to protect the vulnerable: is it a good use of healthcare resources? A systematic review of the evidence and an economic evaluation. Vaccine 2006; 24: 4212–21
Caro JJ, Getsios D, Payne K, et al. Economic burden of pertussis and the impact of immunization. Pediatr Infect Dis J 2005; 24: S48–54
Newall AT, Beutels P, Wood JG, et al. Cost-effectiveness analyses of human papillomavirus vaccination. Lancet Infect Dis 2007; 7: 289–96
Dasbach EJ, Elbasha EH, Insinga RP. Mathematical models for predicting the epidemiologic and economic impact of vaccination against human papillomavirus infection and disease. Epidemiol Rev 2006; 28: 88–100
Thiry N, Beutels P, Van Damme P, et al. Economic evaluations of varicella vaccination programmes: a review of the literature. Pharmacoeconomics 2003; 21 (1): 13–38
Walker DG, Rheingans R. Cost-effectiveness of rotavirus vaccines. Expert Rev Pharmacoecon Outcomes Res 2005; 5 (5): 593–601
Cooper K, Brailsford SC, Davies R. Choice of modelling technique for evaluating health care interventions. J Oper Res Soc 2007; 58: 168–76
Brennan A, Chick SE, Davies R. A taxonomy of model structures for economic evaluation of health technologies. Health Econ 2006; 15: 1295–310
Sulistio A, Yeo CS, Buyya R. A taxonomy of computer-based simulations and its mapping to parallel and distributed systems simulation tools. Softw Pract Exper 2004; 34: 653–73
Lipsitch M, Cohen T, Cooper B, et al. Transmission dynamics and control of severe acute respiratory syndrome. Science 2003; 300 (5627): 1966–70
Halpern EF, Weinstein MC, Hunink MG, et al. Representing both first- and second-order uncertainties by Monte Carlo simulation for groups of patients. Med Decis Making 2000 Jul–Sep; 20 (3): 314–22
Weinstein MC. Recent developments in decision-analytic modelling for economic evaluation. Pharmacoeconomics 2006; 24 (11): 1043–53
Bonabeau E. Agent-based modeling: methods and techniques for simulating human systems. Proc Natl Acad Sci USA 2002; 99: 7280–7
Darley V, von Tessin P, Sanders D. An agent-based model of a corrugated box factory: the tradeoff between finished-goods stock and on-time-in-full delivery. In: Coelho H, Espinasse B, editors. Proceedings of the 5th Workshop on Agent-Based Simulation; 2004 May 3–5; Lisbon. Society for Modelling and Simulation Europe BVBA, 2004
Muller G, Grebaut P, Gouteux JP. An agent-based model of sleeping sickness: simulation trials of a forest focus in southern Cameroon. C R Biol 2004 Jan; 327 (1): 1–11
Scott A. A computing strategy for SAGE: 1. Model options and constraints. SAGE technical note no 2. London: Simulating Social Policy in an Ageing Society (SAGE), August 2001 [online]. Available from URL: http://www.lse.ac.uk/collections/sage/pdf/sage_tn2.pdf [Accessed 2008 Jan 30]
Knudsen AB, McMahon PM, Gazelle GS. Use of modeling to evaluate the cost-effectiveness of cancer screening programs. J Clin Oncol 2007 Jan 10; 25 (2): 203–8
Feuer EJ, Etzioni RD, Cronin KA, et al. The use of modeling to understand the impact of screening on US mortality: examples from mammography and PSA testing. Stat Methods Med Res 2004; 13: 421–42
Feuer EJ. Modeling the impact of adjuvant therapy and screening mammography on US breast cancer mortality between 1975 and 2000: introduction to the problem. J Natl Cancer Inst Monogr 2006; 36: 2–6
National Cancer Institute, Cancer Intervention and Surveillance Monitoring Network (CISNET) [online]. Available from URL: http://www.cisnet.cancer.gov [Accessed 2007 Oct 22]
Dewilde S, Anderson R. The cost-effectiveness of screening programs using single and multiple birth cohort simulations: a comparison using a model of cervical cancer. Med Decis Making 2004; 24: 486–92
Aris R. Mathematical modelling techniques. New York: Dover, 1994
Wooldridge JM. Introductory econometrics: a modern approach. 2nd ed. Cincinnati (OH): South-Western, 2003
Coudeville L, Brunot A, Giaquinto C, et al. Varicella vaccination in Italy. Pharmacoeconomics 2004; 22 (13): 839–55
World Health Organization. WHO-CHOICE: World Health Organization statistical information system. CHOICE (CHOosing Interventions that are Cost Effective) [online]. Available from URL: http://www.who.int/choice/en/ [Accessed 2007 Oct 22]
Manning WG, Fryback DG, Weinstein MC. Reflecting uncertainty in cost-effectiveness analysis. In: Gold MR, Siegel JE, Russell LB, et al., editors. Cost-effectiveness in health and medicine. New York: Oxford University Press, 1996
Fryback DG, Stout NK, Rosenberg MA, et al. The Wisconsin breast cancer epidemiology simulation model. J Natl Cancer Inst Monogr 2006; 36: 37–47
Briggs A, Sculpher M, Buxton M. Uncertainty in the economic evaluation of health care technologies: the role of sensitivity analysis. Health Econ 1994; 3 (2): 95–104
Walker D, Fox-Rushby J. Allowing for uncertainty in economic evaluations: qualitative sensitivity analysis. Health Policy Plan 2001; 16 (4): 435–43
Baltussen RM, Hutubessy RC, Evans DB, et al. Uncertainty in cost-effectiveness analysis: probabilistic uncertainty analysis and stochastic league tables. Int J Technol Assess Health Care 2002; 18 (1): 112–9
Fenwick E, Claxton K, Sculpher M. Representing uncertainty: the role of cost-effectiveness acceptability curves. Health Econ 2001; 10 (8): 779–87
Fenwick E, O’Brien BJ, Briggs A. Cost-effectiveness acceptability curves: facts, fallacies and frequently asked questions. Health Econ 2004; 13 (5): 405–15
Fenwick E, Byford S. A guide to cost-effectiveness acceptability curves. Br J Psychiatry 2005; 187: 106–8
Fenwick E, Briggs A. Cost-effectiveness acceptability curves in the dock: case not proven? Med Decis Making 2007; 27 (2): 93–5
Groot Koerkamp B, Hunink MGM, Stijnen T, et al. Limitations of acceptability curves for presenting uncertainty in cost-effectiveness analysis. Med Decis Making 2007; 27 (2): 101–11
Briggs AH, O’Brien BJ, Blackhouse G. Thinking outside the box: recent advances in the analysis and presentation of uncertainty in cost-effectiveness studies. Annu Rev Public Health 2002; 23: 377–401
Goldie SJ, Kim JJ, Kobus K, et al. Cost-effectiveness of HPV 16,18 vaccination in Brazil. Vaccine 2007; 25 (33): 6257–70
Salomon JA, Weinstein MC, Hammitt JK, et al. Empirically calibrated model of hepatitis C virus infection in the United States. Am J Epidemiol 2002; 156: 761–73
Kim JJ, Kuntz KM, Stout NK, et al. Multiparameter calibration of a natural history model of cervical cancer. Am J Epidemiol 2007; 166 (2): 137–50
Hutubessy RC, Baltussen RM, Evans DB, et al. Stochastic league tables: communicating cost-effectiveness results to decision-makers. Health Econ 2001; 10 (5): 473–7
Sculpher MJ, Pang FS, Manca A, et al. Generalisability in economic evaluation studies in healthcare: a review and case studies. Health Technol Assess 2004; 8 (49): iii–iv, 192
Jefferson T, Demicheli V, Vale L. Quality of systematic reviews of economic evaluations in health care. JAMA 2002; 287: 2809–12
Sassi F, Archard L, McDaid D. Searching literature databases for health care economic evaluations: how systematic can we afford to be? Med Care 2002; 40: 387–94
Cooper K, Brailsford SC, Davies R, et al. A review of health care models for coronary heart disease interventions. Health Care Mngt Sci 2006; 9 (4): 311–24
Kim JJ, Andres-Beck B, Goldie SJ. The value of including boys in an HPV vaccination programme: a cost-effectiveness analysis in a low-resource setting. Br J Cancer 2007 Nov 5; 97 (9): 1322–8
Goldhaber-Fiebert JD, Stout NK, Salomon JA, et al. Cost-effectiveness analysis of HPV-DNA testing and prophylactic HPV vaccination for the prevention of cervical cancer in the U.S. J Natl Cancer Inst. In press
Goldhaber-Fiebert JD, Stout NK, Ortendahl JO, et al. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination. Popul Health Metr 2007; 5: 11
Hammerschmidt T, Goertz A, Wagenpfeil S, et al. Validation of health economic models: the example of EVITA. Value Health 2003; 6 (5): 551–9
Shepard DS, Walsh JA, Kleinau E, et al. Setting priorities for the Children’s Vaccine Initiative: a cost-effectiveness approach. Vaccine 1995; 13 (8): 707–14
Fine PEM. The contribution of modeling to vaccination policy. In: Cutts FT, Smith PG, editors. Vaccination and world health. West Sussex: John Wiley & Sons, 1994: 177–94
Acknowledgements
We are extremely grateful to Kara Cotich, Meredith O’Shea and Steve Sweet from the Program in Health Decision Science at Harvard School of Public Health for their technical assistance. We also greatly appreciate the helpful comments we received from anonymous reviewers. The authors have no conflicts of interest that are directly relevant to the content of this review. No sources of funding were used to assist in the preparation of this review. Drs Goldie and Kim are funded in part by the Bill and Melinda Gates Foundation (#30505 and #37883, respectively).
Appendix
1. An Example of the Search Strategy for Vaccine CEAs Using MEDLINE
In searching for CEAs of varicella vaccines using MEDLINE, we used the following search strategy: ((‘varicella’[TI] OR ‘chickenpox’[TI]) AND (‘immuniz*’[TI] OR ‘immunis*’[TI] OR ‘vaccin*’[TI]) AND (‘cost’[TI] OR ‘economic’[TI] OR ‘evaluat*’[TI])) OR ((‘chickenpox’[MeSH] OR ‘chickenpox vaccine’[MeSH]) AND (‘immunisation’[MeSH] OR ‘immunisation programs’[MeSH]) AND (‘cost-benefit analysis’[MeSH] OR ‘decision support techniques’[MeSH])) AND ‘humans’[MeSH] AND ‘English’[LANG] AND ‘journal article’[PT] NOT ‘review’[PT] AND (‘1976/01/01’[PDAT]: ‘2007/05/31’[PDAT]).
A total of 77 articles were retrieved during the initial search for varicella vaccine CEAs.
Of the 27 studies included in the final set of varicella vaccine CEAs, 26 were identified by this initial search (a sensitivity of 96%), indicating that the strategy is highly sensitive for detecting vaccine CEAs in general.
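To illustrate, the Boolean strategy above can be assembled programmatically and turned into a request against PubMed's E-utilities `esearch` endpoint. This is a minimal sketch, not part of the original study: the helper functions (`or_block`, `build_varicella_query`, `esearch_url`) are illustrative names, and the code only constructs the query string and URL rather than executing the search.

```python
# Sketch of how the appendix's MEDLINE/PubMed strategy could be built in code.
# The E-utilities esearch service is a real NCBI endpoint; no request is sent here.
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def or_block(terms, field):
    """Join terms into a parenthesised OR clause tagged with a PubMed field."""
    return "(" + " OR ".join(f"'{t}'[{field}]" for t in terms) + ")"

def build_varicella_query():
    """Reassemble the varicella search strategy from the appendix."""
    title_part = " AND ".join([
        or_block(["varicella", "chickenpox"], "TI"),
        or_block(["immuniz*", "immunis*", "vaccin*"], "TI"),
        or_block(["cost", "economic", "evaluat*"], "TI"),
    ])
    mesh_part = " AND ".join([
        or_block(["chickenpox", "chickenpox vaccine"], "MeSH"),
        or_block(["immunisation", "immunisation programs"], "MeSH"),
        or_block(["cost-benefit analysis", "decision support techniques"], "MeSH"),
    ])
    limits = ("'humans'[MeSH] AND 'English'[LANG] AND 'journal article'[PT] "
              "NOT 'review'[PT] AND "
              "('1976/01/01'[PDAT]: '2007/05/31'[PDAT])")
    return f"(({title_part}) OR ({mesh_part})) AND {limits}"

def esearch_url(query, retmax=100):
    """URL that would retrieve the matching PMIDs from PubMed."""
    return EUTILS_ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": query, "retmax": retmax})
```

Building the query from term lists in this way also makes it straightforward to repeat the strategy for the other vaccines in the review by swapping in different title and MeSH term sets.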
Kim, SY., Goldie, S.J. Cost-Effectiveness Analyses of Vaccination Programmes. Pharmacoeconomics 26, 191–215 (2008). https://doi.org/10.2165/00019053-200826030-00004