Radiotherapy optimAl Design: An Academic Radiotherapy Treatment Design System
Optimally designing radiotherapy and radiosurgery treatments to increase the likelihood of a successful recovery from cancer is an important application of operations research. Researchers have been hindered by the lack of academic software that supports comparisons between different solution techniques on the same design problem, and this article addresses the inherent difficulties of designing and implementing an academic treatment planning system. In particular, this article details the algorithms and the software design of Radiotherapy optimAl Design (RAD), which includes a novel reduction scheme, a flexible design to support comparative research, and a new imaging construct.
Key words: Optimization · Radiotherapy · Radiosurgery · Medical Physics
1 Introduction
With the USA spending about 18% of its gross national product on health care, the need to efficiently manage and deliver health services has never been greater. In fact, some distinguished researchers have claimed that if we are not judicious with our resources, then our health care system will burden society with undue costs and vast disparities in availability (Bonder, 2004; Pierskalla, 2004). Developing mathematical models that allow us to study and optimize medical treatments is crucial to the overall goal of efficiently managing the health care industry. Indeed, we have already witnessed medical advances by optimizing existing medical procedures, leading to better patient care, increased probability of success, and better time management.
Much of the previous work focuses on using operations research to improve administrative decisions, but several medical procedures are now being considered. The breadth and importance of these is staggering, and the academic community is poised to not only aid managerial decisions but also improve medical procedures. A prominent example of the latter is the use of optimization to design radiotherapy treatments, which is the focus of this article.
Cancer remedies largely fall into three categories: pharmaceutical, such as chemotherapy; surgical, whose intent is to physically remove cancerous tissue; and radiobiological, which uses radiation to destroy cancerous growths. Radiotherapy is based on the fact that radiation alters cells in a way that prevents them from replicating with damaged DNA. When a cell is irradiated with a beam of radiation, a secondary reaction forms a free radical that damages cellular material. If the damage is not too severe, a healthy cell can likely overcome the damage and replicate normally, but a cancerous cell is unlikely to regenerate. This difference in the abilities of cancerous and non-cancerous cells to repair themselves is called a therapeutic advantage, and the goal of radiotherapy is to deliver enough radiation so that cancerous cells expire but not so much as to permanently damage nearby tissues.
Radiotherapy treatments are delivered by focusing high-energy beams of ionizing radiation on a patient. The goal of the design process is to select the pathways along which the radiation will pass through the anatomy and to decide the amount of dose delivered along each pathway, called a fluence value. Bahr et al (1968a) first suggested optimizing treatments in 1968, and since then medical physicists have developed a plethora of models to investigate the design process. Currently, commercially available planning systems optimize only the fluence values and do not additionally optimize the pathways. These systems rely on optimization algorithms that range from gradient descent to simulated annealing. To date, the optimization approaches implemented clinically, typically by medical physicists working in radiation oncology, have been reasonably effective but have failed to exploit significant advances in operations research theory. As operations researchers have become aware of such problems, increasingly sophisticated optimization expertise has been brought to bear on the problem, leading to a growing potential for more elegant solutions.
The knowledge barrier between medical physicists, who understand the challenges and nuances of treatment design, and operations researchers, who know little about the clinical environment, is problematic. Clinical capabilities vary significantly, making what is optimal dependent on a specific clinic (treatments also depend on an individual patient). So, the separation of knowledge stems not only from the fact that the operations researchers generally know little about medical physics, but also from the fact that they typically know even less about the capabilities of a specific clinic. This lack of understanding is being overcome by several collaborations between the two communities, allowing academic advances to translate into improved patient care.
RAD supports this type of collaborative research with three components:
standard optimization software to model and solve problems,
a database to store cases in a well-defined manner, and
a web-based interface for visualization.
The use of standard modeling software makes it simple to alter and/or implement new models, a fact that supports head-to-head comparisons of the various models suggested in the literature. Storing problems in a database is an initial step toward creating a test bank that can be used by researchers to make head-to-head comparisons, and the web interface facilitates use. These features agree with standard OR practice in which algorithms and models are compared on the same problems and on the same machine.
The paper proceeds as follows. Subsection 1.1 gives a succinct introduction to the nature of radiotherapy and an overarching description of the technology associated with intensity modulated radiotherapy. Section 2 presents the radiation transport model that is currently implemented in RAD. This deterministic model estimates how radiation is deposited into the anatomy during treatment and provides the data used to form the optimization models. Section 3 discusses the somewhat annoying problem of dealing with the different coordinate systems that are native to the design question. A few optimization models from the literature are presented in Section 4. This section also highlights RAD's use of AMPL, which allows the optimization model to be altered and/or changed without affecting other parts of the system. Unfortunately, addressing the entire anatomy associated with a standard design problem leads to problems whose size is beyond the capability of modern solvers, and our reductions are presented in Section 5. The methods used by RAD to generate the images needed to evaluate treatments are presented in Section 6, and Section 7 discusses our software design, which includes the novel use of a database to store anatomical features. A few closing remarks are found in Section 8.
The fourth author once believed that a rudimentary version of RAD was possible within a year's effort. This was an extremely naive thought. RAD's implementation began in 1999, with the initial code being written in Matlab. The current version is written in C++ and PHP and links to AMPL, CPLEX and a MySQL database. At every turn there were numerical difficulties, software engineering obstacles, and verification problems with the radiation transport model. The current version required the concentrated effort of eight mathematics/computer science students, three Ph.D.s in mathematics/computer science, and one Ph.D. in medical physics spread over three years. The details of our efforts are contained herein.
1.1 The Nature of Radiotherapy
Treatment design naturally divides into three phases:
Beam Selection: Select the pathways along which the radiation will pass through the anatomy.
Fluence Optimization: Decide how much radiation (fluence) to deliver along each of the selected beams to best treat the patient.
Delivery Optimization: Decide how to deliver the treatment computed in the first two phases as efficiently as possible.
The first two phases of treatment design are repeated in the clinic as follows. A designer uses sophisticated image software to visually select beams (also called pathways or angles) that appear promising. The fluence to deliver along these beams is decided by an optimization routine, and the resulting treatment is judged. If the treatment is unacceptable, the collection of beams is updated and new fluences are calculated. This trial-and-error approach can take as much as several hours per patient. Once an acceptable treatment is created, an automated routine decides how to sequence the delivery efficiently. There is an inherent disagreement between the objectives of fluence and delivery optimization since a fluence objective improves as beams are added but a delivery objective degrades. Extending the delivery time is problematic since this increases the probability of patient movement and the likelihood of an inaccurate delivery.
The initial interest in optimizing radiotherapy treatments was focused on fluence optimization, and numerous models and solution procedures have been proposed (see Bartolozzi et al, 2000; Holder, 2004; Holder and Salter, 2004; Rosen et al, 1991; Shepard et al, 1999). The variety is wide and includes linear, quadratic, mixed integer linear, mixed integer quadratic, and (non) convex global problems. Clinically relevant fluence problems are large enough to make combining the first two phases, which is trivial to express mathematically, difficult to solve, and much of the current research is directed at numerical procedures to support a combined model. One of RAD's contributions is that it is designed so that different models and solution procedures can easily be compared on the same cases, allowing head-to-head comparisons that were previously unavailable.
While the treatment advantages of a collimator are apparent, the collimator significantly adds to the complexity of treatment design since it provides the ability to control small subdivisions of the beam. This is accomplished by dividing the open-field into sub-beams, whose size is determined by the collimator. For example, the collimator in Figure 2 has 32 opposing leaves that vertically divide the open-field. Although the leaves move continuously, we horizontally divide the open-field to approximate the continuous motion and design a treatment that has a unique value for each rectangular sub-beam. The exposure pattern formed by the sub-beams is called the fluence pattern, and an active area of research is to decide how to best adjust the collimator to achieve the fluence pattern as efficiently as possible (see Ahuja and Hamacher, 2004; Baatar and Hamacher, 2003; Boland et al, 2004).
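The decomposition of a fluence pattern into leaf positions can be illustrated with the classic unidirectional "sweep" idea, in which the left leaf parks where the fluence increases and the right leaf where it decreases. This is a minimal sketch of the sequencing techniques studied in the works cited above, not RAD's delivery code; function names are ours.

```python
def sweep_sequence(f):
    """Decompose one row of an integer fluence pattern into unit-intensity
    leaf openings via the unidirectional 'sweep' technique. Returns a list
    of (left, right) index pairs; cell i is exposed when left <= i < right."""
    n = len(f)
    # first differences, padded with zeros outside the field
    g = [f[0]] + [f[i] - f[i - 1] for i in range(1, n)] + [-f[-1]]
    lefts, rights = [], []
    for i, d in enumerate(g):
        if d > 0:
            lefts.extend([i] * d)       # left leaf parks at i for d unit segments
        elif d < 0:
            rights.extend([i] * (-d))   # right leaf parks at i for -d segments
    lefts.sort()
    rights.sort()
    return list(zip(lefts, rights))

def delivered(segments, n):
    """Recompose the fluence delivered by a list of unit-intensity segments."""
    dose = [0] * n
    for left, right in segments:
        for i in range(left, right):
            dose[i] += 1
    return dose
```

Summing the unit-intensity segments reproduces the requested row exactly, which is the defining property of a valid leaf sequence.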
A radiotherapy treatment is designed once at the beginning of treatment, but the total dose is delivered in multiple daily treatments called fractions. Fractionation allows normal cells the time to repair themselves while accumulating damage to tumorous tissues. The prescribed dose is typically delivered in 20 to 30 uniform treatments. Patient re-alignment is crucial, and in fact, the beam of radiation can often be focused with greater precision than a patient can be consistently re-aligned. Radiosurgery treatments differ from radiotherapy treatments in that the total intended dose is delivered all at once in one large fraction. The intent of radiosurgery is to destroy, or ablate, tissue. Patient alignment is even more important for radiosurgeries because the large amount of radiation being delivered makes it imperative that the treatment be delivered as planned.
2 Calculating Delivered Dose
The radiation transport model that calculates how radiation energy per unit mass is deposited into the anatomy, which is called dose, is crucial to our ability to estimate the anatomical effect of external beam radiation. Obviously, if the model that describes the deposition of dose into the patient does not accurately represent the delivered (or anatomical) dose, then an optimization model that aids the design process is not meaningful.
Numerous radiation transport models have been suggested, with the “gold standard” being a stochastic model that depends on a Monte Carlo simulation. This technique estimates the random interactions between the patient's cells and the beam's photons, and although highly accurate, such models generally require prohibitive amounts of computational time (although this is becoming less of a concern). We instead adapt the deterministic model from Nizin and Mooij (1997), which approximates each sub-beam's dose contribution. The primary dose relies on the beam's energy and on the ratio between the radius of the sub-beam and that of the open-field. The way a sub-beam scatters as it travels through the anatomy depends on its radius. Small radius beams have a large surface area compared to their volume, and hence, they lose a greater percentage of their photons than do larger radius sub-beams. When many contiguous sub-beams are used in conjunction, much of this scatter is regained by surrounding sub-beams, an effect called scatter dose buildup.
For distances greater than M, Nizin and Mooij (1997) report that the maximum error is 5% for clinically relevant beams when compared to Monte Carlo models. For extremely narrow beams, which are not clinically relevant, there is a maximum error of 8%. For our purposes, this level of accuracy is sufficient.
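The two ingredients described above, depth attenuation and a radius-dependent buildup, can be sketched with a toy function. The exponential form and the constants mu and beta below are illustrative stand-ins chosen for clarity; they are not the Nizin–Mooij parameters or fits to clinical data.

```python
import math

def primary_dose(depth_cm, radius_cm, mu=0.05, beta=2.0):
    """Toy narrow-beam primary dose: exponential attenuation with depth,
    scaled by a radius-dependent equilibrium factor. A small-radius
    sub-beam loses a larger fraction of its photons to lateral scatter,
    so its factor is closer to 0; wide beams approach 1.
    mu and beta are illustrative constants, not clinical values."""
    equilibrium = 1.0 - math.exp(-beta * radius_cm)
    return equilibrium * math.exp(-mu * depth_cm)
```

The sketch reproduces the qualitative behavior in the text: dose falls with depth and narrow sub-beams deliver less primary dose than wide ones.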
Although Figure 3 depicts angle a being divided into ‘flat’ sub-beams, the collimator segments a beam into a 2D grid. The column widths of the grid are decided by the width of the leaves, and the row widths depend on the type of collimator. Some collimators are binary, and each leaf is either open or closed. Other collimators allow each leaf to move continuously across the column, and in this situation the rows of the grid are used to discretize the continuous motion. The subscript i indexes through this grid at each angle, and hence, i is actually a 2D index. Similarly, the index for angles need not be restricted to a single great circle around the patient, and the index a represents an angle on a sphere around the patient.
3 Coordinate Systems
A complete foray into the authors' tribulations with the different coordinate systems is beyond the scope of this article, but a few notes are important. There are three coordinate systems that need to be aligned: 1) the coordinates for the patient images, 2) the coordinates for the dose points, and 3) the location of the gantry. The patient images are three dimensional, of the form (u, v, ζ), where each ζ identifies a cross sectional image. The images are not necessarily evenly spaced, with images often being closer through the target. The dose points are also three dimensional, of the form p = (u, v, z). As discussed below, placement of these points is restricted to an underlying, evenly spaced grid, and hence, the z coordinate does not necessarily agree with ζ. However, each dose point needs to be linked to a tissue that is defined by an image, and we associate (u, v, z) with (u, v, ζz), where ζz is the ζ closest to z. The gantry coordinates describe the machine and not the anatomy. To link the gantry's position and rotation to the patient, we need to know the location of the isocenter within the patient and the couch angle. Gantry coordinates are calculated in polar form and translated to rectangular coordinates that are synced with the anatomy's position on the couch.
Our solution to aligning the coordinate systems is to build a three dimensional rectangle whose coordinates are fixed and whose center is always the location of the isocenter. We load the patient images into the rectangle so that they position the isocenter accordingly, and then build a three dimensional grid in the rectangle that defines where dose points are allowed to be placed. The couch angle defines a great circle around the fixed rectangle that allows us to position and rotate the gantry, which in turn allows us to track the sub-beams as the gantry moves.
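The two linking steps above can be sketched as follows. The slice association implements the closest-ζ rule from the text; the gantry translation is a simplified polar-to-rectangular conversion with a parameterization of our own choosing, not the IEC machine-coordinate convention, and all function names are hypothetical.

```python
import math

def nearest_slice(z, slice_zs):
    """Associate a dose point's z coordinate with the closest image
    slice zeta; the slices need not be evenly spaced."""
    return min(slice_zs, key=lambda zeta: abs(zeta - z))

def gantry_to_rect(gantry_deg, couch_deg, sad, isocenter):
    """Translate a gantry position given in polar form (gantry and couch
    angles plus source-axis distance) into the fixed rectangular
    coordinates centered at the isocenter. The axis convention here is
    an illustrative choice, not a clinical standard."""
    g = math.radians(gantry_deg)
    c = math.radians(couch_deg)
    x = isocenter[0] + sad * math.sin(g) * math.cos(c)
    y = isocenter[1] + sad * math.cos(g)
    z = isocenter[2] + sad * math.sin(g) * math.sin(c)
    return (x, y, z)
```

With gantry and couch at zero, the source sits at source-axis distance directly along one axis from the isocenter, which is the sanity check one would run first when validating such a transform.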
4 Optimization Models
Table 1 Prescription information gathered by RAD. Each of these vectors is indexed to accommodate the required number of structures.
A goal dose for a target (TG).
A goal lower bound for a target.
A goal upper bound for a target.
A goal upper bound for a critical structure.
A goal upper bound for the normal tissues.
An absolute maximum dose allowed on any structure.
An absolute minimum dose allowed on the target.
Percent of volume allowed to violate an upper bound.
Percent of volume allowed to violate a lower bound.
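The prescription data in Table 1 might be grouped as below. This is a minimal sketch with field names of our own choosing (RAD's internal identifiers are not documented here); only the symbol TG for the target goal dose follows the article's notation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Prescription:
    """The prescription information of Table 1, one list entry per
    structure where a vector is indexed by structure."""
    target_goal: List[float]      # goal dose for each target (TG)
    target_lower: List[float]     # goal lower bound for each target
    target_upper: List[float]     # goal upper bound for each target
    critical_upper: List[float]   # goal upper bound per critical structure
    normal_upper: float           # goal upper bound for the normal tissues
    absolute_max: float           # absolute maximum dose on any structure
    absolute_min: float           # absolute minimum dose on the target
    upper_violation_pct: float    # % of volume allowed above an upper bound
    lower_violation_pct: float    # % of volume allowed below a lower bound
```

Grouping the bounds this way keeps the web interface's questions, the database record, and the optimization model's data in one well-defined shape.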
Once an optimization model is decided, an optimal treatment can be calculated using a host of different solvers. If the model is linear, solver options include the primal simplex method, the dual simplex method, Lemke's algorithm, and several interior point methods. Unless the optimization problem has a unique solution, it is likely that different solution algorithms will terminate with different fluence patterns (although the objective values will be the same). This phenomenon has been observed in Ehrgott et al (2005), where CPLEX's dual simplex was shown to routinely design treatments with a few angles having unusually high fluences. If the model is nonlinear but smooth, typical options are gradient descent, Newton's method, and variants of quasi-Newton methods.
In the spirit of RAD's academic intent, one of our goals is to allow easy and seamless flexibility in how optimal treatments are defined and calculated. This is possible by using pre-established software that is designed to model and solve an optimization problem. In particular, we separate data acquisition, modeling, and solving. This differs from the philosophy behind most of the in-house systems developed by individual clinics, where modeling and solving are intertwined in a single computational effort. Mingling the two complicates the creative process because changing either the model or the solution procedure often requires large portions of code to be rewritten, thus hindering exploration. We instead use standard software to model and solve a problem. For instance, RAD uses AMPL to model problems, which makes adjusting existing models and entering new ones simple. AMPL links to a suite of 35 solvers and accommodates (integer) linear and (integer) quadratic models, among many others. RAD currently has access to CPLEX, MINOS and PCx. This approach takes advantage of the numerous research careers that have gone into developing state-of-the-art software to model and solve optimization problems, and hence, brings a substantial amount of expertise to the design of radiotherapy treatments.
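To make the separation of data, model, and solver concrete, here is a toy fluence problem under a quadratic penalty model, one of the model classes mentioned above. Projected gradient descent stands in for the AMPL/CPLEX pipeline, the two-point dose matrix is fabricated for illustration, and the function name is ours.

```python
def solve_fluence(A, goal, iters=2000, step=0.05):
    """Minimize ||A x - goal||^2 subject to x >= 0 by projected gradient
    descent. A is a (dose points x sub-beams) dose matrix, goal holds the
    prescribed dose at each point, and x is the fluence vector. A toy
    stand-in for the modeling-language/solver pipeline described above."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual between delivered and prescribed dose
        r = [sum(A[i][j] * x[j] for j in range(n)) - goal[i] for i in range(m)]
        # gradient of the quadratic penalty
        grad = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, projected onto the nonnegative orthant
        x = [max(0.0, x[j] - step * grad[j]) for j in range(n)]
    return x
```

Because fluences are physically nonnegative, the projection step matters: an unconstrained least-squares solve could return negative fluence values that no machine can deliver.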
There are limitations to this design approach, especially with global (non-convex) problems, which are often successfully solved with simulated annealing or other heuristics. The lack of access to quality global-optimization heuristics is a detriment because simulated annealing is one of the industry's primary solution methods. It has the favorable quality that it can accommodate any model, but the unfavorable quality that there is no guarantee of optimality. However, the medical physics community has learned to trust the technique because it has consistently provided quality treatments. Moreover, some of the suggested optimization models are non-convex global models. A future goal is to link RAD to global solvers like LGO, simulated annealing, and genetic algorithms. Once this is complete, a large-scale study of which model and solution methodology provides the best clinical benefit is possible. Wide-scale testing of different models and solution procedures has not been undertaken, but RAD has the potential to support this work. These comparisons are important because clinical desires vary from clinic to clinic and from physician to physician, which leads to the situation where the sense of optimality, i.e. the optimization model, can differ from one physician to another for the same patient. It is possible, however, that the basic treatment goals pertinent to all clinics and physicians are best addressed by a single model-and-solver combination. If this is true, then such a combination would provide a consensus on what an optimal treatment is for a specific type of cancer and a subsequent standard of care for clinics with similar technology.
5 Problem Management
The size of a typical design problem is substantial, making it difficult to solve. Indeed, the results in Cheek et al (2004) show that storing the dose matrix can require over 600 gigabytes of memory. For this reason, it is necessary to use reduction schemes to control the size of a problem, and this section discusses the methods introduced in RAD, several of which are designed to assist the combination of beam selection and fluence optimization.
The current practice of asking the treatment planner to select beams has the favorable quality that the underlying fluence problem only considers the selected beams, which reduces the number of columns in the dose matrix. RAD is capable of addressing a fluence problem with a large number of candidate beams by judiciously selecting sub-beams and dose points. The first reduction is to simply remove the sub-beams that do not strike the target. One naive way to remove these sub-beams is to calculate an entire dose matrix and then remove the columns whose aggregate rate to the target is below a specified threshold. RAD's approach is different: before calculating the rates associated with a sub-beam, we search for the minimum off-axis factor over the dose points on the surface of the target. If the minimum value is too great, we classify the sub-beam as non-target-striking and are spared the calculation of this column. This technique requires two calculations that are not needed by the naive approach, namely locating the target's surface and evaluating the minimum off-axis factor. Both calculations consider only the target, whereas the naive approach calculates rate information for every point in the anatomy. Our numerical comparisons, even for large targets, show that RAD's approach is significantly faster.
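The reduction can be sketched as follows. A minimum distance from the sub-beam's central ray to the target's surface points serves as a simplified stand-in for RAD's minimum off-axis factor, and the threshold is an illustrative parameter; function names are ours.

```python
def min_distance_to_target(source, direction, surface_points):
    """Minimum distance from a sub-beam's central ray to the target's
    surface points; a geometric stand-in for the minimum off-axis factor."""
    dx, dy, dz = direction
    norm = (dx * dx + dy * dy + dz * dz) ** 0.5
    dx, dy, dz = dx / norm, dy / norm, dz / norm
    best = float("inf")
    for px, py, pz in surface_points:
        vx, vy, vz = px - source[0], py - source[1], pz - source[2]
        t = vx * dx + vy * dy + vz * dz          # projection onto the ray
        cx, cy, cz = vx - t * dx, vy - t * dy, vz - t * dz
        best = min(best, (cx * cx + cy * cy + cz * cz) ** 0.5)
    return best

def strikes_target(source, direction, surface_points, threshold=0.5):
    """Classify a sub-beam as target-striking BEFORE any dose-rate column
    is computed, mirroring the reduction described above: if even the
    closest surface point is too far off-axis, skip the column."""
    return min_distance_to_target(source, direction, surface_points) <= threshold
```

The point of the test is its cost profile: it touches only the target's surface points, whereas the naive filter computes rate coefficients for every dose point in the anatomy first.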
A novel reduction introduced in RAD is to accurately define the anatomical region that will receive significant dose. For example, consider a lesion in the upper part of the cranium. The volume defined by the entire set of patient images is called the patient volume, and for this example it would likely encompass the head and neck. However, it is unlikely that we will need to calculate dose in the area of the neck. We have developed a technique that defines a (typically much) smaller volume within the patient where meaningful dose is likely to accumulate.
The arrangement of dose points over the treatment volume is critical for two reasons: 1) the discrete representation of the anatomical dose needs to accurately estimate the true continuous anatomical dose, and 2) the size of the problem grows as dose points are added. Clinical relevance is achieved once the dose points are within 2 mm of each other. Some technologies are capable of taking advantage of 0.1 mm differences, and hence, require much finer grids. Our experiments show that using the treatment volume instead of the patient volume reduces the storage requirement to tens of gigabytes instead of hundreds with a grid size of 2 mm, assuming a single isocenter and couch angle. Although this is a significant reduction, solving a linear or quadratic problem with a dose matrix in the 10 gigabyte range is impossible with CPLEX on a 32 bit PC; there are simply not enough memory addresses.
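The storage figures can be checked with back-of-the-envelope arithmetic. The grid dimensions and beam counts below are illustrative choices that reproduce the magnitudes quoted above (hundreds of gigabytes shrinking to tens), not RAD's actual settings.

```python
def dose_matrix_gb(n_dose_points, n_subbeams, bytes_per_entry=8):
    """Size in gigabytes of a dense, double-precision dose matrix with
    one row per dose point and one column per sub-beam."""
    return n_dose_points * n_subbeams * bytes_per_entry / 1e9

# hypothetical patient volume: a 24 cm cube at 2 mm spacing -> 120^3 points
patient_points = 120 ** 3
# hypothetical candidate pool: 72 gantry angles, each a 25 x 25 sub-beam grid
subbeams = 72 * 625

full = dose_matrix_gb(patient_points, subbeams)           # hundreds of GB
reduced = dose_matrix_gb(patient_points // 20, subbeams)  # treatment volume
```

Even with these modest assumptions the dense matrix lands in the hundreds of gigabytes, and restricting dose points to a treatment volume a twentieth the size brings it into the tens, matching the reductions reported in the text.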
On the first solve we only include normal tissue dose points that are adjacent to the target, forming a collar around the target. This concept was used in the earliest work in the field (see Bahr et al, 1968b).
We segment the patient volume into 1 cm³ sectors.
We trace the sub-beams that have sufficiently high fluence values, and each sector is scored by counting the number of high fluence sub-beams that pass through it.
Normal tissue dose points are placed within the treatment volume for the sectors with high scores.
The process repeats with the added dose points until every sector receives a sufficiently small score.
This iterative procedure solves several small problems instead of one large one. On clinical examples, the initial dose matrices are normally under 1 gigabyte, a size that is appropriate for the other software packages. We point out that RAD does not calculate the anatomical dose to each sector but rather only counts where high exposure sub-beams intersect. Just because a few high exposure sub-beams pass through a sector does not mean that the sector is a hot spot, but it does mean that a hot spot is possible. Sectors with low counts cannot be hot spots because it is impossible to accumulate enough dose without several high exposure sub-beams. We find that one to five iterations completely remove hot spots outside the target. At the end of the process, the dose matrix has typically grown negligibly and remains around 1 gigabyte in size.
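The sector-scoring step at the heart of the iteration can be sketched as below; the data layout (sector ids as grid tuples) and the threshold are hypothetical, and the sketch counts ray crossings rather than computing any dose, exactly as described above.

```python
def score_sectors(subbeam_paths, high_fluence, threshold=3):
    """Score each 1 cm^3 sector by the number of high-fluence sub-beams
    passing through it, and return the sectors whose score is large
    enough to warrant dose points on the next solve.

    subbeam_paths maps a sub-beam id to the set of sectors its central
    ray crosses; high_fluence lists the sub-beams whose optimized
    fluence exceeded the cutoff on the previous solve."""
    counts = {}
    for beam in high_fluence:
        for sector in subbeam_paths[beam]:
            counts[sector] = counts.get(sector, 0) + 1
    return {s for s, c in counts.items() if c >= threshold}
```

Each pass adds dose points only where several high-exposure sub-beams coincide, so the dose matrix grows just enough to rule out the potential hot spots.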
The iterative procedure above reduces the number of rows of the dose matrix so dramatically that we can increase the number of beams. Although a clinic will only use a fraction of the added beams, solving for optimal fluences with many beams provides information about which beams should and should not be selected. A complete development of beam selection is not within the scope of this work, and we direct readers to Ehrgott et al (2005) for a thorough treatment. Beam selectors are classified as uninformed, weakly informed or informed. An uninformed selector is one that only uses geometry, and the current clinical paradigm of selecting beams by what looks geometrically correct is an example. A weakly informed selector is guided by the dose matrix and the prescription, and an informed selector further takes advantage of an optimal fluence pattern calculated with a large set of possible beams. The premise behind an informed selector is that it begins with a large set of possible beams that are allowed to ‘compete’ for fluence through an optimization process. The expectation is that a beam with a high fluence is more important than a beam with a low fluence. The numerical results in Ehrgott et al (2005) demonstrate that informed selectors usually select quality beams in a timely fashion.
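At its simplest, an informed selector reduces to ranking beams by the fluence they won in the competition; this one-line sketch omits the refinements studied in Ehrgott et al (2005), and the function name is ours.

```python
def informed_select(fluence_by_beam, k):
    """Keep the k beams that received the most total fluence when a large
    candidate set competed in a single fluence optimization -- the premise
    of an 'informed' beam selector."""
    ranked = sorted(fluence_by_beam, key=fluence_by_beam.get, reverse=True)
    return ranked[:k]
```

A clinic would then re-solve the fluence problem restricted to the k selected beams, so the final treatment uses only a fraction of the candidate set.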
Approximate dose matrix sizes for a 20 cm³ region with 3 couch angles around a single isocenter in the middle of the patient volume. A 2 mm 3D grid spacing is assumed. Each swath is 1 cm in width (6 dose points wide), is parallel to one of the axes, and passes through the center of the patient volume. The example assumes that 10,000 dose points in the treatment volume are not normal and that 20,000 additional dose points outside the treatment volume are needed to describe the critical structures. Each beam is assumed to have a 25 × 25 grid of sub-beams, of which 4 are assumed to strike the target. The final treatment has 10 angles.

Number of Rows    Number of Columns    Size of A (entries)
     —               1.3 × 10^5              —
3.0 × 10^4           8.4 × 10^2           2.5 × 10^7
3.8 × 10^5           4.0 × 10^1           1.5 × 10^7
We conclude this section with a brief discussion about sparse matrix formats and other reductions that did not work. A straightforward approach of reducing the storage requirements of the dose matrix is to store only those values above a predefined threshold. This method requires the calculation of every possible rate coefficient over the patient volume. Our approach of defining the treatment volume preempts the majority of these calculations and is faster. That said, about 90% of the rate coefficients over the treatment volume are insignificant since each sub-beam delivers the majority of its energy to a narrow path through the treatment volume. So, a sparse matrix format over the treatment volume should further reduce our storage requirements. However, our reduction schemes allow us to forgo a sparse matrix format and store a dose matrix as a simple 2D array with double precision. This simplicity has helped us debug and validate the code.
Before arriving at the reductions above, we attempted a different method of placing dose points. The idea was to use increasingly sparse grids for the target, critical structures, and normal tissues. This is not a new idea, with different densities being considered in Lim et al (2002) and Morrill et al (1990). There are two problems with this approach. First, the voxels of different grids have different volumes, and our code to handle the volumes at the interface of different grids was inefficient. Second, the sparsity of the normal tissue grid had to exceed 1 cm (often 2+ cm) to accommodate the use of many angles, which is clinically unacceptable. Moreover, the sparsity did not prevent hot spots from appearing in the normal tissue. We are aware that Nomos's commercial software uses an octree technique that allows varying densities, so it is possible to use this idea successfully, although our attempt failed.
6 Solution Analysis
A treatment undergoes several evaluations once it is designed. In fact, the number of ways a treatment is judged is at the heart of this research, for if the clinicians could tell us exactly what they wanted to optimize, we could focus on optimizing that quantity. However, no evaluative tool comprehensively measures treatment quality, which naturally makes the problem multiple objective (see Hamacher and Küfer, 2002; Holder, 2001). The issue is further complicated by the fact that treatment desires are tailored to specific patients at a specific clinic. That said, there are two general evaluative tools.
Notice that a DVH curve depends on the volumetric estimate of the corresponding structure, an observation that leads to a subtle issue. Different clinics are likely to create different volumes of the normal tissue by scanning different patient volumes. This means the curve for normal tissue will vary, and in particular, the information provided by this curve diminishes as more normal tissue is included. For example, if we were treating the lesion in Figure 5, we could artificially make it appear as though less than 1% of the normal tissue receives a significant dose by including the tissue in the patient's feet. The authors of this paper are unaware of any standard, and for consistency, all of RAD's DVHs are based on the treatment volume, which is a definable and reproducible quantity that removes the dependence on the clinically defined patient volume.
A DVH visually displays the amount of a structure that violates the prescription but does not capture the spatial position of the violation. If 10% of a structure's volume violates a bound but is distributed throughout the structure, then there is likely no radiobiological concern. However, if the violating volume is localized, then it might be a hot spot and the treatment is questionable. To gain spatial awareness of where dose is and is not accumulated, each of the patient images is contoured with a sequence of isodose curves. Examples are found in Figure 5. Each of these curves contains the region receiving dose above a specified percentage of the target dose. So, a 90% isodose curve contains the tissue that receives at least 0.9×TG. Isodose curves clearly indicate the spatial location of high dose regions, but they require the user to investigate each image and form a mental picture of the 3D dose. Since there are often hundreds of scans, this is a tedious process.
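A cumulative DVH is straightforward to compute once the dose at each of a structure's dose points is known; the sketch below assumes those per-point doses as input and uses names of our own choosing.

```python
def dvh(doses, max_dose, bins=100):
    """Cumulative dose-volume histogram for one structure: the point
    (d, v) on the curve means a fraction v of the structure's volume
    receives a dose of at least d. Built from the dose values at the
    structure's dose points, so it inherits the volumetric estimate
    discussed above."""
    n = len(doses)
    curve = []
    for b in range(bins + 1):
        d = max_dose * b / bins
        frac = sum(1 for x in doses if x >= d) / n
        curve.append((d, frac))
    return curve
```

Because the fractions are taken over the structure's own dose points, the normal-tissue curve depends directly on which volume those points cover, which is exactly the reproducibility issue that motivates basing DVHs on the treatment volume.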
7 Software Design & Structure Identification
The previous sections' discussions about the algorithms that support RAD do not address the software engineering aspects, and the authors would be remiss if they did not discuss how the different parts of RAD interlink. Some of the topics in this section are general software issues and others are specific to the design of radiotherapy treatments.
Another of RAD's unique features is that it stores problems in a MySQL database. Beyond serving as RAD's information repository, the database is intended to grow into a library of problems for comparative research. The medical literature on treatment design is vast, but each paper highlights a technique on a few examples from a specific clinic, examples that cannot be used by others to compare results. This is at odds with standard practice in computational research, where shared test libraries are the norm, and RAD's database will support the numerical work necessary to fairly evaluate different models and algorithms.
Tissue information is captured with a TGA image that is generated for each patient image by flooding each tissue with a unique color. For example, the three segments in Figure 10 would be linearly interpolated, and the pixels within the outer region but outside the inner region would be flooded with a color unique to kidney tissue. The images are produced by a PHP class and are not stored but rather generated as needed from the list of points in the database, which reduces storage requirements. Representative TGA images for each tissue are displayed via a web interface that additionally asks the user for each tissue's prescription information. Each dose point is associated with the closest pixel on a patient image, with ties decided by a least-index rule. The TGA images are thus the link between the user-defined prescription and the associated bounds of the optimization problem.
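The dose-point-to-pixel association can be sketched in a few lines. The function below is illustrative rather than RAD's code; note that taking the first minimum of the squared distances naturally implements the least-index tie rule.

```python
def closest_pixel(point, pixel_centers):
    """Return the index of the pixel center nearest a dose point.
    list.index(min(...)) returns the first minimum, i.e. the least index."""
    d2 = [(point[0] - x) ** 2 + (point[1] - y) ** 2 for x, y in pixel_centers]
    return d2.index(min(d2))

pixels = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(closest_pixel((0.5, 0.0), pixels))  # 0: tied with pixel 1, least index wins
```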
Another concern in tissue identification is that regions representing different tissues may intersect. Our simple solution follows that of several commercial systems: we ask the user to rank the tissues, and the dose points within an intersection are labeled as the tissue with the highest priority.
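The priority rule amounts to a first-match lookup over the ranked tissues. The sketch below is illustrative only; the tissue names and the "unclassified" fallback are assumptions, not RAD's behavior.

```python
def label_point(memberships, ranked_tissues):
    """Label a dose point by the highest-priority tissue containing it.
    ranked_tissues is ordered from highest to lowest priority."""
    for tissue in ranked_tissues:
        if tissue in memberships:
            return tissue
    return "unclassified"

# A point inside both the tumor and kidney contours is labeled tumor.
print(label_point({"kidney", "tumor"}, ["tumor", "kidney", "normal"]))  # tumor
```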
We have already defined the information available for the prescription in Table 1. The Settings information details the dose point grid, the location of the possible angles, and the type and sub-division of the beam. Whereas a Problem comprises the information needed to design a treatment, a Solution additionally includes the type of optimization model and the technique used to solve it, i.e., how we define and find optimality. Hence, a Solution is everything needed to define the anatomical dose of an optimal treatment, and with this information it is possible to render a treatment for evaluation.
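The Problem/Solution distinction can be summarized with a small data model. The field names below are assumptions chosen for illustration, not RAD's actual database schema.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    case: str           # patient images and identified structures
    prescription: dict  # per-tissue dose bounds (Table 1)
    settings: dict      # dose point grid, candidate angles, beam sub-division

@dataclass
class Solution:
    problem: Problem    # a Solution contains everything a Problem does...
    model: str          # ...plus how optimality is defined
    solver: str         # and how optimality is found
    fluences: list      # the resulting optimal treatment

p = Problem("case-1", {"tumor": (60.0, 75.0)}, {"angles": 36})
s = Solution(p, model="LP", solver="cplex", fluences=[1.0, 0.4])
print(s.model, s.problem.case)  # LP case-1
```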
Step 1: RTOG files are parsed by a PHP script that reads Case information into a MySQL database.
Step 2: Representative tga images are generated from the Case and are displayed via a web interface to gain a Prescription.
Step 3: Settings are selected to match a clinical setting.
Step 4: An optimization model and solution technique are selected.
Step 5: Using the reductions and the iterative scheme described above, RAD calculates an optimal treatment. Models are generated in AMPL and solved by the solver selected in Step 4.
Step 6: Solution information is written to the MySQL database.
Step 7: Visualization scripts written in PHP generate dose-volume histograms and isodose contours, which are displayed through a web interface.
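The seven steps above form a linear pipeline that threads state from parsing through visualization. The sketch below is purely illustrative; the stage names and stub functions are assumptions, not RAD's actual API.

```python
def run_pipeline(stages, state=None):
    """Thread a state dictionary through an ordered list of (name, fn) stages."""
    state, log = dict(state or {}), []
    for name, fn in stages:
        state = fn(state)
        log.append(name)
    return state, log

# Stubs standing in for the seven steps (not RAD's actual functions).
stages = [
    ("parse_rtog",     lambda s: {**s, "case": "case-1"}),    # Step 1
    ("prescribe",      lambda s: {**s, "rx": "bounds"}),      # Step 2
    ("settings",       lambda s: {**s, "grid": "clinical"}),  # Step 3
    ("choose_model",   lambda s: {**s, "model": "LP"}),       # Step 4
    ("optimize",       lambda s: {**s, "plan": "optimal"}),   # Step 5
    ("store_solution", lambda s: {**s, "stored": True}),      # Step 6
    ("visualize",      lambda s: {**s, "dvh": True}),         # Step 7
]
state, log = run_pipeline(stages)
print(log[0], "->", log[-1])  # parse_rtog -> visualize
```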
21.8 Conclusion
Orchestrating the creation of a radiotherapy design system is a significant task that lives at the application edge of operations research and computer science. This paper has discussed many of the fundamental concerns and has introduced several new tactics that allow the underlying optimization problem to be approached with standard software, even in the case of numerous possible angles. It additionally introduces a new method to draw isocontours. The system is based on an efficient and well studied radiation transport model.
Many researchers have faced the challenge of building their own design software, which is why several in-house research systems exist. Our goal was to detail the algorithmic and software perspectives of RAD so that others can incorporate our experience as they either begin or continue their research. In the future, the authors will initiate a rigorous, detailed, and widespread investigation into which model and solution method consistently produces quality treatments. Moreover, RAD will allow others to compare their (new) techniques to ours with the same dose model and patient information. Lastly, RAD is designed to accommodate the amalgamation of beam selection and fluence optimization, a topic that is currently receiving interest.
Acknowledgements Research supported by the Cancer Therapy & Research Center in San Antonio, TX and by the Department of Radiation Oncology, Huntsman Cancer Center, Salt Lake City, UT.
- Ahuja R, Hamacher H (2004) Linear time network flow algorithm to minimize beam-on-time for unconstrained multileaf collimator problems in cancer radiation therapy. Tech. rep., Department of Industrial and Systems Engineering, University of Florida, revised version under review in Networks
- Aleman D, Romeijn H, Dempsey J (2006) A response surface approach to beam orientation optimization in intensity modulated radiation therapy treatment planning. In: IIE Conference Proceedings
- Aleman D, Romeijn H, Dempsey J (2007) A response surface approach to beam orientation optimization in intensity modulated radiation therapy treatment planning. Tech. rep., Department of Industrial Engineering, the University of Florida, to appear in IIE Transactions: Special Issue on Healthcare Applications
- Baatar D, Hamacher H (2003) New LP model for multileaf collimators in radiation therapy planning. In: Proceedings of the Operations Research Peripatetic Postgraduate Programme Conference ORP3, Lambrecht, Germany, pp 11–29
- Bahr G, Kereiakes J, Horwitz H, Finney R, Galvin J, Goode K (1968a) The method of linear programming applied to radiation treatment planning. Radiology 91:686–693
- Bahr GK, Kereiakes JG, Horwitz H, Finney R, Galvin J, Goode K (1968b) The method of linear programming applied to radiation treatment planning. Radiology 91:686–693
- Boender C, et al (1991) Shake-and-bake algorithms for generating uniform points on the boundary of bounded polyhedra. Operations Research 39(6)
- Bonder S (2004) Improving OR support for healthcare delivery systems: Guidelines from military OR experience. INFORMS Annual Conference, Denver, CO
- Caron R (1998) Personal communications
- Ehrgott M, Holder A, Reese J (2005) Beam selection in radiotherapy design. Tech. Rep. 95, Trinity University Mathematics, San Antonio, TX
- Holder A (2001) Partitioning multiple objective optimal solutions with applications in radiotherapy design. Tech. rep., Department of Mathematics, Trinity University, San Antonio, USA
- Holder A (2004) Radiotherapy treatment design and linear programming. In: Brandeau M, Sainfort F, Pierskalla W (eds) Operations Research and Health Care: A Handbook of Methods and Applications, Kluwer Academic Publishers, chap 29
- Holder A, Salter B (2004) A tutorial on radiation oncology and optimization. In: Greenberg H (ed) Emerging Methodologies and Applications in Operations Research, Kluwer Academic Press, Boston, MA
- Lim G, Choi J, Mohan R (2007) Iterative solution methods for beam angle and fluence map optimization in intensity modulated radiation therapy planning. Tech. rep., Department of Industrial Engineering, University of Houston, Houston, Texas, to appear in the Asia-Pacific Journal of Operations Research
- Lim J, Ferris M, Shepard D, Wright S, Earl M (2002) An optimization framework for conformal radiation treatment planning. Tech. Rep. Optimization Technical Report 02–10, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin
- Pierskalla W (2004) We have no choice — health care delivery must be improved: The key lies in the application of operations research. INFORMS Annual Conference, Denver, CO