The story of computing in nuclear medicine is one of continually improving performance and of interplay between technologies and applications, though sometimes with a pause while technical capability caught up with user demand.

At the beginning of the medical use of radioactivity in clinical research and diagnosis, measurements of nuclear radiation, typically with a Geiger–Müller counter or a scintillation counter, enabled the radioactivity of samples to be compared. Subtraction of background was a simple arithmetical task, but calculation of ratios (for example, to measure percentage uptake) required division, a more difficult task. That task was made easy by use of an analogue computer, the slide rule, which remained a standard item of laboratory equipment until the early 1970s, when small portable electronic calculators replaced the mechanical and electromechanical calculators previously used for numerical calculation and made the slide rule redundant.

Before imaging became the dominant modality in the medical use of radioactivity, many research and clinical procedures had used radioactive tracers to study dynamic processes, typically the passage of a tracer substance through, for example, the kidneys. The count rate from a sodium iodide detector placed over a kidney would drive the pen of a chart recorder to produce an activity-time curve, which was then used to assess organ (e.g. renal) function. To separate the component due to uptake in the kidney from the decreasing contribution of activity in circulating blood, it was necessary to use a separate detector over the heart to record a blood-pool curve, and then to remove the blood contribution from the organ data by a process of deconvolution. This was time-consuming, and for other physiological investigations there might be further physiological components to deal with, and hence even more complexity. The task, referred to as compartmental analysis of time-dependent data, was one to which computers were applied in the 1960s. One type used was the analogue computer, which employed electronic components such as operational amplifiers and resistors, the connections between which could be altered with plug-in wiring, to model the physiological compartments quantitatively and produce an output signal that could be compared with the observed chart recording [1]. While this method was technically feasible, digital computers were becoming more generally available at the same time; within a few years they won this particular race, and they also enabled analysis of the uncertainties involved in fitting models to data [2].

Universities each generally had a mainframe computer which could be used if the data were presented in a suitable form. Initially that could be an obstacle, with data having to be transferred to punched cards, but in the late 1960s smaller, more accessible computers were installed in hospitals – the one I used at Hammersmith was installed to process pathology laboratory data, but fortunately was also available to other users. Like other installations of that era, the computer occupied a large air-conditioned room with several cabinets housing the central processor, magnetic tape drives, paper tape punch and reader, a line printer and a teletype keyboard for controlling the machine. It had a central memory of 16 kB – about a million-fold less than that in a modern smartphone – which posed some programming challenges for dealing with image data.

By 1970, interest had shifted from analysis of organ uptake curves from separate small detectors to the more accurate results it was hoped could be obtained by acquisition and analysis of imaging data. My work at that time centred on a whole-body dual-detector rectilinear scanner equipped with a paper tape punch to record the counts from each detector in every successive short time period. After getting the scan data into the computer, line-printer (paper) outputs were used to define regions of interest and hence to determine percentage organ uptake for research applications, including dosimetry for new radiotracers [3].
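The kind of curve analysis described at the start of this section can be illustrated with a short sketch in Python. It simply subtracts a scaled blood-pool curve from an organ time-activity curve and fits a one-compartment uptake model; it is not the deconvolution or analogue-computer approach of refs [1, 2], and the data, model form and parameter values are all invented for illustration.

```python
# Illustrative only: separating organ uptake from the circulating-blood
# contribution using a scaled blood-pool curve and a simple uptake model.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 20, 0.5)                              # minutes after injection
blood = 100.0 * np.exp(-0.3 * t)                       # blood-pool (heart) detector counts
organ = 0.4 * blood + 60.0 * (1 - np.exp(-0.25 * t))   # organ detector counts

def model(t, f_blood, uptake, rate):
    """Organ counts = blood background + saturating uptake component."""
    return f_blood * blood + uptake * (1 - np.exp(-rate * t))

params, _ = curve_fit(model, t, organ, p0=[0.5, 50.0, 0.2])
f_blood, uptake, rate = params
kidney_only = organ - f_blood * blood                  # organ curve with blood background removed
print(f"blood fraction {f_blood:.2f}, uptake rate {rate:.2f} /min")
```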
Meanwhile my colleague was using a specially built electronic system [4] to acquire image data from the whole field of view of a gamma camera in two-dimensional 64 × 64 frame mode: the X, Y position data for each detected gamma ray were converted by ADCs into one of two data buffers, acquisition continuing into one while the other was read out onto magnetic tape, which could then be taken to the London University computer centre. This still left a fairly laborious process for defining regions of interest, but it did enable simultaneous analysis of dynamic studies over multiple regions of interest, with correction for tissue background activity.
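The two-buffer (double-buffered) acquisition scheme can be sketched schematically as follows; the event source and the 'tape' writer are simulated, and all names and numbers are hypothetical.

```python
# Schematic sketch of double-buffered 64 x 64 frame-mode acquisition.
import numpy as np

FRAME_SHAPE = (64, 64)
buffers = [np.zeros(FRAME_SHAPE, dtype=np.uint16) for _ in range(2)]
active = 0                                    # index of the buffer currently being filled

def record_event(x_adc, y_adc):
    """Histogram one detected gamma ray into the active frame buffer."""
    buffers[active][y_adc, x_adc] += 1

def swap_and_write(write_out):
    """At the end of a frame, swap buffers and write the full one to 'tape'."""
    global active
    full, active = active, 1 - active
    write_out(buffers[full].copy())
    buffers[full][:] = 0

# Simulated use: random event positions for one frame, then a buffer swap.
rng = np.random.default_rng(0)
for x, y in rng.integers(0, 64, size=(10000, 2)):
    record_event(x, y)
swap_and_write(lambda frame: print("frame total counts:", frame.sum()))
```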

The next step forward came with the advent of minicomputers in the 1970s, for example the systems from Digital Equipment Corporation (DEC): the PDP-8 and later the PDP-11. These were relatively small, usually occupying, with their magnetic disk drives and a colour monitor, just one or two 2 m tall equipment racks. They were dedicated systems for processing data acquired via ADCs from one gamma camera (and cost about as much as the camera itself). Their applications included measuring and correcting for gamma camera non-uniformity, renogram analysis, and ECG-gated (multi-gated) cardiac blood-pool analysis, in which the patient's ECG signal was used by the computer as a time marker, enabling separate images for each phase of the cardiac cycle to be built up over several minutes for measurement of left ventricular ejection fraction. In the early 1980s, gamma cameras capable of rotating the detector around the subject's body were developed and enabled single photon emission computed tomography (SPECT) [5], again using the dedicated computer. Initial applications included section imaging of the myocardium using thallium-201 and bone and cerebral perfusion imaging, but adoption of section imaging generally grew quite slowly. Around this time, in the late 1970s and early 1980s, positron emission (computed) tomography systems came into use, initially for research, and moved on in the 1990s to clinical diagnostic use, principally for staging cancer using F-18 FDG. As PET became more commonly used, technical advances exploiting computer power followed paths similar to those for SPECT.
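The ECG-gating idea mentioned above can be illustrated with a minimal sketch: each detected event is assigned to a phase bin according to the time since the most recent R wave, and the ejection fraction is estimated from background-corrected counts in a left-ventricular region at end diastole and end systole. The bin count, region masks and data below are invented for illustration and do not represent any particular system.

```python
# Illustrative gated blood-pool binning and count-based LVEF estimate.
import numpy as np

N_BINS = 16                                       # cardiac phases per R-R interval
frames = np.zeros((N_BINS, 64, 64), dtype=np.uint32)

def bin_event(x, y, t_since_r_wave, rr_interval):
    """Add one detected event to the frame for its phase of the cardiac cycle."""
    phase = min(int(N_BINS * t_since_r_wave / rr_interval), N_BINS - 1)
    frames[phase, y, x] += 1

def ejection_fraction(frames, lv_mask, bg_mask):
    """LVEF from background-corrected LV counts at end diastole and end systole."""
    bg_per_pixel = frames[:, bg_mask].mean(axis=1)
    lv_counts = frames[:, lv_mask].sum(axis=1) - bg_per_pixel * lv_mask.sum()
    ed, es = lv_counts.max(), lv_counts.min()
    return (ed - es) / ed

# Synthetic example (no real data): LV counts vary over the cycle.
lv_mask = np.zeros((64, 64), bool); lv_mask[24:40, 24:40] = True
bg_mask = np.zeros((64, 64), bool); bg_mask[24:40, 44:50] = True
for p in range(N_BINS):
    frames[p][lv_mask] = 100 + int(40 * np.cos(2 * np.pi * p / N_BINS))
    frames[p][bg_mask] = 20
print(f"LVEF estimate: {ejection_fraction(frames, lv_mask, bg_mask):.2f}")
```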

The computations for tomographic imaging initially required long processing times, although add-on parallel processing units could speed them up. Filtered back projection was the reconstruction technique used, as the potentially better iterative techniques would have taken too long. However, in the 1990s ordered-subset expectation maximisation (OSEM) was developed, and with the faster computers becoming available that iterative reconstruction method has dominated in the 2000s [6]. Tomographic processing, by a more advanced technique, is thus now effectively instant, in contrast to the early lengthy procedures.
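A minimal sketch of the ordered-subset EM update is given below; it uses an explicit system matrix and a tiny synthetic problem purely for illustration (real scanners compute projections on the fly and include many physical corrections), and it is not the implementation described in ref [6].

```python
# OSEM sketch with an explicit system matrix A, where A[i, j] is the
# probability that activity in voxel j is detected in projection bin i.
import numpy as np

def osem(A, y, n_subsets=4, n_iters=10):
    """Ordered-subset EM reconstruction of x from projections y ~ A @ x."""
    n_bins, n_voxels = A.shape
    x = np.ones(n_voxels)                             # non-negative starting image
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for rows in subsets:                          # one EM-style update per subset
            A_s, y_s = A[rows], y[rows]
            ratio = y_s / np.maximum(A_s @ x, 1e-12)
            x *= (A_s.T @ ratio) / np.maximum(A_s.T @ np.ones(len(rows)), 1e-12)
    return x

# Tiny synthetic test: random system matrix and a known "image".
rng = np.random.default_rng(2)
A = rng.random((40, 16))
x_true = rng.random(16) * 10
y = A @ x_true                                        # noise-free projections
print(np.round(osem(A, y), 1))
```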

By about 1990, computing power was incorporated as an essential part of the standard gamma camera. Anger's resistor network design, at the heart of the gamma camera, was emulated digitally, using digital signal processors to collect and combine the photomultiplier tube outputs. Digital computing also dramatically improved the basic performance of the gamma camera through uniformity, linearity and energy corrections, while other corrections permitted the use of thinner detector crystals, which improved spatial resolution. The console controlling data acquisition also typically provided a range of processing options. A separate additional computer system nevertheless remained popular, as it offered some advantages, including continuity of processing software; with the built-in systems, replacing the camera could mean changing how data were processed and presented. As computer power increased, a stand-alone computer could also acquire data from several cameras, using common systems for processing, archiving and output and providing images in standard formats.
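The digital emulation of Anger's position arithmetic amounts, in essence, to a signal-weighted centroid of the photomultiplier-tube coordinates, with the summed signal providing the energy value used for windowing and correction look-ups. The sketch below illustrates this under invented assumptions about the tube layout and light sharing; real cameras apply the uniformity, linearity and energy corrections mentioned above on top of this.

```python
# Illustrative Anger-logic position estimate from PMT signals.
import numpy as np

# Hypothetical 5 x 5 grid of PMT centre coordinates (cm).
xs, ys = np.meshgrid(np.linspace(-20, 20, 5), np.linspace(-20, 20, 5))
pmt_x, pmt_y = xs.ravel(), ys.ravel()

def anger_position(pmt_signals):
    """Estimate (x, y, energy) for one event from the 25 PMT signals."""
    total = pmt_signals.sum()                 # proportional to deposited energy
    x = (pmt_signals * pmt_x).sum() / total   # signal-weighted centroids
    y = (pmt_signals * pmt_y).sum() / total
    return x, y, total

# Example event: light shared mostly by tubes near (5, -5) cm.
signals = np.exp(-((pmt_x - 5) ** 2 + (pmt_y + 5) ** 2) / 150.0)
print(anger_position(signals))
```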

Additional image data processing could attempt to correct for scattered radiation, collimator characteristics and so on, and the performance of such techniques could be assessed as physical experimentation was increasingly replaced by computer simulation [7]. In clinical image processing, techniques were developed to define regions of interest automatically and to compare selected regions or whole images with standard image templates. However, while much work has been done on automated pattern recognition, these techniques are still usually seen as a support to the human reporter.
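Comparison with a standard template can, in its simplest form, be a voxel-wise z-score against a set of normal studies, as in this schematic sketch (all arrays are synthetic and the threshold is arbitrary).

```python
# Illustrative template comparison: voxel-wise z-scores against normal studies.
import numpy as np

rng = np.random.default_rng(3)
normals = rng.normal(100, 10, size=(20, 64, 64))    # 20 normal studies
patient = rng.normal(100, 10, size=(64, 64))
patient[30:40, 30:40] -= 40                         # a synthetic "defect"

mean, sd = normals.mean(axis=0), normals.std(axis=0)
z = (patient - mean) / sd
flagged = np.argwhere(z < -2.5)                     # voxels well below normal
print(f"{len(flagged)} voxels flagged as significantly reduced")
```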

A clinical application of SPECT which has had a large impact is myocardial perfusion imaging, which brought closer involvement of cardiologists in nuclear medicine practice. The addition of ECG-gated SPECT brought a new, graphic dimension [8], which I think helped to sustain interest and to make SPECT a standard technique in most nuclear medicine services. In other clinical areas, cerebral perfusion and nigrostriatal SPECT extended clinical interest into neuropsychiatry.

For clinical nuclear medicine, developments in the 2000s included the introduction of systems giving greater integration of nuclear medicine within diagnostic imaging. One important area was the integration of tomographic image data (PET or SPECT) with structural imaging, initially CT and later also MRI: at first by computer techniques for co-registration of the images of structure and function, and later by integration of the functional and anatomical tomographic scanners themselves in PET-CT, SPECT-CT and PET-MR. Another area of integration was PACS (picture archiving and communication systems), which used common image data formats, or automated conversion between formats, and enabled digital images (now including all radiographic modalities, in addition to nuclear medicine) to be rapidly available across clinical data networks. Here there has been further integration, so that the imaging specialist reporting on scans can now use a computer terminal and large display screens to review all relevant images, whatever the modality, together with laboratory data, and dictate a report using speech recognition software so that the text is immediately available to clinicians.
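As a rough illustration of the co-registration step mentioned above, the sketch below assumes that a pure translation is sufficient and optimises a histogram-based mutual information measure; real systems use full rigid or non-rigid transforms and more robust optimisers, and the images here are synthetic phantoms.

```python
# Illustrative translation-only co-registration by maximising mutual information.
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images of the same shape."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def register(moving, fixed):
    """Find the translation (dy, dx) that best aligns 'moving' with 'fixed'."""
    cost = lambda t: -mutual_information(shift(moving, t, order=1), fixed)
    return minimize(cost, x0=[0.0, 0.0], method="Powell").x

# Synthetic example: a simple 'anatomical' phantom and a displaced,
# contrast-inverted 'functional' image derived from it.
yy, xx = np.mgrid[0:96, 0:96]
anatomy = np.exp(-((xx - 48.0) ** 2 + (yy - 40.0) ** 2) / 200.0)
functional = shift(1.0 - anatomy, (3.0, -2.0), order=1)
print("estimated shift to realign:", np.round(register(functional, anatomy), 1))
```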