Abstract
In X-ray computed tomography, discrete tomography (DT) algorithms have been successful at reconstructing objects composed of only a few distinct materials. Many DT-based methods rely on a divide-and-conquer procedure to reconstruct the volume in parts, which improves their runtime and reconstruction quality. However, this procedure is based on static rules, which introduces redundant computation and diminishes efficiency. In this work, we introduce an update strategy framework that allows for dynamic rules and increases control for divide-and-conquer methods in DT. We illustrate this framework by introducing Tabu-DART, which combines our proposed framework with the Discrete Algebraic Reconstruction Technique (DART). Through simulated and real data reconstruction experiments, we show that our approach yields similar or improved reconstruction quality compared to DART, with substantially lower computational complexity.
1 Introduction
In X-ray Computed Tomography (XCT), the interior of an object is commonly visualized by reconstructing an image from a large number of radiographs, equiangularly acquired over 180 or 360 degrees. If scan time restrictions or geometrical constraints during scanning apply, only a small number of radiographs or a set of radiographs distributed over a limited angular range will be available, respectively. In such ill-posed limited data problems, conventional reconstruction methods, such as Filtered Back Projection (FBP) or the Simultaneous Iterative Reconstruction Technique (SIRT) [1], lead to images with severe artefacts [2] and semi-convergent behaviour [3].
Including prior knowledge about the scanned object into the reconstruction process is a well-known strategy to compensate for limited data in XCT [4,5,6]. A specific type of prior knowledge is exploited in discrete tomography (DT) [7], where the object is assumed to be composed of only a few materials. The variety of work on discrete tomography is vast [8,9,10,11], with several algorithms developed to improve robustness with respect to noise [12,13,14,15], handle partially discrete images [16, 17], and handle polychromatic data [18, 19].
Despite their strengths, practical DT methods are computationally intensive as they primarily rely on iterative reconstruction. To increase speed, divide-and-conquer strategies are often employed, in which only a part of the image is updated in each iteration [7, 16, 20]. Amongst the practical DT algorithms that rely on such division strategies, the Discrete Algebraic Reconstruction Technique (DART) [2] is well known for producing high-quality reconstructions of objects composed of few different materials, even in cases with a limited number of projections or projections acquired in a limited angular range [21]. DART has been successfully applied in various imaging domains [22,23,24,25] and is a common benchmark method against which new DT algorithms are compared [26,27,28,29]. New reconstruction methods based on the DART methodology are still being introduced [18, 30,31,32].
Despite the benefits of DART, its computational complexity is high. One of the causes is that update rules in DART are predetermined and hence do not change over the course of the reconstruction [33, 34]. As a result, already well-reconstructed image regions continue to be updated, leading to redundant computation. This problem has been addressed in theoretical DT, where Tabu-search theory has been combined with other DT methods such as combinatorial optimization approaches based on Ryser’s algorithm [35, 36] and with binary reconstruction based on Gibbs priors [7]. However, these approaches are infeasible for large-scale problems, due to both memory requirements and computation time. Heuristic methods such as DART have better scalability than theoretical DT methods, but still suffer from long computation times, partly caused by redundant computation.
To reduce the redundant computation of DART-like methods, we propose a framework of dynamic update rules, which combines concepts from Tabu-search theory with update strategies. We introduce a probability map that adapts based on feedback received during subsequent reconstruction steps. By expressing update rules as changes to this probability map, dynamic update strategies can be implemented during the reconstruction. Initialization of this map was based on the entropy of the reconstruction, a measure used before in discrete tomography in the context of optimal projection angle selection [37] and in the non-discrete case for measuring gray value uncertainty [38]. As a proof of concept, we present such a framework for DART. Furthermore, we describe an estimation procedure for the initial state of the probability map based on image uncertainty. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
2 Methods
The computed tomography problem can be represented as a linear system which is solved by Algebraic Reconstruction Methods (ARMs). General ARMs and the DART algorithm are described in Sect. 2.1. In Sect. 2.2, we build upon Tabu-search methodology to exploit memory structures inside DART for improved computational efficiency.
2.1 The DART algorithm
DART is built upon an Algebraic Reconstruction Method (ARM), which calculates solutions to the following linear reconstruction problem:
$$\begin{aligned} \mathbf {W} \mathbf {x} = \mathbf {p}, \end{aligned}$$ (1)
where \(\mathbf {x} \in {\mathbb {R}}^n\) is a vectorized pixel representation of the object, \(\mathbf {p} \in {\mathbb {R}}^m\) is the measured projection data, and \(\mathbf {W} \in {\mathbb {R}}^{m\times n}\) is the system matrix describing the approximately linear relationship between the scanned object and the measured data. A widely used ARM is the Simultaneous Iterative Reconstruction Technique (SIRT) [39], which computes a minimal distance solution to the system (1) with respect to the 2-norm. SIRT iteratively computes the following update step:
$$\begin{aligned} \mathbf {x}^{(k+1)} = \mathbf {x}^{(k)} + \lambda \mathbf {C} \mathbf {W}^T \mathbf {R} \left( \mathbf {p} - \mathbf {W} \mathbf {x}^{(k)} \right) , \end{aligned}$$ (2)
where \(\mathbf {C} \in {\mathbb {R}}^{n\times n}\) and \(\mathbf {R} \in {\mathbb {R}}^{m\times m}\) are diagonal matrices containing the inverse of the column and row sums of \(\mathbf {W}\), respectively. The vector \(\mathbf {x}^{(k)}\) is the current estimate of the solution to (1) and \(\lambda \) is the relaxation parameter. SIRT was used as the ARM in this paper, with \(\lambda = 1.0\) as the default choice.
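As an illustration (not part of the paper's implementation), the SIRT update above can be sketched with NumPy; the dense system matrix and the function name `sirt` are choices made for this sketch only:

```python
import numpy as np

def sirt(W, p, n_iters=100, lam=1.0, x0=None):
    """Sketch of SIRT: x_(k+1) = x_k + lam * C W^T R (p - W x_k).

    C and R hold the inverse column and row sums of W on their diagonals;
    here they are kept as vectors and applied elementwise.
    """
    m, n = W.shape
    row_sums = W.sum(axis=1)
    col_sums = W.sum(axis=0)
    # Guard against zero rows/columns to avoid division by zero.
    R = np.where(row_sums > 0, 1.0 / np.maximum(row_sums, 1e-12), 0.0)
    C = np.where(col_sums > 0, 1.0 / np.maximum(col_sums, 1e-12), 0.0)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iters):
        residual = p - W @ x                      # projection-space mismatch
        x = x + lam * C * (W.T @ (R * residual))  # back-projected correction
    return x
```

For a consistent system, the iterates converge to a solution of (1); in practice the matrix \(\mathbf {W}\) is sparse and generated on the fly by a tomography toolbox rather than stored densely.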
Let \({\mathcal {R}} = \{ \rho _1< \cdots <\rho _k \}\) be the set of gray values representing the different materials of which the object is composed. Then, a solution \(\mathbf {x}\) to (1) is discrete if \(\mathbf {x} \in \{ \rho _1,\ldots ,\rho _k\}^n\). Given an initial SIRT reconstruction \(\mathbf {x}^{(0)}\), the key steps in the DART algorithm can be briefly summarized as follows:

1.
Segmentation: Let \(\mathbf {x}^{(\ell )}\) be the output of the SIRT algorithm, where \(\mathbf {x}^{(0)}\) is the output from the initial SIRT iterations. Since the gray levels in the image are known to be in \({\mathcal {R}}\), the elements of \(\mathbf {x}^{(\ell )}\) are projected (e.g., by thresholding) onto \({\mathcal {R}}\). We denote the segmented image by \(\mathbf {s}^{(\ell )}\).

2.
Partitioning: In the partitioning step, a divide-and-conquer procedure is initiated by labeling the image pixels into two categories: free pixels (which will be updated) and fixed pixels (which are kept at their current value). If a pixel has at least one neighbouring pixel with a different gray value, it is considered a boundary pixel and is added to the set of free pixels. Otherwise, the pixel is considered fixed. Furthermore, every non-boundary pixel has a small but constant probability \(p\) of being included in the free set. After labeling, the reconstruction process continues on the free pixels only, while keeping the other pixels fixed.

3.
Masked reconstruction: A fixed number of SIRT iterations is then performed on the free pixels and a new image \(\mathbf {x}^{(\ell + 1)}\) is computed by merging the updated free pixels with the fixed pixels.

4.
Smooth and repeat: After an optional smoothing step, performed by convolution with a \(3 \times 3\) kernel, the steps are repeated until a convergence criterion is met or a maximum number of DART iterations has been reached. In this paper, a \(3 \times 3\) median kernel \(M\) was used with weight parameter \(b\):
$$\begin{aligned} \mathbf {x} = (1-b)\,\mathbf {x}^{(\ell + 1)} + b\, (M * \mathbf {x}^{(\ell + 1)}), \end{aligned}$$
where \(*\) denotes the convolution operator.
For a detailed description of the DART method, we refer to [2].
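As an illustration of steps 1 and 2 above, the segmentation and partitioning logic can be sketched as follows; the helper names and the 4-neighbourhood choice are our own assumptions for this sketch, not the paper's implementation:

```python
import numpy as np

def segment(x, grays):
    """Step 1: project each pixel onto the nearest admissible gray value."""
    grays = np.asarray(grays, dtype=float)
    idx = np.argmin(np.abs(x[..., None] - grays), axis=-1)
    return grays[idx]

def boundary_mask(s):
    """A pixel is a boundary pixel if any 4-neighbour has a different gray value."""
    b = np.zeros(s.shape, dtype=bool)
    b[:-1, :] |= s[:-1, :] != s[1:, :]
    b[1:, :]  |= s[1:, :]  != s[:-1, :]
    b[:, :-1] |= s[:, :-1] != s[:, 1:]
    b[:, 1:]  |= s[:, 1:]  != s[:, :-1]
    return b

def dart_partition(s, p, rng):
    """Step 2: free set = boundary pixels plus a random fraction p of the rest."""
    return boundary_mask(s) | (rng.random(s.shape) < p)
```

The free mask produced by `dart_partition` would then select the columns of the system matrix used in the masked reconstruction of step 3.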
2.2 The Tabu-DART algorithm
In Sect. 2.2.1, a brief overview of Tabu-search and related concepts is presented, together with potential implications of using memory structures in DART. In Sect. 2.2.2, the DART update step is generalized into a framework that introduces a probability map functioning as a memory structure for the partitioning step (step 2) of the algorithm. The proposed Tabu-DART algorithm is described, in which the probability map is adapted based on a dynamic set of rules and feedback received from the segmentation step. Finally, in Sects. 2.2.3 and 2.2.4, the map initialization and the feedback loop are explained for Tabu-DART.
2.2.1 Memory structures and Tabu-search
Tabu-search is a metaheuristic strategy for mathematical optimization techniques that rely on local search. The nature of local search methods makes them vulnerable to local optima. Tabu-search aids in finding the global optimum through adaptive memory structures and reaches parts of the solution space that would otherwise be left unexplored. It allows the search to escape from local optima and intensifies the search inside a specific region around a solution. In the next paragraph, a summary of the Tabu-search concepts is given to clarify our contribution. For a more in-depth description we refer to [40].
There are four main factors that describe the memory structure used: recency- and frequency-based memory, quality, and influence. Recency-based memory stores information on recently explored solutions and helps avoid revisiting them in favor of exploring worse but as yet unvisited solutions. Frequency-based memory stores information on the number of times a certain attribute has appeared in recent solutions. Quality relates to the ability to differentiate between characteristics of good and bad solutions, while influence captures the impact of changes in the structure of the solution. It is infeasible to store multiple solutions for large 3D volumes. Hence, recency-based memory has limited use for algebraic reconstruction with DART. The frequency of favourable attributes related to a good reconstruction can, however, be stored and exploited to improve DART. For this reason, our approach relies on frequency-based memory. The use of quality and influence metrics is limited to a feedback loop, which adapts the memory structure we propose for DART. When many reconstructions with a low error share an attribute, exploring locations in the reconstruction space where this attribute is present increases the probability of finding a reconstruction that minimizes the error. Image features, such as which pixels still change their gray value or whether the boundary between different gray values has stopped evolving, are valuable attributes whose frequency can be tracked. In Sect. 2.2.4, we describe how changes with respect to such a feature can be tracked to adapt the partitioning step (step 2) of the DART algorithm and make it more efficient over time.
2.2.2 The probability map framework
In DART, the partitioning rules decide which pixels in the image are updated, and hence they have a significant impact on the quality of the resulting reconstruction. The following probability map functions as frequency-based memory for the partitioning step inside DART:
$$\begin{aligned} \mathbf {p_x} = \left( p_{x_1}, \ldots , p_{x_n} \right) \in [0, 1]^n. \end{aligned}$$
Instead of a single parameter \(p\) describing the probability that an interior pixel is updated in the next iteration, a probability \(p_{x_i}\) is linked to each pixel \(x_i\), which decides whether or not that pixel is updated in the next iteration. The map functions as a tracker of the frequency of change for any metric that distinguishes between pixels that are likely to be correctly classified and those that are not.
To correctly incorporate the update probability map, certain steps differ from the original DART algorithm. First, an initial state for the probability map is created after the initial SIRT reconstruction. This state is based on any available or calculated image uncertainty measure. If a region in the reconstructed volume is well resolved, the probabilities in that region can be lowered to reduce redundancy. During each partitioning step, a random number \(r_i\) is drawn from a uniform distribution between 0 and 1 for each pixel \(x_i\). If \(r_i < p_{x_i}\), the pixel is selected for update. This samples a binary probability distribution in each pixel \(x_i\), with probability \(p_{x_i}\) of being free. Hence, the creation of the fixed and free partitions depends entirely on the probability map. At the end of each DART iteration, a feedback loop updates the probability map based on the current reconstruction data. A flowchart of the Tabu-DART algorithm is shown in Fig. 1.
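The per-pixel sampling described above amounts to one vectorized comparison; a minimal sketch (function name is ours):

```python
import numpy as np

def sample_free_set(p_map, rng):
    """Draw r_i ~ U(0, 1) per pixel; pixel i is freed when r_i < p_{x_i}."""
    return rng.random(p_map.shape) < p_map
```

Pixels with probability 1 are always freed and pixels with probability 0 never are, so lowering an entry of the map progressively removes that pixel from the reconstruction problem.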
2.2.3 Probability map initialization
An initialization scheme is presented for the probability map to eliminate the need for the parameter p in the original DART algorithm. The initialization is based on a generalization of local image uncertainty as proposed by Varga et al. for binary reconstruction [41]. Each pixel can only attain a value in \({\mathcal {R}}\) and the probability of being equal to \(\rho _i\) is spatially dependent. A formula for generating the probability for theoretical DT (illustrated in Fig. 2) is given by
Hence, each pixel \(x_j\) can be linked to a probability vector \(\mathbf {v}_{x_j} \in [0, 1]^k\), where \(k\) is the number of distinct gray values in the image. The entropy, defined as
$$\begin{aligned} {\mathcal {H}}(x_j) = -\sum _{i=1}^{k} \left( \mathbf {v}_{x_j} \right) _i \log _k \left( \mathbf {v}_{x_j} \right) _i , \end{aligned}$$
translates this vector into a single value representing the uncertainty of the gray value of pixel \(x_j\). The logarithm \(\log _k\) is applied pointwise to the components of the vector \(\mathbf {v}_{x_j}\).
Since it is infeasible to calculate these probabilities directly for large images, we propose an extension of the approximation introduced by Varga et al. [41]. For a pixel \(x_j\) of the initial ARM reconstruction, let
$$\begin{aligned} \mathbf {d}_{x_j} = \left( \frac{1}{\vert x_j - \rho _1 \vert }, \ldots , \frac{1}{\vert x_j - \rho _k \vert } \right) , \qquad \mathbf {v}_{x_j} = \frac{\mathbf {d}_{x_j}}{\Vert \mathbf {d}_{x_j} \Vert _1}. \end{aligned}$$ (6)
The values \({\mathcal {H}}(x_j)\) are used to initialize the probability map. Note that one of the denominators in (6) may become zero if \(x_j \in {\mathcal {R}}\), e.g. if the condition \(x_j \ge 0\) is enforced during the SIRT reconstruction, possibly causing \(x_j\) to be set to \(\rho _1\). To avoid division by zero, a lower bound was imposed on the denominators in \(\mathbf {d}_{x_j}\).
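A sketch of this initialization in NumPy, assuming inverse-distance weights with a lower-bounded denominator and a base-\(k\) logarithm (the exact weighting of the paper's Eq. (6) is an assumption here):

```python
import numpy as np

def entropy_map(x, grays, eps=1e-6):
    """Per-pixel gray-value uncertainty H(x_j) in [0, 1].

    d_i = 1 / |x_j - rho_i| (denominators bounded below by eps),
    normalised to a probability vector v, then H = -sum_i v_i log_k v_i.
    """
    grays = np.asarray(grays, dtype=float)
    k = len(grays)
    d = 1.0 / np.maximum(np.abs(x[..., None] - grays), eps)
    v = d / d.sum(axis=-1, keepdims=True)
    # 0 * log(0) is taken as 0; the base-k log normalises H to [0, 1].
    h = -(v * (np.log(np.maximum(v, 1e-300)) / np.log(k))).sum(axis=-1)
    return h
```

A pixel lying exactly on a gray level gets entropy near 0 (low update probability), while a pixel midway between two levels gets entropy near 1.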
2.2.4 Dynamic update rules
As the final part of Tabu-DART, the following set of update rules is introduced to track a stability metric based on changes between gray values of individual pixels. Define \(\mathbf {c_x}, \mathbf {b_x} \in {\mathbb {R}}^n\) such that
$$\begin{aligned} c_{x_i} = {\left\{ \begin{array}{ll} 1 &{} \text {if } s_i^{(\ell +1)} = s_i^{(\ell )},\\ 0 &{} \text {otherwise,} \end{array}\right. } \qquad b_{x_i} = {\left\{ \begin{array}{ll} 1 &{} \text {if } x_i \text { is a boundary pixel,}\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
Then, the new probability map \(\mathbf {p_x}^{(\ell +1)}\) is given by
$$\begin{aligned} p_{x_i}^{(\ell +1)} = {\left\{ \begin{array}{ll} p_{x_i}^{(\ell )} / 2 &{} \text {if } c_{x_i} = 1 \text { and } b_{x_i} = 0,\\ 1 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
These update rules halve the probabilities of all non-boundary pixels whose segmented gray value is unchanged with respect to the previous iteration. Otherwise, the probabilities are set to 1.
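The halving rule above can be sketched in one vectorized step (function name is ours):

```python
import numpy as np

def update_probability_map(p_map, seg_prev, seg_cur, boundary):
    """Feedback rule sketch: non-boundary pixels whose segmented gray value
    did not change keep half their probability; all others are reset to 1."""
    stable = (seg_prev == seg_cur) & ~boundary
    return np.where(stable, p_map / 2.0, 1.0)
```

Repeated stability thus drives a pixel's update probability geometrically towards zero, while any change or boundary status immediately re-activates it.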
3 Experiments
Two sets of experiments were conducted. First, simulation experiments were performed to test the validity of our approach on four discrete phantoms from previous DART papers [2, 33, 42], before evaluating the accuracy of Tabu-DART on a polychromatic dataset of a plexiglass object [43]. The simulation experiments are described in Sect. 3.1 and the plexiglass dataset is introduced in Sect. 3.2.
3.1 Simulation experiments
Figure 3 shows the phantoms used for the simulation experiments, which are identical to those used in previous DART publications [2, 13, 42]. The size of each phantom is \(512 \times 512\) pixels. With the ASTRA toolbox [44], projections were simulated following a parallel beam geometry with 512 detector values per angle. Two limited-data cases were studied. In the first case, the acquisition range was \([0^\circ , 180^\circ ]\) and the number of projections was varied from 2 up to 90. To maintain a uniform angular sampling distribution while studying the performance of Tabu-DART as a function of the number of projections, the projection angles were generated using golden-ratio angular sampling [45], which means that subsequent projections are \(\frac{1+\sqrt{5}}{2}\pi \) radians apart. In the second case, 90 projections were simulated uniformly, after which an increasingly large wedge was removed.
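Golden-ratio angle generation as described above can be sketched as follows (the wrapping into \([0, \pi)\) is our assumption for a parallel beam geometry):

```python
import numpy as np

def golden_ratio_angles(n_proj):
    """n_proj angles with consecutive increments of (1+sqrt(5))/2 * pi radians,
    wrapped into [0, pi) for a parallel beam geometry."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    return np.mod(np.arange(n_proj) * phi * np.pi, np.pi)
```

Because the golden ratio is irrational, the wrapped angles never repeat and stay approximately uniformly distributed for any number of projections.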
To infer whether the update probability parameter \(p\) can be avoided with our approach, we compared the Tabu-DART algorithm to the DART algorithm with a total of 12 choices for \(p\). The best performing value of \(p\) for a specific case is denoted by best \(p\). The other DART parameters were chosen according to the literature [2, 18] and were kept constant throughout the experiment. They are shown in Table 1.
3.2 Experimental data: Barbapapa plexiglass phantom
The goal of the real data experiment is twofold: first, to provide evidence that Tabu-DART combines well with other augmentations of the original DART algorithm; second, to study how the relaxation of the inner ARM iterations influences the overall reconstruction quality compared to DART. We reconstructed the central slice of the Barbapapa experimental dataset [43], which consists of a plexiglass block with two drilled cylindrical holes. Three aluminum rods were inserted into the block, amounting to a total of three different materials: air, plexiglass and aluminum. A picture of the object is shown in Fig. 4. A total of 2400 cone beam projections was measured over the full \(360^\circ \) range with a tube voltage of 130 kVp. To account for the polychromaticity of the X-ray beam, our Tabu-search framework was combined with a polychromatic version of DART, called poly-DART [18]. We refer to this polychromatic Tabu-DART algorithm as TP-DART. To this end, the polychromatic spectrum was first estimated. This was done by scanning a PVC step wedge with steps ranging in thickness from 1 to 18 mm. The spectrum was then estimated using the Maximum Likelihood Expectation Maximization algorithm, as explained in [18]. A missing wedge experiment was set up, starting from 400 equiangularly distributed projections over a \(360^\circ \) range. Reconstructions were made from subsets of these projections with an increasingly larger missing angular wedge. These subsets consist of all projections in the range \([\alpha , 180^\circ - \alpha ] \cup [180^\circ + \alpha , 360^\circ - \alpha ]\), with \(\alpha \) varying from \(10^\circ \) to \(60^\circ \). The parameters of poly-DART and TP-DART are given in Table 2.
The runtime parameters resulted in a total of 500 SIRT/pSIRT iterations for each method. For this experiment, the relaxation factor \(\lambda \) for each run of DART was estimated empirically as follows: at every missing wedge (\(10^\circ \) step size), the projection data was reconstructed with 50 choices of \(\lambda \). The relative Number of Misclassified Pixels (rNMP), i.e. the ratio between the pixels belonging to the wrong class and the total number of nonzero pixels, was calculated for each \(\lambda \). The best \(\lambda \) in terms of the rNMP was kept for each choice of \(p\) and for TP-DART, which yields a table for interpolation of \(\lambda \) for intermediate missing wedges \(\alpha \). Additionally, the data was reconstructed with TP-DART where
is the relaxation factor and \(\beta \) controls the ratio between system size and relaxation. We hypothesize that, since TP-DART iteratively lowers the system size, scaling the relaxation appropriately could lead to better results. An interpolation table was also created for \(\beta \). The results of scaled relaxation for TP-DART were collected separately and are denoted by TP-DART scaled.
4 Results
Two metrics were calculated to evaluate the performance of the algorithms in each experiment: the rNMP and a measure for the computational efficiency. The latter metric is expressed as either the total CPU time of the SIRT iterations inside one DART iteration, or as the size of the linear system. The system size is equal to the number of free pixels and expressed as a percentage.
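The rNMP metric used throughout the results can be sketched as follows; the normalisation by the nonzero pixels of the ground truth is our reading of the definition above:

```python
import numpy as np

def rnmp(segmentation, ground_truth):
    """Relative number of misclassified pixels: misclassified pixels divided
    by the number of nonzero pixels in the ground truth (assumed normalisation)."""
    misclassified = np.count_nonzero(segmentation != ground_truth)
    nonzero = np.count_nonzero(ground_truth)
    return misclassified / nonzero
```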
4.1 Simulation results
First, DART with different values of \(p\) was compared to Tabu-DART in terms of the rNMP, for both the few-view and the missing wedge case. This experiment was repeated ten times with different seeds. Figures 5 and 6 show the mean rNMP for each choice of the parameter \(p\) and for Tabu-DART, for an increasing number of projections and an increasing missing wedge, respectively. For phantoms 2 and 3 in the few-view case, Tabu-DART performs noticeably better in terms of rNMP than the other three DART algorithms when the number of projections is very limited. For the other two phantoms, Tabu-DART remains competitive with DART with the best performing value of \(p\).
The missing wedge experiment (Fig. 6) shows that Tabu-DART performs comparably to the best choice of \(p\), especially when the missing wedge is large. Three specific missing wedges (small, medium, and large) were selected for each phantom for an in-depth study, for which the experiment was repeated 50 times with different seeds for the random number generator. Figure 7 shows boxplots of the rNMP for DART and Tabu-DART for the small and medium wedge choices. A lower rNMP and a lower variance are observed for Tabu-DART compared to DART. DART and Tabu-DART each start from a different initial map, and this map is constant per algorithm in each of the 50 seeded repeats. Our approach consistently feeds back data and dynamically changes the set of pixels to be updated, while DART has no feedback loop. This leads to the higher variance of the rNMP for DART, as the free pixel selection is largely influenced by random chance. The difference in visual quality between DART with the worst and best performing values of \(p\) is shown in Fig. 8. The contrast between the best and worst choice is evident, which emphasizes the importance of a good parameter choice for these experiments. Tabu-DART, on the other hand, yields superior visual quality without relying on the parameter \(p\).
Figure 9 shows the average CPU time of 10 SIRT iterations in seconds for varying angles. As very little of the background is selected in the case of \(p = 0.01\), it is not surprising that this choice of \(p\) leads to the fastest algorithm. However, our approach is comparable in speed. This is due to the feedback procedure, which iteratively removes already stable regions from the reconstruction. Hence, the average size of the linear system decreases, yielding the observed low computation time together with a high reconstruction quality in terms of the rNMP. We conclude that our approach outperforms the DART algorithm both in rNMP and in visual quality for different types of noiseless scenarios. A final remark is that earlier simulation studies [2, 33] show that lower values of \(p\) lead to a lower rNMP. In practice, once noisy data is involved, higher values of \(p\) tend to yield a lower rNMP. We show evidence for this claim in the next experiment.
4.2 Barbapapa plexiglass phantom
Figure 10a shows the rNMP of all methods tested for the different choices of \(\lambda \). Figure 10b shows the rNMP of TP-DART for a varying scaling factor \(\beta \). It can be observed that DART with lower values of \(p\) has a better defined minimum than with high values of \(p\). The same holds for TP-DART and TP-DART scaled. The common trait that low values of \(p\) and TP-DART share is the lower number of freed pixels. Hence, we reason that this sensitivity to the relaxation factor is related to the system size. A lower system size implies an increased sensitivity of each pixel to noisy data, due to increased convergence speed. Relaxation is necessary to counteract semi-convergent behaviour. However, over-relaxation lowers the convergence speed. The higher choices of \(p\) are innately more resistant to semi-convergent behaviour, and hence the impact of relaxation is lower, since their convergence rate is slower. When additional projection data is removed, the reconstruction error increases due to lack of data instead of noise. Therefore, less relaxation is necessary, which results in a higher choice of \(\lambda \) in (2). The entries in Table 3 support this, since the best performing values of \(\lambda \) and \(\beta \) increase as the missing wedge increases. This also means that smaller choices of \(p\) benefit more from relaxation. A final argument is that the best \(\lambda \) for poly-DART with \(p = 0.5\) is almost exclusively \(\lambda = 1.0\). In summary: the lower the choice of \(p\) for poly-DART, the more important the selection of a correct relaxation factor becomes. Furthermore, the optimal \(\lambda \) selection for TP-DART is similar to the optimal choice for poly-DART with a small \(p\).
The reason for introducing a scaling factor \(\beta \) for the relaxation only in TP-DART is that poly-DART relies on the same update rules as DART, which on average free \(100p\) percent of the pixels plus the boundary. The change in system size across poly-DART iterations is negligible compared to TP-DART, which makes scaled relaxation with a scaling factor identical to relaxation with a different fixed \(\lambda \).
Figure 11a shows the rNMP of the reconstructed images for a varying missing wedge. All methods have a very similar rNMP when the missing wedge is small, which was also the case for the simulation experiments. The choice of \(p\) has a negligible effect if there is sufficient data to reconstruct the object. For large values of \(\alpha \), TP-DART shows a consistently lower rNMP than poly-DART. Overall, TP-DART scaled has the lowest rNMP for each value of \(\alpha \), with a system size of the same order as poly-DART with \(p = 0.05\) (Fig. 11b).
Despite the real-world projection data, the lower choices of \(p\) yield a lower rNMP for this object. For the Barbapapa phantom, the optimal relaxation parameter \(\lambda \) with respect to the rNMP was chosen (cf. Table 3). This implies that there exists a cutoff beyond which relaxation stops benefiting the DART algorithm. Our experiments provide evidence that this cutoff depends on both the amount of projection data and the choice of \(p\). Table 3 shows a large jump in \(\lambda \) once \(\alpha \ge 50^\circ \). It is also from this point on that \(p = 0.05\) outperforms higher choices of \(p\).
Two conclusions can be drawn from the results. The first is that estimating \(\lambda \) based solely on the system size will yield poor results if the available projection data is insufficient. Secondly, relaxation based on the system size, with a scaling factor \(\beta \) dependent on the amount of data available, is indispensable for the proper functioning of TP-DART on experimental projection data. Even in the case of polychromatic data, our approach based on Tabu-search showed favourable results with respect to DART. The reconstructed image for a missing wedge of \(40^\circ \) is shown in Fig. 12. Due to the large missing wedge, the pSIRT and SIRT reconstructions show large streak artefacts, which drastically influence the quality of the segmentation (Fig. 12b-f). The initial probability map used in TP-DART captures these artefacts (Fig. 12g), but the final TP-DART output contains no missing wedge artefacts. In fact, the TP-DART reconstruction is very similar to a reference reconstruction created with pSIRT from the entire dataset of 2400 projections (Fig. 12a). This implies that the feedback structure of the algorithm is able to correct errors created during the initial SIRT reconstruction.
When considering visual quality of the reconstructions, no clear best method emerged.
5 Discussion and outlook
In summary, the proposed probability map plays the role of frequency-based memory and aids in choosing better regions for further reconstruction. It retains which regions are already stable and uses this information to remove them entirely from the reconstruction problem, increasing the efficiency and speed of the DART iterations over time. The initialization procedure suggested above eliminates the need for the update probability parameter \(p\).
5.1 Complexity analysis based on floating point operations
The experiments show that our update strategy reduces the system size on which the SIRT algorithm is run. A good measure of the cost of iterative algorithms is the number of floating point operations (FLOPs) needed to perform an iteration. In this section, each addition, subtraction, multiplication and division is counted as one FLOP. A theoretical speedup can be measured by counting the FLOPs of the SIRT algorithm. Let \(\mathbf {x}, \mathbf {y} \in {\mathbb {R}}^n\). Let \(\mathbf {W} \in {\mathbb {R}}^{m \times n}\) be \(s\)-sparse, i.e. \(\vert \mathbf {W} \vert = s\). Let \(\mathbf {D} \in {\mathbb {R}}^{n \times n}\) be any diagonal matrix. The FLOP counts for the different matrix/vector operations present in SIRT are summarized in Table 4 [46]. In practice, the entries of \(\mathbf {W}\) are calculated on the fly, resulting in additional overhead depending on the number of pixels in the image and the number of nonzero entries in \(\mathbf {W}\). The complexity of multiplying a vector by \(\mathbf {W}\) is hence \({\mathcal {O}}(sn)\).
The only operation in SIRT not yet accounted for is the creation of the \(\mathbf {R}\) and \(\mathbf {C}\) matrices, which are diagonal matrices holding the inverses of the row and column sums on their diagonal, respectively. To create a matrix of the form
$$\begin{aligned} \mathbf {R} = \text {diag}\left( \frac{1}{\sum _{j=1}^{n} w_{1j}}, \ldots , \frac{1}{\sum _{j=1}^{n} w_{mj}} \right) , \end{aligned}$$
each entry \(\frac{1}{\sum _{j=1}^{n} w_{ij}}\) requires \(n-1\) additions and 1 division, which totals \(n\) FLOPs when counting divisions as one operation. This is repeated for each row of the \(m \times n\) matrix, which means \(m\) times \(n\) FLOPs for a total of \(mn\). The SIRT update step can be decomposed into a sequence of matrix-vector multiplications with costs:
Since the creation of the matrices \(\mathbf {C}\) and \(\mathbf {R}\) only happens once per sequence of SIRT updates, their cost is omitted in the further complexity calculation. From (10) it follows that the total complexity in terms of FLOPs is \({\mathcal {O}}(sn+m)\). However, the \(s\) nonzero entries of \(\mathbf {W}\) are spread roughly equally across the columns, since each ray \(i\) that passes through a pixel \(j\) yields a nonzero value \(w_{ij}\). If \(n-k\) pixels are removed from the reconstruction in the masking step, a total of \(n-k\) columns is removed from \(\mathbf {W}\). This leads to a linear decrease in the number of remaining entries \(s_k < s\), and hence the new complexity becomes \({\mathcal {O}}(s_k k + m)\). An earlier study on DART [42] pointed out that if enough pixels are fixed, certain rays pass only through vacuum and fixed pixels. These zero rays lead to zero rows in the matrix \(\mathbf {W}\). Hence, the number of nonzero detector readings is lowered to a value \(m_k < m\). The final complexity of masked SIRT becomes \({\mathcal {O}}(s_k k + m_k)\), which is better than a linear reduction, considering that \(m \ll n\) in typical discrete tomography applications.
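The system shrinkage described above (dropping fixed columns, then rows for rays that no longer intersect any free pixel) can be sketched as follows for a dense matrix; a sparse-matrix version would proceed analogously:

```python
import numpy as np

def masked_system(W, free_cols):
    """Drop fixed-pixel columns, then zero rows (rays missing every free pixel)."""
    Wk = W[:, free_cols]                       # keep only free-pixel columns
    nonzero_rows = np.abs(Wk).sum(axis=1) > 0  # rays hitting >= 1 free pixel
    return Wk[nonzero_rows]
```

Both dimensions of the masked system shrink, which is the source of the \(s_k\) and \(m_k\) reductions in the complexity estimate.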
5.2 Memory requirements
The gain in computational efficiency comes at the price of storage memory. To run the Tabu-DART algorithm we presented, two additional image-sized matrices need to be stored. The first is the probability map. This cost cannot be avoided, since the entire purpose of the map is to serve as a memory for the algorithm. The second matrix is required to store the segmentation from the previous DART iteration, which allows tracking changes in gray values between two iterations. Since the gray value classes of the previous iteration can be represented by integer class labels, the memory demand can be reduced by working with short integers at the cost of extra processing.
5.3 Outlook
The criteria for dynamic update rules are not limited to image stability. Our approach can utilize metrics such as the Reconstructed Residual Error [47], image stability [33], or image uncertainty [41]. Algorithms such as MDART [48] and ADART [42] can be easily represented with a probability map, illustrating that our proposed technique is in fact a generalization of the original DART approach to a dynamic framework. The development of additional dynamic update strategies based on image uncertainty is a natural direction for future work.
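To make the representational claim concrete, here is a minimal sketch (our own naming and helper, not the authors' code) of how the static DART rule can be expressed as a probability map: boundary pixels are always updated (probability 1), while interior pixels are updated only with probability \(1-p\) for a fix probability \(p\):

```python
import numpy as np

def dart_probability_map(segmentation, p_fix=0.85):
    """Probability map that reproduces the static DART update rule:
    boundary pixels get update probability 1, all other pixels 1 - p_fix."""
    seg = np.asarray(segmentation)
    boundary = np.zeros_like(seg, dtype=bool)
    # a pixel is a boundary pixel if any 4-neighbour carries a different label
    boundary[:-1, :] |= seg[:-1, :] != seg[1:, :]
    boundary[1:, :]  |= seg[1:, :] != seg[:-1, :]
    boundary[:, :-1] |= seg[:, :-1] != seg[:, 1:]
    boundary[:, 1:]  |= seg[:, 1:] != seg[:, :-1]
    return np.where(boundary, 1.0, 1.0 - p_fix)
```

A dynamic rule then amounts to recomputing or adapting this map from runtime feedback instead of deriving it from the segmentation alone.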
6 Conclusion
A generic framework based on Tabu-search was proposed to aid divide-and-conquer strategies for algebraic discrete tomography methods. Our framework relies on a probability map that functions as a memory structure, which can be adapted through feedback obtained during the runtime of the algorithm. This concept was applied to DART, for which we introduced new dynamic update rules and a stronger initialization phase based on local image uncertainty. The method was subjected to a simulation study using different discrete phantoms and an experimental polychromatic dataset of a plexiglass block with aluminum rods. The experiments showed improved visual image quality as well as lower rNMP values and lower average computation times compared to the original DART algorithm. The generic nature of our approach makes it ideal to be combined with other discrete algebraic methods that rely on divide-and-conquer strategies.
References
Hansen, P.C., Jørgensen, J., Lionheart, W.R.: Computed Tomography: Algorithms, Insight, and Just Enough Theory. SIAM, Philadelphia (2021)
Batenburg, K.J., Sijbers, J.: DART: a practical reconstruction algorithm for discrete tomography. IEEE Trans. Image Process. 20(9), 2542–2553 (2011)
van Lith, B.S., Hansen, P.C., Hochstenbach, M.E.: A twin error gauge for Kaczmarz’s iterations. SIAM J. Sci. Comput. 43(5), 173–199 (2021)
Perelli, A., Lexa, M., Can, A., Davies, M.E.: Compressive computed tomography reconstruction through denoising approximate message passing. SIAM J. Imaging Sci. 13(4), 1860–1897 (2020)
Wang, C., Tao, M., Nagy, J.G., Lou, Y.: Limited-angle CT reconstruction via the \(L_1/L_2\) minimization. SIAM J. Imaging Sci. 14(2), 749–777 (2021)
Lukić, T., Balázs, P.: Limited-view binary tomography reconstruction assisted by shape centroid. Vis. Comput. 38(2), 695–705 (2022)
Herman, G.T., Kuba, A.: Discrete Tomography: Foundations, Algorithms, and Applications. Springer, Berlin (2012)
Herman, G.T., Kuba, A.: Advances in Discrete Tomography and Its Applications. Springer, Berlin (2008)
Gritzmann, P., De Vries, S., Wiegelmann, M.: Approximating binary images from discrete X-rays. SIAM J. Optim. 11(2), 522–546 (2000)
Stolk, A., Batenburg, K.J.: An algebraic framework for discrete tomography: revealing the structure of dependencies. SIAM J. Discrete Math. 24(3), 1056–1079 (2010)
Balázs, P.: A decomposition technique for reconstructing discrete sets from four projections. Image Vis. Comput. 25(10), 1609–1619 (2007)
Bleichrodt, F., Tabak, F., Batenburg, K.J.: SDART: an algorithm for discrete tomography from noisy projections. Comput. Vis. Image Underst. 129, 63–74 (2014)
Zhuge, X., Palenstijn, W.J., Batenburg, K.J.: TVR-DART: a more robust algorithm for discrete tomography from limited projection data with automated gray value estimation. IEEE Trans. Image Process. 25(1), 455–468 (2015)
Capricelli, T., Combettes, P.: A convex programming algorithm for noisy discrete tomography. In: Herman, G.T., Kuba, A. (eds.) Advances in Discrete Tomography and Its Applications, pp. 207–226. Springer, Berlin (2007)
Zisler, M., Kappes, J.H., Schnörr, C., Petra, S.: Non-binary discrete tomography by continuous non-convex optimization. IEEE Trans. Comput. Imaging 2(3), 335–347 (2016)
Sanders, T.: Discrete iterative partial segmentation technique (DIPS) for tomographic reconstruction. IEEE Trans. Comput. Imaging 2(1), 71–82 (2016)
Kadu, A., van Leeuwen, T., Batenburg, K.J.: A parametric level-set method for partially discrete tomography. In: International Conference on Discrete Geometry for Computer Imagery, pp. 122–134 (2017)
Six, N., De Beenhouwer, J., Sijbers, J.: poly-DART: a discrete algebraic reconstruction technique for polychromatic X-ray CT. Opt. Express 27(23), 33670–33682 (2019)
Zeegers, M., Lucka, F., Batenburg, K.J.: A multi-channel DART algorithm. In: International Workshop on Combinatorial Image Analysis, pp. 164–178 (2018)
Batenburg, K.: Network flow algorithms for discrete tomography. In: Herman, G.T., Kuba, A. (eds.) Advances in Discrete Tomography and Its Applications, pp. 175–205. Springer, Berlin (2007)
Goris, B., Roelandts, T., Batenburg, K., Mezerji, H.H., Bals, S.: Advanced reconstruction algorithms for electron tomography: from comparison to combination. Ultramicroscopy 127, 40–47 (2013)
Van de Casteele, E., Perilli, E., Van Aarle, W., Reynolds, K.J., Sijbers, J.: Discrete tomography in an in vivo small animal bone study. J. Bone Miner. Metab. 36(1), 40–53 (2018)
Batenburg, K.J., Bals, S., Sijbers, J., Kübel, C., Midgley, P., Hernandez, J., Kaiser, U., Encina, E., Coronado, E., Van Tendeloo, G.: 3D imaging of nanomaterials by discrete tomography. Ultramicroscopy 109(6), 730–740 (2009)
Roelandts, T., Batenburg, K., Biermans, E., Kübel, C., Bals, S., Sijbers, J.: Accurate segmentation of dense nanoparticles by partially discrete electron tomography. Ultramicroscopy 114, 96–105 (2012)
Segers, H., Palenstijn, W.J., Batenburg, K.J., Sijbers, J.: Discrete tomography in MRI: a simulation study. Fundam. Inform. 125(3–4), 223–237 (2013)
Tuysuzoglu, A., Karl, W.C., Stojanovic, I., Castañón, D., Ünlü, M.S.: Graph-cut based discrete-valued image reconstruction. IEEE Trans. Image Process. 24(5), 1614–1627 (2015)
Guo, Y., Aveyard, R., Rieger, B.: A multi-channel cross-modal fusion framework for electron tomography. IEEE Trans. Image Process. 28(9), 4206–4218 (2019)
Zhao, Y., Xu, J., Li, H., Zhang, P.: Edge information diffusion-based reconstruction for cone beam computed laminography. IEEE Trans. Image Process. 27(9), 4663–4675 (2018)
Wei, Z., Liu, B., Dong, B., Wei, L.: A joint reconstruction and segmentation method for limited-angle X-ray tomography. IEEE Access 6, 7780–7791 (2018)
Yang, F., Zhang, D., Huang, K., Gao, Z., Yang, Y.: Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique. Meas. Sci. Technol. 29(2), 025405 (2018)
Liu, J., Liang, Z., Guan, Y., Wei, W., Bai, H., Chen, L., Liu, G., Tian, Y.: A modified discrete tomography for improving the reconstruction of unknown multi-gray-level material in the missing wedge situation. J. Synchrotron Radiat. 25(6), 1847–1859 (2018)
AnanthaLakshmi, M., Yamuna, G., SanjeeviKumar, A.: A novel method of 3D image reconstruction using ACO-based TVR-DART. Int. Trans. J. Eng. Manag. Appl. Sci. Technol. 12(5), 1–10 (2021)
Frenkel, D., De Beenhouwer, J., Sijbers, J.: An adaptive probability map for the discrete algebraic reconstruction technique. In: 10th Conference on Industrial Computed Tomography (iCT 2020), Wels, Austria (2020)
Frenkel, D., De Beenhouwer, J., Sijbers, J.: Tabu-DART: a dynamic update strategy for the discrete algebraic reconstruction technique based on Tabu-search. In: Proceedings of the 16th Virtual International Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine, 19–23 July 2021, Leuven, Belgium, pp. 173–177 (2021)
Miklós, P.: Discrete tomographic reconstruction of binary matrices using Tabu search and classic Ryser algorithm. In: 2011 IEEE 9th International Symposium on Intelligent Systems and Informatics, pp. 387–390 (2011)
Miklós, P.: Tabu search reconstruction of HV-convex binary contours using classic Ryser algorithm and smart switching. In: 2011 IEEE 12th International Symposium on Computational Intelligence and Informatics (CINTI), pp. 341–344 (2011)
Lékó, G., Domány, S., Balázs, P.: Uncertainty based adaptive projection selection strategy for binary tomographic reconstruction. In: International Conference on Computer Analysis of Images and Patterns, pp. 74–84 (2019)
Varga, L.G., Lékó, G., Balázs, P.: Grayscale uncertainty and errors of tomographic reconstructions based on projection geometries and projection sets. Vis. Comput. (2022). https://doi.org/10.1007/s00371-022-02428-y
Kak, A.C., Slaney, M.: Principles of Computerized Tomographic Imaging. SIAM, Philadelphia (2001)
Glover, F., Laguna, M.: Tabu search: effective strategies for hard problems in analytics and computational science. In: Pardalos, P.M., Du, D.Z., Graham, R. (eds.) Handbook of Combinatorial Optimization, vol. 21, pp. 3261–3362. Springer, Berlin (2013)
Varga, L.G., Nyúl, L.G., Nagy, A., Balázs, P.: Local and global uncertainty in binary tomographic reconstruction. Comput. Vis. Image Underst. 129, 52–62 (2014)
Maestre-Deusto, F.J., Scavello, G., Pizarro, J., Galindo, P.L.: ADART: an adaptive algebraic reconstruction algorithm for discrete tomography. IEEE Trans. Image Process. 20(8), 2146–2152 (2011)
Van Gompel, G., Van Slambrouck, K., Defrise, M., Batenburg, K.J., de Mey, J., Sijbers, J., Nuyts, J.: Iterative correction of beam hardening artifacts in CT. Med. Phys. 38(S1), 36–49 (2011)
van Aarle, W., Palenstijn, W.J., De Beenhouwer, J., Altantzis, T., Bals, S., Batenburg, K.J., Sijbers, J.: The ASTRA toolbox: a platform for advanced algorithm development in electron tomography. Ultramicroscopy 157, 35–47 (2015)
Kohler, T.: A projection access scheme for iterative reconstruction based on the golden section. In: IEEE Symposium Conference Record Nuclear Science 2004, vol. 6, pp. 3961–3965 (2004)
Hunger, R.: Floating Point Operations in MatrixVector Calculus. Institute for Circuit Theory and Signal, Munich University of Technology, Munich (2005)
Roelandts, T., Batenburg, K.J., den Dekker, A.J., Sijbers, J.: The reconstructed residual error: a novel segmentation evaluation measure for reconstructed images in tomography. Comput. Vis. Image Underst. 126, 28–37 (2014)
Dabravolski, A., Batenburg, K.J., Sijbers, J.: A multiresolution approach to discrete tomography using DART. PLOS ONE 9(9), e106090 (2014)
Frenkel, D., Six, N., De Beenhouwer, J. et al.: Tabu-DART: a dynamic update strategy for efficient discrete algebraic reconstruction. Vis. Comput. (2022). https://doi.org/10.1007/s00371-022-02616-w
Keywords
 X-ray tomography
 Reconstruction algorithms
 Discrete tomography
 Limited data tomography