Introduction

Magnetic induction tomography (MIT) is a non-invasive and contact-less imaging modality for reconstructing the changes Δκ of the complex conductivity distribution κ = σ + jωε0εr in a target object.8,13–16,22 MIT requires an array of excitation (EXC) and receiving coils. Each EXC coil couples an alternating magnetic field B0 to the object under investigation (see Fig. 1). Changes Δκ of the complex conductivity cause a field perturbation ΔB due to the induction of eddy currents. This perturbation induces voltage changes ΔV in the receiver coils. It is convenient to normalize ΔV to V0, the voltage induced by the unperturbed field B0.

Figure 1. Schematic of a possible coil system for MIT with 16 excitation coils and 32 receiver coils.

Previous reviews of MIT have been given in8,22,32. The method was developed for industrial process tomography more than 10 years ago but is comparatively new in medical imaging. Potential medical applications usually aim at the characterization of biological tissues by means of their complex conductivity. The motivation for measuring the electrical properties is their characteristic dependence on the (patho-)physiological state of tissues, especially hydration and membrane disorders. Medical applications suggested so far include imaging of limbs,2 imaging of the brain, e.g. for the monitoring of brain edema,14,16,24,26 measurement of human body composition,7 and monitoring of wound healing.23

In contrast to electrical impedance tomography (EIT), MIT avoids the ill-defined electrode–skin interface due to its inherently contact-less operation.

Figure 1 shows a schematic MIT coil configuration with rectangular coils as receivers and a cylindrical object space. The solenoid excitation coils are distributed on two different rings in order to obtain a true 3-D-arrangement.

The reconstruction of the absolute conductivity in a target region Ω requires the solution of a complex inverse eddy current problem. Let

$$ {\bf y}=\varvec{\Psi}(\varvec{\kappa}) $$
(1)

be the discretized non-linear forward mapping of the conductivity vector \(\varvec{\kappa}\) to the vector of induced voltages y. The vector y contains M = a × b entries, a being the number of excitation coils and b the number of receiver coils. The corresponding inverse problem

$$ \varvec{\kappa}=\varvec{\Psi}^{-1}({\bf y}) $$
(2)

is ill-posed and usually underdetermined. Uniqueness of the solution of this inverse boundary value problem was established in21, provided the angular frequency ω of the AC field is not a resonant frequency. The generic approach to this type of non-linear problem is an iterative scheme such as the Gauss–Newton method combined with an appropriate regularization.

To the knowledge of the authors the full inverse problem of medical MIT in 3-D has not yet been solved satisfactorily, although some approximate solutions, especially for 2-D, have been presented.3,13,33 Some authors9,15 proposed the use of weighted back-projection, similar to EIT. In all published cases the back-projection is done along magnetic flux tubes between excitation and receiving coils, the weights being calculated for the case of conducting perturbations in empty space. However, our own observations17,27,29 suggest that the basic requirements for the applicability of this kind of back-projection are not fulfilled in realistic anatomical structures, so that a more appropriate inverse approach is necessary. This paper is dedicated to demonstrating the feasibility of the 3-D reconstruction of a spherical perturbation within a cylindrical conducting body by means of a regularized one-step Gauss–Newton reconstructor. The conductivities were chosen in the physiological range of human tissues.

Methods

The solution of (2) requires the target region to be discretized into N voxels. Within each voxel i the assigned component κ i of the conductivity vector \(\varvec{\kappa}\) is assumed to be constant. A grid of tetrahedral finite elements of second order was employed. In a general setup \(\varvec{\kappa}\) is then found with the following iterative scheme:

  1.

    Define the forward problem with an initial parameter vector \(\varvec{\kappa}\).

    $$ {\bf y}=\varvec{\Psi}(\varvec{\kappa}) $$
    (3)
  2.

    Measure the data vector y m .

  3.

    Solve iteratively for the estimated true parameters \(\varvec{\kappa}^{\ast}\)

    $$ \varvec{\kappa}^{\ast}=\arg\mathop{\min}\limits_{\varvec{\kappa}}\left( (\varvec{\Psi}(\varvec{\kappa})-{\bf y}_{\bf m})^T(\varvec{\Psi}(\varvec{\kappa})-{\bf y}_{\bf m})+\lambda \varvec{\kappa}^T{\bf R}^T{\bf R}\varvec{\kappa}\right). $$
    (4)

    where \(\varvec{\kappa}^{\ast}\) denotes the estimated “true” parameter vector. R and λ are a regularization matrix and a regularization parameter, respectively, which are required to stabilize the iteration.

When applying Newton’s method starting from an initial guess, the parameter vector \(\varvec{\kappa}\) is updated by an increment p k in each iteration step k+1:

$$ \varvec{\kappa}_{k+1}=\varvec{\kappa}_{k}+{\bf p}_{k} $$

with the update step

$$ {\bf p}_k =({\bf G}^T_k {\bf G}_k +\lambda {\bf R}^T{\bf R})^{-1}{\bf G}^T_k {\bf e}_k $$
(5)

with \({\bf e}_k=({\bf y}_{\bf m}-\varvec{\Psi}(\varvec{\kappa}_k))\). The Jacobian \({\bf G}_k=\frac{d\varvec{\Psi}_k}{d\varvec{\kappa}_k}\), also called the sensitivity matrix, must be recalculated in each iteration step. This procedure is very time consuming, hence a complete iterative identification run requires significant computing power, the bottleneck being the solution of the forward model. However, in EIT it has been shown that in practice most features of the image can already be recognized very satisfactorily after the first iteration. This fact led to the development of the so-called Newton-one-step reconstructor (NOSER).4 NOSER is especially appropriate for so-called dynamical imaging, where only the change in the conductivity between two different states of the object under investigation (e.g. lung ventilation) is of interest. In this case the first Newton step corresponds to the solution of the linearized forward problem

$$ \Delta {\bf y}_{\bf m}={\bf Gp}=\frac{d\varvec{\Psi}}{d\varvec{\kappa}}{\bf p} $$
(6)
$$ {\bf p}=({\bf G}^T{\bf G}+\lambda {\bf R}^T{\bf R})^{-1}{\bf G}^T(\Delta {\bf y}_{\bf m}) $$
(7)

where \({\bf p}=\Delta\varvec{\kappa}\) is the change of the conductivity between two states of the observed object, and Δy m is the corresponding change of the measured data. In the case of comparatively small changes the inversion of (6) according to (7) yields a fairly correct localization of the perturbed regions. We evaluated the feasibility of this kind of reconstruction in MIT by implementing a NOSER approach according to (7) with four different regularization methods.
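To make the one-step reconstruction concrete, the following minimal NumPy sketch evaluates (7) for given G, R, λ and Δy m. The function name, the matrix dimensions and the random placeholder data are illustrative assumptions and not part of our implementation; the essential operation is solving the regularized normal equations rather than forming the inverse explicitly.

import numpy as np

def one_step_reconstruction(G, R, dy, lam):
    # Regularized one-step (NOSER-like) solution of Eq. (7):
    # p = (G^H G + lam * R^H R)^(-1) G^H dy
    # G  : (M, N) sensitivity (Jacobian) matrix
    # R  : (N, N) regularization matrix
    # dy : (M,)   change of the measured voltages between the two states
    A = G.conj().T @ G + lam * (R.conj().T @ R)
    b = G.conj().T @ dy
    return np.linalg.solve(A, b)   # solve the normal equations, no explicit inverse

# Illustrative call with placeholder data (512 coil combinations, 1000 voxels)
rng = np.random.default_rng(0)
G = rng.standard_normal((512, 1000)) + 1j * rng.standard_normal((512, 1000))
dy = rng.standard_normal(512) + 1j * rng.standard_normal(512)
p = one_step_reconstruction(G, np.eye(1000), dy, lam=1e-2)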

Calculation of the Forward Solution and the Sensitivity Matrix

The forward mapping \(\varvec{\Psi}(\varvec{\kappa})\) is given by Maxwell’s equations for harmonic excitation:

$$ \begin{array}{ll} \hbox{curl}{\bf H}={\bf J}& \\ \hbox{curl}{\bf E}=-j\omega{\bf B}&\hbox{in }\Omega \\ \hbox{div}{\bf B}=0&\\ {\bf B}=\mu {\bf H},{\bf J}=\kappa{\bf E},\kappa =\sigma+j\omega \varepsilon&\\ \end{array} $$
(8)

with H: magnetic field intensity, B: magnetic flux density, E: electric field strength, J: current density, ε: permittivity, μ: magnetic permeability, σ: real conductivity, κ: complex conductivity, ω: angular frequency. Ω denotes the interior of the object under investigation.

This forward problem is solved with a previously published finite element program,12,17 which employs an Ar–V, Ar formulation with edge elements of second order for the reduced magnetic vector potential Ar and nodal elements of second order for the electric scalar potential V. Boundary conditions on the far boundary (normal component of B vanishes) were prescribed on a spherical surface with a radius sufficiently large that a change of this radius by 50% resulted in a change of the induced voltages by less than 1%.

Special attention must be paid to the efficient calculation of the Jacobian \({\bf G}=\frac{d\varvec{\Psi}}{d\varvec{\kappa}}\). A mathematically rigorous treatment of this topic has been given in31. In our implementation we exploited the integral formulation published by Mortarelli20, which is based on a physical mutual energy concept. With this approach the absolute sensitivity dy/d\(\varvec{\kappa}\) for a certain pair of coils is calculated according to (9).

$$ \frac{dy}{d\varvec{\kappa}}=I_\phi \int\limits_{\Omega}{\bf L}_\phi \cdot {\bf L}_\psi \,{\rm d}\Omega $$
(9)

with

$$ {\bf L}_\phi =-\frac{j\omega {\bf A}_\phi +\nabla V_\phi }{I_\phi}\qquad {\bf L}_\psi=\frac{j\omega {\bf A}_\psi +\nabla V_\psi}{I_\psi} $$
(10)

A ϕ, A ψ, V ϕ and V ψ denote the total magnetic vector potential and the electric scalar potential in the region Ω due to the currents I ϕ and I ψ in the excitation and receiver coils, respectively. The sensitivity matrix \({d}\varvec{\Psi}/{d}\varvec{\kappa}\) is then obtained by evaluating (9) for all individual elements and all coil pairs. The exact numerical implementation of (9) was described in detail in11. The calculation of the sensitivity map for one pair of coils requires only two forward solutions of the eddy current problem, one for generating L ϕ and one for L ψ in (10).
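As an illustration, the following sketch approximates (9) with a one-point (centroid) quadrature per tetrahedral element, yielding one row of the sensitivity matrix for a given excitation/receiver coil pair. The quadrature rule, the array shapes and the function name are assumptions made for this example; the actual implementation follows the scheme described in11.

import numpy as np

def sensitivity_row(L_phi, L_psi, volumes, I_phi):
    # Centroid quadrature of Eq. (9) for one coil pair:
    # dy/dkappa_e ~ I_phi * (L_phi(e) . L_psi(e)) * V_e  for every element e
    # L_phi, L_psi : (N, 3) complex lead fields of Eq. (10) at the element centroids
    # volumes      : (N,)   element volumes
    # I_phi        : excitation coil current
    return I_phi * np.sum(L_phi * L_psi, axis=1) * volumes

Stacking such rows for all coil pairs yields the complete sensitivity matrix; the lead fields of each coil can be reused across all pairs in which that coil participates.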

Regularization

In EIT the regularization matrix R T R most frequently used is either the identity matrix I or a discrete spatial derivative operator of first or second order. Such approaches have been discussed extensively in the literature; for a good review see e.g.10 Several regularization matrices can be regarded as simple smoothness criteria for the solution, but they also have a more general statistical meaning in the framework of Bayesian estimation theory (see e.g.1). In the case of uncorrelated noise with equal variance for all measurement data the estimator in (7) is a maximum a posteriori (MAP) estimator with R T R being the inverse of the expected covariance matrix of the image, E[pp T]. In that sense, e.g., the neighbouring matrix accounts for the case that the image values are de-correlated at borders between homogeneous regions with different mean values.

According to our own observations good results can be achieved with variance uniformization,5 which imposes a special assumption on the prior distribution. The objective here is to uniformize the expected variance of the reconstructed conductivity changes over the region Ω, thus providing approximately equal image noise in the center and in the periphery. The algorithm has been described in detail in5 and requires a singular value decomposition of G according to G = UΣV T. The regularization term is then expressed as λ R T R = VDV T, with D a diagonal matrix with entries d i

$$ d_i=\frac{\sigma_i}{\sqrt{c}}-\sigma_i^2 $$
(11)

where σ i is the ith singular value and c is a free scalar tuning parameter.
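A compact sketch of this construction is given below; the function name and the use of NumPy's SVD are assumptions of this example, but the mapping from the singular values to the entries d i follows (11) directly.

import numpy as np

def variance_uniformization_term(G, c=0.1):
    # Construct lam * R^H R = V D V^H for the variance uniformization approach,
    # with d_i = sigma_i / sqrt(c) - sigma_i**2 according to Eq. (11).
    _, s, Vh = np.linalg.svd(G, full_matrices=False)
    d = s / np.sqrt(c) - s**2            # c is the free scalar tuning parameter
    return Vh.conj().T @ np.diag(d) @ Vh

The returned matrix directly replaces the term λR T R in (7).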

Alternatively, truncated singular value decomposition (TSVD) has been applied in EIT reconstruction,19 hence this approach was also implemented for MIT. In this case the inverse solution becomes

$$ {\bf p}={\bf V}_t\Sigma_t^{-1} {\bf U}_t^T \Delta {\bf y}_{\bf m} $$
(12)

where t denotes the truncation level of the original matrices V, Σ and U, thus removing the contributions of singular values with index > t.
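In matrix form (12) can be sketched as follows; the truncation level t is assumed to be given, e.g. from the Morozov criterion discussed below.

import numpy as np

def tsvd_reconstruction(G, dy, t):
    # Truncated SVD solution of Eq. (12): p = V_t Sigma_t^(-1) U_t^T dy,
    # keeping only the t largest singular values of G.
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    return Vh[:t, :].conj().T @ ((U[:, :t].conj().T @ dy) / s[:t])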

In this paper the results obtained with four different regularization schemes were compared:

  (1)

    R T R = I. Using the identity matrix is the simplest Tikhonov regularization method, penalizing high values of the reconstructed conductivity changes. In the following this method will be abbreviated as ‘IM’.

  (2)

    R T R = N with N the neighbouring matrix defined as:

    $$ N_{ij}=\left\{\begin{array}{ll} n_{\rm n} & i=j\\ -1 & i,j\ \hbox{neighbours}\\ 0 & \hbox{otherwise}\\ \end{array}\right. $$

n n is the number of neighbouring elements of element i, where only elements with common facets are considered as neighbours (a sketch of the assembly of N is given after this list). N is an approximation of the spatial derivative operator of second order. Due to the irregular structure of the grid this filter is not spatially invariant; nevertheless it gives good results and is common practice for this type of inverse problem. In the following this method will be abbreviated as ‘NM’.

  (3)

    λ R T R = VDV T according to the variance uniformization approach. In the following this method will be abbreviated as ‘VU’.

  (4)

    TSVD, with the truncation level t chosen so as to remove all singular values below an appropriately chosen threshold.
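The neighbouring matrix N of method (2) can be assembled from the facet adjacency of the tetrahedral grid. The following sketch shows one possible assembly; the data layout (one row of node indices per tetrahedron) and the function name are assumptions of this example.

import numpy as np
from collections import defaultdict

def neighbour_matrix(tets):
    # N[i, i] = number of facet neighbours of element i,
    # N[i, j] = -1 if elements i and j share a common facet, 0 otherwise.
    # tets : (N, 4) integer array with the node indices of each tetrahedron.
    n = len(tets)
    N = np.zeros((n, n))
    owners = defaultdict(list)
    for e, nodes in enumerate(tets):
        for f in ((0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)):
            owners[tuple(sorted(nodes[k] for k in f))].append(e)
    for elems in owners.values():
        if len(elems) == 2:              # interior facet shared by two elements
            i, j = elems
            N[i, j] = N[j, i] = -1
            N[i, i] += 1
            N[j, j] += 1
    return N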

Methods 1 and 2 require the regularization parameter to be chosen optimally, while method 3 additionally requires the choice of the tuning parameter c. In practice it turns out that the value of c is not critical over a very wide range because the optimal λ depends on c. That means that c can be fixed at a more or less arbitrary value if a method for the automatic determination of λ is applied. In our case c was set to 0.1.

The regularization parameter λ accounts for the degree of smoothness of the reconstructed image and determines the condition number of the term (G T G + λR T R) in (6). Several methods for the optimal choice of this parameter have been published in the past, the best known being the L-curve, generalized cross-validation and the Morozov criterion.10 Because of its clear physical interpretation we chose the latter method. In this case the optimal λ is the one for which the estimated residuals Gp − Δy m have the same variance as the measurement noise. The motivation for this criterion is that it is obviously meaningless to make the residuals lower than expected from the statistics of the data. This method always provided stable images independently of the regularization matrix and was considered a good basis for a fair comparison between the different regularization methods.
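As an illustration, the discrepancy-based selection can be sketched as a simple scan over candidate values of λ, accepting the largest value for which the residual variance does not exceed the noise variance. The scan itself, the candidate grid and the function name are assumptions of this example; any root-finding scheme on the discrepancy could be used instead.

import numpy as np

def morozov_lambda(G, RtR, dy, noise_std, candidates):
    # Morozov criterion: pick the largest lambda for which the residual
    # variance of G p(lambda) - dy does not exceed the noise variance.
    M = len(dy)
    GtG = G.conj().T @ G
    Gty = G.conj().T @ dy
    for lam in sorted(candidates, reverse=True):   # from strong to weak regularization
        p = np.linalg.solve(GtG + lam * RtR, Gty)
        if np.linalg.norm(G @ p - dy) ** 2 / M <= noise_std ** 2:
            return lam, p
    return lam, p                                  # fall back to the smallest candidate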

Modeling Setup

The inverse solver was tested with a simple 3-D model comprising a cylindrical conductor with two spherical inhomogeneities placed with their centers at (x = −60, y = 0, z = 0) mm and (x = −30, y = 52, z = 0) mm and the array of 16 excitation coils and 32 receiving coils shown in Fig. 1. The exact geometry is illustrated in Fig. 2. The cylinder had a radius and a height of 100 mm; the perturbing spheres had a radius of 20 mm. The solenoid excitation coils were modeled by cylindrical rings with an inner diameter of 60 mm, a thickness of 1 mm and a width of 21 mm. The exciters were placed in groups of 8 with their centers on two rings with radii of 125 mm in two transversal planes of the cylinder. The lower ring is rotated by 22.5 degrees relative to the upper one in order to achieve a lower degree of symmetry. The square receiver coils with an edge length of 40 mm were placed with their centers on two symmetrically arranged parallel rings with a radius of 120 mm, each comprising 16 evenly spaced coils. The orientation of the windings was opposite in the two rings so that the 16 vertical pairs can be combined to planar gradiometers as published previously.26 The measured data were simulated in terms of induced voltage changes when changing the conductivity of the spherical perturbations from 0.2 S/m (= background conductivity, homogeneous cylinder) to 0.3 S/m. The relative permittivity was kept constant at 80 in all compartments. The excitation frequency was 100 kHz.

Figure 2. Schematic of the simulation model. The 16 excitation coils and the 32 receiver coils are each placed on two concentric rings around the tank. Two spherical perturbations are placed at the shown locations, (x = −60, y = 0, z = 0) and (x = −30, y = 52, z = 0). All dimensions are given in mm.

This arrangement was chosen as the model system for our analysis for two reasons:

  (1)

    It represents a true 3-D arrangement which theoretically delivers 512 independent measuring combinations, i.e. 512 data points for one image reconstruction.

  (2)

    It is similar to our experimental system, which employs 16 excitation sites in one plane and 14 planar gradiometers formed by connecting in counter-phase the coils in the upper and in the lower receiver plane.

Two different meshes A and B were used for the generation of the artificial dataset and for the reconstruction. Mesh A comprised 11000 elements within the cylinder and approximately 1200 in the spheres, while mesh B comprised 17000 elements for the homogeneous cylinder without perturbation. In both cases the diameter of the surrounding spherical surface which approximated the far boundary was chosen as 1 m, requiring approximately 30000 elements (mesh A) and 50000 elements (mesh B), respectively. Uncorrelated Gaussian noise was added to the voltage data in order to simulate the noise of the receiver channels. This type of noise, although common practice in simulations of this kind, is not entirely valid for real situations. In reality the noise of the excitation coils is also propagated to all receiver coils, resulting in a certain amount of correlated noise in all receiver channels. This phenomenon has been studied in detail for EIT,6 but was disregarded here for simplicity.

The calculation of the complete sensitivity matrix required 48 forward solutions according to the Mortarelli-approach.

Theoretical Limits of Resolution and Contrast/Noise Ratio

For EIT the theoretical limits of image quality in terms of contrast/noise ratio (CNR) and resolution have been studied carefully in30, while no such study exists for MIT. A similarly rigorous discussion for MIT is certainly beyond the scope of this paper. However, a simplified analysis of resolution and CNR was carried out for our linear reconstruction scheme so as to have a certain theoretical basis for interpreting our empirical results.

There is a fundamental limit for the resolution which depends on the amount of available information in the data. This information depends on the number of data points, i.e. the number of possible sensor–detector combinations, and on the degree of independence between these data points. In the case of EIT and MIT the number of independent data points is usually much lower than the number of voxels, so that the system is under-determined. Moreover, the different data are correlated to a certain degree, so that the effective rank is comparatively low. In EIT, e.g., 16 electrodes provide 104 independent data points, so that the information amounts to no more than 104 ‘effective pixels’. Including some a-priori information in the form of regularization terms leads to a defined ‘smearing’ of this information over the imaging plane and provides the typical diffuse images known from EIT.

We characterized the resolution of MIT with the Rayleigh criterion. Accordingly, two point-shaped perturbations are still separable if their point spread functions (PSF) overlap in such a way that the peak of the first one coincides with the first zero crossing of the second one. In the case of a sinc-shaped PSF the lowest separable distance is equivalent to the 64%-width of the PSF. In contrast to, e.g., X-ray CT, in MIT the PSF depends on the location of the perturbation and on the geometry of the object under investigation. In this paper the object is the model cylinder which is also used for the numerical reconstruction examples and for the phantoms. The PSF is calculated by mapping the true parameter values p * to the reconstructed ones p via the reconstruction equation

$$ \begin{aligned} &{\bf y}_{\rm m}={\bf Gp}^{\ast}\\ &{\bf p}={\bf Ay}_{\rm m}={\bf AGp}^{\ast}={\bf Mp}^{\ast} \end{aligned} $$
(13)

where \({\bf A}\) denotes the expression \({\bf A}=({\bf G}^T{\bf G}+\lambda {\bf R}^T{\bf R})^{-1}{\bf G}^T\) for the regularized methods (1)–(3) or \({\bf A}={\bf V}_t\Sigma_t^{-1}{\bf U}_t^T\) for TSVD.

The ith column of M in Eq. 13 is then the shifted PSF for the ith voxel. The theoretical limit was approximated by applying TSVD with the full set of non-zero singular values, i.e.

$$ {\bf M}={\bf V}_t\Sigma_t^{-1}{\bf U}_t^T {\bf G}\quad \hbox{with}\ t=\hbox{rank} ({\bf G}^{T}{\bf G}) $$
(14)

We chose TSVD for the estimation of the theoretical limit because it requires the least explicit assumptions about the a-priori distribution of p. In MIT an additional difficulty is that, in general, the 64%-boundary of the three-dimensional PSF is not spherical and thus the resolution is anisotropic. In our simplified analysis we define as Rayleigh-width the largest axis of the ellipsoid which best approximates the 64%-boundary.
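For illustration, the following sketch extracts the PSF of a single voxel as a column of M = AG and measures a simple one-dimensional 64% extent along one coordinate axis as a crude stand-in for the Rayleigh-width; the ellipsoid fit used for the actual analysis is not reproduced, and the array shapes and names are assumptions of this example.

import numpy as np

def psf_width_64(A, G, centres, i, axis=0):
    # PSF of voxel i = i-th column of M = A G (Eq. 13), normalized to its peak.
    # Returns the extent of the region where the PSF exceeds 64% of its maximum,
    # measured along one coordinate axis (a simplified 1-D surrogate of the
    # Rayleigh-width; the analysis in the text fits an ellipsoid to the 3-D boundary).
    psf = np.abs(A @ G[:, i])
    psf /= psf.max()
    coords = centres[psf >= 0.64, axis]
    return coords.max() - coords.min()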

As CNR at the point x we defined

$$\hbox{CNR}({\bf x})=\frac{\Delta p({\bf x})}{\hbox{std}(np({\bf x}))} $$
(15)

where Δp is the reconstructed difference between perturbation and background and std(np) is the standard deviation of the image noise. We recognize that Δp is not a contrast in the classical sense. Usually contrast is defined as the difference between p in the perturbation and in the background, divided by the background value. However, as our method is a differential one, the background value of the image is always zero and the classical contrast definition is meaningless. Moreover, the determining factor for the detectability of a perturbation is the difference Δp rather than the contrast. The CNR depends on the size and location of the perturbation as well as on the noise level of the measured data. In contrast to the resolution, there is no theoretical limit in the case of noise-free data and a perfect reconstruction method.

We estimated std(np(x)) by calculating the Cramer-Rao lower bound of the covariance of the parameters. Given the covariance matrix X of the voltage changes Δy m the Cramer-Rao lower bound of the covariance matrix of the image noise vector np is

$$ \hbox{Cov}({\bf np})={\bf AXA}^T $$
(16)

From the diagonal elements of this matrix the expected lower bound of the coefficient of variation (CV) of the reconstructed conductivity in the perturbation can be calculated for any voxel. For the evaluation of Eq. 15 the reconstructed value Δp in voxel i follows from Eq. 13:

$$ \Delta p_i=\sum\limits_{j\in P} M_{ij}\, p_j^{\ast} $$
(17)

where P is the set of indices of all voxels inside the true perturbation. This equation clearly shows that the CNR must decrease when the size of the perturbation shrinks. Assuming a small perturbation and an approximately constant PSF inside this region, the CNR is approximately proportional to the volume of the perturbation. The detectability limit can then be defined as the perturbation for which the CNR equals 1.
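The quantities entering (15)–(17) can be evaluated directly from the matrices defined above, as in the following sketch; the variable names and shapes are assumptions of this example.

import numpy as np

def cnr_at_voxel(A, M, X, p_true, P, i):
    # Eq. (17): reconstructed difference at voxel i due to the true perturbation P
    # Eq. (16): image-noise variance at voxel i from the Cramer-Rao bound A X A^H
    # Eq. (15): CNR = Delta p_i / std(np_i)
    # A : (N, M_data) reconstruction matrix, M = A G, X : covariance of Delta y_m,
    # p_true : (N,) true parameter changes, P : indices of the perturbed voxels.
    delta_p = M[i, P] @ p_true[P]
    var_np = np.real(A[i, :] @ X @ A[i, :].conj())
    return delta_p / np.sqrt(var_np)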

The approach outlined above is simple and allows the calculation of detectability limits for perturbations with different radii and locations, but it is only valid in the linear case. Alternatively, Monte-Carlo studies can be carried out for different perturbations varying in size and contrast. Such a study has been published previously for a spherical perturbation in the center of a brain model.18

For simplicity we assumed the noise to be Gaussian with zero mean, although in the general non-linear case this may not be entirely valid. For characterizing the noise level we defined as SNR the ratio max(|ΔV|)/std(V), where std(V) is the standard deviation of the noise voltage. ΔV is the vector of voltage changes in all excitation/sensor combinations when a test object is placed into the empty measurement system. In order to be independent of the size and location of the perturbation we chose as the test object the homogeneous background cylinder of our model.
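In the simulations the noise standard deviation follows directly from this definition; a minimal sketch (function name and random generator choice are assumptions) is:

import numpy as np

def add_receiver_noise(dy_meas, dv_cylinder, snr_db, rng=np.random.default_rng(0)):
    # SNR = max(|dV|) / std(noise), with dV the voltage changes caused by the
    # homogeneous test cylinder (see text). The noise is added to the simulated
    # measurement data dy_meas.
    std = np.max(np.abs(dv_cylinder)) / 10 ** (snr_db / 20)
    return dy_meas + rng.normal(0.0, std, size=dy_meas.shape), std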

Results

The PSF was evaluated at 20 equally spaced points along the x-axis between the center and the border of the cylinder. This set also includes the point (0.6R0, 0, 0), i.e. the center of one of the two perturbations in our simulation model. To mitigate discretization errors the PSF was calculated by rotating the coordinate system 8 times in steps of 45° and averaging the data, assuming radial symmetry of the true model. The normalized resolution was then calculated from the PSF as the inverse of the ratio between the Rayleigh-width and the cylinder radius. This normalized resolution can be interpreted as the number of points which can be resolved per cylinder radius. The resulting data are plotted in Fig. 3 as a function of the normalized x-coordinate in the xy-plane. Curves are depicted for noise-free data (TSVD with truncation level 512) and TSVD with truncation levels corresponding to a SNR of 44, 50 and 64 dB, respectively. The truncation levels were chosen according to the Morozov criterion and are listed in Table 1. These three SNR levels were chosen because they correspond to the range in which our current measurement system operates.25

Figure 3. Dependence of the theoretical normalized resolution on the noise level. Curves are depicted for noise-free data (TSVD with truncation level 512) and TSVD with truncation levels corresponding to a SNR of 44, 50 and 64 dB, respectively.

TABLE 1. Tuning parameters for the regularization, chosen according to the Morozov-criterion.

A clear increase of the resolution with the distance from the center is observable. The theoretical limit for a point-shaped perturbation in the xy-plane is 3.5 (corresponding to 2.9 cm separation) in the center, with a maximum resolution of more than 6 (corresponding to 1.7 cm separation) at the periphery. At the locations of the test spheres the resolution is in the range of 4, i.e. 2.5 cm separation. The relative loss of resolution with the noise level is stronger in the center than in the periphery. At a SNR of 44 dB the resolution no longer increases continuously but levels off above a normalized x-position of about 0.6.

In analogy to EIT the PSF depends strongly on the location, showing the broadest distribution in the center of the cylinder. This is reflected by increasing resolution when moving from the center towards the periphery. When the amount of regularization is increased or the truncation level is decreased to account for increasing measurement noise, the PSF broadens and its center of gravity is shifted towards the border of the cylinder. Moreover, increasingly strong ringing in the form of star-like patterns becomes observable close to the border (not shown explicitly in this paper).

In Fig. 4 the four regularization methods are compared at a SNR of 50 dB. Except for a few points TSVD performs poorest, which is in accordance with it requiring the least explicit a-priori information.

Figure 4. Theoretical normalized resolution for the four regularization methods at a SNR of 50 dB.

Figure 5a shows the CNR for TSVD as a function of the normalized x-coordinate and in dependence on the noise level, while Fig. 5b compares the four methods at a SNR of 50 dB. As expected, the theoretical CNR depends strongly on the location of the perturbation, increasing from values around 2 in the center up to about 60 at the cylinder border. At the location of the perturbing spheres the CNR drops from about 26 to 13 when decreasing the SNR from 64 dB to 44 dB. In the simulated images the CNR drops from 24 to 9, i.e. it remains roughly in the same range. Figure 5b reveals that IM and NM yield the highest CNR, followed by TSVD and VU, with VU characterized by dramatically lower values. In the center VU yields a CNR around 2, which is already very close to the limit of detectability. When comparing the theoretical values with the reconstructed ones (see Table 2), the reconstructions always produce a lower CNR than expected, the discrepancy being stronger at high noise levels.

Figure 5. Panel A: CNR for TSVD as a function of the normalized x-coordinate (relative to the cylinder radius) and in dependence on the noise level. Panel B: comparison of the four methods at a SNR of 50 dB.

Figures 6 and 7 show the reconstructed mean images from a Monte-Carlo study with 50 runs for each of the four methods and an SNR of 64 and 50 dB, respectively. Representative cross-sections in the xy-plane and in the xz-plane were selected. The respective regularization parameters are listed in Table 1.

Figure 6. Mean images of the Monte-Carlo study. Reconstructed Δσ (transversal and sagittal sections through the origin) for the spherical perturbations with four different regularization matrices and a SNR of 64 dB.

Figure 7. Mean images of the Monte-Carlo study. Reconstructed Δσ (transversal and sagittal sections through the origin) for the spherical perturbations with four different regularization matrices and a SNR of 50 dB.

In all cases the two perturbations can be recognized as diffuse bright disks. The dotted circles in the figures delineate the original position of the perturbing spheres.

A number of performance indices were calculated in order to quantitatively assess the results in Figs. 6 and 7. They are summarized in Table 2 and comprise:

  • Mean and CNR of the pixel values in the center of gravity of each reconstructed perturbation. These parameters quantify the correctness of the reconstructed values as well as their uncertainty. The theoretically expected values are listed for comparison. The center of gravity was chosen as evaluation point because the reconstructed perturbations deviate more or less from the spherical shape and show significant outward shift with increasing noise level.

  • Radial outward shift of the spheres in the reconstructed image (fidelity of the location). This shift was determined by localizing the center of gravity of each spot within a wedge-shaped search region with a height of 2.6 times the sphere’s radius, excluding the outermost 2 mm as well as the innermost 40 mm in the radial direction from the center. The restriction of the search region to this volume prevented spurious contributions from outliers and negative image values far away from the real perturbing regions. Also for this parameter we present theoretical values as expected from the PSF.

TABLE 2. Summary of the performance indices defined in the text.

In addition Table 2 lists the theoretical resolution limits for all methods and noise levels at the position of the perturbations.

With 64 dB SNR the two spheres can be resolved comparatively easily with all four methods. With 50 dB SNR the separation is theoretically still possible for all methods. In the reconstructions the resolution is already somewhat below the limit for IM and VU, the image values in the notch between the two peaks being around 71% of the peak values. TSVD and NM appear to separate the perturbations even less well, although theoretically this should not be the case. Looking at the parameter ‘normalized outward shift’ in Table 2, at higher noise all algorithms tend to shift the objects towards the border of the cylinder. Here again VU performs best by producing the lowest shift. At 44 dB SNR the mean images (not shown here) are in general comparatively poor. IM and NM interestingly still allow a clear separation of the two objects, but their localization is very poor, the outward shift being extremely large (see Table 2). VU still provides a much better localization, but only at the cost of CNR. TSVD failed to produce a clear image, an observation which was not expected from theory.

Depending on the regularization method the central voxel value of the perturbation at (−0.6R0, 0, 0) decreases from 0.02–0.03 S/m to 0.002–0.006 S/m, compared to the true value of 0.1 S/m. This means that even under nearly ideal conditions (64 dB SNR) the conductivity changes are strongly underestimated. NM and TSVD yield nearly the same central voxel values as IM, while VU produces considerably higher values.

Figure 8 shows single reconstructions for IM and VU at all three noise levels. Both methods allow a separation of the perturbations in all cases, but the poor CNR of VU results in a very noisy image at 44 dB. The pronounced difference in outward shift is clearly visible at 44 dB, where VU still allows a fair localization while IM completely fails to reconstruct the perturbations at the right positions.

Figure 8. Comparison of single-shot images for VU and IM at the three different noise levels.

IM and NM perform nearly identically, and their optimal regularization parameters are almost identical. VU yields, in general, larger values in the perturbed regions but also a larger standard deviation.

Discussion

The results demonstrate the feasibility of image reconstruction from MIT data with the same methods as suggested for EIT. This finding is not self-evident, as the sensitivity distribution is significantly different in EIT and MIT.27,29 In EIT the region of maximum sensitivity is located between the equipotential surfaces which meet the surface at the detection electrodes, i.e. within a tube-shaped region which connects injection and detection sites. As shown in27,29, in MIT the sensitivity is not concentrated within a field tube between excitation and receiver coil but increases with the distance from the tube axis, according to the increase of the eddy current density. This may be the main reason why the reconstructed solution tends to be displaced towards the nearest border of the cylindrical tank, especially at higher noise levels. An extreme case of this effect can be observed if the perturbation is placed exactly in the origin and the exciters and receivers are all in the same plane (image not shown due to space restrictions). Instead of the expected spot in the origin, two widely separated spots appear on the cylinder axis close to the top and the bottom of the cylinder, respectively. In fact such a coil arrangement cannot distinguish between an object in the center and two objects on the cylinder axis placed symmetrically with respect to the origin, because it is always possible to find two corresponding conductivity changes so that the field perturbations in the median plane are the same. Obviously, in this ambiguous situation, the algorithm favors the split solution according to the sensitivity distribution. A similar ambiguity occurs when using differential sensors, such as the gradiometers employed in our setup. To avoid such artifacts it is very important to use a less symmetric transceiver setup which provides enough spatial information in 3-D.

The theoretical resolution limit was calculated from the PSF as derived from the TSVD method. This limit depends on the chosen regularization method, the geometry of the object and on the location within the object. The respective dependences are shown in Fig. 3 and Table 2 for some selected positions inside a cylinder. TSVD was chosen for the calculation of the theoretical limits because it requires no explicit assumptions about any prior distribution of p. In this sense it contains less a-priori information than the other methods and thus describes the worst case, as confirmed by Fig. 3 and Table 2. The calculated PSF shows all basic features of the reconstructed images.

The PSF is a 3-D distribution resembling a three-dimensional analog of the sinc function. This means that most of its energy is concentrated in a diffuse cloud around the considered point, but that there exist three-dimensional ‘side lobes’ which decay with the distance and show some kind of ‘periodicity’. The ‘bean’-shaped artifacts which are visible in most top views of Figs. 6 and 7 are typical features of the PSF, as is the ‘star’ artifact in the TSVD images. Therefore these ringing artifacts do not stem from inaccuracies of the reconstruction method or measurement errors but are inherent in the PSF.

The resolution clearly also depends on the contrast in the presence of noise, because the contrast determines the SNR. Increased noise requires more regularization and hence leads to a broadening of the PSF. Figure 3 shows the dependence of the resolution on the noise in the case of TSVD at one single contrast of 0.5 only. A more complete analysis similar to that given for EIT in30 should also show the dependence of the resolution on contrast, size and location of the perturbation at a given noise level. However, such a comprehensive analysis requires a separate paper and is not given here.

The CNR depends strongly on the radial position of the perturbation and, to a lesser extent, on the noise level. Obviously IM and NM produce very similar values, followed by TSVD. VU in general yields a comparatively small CNR but higher central voxel values. For centrally placed spheres with 4 cm diameter VU yields CNRs close to the detection limit. When comparing the theoretical values with the reconstructed ones (see Table 2), the reconstruction always produces a lower CNR than expected, the discrepancy being stronger at high noise levels. One surprising detail of Fig. 5a is that at higher noise levels the CNR curves cross the curve for 64 dB. This means that very close to the periphery noisier data yield higher CNR values than less noisy data. The reason for this counter-intuitive effect is not yet entirely clear but may be related to the strong outward shift of the PSF at higher noise. In those cases the evaluation of the CNR at the original position of the perturbation may no longer be appropriate and should be interpreted with caution.

In the case of a weak perturbation we can assume that the CNR depends approximately linearly on the conductivity difference Δσ. The dependence on the volume of the perturbation is, in general, more complicated because the PSF depends on the location and is therefore not constant throughout the whole perturbation. Only in the case of a small spatial extension of the perturbation can an approximately linear dependence on the volume be assumed.

The low number of significant singular values even at comparatively low noise (64 dB SNR) suggests that, similarly to EIT, a significant number of sensor combinations does not provide enough independent information. Intuitively one would expect this finding because there exist pairs of excitation/receiving coils which nearly fulfill the reciprocity condition and hence reduce the number of useful combinations to about half of the number of possible combinations, i.e. to 256 in our case.

Further investigations should determine the maximum ‘useful’ number of sensors in one plane, i.e. that number beyond which additional sensors do not increase the resolution significantly. Adding more sensors off-plane may add more 3-D-information and hence still provide improvement. This possibility should be studied in further research.

When comparing the regularization schemes after application of the Morozov criterion, the IM and the NM approach yield the smoothest visual appearance and the highest CNR. However, they also tend to displace the perturbations towards the border of the tank. The best localization is obtained with VU, probably because the imposed variance uniformization somewhat counteracts the lower sensitivity in the center of the object. However, VU also yields the lowest CNR, i.e. less homogeneous images and more pronounced ghosts. The failure of TSVD at a SNR of 44 dB was not expected theoretically, although, in general, it produces the poorest theoretical resolution. In terms of separability of the two perturbations VU performs best, especially when also taking into account the correct localization.

In no case, however, does the single-step solution provide the correct values for Δσ. Even at a SNR of 64 dB the reconstructed differences are too low by a factor of at least 5, demonstrating that the method yields the correct search direction but not the correct step size.

The highest mean voxel values are provided by VU and TSVD, their drop with increasing noise level being the lowest. On the other hand, VU yields the highest standard deviations. Moreover, VU tends to produce more pronounced ‘ghost objects’ in the homogeneous region than IM and NM. As expected from the PSF, TSVD tends to produce ‘star artifacts’ at the cylinder border, i.e. a periodic pattern with 16 peaks close to the centers of the receiving coils. This artifact gets worse with increasing noise level.

First experiments with smaller models and at least 10 iterations of an iterative solver show that the solution converges towards the correct voxel values. Nevertheless the single-step method may be completely justified in cases where only qualitative changes are sought or where proportions are to be reconstructed, e.g. in frequency-differential spectroscopic imaging. Therefore the area of applicability of a single-step approach has to be analyzed carefully in future work.

At least for the shown examples MIT appears relatively robust against Gaussian measurement noise. A SNR of 64 dB allows for a stable and distinct solution. Even 44 dB allows the recognition of the two spheres when applying the correct regularization. This result is very important for the practical implementation because, for technical reasons, MIT is expected to yield a low SNR (around 50 dB) at frequencies as low as 100 kHz, which are interesting for the imaging of pathophysiological processes.26,28 However, our results have only been achieved with two single focal perturbations with a relatively large diameter of 20% of the background object. In a more advanced study the stability and the resolution of the images should be investigated for a series of perturbations with different diameters and spacings. For the monitoring of brain edema, which usually does not split up into separate sub-regions, our approach may be sufficiently stable. This hypothesis has to be tested both theoretically and empirically for centrally placed perturbations (worst case).

As to the detectability of spherical perturbations in a human brain, simulation results in18 have shown that a sphere with a diameter of about 40 mm, a background conductivity of 0.1 S/m and a contrast of 2 yields a SNR of 24 dB at 100 kHz when applying 1 A to an excitation coil with 45 turns and using a receiver coil with 1 turn. The assumed acquisition time was 200 ms. With our present technology single-shot images are generated with an acquisition time of 20 ms, an excitation coil with five turns and a current of up to 20 A. The receiver coils have 40 turns with otherwise unchanged geometry. This means an overall increase in SNR by 28 dB. Extrapolating the analysis given in28, an improvement of the SNR by a factor of 5–10 is still technically possible, thus reaching 50–60 dB, which is obviously sufficient for producing fairly acceptable difference images.

Another open question is the influence of the mesh quality on the reconstruction results. We used a comparatively coarse non-uniform grid for the reconstruction. Therefore non-negligible numerical errors are to be expected, which may explain the discrepancies between the theoretically expected and the reconstructed values for CNR and radial displacement. Also the apparently somewhat worse spatial resolution in the reconstructed images than theoretically expected may be due to such numerical problems. The influence of the mesh and the optimization of mesh quality should be a major issue for further developments.

The results were obtained at a single frequency only. Future work should concentrate on the exploitation of the frequency dependence of the tissue conductivity and on measurements at frequencies up to several MHz. A multi-frequency approach is expected to significantly increase the available information and thus the quality of the images. Possible applications may then in fact be the same as for EIT (lung function monitoring, lung edema monitoring) and hydration monitoring in the brain.