Abstract
We propose a state estimation approach to time-varying magnetic resonance imaging utilizing a priori information. In state estimation, the time-dependent image reconstruction problem is modeled by separate state evolution and observation models. In our method, we compute the state estimates using the Kalman filter and a steady-state Kalman smoother with a data-driven estimate for the process noise covariance matrix, constructed from conventional sliding window estimates. The proposed approach is evaluated using golden angle sampled radial simulated and experimental small animal data from a rat brain. In our method, the state estimates are updated after each new spoke of radial data becomes available, leading to a faster frame rate compared with conventional approaches. The results are compared with estimates obtained with the sliding window method and show that the state estimation approach with the data-driven process noise covariance can improve both spatial and temporal resolution.
1 Introduction
Magnetic resonance imaging (MRI) is widely used to image soft tissue with high spatial resolution and good contrast. In time-varying MRI, the scanner collects a time series of k-space data that is used to reconstruct a time series of images. Time-varying MRI techniques include, for example, functional MRI (fMRI) and dynamic contrast-enhanced (DCE) MRI.
fMRI is an imaging technique used to noninvasively monitor brain activity. The working principle of fMRI is based on the fact that neuronal activity is coupled with cerebral blood flow and volume. When an area of the brain activates, the blood flow and amount of oxygen to that area increase more than the metabolic need for oxygen, enabling the related brain activity to be indirectly evaluated by the blood oxygen level-dependent (BOLD) contrast [3, 15].
DCE-MRI is a dynamic imaging technique that uses an exogenous, typically gadolinium-based, contrast agent combined with high temporal resolution imaging to investigate microvascular structure and function [26]. In preclinical studies, DCE-MRI is often utilized to study cerebral blood flow dynamics. Blood flow dynamics are tightly coupled with metabolism and are altered in many neurological disorders including ischemia, in tumors, and in areas where the blood–brain barrier has been disrupted [3, 8, 26]. In the last case, the gadolinium-based contrast agent accumulates in tissues where the blood–brain barrier has been disrupted, leading to increased signal intensity. The signal increase also occurs in well-vascularized areas with an incomplete or missing blood–brain barrier, for example the pituitary gland and nasal mucosa [8]. However, the tissue signal often exhibits a sudden, transient drop at the time of the injection, where a very high contrast agent concentration causes signal loss through enhanced \(T_2^*\) relaxation. While DCE-MRI is not routinely utilized in clinical studies, it is widely used in preclinical studies [26].
In MRI, the measurement data represent samples of the unknown image in the frequency domain, called the k-space. Each point in the k-space is essentially a sample of the Fourier transform of the whole unknown image at a particular frequency, so each data point depends globally on the image. The central parts of the k-space correspond to the low-frequency components of the image, while the outer parts correspond to the higher-frequency components. Because of this, in time-varying experiments the central part of the k-space should preferably be sampled frequently, as it carries most of the information on the intensity changes in the time series of unknown images.
Image reconstruction in time-varying MRI is traditionally carried out using static reconstruction methods applied separately to each time frame of the measured k-space data. Static reconstruction methods assume time-invariance of the unknowns during the acquisition of the data for each image. Typically, when each of the k-space frames is a fully sampled Cartesian trajectory, the reconstruction is carried out using the inverse FFT. When employing non-Cartesian sampling schemes, the usual reconstruction method is regridding, where the non-Cartesian data are interpolated onto a Cartesian grid before the inverse FFT [16]. To obtain sufficient spatial resolution with the classical methods, enough k-space samples per image frame are necessary to fulfill the Nyquist criterion. This comes at the expense of temporal resolution, resulting in a loss of accuracy when studying physiological processes that change faster than the sampling time of each image frame.
A natural approach to improve the temporal resolution is to use undersampled k-space data. When using undersampled data in time-varying MRI, it is preferable to use a non-Cartesian acquisition such as spiral or radial sampling. This leads to high-density sampling of the central part of the k-space while the higher-frequency components are sampled more sparsely [2]. When collecting the k-space data with radial acquisition, a highly efficient sampling scheme for time-varying MRI is the golden angle (GA) method [46]. In GA acquisition, the radial spokes are collected azimuthally such that there is always an angle of \(\phi _{\text {GA}} = 111.25^{\circ }\) between consecutive spokes. In GA sampling, each new radial spoke always differs from the previous ones, leading to a rather high spatial coverage of the k-space over time with high temporal incoherence.
In conventional uniform radial sampling, the angle between consecutive spokes is \(\phi _{\text {uniform}} = 180^{\circ } / n\), where n is the number of radial spokes per frame. Figure 1 shows the differences between uniform and GA sampling when a total of three spokes has been collected with \(\phi _{\text {uniform}} = 10^{\circ }\). A distinct feature of conventional uniform radial sampling, used in our previous work [45], is that the pixel time series of the state estimates exhibit a periodic artifact with a period equal to the number of spokes in one full circle of sampled spokes. With GA sampling, we demonstrate that this periodic artifact can be avoided. The GA method furthermore offers greater flexibility with respect to the selection of the time resolution, in the sense that one can choose afterward how many radial spokes per frame are used in the reconstruction of the data.
Although the number of spokes per image frame in GA sampling can be selected freely, it is beneficial to use a number of spokes equal to a Fibonacci number. Using a Fibonacci number of spokes allows a nearly uniform angular coverage of the k-space and a better signal-to-noise ratio [46]. Smaller golden angles, based on a generalized Fibonacci sequence, could also be used instead of the standard golden angle of 111.25\(^{\circ }\) [47].
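As an illustration, the GA spoke schedule is straightforward to generate in code. The following minimal sketch is our own; the function name and the modulo-360\(^{\circ }\) convention are assumptions, not part of any acquisition software:

```python
import numpy as np

def golden_angle_spokes(n_spokes, phi_ga=111.25):
    """Azimuthal angles (degrees) of n_spokes consecutive golden angle spokes.

    Consecutive spokes are always phi_ga apart; angles are reduced
    modulo 360 degrees.
    """
    return (np.arange(n_spokes) * phi_ga) % 360.0
```

For example, the first five angles are 0, 111.25, 222.5, 333.75 and 85 degrees, so each new spoke differs from all previous ones.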
When undersampled data are used, the image reconstruction problem becomes ill-posed. This leads to aliasing artifacts when conventional reconstruction methods, such as the inverse FFT or regridding, are used. A conventional approach in time-varying MRI to tackle the aliasing effect is to use a sliding window (SW) method [6, 32]. In SW, the latest available measurement data are combined with a time window of previous data in order to obtain a sufficiently sampled frame of k-space data that can be treated with conventional image reconstruction methods. This approach, however, is suboptimal since it produces a moving-average effect in the time direction.
The problem of ill-posedness can also be addressed using the well-known compressed sensing (CS) framework. CS allows the use of drastically undersampled data [4] and can be utilized both in static reconstruction and in time-varying reconstruction, where the whole image sequence is reconstructed at once using a joint reconstruction formulation in which temporal coupling between the images is introduced by a temporal regularization functional. CS image reconstruction has been extensively used in static MRI [22, 23], dynamic cardiac MRI [9, 12, 18, 27, 28], and also in fMRI [14, 17, 25, 51].
Another possibility for the reconstruction of undersampled data is the use of state estimation, where the image reconstruction problem is considered explicitly as a time-dependent problem. In state estimation, the image can be updated after a new observation, such as a single spoke of radial data, becomes available. A state evolution model is used to model the unknowns (time series of MR images) as a time-dependent process, while the relation between the unknown MR image and the k-space measurements at each time instant is modeled by a separate observation equation. The objective is to estimate the sequence of states (MR images) given the state evolution and observation models and the time series of the k-space data.
One of the commonly used methods for computing the state estimates is the Kalman filter (KF) [20], which utilizes all the past and present data to compute the estimate at a given time point. Once the full time series of KF estimates is available, the results can usually be improved by using a Kalman smoother (KS). In KS, the filter estimates are computed in the backward direction, so that all (past, present and future) data are used in the computation of the estimates. In dynamic cardiac MRI, KF has been utilized in [10, 24, 30, 36, 37, 40] and KS in [30]. In fMRI, KF has mainly been used to estimate certain parameters from the reconstructed data [13, 21, 34]; KF enhanced with CS has been used in fMRI image reconstruction [49]. KF has also recently been tested with MRI fingerprinting reconstruction [50]. In our previous work [45], we implemented the Kalman filter for fMRI image reconstruction and studied two different approaches for the inclusion of anatomical prior information into the state estimation problem. The approach was evaluated using conventional uniform radial sampling of the k-space.
One of the practical problems in state estimation is the selection of the noise parameters of the state evolution model. In this paper, we take a somewhat different approach than in [45] and propose a state estimation method in which the variances of the evolution noise are estimated in a data-driven manner: a time-invariant estimate for the process noise covariance is constructed from an anatomical reference image and conventional sliding window estimates of the data, and scaled optimally using KF consistency tests. Previously, a time-varying process noise covariance obtained from a buffer of conventional images with spiral sampling was studied in [37]. Furthermore, to alleviate the computational burden of the Kalman smoother, this work uses a steady-state KS instead of computing separate smoother gain matrices for each time step. The proposed approach is combined with radial golden angle acquisition in DCE-MRI and fMRI, demonstrating that the periodic artifacts present in the state estimates with linear radial sampling in [45] can be avoided with golden angle sampling.
In this study, the state estimation approach is evaluated using simulated and experimental GA-sampled radial MRI data from a rat brain. The experimental data were obtained using DCE-MRI, and the simulated data represent an fMRI examination. In the computations, the state estimates are updated after each new spoke of radial data becomes available. The state estimates and their uncertainty estimates for both DCE-MRI and fMRI are computed using the Kalman filter and the steady-state version of the Rauch–Tung–Striebel (RTS) fixed-interval Kalman smoother. The approaches are compared with the sliding window method, where each image is reconstructed with the LSQR algorithm, with early truncation of the iterations regularizing the underdetermined least-squares problem.
2 Theory
2.1 Observation Model for State Estimation
Let \(\tilde{\varvec{g}}_t \in \mathbb {C}^M\) denote the (complex-valued) k-space data for a single spoke in the radial sampling of the k-space at time t, where M is the number of measurements (points) per spoke, and let the whole dynamic MRI experiment consist of a set of T spokes of data \(\{ \tilde{\varvec{g}}_t, \ t = 1,2,\ldots ,T\}\).
Let \(\tilde{\varvec{f}}_t \in \mathbb {C}^{{N_\mathrm{pix}}}\) denote the vectorized representation of the complex-valued \({N_\mathrm{pix}}= N \times N\) MRI image at time index t and \(\tilde{\varvec{G}}_t \in \mathbb {C}^{M \times N_{\text {pix}}}\) denote the (non-uniform) Fourier transform matrix modeling the transform from the image space to the data space. With these notations, the MRI observation model at time t becomes
where \(\tilde{\varvec{v}}_t\) models the measurement noise. Many image reconstruction methods, including state estimation approaches, are intrinsically derived for, and intuitively more natural with, real-valued variables. To make the observation model real-valued, we split the model (1) into real and imaginary parts, leading to the real-valued observation model
where
We remark that the splitting in (2) is applicable to any MRI sampling trajectory. Also, while the splitting increases the dimension, the increase in computational burden is effectively compensated by the replacement of complex-valued operations with real-valued operations.
In the case of radially sampled data, we can utilize the Fourier slice theorem to reduce the computational cost. By 1D Fourier transforming each measured spoke of the raw k-space data as \(\tilde{\varvec{z}}_t = \mathcal {F}\tilde{\varvec{g}}_t\), we arrive at a model
where \(\tilde{\varvec{u}}_t\) denotes the (transformed) noise and \(\varvec{H}_t\) is a real-valued matrix implementing the Radon transform (i.e., the parallel beam X-ray tomography transform) with line integrals computed perpendicular to the direction of the k-space spoke. By applying the splitting to the model (3), we obtain the observation model
where
for a single spoke of radially sampled data. The rationale for using the Fourier slice theorem and carrying out the 1D Fourier transform of the data to obtain (3) is that the computations become more efficient, as the observation models for the real and imaginary parts of the data become independent. Furthermore, the observation matrix \(\varvec{H}_t\) is much sparser than \(\tilde{\varvec{G}}_t\) and is used for both the real and imaginary parts with no off-diagonal matrices. We utilize the model (4) in the computation of the state estimates for the radial golden angle data.
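The per-spoke preprocessing implied by (3)–(4) can be sketched as follows: a 1D Fourier transform of a raw spoke followed by the real/imaginary split. The function name and the fftshift conventions below are assumptions; the correct shift depends on how the k-space samples along a spoke are ordered.

```python
import numpy as np

def preprocess_spoke(g_spoke):
    """1D Fourier transform a raw k-space spoke and split it into
    real and imaginary parts, as in models (3)-(4).

    By the Fourier slice theorem, the transformed spoke corresponds to
    a projection (Radon transform profile) of the image perpendicular
    to the spoke direction. The ifft/fftshift conventions here are
    assumptions of this sketch.
    """
    z = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(g_spoke)))
    # split real-valued data vector of length 2M
    return np.concatenate([z.real, z.imag])
```

The output is the split real-valued data vector \(\varvec{z}_t\) of length 2M that enters the observation model (4).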
2.2 Sliding Window
The principle of the sliding window (SW) method is to compute the reconstruction at time \(\ell \) using the present data and the data from the \(n_{\text {SW}} - 1\) time points prior to the present point, where \(n_{\text {SW}}\) is the number of spokes in one SW frame, using some conventional reconstruction method such as the inverse FFT, regridding or LSQR estimation [6, 32].
The sliding window estimate at time \(\ell \) can be defined in the least-squares sense as
where the concatenated data and forward operators are
where the \(\varvec{g}_k\) and \(\varvec{G}_k\) are as in equation (4) when employing the Fourier transformed spoke data, or as in (2) when employing the raw k-space data. However, since problem (5) is underdetermined in this case, the least-squares solution is not unique. In this paper, we regularize the solution of equation (5) by using the iterative LSQR algorithm [29] with early truncation of the iterations. Note that, given a time series of T spokes of data, the sliding window method produces estimates for the time points \(t = n_{\text {SW}}, \ldots , T\).
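As an illustration, the truncated-LSQR sliding window estimate (5)–(6) might be sketched as below. The function name is hypothetical, and in practice the stacked observation matrices would be sparse:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def sliding_window_estimate(H_list, z_list, n_iter=20):
    """Truncated-LSQR solution of the stacked sliding window system (5)-(6).

    H_list / z_list hold the observation matrices and transformed data
    for the n_SW spokes in the current window; early truncation of the
    LSQR iterations regularizes the underdetermined problem.
    """
    H = np.vstack(H_list)           # stacked forward operator, eq. (6)
    z = np.concatenate(z_list)      # stacked data, eq. (6)
    return lsqr(H, z, iter_lim=n_iter)[0]
```

The truncation index `n_iter` plays the role of the regularization parameter: fewer iterations give a smoother, more heavily regularized estimate.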
2.3 State Estimation Representation
The state estimation model for the MRI problem is given by the pair of equations
where (7a) is the observation model, given by equation (2) when using the raw k-space data or (4) when using the Radon transform formalism. Equation (7b) is the state evolution model where
and \(\tilde{\varvec{F}}_{t-1} \in \mathbb {C}^{{N_\mathrm{pix}}\times {N_\mathrm{pix}}}\) is the complex-valued state transition matrix, and \(\tilde{\varvec{w}}_{t-1}\) is the process noise.
The Kalman filter (KF) is a linear recursive state estimation algorithm used to estimate the states of a time-varying system of the form (7). Under the standard assumptions that \(\varvec{v}_t\) and \(\varvec{w}_t\) in (7a)–(7b) are Gaussian zero-mean noise processes with known covariances and mutually independent in the sense that \(\varvec{v}_m \perp \varvec{v}_j\), \(\varvec{w}_m \perp \varvec{w}_j\) for \(m \ne j\) and \(\varvec{w}_m \perp \varvec{v}_m\), the Kalman filter produces the minimum mean-square-error estimate of \(\varvec{f}_t\) based on the measurements \(\varvec{g}_1, \ldots , \varvec{g}_t\). The derivation of the Kalman filter can be found in [35]. In the statistical setup, the KF produces the means and covariances of the time evolution and observation updating probability densities of a Gaussian Bayes filtering problem, see [19, Chapter 4]. The standard KF equations are given by
where \(\hat{\varvec{f}}_t^-\) is the a priori estimate, \(\varvec{P}_t^-\) is the a priori error covariance of the estimate, \(\varvec{Q}_t\) is the covariance of the process noise, \(\hat{\varvec{f}}_t^+\) is the a posteriori estimate, \(\varvec{P}_t^+\) is the a posteriori error covariance of the estimate, \(\varvec{K}_t\) is the Kalman gain, and \(\varvec{R}_t\) is the covariance of the observation noise. The estimation error covariances \(\varvec{P}_t^-\) and \(\varvec{P}_t^+\) represent the uncertainty in the a priori and a posteriori estimates given the model (7), with their diagonals giving the error variances of the estimates [35]. The a posteriori variances can be used to estimate confidence intervals for the Kalman filter state estimates. We remark that these confidence estimates reflect the uncertainty with respect to the state estimation model used.
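For concreteness, one prediction/update cycle of (8a)–(8e) can be sketched as follows. This is a dense-matrix illustration with the random walk model \(\varvec{F} = \varvec{I}\) assumed; an efficient implementation would exploit the sparsity of \(\varvec{H}_t\):

```python
import numpy as np

def kalman_filter_step(f_post_prev, P_post_prev, z, H, Q, R):
    """One cycle of the KF recursion (8a)-(8e) for the random walk
    model F = I (dense-matrix sketch)."""
    n = len(f_post_prev)
    # (8a)-(8b): prediction, a priori estimate and covariance
    f_pri = f_post_prev
    P_pri = P_post_prev + Q
    # (8c): Kalman gain
    S = H @ P_pri @ H.T + R
    K = P_pri @ H.T @ np.linalg.inv(S)
    # (8d)-(8e): a posteriori estimate and covariance
    f_post = f_pri + K @ (z - H @ f_pri)
    P_post = (np.eye(n) - K @ H) @ P_pri
    return f_post, P_post
```

Iterating this step over the spokes \(t = 1, \ldots, T\) yields the full filter pass used below by the smoother.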
Since the observation noise in MRI is thermal noise, \(\varvec{v}_t\) can be modeled as a zero-mean white Gaussian process with covariance \(\varvec{R}_t\) [36]. Furthermore, we make the approximation that the variance of the observation noise is stationary and the same for all k-space samples. This leads to a covariance matrix of the form \(\varvec{R}_t=\sigma _v^2 \varvec{I}\) \(\forall \) t.
The state evolution model can be used to incorporate temporal prior information and models about the unknowns into the image reconstruction problem. In this study, we employ a simple random walk formulation, where \(\varvec{F}_{t-1} = \varvec{I}\) \(\forall \) t. The process noise \(\varvec{w}_t\) is modeled as a stationary zero-mean Gaussian process with diagonal covariance \(\varvec{Q}_t = \text {diag}\left( q_1, \ldots , q_{2{N_\mathrm{pix}}}\right) \).
The covariance \(\varvec{Q}_t\) reflects the degree of uncertainty in the state evolution model and can be interpreted as a weighting between the past estimates and the current measurement; large values of \(\varvec{Q}_t\) imply large uncertainty, giving more weight to the current measurement, while small values of \(\varvec{Q}_t\) imply small uncertainty in the state evolution model, giving high weight to the past estimates and the predictions of the state transition matrix. Small covariance values can lead to temporally and spatially smoother estimates of the time series, but if the values are too small and the true evolution is not fully described by the state transition matrix, they can also cause the estimates to lag behind. Very large values of \(\varvec{Q}_t\), on the other hand, imply low confidence in the evolution model, leading to low correlation between the states, and can produce highly noisy estimates.
The covariance \(\varvec{Q}_t\) is usually tuned by visual assessment of the KF estimates based on pilot runs of the Kalman filter through the time series of k-space data. In the next subsection, we describe a simple data-driven construction of \(\varvec{Q}_t\) based on a time series of sliding window estimates.
2.3.1 A Data-Driven Estimate for the Process Noise Covariance Matrix
A data-driven estimate for the variances of the process noise \(q_1, \ldots , q_{2{N_\mathrm{pix}}}\) is constructed by using the sliding window estimates of the data. First, a baseline image \(\hat{\varvec{f}}_{\text {base}}\) is calculated as the mean of the first \(N_{\text {base}}\) SW estimates
where \(\hat{\varvec{f}}_{\text {SW}}^{(\ell )}\) is the sliding window estimate obtained by minimizing (5) at time \(\ell \). The number \(N_{\text {base}}\) of sliding window frames is selected such that all the measurement data that is used in the computation of \(\hat{\varvec{f}}_{\text {base}}\) comes from an initial baseline measurement before any activation or administration of the contrast agent.
Next, to obtain an estimate for the deviations of the unknowns with respect to the baseline at each time instant \(\ell \), we compute for each sliding window estimate its deviation from the baseline as
Note that since we are operating with the split real-valued formalism, \(\varvec{b}_{\ell }\) is a real-valued \({2 {N_\mathrm{pix}}}\) vector. As the data-driven estimates of the state evolution variances, we take the maximum value of the deviations for each element over the time series of SW estimates. Formally, this can be obtained by defining the matrix \(\varvec{B} = (\varvec{b}_{n_{\text {SW}}}, \varvec{b}_{{n_{\text {SW}}}+1},\ldots ,\varvec{b}_{T})\) and then setting the variances as
where \(\varvec{B}(j,:)\) denotes the jth row of \(\varvec{B}\).
In time-varying MRI studies, it is common practice to measure a separate, high-accuracy anatomical reference image before and/or after the dynamic experiment. To utilize the reference image in the construction of \(\varvec{Q}_t\), we use the anatomical image to segment the image area into tissue pixels, where changes due to the employed contrast mechanism can occur, and exterior pixels, where any changes are due to noise. This segmentation is used to define a logical mask with value one indicating a tissue pixel and zero an exterior pixel. The mask is then used to set the state evolution variances in the exterior pixels to a small fixed value \(\epsilon \), formally as
where \(\circ \) signifies the Schur (element-wise) product, \(\lnot \) the Boolean NOT operator, and mask is the logical masking vector for the image domain. This masking step sets the evolution variances of the exterior pixels to much lower values than the variances of the tissue pixels, as little to no change is expected in the background. We remark that if no anatomical image is available, one could use the baseline part of the dynamic data to reconstruct an image for the construction of the tissue mask.
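The construction of the diagonal of \(\varvec{Q}_t\) described above (baseline mean, maximum deviation per element, background masking) can be sketched as follows. The function name is hypothetical, and the use of absolute rather than squared deviations in (11) is an assumption of this sketch:

```python
import numpy as np

def data_driven_q(sw_estimates, n_base, mask, eps=1e-6):
    """Diagonal of Q from sliding window estimates, following (10)-(12).

    sw_estimates: (T_SW, 2*N_pix) array of split real-valued SW frames,
    n_base: number of baseline frames, mask: boolean tissue mask of
    length 2*N_pix. Absolute deviations are used here (an assumption).
    """
    f_base = sw_estimates[:n_base].mean(axis=0)   # baseline image (10)
    B = np.abs(sw_estimates - f_base)             # deviations from baseline (11)
    q = B.max(axis=0)                             # element-wise maximum over time
    return q * mask + eps * ~mask                 # background variances set to eps (12)
```

The returned vector would then be used as the constant diagonal of \(\varvec{Q}_t\) throughout the filter run.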
2.3.2 Estimation of Observation Noise Covariance
An estimate for the observation noise variance \(\sigma _v^2\) was obtained by using an empty scan containing only noise, i.e., no object of interest was scanned. From the data, the covariance \(\varvec{R}_j\) was estimated for k-space spoke j as the sample covariance
where \(\bar{\varvec{g}}_j = \frac{1}{N_\mathrm{rep}}\sum _{\ell =1}^{N_\mathrm{rep}} \varvec{g}_{j}^{(\ell )}\) and \(N_\mathrm{rep}\) is the number of repetitions of spoke j in the empty scan measurement set. Note that while the golden angle acquisition does not, in theory, measure the same spoke twice, practical implementations often repeat a predefined cycle of GA spokes with some period. In our implementation, one cycle consisted of \(n=610\) golden angle spokes before the same set of spokes was repeated.
In principle, one can use the covariance matrix obtained in (13) as is. However, the number \(N_\mathrm{rep}\) of repetitions of the same spoke may be too small for the estimation of the full covariance matrices \(R_j \in \mathbb {R}^{2M \times 2M}\). In such a case, one can resort to a diagonal approximation. In this study, we employed a simple approximation using the same variance for all data points
where the variance \(\sigma _v^2\) was computed as average of the individual variances by
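The empty-scan variance estimate (13)–(15) under the diagonal approximation can be sketched as follows; the function name is hypothetical:

```python
import numpy as np

def noise_variance_from_empty_scan(spokes):
    """Averaged observation noise variance from repeated empty-scan
    spokes, following (13)-(15).

    spokes: (N_rep, 2M) array of split real-valued data for one spoke.
    Returns sigma_v^2 for the diagonal approximation R = sigma_v^2 I.
    """
    g_bar = spokes.mean(axis=0)                              # mean spoke (13)
    n_rep = spokes.shape[0]
    var = ((spokes - g_bar) ** 2).sum(axis=0) / (n_rep - 1)  # per-sample variances
    return var.mean()                                        # average over the 2M samples (15)
```

In practice, the estimate would be computed per spoke and then averaged over all measured spokes of the empty scan.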
2.3.3 Kalman Filter Consistency
The covariances of the state evolution and observation noise processes reflect our uncertainty in the state evolution and observation models. Often their realizations are unknown, or their data-based estimates may be only approximations of the true realizations, leading to suboptimal performance of the Kalman filter. In such cases, consistency checks can be helpful for refining the covariances so that the Kalman filter produces optimal estimates. These consistency checks include the normalized innovation squared (NIS) test (sometimes called the normalized error square, NES, test) and the autocorrelation (whiteness) test, both based on the innovation process
The criteria for filter consistency are that the innovation has to be approximately zero-mean and white and have magnitude commensurate with the innovation covariance, defined as
The zero-mean property can be checked with the NIS and whiteness with the autocorrelation. An initial check for covariance parameter tuning is to seek the minimum value of the mean of the innovation [1]. Sometimes checking the minimum value of the innovation mean might already be enough for optimal filter performance, but if further consistency tests are required, the NIS is defined as
The NIS should fulfill the condition \(E(\epsilon _{t}) = 2M\); however, since this value will most likely not be achieved exactly, we can determine a confidence interval, usually a 95% confidence interval, within which the time-averaged NIS value should lie. The time-averaged NIS is defined as
where L is the number of time points included. If \( \varvec{\nu }_t\) is a zero-mean white noise process, then \(L\hat{\varvec{\epsilon }}_t\) has a chi-square distribution with 2LM degrees of freedom [5]. The 95% confidence interval for a chi-square distribution with D degrees of freedom can be obtained from [5]
For NIS we want the value of (18) to be within the limits shown in (19).
The autocorrelation should be approximately zero; however, as with the NIS, this is most likely not achieved, and we instead require the value of the time-averaged autocorrelation to lie within a certain confidence interval. The time-averaged autocorrelation for samples \(\lambda \) steps apart is defined as
For 95% acceptance interval, we have [5]
For the optimal performance of the Kalman filter, NIS should be within the confidence interval (19) and autocorrelation values should be within (21), with autocorrelation values closest to zero indicating a more optimal filter.
For more information on the consistency tests, see [1, 5].
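As an illustration of the NIS test, the time average (18) and its chi-square acceptance interval (19) can be computed as below. The function name is hypothetical, and the degrees of freedom follow the split real-valued innovation dimension (2M per time point):

```python
import numpy as np
from scipy.stats import chi2

def nis_test(innovations, S_list, alpha=0.05):
    """Time-averaged NIS (18) with its chi-square acceptance bounds (19).

    innovations: sequence of innovation vectors nu_t; S_list: matching
    innovation covariances (17). Returns (nis_bar, lo, hi, within).
    """
    L = len(innovations)
    dim = len(innovations[0])                     # 2M in the split formalism
    # per-time-point NIS values nu_t' S_t^{-1} nu_t
    eps = [nu @ np.linalg.solve(S, nu) for nu, S in zip(innovations, S_list)]
    nis_bar = float(np.mean(eps))
    # L * nis_bar follows a chi-square law with L*dim degrees of freedom
    lo, hi = chi2.ppf([alpha / 2, 1 - alpha / 2], df=L * dim) / L
    return nis_bar, lo, hi, lo <= nis_bar <= hi
```

A filter tuned so that the returned time average falls within `[lo, hi]` passes the NIS part of the consistency check.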
2.4 Kalman Smoother
The Kalman smoother (KS) implemented in this work is the Rauch–Tung–Striebel (RTS) fixed-interval smoother [33]. The RTS smoother is defined by the following equations
where \(\varvec{K}_{fs,t}\) is the smoother gain, \(\varvec{P}_{fs,t}\) the smoothed estimate error covariance, and \(t=T-1,\ldots ,0\).
When calculating the KS estimate, the KF recursion (8a)–(8e) is calculated first through \(t=1,\ldots ,T\) (forward pass through the data) and the smoother is calculated as a backward pass. This requires saving the smoother gains (22a), or the estimate error covariances (8b) and (8e), during the forward pass. The smoother error covariance (22c) can be used to evaluate the improvement of the smoothed estimate over the KF estimate by comparing it with a posteriori covariance (8e).
The steady-state version of the RTS smoother used here can be applied when the memory requirement for saving the smoother gains (22a) is too high. In the steady-state version, the a priori and a posteriori estimation error covariances \(\varvec{P}_t^-\) and \(\varvec{P}_t^+\) in (22a) are constant and obtained from equations (8b) and (8e) at the last step of the Kalman filter. Along with the random walk evolution model with a fixed state evolution matrix \(\varvec{F}_t\), the steady-state method has a constant smoother gain, and only (22b) needs to be updated for all time points. Thus, the smoother equations become
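The steady-state backward pass for the random walk model can be sketched as follows. The function name is hypothetical and, for brevity, only the smoothed state sequence (not the smoothed covariance) is computed:

```python
import numpy as np

def steady_state_rts(f_filt, P_pri, P_post):
    """Backward pass of the steady-state RTS smoother for the random
    walk model (F = I).

    f_filt: (T, n) array of a posteriori KF estimates; P_pri / P_post:
    the constant a priori / a posteriori covariances taken from the
    final KF step. Only the smoothed states are returned.
    """
    T, n = f_filt.shape
    K_s = P_post @ np.linalg.inv(P_pri)   # constant smoother gain, F = I
    f_smooth = f_filt.copy()
    for t in range(T - 2, -1, -1):        # backward recursion over t = T-1, ..., 0
        f_smooth[t] = f_filt[t] + K_s @ (f_smooth[t + 1] - f_filt[t])
    return f_smooth
```

Because the gain is constant, only the filtered state sequence needs to be stored during the forward pass, which is the memory saving the steady-state formulation provides.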
3 Materials and Methods
3.1 Estimates
The following estimates are computed and compared in the results section:

(SW)
Sliding window using a frame of \(n_{\text {SW}} = 55\) spokes of data. The sliding window reconstructions are computed by the least-squares approach using data of the form (6). The sliding window estimates serve as a reference conventional method for filtering the golden angle data and are also used to construct the data-driven estimate of the process noise covariance for the state estimation approaches.

(KF)
Kalman filter (8a)–(8e), where the covariance of the process noise has been estimated using the sliding window estimates. The observation equation is given by (7a).

(KS)
Steady-state Kalman smoother (23a)–(23c) applied to the results of the KF.
As is conventional in MRI, all the figures and tables are based on the magnitude \(\sqrt{{\text {Re}}(\hat{\varvec{f}})^2 + {\text {Im}}(\hat{\varvec{f}})^2}\) of the reconstructed images.
While the golden angle acquisition in principle produces a unique set of spokes for the entire scan, we employed an experimental (hardware motivated) approximation which measures 610 golden angle spokes before repeating the same cycle of spokes. The 611th spoke would have had an angle of approximately \(0.06^{\circ }\), but was instead set to \(0^{\circ }\), starting the cycle from the beginning. This approximation simplifies both the acquisition and the reconstruction while retaining the advantages of the golden angle acquisition.
In this study, all the reconstructions are computed using the Radon transform formalism in (4). Thus, before any of the estimates were computed, every spoke of the raw k-space measurement data was 1D Fourier transformed and then split into real and imaginary parts to arrive at the observation model of the form (4). This procedure was applied in both the experimental test case and the simulated test case, where the raw measurement data were simulated by the non-uniform FFT [11]. Before the 1D Fourier transform, the data spokes were zero-padded to twice the original size per spoke (from \(M = 128\) to \(M = 256\)), as the estimates without this interpolation of the data produced a “hot spot” artifact in the center of the image in the case of the Kalman estimates, or were of much lower quality in the case of the SW estimates. This artifact was caused by the coarse sampling of the data in the Radon transform domain. Using the raw k-space data in the filtering (observation model (2)) would remove the need for the zero padding of the data, but is computationally much more demanding. Furthermore, as we are applying a linear transform to the data (justified by the Fourier slice theorem) and a standard interpolation technique, the quality of the estimates is not affected, and as such the use of the raw k-space data (observation model (2)) or the Radon transform formalism (observation model (4)) leads to equal image quality.
For the sliding window estimates, we used \({n_{\text {SW}}}= 55\) spokes of data for each estimate, as this provides a good compromise between spatial and temporal resolution and is a Fibonacci number, which allows the most uniform coverage of the k-space in GA sampling [47]. The Kalman estimates produce a time series of T images and the sliding window a time series of \(T - n_{\text {SW}} + 1\) images. The first sliding window estimate is made after the first 55 spokes have been collected, making its time series 54 points shorter than that of the Kalman estimates.
The observation matrix \(\varvec{H}_t\) in (4) was constructed by using the Astra toolbox [39].
3.2 Simulated Data
A \(128 \times 128\) anatomical image of a rat brain, obtained in a 9.4 T Agilent MRI system with a gradient echo pulse sequence, was used for the construction of the ground truth images for the fMRI simulation. The baseline target, shown in Fig. 2, was a denoised version of the original image. A region of interest (ROI), highlighted in red in the left image of Fig. 2, was selected as the activation area for the fMRI simulation. To generate the time series of ground truth states, a simulated activation function was added to each of the ROI pixels. The mean of the image magnitude over the ROI pixels for the time series of 3050 simulated ground truth states is shown in the right image of Fig. 2. In each of the ROI pixels, the activation was such that its peak value was 110% of the maximum value of the original image.
The golden angle trajectory for the simulation of the raw k-space measurement data was adopted from the experimental test case, which is explained in Sect. 3.3. This GA trajectory consists of a cycle of \(n=610\) golden angle spokes with an angular separation of \(111.246^{\circ }\); in the simulated test case this cycle was repeated \(N_\mathrm{rep}=5\) times, leading to a time series of \(T=3050\) spokes of simulated golden angle fMRI data. The raw k-space data was simulated with the nonuniform FFT package [11]. Gaussian random noise with a standard deviation (STD) of 0.025 was added to the simulated raw k-space data.
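The simulation of one noisy radial spoke can be sketched as below. The paper uses the NUFFT package [11]; the brute-force direct summation here is only meant to make the radial sampling and the noise model explicit, and all names are ours:

```python
import numpy as np

def radial_spoke(image, angle_deg, M=128):
    """Evaluate the 2D Fourier transform of `image` along one radial
    k-space spoke by direct summation (illustration only; a NUFFT is
    the practical choice at realistic image sizes)."""
    N = image.shape[0]
    k = np.linspace(-0.5, 0.5, M, endpoint=False)  # cycles per pixel
    theta = np.deg2rad(angle_deg)
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    x = np.arange(N) - N // 2
    X, Y = np.meshgrid(x, x, indexing="ij")
    # direct evaluation of sum_xy f(x, y) exp(-2*pi*i*(kx*x + ky*y))
    phase = np.exp(-2j * np.pi * (np.outer(kx, X.ravel())
                                  + np.outer(ky, Y.ravel())))
    return phase @ image.ravel().astype(complex)

rng = np.random.default_rng(0)
phantom = rng.random((32, 32))
clean = radial_spoke(phantom, 111.246, M=64)
# complex Gaussian noise with STD 0.025, as in the simulated test case
noisy = clean + 0.025 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
```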
In the computations, the original image of the rat brain (i.e., the image before denoising) was used as the anatomical reference image, which was segmented into sets of pixels inside and outside the tissues for the construction of the mask operation in equation (12).
3.3 Experimental DCEMRI Data from a Small Animal Specimen
3.3.1 Animal Preparation
All animal experiments were approved by the Animal Health Welfare and Ethics Committee of the University of Eastern Finland. \(1\times 10^6\) C6 (ECACC 92090409) rat glioma cells (Sigma) were implanted into the brain of a 200 g female Wistar rat under ketamine/medetomidine hydrochloride anesthesia. Tumor imaging was performed 10 days post-implantation. During the experiments, the animal was anesthetized with isoflurane (5% induction, 1–2% upkeep) and kept in a fixed position in a holder inserted into the magnet. A needle was placed into the tail vein of the animal for the injection of the contrast agent.
3.3.2 Acquisition of the Data
All MR data were collected using a 9.4 T horizontal magnet interfaced to an Agilent imaging console and a volume coil transmit/quadrature surface coil receive pair (Rapid Biomed, Rimpar, Germany). For the time-varying data, a gradient-echo-based radial pulse sequence was used with repetition time 38.5 ms, echo time 9 ms, flip angle 30\(^{\circ }\), field of view \(32\times 32\) mm, slice thickness 1.5 mm and \(M = 128\) points (measurements) in each spoke. \(n = 610\) spokes were collected in sequential order with a golden angle interval of 111.246\(^{\circ }\) before the same cycle of spokes was repeated 25 times, leading to an overall measurement sequence of \(T= 15250\) spokes of data. The measurement time for a full cycle of 610 spokes was \(610 \cdot 38.5\) ms \(\approx 23.5\) s. Gadovist (Gadobutrol, 1 mmol/kg) was injected intravenously (IV) over a period of 3 s, one minute after the beginning of the dynamic scan.
Anatomical reference images were acquired from the same slice before and after the dynamic experiment using a gradient-echo pulse sequence with otherwise similar parameters as the dynamic sequence, but using a Cartesian sampling of \(128\times 128\) points of k-space data. The anatomical image acquired before the experiment is shown in Fig. 3. This image was segmented into sets of pixels inside and outside the tissues to produce the mask operation for equation (12).
For the analysis of the results, a region of interest (ROI) of 20 pixels was chosen such that the area lies inside the glioma in the brain. These ROI pixels are highlighted in red in Fig. 3. Furthermore, two pixels were selected for individual examination; they are shown with white “+” marks in Fig. 3. The lower “+” mark is inside the glioma and the upper mark is at the location of the main artery in the cortex.
3.4 Computations
The sliding window estimates were computed with the LSQR algorithm based on the approach of (5)–(6), using \({n_{\text {SW}}}= 55\) spokes for each estimate and leading to a time series of \(T-{n_{\text {SW}}}+1 = T-54\) images. The LSQR estimates were regularized by truncated iterations, with a total of 15 iterations.
For the data-driven estimation of the process noise covariance for the state estimation, the mean of the baseline in (9) was computed from the first \(N_{\text {base}} = 610\) SW estimates. These estimates covered the first full cycle of k-space spokes in our GA scheme and originated from the initial baseline measurement before any activation or contrast agent administration. The logical mask in (12) for both the simulated and the experimental test case was obtained from the anatomical reference image by simple thresholding. Figure 4 shows the logical mask for the experimental case. The value of \(\epsilon \) in (12) was selected as the square of the minimum of the original process noise variance values, that is, \(\epsilon = \min {(\varvec{\zeta })}^2\).
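The data-driven variance construction can be sketched as below. The exact forms of Eqs. (9)–(12) are given earlier in the paper; this is one plausible reading with our own variable names, not the authors' code:

```python
import numpy as np

def data_driven_variances(sw_series, tissue_mask, n_base=610):
    """Per-pixel process noise variances from a sliding window series.

    sw_series   : (T_sw, n_pix) magnitudes of the SW estimates
    tissue_mask : (n_pix,) boolean logical mask from the reference image
    """
    baseline = sw_series[:n_base].mean(axis=0)     # baseline mean, cf. (9)
    # temporal variability of each pixel around its baseline value
    zeta = np.mean((sw_series - baseline) ** 2, axis=0)
    eps = zeta.min() ** 2                          # eps = min(zeta)^2, cf. (12)
    return np.where(tissue_mask, zeta, eps)        # suppress non-tissue pixels

rng = np.random.default_rng(1)
sw = 1.0 + 0.1 * rng.standard_normal((2000, 40))   # toy SW time series
mask = np.arange(40) < 25                          # toy tissue mask
zeta = data_driven_variances(sw, mask, n_base=500)
```

Masking the non-tissue pixels to the small value \(\epsilon\) keeps the random-walk model from injecting process noise into background pixels that should stay static.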
The observation noise covariance was of the form \(\varvec{R} = \sigma _v^2 \varvec{I}\), where the variance \(\sigma _v^2\) was estimated by (13) and (14). This equal variance form was used since the differences between the estimated variances of different spokes or measurements in (13) were negligible.
As we were using the observation model (4), we had a block diagonal forward matrix and a block diagonal state evolution model with \(\varvec{F}_t = \varvec{I}\). Furthermore, we assumed that the real and imaginary parts have no cross-correlations, making (8b) and (8e) block diagonal as well.
Using equations (10) and (11), we get estimates of the process noise variance values for the real and imaginary parts separately. As a last step, we summed the process noise variances of the real and imaginary parts to yield a combined covariance approximation that is common to both parts. This (dilated) covariance approximation allows pixels that had large estimated data-driven variances in the imaginary part to also have larger process noise uncertainty in the real part of the evolution model, and vice versa. The state covariance matrix thus becomes
This approximation further allowed (8b) to be identical for both the real and imaginary parts, decreasing the computational cost.
Regarding the estimation of the process noise variances, we remark that, since the entire time series of sliding window estimates was used with (10) and (11), the temporal variations caused by the contrast agent injection were included in the estimation. If only a subset of the sliding window estimates is used for the estimation of the variances, care should be taken to ensure that the temporal variations are representative of the time steps in question. An alternative to the approach used in this paper would be to estimate a time series of process noise covariances by computing them in a fixed interval or fixed lag manner, similar to the KS presented in this paper.
Since the process noise covariance, and thus (8b), and the observation noise variance were the same for the real and imaginary parts, and we had the block diagonal forms, we were able to compute (8c) and (8e) such that they are identical for both the real and imaginary parts. In the case of \(\varvec{P}^-_t\) this means that \(\varvec{\Gamma }^{-}_t = \varvec{P}^-_t (1{:}{N_\mathrm{pix}},1{:}{N_\mathrm{pix}})\), and similarly for \(\varvec{P}^+_{t}\). The Kalman gain \(\varvec{K}_t\) becomes \( \varvec{\Pi }_t = \varvec{K}_t (1{:}{N_\mathrm{pix}},1{:}M)\). These further allowed us to compute the KF estimates (8d) separately and independently for the real and imaginary parts.
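One filter step of this shared-covariance scheme can be sketched as follows, written directly in the reduced single-block form; this is our own sketch with our own variable names, not the paper's implementation:

```python
import numpy as np

def kf_step_split(f_re, f_im, Gamma, H, y_re, y_im, Psi, sigma_v2):
    """One KF step for the split real/imaginary model with a random walk
    state (F = I): the error covariance and gain are computed once and
    shared between the two parts.

    Gamma : (n, n) shared posterior error covariance
    Psi   : (n,) diagonal of the combined process noise covariance
    """
    n, m = Gamma.shape[0], H.shape[0]
    Gamma_pred = Gamma + np.diag(Psi)                # prediction, cf. (8b)
    S = H @ Gamma_pred @ H.T + sigma_v2 * np.eye(m)  # innovation covariance
    K = Gamma_pred @ H.T @ np.linalg.inv(S)          # shared gain, cf. (8c)
    f_re = f_re + K @ (y_re - H @ f_re)              # updates, cf. (8d)
    f_im = f_im + K @ (y_im - H @ f_im)
    Gamma = (np.eye(n) - K @ H) @ Gamma_pred         # posterior cov, cf. (8e)
    return f_re, f_im, Gamma

rng = np.random.default_rng(3)
n, m = 16, 4
H = rng.standard_normal((m, n))
f_re, f_im, Gamma = np.zeros(n), np.zeros(n), np.eye(n)
Psi = 0.01 * np.ones(n)
f_re, f_im, Gamma = kf_step_split(f_re, f_im, Gamma, H,
                                  rng.standard_normal(m),
                                  rng.standard_normal(m), Psi, 0.1)
```

Because the covariance and gain recursions run once per spoke instead of once per part, the dominant matrix work is halved relative to filtering the two parts independently.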
Finally, the filter formulas (8a)–(8e) become
This same procedure also applies to the Kalman smoother in (23a), (23b) and (23c). As the initial value \(\hat{\varvec{f}}^+_0\) for the KF, we used an LS estimate (5) computed from the first 610 spokes of the data.
In order to achieve feasible initial estimates and guarantee their early convergence, equations (25a), (25e) and (25b) were iterated before any of the KF estimates were computed. The initial estimation error covariance \(\varvec{\Gamma }_0\) before the iterations was diagonal, with the diagonal values set to the maximum value of the process noise covariance matrix \(\varvec{\Psi }\). The number of initial iterations was the same as the number of spokes per cycle (610). After the iterations, the resulting a posteriori estimation error covariance matrix was used as the initial covariance in the actual filter.
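This warm-up can be sketched as below: the covariance and gain recursions do not involve the measurement data, so they can be iterated over one cycle of observation matrices before any state updates. A sketch under our own names, not the paper's code:

```python
import numpy as np

def warm_up_covariance(H_cycle, Psi, sigma_v2, n_iter=None):
    """Iterate the covariance and gain recursions (no data needed) to
    drive the initial error covariance towards its periodic steady
    state before filtering begins."""
    n = len(Psi)
    Gamma = Psi.max() * np.eye(n)  # diagonal initial covariance
    n_iter = len(H_cycle) if n_iter is None else n_iter
    for t in range(n_iter):
        H = H_cycle[t % len(H_cycle)]
        Gamma_pred = Gamma + np.diag(Psi)
        S = H @ Gamma_pred @ H.T + sigma_v2 * np.eye(H.shape[0])
        K = Gamma_pred @ H.T @ np.linalg.inv(S)
        Gamma = (np.eye(n) - K @ H) @ Gamma_pred
    return Gamma

rng = np.random.default_rng(4)
H_cycle = [rng.standard_normal((3, 10)) for _ in range(20)]  # toy spoke cycle
Psi = 0.05 * np.ones(10)
Gamma0 = warm_up_covariance(H_cycle, Psi, sigma_v2=0.1)
```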
The steady-state smoother was utilized because the standard RTS smoother gains in (22a) would need to be stored during the KF pass, and a single gain matrix at an image resolution of \(128 \times 128\) takes 1 GB of memory. With the steady-state smoother, only a single gain matrix is required. The constant smoother gain computed with (23a) and (23b) was used to determine the smoothed estimates. Tests performed at a lower resolution (\(64 \times 64\) image) showed that the differences between the computationally more efficient, but approximate, steady-state smoother and the regular smoother are negligible.
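A steady-state RTS backward pass for the random walk model can be sketched as below; one constant gain replaces the per-step gains of (22a), so only a single matrix is stored. The variable names are ours:

```python
import numpy as np

def steady_state_rts(filtered, P_plus, Psi):
    """Steady-state RTS smoother for the random walk model (F = I).

    filtered : (T, n) KF posterior means
    P_plus   : (n, n) steady-state posterior error covariance
    Psi      : (n,) process noise variances
    """
    P_minus = P_plus + np.diag(Psi)      # steady-state prior covariance
    G = P_plus @ np.linalg.inv(P_minus)  # constant smoother gain
    smoothed = filtered.astype(float).copy()
    # backward pass; with F = I the one-step prediction is the filtered mean
    for t in range(len(filtered) - 2, -1, -1):
        smoothed[t] = filtered[t] + G @ (smoothed[t + 1] - filtered[t])
    return smoothed

rng = np.random.default_rng(5)
filt = rng.standard_normal((50, 8))  # toy filtered time series
sm = steady_state_rts(filt, 0.5 * np.eye(8), 0.1 * np.ones(8))
```

At full resolution the stored gain is a \(16384 \times 16384\) matrix, roughly 1 GB in single precision, which matches the memory figure quoted above.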
Since the observation matrix is different for each consecutive spoke of data, and both (8c) and (8e) depend on the current observation matrix, the estimation error covariances (8b) and (8e) do not reach an exact steady state, but rather vary periodically around a steady-state level along the measurement cycle. However, the differences in values between different time steps were less than 5%. This periodic steady-state variation was also reached quite quickly, as the estimation error covariances had nearly reached it after the initial 610 iterations.
With both the simulated and the experimental data, the KF and KS estimates were computed using Algorithm 1. In this work, all computations were performed offline, as the data-driven estimation of the evolution noise covariance requires access to the whole time series of the MRI data. However, it would also be possible to implement Algorithm 1 in near real time by using a time-varying process noise covariance estimated by fixed interval or fixed lag computations on short subsets of the data as it becomes available.
For the computations, we used the ArrayFire C++ library [48] with the CUDA backend in single precision. The GPU used was an Nvidia Tesla P100 PCIE 16 GB. The codes were run through the MEX interface in MATLAB (2017a, The MathWorks Inc., Natick, MA).
3.5 Consistency Tests
Both the process noise variances and the observation noise variance were scaled using the consistency tests, resulting in the following modifications of (25a), (25b) and (16)
respectively, where \(\alpha > 0\) and \(\beta > 0\) are the coefficients to be determined with the consistency tests, initially set to 1. Due to the separate computations of the real and imaginary parts, the expected value of (17) becomes \(E(\epsilon _t) = M\). The number of time points used to compute the tests was \(L = 610\). First, the value of \(\alpha \) was varied until the mean of the innovation (15) reached its minimum value. Next, the value of \(\beta \) was decreased until the NIS value (17) reached the confidence interval (19), or was at the minimum distance to the confidence interval if it did not reach it. If the NIS did not reach the confidence interval, an additional step was taken in which \(\alpha \) was incremented further until the NIS value reached the confidence interval. In all cases, the autocorrelation values were inside the confidence intervals (21). Since both \(\alpha \) and \(\beta \) were adjusted based on the NIS and autocorrelation tests, their values and intervals were limited by the tests as well.
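The NIS part of the test can be sketched as below. For a consistent filter each NIS term is chi-square distributed with \(m\) degrees of freedom, so the window mean should lie close to \(m\); here a Gaussian approximation of the chi-square interval stands in for the exact quantiles of (19), and the function is our own sketch:

```python
import numpy as np

def nis_window_mean(innovations, S_matrices):
    """Mean normalized innovation squared (NIS) over a window, with an
    approximate 95% consistency check for the windowed mean."""
    eps = np.array([v @ np.linalg.solve(S, v)
                    for v, S in zip(innovations, S_matrices)])
    m = innovations[0].size           # measurement dimension
    L = len(eps)                      # window length
    half_width = 2.0 * np.sqrt(2.0 * m / L)  # Gaussian approx. of the interval
    mean_nis = eps.mean()
    return mean_nis, abs(mean_nis - m) < half_width

# deterministic check: innovations whose NIS is exactly m pass the test
m, L = 8, 610
vs = [np.ones(m)] * L
mean_nis, consistent = nis_window_mean(vs, [np.eye(m)] * L)
```

In the tuning loop described above, \(\beta\) scales the observation noise variance and \(\alpha\) the process noise variances until a statistic of this kind falls inside its confidence interval.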
3.5.1 Simulated Test Case
The reconstructions were computed using an image size of \({N_\mathrm{pix}}= 128 \times 128\). The process noise variances estimated from the SW estimates are shown in Fig. 5.
The values of the consistency tuning parameters \(\alpha \) and \(\beta \) in (26) were obtained as outlined earlier and were \(\alpha = 500\) and \(\beta = 0.2\).
3.5.2 Experimental Data
The process noise variances which were estimated based on the SW estimates are shown in Fig. 6.
The consistency tuning parameters \(\alpha \) and \(\beta \) in (26) were obtained as outlined in Sect. 3.5 and were \(\alpha = 2500\) and \(\beta = 0.4\).
3.6 Image Fidelity Measures
In the simulated test case, the fidelity of the reconstructed images with respect to the true target is assessed using the relative \(L_2\) norm error
peak signal-to-noise ratio (PSNR) [41] and the mean structural similarity index (SSIM) [42] over the whole image domain. The reconstruction quality of the simulated activation response inside the ROI is assessed using the \(L_2\) norm error and the contrast-to-noise ratio
where A is the peak (or minimum, if the signal decreases) signal magnitude after the stimulus, \(A_{\text {base}}\) is the mean magnitude of the baseline signal (the signal before the stimulus), and \(\sigma _{\text {base}}\) is the STD of the initial baseline. Other ways of calculating the CNR exist for MRI, but (27) is commonly used [43].
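A minimal sketch of this CNR computation, with our own function name and a toy step response:

```python
import numpy as np

def cnr(signal, baseline_len, drop=False):
    """Contrast-to-noise ratio in the spirit of (27): peak (or minimum,
    for a transient signal drop) response relative to the baseline mean,
    scaled by the baseline STD."""
    base = signal[:baseline_len]
    A_base, sigma_base = base.mean(), base.std()
    response = signal[baseline_len:]
    A = response.min() if drop else response.max()
    return np.abs(A - A_base) / sigma_base

t = np.arange(300, dtype=float)
toy = 10.0 + 0.05 * np.sin(t) + np.where(t >= 150, 2.0, 0.0)  # toy "activation"
value = cnr(toy, baseline_len=150)
```

Setting `drop=True` corresponds to the DCE case discussed later, where the magnitude of the transient signal drop after the injection is used as the response.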
4 Results
4.1 Simulated Test Case
The top of Fig. 7 shows the mean of the magnitude in the simulated ROI with respect to time for the whole time series, and the bottom shows a close-up of the signal around the time of the activation in the simulated fMRI data, marked by the black bars in the top image. Figure 8 shows the relative reconstruction error of the ROI and the relative error over the entire image domain \(\varOmega \) with respect to time. The integral of the relative error over time and the time averages of the other image fidelity measures are given in Table 1. Figure 9 shows a snapshot of the simulated and reconstructed images at \(t=1000\) and the temporal variations of one column of image pixels for the entire time series. This column is marked by the white bar in the left image, and the temporal location of the snapshot is marked by the white bar on the right.
From Figs. 7 and 8, as well as from Table 1, we can see that the state estimates have smaller temporal and spatial errors than the sliding window estimate. From the bottom of Fig. 7, it can be seen that the KF estimate shows less delay in the recovery of the activation than the SW estimate. The KS, on the other hand, shows a slightly premature beginning of the activation.
In the case of spatial errors, the differences in the ROI are small when comparing the KF and SW estimates, but quite significant when either is compared with the KS estimate. In the whole domain, both the KF and KS have smaller errors than the SW estimate. The other image fidelity measures similarly show better results with the KF and KS estimates. Based on the fidelity measures and a visual examination of the snapshots in Fig. 9, the best estimate is given by the KS.
Figure 10 shows the mean of the KF estimate over the ROI pixels and the mean of the 95% confidence intervals (\(\pm 2\sigma \)) of the estimate at the ROI pixels. The confidence intervals were computed with the following formula
separately for the real and imaginary parts, with the same covariance used for both. Finally, the magnitude was taken and used as the confidence interval in Figs. 10 and 14. We remark that the (posterior) confidence interval of (28) depends on the state estimation model through both of the noise covariances \(\varvec{Q}_t, \varvec{R}_t\) and the state transition matrix \(\varvec{F}_t\), implying that the confidence intervals reflect the uncertainty with respect to the state estimation model used.
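The interval construction described above can be sketched as follows. This is one plausible reading of the procedure around (28), with our own names and details, not the authors' code:

```python
import numpy as np

def magnitude_ci(f_re, f_im, var_diag):
    """Form the ±2σ bounds separately for the real and imaginary parts
    (the same posterior variance is used for both), then take the
    magnitude of the resulting complex bounds."""
    sd = np.sqrt(var_diag)
    lower = np.abs((f_re - 2 * sd) + 1j * (f_im - 2 * sd))
    upper = np.abs((f_re + 2 * sd) + 1j * (f_im + 2 * sd))
    return lower, upper

f_re = np.array([3.0, 4.0])            # toy real parts
f_im = np.array([4.0, 3.0])            # toy imaginary parts
lo, hi = magnitude_ci(f_re, f_im, np.array([0.01, 0.04]))
```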
With the current implementations, the average computational time per frame was 0.08 s for the KF, with the KS extending the computational time by 0.006 s per frame on the Tesla P100 PCIE 16 GB, and 0.3 s for the SW on an Intel Core i7-5820K.
4.2 Experimental Data
The mean of the magnitude in the ROI with respect to time is shown in the top of Fig. 11. The bottom of Fig. 11 shows a close-up of the time window marked by the black bars in the top image. Figure 12 shows the time series of the two pixels marked in Fig. 3 over the same time window as in the bottom of Fig. 11. The left image of Fig. 12 shows the signal of one of the ROI pixels (marked with the white “+” inside the ROI in Fig. 3) and the right image shows the signal of a pixel at the location of the main cortex artery (the upper “+” mark in Fig. 3). Figure 13 shows a snapshot of the reconstructions at time index \(t = 6050\) together with the time series of one row of pixels indicated by the white line in the left image. The white line in the right of Fig. 13 marks the temporal location of the snapshot.
Signal alterations in the time series in Figs. 11 and 12 reflect changes in water relaxation properties, mainly changes in the \(T_1\) relaxation time, due to the varying contrast agent concentration. An increased signal intensity is observed in the tumor region after the administration of the contrast agent (see Figs. 11 and 13), because the compromised blood–brain barrier leads to the accumulation of the contrast agent in the tumor tissue. The time plot of the signal intensity along the vertical line in Fig. 13 indicates that only minimal signal changes are observed outside the tumor region during the experiment. However, the ROI signal in Fig. 11 and the time plot in Fig. 13 (the part of the vertical line inside the brain) reveal that a transient signal drop can be observed in the brain at the time of the injection. This drop may reflect conditions where the very high contrast agent concentration leads to a transient signal loss due to enhanced \(T_2^*\) relaxation.
The results in Figs. 11, 12 and 13 conform to the findings of the simulated fMRI test case: the state estimation approach yields better image quality than the SW in the DCE experiment as well, with less noisy time series and a clearer recovery of the transient signal loss in the brain and the subsequent increase in intensity in the tumor region. The spatial details in the state estimates are clearer and the images are less noisy compared with the SW approach. The signal changes can also be seen more clearly in the Kalman estimates, especially in the KS. Both the decrease and the increase in signal intensity can be seen slightly earlier than in the SW estimate. The contrast-to-noise ratios are also better with both the KF and KS estimates, see Table 2. For this DCE experiment, the CNR was calculated using the magnitude of the transient signal drop after the injection of the contrast agent.
The left graph of Fig. 12 shows that at some pixels in the brain, the SW estimate does not recover the transient signal loss at all, while the KF estimate, and especially the KS estimate, do recover it. This would be advantageous for diagnostic purposes, as it allows, for example, tumor areas to be located more accurately. Furthermore, it could allow the examination of stimuli that cause a response that is either too fast or too small, as is often the case in BOLD examinations [3]. This would be useful, for example, in the early diagnosis of Alzheimer's disease.
Figure 14 shows the mean of the KF estimate for the ROI pixels and the mean of the 95% confidence intervals (\(\pm 2\sigma \)) of the estimate for the ROI pixels. The confidence intervals were computed as in the simulated case with (28).
5 Discussion & Conclusions
In this paper, we proposed a state estimation approach to time-varying MRI. The proposed method utilized the golden angle acquisition method, a data-driven process noise covariance matrix estimated from the sliding window estimates, and a steady-state Kalman smoother. The approach was evaluated using simulated and experimental small animal data. The simulated data corresponded to an fMRI experiment, while the experimental data demonstrated a DCE-MRI experiment. Both cases showed that the proposed method improved the reconstruction quality, both spatially and temporally, when compared with the sliding window method, which is widely used for the reconstruction of time-varying MRI data. Our method also allowed the easy computation of two different estimates, the KF and KS estimates, with very little additional computational time required. The KS estimate was temporally smoother than the KF estimate, but could estimate the signal changes slightly prematurely.
The steady-state formalism of the Kalman smoother presented here could also be extended to the Kalman filter itself. As previously mentioned, the convergence to the (periodic) steady-state estimation error covariance and Kalman gain values was reasonably fast. Employing, for example, a steady-state a posteriori estimation error covariance would improve the computational times significantly while having only a small effect on the quality of the estimates. Another option would be to store the individual Kalman gain matrices for all the spokes, but the memory requirements would be higher than for a single covariance matrix.
In our previous paper [45], we demonstrated the use of the KF in fMRI using uniformly spaced radial acquisition and spatial priors. When using the conventional uniform radial acquisition method, a periodic temporal artifact was present in the time series of the KF estimates regardless of the number of spokes per cycle. This artifact resembled a sawtooth shape in the error curves of the estimates in our previous work. In this work, however, we showed that with the golden angle acquisition no such periodic artifact is present in the KF estimates.
The GA method also seemed to improve the spatial accuracy of the KF and KS estimates compared with conventional linear radial sampling. This improvement can be attributed to the combination of the GA trajectory, which gradually fills in the whole k-space, with the state estimation approach, where the states are correlated in time through the models and the structure of the filtering procedure. The spatial regularization methods used in [45] could also be incorporated into the state estimation with data-driven noise covariances (Algorithm 1). However, in the present setup with GA sampling, the spatial regularization provided only a minor improvement in image quality while significantly increasing the computational cost. The process noise covariance presented here could also be made to vary with time, as was done in, e.g., [37]. This would allow the estimates to be temporally less noisy at times when there are no rapid changes in the data. However, as this effect comes with a decrease in the process noise variances, it can lead to lagging at the points of rapid change, depending on the sensitivity of the covariance. Increasing the variance values to prevent this lag can, on the other hand, lead to noisier estimates, as happened in our experiments. Based on our experiments, we found that the stationary covariance approximation employed in this paper led to a reliable construction of the state noise covariance.
In this paper, we employed the same process noise covariance for both the real and imaginary parts. As previously mentioned, using separate covariances is also a possibility. However, in our tests, separate process noise covariances gave either temporally slightly less noisy estimates with some lagging in certain pixels, or temporally noisier estimates with no lag, and the computational times were twice as long.
Instead of using the split form presented in this paper, all the Kalman filter computations could be carried out in the complex domain using the method of [7]. However, this method is computationally more demanding and, in our experiments, provided no improvement in the quality of the KF estimates.
In this work, the state evolution model was a simple random walk process. A topic of future research could be the improvement of the method by using more advanced state evolution models for encoding temporal prior information. These could, for example, be based on simple models of the physiological signals [31] or, if slow changes are expected, on kinematic models [38], where the rate of change of the signal derivatives is controlled by a higher-order time series model. These approaches, however, have the complication that the computational demand increases due to the increased number of unknown state variables.
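As an illustration of the last point, a constant-velocity kinematic evolution model in the spirit of [38] can be written as below; the state is augmented with per-pixel rates of change, doubling the number of unknowns, which is the computational cost noted above. Names and the toy example are ours:

```python
import numpy as np

def constant_velocity_F(n_pix, dt=1.0):
    """State transition matrix of a constant-velocity kinematic model:
    maps the augmented state [f; df] to [f + dt*df; df]."""
    I = np.eye(n_pix)
    Z = np.zeros((n_pix, n_pix))
    return np.block([[I, dt * I], [Z, I]])

F = constant_velocity_F(3)
state = np.array([1.0, 2.0, 3.0, 0.5, 0.0, -0.5])  # [image; rate of change]
next_state = F @ state
```

The random walk model used in this paper is recovered by dropping the rate block, i.e., \(\varvec{F}_t = \varvec{I}\).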
All the methods used in this paper, in addition to the methods of our previous paper and methods that were tested for this paper but not presented here, will be available in the open-source High-dimensional Kalman filter toolbox (HELMET), which will be freely available on GitHub [44].
In this study, the state estimation approach was evaluated using a time step of a single k-space spoke. The time resolution and sampling can in principle be selected quite freely; for example, one could use more than one spoke per time instant or use the data in a sliding window manner. However, in our experiments this did not improve the spatial resolution while it did lower the temporal resolution.
References
BarShalom, Y., Kirubarajan, T., Li, X.R.: Estimation with Applications to Tracking and Navigation. John Wiley & Sons Inc, New York, NY, USA (2002)
Bernstein, M.A., King, K.F., Zhou, X.J.: Handbook of MRI Pulse Sequences. Elsevier, Netherlands (2004)
Buxton, R.B.: Introduction to Functional Magnetic Resonance Imaging: Principles and Techniques, 2nd edn. Cambridge University Press, Cambridge (2009)
Candes, E., Wakin, M.: An introduction to compressive sampling. IEEE Signal Process. Mag. 25(2), 21–30 (2008). https://doi.org/10.1109/MSP.2007.914731
Crassidis, J.L., Junkins, J.L.: Optimal Estimation of Dynamic Systems, 2nd edn. Chapman & Hall/CRC, USA (2011)
d’Arcy, J.A., Collins, D.J., Rowland, I.J., Padhani, A.R., Leach, M.O.: Applications of sliding window reconstruction with Cartesian sampling for dynamic contrast enhanced MRI. NMR Biomed. 15(2), 174–183 (2002). https://doi.org/10.1002/nbm.755
Dini, D.H., Mandic, D.P.: Class of widely linear complex Kalman filters. IEEE Trans. Neural Netw. Learn. Syst. 23(5), 775–786 (2012). https://doi.org/10.1109/TNNLS.2012.2189893
Edelman, R.R., Mattle, H.P., Atkinson, D.J., Hill, T., Finn, J.P., Mayman, C., Ronthal, M., Hoogewoud, H.M., Kleefield, J.: Cerebral blood flow: assessment with dynamic contrast-enhanced T2*-weighted MR imaging at 1.5 T. Radiology 176(1), 211–220 (1990)
Feng, L., Srichai, M.B., Lim, R.P., Harrison, A., King, W., Adluru, G., Dibella, E.V.R., Sodickson, D.K., Otazo, R., Kim, D.: Highly accelerated realtime cardiac cine MRI using \(k\)  \(t\) SPARSESENSE. Magn. Reson. Med. 70(1), 64–74 (2013). https://doi.org/10.1002/mrm.24440
Feng, X., Salerno, M., Kramer, C.M., Meyer, C.H.: Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging. Magn. Reson. Med. 69(5), 1346–1356 (2013). https://doi.org/10.1002/mrm.24375
Fessler, J.A., Sutton, B.P.: Nonuniform fast Fourier transforms using minmax interpolation. IEEE Trans. Signal Process. 51(2), 560–574 (2003). https://doi.org/10.1109/TSP.2002.807005
Gamper, U., Boesiger, P., Kozerke, S.: Compressed sensing in dynamic MRI. Magn. Reson. Med. 59(2), 365–373 (2008). https://doi.org/10.1002/mrm.21477
Gössl, C., Auer, D.P., Fahrmeir, L.: Dynamic models in fMRI. Magn. Reson. Med. 43(1), 72–81 (2000)
Holland, D.J., Liu, C., Song, X., Mazerolle, E.L., Stevens, M.T., Sederman, A.J., Gladden, L.F., D’Arcy, R.C.N., Bowen, C.V., Beyea, S.D.: Compressed sensing reconstruction improves sensitivity of variable density spiral fMRI. Magn. Reson. Med. 70(6), 1634–1643 (2013). https://doi.org/10.1002/mrm.24621
Huettel, S.A., Song, A.W., McCarthy, G.: Functional Magnetic Resonance Imaging, 2nd edn. Sinauer Associates (2009)
Jackson, J.I., Meyer, C.H., Nishimura, D.G., Macovski, A.: Selection of a convolution function for Fourier inversion using gridding. IEEE Trans. Med. Imaging 10(3), 473–478 (1991). https://doi.org/10.1109/42.97598
Jeromin, O., Pattichis, M.S., Calhoun, V.D.: Optimal compressed sensing reconstructions of fMRI using 2D deterministic and stochastic sampling geometries. Biomed. Eng. Online 11(25), 1–36 (2012). https://doi.org/10.1186/1475925X1125
Jung, H., Sung, K., Nayak, K.S., Kim, E.Y., Ye, J.C.: kt FOCUSS: A general compressed sensing framework for high resolution dynamic MRI. Magn. Reson. Med. 61(1), 103–116 (2009). https://doi.org/10.1002/mrm.21757
Kaipio, J.P., Somersalo, E.: Statistical and Computational Inverse Problems. Springer, Cham (2005)
Kalman, R.E.: A new approach to linear filtering and prediction problems. J. Basic Eng. 82(1), 35–45 (1960). https://doi.org/10.1115/1.3662552
Li, L., Yan, B., Tong, L., Wang, L., Li, J.: Incremental activation detection for realtime fMRI series using robust Kalman filter. Comput. Math. Methods Med. (2014). https://doi.org/10.1155/2014/759805
Lustig, M., Donoho, D., Pauly, J.M.: Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 58(6), 1182–1195 (2007). https://doi.org/10.1002/mrm.21391
Lustig, M., Donoho, D., Santos, J., Pauly, J.: Compressed sensing MRI. IEEE Signal Proc. Mag. 25(2), 72–82 (2008). https://doi.org/10.1109/MSP.2007.914728
Majumdar, A., Ward, R.K., Aboulnasr, T.: Compressed sensing based realtime dynamic MRI reconstruction. IEEE Trans. Med. Imag. 31(12), 2253–2266 (2012). https://doi.org/10.1109/TMI.2012.2215921
Michel, V., Gramfort, A., Varoquaux, G., Eger, E., Thirion, B.: Total variation regularization for fMRIbased prediction of behavior. IEEE Trans. Med. Imaging 30(7), 1328–1340 (2011). https://doi.org/10.1109/TMI.2011.2113378
Moroz, J., Reinsberg, S.A.: Dynamic ContrastEnhanced MRI. Springer, New York, NY (2018)
Otazo, R., Candès, E., Sodickson, D.K.: Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components. Magn. Reson. Med. 73(3), 1125–1136 (2015). https://doi.org/10.1002/mrm.25240
Otazo, R., Kim, D., Axel, L., Sodickson, D.K.: Combination of compressed sensing and parallel imaging for highly accelerated firstpass cardiac perfusion MRI. Magn. Reson. Med. 64(3), 767–776 (2010). https://doi.org/10.1002/mrm.22463
Paige, C.C., Saunders, M.A.: LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 8(1), 43–71 (1982). https://doi.org/10.1145/355984.355989
Park, S., Park, J.: Accelerated dynamic cardiac MRI exploiting sparseKalmansmoother selfcalibration and reconstruction (\(k\)  \(t\) SPARKS). Phys. Med. Biol. 60, 3655–3671 (2015). https://doi.org/10.1088/00319155/60/9/3655
Prince, S., Kolehmainen, V., Kaipio, J.P., Franceschini, M.A., Boas, D., Arridge, S.R.: Time series estimation of biological factors in optical diffusion tomography. Phys. Med. Biol. 48, 1491–1504 (2003). https://doi.org/10.1088/00319155/48/11/301
Rasche, V., Boer, R.W.D., Holz, D., Proksa, R.: Continuous radial data acquisition for dynamic MRI. Magn. Reson. Med. 34(5), 754–761 (1995). https://doi.org/10.1002/mrm.1910340515
Rauch, H.E., Striebel, C.T., Tung, F.: Maximum likelihood estimates of linear dynamic systems. AIAA J. 3(8), 1445–1450 (1965). https://doi.org/10.2514/3.3166
Särkkä, S., Solin, A., Nummenmaa, A., Vehtari, A., Auranen, T., Vanni, S., Lin, F.H.: Dynamic retrospective filtering of physiological noise in BOLD fMRI: DRIFTER. Neuroimage 60(2), 1517–1527 (2012). https://doi.org/10.1016/j.neuroimage.2012.01.067
Simon, D.: Optimal State Estimation: Kalman, H\(_\infty \) and Nonlinear Approaches. Wiley, New Jersey (2006)
Sümbül, U., Santos, J.M., Pauly, J.M.: Improved time series reconstruction for dynamic magnetic resonance imaging. IEEE Trans. Med. Imag. 28(7), 1093–1104 (2009). https://doi.org/10.1109/TMI.2008.2012030
Sümbül, U., Santos, J.M., Pauly, J.M.: A practical acceleration algorithm for realtime imaging. IEEE Trans. Med. Imag. 28(12), 2042–2051 (2009). https://doi.org/10.1109/TMI.2009.2030474
Tossavainen, O.P., Vauhkonen, M., Kolehmainen, V., Kim, K.Y.: Tracking of moving interfaces in sedimentation processes using electrical impedance tomography. Chem. Eng. Sci. 61, 7717–7729 (2006). https://doi.org/10.1016/j.ces.2006.09.010
van Aarle, W., Palenstijn, W.J., Beenhouwer, J.D., Altantzis, T., Bals, S., Batenburg, K.J., Sijbers, J.: The ASTRA Toolbox: a platform for advanced algorithm development in electron tomography. Ultramicroscopy 157, 35–47 (2015). https://doi.org/10.1016/j.ultramic.2015.05.002
Vaswani, N.: LS-CS-residual (LS-CS): compressive sensing on least squares residual. IEEE Trans. Signal Proc. 58(8), 4108–4120 (2010). https://doi.org/10.1109/TSP.2010.2048105
Wang, Z., Bovik, A.C.: Mean squared error: love it or leave it? IEEE Signal Process. Mag. 26(1), 98–117 (2009). https://doi.org/10.1109/MSP.2008.930649
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004). https://doi.org/10.1109/TIP.2003.819861
Welvaert, M., Rosseel, Y.: On the definition of signal-to-noise ratio and contrast-to-noise ratio for fMRI data. PLoS One (2013). https://doi.org/10.1371/journal.pone.0077089
Wettenhovi, V.V.: High-dimensional Kalman filter toolbox (HELMET) (2022). https://github.com/villekf/HELMET
Wettenhovi, V.V., Kolehmainen, V., Huttunen, J., Kettunen, M., Gröhn, O., Vauhkonen, M.: State estimation with structural priors in fMRI. J. Math. Imaging Vis. 60(2), 174–188 (2018). https://doi.org/10.1007/s10851-017-0749-x
Winkelmann, S., Schaeffter, T., Koehler, T., Eggers, H., Doessel, O.: An optimal radial profile order based on the golden ratio for time-resolved MRI. IEEE Trans. Med. Imaging 26(1), 68–76 (2007). https://doi.org/10.1109/TMI.2006.885337
Wundrak, S., Paul, J., Ulrici, J., Hell, E., Rasche, V.: A small surrogate for the golden angle in time-resolved radial MRI based on generalized Fibonacci sequences. IEEE Trans. Med. Imaging 34(6), 1262–1269 (2015). https://doi.org/10.1109/TMI.2014.2382572
Yalamanchili, P., Arshad, U., Mohammed, Z., Garigipati, P., Entschev, P., Kloppenborg, B., Malcolm, J., Melonakos, J.: ArrayFire: a high performance software library for parallel computing with an easy-to-use API (2015). https://github.com/arrayfire/arrayfire
Yan, S., Nie, L., Wu, C., Guo, Y.: Linear dynamic sparse modelling for functional MR imaging. Brain Inform. 1(1), 11–18 (2014). https://doi.org/10.1007/s40708-014-0002-y
Zhang, X., Zhou, Z., Chen, S., Chen, S., Li, R., Hu, X.: MR fingerprinting reconstruction with Kalman filter. Magn. Reson. Imaging 41, 53–62 (2017). https://doi.org/10.1016/j.mri.2017.04.004
Zong, X., Lee, J., Poplawsky, A.J., Kim, S.G., Ye, J.C.: Compressed sensing fMRI using gradient-recalled echo and EPI sequences. Neuroimage 92, 312–321 (2014). https://doi.org/10.1016/j.neuroimage.2014.01.045
Acknowledgements
This work was supported by the Jane and Aatos Erkko Foundation (project 64741) and the Academy of Finland, Centre of Excellence in Inverse Modelling and Imaging (projects 312343 and 312344).
Funding
Open access funding provided by University of Eastern Finland (UEF) including Kuopio University Hospital.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Wettenhovi, V.V., Kolehmainen, V., Kettunen, M. et al.: State estimation of time-varying MRI with radial golden angle sampling. J. Math. Imaging Vis. 64, 825–844 (2022). https://doi.org/10.1007/s10851-022-01095-x