1 Introduction

Magnetic resonance imaging (MRI) is widely used to image soft tissue with high spatial resolution and good contrast. In time-varying MRI, the scanner collects a time series of k-space data that is used to reconstruct a time series of images. Time-varying MRI techniques include, for example, functional MRI (fMRI) and dynamic contrast-enhanced (DCE) MRI.

fMRI is an imaging technique used to non-invasively monitor brain activity. The working principle of fMRI is based on the fact that neuronal activity is coupled with cerebral blood flow and volume. When an area of the brain activates, the blood flow and amount of oxygen to that area increase more than the metabolic need for oxygen, enabling the related brain activity to be indirectly evaluated by the blood oxygen level-dependent (BOLD) contrast [3, 15].

DCE-MRI is a dynamic imaging technique that uses an exogenous, typically gadolinium-based, contrast agent combined with high temporal resolution imaging to investigate microvascular structure and function [26]. In pre-clinical studies, DCE-MRI is often utilized to study cerebral blood flow dynamics. The blood flow dynamics are tightly coupled with metabolism and are altered in many neurological disorders including ischemia, in tumors, and in areas where the blood–brain barrier has been disrupted [3, 8, 26]. In the last case, the gadolinium-based contrast agent accumulates in tissues where the blood–brain barrier has been disrupted, leading to increased signal intensity. The signal increase also occurs in well-vascularized areas with an incomplete or missing blood–brain barrier, for example the pituitary gland and nasal mucosa [8]. However, the tissue signal often exhibits a sudden, transient drop at the time of the injection, where a very high contrast agent concentration leads to signal loss through enhanced \(T_2^*\) relaxation. While DCE-MRI is not routinely utilized in clinical studies, it is widely used in pre-clinical studies [26].

In MRI, the measurement data represent samples of the unknown image in the frequency domain, called the k-space. Each point in the k-space corresponds to the Fourier transform of the whole unknown image at a particular frequency, implying that each data point depends globally on the image. The central parts of the k-space correspond to the low-frequency components of the image, while the outer parts correspond to the higher-frequency components. For this reason, in time-varying experiments the central part of the k-space should preferably be sampled frequently, as it carries most of the information on the intensity changes in the time series of unknown images.

Image reconstruction in time-varying MRI is traditionally carried out using static reconstruction methods applied to each time frame of the measured k-space data separately. Static reconstruction methods assume time-invariance of the unknowns during the acquisition of the data for each image. Typically, when each of the k-space frames is a fully sampled Cartesian trajectory, the reconstruction is carried out using the inverse FFT. When employing non-Cartesian sampling schemes, the usual reconstruction method is regridding, where the non-Cartesian data are interpolated onto a Cartesian grid before the inverse FFT [16]. To obtain sufficient spatial resolution with the classical methods, enough k-space samples per image frame are needed to fulfill the Nyquist criterion. This comes at the expense of temporal resolution, resulting in a loss of accuracy when studying a physiological process whose changes are faster than the sampling time of each image frame.

A natural approach to improve the temporal resolution is to use undersampled k-space data. When using undersampled data in time-varying MRI, it is preferable to use a non-Cartesian acquisition such as spiral or radial sampling. This leads to high-density sampling of the central part of the k-space while the higher-frequency components are sampled more sparsely [2]. When collecting the k-space data with radial acquisition, a highly efficient sampling scheme for time-varying MRI is the golden angle (GA) method [46]. In GA acquisition, the radial spokes are collected azimuthally such that there is always an angle of \(\phi _{\text {GA}} = 111.25^{\circ }\) between consecutive spokes. In GA sampling, each new radial spoke always differs from the previous ones, leading to a rather high spatial coverage of the k-space over time with high temporal incoherence.

In conventional uniform radial sampling, the angle between consecutive spokes is \(\phi _{\text {uniform}} = 180^{\circ } / n\), where n is the number of radial spokes per frame. Figure 1 shows the differences between linear and GA sampling when a total of three spokes has been collected with \(\phi _{\text {uniform}} = 10^{\circ }\). A distinct feature of the conventional uniform radial sampling, used in our previous work [45], is that the pixel time series of the state estimates exhibit a periodic artifact with a period equal to the number of spokes in one full circle of sampled spokes. With GA sampling, we demonstrate that this periodic artifact can be avoided. The GA method furthermore offers greater flexibility in the selection of the temporal resolution, in the sense that the number of radial spokes per frame used in the reconstruction can be chosen after the acquisition.

Fig. 1

Left: Linear sampling with the first 3 spokes when \(\phi _{\text {uniform}} = 10^{\circ }\). Right: Golden angle sampling with the first 3 spokes

Although the number of spokes used per image frame in GA sampling can be selected freely, it is beneficial to use a number of spokes equal to a Fibonacci number. Using a Fibonacci number of spokes allows a nearly uniform angular coverage of the k-space and a better signal-to-noise ratio [46]. Smaller golden angles, based on a generalized Fibonacci sequence, could also be used instead of the standard golden angle of 111.25\(^{\circ }\) [47].
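As an illustration of the coverage argument, the following toy Python sketch (the function names and spoke counts are illustrative, not from any MRI package) generates GA spoke directions and compares the largest angular gap on the half circle for a Fibonacci and a nearby non-Fibonacci number of spokes.

```python
import numpy as np

def golden_angle_spokes(n_spokes, phi_ga=111.25):
    """Spoke directions (degrees, folded into [0, 180)) for GA sampling."""
    return (phi_ga * np.arange(n_spokes)) % 180.0

def max_angular_gap(angles_deg):
    """Largest gap between adjacent spoke directions on the half circle."""
    a = np.sort(angles_deg)
    gaps = np.diff(np.concatenate([a, [a[0] + 180.0]]))
    return gaps.max()

# A Fibonacci number of spokes (13) covers the half circle more uniformly
# than a nearby non-Fibonacci count (12).
gap_fib = max_angular_gap(golden_angle_spokes(13))     # 16.25 degrees
gap_nonfib = max_angular_gap(golden_angle_spokes(12))  # 26.25 degrees
```

For 13 spokes, the uniform gap would be \(180/13 \approx 13.85^{\circ }\); the GA gaps take only two values (here 10\(^{\circ }\) and 16.25\(^{\circ }\)), while dropping to 12 spokes leaves a noticeably larger hole.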

When undersampled data are used, the image reconstruction problem becomes ill-posed. This leads to aliasing artifacts when conventional reconstruction methods, such as inverse FFT or regridding, are used. A conventional approach in time-varying MRI to tackle the aliasing effect is to use a sliding window (SW) method [6, 32]. In SW, the last available measurement data are combined with a time window of previous data in order to obtain a sufficiently sampled frame of k-space data that can be treated with the conventional image reconstruction methods. This approach, however, is suboptimal since it produces a moving averaging effect in the time direction.

The problem of ill-posedness can also be solved by using the well-known compressed sensing (CS) framework. CS allows the use of drastically undersampled data [4] and can be utilized both in static reconstruction and in time-varying reconstruction, where the whole image sequence is reconstructed at once using a joint formulation in which temporal coupling between the images is introduced by a temporal regularization functional. CS image reconstruction has been extensively used in static MRI [22, 23], dynamic cardiac MRI [9, 12, 18, 27, 28], and also in fMRI [14, 17, 25, 51].

Another possibility for the reconstruction of undersampled data is the use of state estimation, where the image reconstruction problem is considered explicitly as a time-dependent problem. In state estimation, the image can be updated after a new observation, such as a single spoke of radial data, becomes available. A state evolution model is used to model the unknowns (time series of MR images) as a time-dependent process while the relation between the unknown MR image and k-space measurements at each time instant is modeled by a separate observation equation. The objective is to estimate the sequence of states (MR images) given the state evolution and observation models, and the time series of the k-space data.

One of the commonly used methods for computing the state estimates is the Kalman filter (KF) [20], which utilizes all the past and present data to compute the estimate at a given time point. Once the full time series of KF estimates is available, the results can usually be improved by using a Kalman smoother (KS). In KS, we compute the filter estimates in the backward direction, using all (past, present and future) data in the computation of the estimates. In dynamic cardiac MRI, KF has been utilized in [10, 24, 30, 36, 37, 40] and KS in [30]. In fMRI, KF has mainly been used to estimate certain parameters from the reconstructed data [13, 21, 34]; KF enhanced with CS has been used in fMRI image reconstruction [49]. KF has also recently been tested with MRI fingerprinting reconstruction [50]. In our previous work [45], we implemented the Kalman filter for fMRI image reconstruction and studied two different approaches for the inclusion of anatomical prior information into the state estimation problem. The approach was evaluated using conventional uniform radial sampling of the k-space.

One of the practical problems in state estimation is the selection of the noise parameters of the state evolution model. In this paper, we take a somewhat different approach than in [45] and propose a state estimation approach in which the variances of the evolution noise covariance are estimated in a data-driven manner: a time-invariant estimate for the process noise covariance is constructed based on an anatomical reference image and the conventional sliding window estimates of the data, and scaled optimally using KF consistency tests. Previously, a time-varying process noise covariance obtained from a buffer of conventional images with spiral sampling was studied in [37]. Furthermore, to alleviate the computational burden of the Kalman smoother, this work uses a steady-state KS instead of computing separate smoother gain matrices for each time step. The proposed approach is combined with radial golden angle acquisition in DCE-MRI and fMRI, demonstrating that the periodic artifacts in the state estimates present with linear radial sampling in [45] can be avoided with golden angle sampling.

In this study, the state estimation approach is evaluated using simulated and experimental GA radial MRI data from a rat brain. The experimental data are obtained using DCE-MRI, and the simulated data represent an fMRI examination. In the computations, the state estimates are updated after each new spoke of radial data becomes available. The state estimates and their uncertainty estimates for both DCE-MRI and fMRI are computed using the Kalman filter and the steady-state version of the Rauch–Tung–Striebel (RTS) fixed-interval Kalman smoother. The approaches are compared with the sliding window method, where each image is reconstructed with the LSQR algorithm, using early truncation of the iterations for regularization of the underdetermined least-squares problem.

2 Theory

2.1 Observation Model for State Estimation

Let \(\tilde{\varvec{g}}_t \in \mathbb {C}^M\) denote the (complex-valued) k-space data for a single spoke in the radial sampling of the k-space at time t, where M is the number of measurements (points) per spoke, and let the whole dynamic MRI experiment consist of a set of T spokes of data \(\{ \tilde{\varvec{g}}_t, \ t = 1,2,\ldots ,T\}\).

Let \(\tilde{\varvec{f}}_t \in \mathbb {C}^{{N_\mathrm{pix}}}\) denote the vectorized representation of the complex-valued \({N_\mathrm{pix}}= N \times N\) MRI image at time index t and \(\tilde{\varvec{G}}_t \in \mathbb {C}^{M \times N_{\text {pix}}}\) denote the (non-uniform) Fourier transform matrix modeling the transform from the image space to the data space. With these notations, the MRI observation model at time t becomes

$$\begin{aligned} \tilde{\varvec{g}}_t = \tilde{\varvec{G}}_t\tilde{\varvec{f}}_t + \tilde{\varvec{v}}_t, \end{aligned}$$
(1)

where \(\tilde{\varvec{v}}_t\) models the measurement noise. Many image reconstruction methods, including state estimation approaches, are derived for and are intuitively more natural with real-valued variables. To make the observation model real-valued, we split the model (1) into real and imaginary parts, leading to the real-valued observation model

$$\begin{aligned} \varvec{{g}}_t = \varvec{{G}}_t \varvec{{f}}_t + \varvec{{v}}_t, \end{aligned}$$
(2)

where

$$\begin{aligned} \varvec{g}_t = \begin{pmatrix} {\text {Re}}(\tilde{\varvec{g}}_{t}) \\ {\text {Im}}(\tilde{\varvec{g}}_{t}) \end{pmatrix}, \quad \varvec{G}_t = \begin{pmatrix} {\text {Re}} (\tilde{\varvec{G}}_t) & -{\text {Im}} (\tilde{\varvec{G}}_t) \\ {\text {Im}} (\tilde{\varvec{G}}_t) & {\text {Re}} (\tilde{\varvec{G}}_t) \end{pmatrix}, \quad \varvec{f}_t = \begin{pmatrix} {\text {Re}} (\tilde{\varvec{f}}_t) \\ {\text {Im}} (\tilde{\varvec{f}}_t) \end{pmatrix} = \begin{pmatrix} \tilde{\varvec{f}}_{{\text {Re}},t} \\ \tilde{\varvec{f}}_{{\text {Im}},t} \end{pmatrix}, \quad \varvec{v}_t = \begin{pmatrix} {\text {Re}} (\tilde{\varvec{v}}_t) \\ {\text {Im}} (\tilde{\varvec{v}}_t) \end{pmatrix}. \end{aligned}$$

We remark that the splitting in (2) is applicable for any MRI sampling trajectory. Also, while the splitting increases the dimension, the increase in computational burden is effectively compensated by replacing complex-valued operations by real-valued operations.
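The equivalence of the split model (2) with the complex model (1) can be checked numerically. The sketch below uses toy sizes and random matrices standing in for the Fourier operator; it builds the real-valued block matrix and compares the two forward models.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 4                       # toy sizes: measurements per spoke, image pixels
G_c = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
f_c = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Real-valued split of equation (2): stack Re/Im parts of data and image,
# and embed the complex matrix as a 2x2 real block matrix.
G = np.block([[G_c.real, -G_c.imag],
              [G_c.imag,  G_c.real]])
f = np.concatenate([f_c.real, f_c.imag])

g = G @ f                          # real-valued forward model
g_c = G_c @ f_c                    # complex forward model
split_matches_complex = np.allclose(g, np.concatenate([g_c.real, g_c.imag]))
```

The two models agree to floating-point precision, illustrating that the splitting changes the representation but not the content of the observation model.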

In the case of radially sampled data, we can utilize the Fourier slice theorem for reducing the computational cost. By 1D Fourier transforming each measured spoke of the raw k-space data as \(\tilde{\varvec{z}}_t = \mathcal {F}\tilde{\varvec{g}}_t\), we arrive at a model

$$\begin{aligned} \tilde{\varvec{z}}_t = \varvec{H}_t \tilde{\varvec{f}}_t + \tilde{\varvec{u}}_t \end{aligned}$$
(3)

where \(\tilde{\varvec{u}}_t\) denotes the (transformed) noise and \(\varvec{H}_t\) is a real-valued matrix implementing the Radon transform (i.e., the parallel beam X-ray tomography transform) with line integrals computed perpendicular to the direction of the k-space spoke. By applying the splitting to the model (3), we obtain the observation model

$$\begin{aligned} \varvec{{g}}_t = \varvec{{G}}_t\varvec{{f}}_t + \varvec{{v}}_t, \end{aligned}$$
(4)

where

$$\begin{aligned} \varvec{g}_t = \begin{pmatrix} {\text {Re}}(\tilde{\varvec{z}}_{t}) \\ {\text {Im}}(\tilde{\varvec{z}}_{t}) \end{pmatrix} = \begin{pmatrix} \tilde{\varvec{z}}_{{\text {Re}},t} \\ \tilde{\varvec{z}}_{{\text {Im}},t} \end{pmatrix}, \quad \varvec{v}_t = \begin{pmatrix} {\text {Re}}(\tilde{\varvec{u}}_t) \\ {\text {Im}}(\tilde{\varvec{u}}_t) \end{pmatrix}, \quad \varvec{G}_t = \begin{pmatrix} \varvec{H}_t & 0 \\ 0 & \varvec{H}_t \end{pmatrix}, \end{aligned}$$

for a single spoke of radially sampled data. The rationale for using the Fourier slice theorem and carrying out the 1D Fourier transform of the data to obtain (3) is that the computations become more efficient, as the observation models for the real and imaginary parts of the data become independent. Furthermore, the observation matrix \(\varvec{H}_t\) is much sparser than \(\tilde{\varvec{G}}_t\) and is used for both the real and imaginary parts with no off-diagonal blocks. We will utilize the model (4) in the computation of the state estimates for the radial golden angle data.
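The Fourier slice theorem underlying (3) can be verified numerically in the axis-aligned case, where no interpolation is needed: the 1D FFT of a projection of an image equals the corresponding central line of its 2D FFT. The sketch below uses a random toy image; for arbitrary spoke angles an NUFFT or image-rotation step would be required.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.standard_normal((32, 32))   # toy real-valued image

# Projection (line integrals) along the vertical direction: sum over rows.
projection = img.sum(axis=0)

# Fourier slice theorem: the 1D FFT of this projection equals the k-space
# line through the origin perpendicular to the integration direction,
# here the zeroth row of the 2D FFT.
spoke = np.fft.fft2(img)[0, :]
slice_theorem_holds = np.allclose(np.fft.fft(projection), spoke)
```

The identity is exact for this axis-aligned spoke, which is why the Radon-type model (3) can replace the non-uniform Fourier model after the 1D transform of each spoke.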

2.2 Sliding Window

The principle of the sliding window (SW) method is to compute a reconstruction at time \(\ell \) using the present data together with the data from the \(n_{\text {SW}} - 1\) time points prior to the present point, where \(n_{\text {SW}}\) is the number of spokes in one SW frame, with some conventional reconstruction method such as inverse FFT, regridding or LSQR estimation [6, 32].

The sliding window estimate at time \(\ell \) can be defined in the least-squares sense as

$$\begin{aligned} \hat{\varvec{f}}^{(\ell )} = \arg \min _{\varvec{f}^{(\ell )}} \left\| \varvec{g}^{(\ell )} -\varvec{G}^{(\ell )} \varvec{f}^{(\ell )} \right\| ^2, \end{aligned}$$
(5)

where the concatenated data and forward operators are

$$\begin{aligned} \varvec{{g}}^{(\ell )} = \begin{pmatrix} \varvec{{g}}_{\ell } \\ \varvec{{g}}_{\ell -1} \\ \vdots \\ \varvec{{g}}_{\ell -n_{\text {SW}} + 1} \end{pmatrix}, \varvec{G}^{(\ell )} = \begin{pmatrix} \varvec{G}_{\ell } \\ \varvec{G}_{\ell -1} \\ \vdots \\ \varvec{G}_{\ell -n_{\text {SW}} + 1} \end{pmatrix}, \end{aligned}$$
(6)

and the \(\varvec{g}_k\) and \(\varvec{G}_k\) are as in equation (4) when employing the Fourier transformed spoke data, or as in (2) when employing raw k-space data. However, since problem (5) is underdetermined in this case, the least-squares solution is not unique. In this paper, we regularize the solution of equation (5) by using the iterative LSQR algorithm [29] with early truncation of the iterations. Note that given the time series of T spokes of data, the sliding window method produces estimates for time points \(t = n_{\text {SW}}, \ldots , T\).
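A minimal sketch of the sliding window estimate (5)–(6) with early LSQR truncation is given below, using SciPy's `lsqr` and random stand-in observation matrices (the actual Radon-type matrices of (4) are not reproduced here); the iteration limit plays the role of the regularization parameter.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n_pix, m_per_spoke, n_sw = 64, 10, 5   # toy sizes: n_sw spokes per window

# Stand-in observation matrices and data for one sliding window frame.
G_spokes = [rng.standard_normal((m_per_spoke, n_pix)) for _ in range(n_sw)]
f_true = rng.standard_normal(n_pix)
g_spokes = [G @ f_true + 0.01 * rng.standard_normal(m_per_spoke) for G in G_spokes]

# Concatenate the window as in eq. (6) and solve the least-squares problem (5).
G_win = np.vstack(G_spokes)            # 50 x 64, i.e., underdetermined
g_win = np.concatenate(g_spokes)

# Early truncation of the LSQR iterations acts as regularization.
f_hat = lsqr(G_win, g_win, iter_lim=20)[0]
```

Here 50 equations constrain 64 unknowns, so the truncated LSQR iterate picks a regularized solution rather than an exact one.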

2.3 State Estimation Representation

The state estimation model for the MRI problem is given by the pair of equations

$$\begin{aligned} \varvec{g}_t&= \varvec{G}_t \varvec{f}_t + \varvec{v}_t, \end{aligned}$$
(7a)
$$\begin{aligned} \varvec{f}_t&= \varvec{F}_{t-1}\varvec{f}_{t-1} + \varvec{w}_{t-1} \end{aligned}$$
(7b)

where (7a) is the observation model, given by equation (2) when using raw k-space data or (4) when using the Radon transform formalism. Equation (7b) is the state evolution model where

$$\begin{aligned} \varvec{F}_{t-1}&= \begin{pmatrix} {\text {Re}} (\tilde{\varvec{F}}_{t-1}) & -{\text {Im}} (\tilde{\varvec{F}}_{t-1}) \\ {\text {Im}} (\tilde{\varvec{F}}_{t-1}) & {\text {Re}} (\tilde{\varvec{F}}_{t-1}) \end{pmatrix}, \\ \varvec{w}_{t-1}&= \begin{pmatrix} {\text {Re}}(\tilde{\varvec{w}}_{t-1}) \\ {\text {Im}}(\tilde{\varvec{w}}_{t-1}) \end{pmatrix}, \end{aligned}$$

and \(\tilde{\varvec{F}}_{t-1} \in \mathbb {C}^{{N_\mathrm{pix}}\times {N_\mathrm{pix}}}\) is the complex-valued state transition matrix, and \(\tilde{\varvec{w}}_{t-1}\) is the process noise.

The Kalman filter (KF) is a linear recursive state estimation algorithm used to estimate the states of a time-varying system of the form (7). Under the standard assumptions that \(\varvec{v}_t\) and \(\varvec{w}_t\) in (7a)–(7b) are Gaussian zero-mean noise processes with known covariances, mutually independent in the sense that \(\varvec{v}_m \perp \varvec{v}_j\), \(\varvec{w}_m \perp \varvec{w}_j\) for \(m \ne j\) and \(\varvec{w}_m \perp \varvec{v}_m\), the Kalman filter produces the minimum mean-square-error estimate of \(\varvec{f}_t\) based on the measurements \(\varvec{g}_1, \ldots , \varvec{g}_t\). The derivation of the Kalman filter can be found in [35]. In the statistical setup, the KF produces the means and covariances of the time evolution and observation updating probability densities for a Gaussian Bayes filtering problem, see [19, Chapter 4]. The standard KF equations are given by

$$\begin{aligned} \hat{\varvec{f}}_{0}^+&= E(\varvec{f}_0) \nonumber \\ \varvec{P}_{0}^+&= E[(\varvec{f}_0-\hat{\varvec{f}}_{0}^+)(\varvec{f}_0-\hat{\varvec{f}}_{0}^+)^T] \nonumber \\ \hat{\varvec{f}}_t^-&= \varvec{F}_{t-1}\hat{\varvec{f}}_{t-1}^+ \end{aligned}$$
(8a)
$$\begin{aligned} \varvec{P}_t^-&=\varvec{F}_{t-1}\varvec{P}_{t-1}^+\varvec{F}_{t-1}^T+\varvec{Q}_{t-1} \end{aligned}$$
(8b)
$$\begin{aligned} \varvec{K}_t&=\varvec{P}_t^-\varvec{G}_t^T(\varvec{G}_t\varvec{P}_t^-\varvec{G}_t^T+\varvec{R}_t)^{-1} \end{aligned}$$
(8c)
$$\begin{aligned} \hat{\varvec{f}}_t^+&=\hat{\varvec{f}}_t^-+\varvec{K}_t(\varvec{g}_t-\varvec{G}_t\hat{\varvec{f}}_t^-) \end{aligned}$$
(8d)
$$\begin{aligned} \varvec{P}_t^+&=(\varvec{I}-\varvec{K}_t\varvec{G}_t)\varvec{P}_t^-, \end{aligned}$$
(8e)

where \(\hat{\varvec{f}}_t^-\) is the a priori estimate, \(\varvec{P}_t^-\) is the a priori error covariance of the estimate, \(\varvec{Q}_t\) is the covariance of the process noise, \(\hat{\varvec{f}}_t^+\) is the a posteriori estimate, \(\varvec{P}_t^+\) is the a posteriori error covariance of the estimate, \(\varvec{K}_t\) is the Kalman gain, and \(\varvec{R}_t\) is the covariance of the observation noise. The estimation error covariances \(\varvec{P}_t^-\) and \(\varvec{P}_t^+\) represent the uncertainty in the a priori and a posteriori estimates given the model (7) with their diagonals giving the error variances of the estimates [35]. The a posteriori variances can be used to estimate confidence intervals for the Kalman filter state estimates. We remark that these confidence estimates reflect the uncertainty with respect to the state estimation model used.

Since the observation noise in MRI is thermal noise, \(\varvec{v}_t\) can be modeled as a zero-mean white Gaussian process with covariance \(\varvec{R}_t\) [36]. Furthermore, we make an approximation that the variance of the observation noise is stationary and the same for all k-space samples. This leads to a covariance matrix of the form \(\varvec{R}_t=\sigma _v^2 \varvec{I}\) \(\forall \) t.

The state evolution model can be used to incorporate temporal prior information and models about the unknowns into the image reconstruction problem. In this study, we employ a simple random walk formulation, where \(\varvec{F}_{t-1} = \varvec{I}\) \(\forall \) t. The process noise \(\varvec{w}_t\) is modeled as a stationary zero-mean Gaussian process with diagonal covariance \(\varvec{Q}_t = \text {diag}\left( q_1, \ldots , q_{2{N_\mathrm{pix}}}\right) \).
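The recursion (8a)–(8e) for the random walk model (F = I) can be condensed into a few lines of Python. The example below is a toy demonstration with random stand-in observation matrices and sizes, not the implementation used in this work.

```python
import numpy as np

def kalman_filter(g_seq, G_seq, Q, R, f0, P0):
    """Kalman filter (8a)-(8e) for a random-walk state model (F = I).

    g_seq, G_seq : lists of data vectors and observation matrices per time step
    Q, R         : process / observation noise covariances
    f0, P0       : initial state estimate and its covariance
    """
    f, P = f0.copy(), P0.copy()
    estimates = []
    for g, G in zip(g_seq, G_seq):
        # Prediction (8a)-(8b); with F = I the prior mean equals the previous posterior.
        P = P + Q
        # Update (8c)-(8e).
        S = G @ P @ G.T + R                      # innovation covariance
        K = P @ G.T @ np.linalg.inv(S)           # Kalman gain
        f = f + K @ (g - G @ f)
        P = (np.eye(len(f)) - K @ G) @ P
        estimates.append(f.copy())
    return estimates, P

# Toy demo: estimate a static 8-pixel "image" from noisy partial observations.
rng = np.random.default_rng(3)
n = 8
f_true = rng.standard_normal(n)
G_seq = [rng.standard_normal((3, n)) for _ in range(200)]
g_seq = [G @ f_true + 0.05 * rng.standard_normal(3) for G in G_seq]
est, P_final = kalman_filter(g_seq, G_seq,
                             Q=1e-4 * np.eye(n), R=0.05**2 * np.eye(3),
                             f0=np.zeros(n), P0=np.eye(n))
err_first = np.linalg.norm(est[0] - f_true)
err_last = np.linalg.norm(est[-1] - f_true)
```

Each update uses only a single low-dimensional observation, and the estimate improves as more spokes accumulate, which is the mechanism exploited for the spoke-by-spoke updates in this work.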

The covariance \(\varvec{Q}_t\) reflects the degree of uncertainty in the state evolution model and can be interpreted as a weighting between the past estimates and the current measurement. Large values of \(\varvec{Q}_t\) imply large uncertainty, giving more weight to the current measurement, while small values of \(\varvec{Q}_t\) imply small uncertainty in the state evolution model, giving high weight to the past estimates and the predictions of the state transition matrix. Small covariance values can lead to a temporally and spatially smoother estimate of the time series, but if the values are too small and the true evolution is not fully described by the state transition matrix, the estimate can lag behind. Very large values of \(\varvec{Q}_t\), on the other hand, imply low confidence in the evolution model, leading to low correlation between the states and potentially highly noisy estimates.

The covariance \(\varvec{Q}_t\) is usually tuned by the visual assessment of the KF estimates based on pilot runs of the Kalman filter through the time series of k-space data. In the next subsection, we describe a simple data-driven construction of \(\varvec{Q}_t\) based on time series of sliding window estimates.

2.3.1 A Data-Driven Estimate for the Process Noise Covariance Matrix

A data-driven estimate for the variances of the process noise \(q_1, \ldots , q_{2{N_\mathrm{pix}}}\) is constructed by using the sliding window estimates of the data. First, a baseline image \(\hat{\varvec{f}}_{\text {base}}\) is calculated as the mean of the first \(N_{\text {base}}\) SW estimates

$$\begin{aligned} \hat{\varvec{f}}_{\text {base}} = \frac{\sum _{\ell = 1}^{N_{\text {base}}} \hat{\varvec{f}}_{\text {SW}}^{(\ell )}}{N_{\text {base}}}, \end{aligned}$$
(9)

where \(\hat{\varvec{f}}_{\text {SW}}^{(\ell )}\) is the sliding window estimate obtained by minimizing (5) at time \(\ell \). The number \(N_{\text {base}}\) of sliding window frames is selected such that all the measurement data that is used in the computation of \(\hat{\varvec{f}}_{\text {base}}\) comes from an initial baseline measurement before any activation or administration of the contrast agent.

Next, to obtain an estimate for the deviations of the unknowns with respect to the baseline at each time instant \(\ell \), we compute for each sliding window estimate its deviation from the baseline as

$$\begin{aligned} \varvec{b}_{\ell } = \frac{1}{2}\left( \hat{\varvec{f}}_{\text {base}} - \hat{\varvec{f}}_{\text {SW}}^{(\ell )} \right) ^2. \end{aligned}$$
(10)

Note that since we are operating with the split real-valued formalism, \(\varvec{b}_{\ell }\) is a real-valued vector of length \(2{N_\mathrm{pix}}\). As the data-driven estimates of the state evolution variances, we take the maximum value of the deviations for each element over the time series of the SW estimates. Formally, this can be obtained by defining the matrix \(\varvec{B} = (\varvec{b}_{n_{\text {SW}}}, \varvec{b}_{{n_{\text {SW}}}+1},\ldots ,\varvec{b}_{T})\) and then setting the variances as

$$\begin{aligned} \begin{pmatrix} \zeta _1 \\ \vdots \\ \zeta _{2{N_\mathrm{pix}}} \end{pmatrix} = \begin{pmatrix} {\text {max}}(\varvec{B}(1,:)) \\ \vdots \\ {\text {max}}(\varvec{B}(2{N_\mathrm{pix}},:)) \end{pmatrix}, \end{aligned}$$
(11)

where \(\varvec{B}(j,:)\) denotes the j-th row of \(\varvec{B}\).

In time-varying MRI studies, it is common practice to measure a separate, high accuracy anatomical reference image before and/or after the dynamic experiment. To utilize the reference image in the construction of \(\varvec{Q}_t\), we use the anatomical image to segment the image area into tissue pixels, where changes due to the employed contrast mechanism can occur, and exterior pixels, where the changes are due to noise only. This segmentation is used to define a logical mask with value one indicating a tissue pixel and zero an exterior pixel. The mask is then used to set the state evolution variances of the exterior pixels to a small fixed value \(\epsilon \), formally as

$$\begin{aligned} \begin{pmatrix} q_1 \\ \vdots \\ q_{2{N_\mathrm{pix}}} \end{pmatrix} = \begin{pmatrix} \zeta _1 \\ \vdots \\ \zeta _{2{N_\mathrm{pix}}} \end{pmatrix} \circ \text {mask} + \epsilon \cdot \lnot (\text {mask}), \end{aligned}$$
(12)

where \(\circ \) signifies the Schur (element-wise) product, \(\lnot \) the Boolean NOT operator, and \(\text {mask}\) is the logical masking vector for the image domain. This masking step sets the evolution variances of the exterior pixels to much lower values than those of the tissue pixels, as little to no change is expected in the background. We remark that if no anatomical image is available, one could use the baseline part of the dynamic data to reconstruct an image for the construction of the tissue mask.
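Steps (9)–(12) can be condensed into a short sketch; the array shapes, function name, and the toy data below are illustrative only.

```python
import numpy as np

def data_driven_Q(f_sw, n_base, mask, eps=1e-8):
    """Data-driven process-noise variances, eqs. (9)-(12).

    f_sw   : (n_frames, 2*N_pix) array of sliding window estimates (split real/imag)
    n_base : number of baseline frames
    mask   : boolean (2*N_pix,) vector, True for tissue pixels
    """
    f_base = f_sw[:n_base].mean(axis=0)       # baseline image, eq. (9)
    b = 0.5 * (f_base - f_sw) ** 2            # deviations from baseline, eq. (10)
    zeta = b.max(axis=0)                      # per-pixel maxima, eq. (11)
    return np.where(mask, zeta, eps)          # background masking, eq. (12)

# Toy demo: 2 "tissue" pixels with a signal change, 2 static background pixels.
rng = np.random.default_rng(4)
f_sw = np.zeros((50, 4))
f_sw[20:, :2] = 1.0                           # step change in the tissue pixels
f_sw += 0.01 * rng.standard_normal(f_sw.shape)
mask = np.array([True, True, False, False])
q = data_driven_Q(f_sw, n_base=10, mask=mask)
```

The tissue pixels receive variances on the order of the largest observed squared deviation, while the background pixels are fixed to the small value \(\epsilon \).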

2.3.2 Estimation of Observation Noise Covariance

An estimate for the observation noise variance \(\sigma _v^2\) was obtained by using an empty scan in which only noise was measured, i.e., no object of interest was scanned. From the data, the covariance \(\varvec{R}_j\) was estimated for k-space spoke j as the sample covariance

$$\begin{aligned} \varvec{R}_{j} = {\text {diag}} \left( \frac{1}{N_\mathrm{rep}- 1}\sum _{\ell =1}^{N_\mathrm{rep}}(\varvec{g}_{j}^{(\ell )} - \bar{\varvec{g}}_j)(\varvec{g}_{j}^{(\ell )} - \bar{\varvec{g}}_j)^T\right) , \end{aligned}$$
(13)

where \(\bar{\varvec{g}}_j = \frac{1}{N_\mathrm{rep}}\sum _{\ell =1}^{N_\mathrm{rep}} \varvec{g}_{j}^{(\ell )}\) and \(N_\mathrm{rep}\) is the number of repetitions of spoke j in the empty scan measurement set. Note that while the golden angle acquisition in principle never measures the same spoke twice, practical implementations often repeat a predefined cycle of GA spokes with some period. In our implementation, one cycle consisted of \(n=610\) golden angle spokes before repeating the same set of spokes.

In principle, one can use the covariance matrix obtained in (13) as is. However, the number \(N_\mathrm{rep}\) of repetitions of the same spoke may be too small for the estimation of the full covariance matrices \(\varvec{R}_j \in \mathbb {R}^{2M \times 2M}\). In such a case, one can resort to a diagonal approximation. In this study, we employed a simple approximation using the same variance for all data points

$$\begin{aligned} \varvec{R}_t \equiv \sigma _v^2 \varvec{I} \end{aligned}$$

where the variance \(\sigma _v^2\) was computed as the average of the individual variances by

$$\begin{aligned} \sigma _v^2 = \frac{1}{n} \sum _{j=1}^n \frac{1}{2M} \mathrm{trace}(\varvec{R}_j). \end{aligned}$$
(14)
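A sketch of the variance estimate (13)–(14) from repeated empty-scan spokes is given below; since only the diagonal of each \(\varvec{R}_j\) enters (14), the per-sample variances over the repetitions can be computed directly. The function name and array layout are illustrative.

```python
import numpy as np

def observation_noise_variance(empty_scans):
    """Estimate sigma_v^2 via eqs. (13)-(14).

    empty_scans : (n_spokes, N_rep, 2M) array of repeated noise-only spokes.
    The sample variance over the N_rep repetitions gives the diagonal of each
    R_j (eq. 13); averaging over samples and spokes implements eq. (14).
    """
    return empty_scans.var(axis=1, ddof=1).mean()

# Toy demo: Gaussian noise with true standard deviation 0.5 (variance 0.25).
rng = np.random.default_rng(5)
sigma_hat_sq = observation_noise_variance(0.5 * rng.standard_normal((10, 100, 32)))
```

With enough repetitions the estimate concentrates around the true noise variance, here 0.25.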

2.3.3 Kalman Filter Consistency

The covariances of the state evolution and observation noise processes reflect our uncertainty in the state evolution and observation models. Often their realizations are unknown, or their data-based estimates may be only approximations of the true realizations, leading to suboptimal performance of the Kalman filter. In such cases, consistency checks can be helpful for refining the covariances such that the Kalman filter produces optimal estimates. These consistency checks include the normalized innovation squared (NIS) test (sometimes called the normalized error square, NES, test) and the autocorrelation (whiteness) test, both based on the innovation process

$$\begin{aligned} \varvec{\nu }_t = \varvec{g}_t-\varvec{G}_t\hat{\varvec{f}}_t^-. \end{aligned}$$
(15)

The criteria for filter consistency are that the innovation must be approximately zero-mean and white, with magnitude commensurate with the innovation covariance, defined as

$$\begin{aligned} \varvec{E}_t = \varvec{G}_t\varvec{P}_t^{-}\varvec{G}_t^T + \varvec{R}_t. \end{aligned}$$
(16)

The zero-mean property can be checked with the NIS test and whiteness with the autocorrelation test. An initial check for covariance parameter tuning is to seek the minimum value of the mean of the innovation [1]. Sometimes checking the minimum of the innovation mean may already be enough for optimal filter performance, but if further consistency tests are required, the NIS is defined as

$$\begin{aligned} \varvec{\epsilon }_t = \varvec{\nu }_t^T\varvec{E}_t^{-1}\varvec{\nu }_t. \end{aligned}$$
(17)

The NIS should fulfill the condition \(E(\epsilon _{t}) = 2M\). Since this value will most likely not be achieved exactly, we can instead require the time-averaged NIS value to lie within a confidence interval, usually a 95% confidence interval. The time-averaged NIS is defined as

$$\begin{aligned} \hat{\varvec{\epsilon }}_t = \frac{1}{L}\sum _{t = 1}^L\varvec{\nu }_t^T\varvec{E}_t^{-1} \varvec{\nu }_t, \end{aligned}$$
(18)

where L is the number of time points included. If \( \varvec{\nu }_t\) is a zero-mean white noise process, then \(L\hat{\varvec{\epsilon }}_t\) has a chi-square distribution with \(2LM\) degrees of freedom [5]. The 95% confidence interval for a chi-square distribution with D degrees of freedom can be obtained from [5]

$$\begin{aligned} \chi _D^2(0.975)&= \frac{1}{2}\left( 1.96+\sqrt{2D-1}\right) ^2 \end{aligned}$$
(19a)
$$\begin{aligned} \chi _D^2(0.025)&= \frac{1}{2}\left( -1.96+\sqrt{2D-1}\right) ^2. \end{aligned}$$
(19b)

For the NIS, we want the value of (18) to be within the limits shown in (19).

The autocorrelation should be approximately zero; however, as with the NIS, this is most likely not achieved exactly, and we should instead require the time-averaged autocorrelation to lie within a certain confidence interval. The time-averaged autocorrelation for innovations \(\lambda \) steps apart is defined as

$$\begin{aligned} \varvec{\rho }_\lambda = \frac{1}{\sqrt{L}}\sum _{t=1}^{L} \varvec{\nu }_t^T\varvec{\nu }_{t+\lambda } \left( \sum _{t=1}^{L} \varvec{\nu }_t^T \varvec{\nu }_{t} \sum _{t=1}^{L} \varvec{\nu }_{t+\lambda }^T \varvec{\nu }_{t+\lambda }\right) ^{-\frac{1}{2}}. \end{aligned}$$
(20)

For 95% acceptance interval, we have [5]

$$\begin{aligned} \varvec{\rho }_\lambda \in \left[ -\frac{1.96}{\sqrt{L}}, \frac{1.96}{\sqrt{L}}\right] . \end{aligned}$$
(21)

For the optimal performance of the Kalman filter, NIS should be within the confidence interval (19) and autocorrelation values should be within (21), with autocorrelation values closest to zero indicating a more optimal filter.
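The NIS computation (17)–(19) can be sketched as follows. In the toy demonstration the innovations are drawn from their nominal covariance (here the identity), so the averaged NIS should typically fall inside the acceptance interval; the function names and sizes are illustrative.

```python
import numpy as np

def nis_confidence_interval(D):
    """95% acceptance interval for a chi-square variable with D dof, eq. (19)."""
    lo = 0.5 * (-1.96 + np.sqrt(2 * D - 1)) ** 2
    hi = 0.5 * ( 1.96 + np.sqrt(2 * D - 1)) ** 2
    return lo, hi

def time_average_nis(innovations, innovation_covs):
    """Time-averaged NIS, eq. (18)."""
    vals = [nu @ np.linalg.solve(E, nu) for nu, E in zip(innovations, innovation_covs)]
    return float(np.mean(vals))

# Toy demo: L innovations of dimension 2M = 6 drawn from their nominal
# covariance E = I; L * avg_nis is then chi-square with 6L degrees of freedom.
rng = np.random.default_rng(6)
L, dim = 400, 6
innovations = [rng.standard_normal(dim) for _ in range(L)]
avg_nis = time_average_nis(innovations, [np.eye(dim)] * L)
lo, hi = nis_confidence_interval(L * dim)
```

If the filter covariances are tuned too small or too large, `L * avg_nis` drifts above or below the interval `[lo, hi]`, which is the signal used for rescaling the process noise covariance.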

For more information on the consistency tests, see [1, 5].

2.4 Kalman Smoother

The Kalman smoother (KS) implemented in this work is the Rauch–Tung–Striebel (RTS) fixed-interval smoother [33]. The RTS smoother is defined with the following equations

$$\begin{aligned} \hat{\varvec{f}}_{fs,T}&= \hat{\varvec{f}}_{T}^+ \nonumber \\ \varvec{P}_{fs,T}&= \varvec{P}_{T}^+ \nonumber \\ \varvec{K}_{fs,t}&= \varvec{P}_t^+\varvec{F}_{t}^T(\varvec{P}_{t+1}^-)^{-1} \end{aligned}$$
(22a)
$$\begin{aligned} \hat{\varvec{f}}_{fs,t}&=\hat{\varvec{f}}_t^+ + \varvec{K}_{fs,t}(\hat{\varvec{f}}_{fs,t+1}-\hat{\varvec{f}}_{t+1}^+) \end{aligned}$$
(22b)
$$\begin{aligned} \varvec{P}_{fs,t}&= \varvec{P}_{t}^+ - \varvec{K}_{fs,t} \left( \varvec{P}_{t+1}^- - \varvec{P}_{fs,t+1} \right) \varvec{K}_{fs,t}^T, \end{aligned}$$
(22c)

where \(\varvec{K}_{fs,t}\) is the smoother gain, \(\varvec{P}_{fs,t}\) the smoothed estimate error covariance, and \(t=T-1,\ldots ,0\).

When calculating the KS estimate, the KF recursion (8a)–(8e) is first run through \(t=1,\ldots ,T\) (forward pass through the data) and the smoother is then calculated as a backward pass. This requires saving either the smoother gains (22a) or the estimation error covariances (8b) and (8e) during the forward pass. The smoother error covariance (22c) can be used to evaluate the improvement of the smoothed estimate over the KF estimate by comparing it with the a posteriori covariance (8e).

A steady-state version of the RTS smoother, which is used here, can be applied when the memory requirement for saving the smoother gains (22a) is too high. In the steady-state version, the a priori and a posteriori estimation error covariances \(\varvec{P}_t^-\) and \(\varvec{P}_t^+\) in (22a) are constant and obtained from equations (8b) and (8e) at the last step of the Kalman filter. Combined with the random walk evolution model with a fixed state evolution matrix \(\varvec{F}_t\), the steady-state method has a constant smoother gain, and only (22b) needs to be updated for each time point. Thus, the smoother equations become

$$\begin{aligned} \varvec{K}_{fs}&= \varvec{P}_{T-1}^+(\varvec{P}_{T}^-)^{-1} \end{aligned}$$
(23a)
$$\begin{aligned} \hat{\varvec{f}}_{fs,t}&=\hat{\varvec{f}}_t^+ + \varvec{K}_{fs}(\hat{\varvec{f}}_{fs,t+1}-\hat{\varvec{f}}_{t+1}^+) \end{aligned}$$
(23b)
$$\begin{aligned} \varvec{P}_{fs}&= \varvec{P}_{T-1}^+ - \varvec{K}_{fs} \left( \varvec{P}_T^- - \varvec{P}_{T-1}^+ \right) \varvec{K}_{fs}^T. \end{aligned}$$
(23c)
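
The backward pass (23a)–(23b) can be sketched as below (a minimal NumPy version under the random-walk model \(\varvec{F}_t = \varvec{I}\); the forward KF is assumed to have stored its a posteriori estimates and the steady-state covariances from its last step):

```python
import numpy as np

def steady_state_smoother(f_post, P_post_ss, P_prior_ss):
    """Steady-state RTS smoother, eqs. (23a)-(23b).

    f_post:     list of a posteriori KF estimates, t = 0..T-1
    P_post_ss:  steady-state a posteriori covariance (last KF step)
    P_prior_ss: steady-state a priori covariance (last KF step)
    """
    K = P_post_ss @ np.linalg.inv(P_prior_ss)      # constant gain, eq. (23a)
    f_smooth = list(f_post)
    f_smooth[-1] = f_post[-1]                      # initialization at the last step
    for t in range(len(f_post) - 2, -1, -1):       # backward pass, eq. (23b)
        f_smooth[t] = f_post[t] + K @ (f_smooth[t + 1] - f_post[t + 1])
    return f_smooth
```

Only one gain matrix is ever formed, which is exactly the memory saving that motivates the steady-state variant.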

3 Materials and Methods

3.1 Estimates

The following estimates are computed and compared in the results section:

  1. (SW)

Sliding window estimates using a frame of \(n_{\text {SW}} = 55\) spokes of data. The sliding window reconstructions are computed by the least-squares approach using data of the form (6). The sliding window estimates serve as a reference for a conventional filtering method of the golden angle data and are also used for the data-driven estimation of the state process noise covariance for the state estimation approaches.

  2. (KF)

Kalman filter (8a)–(8e), where the covariance of the process noise has been estimated using the sliding window estimates. The observation equation is given by (7).

  3. (KS)

    Steady-state Kalman smoother (23a)–(23c) applied to the results of the KF.

As conventional in MRI, all the figures and tables are based on the magnitude \(\sqrt{{\text {Re}}(\hat{\varvec{f}})^2 + {\text {Im}}(\hat{\varvec{f}})^2}\) of the reconstructed images.

While the golden angle acquisition in principle produces a unique set of spokes for the entire scan, we employed an experimental (hardware-motivated) approximation which measures 610 golden angle spokes before repeating the same cycle of spokes again. The 611th spoke would have had an angle of approximately \(0.06^{\circ }\), but was instead set to \(0^{\circ }\), restarting the cycle from the beginning. This method allows us to simplify both the acquisition and the reconstruction process while still retaining the advantages of the golden angle acquisition.
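
For illustration, this cyclic approximation of the golden angle scheme can be generated as below (a sketch; the reduction of the angles modulo \(180^{\circ }\) is our assumption, consistent with the quoted \(0.06^{\circ }\) value for the 611th spoke, since a radial spoke samples a full line through k-space):

```python
import numpy as np

GOLDEN_ANGLE = 111.246  # degrees, angular increment used in this work

def cyclic_golden_angles(T, cycle=610):
    """Spoke angles (degrees) for the cyclic GA approximation: after `cycle`
    spokes the sequence restarts from 0 degrees. Angles are reduced modulo
    180 degrees (assumption: a spoke and its opposite sample the same line)."""
    idx = np.arange(T) % cycle            # restart the index every `cycle` spokes
    return (idx * GOLDEN_ANGLE) % 180.0
```

Without the restart, spoke index 610 would land at \((610 \cdot 111.246) \bmod 180 \approx 0.06^{\circ }\); with it, the angle is exactly \(0^{\circ }\).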

In this study, all the reconstructions are computed using the Radon transform formalism in (4). Thus, before any of the estimates were computed, every spoke of the raw k-space measurement data was 1D Fourier transformed and then split into its real and imaginary parts to arrive at the observation model of the form (4). This procedure was applied in both the experimental test case and the simulated test case, where the raw measurement data was simulated by the non-uniform FFT [11]. Before the 1D Fourier transform, the data spokes were zero-padded to twice the original size per spoke (from \(M = 128\) to \(M = 256\)), because estimates computed without this interpolation of the data produced a “hot spot” artifact in the center of the image in the case of the Kalman estimates, or were of much lower quality in the case of the SW estimates. This artifact was caused by the coarse sampling of the data in the Radon transform domain. Using the raw k-space data in the filtering (observation model (2)) would remove the need for the zero padding, but it is computationally much more demanding. Furthermore, as we are applying a linear data transform (justified by the Fourier slice theorem) and a standard interpolation technique to the data, the quality of the estimates is not affected, and as such the usage of the raw k-space data (observation model (2)) or the Radon transform formalism (observation model (4)) leads to equal image quality.
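
The per-spoke preprocessing described above can be sketched as follows (the centering/fftshift conventions below are assumptions and depend on the acquisition; this is a sketch, not the implementation used in the paper):

```python
import numpy as np

def spoke_to_radon_observation(kspace_spoke, pad_factor=2):
    """Zero-pad one raw k-space spoke (length M) to pad_factor*M, apply a 1D
    inverse FFT (Fourier slice theorem) to obtain the sinc-interpolated Radon
    projection, and split the result into its real and imaginary parts."""
    M = kspace_spoke.shape[0]
    padded = np.zeros(pad_factor * M, dtype=complex)
    start = (pad_factor * M - M) // 2
    padded[start:start + M] = kspace_spoke          # symmetric zero-padding
    # Assume the DC sample sits at the center of the spoke; shift accordingly.
    proj = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(padded)))
    return np.concatenate([proj.real, proj.imag])
```

Zero-padding in k-space corresponds to sinc interpolation of the projection, which removes the coarse-sampling "hot spot" artifact mentioned above.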

For the sliding window estimates, we used \({n_{\text {SW}}}= 55\) spokes of data for each estimate, as this provides a good compromise between spatial and temporal resolution and is a Fibonacci number, which allows the most uniform coverage of k-space in GA sampling [47]. The Kalman estimates produce a time series of T images and the sliding window a time series of \(T - n_{\text {SW}} + 1\) images. The first sliding window estimate is made after the first 55 spokes have been collected, making the SW time series 54 points shorter than the Kalman estimates.

The observation matrix \(\varvec{H}_t\) in (4) was constructed by using the Astra toolbox [39].

3.2 Simulated Data

A \(128 \times 128\) anatomical image of a rat brain, obtained with a 9.4 T Agilent MRI system using a gradient-echo pulse sequence, was used for the construction of ground truth images for the fMRI simulation. The baseline target, which is shown in Fig. 2, was a denoised version of the original image. A region of interest (ROI), highlighted in red in the left image of Fig. 2, was selected as an activation area for the fMRI simulation. To generate a time series of ground truth states, a simulated activation function was added to each of the ROI pixels. The mean of the image magnitude of the ROI pixels for the time series of 3050 simulated ground truth states is shown in the right image in Fig. 2. In each of the ROI pixels, the activation was scaled such that its peak value was 110% of the maximum value of the original image.

The golden angle trajectory for the simulation of the raw k-space measurement data was adopted from the experimental test case, which is explained in Sect. 3.3. This GA trajectory consists of a cycle of \(n=610\) golden angle spokes with an angular separation of \(111.246^{\circ }\); in the simulated test case this cycle was repeated \(N_\mathrm{rep}=5\) times, leading to a time series of \(T=3050\) spokes of simulated golden angle fMRI data. The raw k-space data was simulated with the non-uniform FFT package [11]. Gaussian random noise with a standard deviation (STD) of 0.025 was added to the simulated raw k-space data.

In the computations, the original image of the rat brain (i.e., the image before denoising) was used as the anatomical reference image, which was segmented into sets of pixels inside and outside of tissues for the construction of the mask operation in equation (12).

Fig. 2

Left: The rat brain baseline image for fMRI simulation. The ROI is highlighted in red. Right: Mean signal intensity change of the ROI pixels as a function of time

3.3 Experimental DCE-MRI Data from a Small Animal Specimen

3.3.1 Animal Preparation

All animal experiments were approved by the Animal Health Welfare and Ethics Committee of the University of Eastern Finland. \(1\times 10^6\) C6 (ECACC 92090409) rat glioma cells (Sigma) were implanted into the brain of a 200 g female Wistar rat under ketamine/medetomidine hydrochloride anesthesia. Tumor imaging was performed 10 days post-implantation. During the experiments, the animal was anesthetized with isoflurane (5% induction, 1–2% maintenance) and kept in a fixed position in a holder inserted into the magnet. A needle was placed into the tail vein of the animal for the injection of the contrast agent.

3.3.2 Acquisition of the Data

All MR data were collected using a 9.4 T horizontal magnet interfaced to an Agilent imaging console and a volume coil transmit/quadrature surface coil receive pair (Rapid Biomed, Rimpar, Germany). For the time-varying data, a gradient-echo-based radial pulse sequence was used with repetition time 38.5 ms, echo time 9 ms, flip angle \(30^{\circ }\), field of view \(32\times 32\) mm, slice thickness 1.5 mm and \(M = 128\) points (measurements) per spoke. \(n = 610\) spokes were collected in sequential order with a golden angle interval of \(111.246^{\circ }\) before repeating the same cycle of spokes 25 times, leading to an overall measurement sequence of \(T= 15250\) spokes of data. The measurement time for a full cycle of 610 spokes was \(610 \cdot 38.5\) ms \(\approx 23.49\) s. Gadovist (Gadobutrol, 1 mmol/kg) was injected intravenously (IV) over a period of 3 s, one minute after the beginning of the dynamic scan.

Anatomical reference images were acquired from the same slice before and after the dynamic experiment using a gradient-echo pulse sequence with otherwise similar parameters as in the dynamic sequence, but using Cartesian sampling of \(128\times 128\) points of k-space data. The anatomical image acquired before the experiment is shown in Fig. 3. This image was segmented into sets of pixels inside and outside the tissues to produce the mask operation for equation (12).

For the analysis of the results, a region of interest (ROI) was selected: 20 ROI pixels were chosen such that the area is inside the glioma in the brain. These ROI pixels are highlighted in red in Fig. 3. Furthermore, two pixels were selected for individual examination and are shown with white “+” marks in Fig. 3. The location of the lower “+” mark is inside the glioma and the upper mark is at the location of the main artery in the cortex.

Fig. 3

Anatomical prior image used with experimental data, the ROI (red pixels) and the two pixels chosen for individual examination (white +)

3.4 Computations

The sliding window estimates were computed with the LSQR algorithm based on the approach of (5)–(6) using \({n_{\text {SW}}}= 55\) spokes for each estimate, leading to a time series of \(T-{n_{\text {SW}}}+1 = T-54\) images. The LSQR estimates were regularized by truncating the iteration, with a total of 15 iterations.
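
A minimal sketch of this sliding window computation with truncated LSQR (using SciPy for illustration; the per-spoke observation matrices and data vectors are assumed given, and dense matrices are used only for clarity):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def sliding_window_estimates(H_spokes, z_spokes, n_sw=55, n_iter=15):
    """Sliding-window least-squares estimates: for each window of n_sw
    consecutive spokes, stack the observation matrices and data (cf. (5)-(6))
    and solve with LSQR truncated at n_iter iterations."""
    T = len(H_spokes)
    estimates = []
    for t0 in range(T - n_sw + 1):
        H = np.vstack(H_spokes[t0:t0 + n_sw])       # stacked observation model
        z = np.concatenate(z_spokes[t0:t0 + n_sw])  # stacked data
        estimates.append(lsqr(H, z, iter_lim=n_iter)[0])
    return estimates
```

Truncating the iteration count acts as the regularizer, so no explicit penalty term is added to the least-squares problem.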

For the data-driven estimation of the process noise covariance for the state estimation, the mean of the baseline in (9) was computed based on the first \(N_{\text {base}} = 610\) SW estimates. These estimates covered the first full cycle of k-space spokes in our GA scheme and were taken from the initial baseline measurement before any activation or contrast agent administration. The logical mask in (12) for both the simulated and the experimental test case was obtained from the anatomical reference image by simple thresholding. Figure 4 shows the logical mask for the experimental case. The value of \(\epsilon \) in (12) was selected as the square of the minimum of the original process noise variance values, that is, \(\epsilon = \min {(\varvec{\zeta })}^2\).
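
The masking and floor step described here can be sketched as follows (the variance estimation itself, eqs. (10)–(11), precedes this step and is assumed already done; replacing out-of-tissue values by \(\epsilon \) is one plausible reading of the mask operation (12)):

```python
import numpy as np

def apply_tissue_mask(zeta, tissue_mask):
    """Sketch of eq. (12): keep the data-driven process noise variances zeta
    inside the thresholded tissue mask and replace values outside it with the
    small constant eps = min(zeta)**2, as selected in the text."""
    eps = np.min(zeta) ** 2
    return np.where(tissue_mask, zeta, eps)
```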

Fig. 4

The logical mask used in the experimental case

The observation noise covariance was of the form \(\varvec{R} = \sigma _v^2 \varvec{I}\), where the variance \(\sigma _v^2\) was estimated by (13) and (14). This equal-variance form was used since the differences between the variances estimated in (13) for different spokes or measurements were negligible.

As we were using the observation model (4), we had a block diagonal forward matrix and a block diagonal state evolution model with \(\varvec{F}_t = I\). Furthermore, we assumed that the real and imaginary parts had no cross-correlations, thus making (8b) and (8e) block diagonal as well.

When using equations (10) and (11), we obtain estimates of the process noise variance values for the real and imaginary parts separately. As a last step, we summed the real and imaginary parts of the process noise variances together to yield a combined covariance approximation that is common to both the real and imaginary parts. This (dilated) covariance approximation allows pixels that had large estimated data-driven variances in the imaginary part to also have larger process noise uncertainty in the real part of the evolution model, and vice versa. The state covariance matrix thus becomes

$$\begin{aligned} \varvec{\Psi } = {\text {diag}}\begin{pmatrix} q_1\\ \vdots \\ q_{{N_\mathrm{pix}}} \end{pmatrix} + {\text {diag}}\begin{pmatrix} q_{{N_\mathrm{pix}}+ 1} \\ \vdots \\ q_{2{N_\mathrm{pix}}} \end{pmatrix}. \end{aligned}$$
(24)

This approximation further allowed (8b) to be identical for both the real and imaginary parts, decreasing the computational cost.
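
Forming the combined diagonal covariance of (24) is a one-line operation; a sketch, assuming the variance vector q stacks the real-part values before the imaginary-part ones:

```python
import numpy as np

def combined_covariance(q):
    """Eq. (24): sum the separately estimated real- and imaginary-part process
    noise variances into one diagonal covariance shared by both parts."""
    n_pix = q.shape[0] // 2
    return np.diag(q[:n_pix] + q[n_pix:])
```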

Regarding the estimation of the process noise variances, we remark that, since the entire time series of sliding window estimates was used with (10) and (11), the temporal variations caused by the contrast agent injection were included in the estimation. If only a subset of the sliding window estimates is used for the estimation of the variances, care should be taken to ensure that the temporal variations are representative for the time steps in question. An alternative approach to the one used in this paper would be to estimate a time series of process noise covariances by computing them in a fixed-interval or fixed-lag manner, similar to the KS presented in this paper.

Since the process noise covariance, and thus (8b), and the observation noise variance were the same for the real and imaginary parts, and we had the block diagonal forms, we were able to compute (8c) and (8e) such that they are identical for both the real and imaginary parts. In the case of \(\varvec{P}^-_t\), this means that \(\varvec{\Gamma }^{-}_t = \varvec{P}^-_t (1:{N_\mathrm{pix}},1:{N_\mathrm{pix}})\), with \(\varvec{P}^+_{t}\) treated similarly. The Kalman gain \(\varvec{K}_t\) becomes \( \varvec{\Pi }_t = \varvec{K}_t (1:{N_\mathrm{pix}},1:M)\). This further allowed us to compute the KF estimates (8d) separately and independently for the real and imaginary parts.

Finally, the filter formulas (8a)–(8e) become

$$\begin{aligned} \varvec{\Gamma }_t^-&= \varvec{\Gamma }_{t-1}^++ \varvec{\Psi } \end{aligned}$$
(25a)
$$\begin{aligned} \varvec{\Pi }_t&= \varvec{\Gamma }_t^-\varvec{H}_t^T(\varvec{H}_t \varvec{\Gamma }_t^-\varvec{H}_t^T+\sigma _v^2\varvec{I})^{-1} \end{aligned}$$
(25b)
$$\begin{aligned} \hat{\varvec{f}}_{{\text {Re}},t}^+&= \hat{\varvec{f}}_{{\text {Re}},t-1}^+ + \varvec{\Pi }_t\left( \tilde{\varvec{z}}_{{\text {Re}},t} - \varvec{H}_t\hat{\varvec{f}}_{{\text {Re}}, t-1}^+\right) \end{aligned}$$
(25c)
$$\begin{aligned} \hat{\varvec{f}}_{{\text {Im}},t}^+&= \hat{\varvec{f}}_{{\text {Im}},t-1}^+ + \varvec{\Pi }_t\left( \tilde{\varvec{z}}_{{\text {Im}},t} - \varvec{H}_t\hat{\varvec{f}}_{{\text {Im}},t-1}^+\right) \end{aligned}$$
(25d)
$$\begin{aligned} \varvec{\Gamma }_t^+&=(\varvec{I}- \varvec{\Pi }_t\varvec{H}_t) \varvec{\Gamma }_t^-. \end{aligned}$$
(25e)

This same procedure also applies to the Kalman smoother in (23a)–(23c). As the initial value \(\hat{\varvec{f}}^+_0\) for the KF, we used an LS estimate (5) computed using the first 610 spokes of the data.
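
The reduced recursion (25a)–(25e) can be sketched as below: one shared covariance and gain drive two cheap state updates, one for the real and one for the imaginary part (dense NumPy for clarity only; the actual implementation in this work runs on the GPU via ArrayFire):

```python
import numpy as np

def kf_shared_gain(H_spokes, z_re, z_im, Psi, sigma_v2, f_re, f_im, Gamma):
    """Kalman filter in the reduced form (25a)-(25e): random-walk evolution
    model with identical covariance and gain for the real and imaginary parts."""
    est_re, est_im = [], []
    for Ht, zr, zi in zip(H_spokes, z_re, z_im):
        Gamma = Gamma + Psi                                     # (25a)
        S = Ht @ Gamma @ Ht.T + sigma_v2 * np.eye(Ht.shape[0])
        Pi = Gamma @ Ht.T @ np.linalg.inv(S)                    # (25b)
        f_re = f_re + Pi @ (zr - Ht @ f_re)                     # (25c)
        f_im = f_im + Pi @ (zi - Ht @ f_im)                     # (25d)
        Gamma = (np.eye(len(f_re)) - Pi @ Ht) @ Gamma           # (25e)
        est_re.append(f_re.copy())
        est_im.append(f_im.copy())
    return est_re, est_im, Gamma
```

Note that the covariance and gain computations, which dominate the cost, are done once per spoke and reused for both parts of the state.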

In order to obtain feasible initial estimates and guarantee their early convergence, equations (25a), (25b) and (25e) were iterated before any of the KF estimates were computed. The initial value for the estimation error covariance \(\varvec{\Gamma }_0\) before the iterations was diagonal, with the diagonal values set to the maximum value of the process noise covariance matrix \(\varvec{\Psi }\). The number of initial iterations was the same as the number of spokes per cycle (610). After the iterations, the resulting a posteriori estimation error covariance matrix was used as the initial covariance for the actual filter estimates.

The steady-state smoother was utilized because the standard RTS smoother gains in (22a) would need to be stored during the KF pass, with a single gain matrix at an image resolution of \(128 \times 128\) taking 1 GB of memory. With the steady-state smoother, only a single gain matrix is required. The constant smoother gain computed with (23a) was used in (23b) to determine the smoothed estimates. Tests performed at a lower resolution (\(64 \times 64\) image) showed that the differences between the computationally more efficient but approximate steady-state smoother and the regular smoother are negligible.

Since the observation matrix is different for each consecutive spoke of data, and both (8c) and (8e) depend on the current observation matrix, the estimation error covariances (8b) and (8e) do not reach an exact steady state, but rather vary periodically around a steady-state level along the measurement cycle. However, the differences in values between different time steps were less than 5%. This periodic steady-state regime was also reached quite quickly, as the estimation error covariances had nearly settled into it after the initial 610 iterations.

With both the simulated and the experimental data, the KF and KS estimates were computed using Algorithm 1. In this work, all computations were performed offline, as the data-driven estimation of the evolution noise covariance requires access to the whole time series of the MRI data. However, it would also be possible to implement Algorithm 1 in near real-time by using a time-varying process noise covariance, which could be estimated by fixed-interval or fixed-lag computations based on short subsets of the data as it becomes available.

For the computations, we used the ArrayFire C++ library [48] with the CUDA backend using single precision. The GPU used was an Nvidia Tesla P100 PCI-E 16 GB. The codes were run through the MEX interface in MATLAB (2017a, The MathWorks Inc., Natick, MA).

3.5 Consistency Tests

Both the process noise variances and the observation noise variance were scaled based on the consistency tests, resulting in the following modifications of (25a), (25b) and (16):

$$\begin{aligned} \varvec{\Gamma }_t^-&= \varvec{\Gamma }_{t-1}^++\alpha \varvec{\Psi } \end{aligned}$$
(26a)
$$\begin{aligned} \varvec{\Pi }_t&= \varvec{\Gamma }_t^-\varvec{H}_t^T(\varvec{H}_t \varvec{\Gamma }_t^-\varvec{H}_t^T+\beta \sigma _v^2\varvec{I})^{-1} \end{aligned}$$
(26b)
$$\begin{aligned} \varvec{Y}_t&= \varvec{H}_t \varvec{\Gamma }_t^{-}\varvec{H}_t^T + \beta \sigma _v^2\varvec{I}, \end{aligned}$$
(26c)

respectively, where \(\alpha > 0\) and \(\beta > 0\) are coefficients to be determined by the consistency tests, initially set to 1. Due to the separate computations of the real and imaginary parts, the expected value of (17) becomes \(E(\epsilon _t) = M\). The number of time points used to compute the tests was \(L = 610\). First, the value of \(\alpha \) was varied until the mean of the innovation (15) reached its minimum value. Next, the value of \(\beta \) was decreased until the NIS value (17) reached the confidence interval (19), or was at the minimum distance to the confidence interval if it did not reach it. If the NIS did not reach the confidence interval, an additional step was taken where \(\alpha \) was further incremented until the NIS value reached the confidence interval. In all cases, the autocorrelation values were inside the confidence intervals (21). Since both \(\alpha \) and \(\beta \) were adjusted based on the NIS and autocorrelation tests, their values and intervals were limited by the tests as well.
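
This two-stage tuning can be sketched as the following search (the helper `run_filter(alpha, beta)` is hypothetical: it is assumed to run the KF with (26a)–(26c) and return the innovation-mean norm and the time-averaged NIS):

```python
def tune_alpha_beta(run_filter, alpha_grid, beta_grid, nis_interval):
    """Sketch of the tuning procedure: first pick alpha minimizing the
    innovation mean (15) with beta = 1, then pick the beta whose NIS (17)
    falls inside, or closest to, the confidence interval (19)."""
    alpha = min(alpha_grid, key=lambda a: run_filter(a, 1.0)[0])
    lo, hi = nis_interval

    def distance(nis):
        # Zero inside the interval, otherwise distance to the nearer endpoint.
        return 0.0 if lo <= nis <= hi else min(abs(nis - lo), abs(nis - hi))

    beta = min(beta_grid, key=lambda b: distance(run_filter(alpha, b)[1]))
    return alpha, beta
```

The extra step of further incrementing \(\alpha \) when the NIS never enters the interval would wrap this search in an outer loop over a growing `alpha_grid`.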

3.5.1 Simulated Test Case

The reconstructions were computed using image size \({N_\mathrm{pix}}= 128 \times 128\). The process noise variances which were estimated based on the SW estimates are shown in Fig. 5.

Fig. 5

The variances of the process noise obtained from the sliding window estimates in the simulated case

The values of the consistency tuning parameters \(\alpha \) and \(\beta \) in (26) were obtained as outlined earlier and were \(\alpha = 500\) and \(\beta = 0.2\).

3.5.2 Experimental Data

The process noise variances which were estimated based on the SW estimates are shown in Fig. 6.

Fig. 6

The process noise variances obtained from the SW estimates in the experimental case

The consistency tuning parameters \(\alpha \) and \(\beta \) in (26) were obtained as outlined in Sect. 3.5 and were \(\alpha = 2500\) and \(\beta = 0.4\).

3.6 Image Fidelity Measures

In the simulated test case, the fidelity of the reconstructed images with respect to the true target is assessed using the relative \(L_2\) norm error

$$\begin{aligned} \mathrm{Err}(\tilde{\varvec{f}}) = \frac{\Vert \tilde{\varvec{f}} - \tilde{\varvec{f}}_\mathrm{true} \Vert }{\Vert \tilde{\varvec{f}}_\mathrm{true} \Vert }, \end{aligned}$$

peak signal-to-noise ratio (PSNR) [41] and mean structural similarity index (SSIM) [42] over the whole image domain. The reconstruction quality of the simulated activation response inside the ROI is assessed using the \(L_2\) norm error and the contrast-to-noise ratio

$$\begin{aligned} CNR = \frac{|A-A_{\text {base}}|}{\sigma _{\text {base}}} \end{aligned}$$
(27)

where A is the peak (or minimum, if the signal decreases) signal magnitude after the stimulus, \(A_{\text {base}}\) is the mean magnitude of the baseline signal (the signal before the stimulus), and \(\sigma _{\text {base}}\) is the STD of the initial baseline. Other means of calculating the CNR exist for MRI, but (27) is commonly used [43].
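
Equation (27) translates directly into code; a sketch where the first `n_base` samples of a (ROI-averaged) magnitude time series form the baseline:

```python
import numpy as np

def cnr(signal, n_base, signal_increases=True):
    """Contrast-to-noise ratio, eq. (27): |A - A_base| / sigma_base, where the
    baseline is the first n_base samples and A is the post-stimulus peak
    (or minimum, if the signal decreases)."""
    base = np.asarray(signal[:n_base])
    post = np.asarray(signal[n_base:])
    A = post.max() if signal_increases else post.min()
    return abs(A - base.mean()) / base.std()
```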

Fig. 7

Top: Mean of the ROI for the original signal, sliding window (SW) estimate, Kalman filter (KF) and Kalman smoother (KS) in simulated case. Bottom: Close-up of the mean change of the ROI as shown by the black bars on top

Fig. 8

Top: Relative error of ROI for SW, KF and KS. Bottom: Relative error for the entire image \(\varOmega \) for SW, KF and KS

4 Results

4.1 Simulated Test Case

The top of Fig. 7 shows the mean of the magnitude in the simulated ROI with respect to time for the whole time series, and the bottom shows a close-up of the signal around the time of the activation in the simulated fMRI data, indicated by the black bars in the top image. Figure 8 shows the relative reconstruction error of the ROI and the relative error of the entire image domain \(\varOmega \) with respect to time. The integral of the relative error over time and the time averages of the other image fidelity measures are given in Table 1. Figure 9 shows a snapshot of the simulated and reconstructed images at \(t=1000\) and the temporal variations of one column of image pixels for the entire time series. This column is indicated by the white bar in the left image and the temporal location of the snapshot by the white bar on the right.

From Figs. 7 and 8, as well as from Table 1, we can see that the state estimates have smaller temporal and spatial errors than the sliding window estimate. The bottom of Fig. 7 shows that the KF estimate has less delay in recovering the activation than the SW estimate. The KS, on the other hand, shows a slightly premature beginning of the activation.

In the case of spatial errors, the differences in the ROI are small when comparing the KF and SW estimates, but quite significant when either is compared with the KS estimate. Over the whole domain, both the KF and KS have smaller errors than the SW estimate. The other image fidelity measures similarly show better results for the KF and KS estimates. Based on the fidelity measures and visual examination of the snapshots in Fig. 9, the best estimate is given by the KS.

Figure 10 shows the mean of the KF estimate for the ROI pixels and the mean of the 95% confidence intervals (\(\pm 2\sigma \)) of the estimate at the ROI pixels. The confidence intervals were computed with the formula

$$\begin{aligned} \hat{\varvec{f}}_{t,2\sigma } = \hat{\varvec{f}}_{t} \pm 2\sqrt{{\text {diag}}(\varvec{P}_t^+)} \end{aligned}$$
(28)

separately for the real and imaginary parts, with the same covariance used for both. Finally, the magnitude was taken and used as the confidence interval in Figs. 10 and 14. We remark that the (posterior) confidence interval of (28) depends on the state estimation model through both of the noise covariances \(Q_t, R_t\) and the state transition matrix \(F_t\), implying that the confidence intervals reflect the uncertainty with respect to the state estimation model used.
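
One plausible reading of this magnitude-interval construction is sketched below (the exact combination of the real- and imaginary-part bounds is left implicit in the text, so this is an assumption):

```python
import numpy as np

def magnitude_confidence_bounds(f_re, f_im, P_diag):
    """Eq. (28) applied separately to the real and imaginary parts with the
    shared covariance diagonal P_diag, followed by taking the magnitude of
    the resulting lower and upper bound pairs."""
    s = 2.0 * np.sqrt(P_diag)                        # the +/- 2 sigma half-width
    lower = np.sqrt((f_re - s) ** 2 + (f_im - s) ** 2)
    upper = np.sqrt((f_re + s) ** 2 + (f_im + s) ** 2)
    return lower, upper
```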

The computational times per frame with the current implementations were on average 0.08 s for the KF (with the KS extending the computational time by 0.006 s per frame) on the Tesla P100 PCI-E 16 GB, and 0.3 s for the SW on an Intel Core i7-5820K.

Table 1 Integrated values of the relative \(L_2\) norm error in the ROI and the whole domain \(\varOmega \) from Fig. 8, mean SSIM and PSNR values for \(\varOmega \) and mean CNR values for the ROI in the simulated case
Fig. 9

Reconstructions from simulated data. Left column: Image at a given time point which is indicated by the vertical line in the time variation images in the right column. Right: Time variation of signal intensity in one column of image pixels which is indicated by vertical line in the left column. Rows from top to bottom: 1. The true target image. 2. SW reconstruction. 3. KF reconstruction. 4. KS reconstruction. All images have the color scale of the original image

Fig. 10

Left: Mean of the ROI values for the KF estimate (red line), 95% confidence intervals of the estimate (\(\pm 2\sigma \)) (blue dashed line). Right: Diagonal of the a posteriori covariance at \(t = 1000\). Simulated case

4.2 Experimental Data

The mean of the magnitude in the ROI with respect to time is shown in the top of Fig. 11. The bottom of Fig. 11 shows a close-up of the time window indicated by the black bars in the top image. Figure 12 shows the time series of the two pixels marked in Fig. 3 for the same time window as in the bottom of Fig. 11. The left image of Fig. 12 shows the signal of one of the ROI pixels (marked with a white “+” inside the ROI in Fig. 3) and the right image the signal of a pixel at the location of the main cortex artery (the upper “+” mark in Fig. 3). Figure 13 shows a snapshot of the reconstructions at time index \(t = 6050\) together with the time series of one row of pixels indicated by the white line in the left image. The white line in the right of Fig. 13 shows the temporal location of the snapshot.

Signal alterations in the time series in Figs. 11 and 12 reflect changes in water relaxation properties, mainly in the \(T_1\) relaxation time, due to the varying contrast agent concentration. Increased signal intensity is observed in the tumor region after the administration of the contrast agent (see Figs. 11 and 13), because the compromised blood–brain barrier leads to accumulation of the contrast agent in the tumor tissue. The time plot of the signal intensity along the vertical line in Fig. 13 indicates that only minimal signal changes are observed outside the tumor region during the experiment. However, the ROI signal in Fig. 11 and the time plot in Fig. 13 (the part of the vertical line inside the brain) reveal that a transient signal drop can be observed in the brain at the time of the injection. This drop may reflect conditions where the very high contrast agent concentration leads to a transient signal loss due to enhanced \(T_2^*\) relaxation.

The results in Figs. 11, 12 and 13 conform to the findings of the simulated fMRI test case: the state estimation approach yields better image quality than the SW also in the DCE experiment, as well as less noisy time series and a clearer recovery of the transient signal loss in the brain and the subsequent increase in intensity in the tumor region. The spatial details in the state estimates are clearer and the images less noisy compared to the SW approach. The signal changes can also be seen more clearly in the Kalman estimates, especially in the KS. Both the decrease and the increase in signal intensity can be seen slightly earlier than in the SW estimate. The contrast-to-noise ratios are also better with both the KF and KS estimates, see Table 2. The CNR was calculated for this DCE experiment using the magnitude of the transient signal drop after the injection of the contrast agent.

The left graph of Fig. 12 shows that at some pixels in the brain the SW estimate does not recover the transient signal loss at all, while the KF estimate, and especially the KS estimate, do recover it. This would be advantageous for diagnostic purposes as it allows, for example, more accurate localization of tumor areas. Furthermore, it could allow the examination of stimuli that cause either too fast or too small a response, as is often the case in BOLD examinations [3]. This would be useful in the early diagnosis of Alzheimer's disease.

Figure 14 shows the mean of the KF estimate for the ROI pixels and the mean of the 95% confidence intervals (\(\pm 2\sigma \)) of the estimate for the ROI pixels. The confidence intervals were computed as in the simulated case with (28).

Fig. 11

Experimental DCE data. Top: Mean of the signal in the ROI with respect to time for the SW, KF and KS reconstructions. Bottom: Close-up of the mean change as indicated by the black bars on top

Fig. 12

Time series of the two pixels marked with “+” marks in Fig. 3 around the time of injection of the Gadovist contrast agent. The time interval is denoted by the vertical lines in Fig. 11

Fig. 13

Reconstructions from experimental data. Left column: Image at a given time point which is indicated by the vertical line in the time variation images in the right column. Right: Time variation of signal intensity in one column of image pixels which is indicated by vertical line in the left column. Rows from top to bottom: 1. SW reconstruction. 2. KF reconstruction. 3. KS reconstruction. All images have the color scale of the original image

5 Discussion & Conclusions

In this paper, we proposed a state estimation approach to time-varying MRI. The proposed method utilizes the golden angle acquisition method, a data-driven process noise covariance matrix estimated from the sliding window estimates, and a steady-state Kalman smoother. The approach was evaluated using simulated and experimental small animal data. The simulated data corresponded to an fMRI experiment, while the experimental data demonstrated a DCE-MRI experiment. Both cases showed that the proposed method improves the reconstruction quality, both spatially and temporally, compared with the sliding window method, which is widely used for the reconstruction of time-varying MRI data. Our method also allows easy computation of two different estimates, in the form of the KF and KS estimates, with very little additional computational time required. The KS estimate was temporally smoother than the KF estimate, but could estimate the signal changes slightly prematurely.

Table 2 Mean CNR values calculated with (27) for the ROI in the experimental DCE case
Fig. 14

Left: Mean of the ROI values for the KF estimate (red line), 95% confidence intervals of the estimate (\(\pm 2\sigma \)) (blue dashed line). Right: Diagonal of the a posteriori covariance at \(t = 6050\). Experimental case

The steady-state formalism of the Kalman smoother presented here could also be extended to the Kalman filter itself. As previously mentioned, the convergence to the (periodic) steady-state estimation error covariance and Kalman gain values happened reasonably fast. Employing, for example, a steady-state a posteriori estimation error covariance would improve the computational times significantly while having only a small effect on the quality of the estimates. Another option would be to store the individual Kalman gain matrices for all the spokes, but the memory requirements would be higher compared with a single covariance matrix.

In our previous paper [45], we demonstrated the use of the KF in fMRI using uniformly spaced radial acquisition and spatial priors. As previously mentioned, when using the conventional uniform radial acquisition, a periodic temporal artifact was present in the time series of the KF estimates regardless of the number of spokes per cycle. This artifact appeared as a sawtooth shape in the error curves of the estimates in our previous work. In this work, however, we showed that no such periodic artifact is present in the KF estimates when the golden angle acquisition is used.

The GA method also seemed to improve the spatial accuracy of the KF and KS estimates compared with conventional linear radial sampling. This improvement can be attributed to the combination of the GA trajectory, which gradually fills the whole k-space, with the state estimation approach, where the states are correlated in time through the models and the structure of the filtering procedure. The spatial regularization methods used in [45] could also be incorporated into the state estimation with data-driven noise covariances (Algorithm 1). However, in the present setup with GA sampling, spatial regularization provided only a minor improvement in image quality while significantly increasing the computational cost. The process noise covariance presented here could also be made time-varying, as was done in, e.g., [37]. This would allow the estimates to be temporally less noisy during periods when there are no rapid changes in the data. However, as this effect comes with a decrease in the process noise variances, it can lead to lagging during periods of rapid change, depending on the sensitivity of the covariance. Increasing the variance values to prevent this lag can, on the other hand, lead to noisier estimates, as happened in our experiments. Based on our experiments, the stationary covariance approximation employed in this paper provided a reliable construction of the process noise covariance.
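A minimal sketch of a stationary, data-driven process noise covariance in the spirit described above: per-pixel variances are estimated from frame-to-frame differences of a sliding-window (SW) series and then kept fixed in time. The SW series here is synthetic, and this is an illustrative estimator rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_pix = 200, 16
# Synthetic SW image series (one row per time frame, one column per pixel);
# in practice this would come from the sliding window reconstruction.
sw_series = np.cumsum(rng.standard_normal((T, n_pix)) * 0.05, axis=0)

d = np.diff(sw_series, axis=0)     # frame-to-frame signal changes
q_diag = np.var(d, axis=0)         # per-pixel random-walk step variance
Q = np.diag(q_diag)                # stationary diagonal process noise covariance
```

Keeping `Q` fixed over the whole time series corresponds to the stationary approximation used in the paper; a time-varying variant would recompute the variances over a local temporal window.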

In this paper, we employed the same process noise covariance for both the real and imaginary parts. As previously mentioned, using separate covariances is also possible. In our tests, however, separate process noise covariances gave either temporally slightly less noisy estimates with some lagging in certain pixels, or temporally noisier estimates with no lag, while the computational time roughly doubled.

Instead of using the split form presented in this paper, all the Kalman filter computations could be carried out in complex space using the method implemented in [7]. However, this method is computationally more demanding and, in our experiments, provided no improvement in the quality of the KF estimates.
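The split form can be written out explicitly: a complex linear measurement model y = Ax is stacked into an equivalent real-valued system, which standard real-valued Kalman filter code can then process. The matrix `A` and the state `x` below are random illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 5
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Split form: [Re y; Im y] = [[Re A, -Im A], [Im A, Re A]] [Re x; Im x]
A_split = np.block([[A.real, -A.imag],
                    [A.imag,  A.real]])
x_split = np.concatenate([x.real, x.imag])
y_split = A_split @ x_split

# The stacked real system reproduces the complex product exactly
y = A @ x
assert np.allclose(y_split, np.concatenate([y.real, y.imag]))
```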

In this work, the state evolution model was a simple random walk process. A topic for future research could be improving the method with more advanced state evolution models for encoding temporal prior information. These could, for example, be based on simple models of the physiological signals [31] or, if slow changes are expected, on kinematic models [38] in which the rate of change of the signal derivatives is controlled by a higher-order time series model. These approaches, however, increase the computational demand due to the larger number of unknown state variables.
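As an example of such a kinematic model, the following sketch augments a single pixel's state with its rate of change (a constant-velocity model), so the temporal prior favors locally linear trends rather than a pure random walk. The time step and noise scale are illustrative, and the model doubles the number of state variables per pixel, which is the computational cost mentioned above.

```python
import numpy as np

dt = 1.0
# State: [signal, rate of change of the signal]
F = np.array([[1.0, dt],     # signal_t = signal_{t-1} + dt * rate_{t-1}
              [0.0, 1.0]])   # rate_t   = rate_{t-1} + noise
# Process noise enters mainly through the rate term
# (standard discrete white-noise-acceleration covariance)
q = 1e-3
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])

x = np.array([1.0, 0.1])     # current signal value and its rate
x_next = F @ x               # predicted state: [1.1, 0.1]
```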

All the methods used in this paper, together with the methods of our previous paper and methods that were tested for this paper but not presented here, will be available in the open-source High-dimensional Kalman filter toolbox (HELMET), which will be freely available from GitHub [44].

In this study, the state estimation approach was evaluated using a time step of a single k-space spoke. The time resolution and sampling can in principle be selected quite freely; for example, one could use more than one spoke per time instant or use the data in a sliding window manner. However, in our experiments, this did not improve the spatial resolution while it lowered the temporal resolution.
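For reference, the golden angle spoke angles used throughout can be generated as below: each spoke is rotated by approximately 111.25° from the previous one, so any window of consecutive spokes covers k-space roughly uniformly, regardless of where the window starts.

```python
import numpy as np

# Golden angle for radial MRI: 180 degrees divided by the golden ratio
GA = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 degrees

n_spokes = 8
# Angle of each successive spoke, wrapped to [0, 180) degrees
angles = np.mod(np.arange(n_spokes) * GA, 180.0)
```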