Abstract
Purpose
An intraoperative real-time respiratory tumor motion prediction system based on magnetic tracking technology is presented. Drawing on respiratory movements in different body regions, it provides patient-specific prediction for single or multiple tumors and thereby facilitates treatment guidance.
Methods
A custom-built phantom patient model replicates respiratory cycles similar to those of a human body, while the custom-built sensor holder concept is applied on the patient’s surface to find the optimal number of sensors and their best possible placement for real-time surgical navigation and motion prediction of internal tumors. Automatic marker localization in the patient’s 4D-CT data, feature selection and Gaussian process regression enable off-line prediction in the preoperative phase to increase the accuracy of real-time prediction.
Results
Two evaluation methods with three different registration patterns (at fully inhaled, half inhaled and fully exhaled positions) were applied quantitatively at all internal target positions in the phantom: the static method evaluates the accuracy with simulated breathing stopped, the dynamic method with breathing continued. The overall root-mean-square (RMS) error for both methods was between \(0.32\pm 0.06~\hbox {mm}\) and \(3.71\pm 0.79~\hbox {mm}\); the overall registration RMS error was \(0.6\pm 0.4~\hbox {mm}\). The best prediction errors were obtained with registrations at half inhaled positions (minimum \(0.27\pm 0.02~\hbox {mm}\), maximum \(2.90\pm 0.72~\hbox {mm}\)). The resulting accuracy satisfies most radiotherapy treatments or surgeries, e.g., for lung, liver, prostate and spine.
Conclusion
The system predicts the respiratory motion of internal structures in the body while the patient breathes freely during treatment. The custom-built sensor holders are compatible with magnetic tracking. Our approach reduces known technological and human limitations of commonly used methods for both physicians and patients.
Introduction
Surgical tracking technology is used for real-time tumor or organ motion prediction, aiming to guide surgeries or therapies with minimal damage to the tissue surrounding the target. In particular, treatments in stereotactic ablative radiotherapy (SABR) or stereotactic body radiation therapy (SBRT) while the patient is breathing freely are an important concern in clinical workflows for the safe and effective provision of precision radiotherapy, computer-assisted tumor surgery and biopsy interventions [1,2,3,4]. Existing approaches such as respiratory gating and breath hold require extensive training of patients and physicians and are often a constraint. For abdominal compression, the pneumatic belts or mechanical pressure systems used are inconvenient for patients, considering the long therapy sessions and treatment periods.
We present a patient-specific approach that predicts internal tumor motion from real-time magnetically tracked skin sensors while allowing the patient to breathe freely and in a relaxed manner. The respiratory cycle of the patient, and thus the 3D temporospatial movements of the internal targets, is observed preoperatively in the patient’s 4D-CT. Our optimization technique [5] and custom-made surface sensor holders (SHs) determine the best possible number and locations of sensors to be placed on the patient’s surface for predicting single or multiple tumor movements. Accurate preoperative positioning of the sensors in the proposed SHs enables submillimetric registration accuracy and thus clinically acceptable real-time prediction in the intraoperative phase.
Methods
This section describes the hardware and software components of the respiratory motion prediction system, a possible clinical workflow and the implementation of respiTrack.
Components
Custom respiratory system model
To replicate the human respiratory system artificially and to predict tumor motion based on the simulated respiration, a custom-made realistic phantom model was built (Fig. 1). A standard rubber hot-water bottle simulates, e.g., the abdominal region of the human body and contains a spherical rubber balloon that simulates a moving organ. The respiratory cycle of the model is produced by manually inflating/deflating the balloon with a water blaster, which is connected to the balloon by a silicone tube. The SHs (Fig. 2) are fixed on the model surface (surface fiducials/markers). The balloon inside the bottle contains internal markers of different sizes (target markers) distributed at various locations, replicating tumors moving along different directions such as vertical, lateral or longitudinal.
Sensor holder
Several real-time movement prediction techniques apply external surface sensors [6,7,8]. However, those sensors are placed at arbitrary locations and distributed empirically, using a fixed number of sensors for each patient, which may be sub-optimal for real-time prediction accuracy [9].
The custom-made SH concept can improve real-time prediction accuracy in the intraoperative phase by optimizing, in the preoperative phase, the spatiotemporal distribution and number of surface markers used for each patient and for multiple tumor movement prediction, with respect to their predictive power. The improved SH design enables exchanging the tracking sensors while maintaining the same sensor origin: off-line prediction is performed preoperatively using the surface fiducials in the SHs, and the pre-trained predictors are then used with magnetic tracking during the intervention, once the relative transformation between the surface fiducials and the inserted real-time tracker sensor is known.
The main component of the SH is an X-Spot CT skin fiducial (Beekley Medical, Bristol-CT) centered in a sensor attachment point. During the preoperative phase, \(\approx \)10–25 of these main SHs are fixed on the patient’s surface at randomized candidate locations in the body region of interest. In the intraoperative phase, the main SH concentrically holds a magnetic sensor holder (EM-SH) with a tracking sensor (NDI Aurora, 40 Hz measurement rate, Northern Digital Inc., Canada). The SHs provide user-error-free rigid-body image-to-patient registration [10], and therefore more accurate real-time motion prediction can be achieved. A 6D sensor was used as a dynamic reference frame (DRF) for the registration. The SH design allows automated localization both in 4D-CT patient images and during real-time motion tracking. A fully automated registration process eliminates possible user errors and enables high-accuracy registrations, potentially with submillimetric errors at the target.
Rhinospider
Rhinospider (RS) is a novel registration technique used in combination with magnetic tracking to determine accurate fiducial localization and to optimize the workflow for patient-to-image registration [11]. In this work, an RS ball was used to validate the real-time prediction, i.e., to determine the correctness of the prediction accuracy and to identify positional deviations between the tumor (predicted RS ball center) and the center of the tracked 5D sensor in the ball. A 5D sensor was attached inside the RS ball (the centroids of the RS ball and the sensor coincide) (Fig. 1, right, and Fig. 3, right), which was placed inside the phantom model before the 4D-CT scan (Fig. 3, left). The RS ball was detected and localized automatically in CT image space, in the same way as the other CT skin/internal markers in the model.
respiTrack software
A plugin-based prototype software system (respiTrack) featuring preoperative planning (off-line prediction), intraoperative registration, surgical navigation and real-time prediction was developed. All the required modules [12] were implemented using open-source libraries [13,14,15,16,17,18].
Workflow
The individual steps (Fig. 4) in respiTrack describe the procedures performed consecutively from the preoperative to the postoperative phase.
Data acquisition
For the 4D-CT, a scanner (cardiac scan with SOMATOM Definition Flash; temporal resolution 75 ms; scan time 0.6 s; Siemens Healthineers, Austria) at the University Clinic for Radiology of the Medical University of Innsbruck was used.
The phantom model with 20 SHs and 7 targets was placed in the CT gantry and held at the fully inhaled position by adjusting the air in the balloon (position 1 in Fig. 5). In total, \(\approx \)10–15 CT scans were acquired at discrete time steps of a half breathing cycle. Before each scan, the air in the balloon was decreased by moving the handle of the water blaster to the next marked position, until the fully exhaled position was reached. The distance between adjacent marked positions on the handle is 2 cm.
The slice thickness of each CT image (\(512\times 512\) px) was 1.0 mm, and the 12 discrete CT phases consist of 303 images with \(0.488\times 0.488\times 0.488\) mm spacing. The 4D-CT scan was loaded into the respiTrack software and visualized in standard DICOM views (axial, sagittal, coronal and multiplanar) (Fig. 6).
Marker detection and localization
The surface and target markers were localized automatically using a GPU-accelerated volumetric detection method [19] based on morphological opening and closing operators.
To determine the marker centroids, each 4D-CT image set was loaded into respiTrack and thresholded with a given Hounsfield-unit parameter to binarize the images. A virtual spherical structuring element with given physical dimensions, scaled appropriately to the voxel size of the image, was generated and applied to the images. A geometry filter selects the best candidates among the detected spherical blobs based on shape and size and calculates their 3D centroids in CT image space.
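For illustration, the detection pipeline can be sketched with scipy.ndimage as below. This is a minimal stand-in for the GPU-accelerated method of [19]; the threshold, structuring-element radius and blob-size limits are illustrative values, not the parameters used in respiTrack.

```python
import numpy as np
from scipy import ndimage

def detect_marker_centroids(volume_hu, threshold=200.0, radius_vox=3,
                            min_voxels=10, max_voxels=2000):
    """Find spherical marker centroids (in voxel coordinates) in a CT volume."""
    # 1. Binarize at the given Hounsfield-unit threshold.
    binary = volume_hu > threshold
    # 2. Morphological opening with a spherical structuring element
    #    suppresses noise and non-spherical structures.
    r = radius_vox
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    sphere = (xx ** 2 + yy ** 2 + zz ** 2) <= r ** 2
    opened = ndimage.binary_opening(binary, structure=sphere)
    # 3. Label the remaining blobs and keep those whose voxel count is
    #    plausible for a marker (a crude geometry filter).
    labels, n = ndimage.label(opened)
    centroids = []
    for i in range(1, n + 1):
        blob = labels == i
        if min_voxels <= np.count_nonzero(blob) <= max_voxels:
            centroids.append(ndimage.center_of_mass(blob))
    return np.array(centroids)
```

Converting the voxel centroids to CT image-space coordinates then only requires scaling by the voxel spacing and applying the image origin/orientation.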
The detected marker locations for each 4D-CT phase were exported (input and target data) and used as training data for prediction. The observations represent the respiratory cycle of the patient and the 3D temporospatial movement variance of all surface and target markers over a half breathing cycle in 12 discrete time steps (Table 1). The largest movement amplitude was observed along the z image axis (SI, superior/inferior), with smaller movements along the y (AP, anterior/posterior) and x (LR, left/right) axes, respectively. The total movements consequently confirm that the internal markers replicate respiratory motion very similar to that of internal organs of a human body, such as heart, lung, liver, trachea, prostate and spine [20,21,22,23]. The temporospatial movements of the surface markers behave similarly to the target marker movements. The maximum movement in the AP plane was observed for marker 9 \((-18.25\;\hbox {mm})\) and the minimum for marker 15 \((1.20\;\hbox {mm})\); in the SI plane, the maximum was observed for marker 6 \((1.01\;\hbox {mm})\) and the minimum for marker 13 \((0.02\;\hbox {mm})\), while in the LR plane, the maximum movement was observed for marker 10 \((2.83\;\hbox {mm})\) and the minimum for marker 20 \((0.26\;\hbox {mm})\).
Respiratory motion prediction and optimization
On the basis of the known spatial coordinates of surface and target markers in 4D-CT image space, the optimal number of sensors for the desired single or multiple tumors and their best possible placement locations on the patient’s surface were determined in the off-line prediction phase. This optimization, configurable individually for each tumor and patient, eliminates one of the major error sources for prediction accuracy. During real-time prediction, 5D sensors were inserted in the SHs recommended by the off-line prediction and tracked while the patient is breathing freely.
Off-line prediction
The exported spatial coordinates of both marker types in the 4D-CT reference frame (k time series, each with T time steps and 3D output dimensions in x, y, z, yielding the time series \(p \in R^{T \times 3}\)) were used to determine the optimal surface sensor locations preoperatively with a multi-objective genetic algorithm (GA)-based feature selection method [24, 25], which trains an accurate predictor of tumor motion from a few optimally positioned SHs.
An individual I in the total population (a possible solution in the metaheuristic search) is represented during the GA search by a chromosome, a k-dimensional binary vector \(I = \lbrace 0, 1 \rbrace ^{k}\), where the nth bit (gene) indicates whether the nth SH marker is used for prediction (1) or not (0).
If an SH marker is selected, its 3D positional coordinates in the CT reference frame are added to the input coordinate set X used for prediction. This yields a \(3 \times M\)-dimensional input feature vector for each time step, where M is the number of markers enabled in the individual. For each I, the fitness is defined by the multi-objective function \(F(I) = \left( F_1(I), S(I) \right) \). The primary component is given by the weighted sum
$$F_1(I) = E(I) + \alpha \max \left( 0, S(I) - K \right),$$
where E(I) is the average RMS error between the predicted and target locations using X as the input feature set over a threefold cross-validation on the T time steps, S(I) is the number of enabled features, K is the maximum preferred number of enabled surface markers, and \(\alpha \) is a scaling parameter that balances the trade-off between additional prediction error and the number of enabled markers. The optimization goal is thus to find the minimum achievable prediction error with as few markers as possible, while softly penalizing configurations with more than K enabled sensors. The GA in respiTrack was configured with a generation count of 60 (termination criterion), population size 600, crossover probability 0.5, mutation probability 0.2, crossover independent probability 0.5 and mutation independent probability 0.05.
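The selection scheme can be sketched as a minimal binary GA. The hinge penalty \(\alpha \max (0, S(I)-K)\) is one plausible reading of the described soft punishment, the toy `error_fn` stands in for the cross-validated RMS error E(I), and all function names below are ours:

```python
import random

def fitness(ind, error_fn, alpha=0.1, K=5):
    """F1(I): prediction error plus a soft (hinge) penalty once more than
    K surface markers are enabled; our reading of the described penalty."""
    s = sum(ind)
    if s == 0:
        return float("inf")     # no enabled markers -> no usable prediction
    return error_fn(ind) + alpha * max(0, s - K)

def ga_select(error_fn, k, pop_size=60, generations=40,
              cx_prob=0.5, cx_ind_prob=0.5, mut_prob=0.2,
              mut_ind_prob=0.05, seed=1):
    """Minimal binary GA: tournament selection, uniform crossover,
    independent bit-flip mutation, with 2-individual elitism."""
    rng = random.Random(seed)
    fit = lambda ind: fitness(ind, error_fn)
    pop = [[rng.randint(0, 1) for _ in range(k)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = sorted(pop, key=fit)[:2]          # elitism: keep the two best
        while len(nxt) < pop_size:
            # 3-way tournament picks each parent.
            p1 = min(rng.sample(pop, 3), key=fit)
            p2 = min(rng.sample(pop, 3), key=fit)
            child = p1[:]
            if rng.random() < cx_prob:          # uniform crossover
                child = [a if rng.random() < cx_ind_prob else b
                         for a, b in zip(p1, p2)]
            if rng.random() < mut_prob:         # independent bit flips
                child = [1 - b if rng.random() < mut_ind_prob else b
                         for b in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fit)
```

With an error function that rewards a few informative markers, the GA converges to a sparse chromosome enabling exactly those markers, mirroring the recommended-SH-list behavior described above.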
For each I, the predictions were evaluated using three Gaussian process regressors (GPR) (\(G_i: X \rightarrow t_i\), \(i = 1,2,3\)), one for each coordinate of the target y, with kernel \(C \cdot \hbox {SE} + W\), where C is the constant kernel \(\sigma ^{2}\), SE is the squared exponential kernel \(\sigma ^2 \exp \left( -\frac{\lVert x - x'\rVert ^2}{2l^2}\right) \), and W is the white noise kernel with variance \(\sigma _{n}^{2}\) [26].
The C kernel was configured with variance 1.0 and bounds \((1e-3,1e3)\), the SE kernel with length scale 10.0 and bounds \((1e-2, 1e2)\), and the W kernel with noise variance 0.1 and bounds \((1e-10, 1e+0.5)\). The GPR was configured with normalized target data and without a hyperparameter optimizer. Off-line prediction was repeated ten times for each individual target y, yielding the same recommended SH list after each run.
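With scikit-learn, this kernel configuration might look as follows; the library choice is ours (not necessarily the one used in respiTrack), and the upper white-noise bound 1e+0.5 is read as \(10^{0.5}\):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

def make_axis_regressor():
    """One GPR per target coordinate with the composite kernel C * SE + W
    and the fixed hyperparameters and bounds quoted in the text."""
    kernel = (ConstantKernel(constant_value=1.0,
                             constant_value_bounds=(1e-3, 1e3))
              * RBF(length_scale=10.0, length_scale_bounds=(1e-2, 1e2))
              + WhiteKernel(noise_level=0.1,
                            noise_level_bounds=(1e-10, 10 ** 0.5)))
    # normalize_y centers/scales the targets; optimizer=None keeps the
    # fixed hyperparameters ("without an optimizer" in the text).
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                    optimizer=None)
```

Three such regressors, fed with the selected surface-marker coordinates, predict the x, y and z coordinates of a target independently.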
Real-time prediction
The intraoperative image-to-patient registration was established while the patient maintained a fixed position relative to the field generator, with or without breath-hold; 50 sensor location readings (relative to the DRF) were averaged for every attached sensor (their number depending on the recommended sensor list S(I) for an individual target y). The combined sensor and SH marker coordinates were matched \(T_{t,p}\) to find the minimum registration error (FRE) [27].
During real-time prediction, each observed sensor reading \(L_i \in S(I)\) was transformed from the tracker to the image coordinate system and used as test data \(T_{p,r}\) in the GPR via \((\overrightarrow{V}_{L_i(x,y,z,1)})^T \cdot R\), where \(\overrightarrow{V}\) is a \(1\times 4\) vector for each individual sensor coordinate in tracker coordinate space and R is a \(4\times 4\) matrix obtained through rigid-body registration (Fig. 7). The GPR was applied with the same kernel and input data \(L \in X\) for a desired target y, using the real-time test data \(L_i\).
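The registration and the tracker-to-image mapping can be sketched in a few lines. We use the SVD (Kabsch) least-squares solution, which is equivalent in result to the quaternion-based closed-form solution; the row-vector convention \((\overrightarrow{V})^T \cdot R\) follows the text, and the function names are ours:

```python
import numpy as np

def register_rigid(tracker_pts, image_pts):
    """Least-squares rigid fit (rotation + translation) from paired
    tracker-space and image-space points; returns a 4x4 matrix R for the
    row-vector convention v_h @ R used in the text."""
    P = np.asarray(tracker_pts, float)
    Q = np.asarray(image_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against reflections
    Rot = Vt.T @ D @ U.T                   # column-convention rotation
    t = cq - Rot @ cp
    R = np.eye(4)
    R[:3, :3] = Rot.T                      # transpose for row-vector use
    R[3, :3] = t
    return R

def tracker_to_image(sensor_xyz, R):
    """Apply (V)^T * R with V = (x, y, z, 1), as in the text."""
    v = np.append(np.asarray(sensor_xyz, float), 1.0)
    return (v @ R)[:3]
```

Each averaged sensor reading is mapped this way before being fed to the GPR as real-time test data.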
Evaluation
Experimental setup
For each target, the recommended number of SHs and their identified locations were used, respectively. The patient was then positioned in the field of view of the tracker, and real-time sensor data (test data) were observed. The prediction accuracy was validated using the RS ball with its embedded sensor, which served as a tracked target marker (Target RS1 in all tables).
Evaluation procedure
Two different validation methods were applied. In the static method, the prediction for a selected target was determined at three different fixed positions by reading test data without simulating breathing. The following steps were executed:
- 1.
Load the first patient dataset into respiTrack (4D-CT scan at fully inhaled position) (Fig. 8, top).
- 2.
Fill the model with air up to the marked handle position corresponding to the number of the loaded dataset and perform patient-to-image registration.
- 3.
Perform real-time prediction for all targets, respectively, while holding the handle at the fixed position without changing the air in the model.
- 4.
Repeat the same experiment for the half inhaled (6th) and fully exhaled (12th) datasets.
In the dynamic method, the prediction was determined with the same steps as above (except step 3), while repetitively changing the amount of air in the patient between handle positions 1 and 12 (Fig. 8, bottom). The operator synchronized his/her relaxed breathing cycle with the simulated inhalation and exhalation of the patient. Each validation procedure was repeated five times, and the standard deviation (SD) of each run was calculated. The correctness of the prediction accuracy was determined for target RS1 by comparing the predicted positions with the real-time sensor readings (Table 6). For each validation step, 100 predictions were performed, taking \(\approx \) 1 min in total; each individual prediction took 0.62 s. The registrations were established at the three different marked positions, which showed no significant influence on registration accuracy but did affect prediction accuracy (see the prediction RMS columns “Reg. at 6th pos.” in Tables 3 and 5). Test data were observed during simulated breathing.
Results
Various external surface markers were selected to predict the temporospatial movements of 7 internal targets from the best possible SH locations. The number of SHs used was decided by the feature selection algorithm from the 20 SHs distributed on the patient’s surface, and predictions were performed by the GPR algorithm. The best prediction accuracy was observed with the combined kernels owing to their generalization properties. Tables 2 and 3 show the resulting off-line and real-time prediction accuracy for each target in the phantom. Each input marker in the recommended SH list was processed with an individual target, respectively.
A leave-one-out cross-validation (LOO) procedure [28] was applied to validate the off-line prediction from both the input and target marker positions \(N \times (L \cdot D)\), where N is the total number of 4D-CT phases, L is the total number of recommended SHs, and D is the dimension of the data. The predictor was trained on \(N-1\) of the N input rows, and the prediction was tested on the remaining row for an individual target y. This process was repeated N times, each time leaving out a different pair as the single test case.
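The LOO procedure might be sketched as follows; scikit-learn's default GPR stands in for the configured per-axis regressors, and the array shapes follow the \(N \times (L \cdot D)\) layout described above:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import LeaveOneOut

def loo_rms(X, Y):
    """Leave-one-out validation of the off-line predictor: for each of the
    N breathing phases, train per-axis regressors on the other N-1 phases
    and predict the held-out 3D target position. Returns the RMS of the
    per-phase Euclidean prediction errors.
    X: (N, 3*L) surface-marker coordinates, Y: (N, 3) target coordinates."""
    errors = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        pred = np.empty(3)
        for axis in range(3):      # one regressor per target coordinate
            gpr = GaussianProcessRegressor(normalize_y=True)
            gpr.fit(X[train_idx], Y[train_idx, axis])
            pred[axis] = gpr.predict(X[test_idx])[0]
        errors.append(np.linalg.norm(pred - Y[test_idx[0]]))
    return float(np.sqrt(np.mean(np.square(errors))))
```

The same routine can be rerun with randomly selected SH columns of X to reproduce the cross-check against non-recommended sensor locations.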
Instead of the recommended SHs, randomly selected SH locations were also used to cross-validate the results and to investigate the effect of SH location on the accuracy, i.e., on the relation between the motions of the SHs and a target under various registration patterns (Tables 4 and 5).
Discussion and conclusions
In this paper, we proposed a real-time respiratory motion prediction system that uses surface sensors to predict internal tumor motions (Fig. 9). For magnetic tracking, the provided nondisposable SH concept ensures user-error-free registration and uninterrupted data flow without line-of-sight limitations. Automatically identifying the best possible sensor locations on the patient’s surface preoperatively, i.e., the locations whose surface motion correlates highly with the internal tumor motion, provides better target accuracy with fewer external sensors, e.g., in the thoracic or abdominal regions in the intraoperative phase. In particular, enabling free breathing for the patient during treatment and predicting multiple tumors without additional workload for the medical staff enhance common workflows in such treatments.
Our internal tests of the system demonstrate reliable prediction accuracy and show promising potential for use in SABR and SBRT treatments or in tumor and biopsy surgeries. The system overcomes many of the limitations of common clinical approaches and can be integrated into existing clinical workflows.
More rigid respiratory system designs (addressing the temporal expansion of the balloon’s volume, Table 6) could further reduce the registration and prediction errors. A preliminary clinical trial with patients is planned and under way; due to the complexity of the trials, its results are foreseen to be published separately.
References
Buzurovic I, Huang K, Yu Y, Podder TK (2011) A robotic approach to 4D real-time tumor tracking for radiotherapy. Phys Med Biol 56(5):1299–1318
Wong JR, Grimm L, Uematsu M, Oren R, Cheng CW, Merrick S, Schiff P (2005) Image-guided radiotherapy for prostate cancer by CT-linear accelerator combination: prostate movements and dosimetric considerations. Int J Radiat Oncol Biol Phys 61(2):561–569
Sawant A, Smith RL, Venkat RB, Santanam L, Cho B, Poulsen P, Cattell H, Newell LJ, Parikh P, Keall PJ (2009) Toward submillimeter accuracy in the management of intrafraction motion: the integration of real-time internal position monitoring and multileaf collimator target tracking. Int J Radiat Oncol Biol Phys 74(2):575–582
D’Souza WD, Naqvi SA, Yu CX (2005) Real-time intra-fraction-motion tracking using the treatment couch: a feasibility study. Phys Med Biol 50(17):4021–4033
Özbek Y, Bardosi Z, Milosavljevic S, Freysinger W (2018) Optimizing external surface sensor locations for respiratory tumor motion prediction. In: Data driven treatment response assessment and preterm, perinatal, and paediatric image analysis, PIPPI, DATRA, Lecture Notes in Computer Science, vol 11076. pp 42–51
Wijenayake U, Park SY (2017) Real-time external respiratory motion measuring technique using an RGB-D camera and principal component analysis. Sensors (Basel) 17(8):9
Borgert J, Krüger S, Timinger H, Krücker J, Glossop N, Durrani A, Viswanathan A, Wood BJ (2006) Respiratory motion compensation with tracked internal and external sensors during CT-guided procedures. Comput Aided Surg 11(3):119–25
Buzurovic I, Podder TK, Huang K, Yu Y (2010) Tumor motion prediction and tracking in adaptive radiotherapy. In: 10th IEEE international conference on bioinformatics and bioengineering, pp 273–278. ISBN:978-1-4244-7495-0. https://doi.org/10.1109/BIBE.2010.52
Sumida I, Shiomi H, Higashinaka N, Murashima Y, Miyamoto Y, Yamazaki H, Mabuchi N, Tsuda E, Ogawa K (2016) Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid. J Appl Clin Med Phys 17(2):74–84
Horn BKP (1987) Closed-form solution of absolute orientation using unit quaternions. J Opt Soc Am A 4:629–642
Bardosi ZR, Özbek Y, Plattner C, Freysinger W (2013) Auf dem Weg zum Heiligen Gral der 3D-Navigation submillimetrische Anwendungsgenauigkeit im Felsenbein. In: Freysinger W (eds) 12th Annual conference of the German Society for Computer- and Roboter-Assisted Surgery (CURAC), pp 155–158
Ganglberger F, Özbek Y, Freysinger W (2013) The use of common toolkits CTK in the computer-assisted surgery—a demo application. In: Proceedings of the 12th annual conference of the German Society for Computer and Robot-Assisted Surgery (CURAC), pp 161–164
Nolden M, Zelzer S, Seitel A, Wald D, Müller M, Franz AM, Wolf I (2013) The medical imaging interaction toolkit: challenges and advances: 10 years of open-source development. Int J Comput Assist Radiol Surg 8(4):607–620
Schroeder W, Martin K, Lorensen B (2006) The visualization toolkit, 4th edn. Kitware, Clifton Park
Yoo TS, Ackerman MJ, Lorensen WE, Schroeder W, Chalana V, Aylward S, Metaxas D, Whitaker R (2002) Engineering and algorithm design for an image processing API: a technical report on ITK-the insight toolkit. In: Westwood J (ed) Proceedings medicine meets virtual reality. IOS Press, Amsterdam, pp 586–592
Enquobahrie A, Cheng P, Gary K, Ibanez L, Gobbi D, Lindseth F, Yaniv Z, Aylward S, Jomier J, Cleary K (2007) The image-guided surgery toolkit IGSTK: an open source C++ software toolkit. J Digit Imaging 20(1):21–33
Tokuda J, Fischer GS, Papademetris X, Yaniv Z, Ibanez L, Cheng P, Liu H, Blevins J, Arata J, Golby AJ, Kapur T, Pieper S, Burdette EC, Fichtinger G, Tempany CM, Hata N (2009) OpenIGTLink: an open network protocol for image-guided therapy environment. Int J Med Robot 5(4):423–434
Lutz M (2011) Programming Python: powerful object-oriented programming, 4th edn. O’Reilly, Dallas
Bardosi Z (2015) OpenCL accelerated GPU binary morphology image filters for ITK. Insight J 3–5. ISSN:2327-770X
Shimohigashi Y, Toya R, Saito T, Ikeda O, Maruyama M, Yonemura K, Nakaguchi Y, Kai Y, Yamashita Y, Oya N, Araki F (2017) Tumor motion changes in stereotactic body radiotherapy for liver tumors: an evaluation based on four-dimensional cone-beam computed tomography and fiducial markers. Radiat Oncol 12(1):61
Weiss E, Wijesooriya K, Dill SV, Keall PJ (2007) Tumor and normal tissue motion in the thorax during respiration analysis of volumetric and positional variations using 4D CT. Int J Radiat Oncol Biol Phys 67(1):296–307
Juneja P, Kneebone A, Booth JT, Thwaites DI, Kaur R, Colvill E, Ng JA, Keall PJ, Eade T (2015) Prostate motion during radiotherapy of prostate cancer patients with and without application of a hydrogel spacer: a comparative study. Radiat Oncol 10:215
Korreman SS (2015) Image-guided radiotherapy and motion management in lung cancer. Br J Radiol 88:1051
Spolaor N, Lorena AC, Lee HD (2011) Multi-objective genetic algorithm evaluation in feature selection. In: Takahashi RHC, Deb K, Wanner EF, Greco S (eds) Evolutionary multi-criterion optimization. EMO 2011. Lecture notes in computer science, vol 6576. pp 462–476
Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning, 1st edn. Addison-Wesley Professional, Boston
Rasmussen C, Williams C (2005) Gaussian processes for machine learning. MIT Press, Cambridge
Fitzpatrick JM, West JB (2001) The distribution of target registration error in rigid-body point-based registration. IEEE Trans Med Imaging 20(9):917–927
Sammut C, Webb GI (2011) Leave-one-out cross-validation. In: Sammut C, Webb GI (eds) Encyclopedia of machine learning. Springer, Boston
Acknowledgements
Open access funding provided by University of Innsbruck and Medical University of Innsbruck.
Funding
This study was partly funded by the Medical University of Innsbruck under project number D-153110-019-014.
Author information
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors. This article does not contain patient data.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Özbek, Y., Bárdosi, Z. & Freysinger, W. respiTrack: Patient-specific real-time respiratory tumor motion prediction using magnetic tracking. Int J CARS 15, 953–962 (2020). https://doi.org/10.1007/s11548-020-02174-3