Abstract
Purpose
Needle placement is a challenging problem for applications such as biopsy or brachytherapy. Tip force sensing can provide valuable feedback for needle navigation inside the tissue. For this purpose, fiber-optical sensors can be directly integrated into the needle tip. Optical coherence tomography (OCT) can be used to image tissue. Here, we study how to calibrate OCT to sense forces, e.g., during robotic needle placement.
Methods
We investigate whether using raw spectral OCT data without a typical image reconstruction can improve a deep learning-based calibration between optical signal and forces. For this purpose, we consider three different needles with a new, more robust design which are calibrated using convolutional neural networks (CNNs). We compare training the CNNs with the raw OCT signal and the reconstructed depth profiles.
Results
We find that using raw data as input for the largest CNN model outperforms using reconstructed data, with a mean absolute error of 5.81 mN compared to 8.04 mN.
Conclusions
We find that deep learning with raw spectral OCT data can improve learning for the task of force estimation. Our needle design and calibration approach constitute a very accurate fiber-optical sensor for measuring forces at the needle tip.
Introduction
Needle placement is a challenging problem for a variety of medical interventions, including brachytherapy or biopsy [12]. The force acting on the needle tip allows for inferences about the tissue currently being penetrated. This information can be used to navigate the needle and to prevent injury to delicate structures [9]. In order to distinguish tissue based on tip forces, they may need to be measured with an accuracy of approximately \(0.01 \, \hbox {N}\) [8]. Tip forces cannot be measured with external sensors due to friction forces at the needle shaft [5]. Therefore, small-scale fiber-optical force estimation methods have been integrated directly into the needle tip. Several sensor concepts are based on Fabry–Pérot interferometry [1] and fiber Bragg gratings [6]. Here, we consider a setting where optical coherence tomography (OCT) is available, e.g., to study tissue deformation [10] or to realize elastography [7]. While OCT has been proposed for tip force estimation before [2, 3], these approaches rely on the reconstructed gray value data, which has two limitations. First, the signal processing is based on a number of assumptions which may cause some loss of signal information. Second, it does not incorporate the phase part of the complex OCT signal, which is particularly sensitive to small axial shifts. Therefore, we explore whether tip force estimation accuracy can be improved by directly using the raw spectral OCT data. To this end, we perform a calibration between the optical signal and forces applied to the needle tip with convolutional neural networks. We validate our approach with three different needles using a new, improved needle design.
The design of the needles used in this work. In the photograph at the top, one of our needles (below) is compared with a standard G18 biopsy needle (above). In the schema, the brass tip with a piston and the brass sleeve are depicted in yellow. The ferrule guiding the fiber is depicted in gray. The protection tube is depicted in orange
Example OCT image data over a period of time. Left, the raw OCT signal is shown. Right, the reconstructed A-Scans (depth profiles) are shown. Each column depicts one individual scan. The scans were collected as part of a time series, and subsequent scans are depicted adjacently (M-Scan). The reconstructed scans allow human interpretation to some extent, e.g., the piston surface is visible. Patterns in the raw OCT data are much more subtle and not interpretable by the human eye
Methods
Needle design
We used an improved needle design for force estimation at the needle tip. A scheme and an image of the needle are shown in Fig. 1. A brass tip with a piston is mounted on a brass sleeve such that the piston can slide inside the sleeve. An epoxy layer between tip and sleeve acts as a spring. An optical fiber is embedded into a ceramic ferrule. The ferrule is positioned relative to the tip piston such that the light beam travels a distance of approximately \(1 \, \hbox {mm}\) through air before hitting the surface of the piston. For protection, the ferrule is embedded into a polymer tube which is glued to the brass sleeve. When forces act on the needle tip, the epoxy layer is compressed and the piston moves closer to the exit point of the laser beam, which can be detected in the OCT signal. The diameter of the needle is \(2 \, \hbox {mm}\).
In [2], the needle tip was constructed as a cone and attached to the needle shaft with a deformable epoxy layer. Thus, radial forces on the needle tip could easily tilt it. The improved piston construction has the advantage that it guides the tip in axial needle direction and prevents tilt. This contributes to a more reproducible signal, an important aspect in the calibration of the needle, and to the overall durability.
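As an illustration of the sensing principle, the epoxy layer can be modeled as a linear spring, so the tip force is proportional to the piston displacement observed in the OCT signal. The following sketch uses a hypothetical stiffness value `K_EPOXY`; neither the constant nor the function is taken from the paper:

```python
# Illustrative spring model of the sensing principle (not the authors' code):
# the epoxy layer is approximated as a linear spring, so the tip force is
# proportional to the piston displacement detected in the OCT signal.
K_EPOXY = 50.0  # hypothetical spring constant in N/mm (assumption)

def force_from_displacement(delta_x_mm: float) -> float:
    """Map an OCT-measured piston displacement (mm) to a tip force (N)."""
    return K_EPOXY * delta_x_mm
```

With this stiffness, a compression of 0.02 mm would correspond to 1 N; in practice, the CNN calibration described below replaces any such explicit model.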
Calibration data
We acquire calibration data for three custom-built needles identical in construction. The data acquisition is performed similarly to [3], where a needle is driven against a flat surface with a stepper motor. The force in the axial direction of the needle is measured with a force sensor and recorded together with the associated raw OCT data. Approximately 180 000 OCT-force pairs are collected for each needle for forces between \(0 \, \hbox {N}\) and \(1 \, \hbox {N}\). We perform regular OCT data reconstruction, which includes the following steps:
1. Dechirping the data by resampling and interpolating to new sampling points, based on manufacturer specifications
2. Estimation of the DC spectrum using an exponential moving average with a damping coefficient of \(d = 0.05\)
3. Subtraction of the DC spectrum
4. Apodization of the spectral data with a Hann window
5. Fourier transform for mapping frequency values to spatial values
6. Selection of the absolute signal value as the final reconstructed intensity image
Throughout the reconstruction process, information can be lost due to the DC spectrum estimation strategy (2), the apodization (4), which eliminates high-frequency signal parts, and the selection of the absolute signal value (6) as the final image. Figure 2 shows an excerpt of the collected data, both in raw (left) and reconstructed (right) form.
The raw OCT signal has a size of \(1024 \times N_t\) where \(N_t\) is the number of scans acquired over time. Reconstruction up to step (5) results in a complex signal which is also of size \(1024 \times N_t\). Finally, by taking the absolute signal value, the A-Scan image sequence of size \(512 \times N_t\) is obtained. This can be interpreted as a sequence of 1D depth images over time.
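The reconstruction steps above can be sketched per scan in a few lines of NumPy. This is an illustrative sketch under the stated assumptions, not the authors' implementation; the manufacturer-specific dechirping (step 1) is omitted since it depends on proprietary resampling tables:

```python
import numpy as np

def reconstruct_ascan(raw_spectrum, dc_estimate, d=0.05):
    """Reconstruct one depth profile from a raw spectral OCT sample.

    raw_spectrum: (1024,) raw spectrum (assumed already dechirped)
    dc_estimate:  running DC spectrum carried over from previous scans
    Returns the reconstructed A-Scan and the updated DC estimate.
    """
    # (2) update the DC spectrum with an exponential moving average
    dc_estimate = (1.0 - d) * dc_estimate + d * raw_spectrum
    # (3) subtract the DC spectrum
    spectrum = raw_spectrum - dc_estimate
    # (4) apodization with a Hann window
    spectrum = spectrum * np.hanning(len(spectrum))
    # (5) Fourier transform to the spatial domain; the input is real,
    #     so only the first half of the FFT output is informative
    depth_complex = np.fft.fft(spectrum)[: len(spectrum) // 2]
    # (6) absolute value as the reconstructed intensity A-Scan
    return np.abs(depth_complex), dc_estimate
```

Applied column by column to an M-Scan, this maps the \(1024 \times N_t\) raw signal to the \(512 \times N_t\) intensity image described below.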
Deep learning architectures
We compare prediction performance for needle tip forces from both raw and reconstructed OCT data with different convolutional neural network (CNN) architectures. The considered architectures are variants of the ResNet [4], an extension of CNNs that enables better training through improved gradient flow. The ResNet models were originally built for 2D images with 2D convolutions. Here, the A-Scan data represent 1D images. Therefore, we adapt the architectures by replacing 2D convolutions with 1D convolutions. Furthermore, we replace the network's original output layer for multi-class classification with a fully connected layer with one output for needle force regression. We consider several ResNet variants representing different network sizes: the regular ResNet34 and ResNet18 architectures as well as a smaller architecture with 2 residual blocks and 6 convolutional layers, which we name ResNet6. All deep learning models are implemented in PyTorch [11]. Learning is performed over 150 epochs with a batch size of \(N_B = 128\) and a learning rate of 0.005 using the Adam optimizer. As a loss function, we use the mean squared error, which is defined as
\(\text{MSE} = \frac{1}{N_B} \sum_{i=1}^{N_B} \left( y_i - \hat{y}_i \right)^2\)
where \(y_i\) is the ground-truth force and \(\hat{y}_i\) is the predicted force value. We use \(20\%\) of the data as a hold-out validation set. We performed five training runs with different random seeds and averaged the individual results.
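To make the 1D adaptation concrete, a minimal PyTorch sketch of a small residual regression network is given below. It loosely follows the ResNet6 idea (two residual blocks, single regression output); the exact widths, kernel sizes, and layer arrangement here are assumptions for illustration, not the authors' architecture:

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """Basic residual block with 1D convolutions instead of 2D ones."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut

class ResNet6_1d(nn.Module):
    """Small 1D ResNet-style regressor: a stem convolution, two residual
    blocks, global average pooling, and a single force output."""
    def __init__(self, in_channels=1, width=32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(in_channels, width, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(width), nn.ReLU())
        self.blocks = nn.Sequential(ResBlock1d(width), ResBlock1d(width))
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(width, 1)  # regression head instead of classes

    def forward(self, x):
        x = self.blocks(self.stem(x))
        return self.head(self.pool(x).flatten(1))

model = ResNet6_1d()
x = torch.randn(8, 1, 1024)  # a mini-batch of raw spectra (paper: N_B = 128)
loss = nn.MSELoss()(model(x).squeeze(1), torch.zeros(8))
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
```

The same pattern applies to the larger variants: ResNet18 and ResNet34 are obtained by stacking more residual blocks with channel doubling and strided downsampling between stages.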
Results
We report the mean absolute error (MAE) in mN between force predictions and force targets on the validation set. Additionally, we report inference times for the examined neural network architectures. All results are shown in Table 1. Figure 3 shows the relative differences in errors graphically. For most combinations of needle and model architecture, the error for learning on raw OCT data is lower compared with learning from reconstructed OCT scans. Particularly for the ResNet34 architecture, the calibration performance improves for all needles.
Discussion and conclusion
In this paper, we address the calibration problem of OCT-based needle tip force estimation using a new, improved sensor concept with a piston and a guiding sleeve. In contrast to a previous OCT-based concept [2], the needle is not as sensitive to lateral forces by design while achieving similar calibration results with intensity data.
Furthermore, we study an approach for improving deep learning-based calibration performance even further. During OCT acquisition, a spectral signal is obtained which is typically reconstructed to an intensity depth image. We illustrate that learning is possible in an end-to-end fashion, i.e., the process of image reconstruction can be avoided. Moreover, the results improve in six out of nine setups, indicating that information lost in the original processing can be exploited when training the network on the raw signal. The proposed approach allows for precise estimation of forces at the needle tip, which is particularly interesting for force-based robotic needle placement. With typical robot control cycles of \(1 \, \hbox {ms}\), a trade-off between accuracy and inference time must be considered.
We find that learning forces from raw OCT data instead of reconstructed images works well, in particular for larger deep learning models. For future work, our approach could be studied in more detail with additional deep learning methods and in different application scenarios. Also, our approach could be extended to other applications where an imaging modality is used as a sensor signal, for example, in the context of motion tracking with OCT or ultrasound.
References
Beekmans S, Lembrechts T, van den Dobbelsteen J, van Gerwen D (2017) Fiber-optic Fabry–Pérot interferometers for axial force sensing on the tip of a needle. Sensors 17(1):38
Gessert N, Priegnitz T, Saathoff T, Antoni ST, Meyer D, Hamann MF, Jünemann KP, Otte C, Schlaefer A (2018) Needle tip force estimation using an OCT fiber and a fused ConvGRU-CNN architecture. In: MICCAI. Springer, pp 222–229
Gessert N, Priegnitz T, Saathoff T, Antoni ST, Meyer D, Hamann MF, Jünemann KP, Otte C, Schlaefer A (2019) Spatio-temporal deep learning models for tip force estimation during needle insertion. Int J CARS 14:1485–1493
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: CVPR, pp 770–778. https://doi.org/10.1109/CVPR.2016.90
Kataoka H, Washio T, Chinzei K, Mizuhara K, Simone C, Okamura AM (2002) Measurement of the tip and friction force acting on a needle during penetration. In: MICCAI. Springer, pp 216–223
Kumar S, Shrikanth V, Amrutur B, Asokan S, Bobji MS (2016) Detecting stages of needle penetration into tissues through force estimation at needle tip using fiber bragg grating sensors. J Biomed Optics 21(12):127009
Latus S, Otte C, Schlüter M, Rehra J, Bizon K, Schulz-Hildebrandt H, Saathoff T, Hüttmann G, Schlaefer A (2017) An approach for needle based optical coherence elastography measurements. In: MICCAI 2017: medical image computing and computer-assisted intervention. Springer, pp 655–663
Mccreery G, Trejos A, Naish M, Patel R, Malthaner R (2008) Feasibility of locating tumours in lung via kinaesthetic feedback. Int J Medical Robot Comput Assist Surg 4:58–68. https://doi.org/10.1002/rcs.169
Okamura AM, Simone C, O’leary MD (2004) Force modeling for needle insertion into soft tissue. IEEE Trans Biomed Eng 51(10):1707–1716
Otte C, Hüttmann G, Schlaefer A (2012) Feasibility of optical detection of soft tissue deformation during needle insertion. In: SPIE 8316, Medical Imaging 2012: image-guided procedures, robotic interventions, and modeling, vol 8316, pp 282–292
Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Kopf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, Chintala S (2019) Pytorch: an imperative style, high-performance deep learning library. In: Advances in neural information processing systems 32. Curran Associates, Inc., pp 8024–8035
Taylor RH, Menciassi A, Fichtinger G, Fiorini P, Dario P (2016) Medical robotics and computer-integrated surgery. In: Springer handbook of robotics. Springer, pp 1657–1684
Acknowledgements
Open Access funding provided by Projekt DEAL.
Funding
This work was partially supported by the TUHH \(i^3\) initiative and DFG grants SCHL 1844/2-1 and SCHL 1844/2-2.
Ethics declarations
Conflict of interest
The authors M. Gromniak, N. Gessert, T. Saathoff and A. Schlaefer declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals.
Informed consent
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Gromniak, M., Gessert, N., Saathoff, T. et al. Needle tip force estimation by deep learning from raw spectral OCT data. Int J CARS 15, 1699–1702 (2020). https://doi.org/10.1007/s11548-020-02224-w
Keywords
- Optical coherence tomography
- Deep learning
- Force estimation
- Raw data