1 Introduction

Vascular structure enhancement strengthens tubular networks in images and is broadly useful in, for example, image processing and computer vision. There is a rich literature dedicated to its development, e.g. work motivated by the need to increase the accuracy and speed of medical diagnosis with angiographic images [3, 12, 13, 19, 29, 32, 36, 47, 56, 63]. Vascular structure enhancement can also be used for segmentation through simple thresholding. Note that segmentation in, e.g. the two-phase case is the transformation of a non-binary image into a binary image [60], which serves the purpose of exclusively isolating the desired information from the given images for further analyses. As computational hardware becomes increasingly cheap and powerful, interest in performing vascular structure enhancement on three-dimensional (3D) images has risen [6, 7, 12, 19, 29, 32, 36, 44, 56]. The main aim of this paper is twofold: 1) to propose a new and easy-to-use 3D vascular structure enhancement transform, which plays a key role in subsequent tasks like segmentation; and 2) to apply the proposed transform to extremely challenging transmission electron microscopy tomograms [1, 16, 62], which is important for advancing the study of curvilinear sub-cellular components (see Fig. 1), notable examples of which are cytoskeletons [35, 45] and chromatins [26, 37].

Fig. 1 A 2D cross section of a 3D electron tomogram. The darker dots and the lighter tubular regions are macromolecules and membrane-bound compartments, respectively

In [50, 51], a 2D orientation field-based method was proposed for vascular enhancement and segmentation. The work elaborated on the idea of an automatic locally orientable line filter to selectively enhance lines and curves in an image. By using an array of line structural elements to detect orientations and measure the alignments of each pixel’s neighbourhood, vascular structures could be enhanced. This method was originally developed for the segmentation of cellular images in electron tomography, as the objects of interest in the 2D slices of such images are usually in the form of vessels. The advantage of this type of method is its capability of segmenting structures at a relatively high noise level, as long as the objects in question bear the semblance of a line or a curve; moreover, it has only a single parameter with a clear physical meaning. This method, however, does not work with 3D lines or curves.

The 2D orientation field-based method also relates to a class of steerable filters first introduced in [23] and then expanded upon by [28]. Like the 2D orientation field transform, the steerable filter is a directional filter that can detect local orientation; therefore, a ridge detector variant has also been adopted for enhancing vascular images [28, 44]. Unlike the 2D orientation field transform, the steerable filter is designed to be generated from a few Gaussian-derivative-based basis filters. However, the ridge detector variant suggested by [28] only filters vessels of a specific width and would need to adopt a multi-scale approach (e.g. [52]) in some applications. A more detailed review of related work on vascular structure enhancement, edge detection and segmentation is deferred to Sect. 3.

The enhancement and segmentation of transmission electron micrographs pose their own set of challenges, namely the low signal-to-noise ratio [62] and the monotonicity of information, characterised by a single electromagnetic wave source and the difficulty of differential labelling [1]. When transmission electron microscopy is combined with computational tomography to produce 3D images, the additional problem of anisotropic resolution arises because of incomplete frequency information around the z-axis [16]. With biological samples composed mostly of light atoms, imaging is achieved by fixing the sample and staining it with highly oxidising heavy metallic compounds. Such images typically feature curves denoting strands, lighter regions characterising membrane-bound compartments, and ubiquitous dots representing macromolecules, see Fig. 1. In comparison with other types of 2D/3D images (e.g. medical imaging), there have yet to be tried-and-tested methods for reliable segmentation in electron tomography. As a result, it is still common practice to segment structures of interest by manual contouring [22, 59].

Fig. 2 Curves to be enhanced in the test images. Panels a and b are two types of 2D cross sections of connected lipid membrane-bound compartments. Panel c is a mesh of a 3D curve (a binary image) created by hard thresholding a 3D synthetic noisy image with Gaussian noise. Panel d is a slice of the 3D liquid crystal data shown in (e), and panels f and g are slices of other 3D liquid crystal data. Please refer to Sects. 5 and 6 and Table 1 for detailed descriptions

Popular segmentation methods (e.g. the ones mentioned in Sect. 3), however, suffer from major shortcomings on the images tested here. For example, contrast enhancement, nonlinear anisotropic diffusion and other noise-reduction techniques require a relatively sharp contrast between objects and the background in order to operate. The watershed transform does not work on objects with a high genus number, which is a characteristic of the test images in this paper (see Fig. 2). Wavelet transforms and active contours would require labour-intensive fine-tuning of ambiguous parameters. Recent developments in segmentation algorithms often rely on the integration of the aforementioned methods. As they are not mutually exclusive, mixing them is often a reliable technique, at the expense of computational power and time.

It is important to enhance the curves in the 2D lamellae in plastids (see Fig. 2a, b) and the tubules in 3D liquid crystals (see Fig. 2d–g) [38] so that the curve-like structures can be identified easily. The tubular architectures of the liquid crystals can be seen as scaffolds made of rods or curves in 3D space, see Fig. 2e. Although the lamellar compartments are in fact curved sheets in 3D, their cross sections can also be seen as curves in 2D images. In other words, lamellar compartments and tubules can be treated as curves in 2D and 3D images, respectively. As imaging technology for producing 3D images improves, there is an increasing need for curve enhancement methods that target 3D structures, like the liquid crystal segmentation problem introduced above, facilitating, for example, the segmentation of curve-like structures.

The main contribution of this paper is twofold. Firstly, we propose a general 3D vascular enhancement method, which is capable of processing images with a very low signal-to-noise ratio. Different measures in 3D, such as the maximum, mean and absolute deviation of the line integral and alignment integral for individual pixels, are defined. The 3D orientation field transform is then constructed for vascular enhancement purposes. Secondly, we apply the proposed 3D vascular enhancement method to enhance and segment the curvilinear sub-cellular structures in transmission electron microscopy tomograms. Extensive experiments and comparisons on synthetic and real-world 2D and 3D images demonstrate the properties and effectiveness of the proposed method. The proposed 3D orientation field transform possesses a concise structure and hence also serves as an ideal candidate for a preliminary filter. Moreover, it can naturally tackle any dimension, with a simpler methodology than the previous 2D orientation field transform.

The remainder of the paper is organised as follows. In Sect. 2 mathematical preliminaries are introduced. Then the related work, including vascular structure enhancement, edge detection and segmentation, is presented in Sect. 3. In Sect. 4 we propose our 3D orientation field transform. The test electron tomogram data are presented in Sect. 5. The effectiveness of the proposed method in response to different types of 2D and 3D curves in the test data and comparisons are detailed in Sect. 6. Finally, we conclude in Sect. 7.

2 Preliminaries

Given \({\textbf {a}} \in \mathbb {R}^{n}\), let \(\mathbb {H}=\{{\textbf {x}} \in \mathbb {R}^{n} \mid {\textbf {a}}^{\top }{} {\textbf {x}}\le 0 \}\) be the half of the n-dimensional Euclidean space generated by the \((n-1)\)-dimensional hyperplane through the origin with normal \({\textbf {a}}\). Let \(V^n \subset \mathbb {H}\) be the set of all the unit vectors in \(\mathbb {H}\) with the origin as the starting point. Let \(\bar{V}^n\) denote the discretised \(V^n\), containing \(|\bar{V}^n|\) unit vectors. For example, \(\bar{V}^2\) and \(\bar{V}^3\) denote the sets containing unit vectors in the discretised halves of the 2D and 3D Euclidean spaces, respectively.
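The construction of \(\bar{V}^3\) is left open above. As a minimal sketch, one way to discretise it is with a Fibonacci lattice on the unit sphere, keeping one representative of each antipodal pair; the function name, tolerance and sampling scheme below are our own choices, not prescribed by the paper:

```python
import numpy as np

def half_sphere_directions(n: int) -> np.ndarray:
    """Return roughly n unit vectors covering half of the 3D unit sphere,
    one representative per antipodal pair, i.e. a discretised V^3.
    A Fibonacci lattice is laid over the full sphere and the vectors with
    z > 0 are kept (ties on z broken by y) so no +/-b pair is counted twice."""
    i = np.arange(2 * n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i           # golden-angle steps
    z = 1.0 - (2.0 * i + 1.0) / (2.0 * n)            # uniform in z on [-1, 1]
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    pts = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    keep = (pts[:, 2] > 1e-12) | ((np.abs(pts[:, 2]) <= 1e-12) & (pts[:, 1] > 0))
    return pts[keep]

# e.g. dirs = half_sphere_directions(300) gives about 300 directions in V^3
```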

Let I be an image with domain \(\Omega \subset \mathbb {R}^n\). Let \({\textbf {x}} = (x_1, \cdots , x_n)\in \Omega \) represent individual pixels/voxels of the image I. The value of the image I at \({\textbf {x}}\) is denoted as a function \(I({\textbf {x}})\). The \(l_{2}\)-norm of \({\textbf {x}}\) is denoted by \(\Vert {\textbf {x}}\Vert _{2}=\sqrt{x_{1}^{2} + \cdots + x_n^2}\). Although practical images are discrete, continuous functions and integrals will be used for ease of discussion, which is also common practice in research (e.g. [50, 51]).

Note that every point \({\textbf {y}}\in \mathbb {R}^n\) also corresponds to a unique vector starting from the origin. With a slight abuse of notation, we also call \({\textbf {y}}\) a vector. Given vectors \({\textbf {y}}, {\textbf {z}}\in \mathbb {R}^{n}\), their inner product is represented as \({\textbf {y}} \cdot {\textbf {z}}\). The angle between \({\textbf {y}}\) and \({\textbf {z}}\) can be calculated by \(\arccos (\hat{{\textbf {y}}} \cdot \hat{{\textbf {z}}})\), where \(\hat{{\textbf {y}}}={\textbf {y}}/\Vert {\textbf {y}}\Vert _{2}\) and \(\hat{{\textbf {z}}}={\textbf {z}}/\Vert {\textbf {z}}\Vert _{2}\) are the corresponding unit vectors.

3 Related work

3.1 Vascular structure enhancement

We first recall the 2D orientation field transform. It is a top-down image enhancement method aiming to strengthen curves exclusively. As a long curve can be approximated by many overlapping short line segments, a line filter of fixed length but unspecified direction should, in theory, be able to isolate curves specifically.

The first problem, then, is to determine the directions of the line filters at individual pixel \(\textbf{x}\). The work in [50] proposed to measure the strength of a line with length \(\varepsilon \) centred at \(\textbf{x} \in \Omega \subset \mathbb {R}^2\) along direction \(\hat{\textbf{b}} \in \bar{V}^2\) by a line integral operator \(\mathcal{R}[I]\), i.e.

$$\begin{aligned} \mathcal{R}[I](\textbf{x},\hat{\textbf{b}}) = \int _{-{\varepsilon }/{2} }^{{\varepsilon }/{2}}I(\textbf{x}+s\hat{\textbf{b}}) \ \textrm{d}s. \end{aligned}$$
(1)

Obviously,

$$\begin{aligned} \mathcal{R}[I](\textbf{x},\hat{\textbf{b}}) = \mathcal{R}[I](\textbf{x},-\hat{\textbf{b}}). \end{aligned}$$
(2)

Therefore, the direction of \(\hat{\textbf{b}}\) is restricted to \([0, \pi )\), i.e. half of the plane, to avoid redundancy.
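As a minimal numerical sketch of Eq. (1), the line integral at a pixel can be approximated as follows; the sample count and the linear interpolation are our own choices, since [50, 51] do not fix a quadrature:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.ndimage import map_coordinates

def line_integral(image: np.ndarray, x: np.ndarray, b_hat: np.ndarray,
                  eps: float, n_samples: int = 17) -> float:
    """Approximate R[I](x, b) = int_{-eps/2}^{eps/2} I(x + s*b_hat) ds by the
    trapezoidal rule, sampling the image with linear interpolation.
    `x` and `b_hat` are (row, col) arrays; by Eq. (2) the result is
    unchanged when b_hat is negated."""
    s = np.linspace(-eps / 2.0, eps / 2.0, n_samples)
    pts = x[:, None] + s[None, :] * b_hat[:, None]   # (2, n_samples) sample points
    vals = map_coordinates(image, pts, order=1, mode='nearest')
    return float(trapezoid(vals, s))
```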

Ways to incorporate the directional information in \(\mathcal{R}[I]\) have evolved over the course of several papers [50, 51]. In [51], a primary orientation field at point \(\textbf{x}\), \(\mathcal{F}[\mathcal{R}](\textbf{x})\), was generated by taking the maximal line integral \(\mathcal{R}[I]\) of point \(\textbf{x}\) and the direction \(\hat{\textbf{b}}\) achieving this maximal integral, i.e.

$$\begin{aligned} \mathcal{F}[\mathcal{R}](\textbf{x}) :=&\left\{ \mathcal{F}_1[\mathcal{R}](\textbf{x}), \ \mathcal{F}_2[\mathcal{R}](\textbf{x}) \right\} \nonumber \\ =&\Big \{\max _{\hat{\textbf{b}} \in \bar{V}^2} \mathcal{R}[I](\textbf{x},\hat{\textbf{b}}), \ \arg \max _{\hat{\textbf{b}} \in \bar{V}^2} \mathcal{R}[I](\textbf{x},\hat{\textbf{b}}) \Big \}. \end{aligned}$$
(3)

In other words, \(\mathcal{F}_1[\mathcal{R}](\textbf{x})\) and \(\mathcal{F}_2[\mathcal{R}](\textbf{x})\) are the weight and direction of \(\mathcal{F}[\mathcal{R}](\textbf{x})\), respectively. The alignment integral operator \(\mathcal{G}[\mathcal{F}]\) at point \(\textbf{x}\) along direction \(\hat{\textbf{b}}\) reads

$$\begin{aligned}&\mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}}) \nonumber \\ =&\int _{-{\varepsilon }/{2}}^{{\varepsilon }/{2}} \mathcal{F}_1[\mathcal{R}](\textbf{x}+s\hat{\textbf{b}}) \nonumber \\&\quad \cos \left( 2\arccos (\mathcal{F}_2[\mathcal{R}](\textbf{x}+s\hat{\textbf{b}}) \cdot \hat{\textbf{b}})\right) \textrm{d}s \nonumber \\ =&\int _{-{\varepsilon }/{2}}^{{\varepsilon }/{2}} \mathcal{F}_1[\mathcal{R}](\textbf{x}+s\hat{\textbf{b}}) \nonumber \\&\quad \left( 2(\cos (\arccos (\mathcal{F}_2[\mathcal{R}](\textbf{x}+s\hat{\textbf{b}})\cdot \hat{\textbf{b}})))^{2}-1 \right) \textrm{d}s \nonumber \\ =&\int _{-{\varepsilon }/{2}}^{{\varepsilon }/{2}} \mathcal{F}_1[\mathcal{R}](\textbf{x}+s\hat{\textbf{b}}) \nonumber \\&\quad \left( 2(\mathcal{F}_2[\mathcal{R}](\textbf{x}+s\hat{\textbf{b}})\cdot \hat{\textbf{b}})^{2}-1 \right) \textrm{d}s, \end{aligned}$$
(4)

which can be used to detect curve-like structures. The more a point in the orientation field is out of alignment with \(\hat{\textbf{b}}\), the lower the overall value of the alignment integral. In principle, a strong alignment should wind along the length of a curve, while the opposite should be true for objects which do not have a clear orientation.

The 2D orientation field transform is completed by taking the maximum value of the alignment with respect to \(\hat{\textbf{b}}\), i.e.

$$\begin{aligned} \mathcal{L}[\mathcal{G}](\textbf{x})=\max _{\hat{\textbf{b}}\in \bar{V}^2} \mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}}), \end{aligned}$$
(5)

see [51] for more details.
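Putting Eqs. (3)–(5) together, a compact sketch of the 2D transform is given below. It assumes the per-direction line integrals have already been stacked into an array R (e.g. with the line_integral sketch above); the interpolation orders are our own choices:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.ndimage import map_coordinates

def orientation_field_transform_2d(R, dirs, eps, n_samples=17):
    """2D orientation field transform of Eqs. (3)-(5).  R[y, x, k] is the
    line integral at pixel (y, x) along dirs[k], where dirs is a (K, 2)
    array of unit vectors spanning half the plane."""
    F1 = R.max(axis=2)                        # Eq. (3): weight
    F2 = dirs[R.argmax(axis=2)]               # Eq. (3): direction, (H, W, 2)
    s = np.linspace(-eps / 2.0, eps / 2.0, n_samples)
    H, W = F1.shape
    rows, cols = np.mgrid[0:H, 0:W].astype(float)
    out = np.full((H, W), -np.inf)
    for b in dirs:                            # Eq. (5): max over directions
        pr = rows[..., None] + s * b[0]       # sample points along b
        pc = cols[..., None] + s * b[1]
        f1 = map_coordinates(F1, [pr, pc], order=1, mode='nearest')
        # nearest-neighbour lookup keeps the field vectors unit length
        d0 = map_coordinates(F2[..., 0], [pr, pc], order=0, mode='nearest')
        d1 = map_coordinates(F2[..., 1], [pr, pc], order=0, mode='nearest')
        align = 2.0 * (d0 * b[0] + d1 * b[1]) ** 2 - 1.0   # Eq. (4) integrand
        out = np.maximum(out, trapezoid(f1 * align, s, axis=-1))
    return out
```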

Fig. 3 Image in Fig. 1 enhanced by the 2D orientation field transform with the orientation field defined in Eq. (3)

The orientation field defined in Eq. (3) is sensitive to non-curve information (e.g. point-like objects) in the given image. For example, in Fig. 3, a slice of an electron tomogram is processed with the previously described orientation field transform (5) with the orientation field defined in (3). On the one hand, the curves in the given image are successfully amplified. On the other hand, the structures around small dots in the given image are not suppressed but enhanced as curves. To overcome this issue raised by the orientation field defined in (3), a new orientation field, an average orientation, was defined in [50]. The average orientation was then used in Eq. (5) to form the 2D orientation field transform. We refer the readers to [50] for more details.

There are many other methods devoted to the development of vascular structure enhancement, including matched filters [47, 63], region-growing [19], Hessian-based filtering [13, 17, 21, 29, 41], flux-based methods [3], a vast variety of model-fitting approaches [12, 32, 56] and machine learning [32]. Among these methods, the multi-scale Hessian filter [21, 41] has drawn much attention in recent decades. It is founded on the observation that vascular images have a high absolute value of second derivatives across the vessels’ diameter, and hence could be used to specifically enhance vascular structures with the help of a local scaling filter, e.g. a Gaussian filter [29]. Unlike machine learning and some of the matched filter approaches, the Hessian filter method does not require matching against a predetermined set of patterns, which might limit a method’s versatility for less popular imaging methods and subjects. On the other hand, region-growing and model-fitting methods are typically stepwise and hence time-consuming. Comparatively, the Hessian filter method is developed upon processing local features of an image, which makes it parallelisable and versatile across different types of vascular images. The 2D orientation field transform in [50, 51] shares the same advantages. However, among the images in the published literature demonstrating the effectiveness of Hessian filter-based algorithms, the data sets used often have a much higher signal-to-noise ratio than what is typically found in electron tomograms.

3.2 Edge detection and segmentation

Compared to vascular structure enhancement, edge detection algorithms focus on the edges of structures. Well-known edge detection methods include, e.g. the Prewitt [48] and Sobel [53] edge detectors that detect strong gradients, the Laplacian of Gaussian image operator [43] and the sophisticated Canny edge detector that combines a directional edge-thinning algorithm with hysteresis thresholding [11]. In particular, the curves selected by the 2D orientation field transform are similar, in principle, to the edges selected by the directional edge-thinning non-maximum suppression that is part of the Canny edge detector. The difference is that, unlike the one-pixel-thick edges selected by non-maximum suppression, the 2D orientation field transform is much more generous about the range of widths of the curves it enhances, controlled by a user-specified parameter which takes a value of about 2 to 3 times the width of the curves.

Vascular structure enhancement followed by thresholding is one way of conducting segmentation. It is worth mentioning that there are many types of segmentation methods which are independent of structure enhancement. Prevailing autosegmentation approaches include: (i) general noise-reduction techniques, e.g. contrast enhancement [42, 46, 49], different variations of the wavelet transform [6, 7, 22, 27, 46, 55, 59], nonlinear anisotropic diffusion, bilateral filtering [4, 20, 22, 59] or the block-matching 3D filter (BM3D) [15, 18, 46]; and (ii) direct segmentation techniques such as thresholding [8, 9, 10, 22], morphological operations [22], region-based approaches utilising the watershed transform [22, 58, 59] and energy-based approaches in the manner of active contours [5, 22, 57, 59]. There are also methods that conduct segmentation by addressing noise characteristics [64]. Moreover, there have lately been attempts at using machine learning algorithms to improve segmentation quality, e.g. [31, 42, 54, 61]. As both electron tomography and machine learning are fairly new tools that have developed rapidly in the last decade, this makes a new frontier of research.

Fig. 4 Demonstration in 2D of the information extracted out of the line integral \(\mathcal{R}[I]\) and alignment integral \(\mathcal{G}[\mathcal{F}]\). a A binary test image peppered with a light layer of Gaussian noise, where the cross-hairs mark the selected pixels used in (b), (c); b distributions of line integral values at each direction for selected pixels in (a); c distributions of alignment integral values at each direction for selected pixels in (a); and d, e close-up of the disc E8 in (b) and (c). In particular, intensity values in (b)–(e) are normalised to the range of [0, 1], where white and dark colours depict the lowest and highest integral values, respectively

4 Proposed 3D orientation field transform

As evidenced in [50], the average operation noticeably improved the discrimination of curves from other structures compared to [51]. Nevertheless, there is no 3D analogue in which vectors from exactly half of the Euclidean space can be transformed bijectively to cover exactly the entire space. Still, the idea of detecting the directionality of a neighbourhood remains inspirational. We propose the 3D orientation field transform below.

Firstly, the line integral operator \(\mathcal{R}[I]\), the orientation field

$$\begin{aligned} \mathcal{F}[\mathcal{R}](\textbf{x}) := \left\{ \mathcal{F}_1[\mathcal{R}](\textbf{x}), \ \mathcal{F}_2[\mathcal{R}](\textbf{x}) \right\} , \end{aligned}$$
(6)

and the orientation field transform \(\mathcal{L}[\mathcal{G}]\), respectively defined in (1), (3) and (5), are extended from 2D to 3D, i.e. for all \(\textbf{x} \in \Omega \subset \mathbb {R}^3\) and \(\hat{\textbf{b}} \in \bar{V}^3\). Then, to reduce the impact of the neighbourhood around the targeted curves, both the line integral operator and the alignment integral operator are combined with a 1D Gaussian filter with standard deviation \(\sigma = {\varepsilon }/{4}\). This yields

$$\begin{aligned} \mathcal{R}[I](\textbf{x},\hat{\textbf{b}})&= \frac{1}{\sqrt{2\pi }\sigma }\int _{-{\varepsilon }/{2}}^{{\varepsilon }/{2}}I(\textbf{x}+s\hat{\textbf{b}})\exp {\Big (-\frac{s^{2}}{2\sigma ^{2}}\Big )} \ \textrm{d}s, \end{aligned}$$
(7)
$$\begin{aligned} \mathcal{F}_1[\mathcal{R}](\textbf{x})&= \max _{\hat{\textbf{b}} \in \bar{V}^3} \mathcal{R}[I](\textbf{x},\hat{\textbf{b}}), \end{aligned}$$
(8)
$$\begin{aligned} \mathcal{F}_2[\mathcal{R}](\textbf{x})&= \arg \max _{\hat{\textbf{b}} \in \bar{V}^3} \mathcal{R}[I](\textbf{x},\hat{\textbf{b}}), \end{aligned}$$
(9)
$$\begin{aligned} \mathcal{L}[\mathcal{G}](\textbf{x})&=\max _{\hat{\textbf{b}}\in \bar{V}^3} \mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}}), \end{aligned}$$
(10)

where \(\mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}})\) is the alignment integral operator given in (4) combined with the 1D Gaussian filter, i.e.

$$\begin{aligned}&\mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}}) \nonumber \\ =&\frac{1}{\sqrt{2\pi }\sigma }\int _{-{\varepsilon }/{2}}^{{\varepsilon }/{2}} \mathcal{F}_1[\mathcal{R}](\textbf{x}+s\hat{\textbf{b}}) \nonumber \\&\quad \exp {\Big (-\frac{s^{2}}{2\sigma ^{2}}\Big )} \Big (2(\mathcal{F}_2[\mathcal{R}](\textbf{x}+s\hat{\textbf{b}})\cdot \hat{\textbf{b}})^{2} -1 \Big ) \textrm{d}s. \end{aligned}$$
(11)

Recall that the line integral operator \(\mathcal{R}[I](\textbf{x},\hat{\textbf{b}})\) of the image I measures the strength of each line with length \(\varepsilon \) centred at \(\textbf{x} \in \Omega \) along direction \(\hat{\textbf{b}} \in {\bar{V}}^3\).
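As a minimal sketch of Eq. (7), with \(\sigma = \varepsilon /4\) as above, the Gaussian-weighted line integral can be computed in any dimension, since \(\textbf{x}\) and \(\hat{\textbf{b}}\) simply carry one entry per image axis; the quadrature is again our own choice:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.ndimage import map_coordinates

def gaussian_line_integral(image, x, b_hat, eps, n_samples=17):
    """Eq. (7): line integral along b_hat centred at x, weighted by a 1D
    Gaussian with sigma = eps/4, so that values near the centre of the
    segment count more than those near its ends."""
    sigma = eps / 4.0
    s = np.linspace(-eps / 2.0, eps / 2.0, n_samples)
    w = np.exp(-s ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    pts = x[:, None] + s[None, :] * b_hat[:, None]
    vals = map_coordinates(image, pts, order=1, mode='nearest')
    return float(trapezoid(vals * w, s))
```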

The orientation transform \(\mathcal{L}[\mathcal{G}]\) equipped with the maximum of the line integrals \(\mathcal{F}_1[\mathcal{R}]\) can enhance curves in 2D but suffers from, for example, point-like objects [50]. Therefore, it is not enough to use \(\mathcal{L}[\mathcal{G}]\) defined in (10) to directly enhance curves in 3D, which is much more complicated than the 2D case. The main issue of only using the maximum to identify the curve direction is that it disregards the number of large line integrals running along different directions at a point; in other words, the maximum criterion in this scenario will mistakenly judge points inside, e.g. point-like objects to be on a curve.

It is clear that, in the neighbourhood of a point that shows a clear orientation distributed as a ridge or a trench, the integral along this direction will be of a fairly higher value than the others. On the contrary, in the neighbourhood of a point that is centred on a point-like object or covered with a homogeneous signal, the integral along one direction should differ little from the others. Hence, measuring the magnitude and variability of the integrals at a point should indicate whether its neighbourhood encloses a curve or not. Since the mean and absolute deviation (acting as low-pass and high-pass filters, respectively) are powerful tools for estimating this kind of variability, the mean and absolute deviation of the set of line integral values \(\{\mathcal{R}[I](\textbf{x},\hat{\textbf{b}})\}_{\hat{\textbf{b}} \in \bar{V}^3}\) and the set of alignment integral values \(\{\mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}})\}_{\hat{\textbf{b}} \in \bar{V}^3}\) will be introduced to design our 3D orientation field transform.

Remark 1

Similar to the maximum of the line and alignment integrals, the means of the line and alignment integrals may also be non-selective to point-like objects. The difference between the mean and the maximum is that the mean averages out the signal along different directions, effectively acting as a low-pass filter.

The mean and the absolute deviation of the set of the line integral values \(\{\mathcal{R}[I](\textbf{x},\hat{\textbf{b}})\}_{\hat{\textbf{b}} \in \bar{V}^3}\) are defined as:

$$\begin{aligned} \mathcal{M}[\mathcal{R}](\textbf{x}) = \frac{1}{|\bar{V}^3|} \sum _{\hat{\textbf{b}} \in \bar{V}^3}\mathcal{R}[I](\textbf{x},\hat{\textbf{b}}) \end{aligned}$$
(12)

and

$$\begin{aligned} \gamma [\mathcal{R}](\textbf{x}) = \frac{1}{|\bar{V}^3|} \sum _{\hat{\textbf{b}} \in \bar{V}^3}\vert \mathcal{M}[\mathcal{R}](\textbf{x}) - \mathcal{R}[I](\textbf{x},\hat{\textbf{b}})\vert \end{aligned}$$
(13)

respectively. Analogously, the mean and the absolute deviation of the set of the alignment integral values \(\{\mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}})\}_{\hat{\textbf{b}} \in \bar{V}^3}\) are defined as

$$\begin{aligned} \mathcal{M}[\mathcal{G}](\textbf{x}) = \frac{1}{|\bar{V}^3|} \sum _{\hat{\textbf{b}} \in \bar{V}^3}\mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}}), \end{aligned}$$
(14)

and

$$\begin{aligned} \gamma [\mathcal{G}](\textbf{x}) = \frac{1}{|\bar{V}^3|} \sum _{\hat{\textbf{b}} \in \bar{V}^3}\vert \mathcal{M}[\mathcal{G}](\textbf{x}) - \mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}})\vert , \end{aligned}$$
(15)

respectively.
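Eqs. (12)–(15) share one pattern, so a single sketch covers both sets of measures; the direction axis is assumed to be stacked last:

```python
import numpy as np

def mean_and_abs_deviation(stack: np.ndarray):
    """Eqs. (12)-(15): `stack[..., k]` holds the line (or alignment)
    integral of every voxel along the k-th direction of the discretised
    half-sphere; returns the per-voxel mean and mean absolute deviation
    over directions."""
    mean = stack.mean(axis=-1)                                 # Eqs. (12)/(14)
    abs_dev = np.abs(stack - mean[..., None]).mean(axis=-1)    # Eqs. (13)/(15)
    return mean, abs_dev
```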

Remark 2

Before computing the mean and absolute deviation defined in (12)–(15), the values in each of the sets \(\{\mathcal{R}[I](\textbf{x},\hat{\textbf{b}})\}_{\hat{\textbf{b}} \in \bar{V}^3}\) and \(\{\mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}})\}_{\hat{\textbf{b}} \in \bar{V}^3}\) can also be smoothed across directions, e.g. with a Gaussian operator, as there might be high-frequency fluctuations across different angles caused by a high level of image noise.

Figure 4 demonstrates the characteristics of the line integrals (i.e. Eq. (7)) and alignment integrals (i.e. Eq. (11)), on which the maximum, mean and absolute deviation measures (i.e. (8), (10), (12)–(15)) are built. Figure 4a shows the test image with selected pixels marked as cross-hairs. Each disc in Fig. 4b and c corresponds to one pixel and represents the distribution of the integral values in every direction, see Fig. 4d and e for two close-up discs. The lighter the line, the higher the integral value. Note that the demonstration of Fig. 4 is done in 2D for the purpose of better visualisation. The scenario in higher dimensions like 3D follows the same fashion.

The discs in Fig. 4b and c can disclose which pixels have a high maximum, mean and absolute deviation of the line and alignment integrals, i.e. the pixels on or off the curve structures. For example, in Fig. 4b, discs D4, E8 and H1, which show the line integrals of the corresponding three pixels on the curve in Fig. 4a, indeed possess high maximum, mean or absolute deviation. A similar conclusion can also be seen in Fig. 4c. On the whole, pixels on a curve have an overall higher maximum, mean and/or absolute deviation than those that are off a curve.

For simplicity, let

$$\begin{aligned} \mathcal{W}_1(\textbf{x})&= \mathcal{F}_1[\mathcal{R}](\textbf{x}),&\mathcal{W}_2(\textbf{x})&= \mathcal{L}[\mathcal{G}](\textbf{x}), \nonumber \\ \mathcal{W}_3(\textbf{x})&= \mathcal{M}[\mathcal{R}](\textbf{x}),&\mathcal{W}_4(\textbf{x})&= \mathcal{M}[\mathcal{G}](\textbf{x}), \nonumber \\ \mathcal{W}_5(\textbf{x})&= \gamma [\mathcal{R}](\textbf{x}),&\mathcal{W}_6(\textbf{x})&= \gamma [\mathcal{G}](\textbf{x}). \nonumber \end{aligned}$$

Finally, our proposed 3D orientation field transform \(\mathcal{L}_\textrm{3D}[I](\textbf{x})\) is constructed by leveraging all the measures—the maximum, mean and absolute deviation of the line integral and alignment integral—to detect curves in 3D images, i.e.

$$\begin{aligned} \mathcal{L}_\textrm{3D}[I](\textbf{x}) = f\left( \{\mathcal{W}_i(\textbf{x})\}_{i=1}^6\right) , \end{aligned}$$
(16)

where f is a function with the six measures as inputs. In this work, the forms of f below

$$\begin{aligned} f\left( \{\mathcal{W}_i(\textbf{x})\}_{i=1}^6\right)&= \Pi _{i=1}^6 \mathcal{W}_i(\textbf{x}), \end{aligned}$$
(17)
$$\begin{aligned} f\left( \{\mathcal{W}_i(\textbf{x})\}_{i=1}^6\right)&= \Pi _{i=1, i\ne 4}^6 \mathcal{W}_i(\textbf{x}), \end{aligned}$$
(18)
$$\begin{aligned} f\left( \{\mathcal{W}_i(\textbf{x})\}_{i=1}^6\right)&= \Pi _{i=1, i\ne 4, 5}^6 \mathcal{W}_i(\textbf{x}), \end{aligned}$$
(19)
$$\begin{aligned} f\left( \{\mathcal{W}_i(\textbf{x})\}_{i=1}^6\right)&= \mathcal{W}_1(\textbf{x}) \mathcal{W}_3(\textbf{x}), \end{aligned}$$
(20)

are considered. We leave other choices of f for future investigation.
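The four products of Eqs. (17)–(20) differ only in which measures they keep, so a small dispatch over the index sets suffices. In the sketch below, `W` is assumed to map the index i to the measure image \(\mathcal{W}_i\):

```python
import numpy as np

def combine(W: dict, variant: int = 20) -> np.ndarray:
    """Eq. (16) instantiated with the products of Eqs. (17)-(20);
    `variant` names the equation to apply."""
    keep = {17: (1, 2, 3, 4, 5, 6),
            18: (1, 2, 3, 5, 6),     # Eq. (18): drop W_4
            19: (1, 2, 3, 6),        # Eq. (19): drop W_4 and W_5
            20: (1, 3)}[variant]     # Eq. (20): W_1 * W_3 only
    out = np.ones_like(W[1])
    for i in keep:
        out = out * W[i]
    return out
```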

The 3D orientation field transform is summarised in Algorithm 1. It is worth remarking that the proposed 3D orientation field transform can naturally be extended to any other dimension. It has only one parameter to be set, the size \(\varepsilon \) of the paths for the integrals in Eqs. (7) and (11). Motivated by the estimation in [50, 51], the size \(\varepsilon \) is set to \(\sim \) 1.5 times the thickness of the widest curve to be enhanced in order for all curves to be identified properly as curves rather than surfaces.

Algorithm 1 3D orientation field transform
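To make Algorithm 1 concrete, a self-contained end-to-end sketch is given below. It realises the per-direction integrals as convolutions with small line-shaped kernels, which is our own implementation choice (the paper does not mandate one); the resulting dictionary of measures feeds the combine sketch above, e.g. combine(oft_3d(I, eps, dirs), variant=18).

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(b_hat, eps):
    """Discrete Gaussian-weighted line kernel along b_hat (Eq. (7)); the
    weights are normalised to sum to one as a discrete stand-in for the
    Gaussian normalisation constant."""
    sigma, n = eps / 4.0, int(np.ceil(eps)) | 1       # odd kernel size
    k, c = np.zeros((n, n, n)), n // 2
    for s in np.linspace(-eps / 2.0, eps / 2.0, 2 * n + 1):
        i, j, l = np.round(c + s * np.asarray(b_hat)).astype(int)
        k[i, j, l] += np.exp(-s ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def oft_3d(I, eps, dirs):
    """Algorithm 1 sketch: compute the six measures W_1..W_6 of a 3D image
    I over `dirs`, a (K, 3) array of unit vectors covering half a sphere,
    e.g. from half_sphere_directions above."""
    kernels = [line_kernel(b, eps) for b in dirs]
    R = np.stack([convolve(I, k, mode='nearest') for k in kernels], axis=-1)
    F1 = R.max(axis=-1)                                   # Eq. (8)
    F2 = dirs[R.argmax(axis=-1)]                          # Eq. (9)
    G = np.stack([convolve(F1 * (2.0 * (F2 @ b) ** 2 - 1.0),
                           k, mode='nearest')             # Eq. (11)
                  for b, k in zip(dirs, kernels)], axis=-1)
    return {1: F1, 2: G.max(axis=-1),                     # Eq. (10)
            3: R.mean(axis=-1), 4: G.mean(axis=-1),       # Eqs. (12)/(14)
            5: np.abs(R - R.mean(axis=-1, keepdims=True)).mean(axis=-1),
            6: np.abs(G - G.mean(axis=-1, keepdims=True)).mean(axis=-1)}
```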

Table 1 Data description

Finally, we present the time complexity. The line integral \(\mathcal{R}[I](\textbf{x},\hat{\textbf{b}})\) and alignment integral \(\mathcal{G}[\mathcal{F}](\textbf{x},\hat{\textbf{b}})\) of Eqs. (7) and (11) are the most expensive operations. Consequently, for a 3D image, the time complexity reads \(\mathcal{O}(n_{\hat{\textbf{b}}}\varepsilon ^{3}lwh),\) where \(n_{\hat{\textbf{b}}}\) is the number of orientations defined by \(\hat{\textbf{b}}\), \(\varepsilon \) is the length of the line/alignment filter, and l, w and h are, respectively, the length, width and height of the filtered image. As the operations for different directions \(\hat{\textbf{b}}\) are independent of each other, parallelism can be exploited over \(\hat{\textbf{b}}\) for efficiency.
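As noted above, the per-direction operations are independent, so a minimal parallelisation sketch over \(\hat{\textbf{b}}\) is straightforward; the worker count and process-based pooling are our own choices:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial
import numpy as np
from scipy.ndimage import convolve

def per_direction_responses(image, kernels, workers=8):
    """Evaluate the per-direction convolutions concurrently; the same
    pattern applies to the alignment integrals.  On platforms that spawn
    worker processes, call this from within an
    `if __name__ == "__main__":` guard."""
    conv = partial(convolve, image, mode='nearest')
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.stack(list(pool.map(conv, kernels)), axis=-1)
```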

Fig. 5 Ground truth of the curve-like features in Fig. 2a and b. Rows from top to bottom, respectively, give the original image, the ground truth and the regions where the methods’ accuracy is evaluated

Fig. 6 Slices of ground truths of the 3D curve-like features in Fig. 2d, f and g. Rows 1 and 2 give the original images and the ground truths, respectively

5 Test data

Common electron microscopy protocols use heavy metal compounds, namely osmium tetroxide, uranyl acetate and lead citrate as staining agents that adsorb on macromolecular complexes in the biological sample. As a typical cell is made mostly of light atoms, these heavy metal conjugates are responsible for deflecting the electrons to generate image contrast.

The protocol used to create the images here adopted high-pressure freezing and freeze-substitution instead of conventional chemical fixation for immobilising sub-cellular structures in a soft solid and embedding them in hard resin to prepare cell sections [30]. The advantage of using freeze-substitution is that it prevents the distortion of intra-cellular architecture during the infiltration of chemical cross-linkers and the dehydration for resin embedding that standard chemical fixation protocols involve. Samples processed with cryofixation have a lower signal-to-noise ratio, as chemical fixation at room temperature collapses macromolecules on which the heavy metal stain then concentrates. Furthermore, the cytosol and organelle lumen are washed away during dehydration, leaving empty backgrounds. As a result, sub-cellular structures in the electron micrographs used here are not as distinguishable as those in conventional electron micrographs.

All samples used as test images (Fig. 2) in this paper were imaged using electron tomography, which is a computational tomography version of transmission electron microscopy. Scanning transmission electron microscopy, which raster-scans the sample and thereby improves the image resolution, was used as the imaging sub-process instead of conventional transmission electron microscopy. For the computational tomography, two series of images were taken for each sample by sequentially tilting the sample along two orthogonal directions in steps of 1.5\(^{\circ }\) up to a maximum of \(\pm ~\)60\(^{\circ }\). Then the simultaneous iterative reconstruction technique (SIRT) developed by [24] and adapted in IMOD was used to reconstruct the 3D tomograms from those images.

The samples were tilted only up to \(\pm ~\)60\(^{\circ }\) as, otherwise, the electrons’ paths through the sample would become too long for them to pass through, since electrons are very reactive to any matter. Hence, it is a compromise between the sample thickness and the maximum imaging angle. However, this creates a missing-wedge problem, where reconstructed images are blurred along the z-axis, complicating the curve enhancement and segmentation of any 3D structures. A brief data description summary is given in Table 1, and more details are presented in the Appendix.

6 Experimental results

Table 2 Quantitative results for test images in Fig. 2a (first row), b (second row) and c (third row)
Fig. 7 Maximum, mean and absolute deviation of the line integral \(\mathcal{R}[I]\) and alignment integral \(\mathcal{G}[\mathcal{F}]\) on the test image in Fig. 2a. Rows from top to bottom, respectively, give the maximum, mean and absolute deviation of the line integral (first column) and alignment integral (second column)

To test the performance of the proposed method and make comparisons with existing methods, five real-world images shown in Fig. 2a, b and d–g and one synthetic mesh of a 3D curve among point-like objects shown in Fig. 2c are used in the experiments. In particular, the one in Fig. 2a is an image containing sparse 2D curves; the one in Fig. 2b is an image with densely packed and heterogeneously stacked 2D curves of varying thickness; and the ones in Fig. 2d–g are 3D images of interconnecting 3D curves, which are extremely challenging.

To provide more reliable metrics for method evaluation in terms of segmentation, Dice scores are obtained by \(2|O \cap P|/(|O| + |P|)\), where O is the foreground of the ground truth within the region of interest (Figs. 5 and 6), P is the foreground of the segmentation result in the region of interest and \(|\cdot |\) is the cardinality operation counting the number of foreground pixels. The optimal threshold for each filter within the cropped areas of interest was used to compute the Dice scores, e.g. in Table 2. In particular, for the real-world 2D images of Fig. 2a and b, the ground truth and regions of interest are manually annotated and presented in Fig. 5. The ground truth for Fig. 2d–g is manually annotated and presented in Fig. 6. For the synthetically generated 3D curve of Fig. 2c, the ground truth is the generated curve before the addition of Gaussian noise.
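For reference, a sketch of the Dice computation and the threshold sweep described above follows; the sweep granularity is our own choice:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, roi: np.ndarray) -> float:
    """Dice score 2|O n P| / (|O| + |P|) restricted to a region of
    interest; all arguments are boolean arrays of the same shape."""
    O, P = truth & roi, pred & roi
    return 2.0 * np.count_nonzero(O & P) / (np.count_nonzero(O) + np.count_nonzero(P))

def best_threshold_dice(enhanced, truth, roi, n=101):
    """Sweep hard thresholds over the enhanced image and keep the best
    Dice, mirroring the optimal-threshold protocol described above."""
    ts = np.linspace(enhanced.min(), enhanced.max(), n)
    return max(dice(enhanced >= t, truth, roi) for t in ts)
```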

The proposed 3D orientation field transform is first tested on the 2D images shown in Fig. 2a, b since (i) a 2D image can be regarded as a special case of a 3D image; and (ii) it is easier to demonstrate curve enhancement on 2D images than on 3D ones. After that, the proposed transform is evaluated on the synthetic 3D curve in Fig. 2c and then on the 3D images with 3D curves shown in Fig. 2d–g.

The ultimate goal of this study is to properly enhance curve structures in 3D real-world images with low signal-to-noise ratios. To highlight the efficacy of the proposed algorithm on the described 3D real-world images (i.e. Fig. 2d–g), the performance of the proposed method is compared with that of three other representative methods, qualitatively and quantitatively. The methods compared include the Frangi filter (i.e. a popular multi-scale Hessian-based filter) [21, 34, 41], contrast limited adaptive histogram equalisation (CLAHE) (i.e. a contrast enhancement method) [14, 49, 65] and BM3D (i.e. an algorithm that collates self-similar blocks for denoising) [18, 40]. These methods are employed because they are popular, open source, and have been used directly or indirectly in recent studies that perform vascular enhancement [17, 42, 46]. Their MATLAB implementations [14, 34, 40] are used here.

Fig. 8 Maximum, mean and absolute deviation of the line integral \(\mathcal{R}[I]\) and alignment integral \(\mathcal{G}[\mathcal{F}]\) on the test image in Fig. 2b. Rows from top to bottom, respectively, give the maximum, mean and absolute deviation of the line integral (first column) and alignment integral (second column)

Fig. 9 Performance of the proposed orientation field transform on the test image in Fig. 2a. a, c, e and g results of the proposed orientation field transform using f defined in Eqs. (17), (18), (19) and (20); b, d, f and h the segmentation results obtained by hard thresholding of (a), (c), (e) and (g), respectively

Fig. 10 Performance of the proposed orientation field transform on the test image in Fig. 2b. a, c, e and g results of the proposed orientation field transform using f defined in Eqs. (17), (18), (19) and (20); b, d, f and h the segmentation results obtained by hard thresholding of (a), (c), (e) and (g), respectively

Fig. 11 Maximum, mean and absolute deviation of the line integral \(\mathcal{R}[I]\) and alignment integral \(\mathcal{G}[\mathcal{F}]\) on the synthetic 3D image in Fig. 2c. Columns from left to right, respectively, give the maximum, mean and absolute deviation of the line integral and alignment integral

Fig. 12 Performance of the proposed orientation field transform on the synthetic 3D image in Fig. 2c. a–d represent the segmentations of the results of the proposed field transform using f defined in Eqs. (17), (18), (19) and (20), respectively

6.1 Performance in 2D

The performance of the six measures, i.e. the maximum, mean and absolute deviation of the line integral \(\mathcal{R}[I]\) and alignment integral \(\mathcal{G}[\mathcal{F}]\), is presented in Figs. 7 and 8. The results show that the maximum and mean of the line integral (i.e. Eqs. (8) and (12)) perform similarly, acting as generic low-pass filters with no distinctly selective curve enhancement, see a, b in Figs. 7 and 8. Nevertheless, the maximum and mean of the alignment integral (i.e. Eqs. (10) and (14)) can both enhance curves but perform slightly differently, i.e. the mean of the alignment integral achieves results with higher contrast but much more noise than the maximum, see d–e in Figs. 7 and 8. In particular, the retained curves using the mean of the alignment integral are fragmented, unlike the others, see e in Figs. 7 and 8. The absolute deviation of the line integral and of the alignment integral can mostly enhance the curves and successfully suppress non-curve structures like the light-coloured blobs, even though the results of the absolute deviation of the line integral are less selective compared to the others, see c and f in Figs. 7 and 8.

Fig. 13 Maximum, mean and absolute deviation of the line integral \(\mathcal{R}[I]\) and alignment integral \(\mathcal{G}[\mathcal{F}]\) on the test image in Fig. 2d. Rows from top to bottom, respectively, give the maximum, mean and absolute deviation of the line integral (first column) and alignment integral (second column)

The efficacy of combining the above-mentioned six transform components through the function in Eq. (16), i.e. Algorithm 1 with function f defined in Eqs. (17)–(20), is shown in Figs. 9 and 10. We see that the curves are enhanced successfully with the background information suppressed excellently. Table 2 gives the Dice scores of the subsequent segmentation results corresponding to the maximum, mean and absolute deviation of the line integral \(\mathcal{R}[I]\) and the alignment integral \(\mathcal{G}[\mathcal{F}]\), and the proposed field transform using f defined in Eqs. (17), (18), (19) and (20). It indicates that the proposed field transform can indeed help to achieve satisfactory segmentation results even by the simplest hard thresholding, where the thresholds are selected as a compromise between removing the noise and keeping the curves. Note that the target here is to showcase the power of the proposed orientation field transform for vascular structure enhancement rather than to achieve the best segmentation accuracy. The segmentation accuracy can obviously be improved further by advanced thresholding strategies, like hysteresis thresholding [2] for example.

6.2 Performance in 3D

6.2.1 Synthetic 3D image

In order to display 3D images here, the meshes are computed with an adaptation of the marching cubes algorithm [25] for MATLAB and are displayed with a built-in MATLAB GUI. The extremely dense point-like objects surrounding the 3D curve make the curve enhancement and detection challenging. The performance of the maximum, mean and absolute deviation of the line integral \(\mathcal{R}[I]\) and alignment integral \(\mathcal{G}[\mathcal{F}]\) is presented in Fig. 11. It shows that all the measures are able to enhance the curves well. The segmentation results obtained by hard thresholding of the enhanced curves using the proposed transform with different functions f are shown in Fig. 12. The Dice scores given in Table 2 for the synthetic 3D image, together with our visual validation, indicate that the function in Eq. (18) helps to achieve the best segmentation result.

6.2.2 Real-world 3D image

Fig. 14 Performance of the proposed orientation field transform on the image in Fig. 2d. a The result of the proposed orientation field transform using f defined in (20); b the segmentation result obtained by hard thresholding of (a)

Fig. 15 Performance of the proposed orientation field transform on the 3D image in Fig. 2d. a The result of the proposed orientation field transform using f defined in (20) (morphological closing is used for a better view); b medial axis transform of (a)

Fig. 16 Performance of the proposed orientation field transform on a larger cropped 3D region of the liquid crystal imaged with electron tomography. a Given image; b–d skeletons of the segmentation result following the same method as in Fig. 15 with skeleton denoising, viewed at \(\langle 1\ 0\ 0 \rangle \), \(\langle 1\ 1\ 1 \rangle \) and \(\langle 1\ 1\ 0 \rangle \), respectively; e–g a cubic diamond lattice unit cell rendered with VESTA and viewed at \(\langle 1\ 0\ 0 \rangle \), \(\langle 1\ 1\ 1 \rangle \) and \(\langle 1\ 1\ 0 \rangle \), respectively

Fig. 17 Performance comparison of different methods (i.e. CLAHE, Frangi filter, BM3D and ours) on 3D images. Rows 1 to 3 are the results on the test images in Fig. 2d, f and g, respectively. Columns 1–2, 3–4, 5–6 and 7–8 are the results of ours, BM3D [40], CLAHE [14] and Frangi filter [34], respectively. In particular, the results in the odd columns are the enhanced curves and the results in the even columns are the corresponding segmentation results obtained by hard thresholding

We now test the proposed method on the real-world 3D image in Fig. 2d, e. The curve detection in this image is extremely challenging since the curve information is barely even visually perceptible. As mentioned previously, it is an image of a lyotropic liquid crystal, whose curves converge and diverge in different directions frequently and regularly. A vast number of curves meander along different directions in close proximity and cram next to each other.

The performance of the maximum, mean and absolute deviation of the line integral \(\mathcal{R}[I]\) and alignment integral \(\mathcal{G}[\mathcal{F}]\) is presented in Fig. 13. It shows that the maximum and mean of the line integral perform better than the other measures in enhancing the obscure curves. The close packing of the curves might have negated the need to exclusively remove structures without a clear orientation. Therefore, it is wise to use the function f in Eq. (20) in the proposed transform for this test image. The enhanced curves and the subsequent segmentation result with hard thresholding for the image in Fig. 2d are shown in Fig. 14a, b, which indeed present curve features that are imperceptible in the given image. The 3D view of the segmentation results across the entire volume of the test image of the lyotropic liquid crystal in Fig. 2d is given in Fig. 15. Note that, as the values of the integrals decrease for the pixels at the periphery, each x-y plane across the z-axis is linearly scaled to have the same median value before hard thresholding of the curve-enhanced result. The lyotropic crystal is triply periodic, and thus it is expected to be ‘seen through’ over several layers of periodicity when viewed from several angles, with regular interruptions on the viewing plane. A medial axis transform (skeletonisation) is performed on the segmented image in Fig. 15a with a MATLAB function to better show the segmentation quality, see Fig. 15b.

A larger cropped 3D region of the liquid crystal is shown in Fig. 16a, which is known to take on a diamond cubic symmetry. As a demonstration, along the viewing directions shown in Fig. 16, the lattice viewed at \(\langle 1\ 0\ 0 \rangle \) should appear as tessellating squares (Fig. 16e), that viewed at \(\langle 1\ 1\ 1 \rangle \) should appear as tessellating triangles (Fig. 16f), and that viewed at \(\langle 1\ 1\ 0 \rangle \) should appear as tessellating hexagons (Fig. 16g). The diamond cubic lattice is then compared against the result (Fig. 16b–d) obtained using the same method as in Fig. 15 with a skeleton denoising procedure (see Appendix). The results are in congruence with the diamond cubic lattice structure, showing that the proposed 3D orientation field transform achieves the curve enhancement and segmentation quality we were seeking.

Finally, we compare the performance of the different methods (i.e. CLAHE, Frangi filter, BM3D and ours) on the three real-world 3D images shown in Fig. 2d, f and g. The subjects of Fig. 2f and g are similar to that of Fig. 2d, but they do not have the same level of regularity/periodicity over a wide area and depth as demonstrated in Figs. 15 and 16. This is because they originated from different conditions and thus would not have the same diamond cubic structure as found in Fig. 2d. The enhanced curves and segmentation results of all the methods compared are shown in Fig. 17. Their quantitative results in terms of segmentation accuracy in Dice scores are given in Table 3. The parameters used in the methods compared are fine-tuned to achieve the best results. As demonstrated in Fig. 17 and Table 3, our method performs best among all the methods compared.

Table 3 Quantitative results comparison of different methods (i.e. CLAHE, Frangi filter, BM3D and ours) for 3D test images in Fig. 2d, f and g in terms of the segmentation accuracy in Dice scores

6.3 Further discussions

The advantages of the proposed method lie in its simplicity in both the choice of parameters (i.e. the size \(\varepsilon \) set as 1.5 times the widest curve width) and the algorithmic design, with Eq. (20) being the most consistently performant version of the algorithm among all versions. The proposed method also benefits from the flexibility of extending it with similar modules when experimenting with different types of images, as seen in Eqs. (17)–(20), while remaining performant even on 3D images with very low signal-to-noise ratios on which other vascular enhancement algorithms are not known to be useful.

The proposed algorithm also suffers from several limitations. The first limitation is the lack of a multi-scale feature. As evidenced in Figs. 9, 10 and 14, the proposed algorithm is tolerant of a conceivable range of vessel/curve thickness given any value for the parameter \(\varepsilon \). However, the proposed algorithm has no contingency for treating ranges of curve thickness beyond the permissible limit of the parameter \(\varepsilon \). Future directions of this study can take inspiration from the design of, e.g. the Frangi filter, which can combine the enhanced results of the same image over different curve ranges to achieve a better-filtered image. This would also remove the need for the parameter \(\varepsilon \) altogether. A search into the tolerance of \(\varepsilon \) to different thicknesses of curves is also needed. The second limitation, as discovered in the processing of Fig. 2b, is the inability of the algorithm to differentiate closely packed curves. Although this is less likely a problem in 3D images owing to the extra space from the additional dimension, as evidenced in Fig. 2b, the algorithm would benefit from improvements in this direction. A more analytical study of the orientation of the neighbourhood of closely packed lines might provide some new insights into this problem.
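As a hypothetical illustration of the multi-scale direction discussed above (not part of the published method), a Frangi-style wrapper could run the transform at several \(\varepsilon \) values and keep the per-voxel maximum of the normalised responses:

```python
import numpy as np

def multiscale_oft(I, eps_values, oft_fn):
    """Hypothetical multi-scale wrapper: `oft_fn(I, eps)` is assumed to
    return the single-scale enhanced image (e.g. the Eq. (20)
    combination); responses are min-max normalised so that different
    scales are comparable before taking the voxel-wise maximum."""
    responses = []
    for eps in eps_values:
        r = oft_fn(I, eps)
        responses.append((r - r.min()) / (r.max() - r.min() + 1e-12))
    return np.maximum.reduce(responses)
```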

The proposed algorithm’s synergy with others is also of interest. The proposed method was compared with other representative methods, but, theoretically, the methods compared would not hamper the performance of the orientation field transform if they served as preliminary filters. In recent studies, algorithms for vascular enhancement, segmentation and detection are inclined to be mixed and matched for better results [42, 46, 64]. With the current interest in automatic detection with machine learning, it is also worth studying how to incorporate the orientation field transform as part of the preprocessing pipeline to improve, e.g. segmentation and detection results.

Before closing this section, it is worth briefly discussing the running time and parameter selection of the proposed method, see Table 4. Note that the time complexity is \(\mathcal{O}(n_{\hat{\textbf{b}}}\varepsilon ^{2}lw)\) and \(\mathcal{O}(n_{\hat{\textbf{b}}}\varepsilon ^{3}lwh)\) for 2D and 3D images, respectively. For the 2D case, the number of directions \(n_{\hat{\textbf{b}}}\) is proportional to half of the circumference of the circle with \(\varepsilon \) as the diameter, i.e. \(\pi \varepsilon /2\). For the 3D case, it is proportional to half of the surface area of the equivalent sphere, i.e. \(\pi \varepsilon ^{2}/2\). When images are filtered with a greater \(\varepsilon \) value, the number of directions could potentially be reduced, as if filtering a downsized image, decreasing the running time severalfold.
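These direction counts translate into a quick cost estimate; a sketch of the arithmetic above, with rounding that is our own choice:

```python
import numpy as np

def n_directions(eps: float, ndim: int) -> int:
    """Direction counts from the discussion above: ~pi*eps/2 in 2D (half
    circumference) and ~pi*eps**2/2 in 3D (half sphere surface) for a
    filter of length eps."""
    return int(np.ceil(np.pi * eps / 2.0 if ndim == 2 else np.pi * eps ** 2 / 2.0))

# e.g. eps = 20 voxels gives about 32 directions in 2D and 629 in 3D
```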

Table 4 Running time and parameters of the proposed orientation field transform on each test image

7 Conclusion

A 3D orientation field transform was proposed and tested for curve enhancement, with segmentation as a by-product. Thorough experiments and comparisons on synthetic and real-world data (e.g. a liquid crystal) demonstrated that the proposed 3D orientation field transform does enhance curves selectively and effectively, even in images having very low signal-to-noise ratios on which pre-existing image enhancement algorithms are not known to be useful. Its modular design also makes it amenable to experimentation with other types of images to achieve the best results. Although this is a top-down processing transform, it involves only a few computational steps, and hence would serve as an ideal candidate for a preliminary filter. In consequence, the combination of the maximum, mean and absolute deviation of line integrals and alignment integrals was found to be an effective 3D orientation field transform for extremely challenging synthetic and real-world images. Furthermore, the proposed method can naturally tackle any number of dimensions.

Critical future work includes investigating the impact of the single parameter \(\varepsilon \) on the performance of the proposed method, incorporating multi-scale features into the algorithm and studying the delineation of closely packed curves. The pursuit of synergies between the 3D orientation field transform and other algorithms (e.g. for vascular enhancement, segmentation and detection) is of great interest. Moreover, it is also worth investigating the use of the 3D orientation field transform in improving machine learning pipelines and in more applications.