1 Introduction

Glaucoma is the second leading cause of blindness in the world (after cataracts) and the leading cause of irreversible blindness [26]. It is estimated that glaucoma will affect over 111.8 million people by 2040 [40]. As a chronic disease, glaucoma alters the physiological structure of patients’ eyes, causing thinning of the ganglion cell–inner plexiform layer (GCIPL), an increase in the cup-to-disc ratio, and narrowing of the optic disc rim [15]. Glaucoma usually shows no evident symptoms in its early stage, so many patients are diagnosed only at a late stage, when the damage to visual function is irreversible. Therefore, early screening is essential for treating glaucoma and preventing the loss of vision.

Currently, colour fundus imaging and optical coherence tomography (OCT) are the most widely used imaging techniques in the early screening of glaucoma. Compared with OCT, colour fundus imaging is less expensive and more frequently used for detecting glaucoma. The optic cup (OC) to optic disc (OD) ratio (CDR) measured on fundus images is an important indicator in the screening and diagnosis of glaucoma [9]. As shown in Fig. 1, the CDR of healthy eyes is generally between 0.3 and 0.4; a CDR of 0.65 or above is clinically considered indicative of glaucoma. Manually delineating the OD and OC is time-consuming: a professional ophthalmologist needs about 8 minutes on average to completely segment the OD and OC in a fundus image [21]. Hence, developing automatic algorithms to segment the OD and OC from fundus images is significant for lightening the burden on ophthalmologists and promoting large-scale glaucoma screening.

Most early OD and OC segmentation methods are based on hand-crafted features (e.g. colour, gradient and texture features), including adaptive threshold-based methods [2, 27], region-growing methods [28] and wavelet-transform-based segmentation methods [6]. However, these hand-crafted features are easily affected by variations in the physiological structure of the fundus image.

Fig. 1

Structure of the optic disc and optic cup in a fundus image. The region denoted with a blue circle is the optic disc (OD); the region denoted with a yellow circle is the optic cup (OC). The vertical cup-to-disc ratio (CDR) is calculated by the ratio of vertical cup diameter (VCD) to vertical disc diameter (VDD) (color figure online)

In recent years, deep learning has achieved excellent performance in tasks such as image classification [16], object detection [30], and image segmentation [24], and a large number of deep-learning-based OD and OC segmentation methods have been proposed [12, 34, 36]. Nevertheless, because the boundaries of the OD and OC in fundus images are uncertain, accurately segmenting them remains a challenging task. Most existing methods either divide OD and OC segmentation into two stages or segment only the OD, overlooking the inner connection between the OD and OC. Moreover, most methods process the image at a single scale, which cannot fully capture the detailed features of the OD and OC, especially edge information.

In this paper, we propose a convolutional neural network, named ResFPN-Net, for joint OD and OC segmentation. The main contributions of our work can be summarized as follows:

1. A segmentation network for joint OD and OC segmentation: through multi-scale loss supervision, the network can accurately segment the OD and OC from fundus images by fully taking advantage of the internal relationship between the OD and OC.

2. A multi-scale feature extractor: it takes images of different scales as input and merges information from various feature maps, which can adequately express the feature information of the fundus image and preserve the edge features.

3. An attention pyramid structure: this structure combines an attention mechanism with a feature pyramid architecture to enhance the representation of the OD and OC in the fundus image, which improves the segmentation performance of the network.

2 Related works

In the early stage, most research on OD and OC segmentation was based on hand-crafted features, mainly including colour, texture, contrast, and gradient information. Abdel-Ghafar et al. [1] proposed a threshold-based segmentation method for the OD: the Sobel operator is used to enhance the fundus image, the image is then processed with a local threshold, and the Hough transform is applied to obtain the OD region. Osareh et al. [29] proposed an OD localization method based on colour channels. Juneja et al. [39] applied the fuzzy C-means clustering method to segment the OD and OC, with the Canny operator employed for post-processing. In OD and OC segmentation, edge detection operators such as the Sobel and Canny operators can improve segmentation accuracy. Different from edge detection operators, pixel-classification-based methods transform the edge detection problem into a pixel segmentation problem and achieve satisfactory results. Cheng et al. [8] proposed a superpixel classification method to segment the OD and OC, applying histograms and centre-surround statistics to classify each superpixel as disc or non-disc. In [42], a deformation-based method is proposed to locate the OD and OC. In addition, template-based methods [20] and reconstruction-based learning methods [41] are also widely used in OD and OC segmentation. However, all these methods rely heavily on hand-crafted features, which largely limits their performance.

Recently, deep learning has made great achievements in natural and medical image segmentation, e.g. Mask R-CNN [13] and U-Net [31], and many deep-learning-based OD and OC segmentation methods have emerged. In [34], a modified U-Net architecture is proposed to segment the OD and OC, achieving a lower prediction time than traditional convolutional networks. In [18], an end-to-end convolutional neural network named JointRCNN is proposed to segment the OD and OC, applying atrous convolution to boost segmentation performance. However, these methods treat OD and OC segmentation as separate tasks. Gu et al. [12] proposed CE-Net to capture higher-level information and retain spatial information for OD segmentation. Motivated by the conventional U-Net architecture, Baid et al. [5] proposed a ResUnet architecture to segment the OD. Al-Bander et al. [33] used VGG as the backbone together with transfer learning for OD segmentation. However, these methods segment only the optic disc region and therefore ignore the intimate relationship between the OD and OC. Subsequently, Stack-U-Net [35] was proposed, which takes U-Net as the backbone and trains the network with the idea of iterative refinement. In [43], a modified U-Net architecture with ResNet-34 as the encoder was proposed for OD and OC segmentation. Al-Bander et al. [3] proposed a segmentation network that incorporates DenseNet into a fully convolutional network. Fu et al. [10] used a polar transformation centred on the OD to flatten the image and applied interpolation to enlarge the cup region. However, the polar coordinate transformation causes the OD edges to be unsmooth.

3 Methodology

Inspired by RetinaNet [23], we propose ResFPN-Net, as shown in Fig. 2. The framework has four components: a multi-scale feature extractor, a multi-scale segmentation transition, an attention pyramid architecture, and multi-scale loss supervision. The multi-scale feature extractor receives fundus images at multiple scales as input. The multi-scale segmentation transition fuses multi-level feature maps and preserves feature maps of different scales. The feature maps are then passed to the attention pyramid structure to capture the inner connection between the OD and OC, and finally the OD and OC segmentation result is obtained. The entire network is trained with multi-scale loss supervision. The following sub-sections introduce the details of this architecture.
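To make the overall data flow concrete, the following sketch wires the four components together. It is a structural sketch only: the component modules passed to the constructor are hypothetical placeholders standing in for the implementations described in Sects. 3.1–3.4, not the exact code of ResFPN-Net.

```python
import torch.nn as nn

class ResFPNNetSketch(nn.Module):
    """Structural sketch: extractor, transition, pyramid and heads are assumed
    to be modules implementing Sects. 3.1-3.4 (placeholders, not the paper's code)."""
    def __init__(self, extractor, transition, pyramid, heads):
        super().__init__()
        self.extractor = extractor          # multi-scale feature extractor (Sect. 3.1)
        self.transition = transition        # multi-scale segmentation transition (Sect. 3.2)
        self.pyramid = pyramid              # attention pyramid architecture (Sect. 3.3)
        self.heads = nn.ModuleList(heads)   # per-scale sub-output heads (Sect. 3.4)

    def forward(self, images):              # images: the 512/256/128/64 resized inputs
        s = self.extractor(*images)                          # s2..s5
        c = self.transition(s)                               # c2..c5
        p = self.pyramid(*c)                                 # attention-refined pyramid levels
        return [head(x) for head, x in zip(self.heads, p)]   # sub-outputs O1..O4
```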

Fig. 2

Overview of our proposed ResFPN-Net. The input to the network consists of multi-scale fundus images. First, the multi-scale fundus images generate the intermediate features \(c_{2}\), \(c_{3}\), \(c_{4}\), \(c_{5}\). These intermediate features are then fed into the attention pyramid structure to fuse the different features. Finally, the OD and OC segmentation result is obtained by training the network

3.1 Multi-scale extractor

Extracting the edge information of the OD and OC in fundus images can improve segmentation accuracy. However, the boundaries of the OD and OC are usually unclear, so it is difficult to retain such details at a single scale. In general, convolutions with a large receptive field suit large objects, while convolutions with a small receptive field capture detailed information. Therefore, we take multi-scale fundus images as input to construct various receptive fields and fully learn the edge features. As shown in Fig. 2, we modify ResNet [14], an efficient residual network for image classification, as our feature extractor. Specifically, all fundus images are resized to \(512 \times 512\), \(256 \times 256\), \(128 \times 128\), and \(64 \times 64\) pixels. A convolution with a \(7 \times 7\) kernel is first applied to the \(512 \times 512\) image, followed by batch normalization (BN) and a ReLU activation, to derive the feature map denoted as \(s_{2}\). We then construct separate convolution layers with \(3 \times 3\) kernels and 64, 128, and 256 channels, respectively, to receive the other multi-scale fundus images, each followed by a rectified linear unit (ReLU). The feature maps derived from the fundus images at the other three scales are denoted as \(s_{3}\), \(s_{4}\), \(s_{5}\).
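As a concrete illustration, the following PyTorch sketch implements one plausible version of this multi-scale input stem. The stride of the \(7 \times 7\) convolution and the resulting spatial sizes of \(s_{2}\)–\(s_{5}\) are assumptions (borrowed from the standard ResNet stem) rather than details specified in the text.

```python
import torch.nn as nn

class MultiScaleStem(nn.Module):
    """Sketch of the multi-scale input stem in Sect. 3.1: a 7x7 conv + BN + ReLU
    for the 512x512 image (s2) and 3x3 convs with 64/128/256 channels for the
    256/128/64 images (s3, s4, s5). Strides and output sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.stem512 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        self.stem256 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.stem128 = nn.Sequential(nn.Conv2d(3, 128, 3, padding=1), nn.ReLU(inplace=True))
        self.stem64 = nn.Sequential(nn.Conv2d(3, 256, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x512, x256, x128, x64):
        s2 = self.stem512(x512)   # 64 channels, 256x256
        s3 = self.stem256(x256)   # 64 channels, 256x256
        s4 = self.stem128(x128)   # 128 channels, 128x128
        s5 = self.stem64(x64)     # 256 channels, 64x64
        return s2, s3, s4, s5
```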

Fig. 3

Illustration of the fusion attention module

3.2 Multi-scale segmentation transition

The encoder–decoder structure is widely employed in image segmentation frameworks, and our segmentation architecture is also based on it. In an encoder–decoder structure, the encoder compresses and encodes the feature information of the image, while the decoder restores the encoded information. However, some encoder–decoder segmentation methods [4, 45] do not fully preserve multi-scale feature information. In our segmentation task, the multi-scale input is integrated into the decoder layers to broaden the network width along the decoder path.

To transfer the detailed features and multi-scale information to the decoder, we generate a set of feature maps from the different multi-scale feature maps as information transitions between the encoder and decoder. Specifically, the feature map \(s_{2}\) is fed to a residual block, which consists of a set of convolution and downsampling operations; the resulting feature map is denoted as \(c_{2}\). However, there are significant gaps between the features extracted from fundus images of different sizes, and directly merging these features can weaken the representation of the multi-scale image. We therefore propose a fusion attention module to alleviate the gaps among these feature maps, as shown in Fig. 3. First, we merge two feature maps by channel-wise concatenation followed by a convolution layer and BN. This procedure is formulated as follows.

$$\begin{aligned} V = Conv(concat(c_{i-1},s_{i})), (2 <i \leqslant 5) \end{aligned}.$$
(1)

Then, we collect global contextual information by global average pooling, and apply a \(1 \times 1\) convolution followed by a Softmax activation to derive an attention matrix based on this global context. The attention matrix is multiplied with V to obtain the fused feature map, which is finally forwarded to the corresponding residual block. Following the above procedure, the multi-level features built by the fusion attention modules and residual blocks are denoted as {\(c_{2}\), \(c_{3}\), \(c_{4}\), \(c_{5}\)}, whose corresponding channel numbers are {256, 512, 1024, 2048}, as shown in Fig. 2.
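A minimal sketch of this fusion attention module is given below, assuming the two inputs already share the same spatial resolution; the channel sizes and kernel size of the fusion convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionAttention(nn.Module):
    """Sketch of the fusion attention module (Fig. 3): concatenate the previous
    transition feature c_{i-1} with the stem feature s_i (Eq. (1)), then reweight
    the fused map with a softmax attention vector from global average pooling.
    Assumes c_prev and s_i have matching spatial size."""
    def __init__(self, in_ch_c, in_ch_s, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch_c + in_ch_s, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch))
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),               # global context
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.Softmax(dim=1))                     # channel-wise attention weights

    def forward(self, c_prev, s_i):
        v = self.fuse(torch.cat([c_prev, s_i], dim=1))   # V = Conv(concat(c_{i-1}, s_i))
        return v * self.attn(v)                          # attention-weighted fused feature
```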

3.3 Attention pyramid architecture

We collect four feature maps of different scales through the multi-scale segmentation transition: {\(c_{2}\), \(c_{3}\), \(c_{4}\), \(c_{5}\)}. We then utilize the Feature Pyramid Network (FPN) [22] to explore features at different scales. FPN was originally employed in object detection to handle objects at multiple scales; it adds feature maps together through a top-down pathway and lateral connections to aggregate multi-scale features. However, there are significant differences among these four feature maps: the deeper feature maps are spatially coarser but carry more semantic information, whereas the lower-level feature maps contain rich location information but fewer semantic features. We believe that this simple addition weakens the expression of some features and cannot fully learn the close relation between the OD and OC. Moreover, fundus vessels in the OD and OC region make it difficult to segment the OD and OC accurately.

In this paper, we propose an attention pyramid mechanism that fuses multi-scale features to solve the above problems. In this architecture, an attention module integrates the high-level and low-level feature maps, bridging the gap between the deeper and lower feature maps. In addition, each region of the input image is given a different weight, which extracts more critical information and helps the model distinguish the target region from the background. Specifically, the feature maps obtained from the multi-scale transition, {\(c_{2}\), \(c_{3}\), \(c_{4}\), \(c_{5}\)}, are fed to the corresponding convolution layers of the pyramid network. Subsequently, the attention module fuses high-level features with low-level features, as shown in Fig. 4.

Fig. 4

Structure of the attention pyramid architecture. The structure consists of a feature pyramid and an attention module. The feature pyramid retains multi-scale feature information. The attention module fuses different feature maps and highlights the feature representations

Fig. 5

Illustration of the attention module. The attention module first applies bilinear interpolation to the feature map \(p_{j}\) and adds it to the feature map \(p_{i}\) to generate the intermediate feature f. Then, convolution, average pooling, and max pooling followed by nonlinear operations are applied to produce the final feature map O

Our attention module is based on CBAM [32] and is shown in Fig. 5, where \(p_{i}\) and \(p_{j}\) represent feature maps from different convolution layers. We first upsample \(p_{j}\) with bilinear interpolation and add it to \(p_{i}\) to produce the intermediate feature map f. Then, adaptive average pooling and adaptive max pooling, each followed by a \(3 \times 3\) convolution layer with ReLU and Sigmoid activations, are applied to generate two new feature maps \(S\in R^{C\times H\times W}\) and \(L\in R^{C\times H\times W}\), where C is the number of channels and H and W are the height and width of the feature map. Finally, these two feature maps are added together to obtain the final feature map \(O\in R^{C\times H\times W}\).
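The following is a hedged, CBAM-style sketch of such a module. Sharing one convolutional branch between the two pooling paths and gating f with the sum of the branch outputs are assumptions made for illustration; the exact recombination in the paper may differ.

```python
import torch.nn as nn
import torch.nn.functional as F

class PyramidAttention(nn.Module):
    """CBAM-style sketch of the attention module in Fig. 5: p_j is upsampled
    bilinearly, added to p_i, and the sum is refined through average-pooling
    and max-pooling branches (conv + ReLU + conv + Sigmoid)."""
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid())

    def forward(self, p_i, p_j):
        # intermediate feature f = p_i + upsample(p_j)
        f = p_i + F.interpolate(p_j, size=p_i.shape[-2:],
                                mode='bilinear', align_corners=False)
        s = self.branch(F.adaptive_avg_pool2d(f, 1))   # avg-pooling branch gate (C x 1 x 1)
        l = self.branch(F.adaptive_max_pool2d(f, 1))   # max-pooling branch gate (C x 1 x 1)
        return f * (s + l)                             # assumed gating of f by the summed gates
```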

3.4 Loss function

The OD and OC segmentation is formulated as a multi-label problem in our task. In the original fundus image, the background region occupies a much larger proportion than the OD and OC, and this class imbalance affects the performance of the network during training. Therefore, we use focal loss [23] as the loss function for multi-class segmentation, which balances the target and background regions by weighting the loss of each sample.

To train the network adequately, we introduce sub-output layers to construct a multi-scale loss, which helps prevent gradients from vanishing during training. For each sub-output, the segmentation loss between the prediction and the ground-truth mask is formulated as Eq. (2).

$$\begin{aligned} L_{sub}(P_{t})=-\alpha (1-P_{t})^\gamma \log (P_{t}), \end{aligned}$$
(2)

where \(P_{t}\) is the predicted probability of the true class, \(\alpha\) is a balancing variable for the numbers of positive and negative samples, and \(\gamma\) is a hyperparameter that focuses the model on hard-to-classify samples during training.

Besides, we integrate the sub-outputs to calculate a fusion loss (\(L_{fusion}\)). There are four sub-outputs in our task, denoted as \(O_{1}\), \(O_{2}\), \(O_{3}\), \(O_{4}\), and their fusion O is formulated as:

$$\begin{aligned} O = O_{1}+O_{2}+O_{3}+O_{4} \end{aligned}.$$
(3)

\(L_{fusion}\) is defined as follows:

$$\begin{aligned} L_{fusion}(O)=-\beta (1-O)^\gamma \log (O) \end{aligned}.$$
(4)

Finally, the multi-scale loss function of the segmentation network is formulated as:

$$\begin{aligned} L=\sum _{i=1}^NL_{sub}^{(i)}(O_{i})+L_{fusion}(O) \end{aligned},$$
(5)

where N represents the number of sub-outputs.
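A compact sketch of this multi-scale focal loss is shown below, assuming the sub-outputs are logits that have been upsampled to a common resolution with the same class layout; the fused prediction is taken as the element-wise sum of the sub-outputs as in Eq. (3).

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, alpha=0.25, gamma=3.0):
    """Multi-class focal loss of Eq. (2): -alpha * (1 - p_t)^gamma * log(p_t).
    logits: (B, C, H, W); target: (B, H, W) class indices."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)   # log p_t per pixel
    pt = log_pt.exp()
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()

def multi_scale_loss(sub_outputs, target, alpha=0.25, beta=0.25, gamma=3.0):
    """Sketch of Eq. (5): per-sub-output focal losses plus the focal loss on the
    fused prediction O = O1 + O2 + O3 + O4 (Eqs. (3)-(4)). Assumes all
    sub-outputs share the same shape."""
    loss = sum(focal_loss(o, target, alpha, gamma) for o in sub_outputs)
    fused = torch.stack(sub_outputs, dim=0).sum(dim=0)
    return loss + focal_loss(fused, target, beta, gamma)
```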

4 Experiments and results

4.1 Datasets and evaluation method

Experiments are conducted on two public datasets. The first is the Drishti-GS dataset [37], collected by Aravind Eye Hospital, Madurai, India. It contains 101 colour fundus images divided into a training set and a testing set: the training set contains 50 images with ground truth for OD and OC segmentation, and the remaining 51 images are used for testing.

The second database is RIM-ONE [11]. It contains 159 fundus images, including 85 images from healthy eyes and 74 images from eyes with glaucoma at different stages. The RIM-ONE database provides pixel-level OD and OC segmentations labelled by two ophthalmologists as the ground truth.

Three evaluation metrics are adopted to evaluate our proposed algorithm: Dice coefficient (DC), accuracy (acc) and Hausdorff distance (HD).

$$\begin{aligned}&DC = \frac{2\times {TP}}{2\times {TP}+FP+FN} \end{aligned}$$
(6)
$$\begin{aligned}&acc = \frac{TP+TN}{TP+FN+TN+FP} \end{aligned}$$
(7)
$$\begin{aligned}&HD(A,B) = max(h(A,B),h(B,A)) \end{aligned}$$
(8)
$$\begin{aligned}&h(A,B)=\mathop {max}\limits _{a\in A }\mathop {min}\limits _{b\in B}||a-b|| \end{aligned}$$
(9)

where TP, FP, TN and FN represent the numbers of true positives, false positives, true negatives and false negatives, respectively; A and B denote the prediction result and the ground truth, and a and b represent pixels belonging to A and B, respectively.
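For reference, a straightforward NumPy/SciPy implementation of these three metrics for a single binary mask (e.g. the OD channel) could look like the following; note that the HD values reported in Table 1 may additionally be averaged or normalized, which this sketch does not do.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_acc_hd(pred, gt):
    """Eqs. (6)-(9) for binary masks. pred, gt: 2-D boolean arrays of equal shape."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)                 # Eq. (6)
    acc = (tp + tn) / (tp + tn + fp + fn)              # Eq. (7)
    a = np.argwhere(pred)                              # pixel coordinates of the prediction
    b = np.argwhere(gt)                                # pixel coordinates of the ground truth
    hd = max(directed_hausdorff(a, b)[0],              # Eqs. (8)-(9)
             directed_hausdorff(b, a)[0])
    return dice, acc, hd
```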

4.2 Implementation details

The network was implemented in PyTorch, and the Adam optimization algorithm [19] was used for training. The network was trained on an NVIDIA GeForce 3090 Super GPU with 24 GB of graphics memory. Our multi-scale extractor is initialized with parameters pre-trained on ImageNet. During training, we set the initial learning rate to 0.0001 and used cosine decay to adjust it. In our implementation, we set \(\alpha\) and \(\beta\) to 0.25 and \(\gamma\) to 3. We set the mini-batch size to 8 for all training runs and trained the network for 300 epochs.

To improve model performance, all images were cropped to \(800 \times 800\) pixels centred on the OD. We augmented the training set with various transformations, including rotations by 90, 180, and 270 degrees.
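A minimal training-loop sketch reflecting these settings (Adam, learning rate 1e-4, cosine decay, 300 epochs) is shown below; `model`, `train_loader` and `criterion` are placeholders for the network, the augmented data loader (batch size 8), and the multi-scale loss described in Sect. 3.4.

```python
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

def train(model, train_loader, criterion, epochs=300, device='cuda'):
    """Training loop sketch matching Sect. 4.2: Adam (lr 1e-4), cosine decay,
    mini-batches of 8 (set in the DataLoader), 300 epochs."""
    optimizer = Adam(model.parameters(), lr=1e-4)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)
    model.to(device).train()
    for _ in range(epochs):
        # batches of 800x800 OD-centred crops, rotated by 90/180/270 deg for augmentation
        for images, masks in train_loader:
            images, masks = images.to(device), masks.to(device)
            loss = criterion(model(images), masks)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
```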

Fig. 6

Comparison of the variations of different losses during training

4.3 Comparison of loss functions

Different loss functions are compared on the Drishti-GS dataset: cross-entropy loss, \(\mathrm {Lovasz\_Softmax}\) loss, and Dice loss were each used to train our network, with an initial learning rate of 0.0001. As displayed in Fig. 6, when the network is trained with the multi-scale loss, the model converges to a loss of about 0.008 after around 300 epochs. When trained with Dice loss, the loss converges to about 0.06. However, the convergence of the \(\mathrm {Lovasz\_Softmax}\) and cross-entropy losses is not satisfactory: they only reach about 0.21 after 300 epochs. The proposed multi-scale loss is therefore more suitable for training the OD and OC segmentation network.

4.4 Segmentation results

Extensive experiments were conducted on the two public databases. As shown in Table 1, our proposed method achieved 97.59%, 99.21% and 0.099 in Dice, acc and HD for OD segmentation, and 89.87%, 98.77% and 0.882 for OC segmentation on the Drishti-GS database. On the RIM-ONE database, it achieved 96.41%, 99.30% and 0.166 in terms of \(Dice_{OD}\), \(acc_{OD}\) and \(Avg.\ HD_{OD}\), respectively, and 83.91%, 99.24% and 1.210 in Dice, acc and \(Avg.\ HD\) for OC segmentation.

Table 1 Optic disc and cup segmentation performance on Drishti-GS and RIM-ONE datasets compared with other methods

Based on the OD and OC segmentation results, the corresponding CDR values can be calculated, which can assist ophthalmologists in diagnosing glaucoma. We use the mean absolute error (MAE) to evaluate the accuracy of CDR estimation, i.e. the average absolute error over all samples:

$$\begin{aligned} MAE=\frac{1}{N}\sum _{i=1}^N|CDR_{i}^S-CDR_{i}^G| \end{aligned},$$
(10)

where N represents the number of test samples, and \(CDR^{G}\) and \(CDR^{S}\) represent the ground-truth CDR provided by trained clinicians and the CDR calculated from the OD and OC segmentation results, respectively. Our proposed method achieves an MAE of 0.0499 on the Drishti-GS dataset and 0.0630 on the RIM-ONE dataset.
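For illustration, the vertical CDR and its MAE can be estimated from binary OD/OC masks as in the following sketch, which measures the vertical diameters from the mask extents (see Fig. 1); the exact measurement procedure used in the paper may differ.

```python
import numpy as np

def vertical_cdr(od_mask, oc_mask):
    """Vertical cup-to-disc ratio from binary OD/OC masks: ratio of the vertical
    cup diameter (VCD) to the vertical disc diameter (VDD), both taken as the
    row extent of the corresponding mask."""
    vdd = np.ptp(np.argwhere(od_mask)[:, 0]) + 1   # vertical disc diameter in pixels
    vcd = np.ptp(np.argwhere(oc_mask)[:, 0]) + 1   # vertical cup diameter in pixels
    return vcd / vdd

def cdr_mae(cdr_pred, cdr_gt):
    """MAE of Eq. (10) averaged over all test samples."""
    return float(np.mean(np.abs(np.asarray(cdr_pred) - np.asarray(cdr_gt))))
```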

4.5 Accuracy analysis results

The performance comparison with state-of-the-art approaches on the two public databases is shown in Table 1. The results show that our method achieves higher segmentation performance than the state-of-the-art methods. On the Drishti-GS dataset, compared with CCNet [17], our approach improves Dice and acc by 0.48% and 0.13% for OD segmentation, and by 1.73% and 0.18% for OC segmentation. On the RIM-ONE database, compared with CCNet, the Dice for OD segmentation increases from 93.88% to 96.41% (by 2.53%) and the acc increases from 99.03% to 99.30%; for OC segmentation, the Dice increases by 2.82% and the acc by 0.18%. In terms of the HD metric, a considerable improvement is also achieved for OD and OC segmentation on both the Drishti-GS and RIM-ONE datasets. Overall, the proposed method shows superiority in all three metrics, as shown in Table 1.

Table 2 Cross-dataset performance on Drishti-GS and RIM-ONE datasets compared with other methods

To compare the adaptability of the model across databases, we provide a comprehensive cross-dataset performance analysis. We first trained the model on the Drishti-GS training set and evaluated it directly on the RIM-ONE testing set; likewise, we trained on the RIM-ONE training set and tested on the Drishti-GS dataset. Since the first two methods in Table 2 do not report cross-dataset performance and do not disclose their specific implementations, their cross-dataset performance cannot be obtained. As shown in Table 2, the proposed method remarkably outperforms the U-Net, M-Net [10], AGNet [44], and CCNet models, indicating solid generalization ability. On the RIM-ONE database, compared with AGNet, the proposed method achieves improvements of 7.27% and 1.95% in Dice and acc for OD segmentation, and of 22.23% and 2.86% for OC segmentation. On the Drishti-GS database, compared with CCNet, the Dice increases by 6.82% and the acc by 3.04% for OD segmentation; for OC segmentation, the Dice increases by 0.99% and the acc by 2.83%. This improvement can also be observed for the HD metric, which demonstrates the adaptability advantage of the proposed method over the other approaches.

Fig. 7

Confusion matrix on the Drishti-GS database and RIM-ONE, respectively. (A1, B1) The M-Net results. (A2, B2) The AGNet results. (A3, B3) The CCNet results. (A4, B4) The proposed method result

Fig. 8

Examples of visual segmentation results, where the yellow region denotes the OD segmentation result and the red region denotes the OC segmentation result. (A1, A2, A3) Fundus images. (B1, B2, B3) Ground truth. (C1, C2, C3) The M-Net results. (D1, D2, D3) The AGNet results. (E1, E2, E3) The CCNet results. (F1, F2, F3) The proposed method results (the different coloured boxes represent different regions in the fundus images)

Fig. 9

Scatter plot of the CDR measurement on Drishti-GS and RIM-ONE datasets, respectively. (A1, B1) The M-Net results. (A2, B2) The AGNet results. (A3, B3) The CCNet results. (A4, B4) The proposed method result

The confusion matrices of the segmentation results achieved by the competing methods and our proposed method are shown in Fig. 7. Compared with the other methods, our method better distinguishes the target region from the background and does not misclassify the OC region as background. Moreover, the number of misclassified pixels in the OD and OC regions is lower than that of the other methods.

4.6 Visual analysis results

Figure 8 shows some typical OD and OC segmentation results, visually comparing the proposed method with competing methods, including M-Net, AGNet and CCNet. The comparison shows that our method generates accurate segmentation results and outperforms the other approaches. We constructed a multi-scale feature extractor to capture the edge information of the OD and OC; compared with previous methods (such as M-Net and CCNet), our method depicts the OD and OC edges more accurately. Meanwhile, our method uses the attention pyramid architecture to correlate the OD and OC segmentation tasks, implicitly learning the relationship between them. As can be seen from Fig. 8, the proposed method also locates the OD and OC more accurately than the other approaches.

Fig. 10

ROC curves with AUC scores for glaucoma screening based on CDR on Drishti-GS and RIM-ONE datasets

We also conducted experiments on CDR calculation. The scatter plots of CDR values calculated from the OC and OD segmentation results of our proposed method and the competing methods are visualized in Fig. 9. It can be observed that the CDR calculated by the proposed method has the highest correlation with the ground truth. On the Drishti-GS database, M-Net achieved an MAE of 0.1003 and AGNet an MAE of 0.0816, while the proposed method achieved an MAE of 0.0499, a reduction of 0.0111 from the 0.0610 of CCNet. On the RIM-ONE dataset, M-Net achieved an MAE of 0.0995 and AGNet an MAE of 0.0813, while the proposed method achieved an MAE of 0.0630, a reduction of 0.0133 from the 0.0763 of CCNet. Compared with the other methods, the proposed method achieves the highest accuracy in CDR calculation.

4.7 Glaucoma screening

In this section, we evaluate the proposed method on glaucoma screening using the CDR values calculated on the Drishti-GS and RIM-ONE datasets. We report the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) as metrics of diagnostic accuracy, as shown in Fig. 10. From the ROC curves and AUC scores, it can be seen that the proposed method achieves the best performance on both public datasets. Compared with CCNet, the AUC score increases from 0.8725 to 0.8947 on the Drishti-GS dataset; on the RIM-ONE dataset, compared with the second-best method, M-Net, the AUC score increases by 1.7%. Compared with the other methods, our method diagnoses glaucoma more accurately, and could be used to calculate clinical measurements and support ophthalmologists in clinical diagnosis.
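For completeness, the CDR-based screening evaluation can be reproduced with a few lines of scikit-learn, using the CDR value itself as the screening score; this is an illustrative sketch, not the exact evaluation code used in the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def screening_roc(cdr_values, glaucoma_labels):
    """CDR-based glaucoma screening evaluation (cf. Fig. 10): the computed CDR
    serves as the screening score, labels are 1 for glaucoma and 0 for healthy."""
    cdr = np.asarray(cdr_values, dtype=float)
    y = np.asarray(glaucoma_labels, dtype=int)
    fpr, tpr, _ = roc_curve(y, cdr)          # ROC curve over CDR thresholds
    return fpr, tpr, roc_auc_score(y, cdr)   # AUC score
```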

Table 3 Effect of different components of our method on the Drishti-GS dataset

4.8 Ablation experiments

Ablation experiments were conducted on the Drishti-GS dataset. For ease of description, we use ME, MT, AP and MF to denote the multi-scale extractor, multi-scale segmentation transition, attention pyramid architecture and multi-loss function, respectively. The results achieved with the different components of the model are shown in Table 3. We used the ResNet50+FPN network as the baseline model, trained with focal loss.

When ME, MT, AP and MF were gradually added to the segmentation model, all evaluation indices continued to increase, verifying the contribution of each improvement. The ME module captures multi-scale features to preserve boundaries and other detailed information, which brings significant benefits to OD and OC segmentation: compared with the baseline, the Dice increased by 1.30%, the acc by 0.44% and the \(Avg.\ HD\) decreased by 0.105 for OD segmentation, while for OC segmentation the Dice increased by 4.96%, the acc by 0.70% and the \(Avg.\ HD\) decreased by 0.725. The MT module is integrated into the network to retain the multi-scale feature maps and reduce the burden on the decoder; as seen in Table 3, it contributes substantially to the improvement in segmentation accuracy. The AP module not only eliminates semantic gaps between different levels but also implicitly learns the internal relationship between the OD and OC; when it replaces the corresponding module in the baseline model, the segmentation accuracy is also improved to varying degrees. Finally, MF supervision further improves the accuracy of OD and OC segmentation. The experiments show that combining these components and training with MF enables the network to achieve excellent segmentation results, so MF is useful for our segmentation task.

5 Conclusion

In this work, we proposed a novel deep learning architecture that achieves OD and OC segmentation simultaneously. The proposed ResFPN-Net is trained under multi-loss supervision and converges quickly within a limited time. We evaluated our method on two public datasets, Drishti-GS and RIM-ONE. Comprehensive experiments demonstrated the value of each improvement and showed that our method segments the OD and OC accurately and outperforms other methods. The proposed multi-scale loss function converges much faster and reaches a significantly lower training loss than the compared loss functions. By sharing features between the OD and OC segmentation tasks, the proposed one-stage OD and OC segmentation network achieves both high accuracy and high efficiency. Cross-dataset experiments demonstrated the generalization performance of the network, and ablation experiments verified the contribution of each improvement. Based on the OD and OC segmentation results derived by the proposed ResFPN-Net, a more accurate CDR can be calculated, providing key support for glaucoma diagnosis. The proposed framework also has strong potential for other related biomedical image segmentation tasks.