Key Summary Points

The application of artificial intelligence (AI) in anterior segment diseases is promising.

AI is capable of assisting in the diagnosis and management of anterior segment diseases.

Cornea, refractive surgery, cataract, anterior chamber angle detection and refractive error prediction are the most common fields of AI applications.

AI methods still face potential challenges.

Introduction

With the development of artificial intelligence (AI) technology, it is proving promising across healthcare, including radiology, pathology, microbiology, electronic medical records, and surgery [1,2,3]. The potential value of AI in ophthalmology is expanding, especially in areas relying on big data and image-based analysis [4, 5]. The cornea and the lens are the two most important refractive structures in the anterior segment [2]. Cataract and corneal opacity rank within the top five leading causes of blindness worldwide [6, 7]. Delayed recognition of anterior segment diseases may lead to severe complications and permanent vision loss. The diagnosis and treatment of anterior segment diseases often involve imaging analysis, including slit-lamp photography (SLP), anterior segment optical coherence tomography (AS-OCT), corneal tomography/topography (CT), ultrasound biomicroscopy (UBM), specular microscopy (SM), and in vivo confocal microscopy (IVCM) [2,3,4]. Thus, AI algorithms based on anterior segment images, in combination with clinical data, are mostly aimed at improving the accuracy of disease screening and at predicting outcomes after treatment [2, 4, 5].

In this review, we detail the history and basic principles of AI algorithms in anterior segment diseases. In addition, we provide an updated overview of AI applications and discuss future directions in anterior segment diseases, encompassing cornea, refractive surgery, cataract, anterior chamber angle detection, and refractive error prediction.

Methods

We performed a systematic review using the PubMed database, focusing on the most recent studies and clinical trials on anterior segment diseases. The following keywords were searched: “artificial intelligence,” “machine learning,” “deep learning,” “cornea,” “keratitis,” “keratoplasty,” “corneal sub-basal nerve,” “corneal epithelium,” “corneal endothelium,” “refractive surgery,” “keratoconus,” “cataract,” “cataract surgery,” “intraocular lens,” “paediatric cataract,” “cataract surgery training,” “cataract surgery monitoring,” “anterior chamber angle,” “refractive error,” and “anterior segment diseases.” Articles published in English were included and were manually screened for further relevant studies. This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors. The literature search is illustrated in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart (Fig. 1).

Fig. 1 Illustration of literature search strategy in PRISMA flowchart

History and Principle

McCarthy et al. first introduced the concept of AI in 1955 [8]. In general, the core of an AI algorithm is to mimic human behaviors in the real world and make human-like decisions through software programs [9]. The first generation of AI algorithms relied on the curation of medical experts and the formulation of robust decision rules [10]. Over the past 60 years, AI algorithms have improved considerably. Recent AI algorithms can manage more complex interactions, with machine learning (ML) and deep learning (DL) as the two most common subfields [5, 9].

ML algorithms fall into two categories: supervised and unsupervised. Supervised ML methods are trained on input–output pairs (termed “ground truth”), which contain inputs and the desired output labels manually marked by human experts [5, 9, 10]. The algorithm learns to produce the correct output for previously unseen cases. Unsupervised ML methods are trained on unlabeled data to find subclusters of the original data, which may reveal patterns previously unknown to experts. Common ML algorithms include logistic regression (LR), decision trees (DT), support vector machines (SVM), Gaussian mixture models (GMM), and random forests (RF).
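
As a minimal illustration (not drawn from any cited study), the following Python sketch contrasts the two paradigms on a hypothetical tabular dataset: a supervised SVM classifier trained on expert-style labels and an unsupervised Gaussian mixture model that clusters the same data without labels.

```python
# Minimal sketch (not from the reviewed studies): supervised vs. unsupervised ML
# on a hypothetical tabular dataset of anterior segment measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # four hypothetical biometric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # expert-assigned labels ("ground truth")

# Supervised: learn the input-output mapping from labeled examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("Supervised SVM accuracy:", clf.score(X_test, y_test))

# Unsupervised: find subclusters without any labels
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("Unsupervised GMM cluster assignments:", gmm.predict(X[:5]))
```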

DL algorithms, the latest incarnation of artificial neural networks (ANN), can perform sophisticated multilevel data abstraction without manual feature engineering [9, 11]. A DL model consists of an input layer, an output layer, and one or more hidden layers in between. Each artificial neuron, loosely analogous to a cortical neuron, responds to elements of the preceding layer, and a “weight” is assigned to each connection before the signal is passed to the next layer. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are further developments of DL methods.
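
For concreteness, the following sketch shows the layered structure described above in a tiny CNN. It assumes PyTorch, an illustrative 224 × 224 input size, and two output classes; none of these choices come from the cited works.

```python
# Minimal sketch of a small CNN classifier in PyTorch, illustrating the
# input layer -> hidden (convolutional) layers -> output layer structure.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(          # hidden layers with learnable weights
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)  # output layer

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCNN()
dummy = torch.randn(1, 3, 224, 224)             # one hypothetical 224 x 224 RGB image
print(model(dummy).shape)                        # -> torch.Size([1, 2])
```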

Cornea

Infectious Keratitis

Corneal blindness largely results from keratitis [6, 12]. Keratitis can worsen rapidly and even progress to corneal perforation without timely and correct diagnosis [12]. Early detection and appropriate treatment of keratitis can halt disease progression and achieve a better visual prognosis [12, 13]. However, the diagnosis of keratitis often requires a skilled ophthalmologist, and the current clinical positive rate (33–80%) is far from ideal [14].

AI has been developed to recognize causes and improve the diagnostic accuracy of keratitis. Saini et al. built an ANN classifier using 106 corneal ulcer subjects with laboratory evidence on smear or culture and a complete healing response to specific antibiotic or antifungal drugs [15]. Specificities of the ANN classifier for bacterial and fungal categories were 76.5% and 100%, respectively. Using 2008 IVCM images, Lv et al. tested a DL ResNet system to diagnose fungal keratitis [16]. The accuracy, specificity, and sensitivity were 0.9364, 0.9889, and 0.8256, respectively. Also, with 1870 IVCM images, Liu et al. trained a novel CNN framework (AlexNet) for the automatic diagnosis of fungal keratitis (accuracy = 99.95%) [17]. Kuo et al. designed a DL framework with DenseNet architecture, relying on 288 slit-lamp images, to diagnose fungal keratitis (sensitivity of 71%, specificity of 68%) [18]. Hung et al. developed deep learning models for identifying bacterial keratitis and fungal keratitis, with a highest average accuracy of 80.0% using slit-lamp images from 580 patients [19]. Ghosh et al. tested CNN models, with a highest area under the curve (AUC) of 0.86, for rapidly discriminating between fungal keratitis and bacterial keratitis using slit-lamp images from 194 patients [20]. Li et al. built a DL algorithm, DenseNet121, based on 6567 slit-lamp images [21]. DenseNet121 achieved an AUC of 0.998, a sensitivity of 97.7%, and a specificity of 98.2% in keratitis detection.
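
The general recipe shared by several of these studies is transfer learning of a pretrained CNN on labeled slit-lamp or IVCM images. The sketch below is a hedged illustration of that recipe only, assuming PyTorch/torchvision, a binary keratitis-versus-normal label, and a random batch standing in for real images; it does not reproduce any cited model or dataset.

```python
# Hedged sketch of a transfer-learning classifier (e.g., DenseNet121 backbone);
# data and training details are hypothetical, not those of any cited work.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # keratitis vs. normal

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random batch standing in for slit-lamp images
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```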

AI methods have also proven capable of distinguishing keratitis from other anterior segment diseases. Based on 1772 slit-lamp images, Li et al. combined Visionome with a DL framework for dense annotation of pathological features [22]. DL frameworks using ResNet and faster region-based CNN detected anterior segment diseases such as keratitis, conjunctival hyperemia, and pterygium. Gu et al. showed that a novel DL network trained on 5325 slit-lamp images, comprising a family of multitask and multilabel learning classifiers, could diagnose infectious keratitis, noninfectious keratitis, corneal dystrophy or degeneration, and corneal neoplasm (AUC = 0.910) [23]. Loo et al. also proposed a DL algorithm for segmentation of ocular structures and microbial keratitis biomarkers [24]. Using slit-lamp images from 133 eyes, the DL algorithm was promising for the quantification of corneal physiology and pathology.

Keratoplasty

As reported by the Eye Bank, the demand for corneal graft tissue is increasing, which represents a significant financial and public health burden [25]. AI methods may help corneal surgeons better decide on the possibilities for performing keratoplasty and related procedures. Yousefi et al. introduced a ML approach, based on AS-OCT information from 3495 subjects, that was useful for identifying keratoconus and keratoconus suspects who may be at higher risk for future keratoplasty [26]. Hayashi et al. built a DL model (AUC = 0.964) to judge the need for rebubbling after Descemet membrane endothelial keratoplasty (DMEK) using AS-OCT images from 62 eyes.

AI models have been tested for detecting and improving the therapeutic effects of surgical procedures during keratoplasty. Hayashi et al. built a deep neural network model, Visual Geometry Group-16, to predict successful big-bubble (SBB) formation during deep anterior lamellar keratoplasty (DALK) [27]. Based on AS-OCT and corneal biometric values of 46 patients, the AUC of this model reached 0.75. Using AS-OCT images from 1172 subjects, Treder et al. evaluated a CNN classifier (accuracy = 96%) to detect graft detachment after DMEK [28]. Heslinga et al. developed a DL model to locate and quantify graft detachment after DMEK using 1280 AS-OCT scans [29]. The Dice score for the horizontal projections of all B-scans with detachments was 0.896. Pan et al. used a DL framework for augmented reality (AR)-based surgery navigation to guide suturing in DALK [30]. The corneal contour tracking accuracy was 99.2% on average. Vigueras-Guillén et al. developed a DL method to estimate corneal endothelium parameters from SM images of 41 eyes after Descemet stripping automated endothelial keratoplasty (DSAEK) [31]. The proposed DL method obtained reliable and accurate estimations.
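
The Dice score reported above is a standard overlap metric for comparing a predicted segmentation with a manual annotation. The short sketch below, with random placeholder masks, illustrates how it is computed; it is not code from the cited study.

```python
# Minimal sketch of the Dice score used to evaluate segmentations such as
# graft-detachment maps; the masks here are random placeholders.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(0)
pred = rng.random((256, 512)) > 0.5     # hypothetical predicted detachment mask
truth = rng.random((256, 512)) > 0.5    # hypothetical manual annotation
print(f"Dice: {dice_score(pred, truth):.3f}")
```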

Corneal Subbasal Nerve Neuropathy

Diabetic peripheral neuropathy (DPN) is the most common complication of both type 1 and type 2 diabetes [32]. IVCM is capable of quantifying the corneal subbasal plexus, which reflects small nerve fiber damage and repair. Scarpa et al. used a CNN algorithm for the classification of IVCM images from 50 healthy subjects and 50 diabetic subjects with neuropathy [33]. This CNN algorithm provided a completely automated analysis identifying clinically useful features of corneal nerves (accuracy = 96%). Focusing on corneal nerve fibers, Williams et al. applied a deep learning algorithm to IVCM images from 222 subjects, with a specificity of 87% and sensitivity of 68% for the identification of DPN [34]. Based on IVCM images from 369 subjects, Preston et al. utilized a CNN algorithm for corneal subbasal plexus detection to classify DPN, with a highest F1 score of 0.91 [35].

Cornea Epithelium and Endothelium Parameters

The epithelium is the outermost layer of the cornea, critical for refractive status and wound healing [36]. The most common examination method used by ophthalmologists is slit-lamp microscopy combined with different illuminations and eye staining techniques. Noor et al. reported that AI methods could differentiate abnormal corneal epithelium tissues using 25 hyperspectral imaging (HSI) images without eye staining [37]. SVM and CNN algorithms were used to extract image features with an accuracy of 100%.

Fuchs endothelial dystrophy (FED) is characterized by thickening of Descemet's membrane, formation of guttae, and progressive loss of endothelial cells [38]. Guttae are focal depositions of extracellular matrix in the corneal endothelium. Vigueras-Guillén et al. developed DL methods to estimate corneal endothelium parameters using 500 SM images with guttae [39]. The DL methods obtained lower mean absolute errors than commercial software.

Refractive Surgery

Early Keratoconus

Keratoconus (KC) is a noninflammatory corneal ectatic disorder, and over 90% of reported cases are bilateral; progressive corneal thinning leads to corneal protrusion and irregular corneal astigmatism, which impair visual performance in KC patients [40]. Early KC is a general term for early-stage KC, including subclinical KC, preclinical KC, KC suspects, and forme fruste KC (FFKC) [41]. Unlike manifest KC, early KC presents with good visual performance and no specific corneal topographic findings [42]. However, undetected early KC is known to be associated with iatrogenic keratectasia, the most severe and irreversible complication after laser in situ keratomileusis (LASIK) [43, 44]. Hence, discriminating early KC from normal eyes is an urgent task for ophthalmologists and ophthalmic science.

Discriminating early KC from normal eyes using a single anterior corneal topographic map is difficult; however, AI makes it possible to generate thousands of features from big data and thereby improve the discrimination of early KC. In 1997, Smolek et al. introduced a neural network (NN) algorithm to discriminate early KC eyes from KC eyes by using corneal topography with a limited sample size [45]. Since then, numerous studies have used ML algorithms to detect early KC eyes from corneal topography. Accardo et al. used a NN method to discriminate early KC eyes from normal eyes, achieving a sensitivity of 94.1% and specificity of 97.6% [46]. Saad et al. also used a NN method to detect early KC eyes, achieving a sensitivity of 63% [47].

After the Scheimpflug camera became widely applied in ophthalmology clinics, multiple AI-related studies have used this system to detect early KC by collecting both anterior and posterior corneal surface information. Kovacs et al. [48] and Hidalgo et al. [49] both applied ML algorithms to detect early KC eyes by using a Scheimpflug camera, achieving sensitivities of 92% and 79.1%, respectively. Lopes et al. [50] and Smadja et al. [51] conducted similar studies to detect early KC eyes using the RF and DT ML algorithms. Recently, Xie et al. used a DL algorithm to detect early KC eyes and achieved an accuracy of 95%, higher than that achieved by senior ophthalmologists [52]. Furthermore, Xu et al. combined raw data of the entire cornea (anterior curvature, posterior curvature, anterior elevation, posterior elevation, and pachymetry) to build a ML model called KerNet [53]. KerNet was helpful for distinguishing clinically unaffected eyes in patients with asymmetric KC from normal eyes (AUC = 0.985). Chen et al. reported a CNN model that combined color-coded maps of the axis, thickness, and front and back elevation [54]. The CNN model reached an accuracy of 0.90 for recognizing healthy eyes and early-stage KC. The outcomes of these studies show great potential for the application of AI in detecting early KC eyes using a Scheimpflug camera, though the detection accuracy varies among studies; the limited information in the low-resolution images captured by the Scheimpflug camera is one possible reason for this variation.

In recent years, several studies have tried to combine corneal information from multiple instruments with AI algorithms to improve the detection accuracy of early KC. Hwang et al. [55] and Shi et al. [56] both combined Scheimpflug camera information with AS-OCT-derived corneal epithelial information to differentiate early KC eyes from normal eyes; the AUCs were 1.0 and 0.93, respectively. Perez-Rueda et al. combined biomechanical and corneal topographic information to detect early KC with an AUC of 0.951 [57]. These studies show that corneal information from different dimensions can improve the detection accuracy of early KC. AI will be a useful tool to detect early KC by generating numerous features from the cornea.

Surgery Outcomes in Refractive Surgery

Predicting the outcomes of laser refractive surgery is important in clinical work. AI methods may enhance the quality of refractive surgical results by reducing errors in nomogram planning. Based on big data (17,592 cases and 38 clinical parameters for each patient), Achiron et al. developed ML classifiers to support clinical decision-making and enable better individual risk assessment [58]. Cui et al. analyzed an ML technique for prediction of small incision lenticule extraction (SMILE) nomograms with 1465 eyes [59]. Compared with the experienced surgeon, the ML model showed significantly higher efficacy and a smaller error of spherical equivalent (SE) correction; however, the ML model was less accurate for eyes with high myopia and astigmatism. Park et al. tested ML algorithms using data from 3034 eyes accepted for SMILE surgery, comprising four categorical and 28 numerical features [60]. AdaBoost achieved the highest performance and accuracy for the sphere, cylinder, and astigmatism axis predictions.
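
As a hedged illustration of this kind of nomogram regression, the sketch below trains an AdaBoost regressor to map a few preoperative features to a treatment-sphere target. The feature set, synthetic data, and target definition are assumptions for illustration and are not taken from the cited studies.

```python
# Hedged sketch of a nomogram-style regression: predict a laser sphere setting
# from preoperative features. All data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
manifest_sphere = rng.uniform(-10, -1, n)        # preoperative manifest sphere (D)
age = rng.uniform(18, 45, n)                     # years
keratometry = rng.uniform(40, 47, n)             # mean K (D)
X = np.column_stack([manifest_sphere, age, keratometry])
# Hypothetical "surgeon-adjusted" treatment sphere used as the regression target
y = manifest_sphere * 1.05 + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = AdaBoostRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (D):", round(mean_absolute_error(y_te, model.predict(X_te)), 3))
```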

Lens

Age-Related Cataract

Cataract is the leading cause of blindness worldwide [6]. Approximately 32 million cataract surgeries had been performed globally up to 2020. With the global aging trend, public eye care demand is growing rapidly [6]. Clinical consultation and surgery for cataract patients place a heavy burden on healthcare systems [61]. Thus, more precise diagnosis and treatment are urgently needed.

Age-related cataract typically manifests as cortical, nuclear sclerotic, and posterior subcapsular cataract. Slit-lamp microscopy and photography are most often used to observe and capture cataract with diffuse, slit-beam, and retro-illumination techniques. In general, doctors diagnose and grade cataract according to subjective experience, based on the Lens Opacities Classification System (LOCS) III and/or the Wisconsin cataract grading system, which may be inconsistent [62, 63]. Therefore, a unified and objective assessment of cataract is critical, and AI technology based on big data is capable of this diagnostic and grading task. Using 37,638 slit-lamp photographs, Wu et al. built a ResNet model for capture mode recognition (AUC > 99%), cataract diagnosis (AUC > 99%), and detection of referable cataracts (AUC > 91%) [64]. To grade nuclear cataracts, Gao et al. built a CNN model using 5378 slit-beam images (mean absolute error = 0.304) [65]. Similarly, Xu et al. built a support vector regression (SVR) model to grade nuclear cataracts using 5378 slit-beam images (mean absolute error = 0.336) [66]. Cheung et al. built a SVM model to grade nuclear cataracts using 5547 slit-beam images (AUC = 0.88–0.90) [67]. Keenan et al. developed a deep learning model called DeepLensNet to perform automated and quantitative classification of cataract severity for all three types: cortical, nuclear, and posterior subcapsular cataract. Compared with clinical ophthalmologists, DeepLensNet performed significantly more accurately for cortical opacity (mean squared error = 13.1) and nuclear sclerosis (mean squared error = 0.23). For the least common posterior subcapsular cataract, the grading capability was similar between DeepLensNet (mean squared error = 16.6) and clinical ophthalmologists [68]. Objectively, obtaining slit-lamp images of good quality requires a certain amount of training to reduce inter-examiner deviation, and clinical grading labels for the training set may introduce bias. Future work might focus on more objective grading algorithms that combine cortical, nuclear, and posterior subcapsular cataract, avoiding such errors of subjectivity.
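
The mean absolute error and mean squared error figures quoted above compare a model's continuous grade prediction with clinician grades. As a minimal, purely illustrative sketch (random numbers standing in for real grades), the following code shows how those two metrics are computed.

```python
# Minimal sketch: evaluating a continuous nuclear-grade prediction (e.g., on a
# LOCS III-like scale) with mean absolute error and mean squared error.
import numpy as np

rng = np.random.default_rng(0)
true_grade = rng.uniform(1.0, 6.9, 100)            # hypothetical clinician grades
pred_grade = true_grade + rng.normal(0, 0.3, 100)  # hypothetical model outputs

mae = np.mean(np.abs(pred_grade - true_grade))
mse = np.mean((pred_grade - true_grade) ** 2)
print(f"MAE = {mae:.3f}, MSE = {mse:.3f}")
```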

AI models using fundus images to diagnose and grade cataract have also been developed. Xu et al. built a CNN model using 8030 fundus images to grade cataract (accuracy = 86.24%) [69]. Zhang et al. applied a SVM and CNN model using 1352 fundus images to grade cataract (accuracy = 94.75%) [70]. Xiong et al. utilized a DT model for cataract grading using 1355 fundus images (accuracy = 83.8%) [71]. Yang et al. developed an ensemble learning model for cataract grading using 1239 fundus images (accuracy = 84.5%) [72]. However, fundus images do not directly capture lenticular opacity: along the visual axis, any opacity of the cornea or vitreous, or a small pupil, can blur the fundus image, leading to inaccurate diagnosis of cataract.

Moreover, Scheimpflug tomography has been used to build an objective tool for nuclear cataract staging, called Pentacam Nucleus Staging (PNS). Based on the pixel intensity in the nucleus region, the PNS provides a severity value for nuclear sclerosis [73, 74]. Based on a deep learning algorithm, swept-source optical coherence tomography (SS-OCT) images have also been used to quantify cataract by lens pixel intensity; the sensitivity and specificity for detecting cortical, nuclear, and posterior subcapsular cataract were 94.4% and 94.7%, respectively [75]. However, each imaging modality has its own inherent limitations, and all AI-based models and tools are built on the properties of the respective imaging device. Well-designed AI models may be further developed along the following criteria: objective grading, detection of all three cataract types, specificity for lens opacity, interpretability of results, robustness to corneal disease, and limited inter-operator variability [75].

IOL Prediction

Intraocular lens (IOL) formulas are developed to improve IOL selection for cataractous eyes. New IOL formulas are applying big data and computational methodologies to achieve better prediction of the targeted refraction and to simplify the IOL selection process (Table 1).

Table 1 Artificial intelligence (AI)-based formulas for intraocular lens (IOL) power calculations

Using pattern recognition, the Hill-radial basis function (Hill-RBF) method was developed based on AI and regression analysis of a very large database of actual postsurgical refractive outcomes to predict IOL power [76]. The Hill-RBF can deal with undefined factors in IOL power calculations; however, it is still an empirical algorithm that relies on the type of data and eye characteristics [77]. Hill-RBF 2.0 improved on this with a larger database combined with expanded “in-bounds” biometry ranges [76]. Hill-RBF 3.0 added gender to the main metrics and further improved accuracy [78].

The Kane formula is a new IOL power formula that combines AI with theoretical optics; it includes axial length, keratometry, anterior chamber depth, lens thickness, central corneal thickness, A-constant, and gender for IOL power prediction. Connell et al. found the Kane formula was more accurate than the Hill-RBF 2.0 and Barrett Universal II for actual postoperative refraction [79].

In 2015, the concept of a “super formula” was introduced with the integration of AI technology. This formula allowed a 3D analysis framework, including IOL power, axial length, and corneal power, to visualize areas of similarity and disparity between IOL formulas [80]. The Ladas super formula combines SRK/T, Hoffer Q, Holladay 1, Holladay 1 with Wang-Koch (WK) adjustment, and Haigis with the help of a complex DL algorithm. Kane et al. compared the Ladas formula with the new IOL formulas [77]. The study found that the Ladas super formula was less accurate than the Barrett Universal II but more accurate than the Hill-RBF.

Based on prediction of the theoretical internal lens position (TILP), the Pearl-DGS is a thick-lens IOL calculation formula using AI and linear algorithms [81]. The Pearl-DGS is currently available as an open-source tool for IOL power prediction. Clarke et al. reported a ML model (Bayesian additive regression trees) for IOL power calculation to optimize postoperative refractive outcomes, which had a median error close to the IOL manufacturer tolerance [82]. The VRF formula is a vergence-based thin-lens formula. Based on theoretical optics with regression and ray-tracing principles, the VRF-G formula is a modification of the VRF that includes eight metrics: axial length, keratometry, anterior chamber depth, horizontal corneal diameter, lens thickness, preoperative refraction, central corneal thickness, and gender [83]. The FullMonte formula was built on mathematical neural networks combined with the Markov chain Monte Carlo algorithm [84]. The Kamona method is based on ML algorithms using the mean K of the anterior corneal surface, mean K of the posterior corneal surface, axial length, anterior chamber depth, lens thickness, and white-to-white values [85].

Other AI-based IOL models have been tested to predict the IOL power. Li et al. developed a novel ML-based IOL power calculation method, the Nallasamy formula, based on a dataset of cataract patients [86]. They proved that the Nallasamy formula outperformed Barrett Universal II. Li et al. also developed a ML-prediction method to improve the performance of the ray-tracing IOL calculation, which showed more precise results in long eyes [87]. Sramka et al. evaluated the Support Vector Machine Regression model (SVM-RM) and the Multilayer Neural Network Ensemble model (MLNN-EM) for IOL power calculations, which achieved better results than the Barrett Universal II formula [88].
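
To make the data-driven flavor of these formulas concrete, the sketch below fits a random forest regressor to a few standard biometry inputs on synthetic data. The feature set, the crude SRK-like placeholder target, and all numbers are assumptions for illustration and do not correspond to any formula or dataset cited above.

```python
# Hedged sketch of a data-driven IOL power regression on common biometry inputs
# (axial length, keratometry, ACD, lens thickness); all data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
axial_length = rng.normal(23.5, 1.2, n)        # mm
mean_k = rng.normal(43.5, 1.5, n)              # D
acd = rng.normal(3.1, 0.4, n)                  # mm
lens_thickness = rng.normal(4.5, 0.4, n)       # mm
X = np.column_stack([axial_length, mean_k, acd, lens_thickness])

# Crude SRK-like placeholder target standing in for optimized IOL power labels
y = 118.4 - 2.5 * axial_length - 0.9 * mean_k + rng.normal(0, 0.3, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("Cross-validated MAE (D):", round(scores.mean(), 3))
```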

Li et al. incorporated a ML method for effective lens position (ELP) prediction, which is an important factor in IOL power formulas [89]. Brant et al. tested a ML algorithm to optimize IOL inventory selection close to the target refractive status [90]. With the development of global databases and AI algorithms, more new IOL power calculators and models will achieve better IOL power prediction, especially for short eyes, long eyes, and post-refractive surgery eyes.

Pediatric Cataract

Although pediatric cataracts are relatively rare (approximately 1 per 3000), their clinical manifestations are quite variable [91]. Pediatric cataract deprives the developing visual system of visual stimuli and is therefore a major threat to visual development. Appropriate diagnosis and treatment help reduce deprivation amblyopia and blindness [92, 93].

Recent developments in AI have shown potential for the diagnosis and management of pediatric cataract using slit-lamp images. Liu et al. built an AI model that combined CNN with SVM methods for qualitative and quantitative pediatric cataract detection using 886 slit-lamp images [94]. This model was validated for pediatric cataract classification (accuracy = 97.07%) and for grading of opacity area (accuracy = 89.02%), density (accuracy = 92.68%), and location (accuracy = 89.28%). Lin et al. created the Congenital Cataract-Cruiser (CC-Cruiser) to identify and stratify pediatric cataract from images and to suggest treatment strategies [95]. The accuracies of cataract diagnosis and treatment determination reached 87.4% and 70.8%, respectively. In addition, Long et al. developed the Congenital Cataract-Guardian (CC-Guardian) to accurately detect and address complications using internal and multiresource validation [96]. The CC-Guardian includes three functional modules: a prediction module, a follow-up scheduling module, and a clinical decision module, and it provided real medical benefits for the effective management of congenital cataract. Combining slit-lamp images and clinical information, Zhang et al. applied random forest (RF), Naïve Bayesian (NB), and association rule mining to build an AI model to predict postoperative complications in pediatric cataract patients, with average classification accuracies over 75% [97]. In future studies, it will be important to include more clinical data and imaging results to improve the prognostic accuracy for pediatric cataract.

Cataract Surgery Training and Monitoring

Cataract surgery is the cornerstone operation for an eye surgeon to master. Video recording of cataract surgery is an effective way to collect surgical workflows, which are useful for surgical skill training and optimization. Combined with AI algorithms, this may extend to automatic report generation and real-time support. The Challenge on Automatic Tool Annotation for cataRACT Surgery (CATARACTS), in the context of a decision support algorithm, demonstrated that automatic annotation of cataract surgery was workable [98]. Yu et al. proved that a CNN-RNN algorithm fed with instrument labels was accurate for identifying presegmented phases from cataract surgery videos [99]. Yeh et al. developed a CNN-RNN model that accurately predicted routine steps of cataract surgery and estimated the likelihood of complex cataract surgeries requiring advanced surgical steps [100]. Yoo et al. trained a CNN-based smart speaker (accuracy = 93.5%) to recognize timeout speech and confirm surgical information, such as right side, left side, cataract, phacoemulsification, and IOL [101]. Further improvements might offer more help in cataract surgery education and monitoring.
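
The CNN-RNN pattern mentioned above typically couples a per-frame CNN feature extractor with a recurrent network over time. The sketch below is an illustrative skeleton only, assuming PyTorch/torchvision, a ResNet-18 backbone, and ten hypothetical surgical phases; it is not the architecture of any cited study.

```python
# Hedged sketch of a CNN-RNN phase classifier: a CNN encodes each video frame,
# an LSTM models the temporal sequence. Shapes and class count are illustrative.
import torch
import torch.nn as nn
from torchvision import models

class PhaseClassifier(nn.Module):
    def __init__(self, n_phases: int = 10, hidden: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                  # 512-d frame features
        self.cnn = backbone
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_phases)

    def forward(self, clips):                        # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)                        # per-frame phase logits

model = PhaseClassifier()
video = torch.randn(2, 8, 3, 224, 224)              # 2 clips of 8 frames each
print(model(video).shape)                            # -> torch.Size([2, 8, 10])
```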

Primary Angle-Closure Glaucoma

Primary angle-closure glaucoma (PACG) accounts for 50% of global bilateral blindness due to glaucoma [102]. By 2040, the number of people with PACG is projected to reach 32.04 million worldwide [103]. Objective and quantified assessments of anterior chamber depth (ACD) and anterior chamber angle (ACA) are therefore important. To detect a shallow anterior chamber, DL methods have been developed using anterior segment photographs (AUC = 0.86) [104] and fundus photographs (AUC = 0.987) [105]. Based on images obtained with ultrasound biomicroscopy, automatic AI methods have been applied for ACA analysis [106,107,108]. However, ultrasound biomicroscopy shows only cross-sections of a localized ACA, whereas SS-OCT can capture the 3D structure of the ACA. Combining AI algorithms with SS-OCT images, automatic classification for ACA evaluation is becoming more efficient for PACG diagnosis [109]. Using SS-OCT images, Pham et al. developed a deep convolutional neural network (DCNN) for discrimination of the scleral spur, iris, corneosclera shell, and anterior chamber, with a Dice coefficient of 95.7% [110]. In addition, Liu et al. tested the reproducibility of a DL algorithm to recognize the scleral spur and anterior chamber angle [111]. The repeatability coefficients were 0.049–0.058 mm for structure detection. Randhawa et al. tested the generalizability of DL algorithms to detect gonioscopic angle closure in three independent patient populations, with AUCs of 0.894–0.992 [112]. Porporato et al. validated a DL algorithm for 360° angle assessment, with an AUC of 0.85 [113]. Li et al. proposed a DL method using SS-OCT images for classification of open angle, narrow angle, and angle closure (sensitivity = 0.989, specificity = 0.995) [114]. Similarly, Xu et al. tested a DL classifier for primary angle closure disease, which reached an AUC of 0.964 [115]. The incorporation of AI analysis and SS-OCT images might serve as a useful tool for the management of angle-closure glaucoma.

Uncorrected Refractive Error

Uncorrected refractive error is a major cause of visual impairment worldwide [6]. Timely prediction of refractive error, including severe myopia and hyperopia, is essential for reducing the risks of retinal diseases, glaucoma, and amblyopia [116, 117]. Varadarajan et al. evaluated a DL method to extract refractive error information from retinal fundus images; the mean absolute error for the SE was 0.56–0.91 diopters [118]. Yoo et al. evaluated a DL model to estimate uncorrected refractive error using posterior segment OCT (PS-OCT) images, which yielded an AUC of 0.813 for high myopia detection [119]. Chun et al. tested a DL-based system for refractive error using photorefraction images captured by a smartphone, with an overall accuracy of 81.6% [120]. Using ultrawide-field fundus images, Yang et al. demonstrated the feasibility of predicting refractive error in myopic patients with DL models [121]. Overall, for refractive error prediction, the AI analysis of fundus and PS-OCT images showed a clear focus of attention on morphological changes of the macula and optic nerve head. However, the accuracy of exact refractive error prediction may need further improvement, taking more real-world factors into account.

Limitations and Future Applications

Combined with a considerable amount of relevant data and images, AI shows significant promise in clinical diagnosis and decision-making. However, the way AI algorithms elaborate underlying features and classify different diseases can still be a black box, and it remains uncertain whether their reasoning agrees with real-world clinical logic [5]. The Standard Protocol Items: Recommendations for Interventional Trials-AI (SPIRIT-AI) and Consolidated Standards of Reporting Trials-AI (CONSORT-AI) Steering Groups have presented the need for consensus-based reporting guidelines for AI-related interventions [122]. These guidelines would improve both the consistency of reporting by medical professionals and the effectiveness of regulatory authorities. Therefore, AI interventions still face potential challenges before being introduced into clinical practice.

Multimodal inputs, such as structural images and functional data, will be a future trend in AI development [123]. Combining multiple types of clinical tests is closer to the real world; however, more inputs also require more training samples to avoid overfitting. Moreover, the weight of each input should be considered when performing the integration.

Up to now, most AI algorithms have been trained with samples far removed from clinical reality, and AI algorithms are often not directly comparable because they are built on different databases. An accessible public dataset of multi-ethnic patient cohorts is key to enhancing the generalizability of AI algorithms [122, 124]. Unified data may thus be beneficial for optimization and comparison among different AI models.

Conclusions

The application of AI in anterior segment diseases is promising (Table 2). With advanced developments in AI algorithms, early diagnosis of debilitating anterior segment disorders and prediction of treatment outcomes, in the fields of cornea, refractive surgery, cataract, anterior chamber angle detection, and refractive error prediction, will make great strides. Although AI interventions face potential challenges before their full application in clinical practice, AI technologies will make a significant impact on intelligent healthcare in the future.

Table 2 Summary of AI applications in anterior segment diseases