Abstract
Introduction
The diagnosis of melasma is often based on the naked-eye judgment of physicians. However, this is a challenge for inexperienced physicians and non-professionals, and incorrect treatment might have serious consequences. Therefore, it is important to develop an accurate method for melasma diagnosis. The objective of this study is to develop and validate an intelligent diagnostic system based on deep learning for melasma images.
Methods
A total of 8010 images from the VISIA system, comprising 4005 images of patients with melasma and 4005 images of patients without melasma, were collected for training and testing. Inspired by four high-performance architectures (i.e., DenseNet, ResNet, Swin Transformer, and MobileNet), we evaluated the performance of deep learning models as binary classifiers of melasma and non-melasma images. Furthermore, because the VISIA system produces five modes of images for each shot, we fused these modes in different combinations via multichannel image input to explore whether multimode images could improve network performance.
Results
The proposed network based on DenseNet121 achieved the best performance, with an accuracy of 93.68% and an area under the curve (AUC) of 97.86% on the test set for the melasma classifier. Gradient-weighted Class Activation Mapping showed that the model's decisions were interpretable. In further experiments on the five modes of the VISIA system, we found that the best-performing mode was "BROWN SPOTS." Additionally, the combination of the "NORMAL," "BROWN SPOTS," and "UV SPOTS" modes significantly improved network performance, achieving the highest accuracy of 97.4% and an AUC of 99.28%.
Conclusions
In summary, deep learning is feasible for diagnosing melasma. The proposed network not only performs excellently on clinical images of melasma but also achieves higher accuracy when multiple modes of VISIA images are used.
Key Summary Points
To reduce misdiagnosis and missed diagnosis, an effective and accurate method for melasma diagnosis is necessary.
On the basis of deep learning, we developed an intelligent diagnostic model for melasma.
Our model was trained on a large sample of melasma and non-melasma facial images and achieved a high accuracy and area under the curve.
In further experiments, we found that multichannel image input, obtained by fusing multiple modes of VISIA images, improved our network performance.
More data from multiple centers and improved applicability are needed for the model to become a valuable tool in clinical practice.
Introduction
Melasma is a common acquired pigmentation disorder characterized by symmetrical brown macules and patches with irregular borders on the face, which negatively affect the appearance and self-esteem of patients [1,2,3]. Its pathophysiology is complex and not fully understood, but it is believed to involve genetic and environmental factors [4]. Melasma mainly affects women and people with highly pigmented phenotypes, and its prevalence is higher among East Asians, Indians, Latin Americans, and Hispanics [5,6,7]. However, precise epidemiological data are still lacking.
The diagnosis of melasma usually depends on the naked-eye judgment of physicians according to the clinical characteristics of lesions. However, for pigmented skin lesions, the diagnostic ability of non-dermatologists is not comparable to that of dermatologists [8, 9]. Melasma and other atypical hyperpigmentation, such as nevus of Ota, are often missed or misdiagnosed [10]. Thus, correctly diagnosing melasma with the naked eye alone may require considerable clinical experience, especially in complicated facial conditions. In addition, diagnostic assistant tools, such as Wood's lamps and dermoscopy, are time-consuming and insufficient to accurately distinguish melasma from other pigmentation disorders [10]. These tools also rely on physician assessment, which can be subjective.
Misdiagnosis and missed diagnosis of melasma might harm patients, for example, when a treatment such as CO2 laser that is appropriate for other skin diseases but unsuitable for melasma is applied [11,12,13]. The treatment for melasma should be selected cautiously because of its high rate of recurrence [14]. Moreover, improper treatment based on a misjudgment of melasma might result in serious sequelae, such as hyperpigmentation and scarring after CO2 laser therapy [11,12,13]. In addition, the remoteness of certain regions and a lack of knowledge lead some patients with melasma to seek help from beauty salons and estheticians instead of dermatologists. Owing to the lack of professional expertise and accurate diagnostic tools, such non-professionals usually cannot make a correct diagnosis and may choose the wrong treatment. Thus, it is necessary to develop an accurate and rapid diagnostic method for melasma.
The purpose of this study was to develop and validate an intelligent diagnostic system for melasma images on the basis of deep learning and provide a reference for accurate and rapid diagnosis of melasma. In this study, we collected a large number of clinical melasma images and evaluated the performances of four deep learning models in melasma and non-melasma binary classifiers. We further conducted image fusion via multichannel image input and found an improvement in network performance.
Methods
This study was approved by the Ethics Committee of the First Affiliated Hospital of Chongqing Medical University (no. 2022-K349) and was performed in accordance with the Declaration of Helsinki of 1964. Without applying any exclusion criteria, we retrospectively collected images of all patients with melasma who visited the Dermatology Clinic of Chongqing Medical University between January 2017 and September 2021. All images were stored in a VISIA imaging system (Canfield Scientific, NJ, USA). A similar number of images from patients without melasma were randomly selected from the VISIA system. Because the task was to judge the presence or absence of melasma from facial images, the binary classifier needed to work in a variety of situations; therefore, the "non-melasma" images in this study were obtained from patients with non-pigmentary diseases (such as rosacea and acne), patients with pigmentary diseases other than melasma (such as freckles, lentigines, and nevus of Ota), and healthy people.
The VISIA system included an imaging chamber with a 15-megapixel camera producing images with a resolution of 3128 × 4171 pixels. Using three types of light sources, i.e., standard incandescent light, ultraviolet (UV) light, and polarized light, five images of different modes were obtained for each shot. The "NORMAL" mode was taken under standard incandescent light and used to identify spots, wrinkles, texture, and pores. The "UV SPOTS" and "PORPHYRINS" modes were taken under UV light to detect UV spots and porphyrins, respectively. The "BROWN SPOTS" and "RED AREAS" modes were taken under polarized light to observe brown spots and prominent blood vessels, respectively [15]. Thus, five different modes of images were obtained in one shot, i.e., "NORMAL," "UV SPOTS," "PORPHYRINS," "BROWN SPOTS," and "RED AREAS," each showing different skin characteristics (Supplementary Material Fig. S1).
A total of 4005 melasma and 4005 non-melasma images were collected. Detailed clinical data of patients were not collected owing to confidentiality requirements and inapplicability. The diagnosis of all patients was established by the consensus of three experienced dermatologists on the basis of the images, which was regarded as the ground truth in this study. We randomly divided all images into training and test sets at the patient level, with a ratio of approximately 2:1, so that images of the same patient did not appear simultaneously in the training and test sets (Fig. 1). To achieve a balanced distribution, the numbers of melasma and non-melasma images were kept approximately equal: the training set contained 2650 melasma and 2670 non-melasma images, and the test set contained 1355 melasma and 1335 non-melasma images. The images of the training set were augmented by rotation, random erasing, and gray-level adjustment, and the resolution of all images was then adjusted to 480 × 640 pixels; a sketch of such a pipeline is given below.
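The paper does not name the implementation framework; the following is a minimal preprocessing and augmentation sketch assuming PyTorch/torchvision, and all parameter values (rotation angle, erasing probability, jitter strength) are our own illustrative assumptions rather than the authors' settings.

```python
# Hypothetical preprocessing/augmentation pipeline (framework and parameters assumed).
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((640, 480)),           # 480 x 640 pixels as in the paper; torchvision expects (height, width)
    transforms.RandomRotation(degrees=15),   # rotation augmentation (angle assumed)
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # stand-in for gray-level adjustment (assumed)
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),         # random erasing (probability assumed); operates on tensors
])

test_transform = transforms.Compose([
    transforms.Resize((640, 480)),
    transforms.ToTensor(),
])
```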
Our task was to build a binary classifier with "melasma" as the positive class and "non-melasma" as the negative class. Considering that each shot yields five different modes of images, both single-mode and multimode binary classifiers were studied. The images of the "NORMAL" mode are the same as those seen by clinicians with the naked eye; therefore, we used this mode to explore a network for direct and rapid diagnosis. Four deep learning models, i.e., MobileNetv2, Swin Transformer, ResNet50, and DenseNet121, were used to build the binary classifiers of melasma and non-melasma, and the best-performing model weights were saved. To visualize the features selected by the network, we used Gradient-weighted Class Activation Mapping (Grad-CAM) to demonstrate the interpretability of the optimal network via gradient-based localization. Subsequently, we investigated the differences in network performance for the four other modes of images: "UV SPOTS," "PORPHYRINS," "BROWN SPOTS," and "RED AREAS." Next, we studied multimode images to examine whether they had the potential to further improve the performance of our diagnostic system: we fused different modes of the same shot through a multichannel input and fed the integrated multimode features to the network (see the sketch after this paragraph). A flowchart of the network for multimode images is shown in Fig. 2. For data analysis, the performance of all models on the test set was evaluated using accuracy, area under the curve (AUC), sensitivity, and specificity. All analyses were conducted using Python 3.7.3. All patient images shown in the figures were anonymized by covering the eyes manually for privacy purposes.
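As an illustration of how such a classifier and its multichannel variant might be constructed (again assuming PyTorch/torchvision, which the paper does not specify), a minimal sketch:

```python
# Hypothetical model construction sketch: DenseNet121 as a two-class classifier,
# with an optional widened stem convolution for fused multimode input.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(in_channels: int = 3) -> nn.Module:
    model = models.densenet121(pretrained=True)  # ImageNet weights (transfer learning assumed)
    if in_channels != 3:
        # Widen the stem convolution so the network accepts fused multimode
        # input, e.g., in_channels=9 for "NORMAL + BROWN SPOTS + UV SPOTS".
        old = model.features.conv0
        model.features.conv0 = nn.Conv2d(
            in_channels, old.out_channels,
            kernel_size=old.kernel_size, stride=old.stride,
            padding=old.padding, bias=False,
        )
    # Replace the 1000-class ImageNet head with a melasma/non-melasma head.
    model.classifier = nn.Linear(model.classifier.in_features, 2)
    return model

model = build_classifier(in_channels=9)
x = torch.randn(1, 9, 640, 480)  # one fused three-mode image: (batch, channels, H, W)
logits = model(x)                # shape (1, 2): scores for non-melasma / melasma
```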
Results
Performance of Four Models
We examined the receiver operating characteristic (ROC) curves of the four models trained in this study; the results on the test set are shown in Fig. 3. In terms of AUC (an indicator of the confidence of the prediction results and an important performance index for binary classifiers), the DenseNet121 model outperformed the others with a value of 97.87%. In addition, confusion matrices were used to visualize the performances of the four models (Supplementary Material Fig. S2). The ResNet50 model achieved the highest sensitivity of 97.14% but a relatively poor specificity of 88.76%, resulting in an accuracy of 91.45% on the test set. In contrast, the DenseNet121 model performed well in identifying both negative (95.88% specificity) and positive (94.29% sensitivity) samples. After comparing the performances of the four deep learning models, we found that the network based on DenseNet121 achieved the highest accuracy of 93.68% (Table 1). Therefore, among these four deep learning models, DenseNet121 was regarded as the optimal model for melasma diagnosis on the basis of clinical images.
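For concreteness, the reported indices can be computed from a model's test-set predictions as follows; this is a generic sketch using scikit-learn, not the authors' code.

```python
# Hypothetical evaluation sketch: accuracy, sensitivity, and specificity from the
# confusion matrix, and AUC from the predicted probability of the positive class.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true: np.ndarray, p_melasma: np.ndarray, threshold: float = 0.5) -> dict:
    y_pred = (p_melasma >= threshold).astype(int)        # 1 = melasma, 0 = non-melasma
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),                   # true-positive rate
        "specificity": tn / (tn + fp),                   # true-negative rate
        "auc": roc_auc_score(y_true, p_melasma),         # threshold-free ranking quality
    }
```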
Interpretability of Optimal Model
Subsequently, Grad-CAM was used to examine model interpretability. In the Grad-CAM results presented in Fig. 4, the red regions represent areas activated by the network, whereas the blue regions represent areas that were not activated. Activation was focused on the melasma lesions, which mainly appeared in the cheek and malar areas. Notably, in images of patients with melasma coexisting with other facial skin conditions (such as seborrheic keratosis and post-acne hyperpigmentation), the network focused more on the melasma lesions than on the other skin disorders.
Fig. 4 Visual explanations of melasma cases via gradient-based localization: A a patient with a dark brown facial melasma lesion; B a patient with a hazel facial melasma lesion; C a patient with facial melasma and seborrheic keratosis; D a patient with facial melasma and post-acne hyperpigmentation. Pixel color, from dark blue to red, indicates importance from lowest to highest. Eyes are covered manually for privacy purposes
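For reference, Grad-CAM can be implemented with a pair of hooks on the last convolutional block. The sketch below assumes the PyTorch DenseNet121 setup sketched in the Methods and a recent PyTorch version (>= 1.8 for full backward hooks); the choice of target layer is our assumption, not necessarily the authors'.

```python
# Hypothetical Grad-CAM sketch via forward/backward hooks (PyTorch >= 1.8 assumed).
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class):
    """image: (1, C, H, W) tensor; returns a heatmap in [0, 1] of shape (H, W)."""
    acts, grads = {}, {}
    layer = model.features.denseblock4  # last dense block of DenseNet121 (assumed target layer)
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

    model.eval()
    score = model(image)[0, target_class]
    model.zero_grad()
    score.backward()                    # populates grads["g"] via the backward hook
    h1.remove(); h2.remove()

    # Weight each feature map by its spatially averaged gradient, then ReLU.
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)           # (1, K, 1, 1)
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))  # (1, 1, h, w)
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
    return cam[0, 0]
```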
Network Performance Using Multimode Input
Since the different modes of the VISIA system show different skin characteristics, we investigated the performance of the network for each image mode. Of the five modes, "BROWN SPOTS" performed best, with an accuracy of 94.42% and an AUC of 98.57% (Table 2 and Fig. 5). Additionally, the accuracy and AUC of "UV SPOTS" were 93.49% and 97.55%, respectively, similar to those of the "NORMAL" mode. The "PORPHYRINS" and "RED AREAS" modes performed slightly worse, with accuracies of 88.29% and 82.34%, respectively. The results of the confusion matrices are shown in Supplementary Material Fig. S3.
Having ranked the five modes, we further explored the performance of the network on multimode input. On the basis of the results for each mode, we determined the combinations of multimode input as follows: "NORMAL + BROWN SPOTS," "NORMAL + BROWN SPOTS + UV SPOTS," "NORMAL + BROWN SPOTS + UV SPOTS + PORPHYRINS," and "NORMAL + BROWN SPOTS + UV SPOTS + PORPHYRINS + RED AREAS." Among these combinations, "NORMAL + BROWN SPOTS + UV SPOTS" achieved the highest accuracy of 97.4% (Table 3), and its AUC of 99.28% was slightly higher than those of the others (Fig. 6). Supplementary Material Fig. S4 shows the confusion matrix results for these multimode combinations. A sketch of how the modes of one shot can be fused into a single multichannel input is given below.
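To make the fusion concrete: each mode is an RGB image, so stacking three modes of the same shot along the channel axis yields a 9-channel input matching the widened stem convolution sketched in the Methods. This is an assumed illustration, not the authors' code.

```python
# Hypothetical mode-fusion sketch: concatenate per-mode RGB tensors channel-wise.
import torch

def fuse_modes(normal: torch.Tensor, brown_spots: torch.Tensor, uv_spots: torch.Tensor) -> torch.Tensor:
    """Each argument is a (3, H, W) tensor of one VISIA mode from the same shot;
    returns a (9, H, W) tensor for the multichannel classifier."""
    return torch.cat([normal, brown_spots, uv_spots], dim=0)
```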
Discussion
In recent years, deep learning has gained widespread attention for medical diagnosis, grading, and efficacy evaluation. In particular, deep learning shows superior performance in image classification and recognition tasks and has been applied in the field of dermatology. A previous study reported an artificial intelligence-assisted decision-making system for skin tumors with a recognition rate of 91.2% for benign and malignant skin tumors [16]. Lim et al. used a convolutional neural network to grade acne severity from facial images of patients, obtaining a best classification accuracy of 67% [17]. Additionally, in rosacea, psoriasis, eczema, and atopic dermatitis, deep learning has been proven to have excellent diagnostic or classification capabilities [18,19,20,21].
So far, there have been only a few reports on the application of deep learning to melasma. One study used voting-based probabilistic linear discriminant analysis to classify non-tumorous skin pigmentation diseases, achieving an accuracy of 67.7% for melasma [22]. Another study presented a spatial compounding-based denoising convolutional neural network for quantifying and evaluating melanin in melasma optical coherence tomography images [23]. However, research with large training datasets and high-accuracy diagnostic systems for melasma facial images is still lacking. In this study, a large number of images were used as the training set for deep learning models, and we developed an accurate diagnostic system based on DenseNet121 for clinical melasma facial images. In a further experiment with multimode image input, we fused different modes of images from the VISIA system and fed them to the network to simulate how a dermatologist uses multiple modes of images to diagnose melasma. Finally, with the "NORMAL + BROWN SPOTS + UV SPOTS" combination, we acquired a high accuracy of 97.4% and an AUC of 99.28%.
In this study, we chose four deep learning models for comparison: three traditional convolutional neural networks (MobileNetv2, ResNet50, and DenseNet121) and a novel network (Swin Transformer). In previous studies, MobileNetv2 was able to run on lightweight computing devices and achieved high accuracy in the classification of skin disease images [24]; ResNet50 showed superior performance for segmentation and classification in multiple skin lesion diagnostics [25]; DenseNet121 was also used to segment skin and lesions [26]; and Swin Transformer, a novel fine-grained recognition framework, provided powerful and robust features in medical image segmentation [27]. We therefore selected these mainstream, high-performance deep learning networks to explore their performance in the melasma diagnosis task. Our results indicated that, among the three convolutional networks, DenseNet121 performed slightly better than ResNet50, whereas MobileNetv2 performed worst. In recent studies, DenseNet and ResNet have been compared for the identification of ductal carcinoma in situ and microinvasion of the breast using ultrasound images [28], the recognition of digital dental X-ray images [29], and the classification of glaucomatous fundus images [30]; in each of these applications, DenseNet was reported to perform better than ResNet. ResNet bypasses signals from one layer to the next through identity connections and combines features by summing them before passing them into a layer. In contrast, to ensure maximum information flow, each layer of DenseNet takes additional inputs from all preceding layers, passes its own feature maps to all subsequent layers, and combines features by concatenating them. These properties of DenseNet appear to be useful for our clinical images of melasma and non-melasma. Similarly, on another set of images with pigmented facial skin lesions, DenseNet also performed better than ResNet, which is consistent with our results [31]. On the other hand, Swin Transformer, proposed in 2021 as a novel model for computer vision, constructs hierarchical feature maps and has computational complexity linear in image size. At present, only a few studies have reported the application of Swin Transformer to medical images [32, 33]. To the best of our knowledge, this is the first study to apply Swin Transformer to clinical dermatology images. Although Swin Transformer was not the best model for our task, its accuracy reached 88.48%; its medical applications are worthy of further study.
In clinical practice, dermatologists usually combine clinical information about patients, such as age, medical history, dermoscopy results, and multimode images from the VISIA system, to make the final diagnosis. This increases the likelihood of a correct decision, and it is worth investigating whether the same holds for artificial intelligence. Previous studies have shown that multiple types of information can improve the diagnostic ability of deep learning models. In a study by Tschandl et al., both dermoscopic images and clinical close-ups were used to train the network, and the combination acquired better results than either modality alone [34]. Jin et al. found that multiple extracted histological features, including nuclei, mitosis, epithelial, and tubular cells, could further improve the detection of lymph node metastasis in patients with breast cancer [35]. In this study, we used a multichannel image input method to fuse multiple image modes from the VISIA system and found that it improved the accuracy of our network to some extent: as the amount of image information increased, the network performance improved. However, the optimal combination was "NORMAL + BROWN SPOTS + UV SPOTS," which had slightly higher accuracy and AUC than the other combinations; the "PORPHYRINS" and "RED AREAS" modes did not improve network performance under this multimode input method. This is noteworthy because more information in the training data was previously believed to improve network performance [36]. The five modes have the following characteristics: (1) the "NORMAL" mode identifies spots by their color and contrast with the surrounding skin; (2) the "UV SPOTS" mode is generated by the selective absorption of UV light by epidermal melanin; (3) the "BROWN SPOTS" mode reflects deeper deposition of melanin under cross-polarized light; (4) the "PORPHYRINS" mode is photographed under UV light, exploiting the fluorescence of porphyrins; and (5) the "RED AREAS" mode measures hemoglobin content under cross-polarized light [15]. Since the pigmentation of melasma is considered to involve both the epidermis and the dermis [37], the "NORMAL," "UV SPOTS," and "BROWN SPOTS" modes might be the most useful to dermatologists in melasma diagnosis. Although the classification basis of a deep learning model is hidden and often regarded as a black box, the information from the "PORPHYRINS" and "RED AREAS" images might have confused our multichannel input network during information fusion, thereby degrading its performance. The specific reason for this outcome requires further investigation.
On the basis of common clinical images, we developed a straightforward and rapid melasma diagnosis network, thus avoiding time-consuming invasive or noninvasive methods such as Wood's lamps, dermoscopy, and reflectance confocal microscopy. For subsequent clinical translation, we expect to develop a remote diagnostic tool for smartphones or the web based on single-mode imaging for convenient self-diagnosis by patients, as well as downstream medical software for the VISIA system based on multimode image input, aiming to provide an accurate assistant diagnostic tool.
However, this study has some limitations. First, the data were obtained from a single center; it would be better to validate our network at multiple centers to include more patients with different skin conditions. Second, owing to the fixed background and lighting, almost all image backgrounds were black; in future practical applications, it might be necessary to extend our diagnostic system to mask the background and better handle varied imaging conditions. Third, this study was conducted in Southwest China, and the Fitzpatrick skin types of all included patients were III and IV. Since higher phototypes exhibit more melanocytes [38], the network performance might vary in people of different Fitzpatrick phototypes; this requires further exploration with datasets of different skin types. In addition, our model was not able to quantify melasma severity on the basis of the Melasma Area and Severity Index, which would require higher-performance networks and more sophisticated algorithms. Finally, our results lack validation of the model in actual use, i.e., whether it could aid the diagnosis of melasma by non-dermatologists; this needs further investigation.
Conclusions
This study is the first to use a large sample of melasma images to compare the diagnostic performance of multiple deep learning models and to develop a diagnostic system for melasma. Multimode image combinations were evaluated using images taken under different lighting conditions in the VISIA system, which further increased the diagnostic accuracy for melasma. Our study could provide a basis for the development of clinical diagnostic applications for melasma and other skin disorders. However, more clinical images of patients from multiple centers are needed to further improve the proposed diagnostic system.
References
1. Balkrishnan R, McMichael AJ, Camacho FT, et al. Development and validation of a health-related quality of life instrument for women with melasma. Br J Dermatol. 2003;149(3):572–7.
2. Pandya AG, Hynan LS, Bhore R, et al. Reliability assessment and validation of the Melasma Area and Severity Index (MASI) and a new modified MASI scoring method. J Am Acad Dermatol. 2011;64(1):78–83.
3. Ikino JK, Nunes DH, Silva VP, Fröde TS, Sens MM. Melasma and assessment of the quality of life in Brazilian women. An Bras Dermatol. 2015;90(2):196–200.
4. Passeron T, Picardo M. Melasma, a photoaging disorder. Pigment Cell Melanoma Res. 2018;31(4):461–5.
5. Handel AC, Miot LD, Miot HA. Melasma: a clinical and epidemiological review. An Bras Dermatol. 2014;89(5):771–82.
6. Perez M, Luke J, Rossi A. Melasma in Latin Americans. J Drugs Dermatol. 2011;10(5):517–23.
7. Aishwarya K, Bhagwat P, John N. Current concepts in melasma - a review article. J Skin Sex Transm Dis. 2022;2(1):13–7.
8. Brochez L, Verhaeghe E, Bleyen L, Naeyaert JM. Diagnostic ability of general practitioners and dermatologists in discriminating pigmented skin lesions. J Am Acad Dermatol. 2001;44(6):979–86.
9. Wagner RF Jr, Wagner D, Tomich JM, Wagner KD, Grande DJ. Diagnoses of skin disease: dermatologists vs nondermatologists. J Dermatol Surg Oncol. 1985;11(5):476–9.
10. Zeng X, Qiu Y, Xiang W. In vivo reflectance confocal microscopy for evaluating common facial hyperpigmentation. Skin Res Technol. 2020;26(2):215–9.
11. Angsuwarangsee S, Polnikorn N. Combined ultrapulse CO2 laser and Q-switched alexandrite laser compared with Q-switched alexandrite laser alone for refractory melasma: split-face design. Dermatol Surg. 2003;29(1):59–64.
12. Lai D, Zhou S, Cheng S, Liu H, Cui Y. Laser therapy in the treatment of melasma: a systematic review and meta-analysis. Lasers Med Sci. 2022;37(4):2099–110.
13. Kim C, Gao JC, Moy J, Lee HS. Fractional CO2 laser and adjunctive therapies in skin of color melasma patients. JAAD Int. 2022;8:118–23.
14. Artzi O, Horovitz T, Bar-Ilan E, et al. The pathogenesis of melasma and implications for treatment. J Cosmet Dermatol. 2021;20(11):3432–45.
15. Goldsberry A, Hanke CW, Hanke KE. VISIA system: a possible tool in the cosmetic practice. J Drugs Dermatol. 2014;13(11):1312–4.
16. Li CX, Fei WM, Shen CB, et al. Diagnostic capacity of skin tumor artificial intelligence-assisted decision-making software in real-world clinical settings. Chin Med J. 2020;133(17):2020–6.
17. Lim ZV, Akram F, Ngo CP, et al. Automated grading of acne vulgaris by deep learning with convolutional neural networks. Skin Res Technol. 2020;26(2):187–92.
18. Zhao Z, Wu CM, Zhang S, et al. A novel convolutional neural network for the diagnosis and classification of rosacea: usability study. JMIR Med Inform. 2021;9(3):e23415.
19. Zhao S, Xie B, Li Y, et al. Smart identification of psoriasis by images using convolutional neural networks: a case study in China. J Eur Acad Dermatol Venereol. 2020;34(3):518–24.
20. Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol. 2018;138(7):1529–38.
21. Huang K, He X, Jin Z, et al. Assistant diagnosis of basal cell carcinoma and seborrheic keratosis in Chinese population using convolutional neural network. J Healthc Eng. 2020;2020:1713904.
22. Liang Y, Sun L, Ser W, et al. Classification of non-tumorous skin pigmentation disorders using voting based probabilistic linear discriminant analysis. Comput Biol Med. 2018;99:123–32.
23. Chen IL, Wang YJ, Chang CC, et al. Computer-aided detection (CADe) system with optical coherent tomography for melanin morphology quantification in melasma patients. Diagnostics (Basel). 2021;11(8):1498.
24. Srinivasu PN, SivaSai JG, Ijaz MF, Bhoi AK, Kim W, Kang JJ. Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors (Basel). 2021;21(8):2852.
25. Al-Masni MA, Kim DH, Kim TS. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput Methods Programs Biomed. 2020;190:105351.
26. Munthuli A, Intanai J, Tossanuch P, et al. Extravasation screening and severity prediction from skin lesion image using deep neural networks. Annu Int Conf IEEE Eng Med Biol Soc. 2022;2022:1827–33.
27. Du H, Wang J, Liu M, Wang Y, Meijering E. SwinPA-Net: Swin Transformer-based multiscale feature pyramid aggregation network for medical image segmentation. IEEE Trans Neural Netw Learn Syst. 2022.
28. Zhu M, Pi Y, Jiang Z, et al. Application of deep learning to identify ductal carcinoma in situ and microinvasion of the breast using ultrasound imaging. Quant Imaging Med Surg. 2022;12(9):4633–46.
29. Liu F, Gao L, Wan J, et al. Recognition of digital dental X-ray images using a convolutional neural network. J Digit Imaging. 2022. https://doi.org/10.1007/s10278-022-00694-9.
30. Singh H, Saini SS, Lakshminarayanan V. Rapid classification of glaucomatous fundus images. J Opt Soc Am A Opt Image Sci Vis. 2021;38(6):765–74.
31. Yang Y, Ge Y, Guo L, et al. Development and validation of two artificial intelligence models for diagnosing benign, pigmented facial skin lesions. Skin Res Technol. 2021;27(1):74–9.
32. Peng L, Wang C, Tian G, et al. Analysis of CT scan images for COVID-19 pneumonia based on a deep ensemble framework with DenseNet, Swin transformer, and RegNet. Front Microbiol. 2022;13:995323.
33. Chi J, Sun Z, Wang H, Lyu P, Yu X, Wu C. CT image super-resolution reconstruction based on global hybrid attention. Comput Biol Med. 2022;150:106112.
34. Tschandl P, Rosendahl C, Akay BN, et al. Expert-level diagnosis of nonpigmented skin cancer by combined convolutional neural networks. JAMA Dermatol. 2019;155(1):58–65.
35. Jin YW, Jia S, Ashraf AB, Hu P. Integrative data augmentation with U-Net segmentation masks improves detection of lymph node metastases in breast cancer patients. Cancers. 2020;12(10):2934.
36. Li Y, Kong AW, Thng S. Segmenting vitiligo on clinical face images using CNN trained on synthetic and internet images. IEEE J Biomed Health Inform. 2021;25(8):3082–93.
37. Kang HY, Bahadoran P, Suzuki I, et al. In vivo reflectance confocal microscopy detects pigmentary changes in melasma at a cellular level resolution. Exp Dermatol. 2010;19(8):e228–33.
38. Chan IL, Cohen S, da Cunha MG, Maluf LC. Characteristics and management of Asian skin. Int J Dermatol. 2019;58(2):131–43.
Acknowledgements
Funding
This study was supported by the National Natural Science Foundation of China (no. 82073462). The journal's Rapid Service Fee was funded by the authors.
Author Contributions
Conceptualization, Lin Liu and Chen Liang. Data collection and processing, Tingqiao Chen, Yufan Lan, and Jiamei Wen. Interpretation of data, Yangmei Chen. Software, Yuzhou Xue and Chen Liang. Writing of the initial draft, Lin Liu. Revision and finalization, Xinyi Shao and Jin Chen. All authors contributed to the article and approved the submitted version.
Disclosures
Lin Liu, Chen Liang, Yuzhou Xue, Tingqiao Chen, Yangmei Chen, Yufan Lan, Jiamei Wen, Xinyi Shao, and Jin Chen have nothing to disclose.
Compliance with Ethics Guidelines
This study was performed in accordance with the Declaration of Helsinki of 1964 and was approved by the Ethics Committee of the First Affiliated Hospital of Chongqing Medical University (no. 2022-K349). All patient photographs were anonymized by covering the eyes manually for privacy purposes.
Medical Writing/Editorial Assistance
English-language editing was provided by Editage and funded by the authors.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Supplementary Information
Below is the link to the electronic supplementary material. Fig. S1: Example images of the five modes from one shot. The eyes are covered manually for privacy purposes.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
Cite this article
Liu, L., Liang, C., Xue, Y. et al. An Intelligent Diagnostic Model for Melasma Based on Deep Learning and Multimode Image Input. Dermatol Ther (Heidelb) 13, 569–579 (2023). https://doi.org/10.1007/s13555-022-00874-z
Keywords
- Convolutional neural network
- Deep learning
- Diagnosis
- Melasma
- Network performance