
Detection of Tumoral Epithelial Lesions Using Hyperspectral Imaging and Deep Learning

Part of the Lecture Notes in Computer Science book series (LNTCS, volume 12139)


We propose a new method for the analysis and classification of hyperspectral images (HSI). The method uses deep learning to interpret the molecular vibrational behaviour of healthy and tumoral human epithelial tissue, based on data gathered via short-wave infrared (SWIR) spectroscopy. We analyzed samples of Melanoma, Dysplastic Nevus and healthy skin. Preliminary results show that human epithelial tissue is sufficiently sensitive to SWIR radiation to make the differentiation between healthy and tumoral tissue possible. We conclude that HSI-SWIR can be used to build new methods for tumor classification.


  • Short-Wave InfraRed
  • Hyperspectral Imaging
  • Deep learning
  • Skin lesions
  • Dysplastic Nevus
  • Melanoma

Acknowledgments. The authors thank the Instituto Federal de Educação, Ciência e Tecnologia de Goiás for the qualification license and the Fundação de Amparo à Pesquisa do Estado de Goiás for the scholarship that supported this research.

1 Introduction

Skin cancer is the most frequently diagnosed malignant tumor in the world [48]. This pathology usually presents in two forms: a) melanoma, which originates from the pigment-producing skin cells called melanocytes, and b) non-melanoma [25].

Although less frequent than other tumors, melanoma is the most aggressive type of skin cancer due to its high potential for metastasis and its high mortality. Currently, melanoma accounts for approximately 3% of skin cancer cases and 74% of deaths [5, 11, 20, 25, 39, 49]. In 2015, it was estimated that in the United States alone 73,870 new cases of melanoma would be diagnosed, with 9,940 deaths [47]. For 2017, an estimated 87,110 diagnoses of the disease and about 9,730 deaths were expected [49]. In Brazil, 6,620 new diagnosed cases are estimated for each year of the 2018–2019 biennium [25].

2 Problem

To increase the chances of survival of patients with melanoma, early diagnosis is essential. When the disease is detected at an early stage, the chances of cure are high, but late diagnosis makes treatment ineffective [20]. The traditional method of skin cancer detection begins with a visual inspection: if a suspicious stain is identified, the doctor analyzes characteristics such as size, color and texture, and asks the patient questions about the stain [48]. Along with the visual inspection, some dermatologists apply a technique called dermoscopy, also known as epiluminescence microscopy (ELM). This technique uses the dermatoscope, a surface microscope with a light source that is held close to the skin for a more detailed view of the lesions [9, 26]. If suspicion remains, further examinations such as blood tests, biopsies and imaging tests may be performed to confirm or rule out the diagnosis [48].

To overcome the difficulties inherent in the physician's manual visual inspection, an automated method that assists with the task of identifying suspicious stains may increase the effectiveness of the inspection and reduce the subjectivity of the examination.

3 Proposed Solution

As an alternative to the traditional method of investigating skin lesions, in particular Melanoma, by manual dermoscopic inspection, we propose automated inspection by Hyperspectral Imaging (HSI). In the context of medical imaging, HSI is an emerging technology that provides, in addition to data such as size and shape, information on the chemical composition of the analyzed matter from a set of spatially arranged spectral signals, where each spectral signal corresponds to the electromagnetic interaction of light with the material in a specific portion of the sample [33]. HSI has been used for the last two decades in medical applications [1, 7, 8, 29, 37, 44, 52] because it offers great potential for the noninvasive diagnosis of diseases, for surgical guidance [33] and, in particular, for the diagnosis of tumors [2, 3, 4, 10, 16, 18, 19, 22, 28, 32, 34, 35, 38, 41, 42, 45, 46]. Thus, automated inspection employing HSI increases the chances of identifying a suspicious stain of tumoral tissue even when the stain is very small and its shape, color and texture are insufficient for accurate dermoscopic identification.

An HSI is composed of n two-dimensional images, each built from the values measured at a given wavelength [12, 40]. Figure 1 exemplifies the organization of the n two-dimensional images in an HSI and the spatial arrangement of the spectra within the image.

Fig. 1. Representation of the layer structure of an HSI, the spatial arrangement of pixel-vectors and their respective spectral signals. Adapted from Akbari et al. [2].

The set of sequentially arranged multicolored frames represents the hypercube. Each frame corresponds to a two-dimensional image at a specific wavelength within the hypercube's spectral range. The vertical arrows indicate two distinct sets of pixels selected from the same spatial reference in all images; each set makes up what is called a pixel-vector. The horizontal arrows indicate the spectral representations of two HSI pixel-vectors in the Cartesian plane, where the \(\lambda \) axis corresponds to the HSI wavelengths and the i axis corresponds to the measured light intensity.
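The hypercube structure and the pixel-vector concept described above can be sketched as a three-dimensional NumPy array. This is a minimal illustration; the 64×64 spatial size, the 256-band depth and the random values are placeholders, not taken from the paper's data:

```python
import numpy as np

# Hypothetical hypercube: 64x64 spatial pixels, 256 spectral bands,
# mirroring the layer structure described for Fig. 1.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 256))

# A pixel-vector is the spectrum at one spatial position across all bands.
pixel_vector = cube[10, 20, :]
assert pixel_vector.shape == (256,)

# A single-wavelength layer is one two-dimensional image of the hypercube.
layer = cube[:, :, 128]
assert layer.shape == (64, 64)
```

Plotting `pixel_vector` against the band wavelengths gives exactly the spectral-signal curve sketched on the Cartesian plane in Fig. 1.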

3.1 SWIR Spectroscopy

The construction of an HSI is usually performed with a particular spectroscopy method, and each spectroscopic technique acts on a particular region of the light spectrum. A region still little explored in medical applications, especially in tumor diagnosis, is the short-wave infrared (SWIR), which comprises the wavelengths between 1000 nm and 3000 nm [51]. When irradiated by electromagnetic waves in the SWIR region, matter exhibits a molecular vibrational behavior whose intensity is determined by the energy absorption at a given wavelength [6, 23].

Consider an HSI as illustrated in Fig. 1, obtained by SWIR spectroscopy from an epithelial tissue sample, where the values measured at each wavelength correspond to the energy absorption due to the molecular vibration of the tissue. Each pixel-vector of this HSI corresponds to the molecular vibration of the tissue at that specific location. Thus, the molecular vibration contained in any HSI pixel-vector can be represented as a spectral signal, as shown in Fig. 1, where on the Cartesian plane the \(\lambda \) axis corresponds to wavelengths in the SWIR region and the i axis corresponds to the intensity measured at the respective wavelength.

Healthy and tumoral epithelial tissues under SWIR radiation may exhibit different energy absorption intensities at one or more wavelengths due to chemical variations between the tissues. Thus, any measurable difference between the spectral signals of different types of epithelial tissue may determine the feasibility of constructing new tumor diagnosis methods using HSI and SWIR. Therefore, in this work we propose to investigate the vibrational behavior of melanoma, dysplastic nevus and healthy skin epithelial tissues under SWIR radiation, and to employ SWIR-obtained HSI as an alternative to manual visual inspection by dermoscopy for identifying tumoral epithelial tissue.

3.2 HSI Acquisition

Hyperspectral images of the skin samples were obtained using a high-speed chemical imaging analyzer, the SisuCHEMA SWIR. It uses a hyperspectral camera (HSC) that combines near-infrared (NIR) spectroscopy with high-resolution spectral images of 256 spectral bands. The spectral range comprises wavelengths between 900 nm and 2500 nm: between 900 nm and 1700 nm the spectral resolution is 10 nm (NIR region), and between 1000 nm and 2500 nm it is 6 nm (SWIR region). Image data is automatically calibrated for reflectance; however, the HSC software also provides an estimated absorbance value calculated from the measured reflectance intensity. This calculated absorbance, denoted pseudo-absorbance [50], is the unit registered in the HSI and used by the proposed classifier.
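The paper does not state the formula the HSC software applies, but a common definition of pseudo-absorbance estimated from reflectance is \(A = \log_{10}(1/R)\). The following sketch assumes that form:

```python
import numpy as np

def pseudo_absorbance(reflectance, eps=1e-6):
    """Pseudo-absorbance from reflectance, A = log10(1/R) (assumed form)."""
    r = np.clip(reflectance, eps, None)  # guard against log of zero
    return np.log10(1.0 / r)

# Full reflectance (R = 1) gives zero pseudo-absorbance; R = 0.1 gives 1.
print(pseudo_absorbance(np.array([1.0, 0.1])))  # -> [0. 1.]
```

Applying this transform band-wise to a calibrated reflectance hypercube yields the pseudo-absorbance cube that the classifier would consume.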

3.3 Epithelial Lesion Classifier

For the task of identifying and classifying the vibrational patterns present in the HSI pixel-vectors, as well as the spatial correlation between them, we propose a classifier based on deep learning. The neural network used in the experiments was RetinaNet, a single unified network composed of a backbone and two task-specific subnets. The backbone computes a convolutional feature map over the entire input image; the first subnet performs convolutional object classification on the backbone output, and the second performs convolutional bounding-box regression [31]. The rationale for this choice lies in the heterogeneity of the pixel-vectors present in a single HSI. Since the primary reference for identifying the sample type in advance is visual inspection under a microscope, labeling individual HSI pixel-vectors is difficult due to the difference in precision and scale between the two imaging modalities. Thus, we adopted the approach of labeling the entire sample rather than individual pixel-vectors.
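The whole-sample labeling adopted here can be sketched as a single bounding box covering the sample region with one label per sample, rather than a label per pixel-vector. The field names, coordinates and file identifier below are hypothetical, purely to illustrate the annotation granularity:

```python
# Hypothetical annotation record for one HSI: the entire sample region
# receives one bounding box and one class label.
annotation = {
    "image": "sample_hsi",  # placeholder identifier, not a real file path
    "boxes": [
        {"x1": 12, "y1": 30, "x2": 140, "y2": 210, "label": "melanoma"},
    ],
}

box = annotation["boxes"][0]
assert box["x2"] > box["x1"] and box["y2"] > box["y1"]
print(box["label"])  # -> melanoma
```

RetinaNet-style detectors train directly on such box-plus-label records, which sidesteps the need for per-pixel ground truth.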

4 Related Works

Many methods have been developed to analyze HSI. Even so, the exploitation of HSI spatial information for tumor classification remains limited. Ding et al. [17] categorized HSI analysis methods by the approach used in the classification: a) methods based on manual procedures and b) methods based on deep learning (DL).

Recent work has contributed significantly to improving HSI classification through deep learning. Hu et al. [24] modeled a CNN architecture with five layers, including convolutional layers, built from basic CNN elements and feeding each pixel-vector with shared weights into the input layer.

Ma, Geng and Wang [36] proposed a CNN architecture, denoted contextual deep learning (CDL), that receives as input each pixel-vector and its neighboring pixel-vectors. This approach allows the extraction of both spectral and spatial information, providing fine-tuning in classification. Chen, Zhao and Jia [15] introduced in 2015 a new architecture employing deep belief networks (DBN) and restricted Boltzmann machines (RBM) for spectral feature extraction and PCA for spatial extraction; the authors proposed a stacked spectral-spatial vector as the network input. In 2016, Chen et al. [13] introduced a network denoted 3-D-CNN that employs multiple convolutional and pooling layers with combined regularization for the extraction of HSI spectral and spatial characteristics.

Pan, Shi and Xu [40] implemented a simplified DL model based on the rolling guidance filter (RGF) and the vertex component analysis network (R-VCANet) for training a network when training samples are not abundant. Ding et al. [17] developed an adaptive CNN-based HSI classification method in which convolutional kernels can be learned automatically from the data through clustering, even without knowing the number of clusters. Similar to the proposal of Chen et al. [14], Li, Zhang and Shen [30] proposed a 3D convolutional neural network structure, called 3D-CNN, as a method for analyzing HSI data, but without any preprocessing or postprocessing, to extract the combined spectral-spatial features deeply and effectively.

The most recent work in the context of HSI skin tumor detection employs an approach described as a non-parametric, online and multidimensional probability density estimate [43]. Using deep learning and HSI, Halicek et al. presented a traditional 6-layer convolutional CNN to classify excised squamous cell carcinoma, thyroid cancer, and normal head and neck tissue samples from 50 patients with an accuracy of 80% [22].

This scenario shows that, despite advances in HSI and deep learning, this combination is still little explored in the context of tumor diagnosis. The contributions of this work are: a) to use SWIR as an HSI acquisition technique on healthy and tumoral epithelial tissues and to investigate the vibrational behavior of the tissues under this spectroscopy technique, and b) to develop a new method for classifying skin lesion samples by object detection through spatial and spectral classification of HSI employing deep learning.

5 Samples

All samples used in the experiments are human epithelial tissue taken from patients during laboratory procedures performed by physicians and prepared as pathology slides. The following sample sets were defined: C1) Melanoma, containing 12 samples divided into 34 parts; C2) Dysplastic Nevi, composed of 18 samples divided into 72 parts; and C3) Healthy Skin, with 5 samples divided into 17 parts. The samples were fixed on glass slides without the addition of dyes and without coverslipping. Sample thickness is 20 \(\upmu m\).

Fig. 2. Skin sample with melanoma fixed to a glass slide.

Figure 2 shows two slides referring to the same skin sample with Melanoma. This sample was divided into three parts per slide: (A) the slide prepared for microscope viewing and (B) the slide prepared for scanning with SWIR spectroscopy to obtain the respective HSI.

6 Methodological Procedures

The proposed solution described in Sect. 3 was applied to the samples presented in Sect. 5 through the following methodological procedures: A) Sample scanning, B) Annotation, C) Training and D) Detection. Figure 3 illustrates the activities of the Annotation, Training and Detection procedures, each represented by a gray bounding box, along with the flow of activities and their inputs and outputs.

Fig. 3. Methodology of activities with execution flow, inputs and outputs.

6.1 Sample Scanning

Sample scanning generates a hyperspectral image of each sample by SWIR spectroscopy. This procedure is fairly simple: it consists of selecting the lens to be used for HSI acquisition, arranging the sample in the reading tray, adjusting the distance from the sensor to the sample, adjusting the lens focus, setting the sensor exposure time and, finally, the tray speed during scanning. The resulting images are inputs for the Annotation, Training and Detection procedures.

6.2 Annotation

Annotation of the hyperspectral images is divided into two activities: 1. Generate visual representation, which consists of transforming one of the layers of the HSI into a visible image to identify the position of the sample within the image. This visual representation enables the analyst to view the HSI scanning result, allows the annotation of the images for classifier training and, in the Detection procedure, allows viewing the classification result of the trained classifier. 2. Annotate regions and labels: from the visible image it is possible to delimit the region of the sample within the HSI and its respective type.

6.3 Training

The construction of the classifier begins with the reduction of the spectral dimension of each HSI, performed in activity 3. Reduce data size and obtain coefficients. This activity aims to minimize information overlap that may exist at different wavelengths and to simplify the mathematical model by reducing the data. For this purpose, we used Principal Component Analysis (PCA) [27]. For each training-set HSI \(\mathbf {X'} = [\mathbf {x}_1, \mathbf {x}_2, \ldots , \mathbf {x}_p] \) of n pixel-vectors over p possibly correlated variables, a new cube of uncorrelated axes with ordered variances, \(\mathbf {PC'} = [\mathbf {pc}_1, \mathbf {pc}_2, \ldots , \mathbf {pc}_p]\), is generated, preserving the spatial arrangement of the pixel-vectors. The coefficients obtained by PCA allow reconstructing the original HSI from \(\mathbf {PC}\) or transforming a new HSI into \(\mathbf {PC}\). In activity 4. Train neural network, \(\mathbf {PC}\) is used to train a neural network built from a RetinaNet implementation using the TensorFlow and Keras frameworks [21]. A minor adaptation was made to the original implementation to allow training from n-layered images, and the image resizer was disabled so as not to change the spectral-dimension data. The RetinaNet configuration consists of a ResNet50 backbone, 250 epochs with 1000 steps each, batch size 4, the Adam optimizer and a learning rate of 0.0001. The neural network code is available online.
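The spectral reduction of activity 3 can be sketched with a plain NumPy PCA. This is an illustrative sketch, not the authors' implementation; the cube dimensions and the choice of k = 10 components are hypothetical:

```python
import numpy as np

def fit_pca(pixels, k):
    """Fit PCA on an (n_pixels, p_bands) matrix; return the mean spectrum
    and the first k principal-axis coefficients."""
    mean = pixels.mean(axis=0)
    # SVD of the centered data yields axes ordered by explained variance.
    _, _, vt = np.linalg.svd(pixels - mean, full_matrices=False)
    return mean, vt[:k]                      # coefficients: (k, p)

def project(cube, mean, coeffs):
    """Project an (h, w, p) HSI onto the principal axes, preserving the
    spatial arrangement of the pixel-vectors."""
    h, w, p = cube.shape
    flat = cube.reshape(-1, p) - mean
    return (flat @ coeffs.T).reshape(h, w, -1)

rng = np.random.default_rng(1)
train_cube = rng.random((32, 32, 256))       # hypothetical training HSI
mean, coeffs = fit_pca(train_cube.reshape(-1, 256), k=10)
pc_cube = project(train_cube, mean, coeffs)  # reduced cube for training
assert pc_cube.shape == (32, 32, 10)
```

The same `mean` and `coeffs` would then be stored and reused in the Detection procedure (activity 5) to place each new HSI in the same dimensional space as the training samples.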

6.4 Detection

Detection is the final procedure of the methodology, evaluating the trained network through object detection and classification on new HSI. The first activity, 5. Transform data, consists of placing each new HSI in the same dimensional space as the samples used in training, using the coefficients obtained in activity 3 of the Training procedure (Sect. 6.3). Finally, each image is submitted to the trained neural network, which detects skin lesions and produces a two-dimensional visible representation of the HSI, equivalent to activity 1. Generate visual representation of the Annotation procedure, but with the demarcation of the region where the lesion is present and its respective label.

7 Results

An HSI for each sample of the sets C1, C2 and C3 defined in Sect. 5 was generated using the Sample Scanning procedure described in Sect. 6.

Fig. 4. Two-dimensional representation of the HSI of the C1 (Melanoma) sample set.

Figure 4 shows a two-dimensional visual representation of each HSI of set C1, constructed by converting the pseudo-absorbance intensity measurements at wavelength \(\lambda = 1320\) nm to whole gray-scale values in the range 0–255. Each sample in set C1 is identified by the abbreviations \( L1, L2, \ldots , L12\). Similar representations were also produced for the Dysplastic Nevi (C2) and Healthy Skin (C3) samples. These representations were used in the Annotation and Detection procedures.
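The mapping of one wavelength layer to whole gray values can be sketched with min-max scaling. The scaling rule is an assumption; the paper only states that intensities were converted to whole gray values in 0–255:

```python
import numpy as np

def layer_to_gray(cube, band):
    """Min-max scale one HSI layer to whole gray values in 0-255."""
    layer = cube[:, :, band].astype(float)
    lo, hi = layer.min(), layer.max()
    scaled = (layer - lo) / (hi - lo) * 255.0
    return np.round(scaled).astype(np.uint8)

rng = np.random.default_rng(2)
cube = rng.random((8, 8, 256))               # hypothetical HSI
gray = layer_to_gray(cube, band=128)         # one wavelength layer
assert gray.dtype == np.uint8
assert gray.min() == 0 and gray.max() == 255
```

Saving `gray` as an image yields the kind of two-dimensional visual representation shown in Fig. 4.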

For the case study, we separated the sample sets described in Sect. 5 into training and test samples. The configuration of this separation is presented in Table 1.

Table 1. Configuration of training and test sets.

After training the classifier on the training data, we performed the Detection procedure on the test samples. The result for the Melanoma samples is illustrated in Fig. 5: the orange bounding boxes correspond to regions suggested by the classifier as containing melanoma, and the light blue boxes to regions classified as healthy skin.

Fig. 5. Melanoma sample classification.

In slide L1, which has three parts of the same sample, only two parts were detected as melanoma. In L7, only one of the three parts was detected. In L8, two regions were suggested, but two of the three parts fell within a single suggested region. In slide L9, despite correct classification of the sample, suggested regions overlapped, with three regions for two sample parts. In L11, only one region was suggested and it was misclassified as Healthy Skin. Finally, in L12, three of the four parts were correctly classified, and one part received a double classification, with overlapping regions receiving different labels, one correct and one wrong.

To determine the accuracy of the classifier, correctly suggested and correctly labeled regions on the parts of each sample were counted as hits; failure to detect a part, or classification with a divergent label, was counted as an error. In numerical terms, the results for the Melanoma samples correspond to 11 hits, 2 misses and 3 unclassified parts, giving the classifier an accuracy of 68.8% for Melanoma samples.
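The reported figure follows from counting every part, with both misses and unclassified parts held against the classifier:

```python
def detection_accuracy(hits, misses, unclassified):
    """Percentage of sample parts correctly detected and labeled."""
    return 100.0 * hits / (hits + misses + unclassified)

# Melanoma test set: 11 hits, 2 misses, 3 unclassified parts.
print(round(detection_accuracy(11, 2, 3), 1))  # -> 68.8
```

The same count applied to the Dysplastic Nevi set (29 correct of 40 parts, Sect. 7) yields the reported 72.5%.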

Figure 6 shows the ten slides \( L13, L14, \ldots , L22 \) of the Dysplastic Nevi samples used in the classifier test. The classification result is presented with dark blue bounding boxes corresponding to the classifier's suggestions for regions with Dysplastic Nevus and, as in Fig. 5, light blue bounding boxes for regions classified as healthy skin.

Fig. 6. Classification of Dysplastic Nevi samples.

In all ten slides, at least one part was correctly classified as Dysplastic Nevus. Of the 40 parts analyzed, 29 were classified correctly. In slides L13, L16 and L20, five parts were erroneously classified as Healthy Skin in three suggested regions, while in slides L15, L16 and L17 five of the twelve parts present were not classified. Therefore, the accuracy of the classifier for the Dysplastic Nevi samples was 72.5%. Notably, for the Dysplastic Nevi samples the regions suggested by the classifier were well defined, without the overlaps or cuts through sample parts that occurred with the Melanoma samples.

Results for healthy tissue samples were inconclusive because of the insufficient number of samples available for both the Training and Detection procedures, and so they are not presented in this study.

8 Conclusion

We presented a proposal employing HSI obtained by SWIR spectroscopy to identify tumoral epithelial tissue, using deep learning for classification. The feasibility of using SWIR corroborates previous studies and confirms the hypothesis that human epithelial tissue is sensitive to SWIR. This confirmation is most evident when using HSI as the data structure: the visual representations constructed for each HSI show that the morphology of the images coincides with the shapes of the samples visible to the naked eye on the slides.

With the implementation of the proposed solution, it was possible to distinguish samples of Melanoma and Dysplastic Nevi by means of the spectral signals and their respective spatial arrangement in the HSI structure. Although the pixel-vectors of the analyzed epithelial tissues have similar spectral profiles, there are subtle intensity differences between the samples that allow them to be distinguished. This result is a strong indication that HSI-SWIR can be used to construct new methods for the classification of epithelial tumors.

While refinements are needed to improve region suggestion for the Melanoma samples, labeling the suggested regions yielded more correct than incorrect results. This shows that deep learning was able to extract spectral and spatial characteristics from tumoral epithelial lesion samples to the extent that they can be distinguished.

It is important to highlight that the samples are not homogeneous, that is, the pathology does not extend over the entire sample. Therefore, we cannot state that all pixel-vectors across the full extent of a Melanoma sample carry the pathology. The precise location of tumor cells is more easily determined using the microscope and the prepared slide. Due to the difference in precision between the microscope and the HSC used in this study, it was not possible to determine which pixel-vectors in the HSI correspond to tumor cells.

We suggest continuing these studies with the following future work: 1) expand the number of samples and perform new experiments to confirm the indications presented in the results; 2) add to the Training procedure an activity that removes pixel-vectors that do not correspond to the skin sample, performing a semantic segmentation of the sample before neural network training; 3) incorporate semantic segmentation as the final activity of the Detection procedure; 4) locate within the sample the pixel-vectors that best match the classified pathology; and 5) apply the proposed solution to images acquired from samples in vivo. We did not perform in vivo acquisition in this study due to the limitations of the available HSC.


References

  1. Afromowitz, M.A., Callis, J.B., Heimbach, D.M., DeSoto, L.A., Norton, M.K.: Multispectral imaging of burn wounds: a new clinical instrument for evaluating burn depth. IEEE Trans. Biomed. Eng. 35(10), 842–850 (1988)
  2. Akbari, H., et al.: Hyperspectral imaging and quantitative analysis for prostate cancer detection. J. Biomed. Opt. 17(7), 076005-1–076005-10 (2012)
  3. Akbari, H., Halig, L.V., Zhang, H., Wang, D., Chen, Z.G., Fei, B.: Detection of cancer metastasis using a novel macroscopic hyperspectral method. In: Proceedings of SPIE, vol. 8317, p. 831711. NIH Public Access (2012)
  4. Akbari, H., Uto, K., Kosugi, Y., Kojima, K., Tanaka, N.: Cancer detection using infrared hyperspectral imaging. Cancer Sci. 102(4), 852–857 (2011)
  5. Almeida, V.L.D., Leitão, A., Reina, L.D.C.B., Montanari, C.A., Donnici, C.L., Lopes, M.T.P.: Câncer e agentes antineoplásicos ciclo-celular específicos e ciclo-celular não específicos que interagem com o DNA: uma introdução. Quim. Nova 28(1), 118–129 (2005)
  6. Ball, D.W.: The Basics of Spectroscopy, vol. 49. SPIE Press, Bellingham (2001)
  7. Bambery, K.R., Wood, B.R., Quinn, M.A., McNaughton, D.: Fourier transform infrared imaging and unsupervised hierarchical clustering applied to cervical biopsies. Aust. J. Chem. 57(12), 1139–1143 (2004)
  8. Calin, M.A., Parasca, S.V., Savastru, D., Manea, D.: Hyperspectral imaging in the medical field: present and future. Appl. Spectrosc. Rev. 49(6), 435–447 (2014)
  9. Carli, P., De Giorgi, V., Soyer, H., Stante, M., Mannone, F., Giannotti, B.: Dermatoscopy in the diagnosis of pigmented skin lesions: a new semiology for the dermatologist. J. Eur. Acad. Dermatol. Venereol. 14(5), 353–369 (2000)
  10. Carrasco, O., Gomez, R.B., Chainani, A., Roper, W.E.: Hyperspectral imaging applied to medical diagnoses and food safety. In: Proceedings of SPIE, vol. 5097, pp. 215–221 (2003)
  11. Carvalho, G.C., Alves, F.: Principais marcadores moleculares para os cânceres de pele e mama. NBC - Periódico Científico do Núcleo de Biociências 4(07), 11–17 (2014)
  12. Chang, C.I.: Hyperspectral Imaging: Techniques for Spectral Detection and Classification, vol. 1. Springer, Boston (2003)
  13. Chen, Y., Jiang, H., Li, C., Jia, X., Ghamisi, P.: Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 54(10), 6232–6251 (2016)
  14. Chen, Y., Lin, Z., Zhao, X., Wang, G., Gu, Y.: Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7(6), 2094–2107 (2014)
  15. Chen, Y., Zhao, X., Jia, X.: Spectral-spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 8(6), 2381–2392 (2015)
  16. Dicker, D.T., et al.: Differentiation of normal skin and melanoma using high resolution hyperspectral imaging. Cancer Biol. Ther. 5(8), 1033–1038 (2006)
  17. Ding, C., Li, Y., Xia, Y., Wei, W., Zhang, L., Zhang, Y.: Convolutional neural networks based hyperspectral image classification method with adaptive kernels. Remote Sens. 9(6), 618 (2017)
  18. Fei, B., Akbari, H., Halig, L.V.: Hyperspectral imaging and spectral-spatial classification for cancer detection. In: 2012 5th International Conference on Biomedical Engineering and Informatics (BMEI), pp. 62–64. IEEE (2012)
  19. Ferris, D.G., et al.: Multimodal hyperspectral imaging for the noninvasive diagnosis of cervical neoplasia. J. Lower Genital Tract Dis. 5(2), 65–72 (2001)
  20. Figueiredo, L.C., Cordeiro, L.N., Arruda, A.P., Carvalho, M.D.F., Ribeiro, E.M., Coutinho, H.D.M.: Câncer de pele: estudo dos principais marcadores moleculares do melanoma cutâneo. Rev. Bras. de Cancerologia 49(3), 179–183 (2003)
  21. Fizyr: Keras implementation of RetinaNet object detection (2019). Accessed 18 Apr 2019
  22. Halicek, M., et al.: Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J. Biomed. Opt. 22, 060503 (2017)
  23. Hansen, M.P., Malchow, D.S.: Overview of SWIR detectors, cameras, and applications. In: Proceedings of SPIE, vol. 6939, p. 69390I (2008)
  24. Hu, W., Huang, Y., Wei, L., Zhang, F., Li, H.: Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015 (2015)
  25. INCA: Estimativa 2018: Incidência de câncer no Brasil (2017). Accessed 14 Aug 2019
  26. Jacques, S.L., Ramella-Roman, J.C., Lee, K.: Imaging skin pathology with polarized light. J. Biomed. Opt. 7(3), 329–340 (2002)
  27. Johnson, R.A., Wichern, D.W.: Applied Multivariate Statistical Analysis, 6th edn. Pearson, London (2014)
  28. Kiyotoki, S., et al.: New method for detection of gastric cancer by hyperspectral imaging: a pilot study. J. Biomed. Opt. 18(2), 026010 (2013)
  29. Koh, K.R., Wood, T.C., Goldin, R.D., Yang, G.Z., Elson, D.S.: Visible and near infrared autofluorescence and hyperspectral imaging spectroscopy for the investigation of colorectal lesions and detection of exogenous fluorophores. In: Proceedings of SPIE, vol. 7169, p. 71691E (2009)
  30. Li, Y., Zhang, H., Shen, Q.: Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 9(1), 67 (2017)
  31. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988 (2017)
  32. Lindsley, E.H., Wachman, E.S., Farkas, D.L.: The hyperspectral imaging endoscope: a new tool for in vivo cancer detection. In: Proceedings of SPIE, vol. 5322, pp. 75–82 (2004)
  33. Lu, G., Fei, B.: Medical hyperspectral imaging: a review. J. Biomed. Opt. 19(1), 010901 (2014)
  34. Lu, G., Halig, L., Wang, D., Chen, Z.G., Fei, B.: Spectral-spatial classification using tensor modeling for cancer detection with hyperspectral imaging. In: Proceedings of SPIE, vol. 9034, p. 903413. NIH Public Access (2014)
  35. Lu, G., Halig, L., Wang, D., Qin, X., Chen, Z.G., Fei, B.: Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging. J. Biomed. Opt. 19(10), 106004 (2014)
  36. Ma, X., Geng, J., Wang, H.: Hyperspectral image classification via contextual deep learning. EURASIP J. Image Video Process. 2015(1), 20 (2015)
  37. Malkoff, D.B., Oliver, W.R.: Hyperspectral imaging applied to forensic medicine. In: Proceedings of SPIE (2000)
  38. Martin, M.E., et al.: Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection. Ann. Biomed. Eng. 34(6), 1061–1068 (2006)
  39. de Moraes Matheus, L.G., Verri, B.H.d.M.A.: Aspectos epidemiológicos do melanoma cutâneo. Revista Ciência e Estudos Acadêmicos de Medicina 1(03) (2015)
  40. Pan, B., Shi, Z., Xu, X.: R-VCANet: a new deep-learning-based hyperspectral image classification method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 10(5), 1975–1986 (2017)
  41. Panasyuk, S.V., Freeman, J.E., Panasyuk, A.A.: Medical hyperspectral imaging for evaluation of tissue and tumor. US Patent 8,320,996, 27 November 2012
  42. Panasyuk, S.V., et al.: Medical hyperspectral imaging to facilitate residual tumor identification during surgery. Cancer Biol. Ther. 6(3), 439–446 (2007)
  43. Pardo, A., Gutiérrez-Gutiérrez, J.A., Lihacova, I., López-Higuera, J.M., Conde, O.M.: On the spectral signature of melanoma: a non-parametric classification framework for cancer detection in hyperspectral imaging of melanocytic lesions. Biomed. Opt. Express 9(12), 6283–6301 (2018)
  44. Schultz, R.A., Nielsen, T., Zavaleta, J.R., Ruch, R., Wyatt, R., Garner, H.R.: Hyperspectral imaging: a novel approach for microscopic analysis. Cytometry Part A 43(4), 239–247 (2001)
  45. Shah, S., Bachrach, N., Spear, S., Letbetter, D., Stone, R., Dhir, R., Prichard, J., Brown, H., LaFramboise, W.: Cutaneous wound analysis using hyperspectral imaging. Biotechniques 34(2), 408–413 (2003)
  46. Siddiqi, A.M., et al.: Use of hyperspectral imaging to distinguish normal, precancerous, and cancerous cells. Cancer Cytopathol. 114(1), 13–21 (2008)
  47. Siegel, R.L., Miller, K.D., Jemal, A.: Cancer statistics. CA Cancer J. Clin. 65(1), 5–29 (2015)
  48. American Cancer Society: Tests for melanoma skin cancer (2016). Accessed 31 July 2017
  49. American Cancer Society: Cancer facts and figures 2017 (2017). Accessed 25 July 2017
  50. SPECIM: SisuCHEMA - Chemical Imaging Analyzer (2015). Accessed 25 July 2017
  51. Zevon, M., et al.: CXCR-4 targeted, short wave infrared (SWIR) emitting nanoprobes for enhanced deep tissue imaging and micrometastatic cancer lesion detection. Small 11(47), 6347–6357 (2015)
  52. Zonios, G., et al.: Diffuse reflectance spectroscopy of human adenomatous colon polyps in vivo. Appl. Opt. 38(31), 6628–6637 (1999)


Correspondence to Daniel Vitor de Lucena.


Copyright information

© 2020 Springer Nature Switzerland AG


de Lucena, D.V., da Silva Soares, A., Coelho, C.J., Wastowski, I.J., Filho, A.R.G. (2020). Detection of Tumoral Epithelial Lesions Using Hyperspectral Imaging and Deep Learning. In: , et al. Computational Science – ICCS 2020. ICCS 2020. Lecture Notes in Computer Science(), vol 12139. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50419-9

  • Online ISBN: 978-3-030-50420-5
