1 Introduction

In the healthcare system, there has been a dramatic increase in demand for medical imaging services, e.g., radiography, endoscopy, Computed Tomography (CT), Mammography (MG), ultrasound, Magnetic Resonance Imaging (MRI), Magnetic Resonance Angiography (MRA), nuclear medicine imaging, Positron Emission Tomography (PET), and pathological tests. Moreover, medical images are often challenging and time-consuming to analyze, a problem compounded by the shortage of radiologists.

Artificial Intelligence (AI) can address these problems. Machine Learning (ML) is an application of AI in which systems learn from data and make predictions or decisions based on past data without being explicitly programmed. ML uses three learning approaches: supervised learning, unsupervised learning, and semi-supervised learning. Classical ML techniques involve feature extraction, and selecting suitable features for a specific problem requires a domain expert. Deep Learning (DL) techniques solve this feature-selection problem. DL is a subset of ML that can automatically extract essential features from raw input data [88]. The concept of DL algorithms was introduced from cognitive and information theories. In general, DL has two properties: (1) multiple processing layers that can learn distinct features of data through multiple levels of abstraction, and (2) unsupervised or supervised learning of feature representations at each layer. A large number of recent review papers have highlighted the capabilities of advanced DLA in medical fields such as MRI [8], radiology [96], cardiology [11], and neurology [155].

Different forms of DLA were borrowed from the field of computer vision and applied to specific medical image analysis tasks. Recurrent Neural Networks (RNNs) and convolutional neural networks are examples of supervised DL algorithms. In medical image analysis, unsupervised learning algorithms have also been studied; these include Deep Belief Networks (DBNs), Restricted Boltzmann Machines (RBMs), autoencoders, and Generative Adversarial Networks (GANs) [84]. DLA are generally applied to detect abnormalities and to classify specific types of disease. When DLA are applied to medical images, Convolutional Neural Networks (CNNs) are ideally suited for classification, segmentation, object detection, registration, and other tasks [29, 44]. A CNN is an artificial visual neural network structure used for medical image pattern recognition based on the convolution operation. Deep learning (DL) applications in medical images are visualized in Fig. 1.

Fig. 1 a X-ray image with pulmonary masses [121] b CT image with lung nodule [82] c Digitized histopathological tissue image [132]

2 Neural networks

2.1 History of neural networks

The study of artificial neural networks and deep learning derives from the attempt to create a computer system that simulates the human brain [33]. In the early 1940s, the neurophysiologist Warren McCulloch and the mathematician Walter Pitts [97] developed a primitive neural network based on what was then known about biological structure. In 1949, Donald Hebb's book "The Organization of Behavior" [100] was the first to describe the process of updating synaptic weights, now referred to as the Hebbian learning rule. In 1958, Frank Rosenblatt's [127] landmark paper defined the structure of the neural network called the perceptron for the binary classification task.

In 1962, Widrow [172] introduced a device called the Adaptive Linear Neuron (ADALINE), implementing the design in hardware. The limitations of perceptrons were emphasized by Minsky and Papert (1969) [98]. The concept of the backward propagation of errors for training was discussed by Werbos (1974) [171]. In 1979, Fukushima [38] designed an artificial neural network called the Neocognitron, with multiple pooling and convolution layers. In 1989, Yann LeCun [71] combined CNN with backpropagation to perform automated recognition of handwritten digits. One of the most important breakthroughs in deep learning occurred in 2006, when Hinton et al. [9] implemented the Deep Belief Network, with several layers of Restricted Boltzmann Machines, greedily training one layer at a time in an unsupervised fashion. Figure 2 shows important advancements in the history of neural networks that led to the deep learning era.

Fig. 2 Demonstrations of significant developments in the history of neural networks [33, 134]

2.2 Artificial neural networks

Artificial Neural Networks (ANN) form the basis for most DLA. An ANN is a computational model with performance characteristics similar to those of biological neural networks. An ANN comprises simple processing units, called neurons or nodes, that are interconnected by weighted links. A biological neuron can be described mathematically by Eq. (1). Figure 3 shows the simplest artificial neural model, known as the perceptron.

$$ y=f\left(w{x}^T+b\right) $$
(1)
Fig. 3 Perceptron [77]

Let x be the vector of inputs (features), w the weight vector, b the bias, f the non-linear activation (e.g., sigmoid) function, and y the scalar output of the node. Constructing a neural network requires three choices: i) the pattern of connections between the neurons (architecture), ii) the method of determining the weights on the connections (training or learning), and iii) the activation function. Depending on the pattern of connections between nodes, neural networks are mainly classified into two types: the Feed-Forward Neural Network (FFNN) and the Recurrent Neural Network (RNN) [42]. An FFNN is required to be a directed acyclic graph, whereas in an RNN the connections between nodes form a directed cycle. An FFNN consists of one or more layers, each comprising one or more nodes. The network has one input layer, the last layer is called the output layer, and the layers between the input and the output layer are considered hidden layers. FFNNs are used for supervised [140] learning tasks such as classification and regression.
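A minimal sketch of the perceptron of Eq. (1) in NumPy, with the sigmoid chosen as the activation f (the input values, weights, and bias below are illustrative):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def perceptron(x, w, b):
    # Eq. (1): y = f(w x^T + b), with f chosen here as the sigmoid
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # input feature vector
w = np.array([0.4, 0.1, -0.7])   # weight vector
b = 0.2                          # bias
y = perceptron(x, w, b)          # scalar output of the node
```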

2.3 Training a neural network with Backpropagation (BP)

In neural networks, the learning process is modeled as an iterative optimization of the weights to minimize a loss function. Based on network performance, the weights are modified on a set of examples belonging to the training set. The training procedure contains forward and backward phases. For neural network training, an activation function is selected for forward propagation, and BP training is used for changing the weights. The BP algorithm helps a multilayer FFNN learn input-output mappings from training samples [16]. Forward propagation and backpropagation are explained for a one-hidden-layer neural network in the following algorithm.

The backpropagation algorithm for a one-hidden-layer neural network is as follows (a code sketch follows the listing):

1. Initialize all weights to small random values.

2. While the stopping condition is false, do steps 3 through 10.

3. For each training pair (x1, y1)…(xn, yn), do steps 4 through 9.

Feed-forward propagation:

4. Each input unit (Xi, i = 1, 2, …, n) receives the input signal xi and sends this signal to all hidden units in the layer above.

5. Each hidden unit (Zj, j = 1, …, p) computes \( {z}_{j\_ in}={b}_j+{\sum}_{i=1}^n{w}_{ij}{x}_i \), applies the activation function Zj = f(zj _ in), and transmits the result to the output units.

6. Each output unit (Yk, k = 1, …, m) computes \( {y}_{k\_ in}={b}_k+{\sum}_{j=1}^p{z}_j{w}_{jk} \) and calculates the activation yk = f(yk _ in).

Backpropagation:

7. For the input training pattern (x1, x2, …, xn) with corresponding output pattern (y1, y2, …, ym), let (t1, t2, …, tm) be the target pattern. Each output neuron computes its error term δk = (tk − yk)f′(yk _ in).

8. Each hidden neuron calculates its error term δj using the δk of the output neurons obtained in the previous step: \( {\delta}_j={f}^{\prime}\left({z}_{j\_ in}\right){\sum}_{k=1}^m{\delta}_k{w}_{jk} \)

9. Update weights and biases using the following formulas, where η is the learning rate.

Each output unit (Yk, k = 1, 2, …, m) updates its weights (j = 0, 1, …, p) and bias:

wjk(new) = wjk(old) + ηδkzj; bk(new) = bk(old) + ηδk

Each hidden unit (Zj, j = 1, 2, …, p) updates its weights (i = 0, 1, …, n) and bias:

wij(new) = wij(old) + ηδjxi; bj(new) = bj(old) + ηδj

10. Test the stopping condition.
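A minimal NumPy sketch of the listing above for a one-hidden-layer network with sigmoid activations; the XOR data, layer sizes, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

def f(a):            # sigmoid activation
    return 1.0 / (1.0 + np.exp(-a))

def f_prime(a):      # derivative of the sigmoid
    s = f(a)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
n, p, m, eta = 2, 4, 1, 0.5                        # input, hidden, output sizes; learning rate
W1, b1 = rng.normal(0, 0.1, (n, p)), np.zeros(p)   # step 1: small random weights
W2, b2 = rng.normal(0, 0.1, (p, m)), np.zeros(m)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # toy training pairs (XOR)
T = np.array([[0.], [1.], [1.], [0.]])

for epoch in range(5000):                          # step 2: loop until stopping condition
    for x, t in zip(X, T):                         # step 3: for each training pair
        z_in = b1 + x @ W1                         # steps 4-5: hidden pre-activation
        z = f(z_in)
        y_in = b2 + z @ W2                         # step 6: output pre-activation
        y = f(y_in)
        delta_k = (t - y) * f_prime(y_in)          # step 7: output error term
        delta_j = f_prime(z_in) * (W2 @ delta_k)   # step 8: hidden error term
        W2 += eta * np.outer(z, delta_k)           # step 9: weight and bias updates
        b2 += eta * delta_k
        W1 += eta * np.outer(x, delta_j)
        b1 += eta * delta_j
```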

2.4 Activation function

The activation function is the mechanism by which artificial neurons process and transfer information [42]. There are various types of activation functions that can be used in neural networks, chosen based on the characteristics of the application. Activation functions are typically non-linear and continuously differentiable; differentiability is important mainly when training a neural network using the gradient descent method. Some widely used activation functions are listed in Table 1.

Table 1 Activation functions
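As an illustration, a few commonly used activation functions can be written directly in NumPy; the particular functions shown here (sigmoid, tanh, ReLU, leaky ReLU, softmax) are assumed as representative entries of such a table:

```python
import numpy as np

def sigmoid(x):                # squashes input to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):                   # squashes input to (-1, 1)
    return np.tanh(x)

def relu(x):                   # rectified linear unit: max(0, x)
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01): # small slope for negative inputs
    return np.where(x > 0, x, alpha * x)

def softmax(x):                # converts a vector to a probability distribution
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()
```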

3 Deep learning

Deep learning is a subset of the machine learning field which deals with the development of deep neural networks inspired by biological neural networks in the human brain.

3.1 Autoencoder

The autoencoder (AE) [128] is a deep learning model that exemplifies the principle of unsupervised representation learning, as depicted in Fig. 4a. An AE is useful when unlabelled data are far more plentiful than labeled data. The AE encodes the input x into a lower-dimensional space z; the encoded representation is then decoded to an approximate reconstruction of the input x through the hidden layer z.

Fig. 4 a Autoencoder [187] b Restricted Boltzmann Machine with n hidden and m visible units [88] c Deep Belief Networks [88]

Basic AE consists of three main steps:

Encode: Convert the input vector \( x\ \epsilon\ {\mathbf{\mathfrak{R}}}^{\boldsymbol{m}} \) into the hidden representation \( h\ \epsilon\ {\mathbf{\mathfrak{R}}}^{\mathrm{n}} \) by h = f(wx + b), where \( w\ \epsilon\ {\mathbf{\mathfrak{R}}}^{\boldsymbol{m}\ast \boldsymbol{n}} \) and \( b\ \epsilon\ {\mathbf{\mathfrak{R}}}^{\boldsymbol{n}} \); m and n are the dimensions of the input vector and the hidden state, with the dimension of the hidden layer h smaller than that of x. f is an activation function.

Decode: From h, reconstruct the vector z by z = f(w′h + b′), where \( {w}^{\prime}\epsilon\ {\mathbf{\mathfrak{R}}}^{\boldsymbol{n}\ast \boldsymbol{m}} \) and \( {b}^{\prime}\boldsymbol{\epsilon} {\mathbf{\mathfrak{R}}}^{\boldsymbol{m}} \). f is the same activation function as above.

Calculate the squared error: Lrecons(x, z) =  ∥ x − z∥2, the reconstruction error cost function. Reconstruction error minimization is achieved by optimizing the cost function (2)

$$ J\left(\theta \right)=\sum {L}_{recons}\left(x,z\right),\kern2em \theta =\left\{w,{w}^{\prime },b,{b}^{\prime}\right\} $$
(2)

Another unsupervised representation algorithm is the Stacked Autoencoder (SAE). The SAE comprises stacks of autoencoder layers mounted on top of each other, where the output of each layer is wired to the input of the next layer. The Denoising Autoencoder (DAE) was introduced by Vincent et al. [159]; the DAE is trained to reconstruct the input from a copy corrupted by random noise. The Variational Autoencoder (VAE) [66] modifies the encoder so that the latent vector space used to represent the images follows a unit Gaussian distribution. There are two losses in this model: a mean squared error and the Kullback-Leibler divergence loss, which measures how closely the latent variables match the unit Gaussian distribution. Sparse autoencoders [106] and variational autoencoders have applications in unsupervised learning, semi-supervised learning, and segmentation.
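A minimal sketch of a basic AE in PyTorch, with one hidden layer and the reconstruction error of Eq. (2) implemented as a mean squared error; the dimensions m = 784 and n = 64 and the batch of random inputs are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, m=784, n=64):          # m: input dim, n: hidden dim, with n < m
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(m, n), nn.Sigmoid())  # h = f(wx + b)
        self.decoder = nn.Sequential(nn.Linear(n, m), nn.Sigmoid())  # z = f(w'h + b')

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()                       # reconstruction error ||x - z||^2

x = torch.rand(32, 784)                        # a batch of unlabeled inputs
z = model(x)                                   # encode, then decode
loss = criterion(z, x)                         # compare reconstruction with input
optimizer.zero_grad()
loss.backward()
optimizer.step()
```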

3.2 Restricted Boltzmann machine

A Restricted Boltzmann Machine (RBM) is a Markov Random Field (MRF) associated with a two-layer undirected probabilistic generative model, as shown in Fig. 4b. An RBM contains visible (input) units v and hidden (output) units h. A significant feature of this model is that there are no direct connections between any two visible units or between any two hidden units. In binary RBMs, the random variables take values (v, h) ∈ {0, 1}m + n. Like the general Boltzmann machine [50], the RBM is an energy-based model. The energy of the state {v, h} is defined as (3)

$$ E\left(v,h\right)=-{\sum}_{i=1}^n{\sum}_{j=1}^m{w}_{ij}{h}_i{v}_j-{\sum}_{j=1}^m{b}_j{v}_j-{\sum}_{i=1}^n{c}_i{h}_i $$
(3)

where vj, hi are the binary states of visible unit j ∈ {1, 2, …, m} and hidden unit i ∈ {1, 2, …, n}, bj, ci are the biases of the visible and hidden units, and wij is the symmetric interaction term between the units vj and hi. The joint probability of (v, h) is given by the Gibbs distribution in Eq. (4)

$$ P\left(v,h\right)=\frac{1}{Z}{e}^{-E\left(v,h\right)} $$
(4)

Z is a “partition function” given by summing over all possible pairs of visible v and hidden h (5).

$$ Z={\sum}_{v,h}{e}^{-E\left(v,h\right)} $$
(5)

Because there are no connections within a layer, the conditional distributions p(h| v) and p(v| h) factorize as (6)

$$ p\left(h|v\right)={\prod}_{i=1}^np\left({h}_i|v\ \right),p\left(v|h\right)={\prod}_{j=1}^mp\left({v}_j|h\right) $$
(6)

For a binary RBM, the conditional distributions of the hidden and visible units are given by (7) and (8)

$$ p\left({h}_i=1|v\right)=\sigma \left({c}_i+{\sum}_{j=1}^m{w}_{ij}{v}_j\right) $$
(7)
$$ p\left({v}_j=1|h\right)=\sigma \left({b}_j+{\sum}_{i=1}^n{w}_{ij}{h}_i\right) $$
(8)

where σ(·) is a sigmoid function

The RBM parameters (wij, bj, ci) are efficiently estimated using the contrastive divergence learning method [150]. A batch version of k-step contrastive divergence learning (CD-k) is given in the algorithm of [36].

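As an illustration, a minimal single-sample sketch of one CD-1 update for a binary RBM, following the conditionals (7) and (8); the batch CD-k algorithm of [36] extends this by repeating the Gibbs step k times over mini-batches (sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

m, n = 6, 3                                # visible units, hidden units
W = rng.normal(0, 0.01, (n, m))            # symmetric interaction terms w_ij
b = np.zeros(m)                            # visible biases b_j
c = np.zeros(n)                            # hidden biases c_i

def cd1_update(v0, W, b, c, eta=0.1):
    """One CD-1 step for a binary RBM; returns the updated parameters."""
    ph0 = sigmoid(c + W @ v0)                         # p(h=1|v0), Eq. (7)
    h0 = (rng.random(len(c)) < ph0).astype(float)     # sample hidden states
    pv1 = sigmoid(b + W.T @ h0)                       # p(v=1|h0), Eq. (8)
    v1 = (rng.random(len(b)) < pv1).astype(float)     # reconstruct visible states
    ph1 = sigmoid(c + W @ v1)
    # Gradient approximation: data statistics minus reconstruction statistics
    W = W + eta * (np.outer(ph0, v0) - np.outer(ph1, v1))
    b = b + eta * (v0 - v1)
    c = c + eta * (ph0 - ph1)
    return W, b, c

v0 = rng.integers(0, 2, m).astype(float)   # one binary training vector
W, b, c = cd1_update(v0, W, b, c)
```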

3.3 Deep belief networks

The Deep Belief Network (DBN) proposed by Hinton et al. [51] is a non-convolutional model that can extract features and learn a deep hierarchical representation of the training data. DBNs are generative models constructed by stacking multiple RBMs. The DBN is a hybrid model: the top two layers are like an RBM, and the remaining layers form a directed generative model. A DBN has one visible layer v and a series of hidden layers h(1), h(2), …, h(l), as shown in Fig. 4c. The DBN models the joint distribution between the observed units v and the l hidden layers h(k) (k = 1, …, l) as (9)

$$ P\left(v,{h}^1,\dots, {h}^l\right)=\left({\prod}_{k=0}^{l-2}P\left({h}^{(k)}|{h}^{\left(k+1\right)}\right)\right)P\left({h}^{\left(l-1\right)},{h}^{(l)}\right) $$
(9)

where v = h(0) and P(h(k)| h(k + 1)) is the conditional distribution (10) for the units of layer k given the units of layer k + 1

$$ P\left({h}_i^{(k)}=1|{h}^{\left(k+1\right)}\right)=\sigma \left({b}_i^{(k)}+{W}_{:,i}^{\left(k+1\right)}{h}^{\left(k+1\right)}\right)\forall i,\forall k\epsilon 0,1,\dots ..l-2, $$
(10)

A DBN has l weight matrices W(1), …, W(l) and l + 1 bias vectors b(0), …, b(l). P(h(l), h(l − 1)) is the joint distribution of the top-level RBM (11).

$$ P\left({h}^{(l)},{h}^{\left(l-1\right)}\right)\propto {\mathrm{e}}^{\left({b}^{(l)}{h}^{(l)}+{b}^{\left(l-1\right)}{h}^{\left(l-1\right)}+{h}^{\left(l-1\right)}{W}^{(l)}{h}^{(l)}\right)} $$
(11)

The conditional distribution of the visible units of the DBN is given by Eq. (12)

$$ P\left({v}_i=1|{h}^{(1)}\right)=\sigma \left({b}_i^{(0)}+{W}_{:,i}^{(1)}{h}^{(1)}\right)\forall i $$
(12)

3.4 Convolutional neural networks (CNN)

Within neural networks, the CNN is a unique family of deep learning models and a major artificial visual network for the identification of medical image patterns. The CNN family primarily emerged from studies of the animal visual cortex [55, 116]. The major problem with a fully connected feed-forward neural network is that, even for shallow architectures, the number of neurons can be very high, which makes it impractical for image applications. The CNN reduces the number of parameters, allowing a network to be deeper with fewer parameters.

CNNs are designed based on three architectural ideas: shared weights, local receptive fields, and spatial sub-sampling [70]. The essential element of the CNN is the handling of unstructured data through the convolution operation. Convolution of the input signal x(t) with the filter signal h(t) creates an output signal y(t) that may reveal more information than the input signal itself. The 1D convolution of discrete signals x(t) and h(t) is (13)

$$ y(t)=x(t)\ast h(t)={\sum}_{\tau =-\infty}^{\infty }x\left(\tau \right)h\left(t-\tau \right) $$
(13)

A digital image x(n1, n2) is a 2-D discrete signal. The convolution of images x(n1, n2) and h(n1, n2) is (14)

$$ y\left({n}_1,{n}_2\right)={\sum}_{k_1=0}^{M-1}{\sum}_{k_2=0}^{N-1}x\left({k}_1,{k}_2\right)h\left({n}_1-{k}_1,{n}_2-{k}_2\right) $$
(14)

where 0 ≤ n1 ≤ M − 1, 0 ≤ n2 ≤ N − 1.

The function of the convolution layer is to detect local features xl from the input feature maps xl − 1 using kernels kl via the convolution operation (*), i.e., xl − 1 ∗ kl. This convolution operation is repeated for every convolutional layer, subject to a non-linear transform (15)

$$ {x}_n^{(l)}=f\left({\sum}_m^{M^{l-1}}{x}_m^{\left(l-1\right)}\ast {k}_{mn}^{(l)}+{b}_m^{(l)}\right) $$
(15)

where \( {k}_{mn}^{(l)} \) represents the weights between feature map m at layer l − 1 and feature map n at layer l, \( {x}_m^{\left(l-1\right)} \) is feature map m of layer l − 1, and \( {x}_n^l \) is feature map n of layer l. \( {b}_m^{(l)} \) is the bias parameter, f(.) is the non-linear activation function, and Ml − 1 denotes the set of feature maps at layer l − 1. The CNN significantly reduces the number of parameters compared with a fully connected neural network because of local connectivity and weight sharing. The depth, zero-padding, and stride are three hyperparameters controlling the volume of the convolution layer output.

A pooling layer comes after the convolutional layer to subsample the feature maps. The goal of the pooling layers is to achieve spatial invariance by reducing the spatial dimension of the feature maps for the next convolution layer. Max pooling and average pooling are two commonly used pooling operations for downsampling. Let the pooling region be of size M × M, with elements xj = (x1, x2, …, xM × M); the output after pooling is xi. Max pooling and average pooling are described in Eqs. (16) and (17).

$$ {x}_i={\max}_{1\le j\le M\times M}\left({x}_j\right) $$
(16)
$$ {x}_i=\frac{1}{M\times M}{\sum}_{j=1}^{M\times M}{x}_j $$
(17)

The max-pooling method chooses the most prominent invariant feature in a pooling region, while the average pooling method takes the average of all the features in the pooling area. Thus, max pooling retains texture information and can lead to faster convergence, while average pooling keeps background information [133]. Spatial pyramid pooling [48], stochastic pooling [175], Def-pooling [109], multi-activation pooling [189], and detail-preserving pooling [130] are other pooling techniques in the literature. A fully connected layer is used at the end of the CNN model. Fully connected layers perform like a traditional neural network [174]: the input to this layer is a vector of numbers (the output of the pooling layer), and the output is an N-dimensional vector (N being the number of classes). Before the fully connected layers, the feature maps of the previous layer are flattened.
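A minimal PyTorch sketch of the convolution-pooling-fully-connected pattern described above; the channel counts, input size, and class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution: local receptive fields, shared weights
            nn.ReLU(),
            nn.MaxPool2d(2),                             # max pooling, as in Eq. (16)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AvgPool2d(2),                             # average pooling, as in Eq. (17)
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)        # flatten feature maps before the FC layer
        return self.classifier(x)

model = SmallCNN()
scores = model(torch.rand(8, 1, 64, 64))   # e.g., a batch of 64x64 grayscale images
```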

The first successful seven-layer CNN, LeNet-5, was developed by Yann LeCun for handwritten digit recognition. Krizhevsky et al. [68] proposed AlexNet, a deep convolutional neural network composed of 5 convolutional and 3 fully connected layers. AlexNet replaced the sigmoid activation function with the ReLU activation function to make model training easier.

K. Simonyan and A. Zisserman invented VGG-16 [143], which has 13 convolutional and 3 fully connected layers. The Visual Geometry Group (VGG) released a series of CNNs: VGG-11, VGG-13, VGG-16, and VGG-19. The main intention of the VGG group was to understand how the depth of convolutional networks affects the accuracy of image classification and recognition models. The largest, VGG-19, has 16 convolutional layers and 3 fully connected layers, while the smallest, VGG-11, has 8 convolutional layers and 3 fully connected layers. The last three fully connected layers are the same across the VGG variants.

Szegedy et al. [151] proposed GoogLeNet, an image classification network consisting of 22 layers. The main idea behind GoogLeNet is the introduction of inception modules, each of which convolves the input with several different filter sizes in parallel. Kaiming He et al. [49] proposed the ResNet architecture, which has 33 convolutional layers and one fully connected layer. Many models had introduced the principle of using multiple hidden layers and extremely deep neural networks, but it was then realized that such models suffered from the vanishing or exploding gradient problem. To mitigate the vanishing gradient problem, ResNet introduces skip layers (shortcut connections). DenseNet, developed by Gao et al. [54], consists of several dense blocks with transition blocks placed between adjacent dense blocks. Each layer in a dense block consists of batch normalization, followed by a ReLU and a 3 × 3 convolution operation. The transition blocks are made of batch normalization, a 1 × 1 convolution, and average pooling.

Compared to state-of-the-art handcrafted feature detectors, CNNs are an efficient technique for detecting the features of an object and achieving good classification performance. CNNs have drawbacks, however: spatial relationships, size, perspective, and orientation of features are not taken into account. To overcome the loss of information caused by the pooling operation, Capsule Networks (CapsNet) are used to retain spatial information and the most significant features [129]. Special types of neurons, called capsules, can efficiently detect distinct information. The capsule network consists of four main components: matrix multiplication, scalar weighting of the input, a dynamic routing algorithm, and a squashing function.

3.5 Recurrent neural networks (RNN)

The RNN is a class of neural networks used for processing sequential information. The structure of the RNN, shown in Fig. 5a, is like that of an FFNN, the difference being that recurrent connections are introduced among the hidden nodes. In a generic RNN at time t, the hidden unit ht receives activation from the current input xt and the previous hidden state ht − 1, and the output yt is calculated from the hidden state ht. This can be represented by Eqs. (18) and (19) as

Fig. 5 a Recurrent Neural Networks [163] b Long Short-Term Memory [163] c Generative Adversarial Networks [64]

$$ {h}_t=f\left({w}_{hx}{x}_t+{w}_{hh}{h}_{t-1}+{b}_h\right) $$
(18)
$$ {y}_t= softmax\left({w}_{yh}{h}_t+{b}_y\right) $$
(19)

Here f is a non-linear activation function, whx is the weight matrix between the input and hidden layers, whh is the matrix of recurrent weights between the hidden layer and itself, wyh is the weight matrix between the hidden and output layers, and bh and by are bias terms. While the RNN is a simple and efficient model, in practice it is unfortunately difficult to train properly. The Real-Time Recurrent Learning (RTRL) algorithm [173] and Back-Propagation Through Time (BPTT) [170] are used to train RNNs. Training with these methods frequently fails because of the vanishing (multiplication of many small values) or exploding (multiplication of many large values) gradient problem [10, 112]. Hochreiter and Schmidhuber (1997) designed a new RNN model named Long Short-Term Memory (LSTM) that overcomes error backflow problems with the aid of a specially designed memory cell [52]. Figure 5b shows an LSTM cell, which is typically configured with three gates: input gate it, forget gate ft, and output gate ot; these gates add or remove information from the cell.

An LSTM can be represented with the following Eqs. (20) to (25)

$$ \mathrm{Input}\ \mathrm{gate}\ {i}_t=\sigma \left({w}_{ix}{x}_t+{w}_{ih}{h}_{t-1}+{b}_i\right) $$
(20)
$$ \mathrm{Candidate}\ \mathrm{input}\ {g}_t=\phi \left({w}_{gx}{x}_t+{w}_{gh}{h}_{t-1}+{b}_g\right) $$
(21)
$$ \mathrm{Forget}\ \mathrm{gate}\ {f}_t=\sigma \left({w}_{fx}{x}_t+{w}_{fh}{h}_{t-1}+{b}_f\right) $$
(22)
$$ \mathrm{Output}\ \mathrm{gate}\ {o}_t=\sigma \left({w}_{ox}{x}_t+{w}_{oh}{h}_{t-1}+{b}_o\right) $$
(23)
$$ \mathrm{Internal}\ \mathrm{state}\ {m}_t={g}_t\odot {i}_t+{m}_{t-1}\odot {f}_t $$
(24)
$$ \mathrm{Hidden}\ \mathrm{state}\ {h}_t={m}_t\odot {o}_t $$
(25)
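A minimal sketch of one LSTM step implementing Eqs. (20) to (25) directly in PyTorch; the weight names, sizes, and toy sequence are illustrative assumptions (library implementations such as torch.nn.LSTM fuse these operations internally):

```python
import torch

def lstm_step(x_t, h_prev, m_prev, params):
    """One LSTM step following Eqs. (20)-(25); params holds the weight matrices and biases."""
    sig, phi = torch.sigmoid, torch.tanh
    i_t = sig(params["W_ix"] @ x_t + params["W_ih"] @ h_prev + params["b_i"])  # input gate, Eq. (20)
    g_t = phi(params["W_gx"] @ x_t + params["W_gh"] @ h_prev + params["b_g"])  # candidate input, Eq. (21)
    f_t = sig(params["W_fx"] @ x_t + params["W_fh"] @ h_prev + params["b_f"])  # forget gate, Eq. (22)
    o_t = sig(params["W_ox"] @ x_t + params["W_oh"] @ h_prev + params["b_o"])  # output gate, Eq. (23)
    m_t = g_t * i_t + m_prev * f_t    # internal (cell) state, Eq. (24)
    h_t = m_t * o_t                   # hidden state, Eq. (25)
    return h_t, m_t

d_in, d_hid = 4, 8
params = {k: torch.randn(d_hid, d_in if k.endswith("x") else d_hid) * 0.1
          for k in ["W_ix", "W_ih", "W_gx", "W_gh", "W_fx", "W_fh", "W_ox", "W_oh"]}
params.update({k: torch.zeros(d_hid) for k in ["b_i", "b_g", "b_f", "b_o"]})

h, m = torch.zeros(d_hid), torch.zeros(d_hid)
for x_t in torch.randn(5, d_in):       # a toy sequence of length 5
    h, m = lstm_step(x_t, h, m, params)
```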

3.6 Generative adversarial networks (GAN)

In the field of deep learning, one of the deep generative models is the Generative Adversarial Network (GAN), introduced by Goodfellow et al. [43]. GANs are neural networks that can generate synthetic images that closely imitate the original images. In a GAN, shown in Fig. 5c, there are two neural networks, a generator and a discriminator, which are trained simultaneously. The generator G generates counterfeit data samples that aim to “fool” the discriminator D, while the discriminator attempts to correctly distinguish true and false samples. In mathematical terms, D and G play a two-player minimax game with the cost function of (26) [64].

$$ {\min}_G{\max}_DV\left(D,G\right)={E}_{x\sim {p}_{data(x)}}\left[ logD(x)\right]+{E}_{\mathrm{z}\sim {P}_{\mathrm{z}}\left(\mathrm{z}\right)}\left[\mathit{\log}\left(1-D\left(G\left(\mathrm{z}\right)\right)\right)\right]\kern0.75em $$
(26)

Where x represents the original image and z is a noise vector of random numbers. pdata(x) and pz(z) are the probability distributions of x and z, respectively. D(x) represents the probability that x comes from the actual data pdata(x) rather than from the generated data, and 1 − D(G(z)) is the probability that a sample was generated from pz(z). The expectation of x drawn from the real data distribution pdata is expressed by \( {E}_{x\sim {p}_{data(x)}} \), and the expectation of z sampled from noise is \( {E}_{\mathrm{z}\sim {P}_{\mathrm{z}}\left(\mathrm{z}\right)} \). The training goal for the discriminator is to maximize the objective, while the training objective for the generator is to minimize the term log(1 − D(G(z))). The main uses of GANs in the field of medical image analysis are data augmentation (generating new data) and image-to-image translation [107]. Trustability of the generated data, unstable training, and evaluation of the generated data are three major drawbacks of GANs that might hinder their acceptance in the medical community [183].
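A minimal PyTorch sketch of one alternating update of Eq. (26), assuming small fully connected networks and the commonly used non-saturating generator loss (maximizing log D(G(z)) rather than minimizing log(1 − D(G(z)))); the layer sizes and random "real" batch are illustrative:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())           # generator
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784)      # stand-in for a batch of real images
z = torch.randn(32, 100)        # noise vector z ~ p_z(z)

# Discriminator step: push D(x) toward 1 and D(G(z)) toward 0
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(G(z).detach()), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: push D(G(z)) toward 1 (non-saturating form of the minimax objective)
g_loss = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```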

3.7 U-net

Ronneberger et al. [126] proposed the CNN-based U-Net architecture for the segmentation of biomedical image data. The architecture consists of a contracting path (left side) to capture context and a symmetric expansive path (right side) that enables precise localization. U-Net is a generalized DLA used for quantification tasks such as cell detection and shape measurement in medical image data [34].
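A minimal PyTorch sketch of the U-Net idea, assuming a single-channel input and one downsampling level; the original architecture uses four levels and many more feature channels:

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # Two 3x3 convolutions: the basic building block at each U-Net level
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes=1):
        super().__init__()
        self.down1 = double_conv(1, 32)
        self.down2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)                        # contracting path: downsampling
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # expansive path: upsampling
        self.up1 = double_conv(64, 32)                     # 64 = 32 upsampled + 32 skip
        self.out = nn.Conv2d(32, n_classes, 1)             # per-pixel class scores

    def forward(self, x):
        d1 = self.down1(x)
        d2 = self.down2(self.pool(d1))
        u = self.up(d2)
        u = self.up1(torch.cat([u, d1], dim=1))            # skip connection aids localization
        return self.out(u)

mask_logits = TinyUNet()(torch.rand(1, 1, 128, 128))       # e.g., a 128x128 grayscale slice
```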

3.8 Software frameworks

There are several software frameworks available for implementing DLA, and they are regularly updated as new approaches and ideas are created. DLA encapsulates many levels of mathematical principles based on probability, linear algebra, calculus, and numerical computation. Several deep learning frameworks exist, such as Theano, TensorFlow, Caffe, CNTK, Torch, Neon, Pylearn, etc. [138]. Globally, Python is probably the most commonly used programming language for DL, and PyTorch and TensorFlow were the most widely used libraries for research in 2019. Table 2 shows an analysis of various deep learning frameworks based on core language and supported interface languages.

Table 2 Comparison of various Deep Learning Frameworks

4 Use of deep learning in medical imaging

4.1 X-ray image

Chest radiography is widely used in diagnosis to detect heart pathologies and lung diseases such as tuberculosis, atelectasis, consolidation, pleural effusion, pneumothorax, and hyperinflation. X-ray images are accessible, affordable, and deliver a lower radiation dose than other imaging methods, making radiography a powerful tool for mass screening [14]. Table 3 presents a description of the DL methods used for X-ray image analysis.

Table 3 An overview of the DLA for the study of X-ray images

S. Hwang et al. [57] proposed the first deep CNN-based tuberculosis screening system using a transfer learning technique. Rajaraman et al. [119] proposed modality-specific ensemble learning for the detection of abnormalities in chest X-rays (CXRs); the model predictions are combined using various ensemble techniques to minimize prediction variance, and class-selective relevance mapping (CRM) is used for visualizing the abnormal regions in the CXR images. Loey et al. [90] proposed a GAN with deep transfer learning for COVID-19 detection in CXR images, where the GAN was used to generate more CXR images due to the scarcity of COVID-19 data. Waheed et al. [160] proposed the CovidGAN model, based on the Auxiliary Classifier Generative Adversarial Network (ACGAN), to produce synthetic CXR images for COVID-19 detection. S. Rajaraman and S. Antani [120] introduced weakly labeled data augmentation to enlarge the training dataset and improve COVID-19 detection performance in CXR images.

4.2 Computerized tomography (CT)

CT uses computers and rotating X-ray equipment to create cross-sectional images of the body. CT scans show the soft tissues, blood vessels, and bones in different parts of the body. CT has high detection ability, reveals small lesions, and provides a more detailed assessment. CT examinations are frequently used for pulmonary nodule identification [93]. The detection of malignant pulmonary nodules is fundamental to the early diagnosis of lung cancer [102, 142]. Table 4 summarizes the latest deep learning developments in CT image analysis.

Table 4 A review of articles that use DL techniques for the analysis of the CT image

Li et al. 2016 [74] proposed a deep CNN for the detection of three types of nodules: semisolid, solid, and ground-glass opacity. Balagourouchetty et al. [5] proposed a GoogLeNet-based ensemble FCNet classifier for liver lesion classification, in which the basic GoogLeNet architecture is altered with three modifications for feature extraction. Masood et al. [95] proposed the multidimensional Region-based Fully Convolutional Network (mRFCN) for lung nodule detection/classification and achieved a classification accuracy of 97.91%; in lung nodule detection, the future work is the detection of micronodules (less than 3 mm) without loss of sensitivity and accuracy. Zhao and Zeng 2019 [190] proposed a DLA based on supervised MSS U-Net and 3D U-Net to automatically segment kidneys and kidney tumors from CT images. In the present pandemic situation, Fan et al. [35] and Li et al. [79] used deep learning-based techniques for COVID-19 detection from CT images.

4.3 Mammography (MG)

Breast cancer is one of the leading causes of cancer death among women worldwide. MG is a reliable tool and the most common modality for the early detection of breast cancer. MG is a low-dose X-ray imaging method used to visualize the breast structure for the detection of breast diseases [40]. Detecting breast cancer on screening mammograms is a difficult image classification task because the tumor constitutes only a small part of the breast image. Analyzing breast lesions from MG involves three steps: detection, segmentation, and classification [139].

The automatic classification and detection of masses at an early stage in MG is still a hot subject of research. Over the past decade, DLA have made significant progress on the breast cancer detection and classification problem. Table 5 summarizes the latest DLA developments in mammogram image analysis.

Table 5 Summary of DLA for MG image analysis

Fonseca et al. [37] proposed breast composition classification according to the ACR standard, based on CNNs for feature extraction. Wang et al. [161] proposed a twelve-layer CNN to detect breast arterial calcifications (BACs) in mammogram images for risk assessment of coronary artery disease. Ribli et al. [124] developed a CAD system based on Faster R-CNN for the detection and classification of benign and malignant lesions on mammogram images without any human involvement. Wu et al. [176] present a deep CNN trained and evaluated on over 1,000,000 mammogram images for breast cancer screening exam classification. Conant et al. [26] developed a deep CNN-based AI system to detect calcified and soft-tissue lesions in digital breast tomosynthesis (DBT) images. Kang et al. [62] introduced the fuzzy fully connected layer (FFCL) architecture, which primarily fuses fuzzy rules with a traditional CNN for semantic BI-RADS scoring; the proposed FFCL framework achieved superior results in BI-RADS scoring for both triple and multi-class classifications.

4.4 Histopathology

Histopathology is the study of human tissue on glass slides using a microscope to identify diseases such as kidney cancer, lung cancer, breast cancer, and so on. Staining is used in histopathology to visualize and highlight specific parts of the tissue [45]. For example, Hematoxylin and Eosin (H&E) staining gives the nucleus a dark purple color and other structures a pink color. The H&E stain has played a key role in the diagnosis of different pathologies and in cancer diagnosis and grading over the last century. The most recent imaging modality is digital pathology.

Deep learning is emerging as an effective method in the analysis of histopathology images, including nucleus detection, image classification, cell segmentation, tissue segmentation, etc. [178]. Tables 6 and 7 summarize the latest deep learning developments in pathology. In digital pathology image analysis, the latest development is the introduction of whole slide imaging (WSI), which allows glass slides with stained tissue sections to be digitized at high resolution. Dimitriou et al. [30] reviewed the challenges of analyzing multi-gigabyte WSI images for building deep learning models. A. Serag et al. [135] discuss different public “Grand Challenges” that have driven innovations using DLA in computational pathology.

Table 6 Summary of articles using DLA for digital pathology image - Organ segmentation
Table 7 Summary of articles using DLA for digital pathology image - Detection and classification of disease

4.5 Other images

Endoscopy is the insertion of a long, thin tube directly into the body for the detailed visual examination of an internal organ or tissue. Endoscopy is beneficial for studying several systems inside the human body, such as the gastrointestinal tract, the respiratory tract, the urinary tract, and the female reproductive tract [60, 101]. Du et al. [31] reviewed the applications of deep learning in the analysis of gastrointestinal endoscopy images. Wireless capsule endoscopy (WCE) is a revolutionary device for direct, painless, and non-invasive inspection of the gastrointestinal (GI) tract to detect and diagnose GI diseases (ulcers, bleeding). Soffer et al. [145] performed a systematic analysis of the existing literature on the implementation of deep learning in WCE. The first deep learning-based framework for the detection of hookworm in WCE images was proposed by He et al. [46]; it integrates two CNN networks, one for edge extraction and one for hookworm classification. Since tubular structures are crucial elements for hookworm detection, the edge extraction network is used for tubular region detection. Yoon et al. [185] developed a CNN model for early gastric cancer (EGC) identification and prediction of invasion depth, the depth of tumor invasion in EGC being a significant factor in deciding the method of treatment; for the classification of endoscopic images as EGC or non-EGC, the authors employed a VGG-16 model. Nakagawa et al. [105] applied a DL technique based on CNNs to enhance the diagnostic assessment of oesophageal wall invasion using endoscopy. J. Choi et al. [22] discuss future aspects of DL in endoscopy.

Positron Emission Tomography (PET) is a nuclear imaging tool in which particular radioactive tracers are injected to visualize molecular-level activities within tissues. T. Wang et al. [168] reviewed applications of machine learning in PET attenuation correction (PET AC) and low-count PET reconstruction, and discussed the advantages of deep learning over classical machine learning in PET imaging applications. A. J. Reader et al. [123] reviewed PET image reconstruction in which deep learning is used either directly or as part of traditional reconstruction methods.

5 Discussion

The primary purpose of this paper is to review numerous publications in the field of deep learning applications in medical images. Classification, detection, and segmentation are essential tasks in medical image processing [144]. For specific deep learning tasks in medical applications, training deep neural networks requires a lot of labeled data, but in the medical field even thousands of labeled examples are often unavailable. This issue is alleviated by a technique called transfer learning, of which two approaches are popular and widely applied: fixed feature extraction and fine-tuning a pre-trained network (a sketch of both follows). In the classification process, deep learning models are used to classify images into two or more classes. In the detection process, deep learning models identify tumors and organs in medical images. In the segmentation task, deep learning models segment the region of interest in medical images for processing.
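A minimal PyTorch sketch of both transfer learning approaches, assuming an ImageNet-pretrained ResNet-18 backbone and a two-class task (the backbone, class count, and learning rates are illustrative; newer torchvision versions replace the pretrained flag with a weights argument):

```python
import torch
import torch.nn as nn
from torchvision import models

# Fixed feature extractor: freeze the pretrained weights, replace only the classifier head
model = models.resnet18(pretrained=True)       # ImageNet-pretrained backbone
for p in model.parameters():
    p.requires_grad = False                    # freeze all backbone layers
model.fc = nn.Linear(model.fc.in_features, 2)  # new head, e.g., benign vs. malignant

# Only the new head's parameters are updated during training
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Fine-tuning variant: unfreeze everything and train with a small learning rate
# for p in model.parameters():
#     p.requires_grad = True
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```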

5.1 Segmentation

Deep learning has been widely used for medical image segmentation, and several articles have been published documenting progress in the area. Segmentation of breast tissue using deep learning alone has been successfully implemented [104]. Xing et al. [179] used a CNN to acquire the initial shape of the nucleus and then isolated the actual nucleus using a deformable model. Qu et al. [118] suggested a deep learning approach that can segment the individual nucleus and classify it as a tumor, lymphocyte, or stroma nucleus. Pinckaers and Litjens [115] show on a colon gland segmentation dataset (GlaS) that Neural Ordinary Differential Equations (NODEs) can be used within the U-Net framework to obtain better segmentation results. Sun 2019 [149] developed a deep learning architecture for gastric cancer segmentation that shows the advantage of utilizing multi-scale modules and specific convolution operations together. U-Net (Fig. 6) is the most commonly used network for segmentation.

Fig. 6 U-Net architecture for segmentation, comprising encoder (downsampling) and decoder (upsampling) sections [135]

5.2 Detection

The main challenge posed by lesion detection methods is that they can produce multiple false positives while missing a good proportion of true positives. Deep learning methods for tuberculosis detection have been applied in [53, 57, 58, 91, 119], and pulmonary nodule detection using deep learning has been successfully applied in [82, 108, 136, 157].

Shin et al. [141] discussed the effect of pre-trained CNN architectures and transfer learning on the identification of enlarged thoracoabdominal lymph nodes and the diagnosis of interstitial lung disease on CT scans, and considered transfer learning to be helpful given that natural images differ from medical images. Litjens et al. [85] introduced a CNN for the identification of prostate cancer in biopsy specimens and of breast cancer metastases in sentinel lymph nodes; the CNN has four convolution layers for feature extraction and three classification layers. Ribli et al. [124] proposed a Faster R-CNN model for the detection of mammography lesions, classifying them into benign and malignant, which finished second in the Digital Mammography DREAM Challenge. Figure 7 shows a CNN architecture for detection.

Fig. 7 CNN architecture for detection [144]

An object detection framework named Clustering CNN (CLU-CNNs) was proposed by Z. Li et al. [76] for medical images. CLU-CNNs use Agglomerative Nesting Clustering Filtering (ANCF) and a BN-IN Net to avoid the heavy computation cost that medical images impose. Image saliency detection aims at locating the most eye-catching regions in a given scene [21, 78]. It also acts as a pre-processing tool in different applications, including video saliency detection [17, 18], object recognition, and object tracking [20]. Saliency maps are a commonly used tool for determining which areas of the input image are most important to the prediction of a trained CNN [92]. N. T. Arun et al. [4] evaluated the performance of several popular saliency methods on the RSNA Pneumonia Detection dataset and found that Grad-CAM was sensitive to the model parameters and model architecture.
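As an illustration, a minimal vanilla-gradient saliency map can be sketched in PyTorch; this is the simplest saliency method (Grad-CAM itself additionally weights intermediate feature maps by pooled gradients and is not shown here), and trained_cnn is an assumed, already-trained classification model:

```python
import torch

def vanilla_saliency(model, image):
    """Gradient of the top class score w.r.t. the input pixels."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image).max(dim=1).values.sum()  # top-class score for the batch
    score.backward()
    # Per-pixel importance: max absolute gradient across color channels
    return image.grad.abs().max(dim=1).values

# saliency = vanilla_saliency(trained_cnn, batch_of_images)  # trained_cnn assumed given
```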

5.3 Classification

In classification tasks, deep learning techniques based on CNNs have seen several advancements. The success of CNNs in image classification has led researchers to investigate their usefulness as a diagnostic method for identifying and characterizing pulmonary nodules in CT images. The classification of lung nodules using deep learning [74, 108, 117, 141] has also been successfully implemented.

Breast parenchymal density is an important indicator of the risk of breast cancer. DL algorithms used for density assessment can significantly reduce the burden on the radiologist. Breast density classification using DL has been successfully implemented [37, 59, 72, 177]. Ionescu et al. [59] introduced a CNN-based method to predict the Visual Analog Score (VAS) for breast density estimation. Figure 8 shows a CNN architecture for classification.

Alcoholism, or alcohol use disorder (AUD), has effects on the brain, whose structure can be observed using neuroimaging approaches. S. H. Wang et al. [162] proposed a 10-layer CNN for the AUD problem using dropout, batch normalization, and PReLU techniques; the model obtained a sensitivity of 97.73%, a specificity of 97.69%, and an accuracy of 97.71%. Cerebral microbleeds (CMBs) are small chronic brain hemorrhages that can result in cognitive impairment, long-term disability, and neurologic dysfunction; therefore, early-stage identification of CMBs for prompt treatment is essential. S. Wang et al. [164] proposed a transfer learning-based DenseNet to detect CMBs; the DenseNet-based model attained an accuracy of 97.71% (Fig. 8).

Fig. 8 CNN architecture for classification [144]

5.4 Limitations and challenges

The application of deep learning algorithms to medical imaging is fascinating, but many challenges are slowing its progress. One limitation to the adoption of DL in medical image analysis is the inconsistency of the data itself (resolution, contrast, signal-to-noise), typically caused by procedures in clinical practice [113]. The non-standardized acquisition of medical images is another limitation. The need for comprehensive medical image annotations further limits the applicability of deep learning in medical image analysis. The major challenge is limited data: compared to other domains, the sharing of medical data is incredibly complicated, and medical data privacy is both a sociological and a technological issue that needs to be discussed from both viewpoints. Building DLA requires a large amount of annotated data, and annotating medical images is another major challenge: labeling medical images requires radiologists' domain knowledge, so annotating adequate medical data is time-consuming. Semi-supervised learning could be implemented to make combined use of the existing labeled data and vast unlabelled data to alleviate the issue of “limited labeled data”. Another way to resolve the issue of “data scarcity” is to develop few-shot learning algorithms that use a considerably smaller amount of data. Despite the successes of DL technology, many restrictions and obstacles remain in the medical field. Whether DL can reduce medical costs, increase medical efficiency, and improve patient satisfaction has not yet been adequately verified. In clinical trials, it is therefore necessary to demonstrate the efficacy of deep learning methods and to develop guidelines for their application to medical image analysis.

6 Conclusion and future directions

Medical imaging is a place of origin of the information necessary for clinical decisions. This paper discussed the new algorithms and strategies in the area of deep learning. This brief introduction to DLA in medical image analysis has two objectives: the first is an introduction to the field of deep learning and the associated theory; the second is to provide a general overview of medical image analysis using DLA. It began with the history of neural networks since the 1940s and ended with breakthroughs in medical applications of recent DL algorithms. Several supervised and unsupervised DL algorithms were first discussed, including autoencoders, RNNs, CNNs, and restricted Boltzmann machines. Frameworks in this area, including Caffe, TensorFlow, Theano, and PyTorch, were also discussed. After that, the most successful DL methods were reviewed in various medical image applications, including classification, detection, and segmentation. Applications of the RBM network are rarely published in the medical image analysis literature, while in classification and detection, CNN-based models have achieved good results and are most commonly used. Several existing solutions to medical challenges are available. However, there are still several issues in medical image processing that need to be addressed with deep learning. Many of the current DL implementations are supervised algorithms, while deep learning is slowly moving toward unsupervised and semi-supervised learning to manage real-world data without manual human labels.

DLA can support clinical decisions for next-generation radiologists. DLA can automate radiologist workflow and facilitate decision-making for inexperienced radiologists. DLA is intended to aid physicians by automatically identifying and classifying lesions to provide a more precise diagnosis. DLA can help physicians minimize medical errors and increase efficiency in medical image analysis. DL-based automated diagnosis from medical images will likely become widely used for patient treatment in the next few decades. Therefore, physicians and scientists should seek the best ways to provide better care to the patient with the help of DLA. A potential future research direction for medical image analysis is the automated design of deep neural network architectures, since the design of network structures has a direct impact on medical image analysis; manual design of a DL model structure requires rich knowledge, so Neural Architecture Search will probably replace manual design [73]. The design of new activation functions is also a meaningful research direction. Radiation therapy is crucial for cancer treatment, and different medical imaging modalities play a critical role in treatment planning. Radiomics is defined as the extraction of high-throughput features from medical images [28]. In the future, deep-learning analysis of radiomics will be a promising tool in clinical research for clinical diagnosis, drug development, and treatment selection for cancer patients. Due to limited annotated medical data, unsupervised, weakly supervised, and reinforcement learning methods are emerging research areas in DL for medical image analysis. Overall, deep learning, a new and fast-growing field, offers various obstacles as well as opportunities and solutions for a range of medical image applications.