1 Introduction

Quantum mechanics is the theoretical basis of the emerging field of quantum computing, whose aim is to speed up calculations. Recent quantum applications are numerous. For instance, quantum computing could accelerate diagnosis in medicine and help solve problems in finance. Quantum algorithms in different disciplines, such as neural networks and optimization, are developing fast. At the same time, industrial firms are working to build and make available quantum computers (Zahorodko 2021). Quantum machine learning (QML) is a new research axis in the field of artificial intelligence (Bharti et al. 2020; Biamonte 2017; Wittek 2014). It promises to speed up classical machine learning algorithms that deal with big data.

Quantum mechanics is a rich collection of powerful theories that provide a description of nature. Concepts such as superposition of states, interference and entanglement are expected to provide an exponential speedup for solving complex scientific and industrial problems. As QML attracts great attention nowadays, our interest in this paper focuses on developing a quantum version of a density-based clustering algorithm, knowing that the latter has practical applications in numerous domains.

Density-based clustering techniques discover clusters of arbitrary shape in a spatial database. The yielded clusters are dense regions of objects in the data space, and the objects outside the clusters are of low density and hence represent noise. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester et al. 1996) is a well-established algorithm, very popular in geographic information systems (GIS). In one scan, it clusters objects given two parameters representing, respectively, the maximum distance of the neighborhood around an object and the number of points representing the minimum desired density in an elementary cluster. This algorithm is known to be very sensitive to these parameters, which are usually unknown and consequently hard to determine, especially for real-world and high-dimensional data sets. Intensive experiments must be performed to tune their values in order to achieve the right clustering. Ordering Points To Identify the Clustering Structure (OPTICS) (Mihael et al. 1999), which is based on similar principles, has the advantage of handling a broad range of parameter settings. It can be seen as an extension of DBSCAN. It is a smart algorithm for automatic and interactive cluster analysis, ordering the clusters through an ordered file that can be viewed graphically. The cluster ordering allows clustering information to be extracted, such as cluster centers, arbitrary-shaped clusters and the clustering structure.

On the other hand, new and topical artificial intelligence technologies are emerging and being exploited to solve hard, complex problems. Deep learning methods offer great hope for addressing these issues more effectively and efficiently.

We propose four main contributions in this paper. The first one is a quantum version of the OPTICS algorithm, aiming at gaining efficiency for clustering spatial data. The algorithm is called Quantum OPTICS (QOPTICS), and its computational complexity is evaluated and compared to that of its classical counterpart.

The second consists in designing a deep self-learning scheme for two swarm intelligence algorithms, the Artificial Orcas Algorithm (AOA) (Bendimerad and Drias 2020) and Elephant Herding Optimization (EHO) (Wang et al. 2015), in order to achieve effectiveness for problem solving. Two mutation operators, known as the Cauchy and Gaussian operators, are introduced in the algorithms for that purpose. The designed algorithms are called Deep Self-Learning AOA (DSLAOA) and Deep Self-Learning EHO (DSLEHO).

The third is a hybrid system combining DSLAOA and DSLEHO, respectively, with QOPTICS in order to gain more efficiency for problem solving. Indeed, running DSLAOA and DSLEHO on just one cluster, that is, on a small part of the dataset, reduces calculation time.

The fourth is an application to emergency transportation in the context of COVID-19, which helps manage hospitals during the crises of this pandemic. In order to optimize the transportation time of a patient to a health center, QOPTICS is used to cluster hospitals into regions, each one taking into consideration a certain number of emergency calls, knowing that the most important objective for decision makers in EMS is to save human lives. For this purpose, responding to emergency calls by sending ambulances in the shortest time should be decided quickly. Assigning vehicles to the nearest call locations may cause some regions to become less covered than others, and hence calls from these zones will wait a long time to be served. The ambulance dispatching and zone covering problems are managed jointly in order to optimize not only the vehicle arrival time at an emergency call but also the covering of all the calls. DSLAOA and DSLEHO attempt to respond to this goal.

The originality of the study lies in the fact that the tools used in each phase of the proposed system come from recent emerging technologies such as quantum machine learning and deep learning.

This paper is organized in seven sections. The next one presents background on the OPTICS algorithm, the basic quantum algorithms, and dynamic ambulance dispatching and emergency call covering. The third section describes the proposed quantum OPTICS algorithm along with its complexity. The fourth section continues with the presentation of the deep self-learning method applied to AOA and to EHO. The fifth describes the adaptation of DSLAOA and DSLEHO to the EMS transportation issue. The sixth section exhibits the experimental results obtained by QOPTICS, which consist in clustering geographical zones containing a great density of hospitals in the Kingdom of Saudi Arabia (KSA) as well as emergency COVID-19 calls coming from this country. The outcomes generated by the deep self-learning methods are also presented and discussed. The article ends with a conclusion highlighting the major contributions and future open questions.

2 Background

This section outlines the basic concepts used in the construction of our proposals. First, the OPTICS clustering algorithm is described and discussed. Second, the quantum subroutines that helped design QOPTICS are presented. Third, the dynamic ambulance dispatching and emergency call covering problem is presented, as it serves as an application of the developed algorithms.

2.1 OPTICS clustering algorithm

Clustering n objects of an unordered set \(O=\{O_{1},O_{2}, \ldots ,O_{n}\}\) consists in grouping the similar objects in the same cluster and the dissimilar ones in different clusters. To measure the similarity between objects, a distance function is defined. Ordering Points To Identify the Clustering Structure (OPTICS) (Mihael et al. 1999) is an unsupervised density-based clustering algorithm. It can be seen as an improved version of DBSCAN with additional advantages. It computes an ordering of the objects, while associating with each object its core-distance and its reachability distance. This ordering allows the extraction of all clusterings with respect to any distance smaller than the distance parameter. The computational complexity of OPTICS is \(O(n^2)\) (Mihael et al. 1999).
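
For readers who want to reproduce the classical baseline, a minimal illustration with scikit-learn's OPTICS implementation is given below; the toy data, the precomputed distance matrix and the parameter values are our own assumptions, with min_samples and max_eps playing the roles of MinPts and the distance parameter.

```python
import numpy as np
from sklearn.cluster import OPTICS

# Toy data: 6 random 2-D objects and their pairwise distance matrix D.
rng = np.random.default_rng(0)
points = rng.random((6, 2))
D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

# metric="precomputed" lets us feed the distance matrix directly,
# as done later for the geolocation data of Sect. 3.1.
clust = OPTICS(min_samples=2, max_eps=0.5, metric="precomputed")
clust.fit(D)

print(clust.ordering_)        # the cluster ordering of the objects
print(clust.reachability_)    # reachability distance of each object
print(clust.core_distances_)  # core distance of each object
print(clust.labels_)          # cluster labels extracted from the ordering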

2.2 Quantum operators and subroutines

Quantum computing uses quantum mechanics as a theoretical basis, where the concepts of superposition of states, density matrices representing states and the entanglement phenomenon are fundamental. A state in quantum mechanics encompasses statistical information about its position in an n-dimensional Hilbert space. Based on the Dirac notation, it is represented by a vector as in Eq. 1.

$$\begin{aligned} \arrowvert \Psi>=\sum _{i}\alpha _{i} \arrowvert b_{i}> \end{aligned}$$
(1)

\(\alpha _{i}\) for all i are complex numbers representing the amplitudes of the basis states \( \arrowvert b_{i}>\) under the normalization condition \(\sum _{i}\vert \alpha _{i}\vert ^2=1\). The linear combination of all the basis states is called a superposition. It denotes the fact that all the basis states exist simultaneously in the superposition.

A qubit (quantum bit) is a quantum state that represents the smallest unit of quantum information storage. It consists of a superposition \(\alpha _{1} \arrowvert 0>+\alpha _{2} \arrowvert 1>\) of the two basis states \( \arrowvert 0>\) and \( \arrowvert 1>\) such that \(\vert \alpha _{1}\vert ^2+\vert \alpha _{2}\vert ^2=1\). The main advantage of the quantum computer is that its computing power is exponential in terms of the number of qubits. In fact, two combined qubits are in a superposition of four states \(\alpha _{1} \arrowvert 00>+\alpha _{2} \arrowvert 01>+\alpha _{3} \arrowvert 10>+\alpha _{4} \arrowvert 11>\), with \(\vert \alpha _{1}\vert ^2+\vert \alpha _{2}\vert ^2+\vert \alpha _{3}\vert ^2+\vert \alpha _{4}\vert ^2=1\). With n qubits, a superposition of \(2^{n}\) states can be created. So when an operator is applied to the set of qubits, it is applied to \(2^{n}\) states at the same time, which is equivalent to a parallel calculation on \(2^{n}\) data.

One important operation in quantum computing is measurement, which returns a basis state with a probability proportional to its weight. For instance, when measuring the value of a qubit, the only answers that can be obtained are 0 or 1. The probability of measuring state 0 is \(\vert \alpha _{1}\vert ^{2}\), while that of measuring state 1 is \(\vert \alpha _{2}\vert ^{2}\). After measurement, the qubit collapses to the measured state, which then behaves classically.
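
As a toy illustration, independent of any quantum SDK, the amplitudes of a single qubit and the resulting measurement probabilities can be simulated with NumPy; the chosen amplitudes and the sampling below are our own didactic sketch.

```python
import numpy as np

# A single-qubit state alpha_1|0> + alpha_2|1>, normalized so that
# |alpha_1|^2 + |alpha_2|^2 = 1.
alpha = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)

probs = np.abs(alpha) ** 2        # P(0) = |alpha_1|^2, P(1) = |alpha_2|^2
print(probs)                      # [0.3, 0.7]

# Simulated measurements: each shot yields 0 or 1 with these probabilities.
rng = np.random.default_rng(42)
shots = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(shots) / 1000)  # empirical frequencies close to [0.3, 0.7]
```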

Except for the measurement operation, all quantum operators are unitary and are represented by gates. A quantum algorithm can then be designed as a classical algorithm expressed with Toffoli gates, in which quantum operators or subroutines are inserted at certain points in order to speed up the calculation.

A quantum state is entangled if it cannot be expressed as a tensor product of the states of its individual qubits. Another important concept is the quantum register, a set of n qubits whose state belongs to a Hilbert space of dimension \(2^{n}\). Using the Dirac notation, it is represented as \( \arrowvert \Psi >=\sum _{i=0}^{i=2^n-1}\alpha _{i} \arrowvert i>\) under the normalization condition \(\sum _{i}\vert \alpha _{i}\vert ^2=1\). The basis state \( \arrowvert i>\) expresses the binary encoding of i.

2.2.1 Quantum operators

Quantum computers use quantum circuits composed of quantum gates, or operators. Among quantum operators is the Toffoli gate, which can reproduce all the operations of classical circuits. The n-bit Toffoli gate is a generalization of the Toffoli gate. It has n input bits \((x_1, x_2, \ldots , x_n)\) and n output bits. The first \(n-1\) output bits are unchanged, whereas the last one is (\(x_1\) AND ...AND \(x_{n-1}\)) XOR \(x_{n}\). The Hadamard gate transforms the pure state \( \arrowvert 0>\) into the superposed state \(\frac{1}{\sqrt{2}} \arrowvert 0>+\frac{1}{\sqrt{2}} \arrowvert 1>\) and the pure state \( \arrowvert 1>\) into the superposed state \(\frac{1}{\sqrt{2}} \arrowvert 0>-\frac{1}{\sqrt{2}} \arrowvert 1>\). A measurement in this case has the same probability of giving 1 or 0. The application of a Hadamard gate to each qubit of an n-qubit register in parallel is equivalent to the Hadamard transform \(H_{n}\).
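
These gates are small unitary (or, for classical reversible logic, permutation) matrices. The sketch below, a pure linear-algebra illustration with no quantum SDK, builds the Hadamard gate, the transform \(H_{n}\) via Kronecker products, and the 3-bit Toffoli gate.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

def hadamard_transform(n):
    """H_n = H tensor H tensor ... tensor H (n factors)."""
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    return Hn

# Applying H_n to |0...0> yields the uniform superposition of 2^n states.
n = 3
state = np.zeros(2 ** n); state[0] = 1.0
print(hadamard_transform(n) @ state)           # all amplitudes equal 1/sqrt(8)

# 3-bit Toffoli gate: flips the target bit iff both control bits are 1.
toffoli = np.eye(8)
toffoli[[6, 7], :] = toffoli[[7, 6], :]        # swap basis states |110> and |111>
print(toffoli @ np.eye(8)[6])                  # |110> is mapped to |111>
```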

Another important and extensively used gate is the Grover diffusion operator: combined with an oracle, it amplifies the amplitude of the sought element and is the key building block of quantum search in an unordered list.

2.2.2 Quantum subroutines

Numerous basic quantum subroutines have been developed and published in the recent literature. Those that are used for devising the quantum OPTICS algorithm are described below.

Grover's algorithm (Grover 1996) is a quantum subroutine that searches for an element \(x_{0}\) belonging to an unordered set S of n elements. Using a classical algorithm, the best complexity to perform such a search is O(n), as in the worst case all the elements are tested before encountering \(x_{0}\). Grover's algorithm performs the same task within \(O(\sqrt{n})\) time complexity. The algorithm considers a function \(F:\{0, 1, 2, \ldots , n-1\} \rightarrow \{0, 1\}\) that returns 1 for \(x_{0}\) and 0 for all the other elements, that is, \(F(x)=1\) only for \(x=x_{0}\). It uses Hadamard gates to create a uniform superposition of all the states at the beginning and then applies the Grover operator \(O(\sqrt{n})\) times.
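
To make the quadratic speedup concrete, here is a small state-vector simulation of Grover's search over n = 16 items written with plain NumPy (a didactic sketch, not a circuit-level implementation); after roughly \(\lfloor \pi \sqrt{n}/4\rfloor \) iterations the probability of measuring \(x_{0}\) is close to 1.

```python
import numpy as np

n = 16                      # size of the unordered search space
x0 = 11                     # index of the marked element, F(x0) = 1

# Start from the uniform superposition produced by Hadamard gates.
state = np.full(n, 1 / np.sqrt(n))

iterations = int(np.floor(np.pi / 4 * np.sqrt(n)))   # ~ O(sqrt(n)) iterations
for _ in range(iterations):
    state[x0] *= -1                                  # oracle: phase flip on x0
    state = 2 * state.mean() - state                 # diffusion: inversion about the mean

print(iterations)                   # 3 iterations for n = 16
print(abs(state[x0]) ** 2)          # probability of measuring x0 (~0.96)
```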

In Boyer et al. (1998), the authors provide a generalization of Grover's algorithm, a technique that handles and counts multiple occurrences of \(x_{0}\) in the list, with the same computational complexity \(O(\sqrt{n})\).

The quantum amplitude amplification algorithm (Brassard et al. 2002) is another generalization of Grover's algorithm. Used as a subroutine, it provides a quadratic speedup over several classical algorithms. The quantum amplitude amplification algorithm does not necessarily start from a uniform superposition: when prior information on the states is available, the initial state is prepared accordingly. If the probability of finding an element a of S is \(p_{a}\) (and not \(\frac{1}{n}\) as in Grover's algorithm, the probabilities of all the elements of S summing to 1), the computational complexity is \(O(\frac{1}{\sqrt{p_{a}}})\).

Using the quantum exponential searching algorithm (Boyer et al. 1998), the authors in Durr and Hoyer (1996) devised a quantum algorithm that yields the minimum of an unordered set with a computational complexity of \(O(\sqrt{n})\).

In Durr et al. (2006), Durr et al. developed a quant_find_smallest_values subroutine for finding the c closest neighbors of a point in \(O(\sqrt{c \times n})\) time.

Despite its numerous advanced developments, QML remains rich in open questions. Quantum OPTICS, or QOPTICS, studied in this paper, is one of them.

2.3 The dynamic ambulance dispatching and emergency calls covering

The dynamic ambulance dispatching and emergency calls covering (DADECC-COVID19) system that we propose is described and depicted in Fig. 1. It considers a set of geographically dispersed ambulance stations, a set of hospitals and a set of real-time incoming emergency calls. It receives emergency calls and controls the movement of all the vehicles from the station towards the call point and then from this point to the selected hospital. More precisely, when an emergency call arrives, the system makes a decision about which vehicle to send to assist this call. To respond to calls in an optimal time, the system should guarantee a good covering rate of all the zones from which the calls can come. A vehicle can be idle at its station, en route towards a call (where it can be diverted to serve another call), arriving at a call location, or finishing service and becoming available again. The real-time emergency vehicle dispatching problem in this work is dynamic. The destination of a vehicle, for example, can change at any time if its diversion improves the service quality, especially the response time. To avoid diverting the same vehicle many times or diverting too many vehicles, and in order to protect the emergency crews and the callers from disturbances, a limit is imposed by accepting only reassignments that bring a significant saving in time or call covering. The proposed solving methods are based on new AI technologies and adapt to this dynamism. Figure 2 illustrates the chain of treatments of an emergency call by DADECC-COVID19.

Fig. 1: Dynamic ambulance dispatching and emergency calls covering system (DADECC-Covid19)

In the context of COVID-19, numerous studies have been published, the aim being to explore in one way or another some aspects of this unpredictable pandemic. Artificial intelligence has been largely explored in this context. Castillo and his co-author worked on country-level forecasts based on the COVID-19 time series of confirmed cases and deaths (Castillo and Melin 2020). They also developed a method for a COVID-19 classification of countries based on an intelligent fuzzy fractal approach (Castillo and Melin 2021). For the case of Mexico, an application for predicting the COVID-19 time series was also undertaken (Melin 2020). In this interesting study, a multiple ensemble neural network model with fuzzy response aggregation is proposed. In Mansour (2021), the authors propose an unsupervised deep learning model based on image recognition to detect and classify COVID-19. A preprocessing technique is introduced to enhance the image quality for the diagnosis and another one for the classification.

The present study is conducted for KSA but with another concern, which is managing the emergency transportation during COVID-19 crises.

Fig. 2: Emergency call response chain

3 Quantum ordering points to identify the clustering structure

The QOPTICS algorithm we devise stores its input data in a quantum random access memory (QRAM) and makes use of the quantum minimum searching subroutine and the quant_find_smallest_values subroutine among other quantum routines. The details are described in the next subsections.

3.1 Data storage in quantum random access memory

QRAM (Giovannetti et al. 2008) is used to store the input data of the QOPTICS algorithm. As input data, the algorithm handles the distance matrix D containing the distances between objects, in addition to the parameters eps and MinPts. To store the matrix D in QRAM, an address register and an output register are defined. As there are \(n^2\) elements in D, the address register is composed of \(\log _2 n^{2}\) (i.e., \(2\log _2 n\)) qubits and holds a superposition of the \(n^2\) addresses, as shown in Eq. 2.

$$\begin{aligned} \sum _{k=0}^{k=n^{2}-1}\frac{1}{n} \arrowvert k>_a \end{aligned}$$
(2)

The output register is composed of the qubits encoding the distances correlated with the address register, as shown in Eq. 3.

$$\begin{aligned} \sum _{k=0}^{k=n^{2}-1}\frac{1}{n} \arrowvert k>_a \quad \xrightarrow []{QRAM} \quad \sum _{k=0}^{k=n^{2}-1}\frac{1}{n} \arrowvert k>_a \arrowvert dist_{i,j}>_r\nonumber \\ \end{aligned}$$
(3)

where \(k=(i-1)*n+j-1\).

With QRAM, it takes \(O(\log n^2)\), that is \(O(\log n)\), time to access a distance, as there are \(\log _2 n^2 = 2 \log _2 n\) qubits in the address register.
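
As a purely classical illustration of this bookkeeping, the sketch below evaluates the address mapping \(k=(i-1)\times n+j-1\) of Eq. 3 and the size of the address register; the helper names are ours.

```python
import math

n = 279                                   # number of objects (hospitals)

def address(i, j, n):
    """Flatten the 1-based matrix index (i, j) into the QRAM address k."""
    return (i - 1) * n + j - 1

print(address(1, 1, n), address(n, n, n)) # 0 and n*n - 1

# Number of qubits in the address register: log2(n^2) = 2*log2(n).
address_qubits = math.ceil(math.log2(n ** 2))   # = ceil(2*log2(n)) = 17 for n = 279
print(address_qubits)
```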

The output of QOPTICS is an ordered file where the objects are ordered within the clusters they belong to. Each object carries its core-distance and its reachability distance. The file can be stored using a superposition of the address and output registers, as shown in Eq. 4.

$$\begin{aligned} \begin{aligned}&\sum _{k=0}^{k=n-1}\frac{1}{\sqrt{n}} \arrowvert k>_a \text { } \xrightarrow []{QRAM} \\&\sum _{k=0}^{k=n-1}\frac{1}{\sqrt{n}} \arrowvert k>_a \arrowvert core_{k}>_{r} \arrowvert reachability_{k}>_r \end{aligned} \end{aligned}$$
(4)

where core and reachability are two different superpositions of n items.

3.2 QOPTICS algorithm

The algorithms below highlight the theoretical constructs of the quantum version of OPTICS using quantum subroutines such as Quant-Min-Search (Durr and Hoyer 1996) and Quant_find_smallest_values (Durr et al. 2006). Algorithm 1 is the quantum algorithm of OPTICS. It calls the quantum subroutines of Algorithm 2 and Algorithm 3. Algorithm 2 produces the eps-neighbors of an object p, i.e., those that lie within distance eps of p. Algorithm 3 calculates the core distance of the current object and the current reachability distance of the objects in eps-Neighbors. The Seeds set does not need to be ordered, as the quantum subroutine Quant-Min-Search can be invoked to retrieve the object with the minimum reachability distance. A quantum oracle O is created and used in Algorithm 3 to transform an index \( \arrowvert q>\) belonging to eps-Neighbors into a state \( \arrowvert q.reachability>\) defined as in Eq. 5.

Algorithm 1: QOPTICS
Algorithm 2: Quant-eps-Neighbors
Algorithm 3: Quant-Core-Distance
$$\begin{aligned} q.reachability= {\left\{ \begin{array}{ll} p.core &{} \text {if } dist_{p,q} \le {p.core} \\ dist_{p,q} &{} \text {otherwise } \end{array}\right. } \end{aligned}$$
(5)
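
Since the quantum pseudocode is given as figures, the classical skeleton that QOPTICS accelerates can be summarized as follows; the comments indicate where Quant-eps-Neighbors, Quant-Core-Distance and Quant-Min-Search are invoked in Algorithms 1–3. The helper names and data layout are our own sketch of the classical reference behaviour, not the quantum circuits themselves.

```python
import numpy as np

def optics_reference(D, eps, min_pts):
    """Classical skeleton of OPTICS on a precomputed distance matrix D.

    In QOPTICS, the neighborhood query is replaced by Quant-eps-Neighbors,
    the core distance by Quant-Core-Distance (via Quant_find_smallest_values),
    and the extraction of the next seed by Quant-Min-Search.
    """
    n = len(D)
    reachability = np.full(n, np.inf)
    core = np.full(n, np.inf)
    processed = np.zeros(n, dtype=bool)
    ordering = []

    def neighbors(p):                       # quantum version: Quant-eps-Neighbors
        return [q for q in range(n) if q != p and D[p, q] <= eps]

    def core_distance(p, nbrs):             # quantum version: Quant-Core-Distance
        if len(nbrs) < min_pts:
            return np.inf
        return np.sort(D[p, nbrs])[min_pts - 1]

    def update(p, nbrs, seeds):
        for q in nbrs:
            if not processed[q]:
                new_reach = max(core[p], D[p, q])   # Eq. (5)
                if new_reach < reachability[q]:
                    reachability[q] = new_reach
                    seeds.add(q)

    for p in range(n):
        if processed[p]:
            continue
        nbrs = neighbors(p)
        processed[p] = True
        ordering.append(p)
        core[p] = core_distance(p, nbrs)
        if core[p] < np.inf:
            seeds = set()
            update(p, nbrs, seeds)
            while seeds:
                q = min(seeds, key=lambda s: reachability[s])   # Quant-Min-Search
                seeds.remove(q)
                q_nbrs = neighbors(q)
                processed[q] = True
                ordering.append(q)
                core[q] = core_distance(q, q_nbrs)
                if core[q] < np.inf:
                    update(q, q_nbrs, seeds)
    return ordering, core, reachability
```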

Lemma 1

The computational complexity of Quant-eps-Neighbors subroutine is \(O(log n\sqrt{n})\).

Proof

The Quant-eps-Neighbors algorithm calls the Quant-Min-Search subroutine a number of times equal to the number of eps-neighbors. The worst-case access time to the eps-neighbors is \(O(\log n)\), as the maximum size of the eps-neighborhood is n. The computational complexity of Quant-Min-Search being \(O(\sqrt{n})\), the computational complexity of Quant-eps-Neighbors is \(O(\log n\sqrt{n})\). \(\square \)

Lemma 2

The run time of Quant-Core-Distance subroutine is \(O(\sqrt{MinPts \times n})\).

Proof

Algorithm 3 calls Quant_find_smallest_values, which runs in \(O(\sqrt{MinPts \times n})\) according to Durr et al. (2006). The reachability distance is calculated for each object in the MinPts-neighbors in \(O(\log n)\) time. The Hadamard transform is computed in O(1) time per gate, as it is a quantum logic gate; the application of a Hadamard gate to each qubit of an n-qubit register in parallel, which is equivalent to the Hadamard transform \(H_{n}\), is obtained in \(\log n\) steps. The computational complexity of the Quant-Core-Distance algorithm is then \(O(\sqrt{MinPts \times n})\). \(\square \)

Theorem 1

The running time of QOPTICS is \(O(log n\sqrt{MinPts \times n})\).

Proof

QOPTICS calculates, for each point, its eps-neighborhood with a complexity of \(O(\log n\sqrt{n})\). As the worst case of the eps-neighborhood considered in Lemma 1 was n, this complexity also stands for all the objects. The algorithm also computes, for each object, its core distance and its reachability distance with a complexity of \(O(\sqrt{MinPts \times n})\). As the points can be accessed in \(O(\log n)\), its time complexity is then \(O(\log n\sqrt{MinPts \times n})\).

In order to highlight the gain in execution time brought by QOPTICS, let us consider the case of our dataset, where n is the number of hospitals, which is equal to 279, and MinPts = 1 hospital. If the unit of time is 1 millisecond, then the OPTICS runtime is \(279^{2}= 77\,841\) milliseconds, which is about 78 seconds, whereas the QOPTICS runtime is \(\log _2 279 \times \sqrt{279} \approx 8.1 \times 16.7 \approx 136\) milliseconds, or about 0.14 seconds. The gain is even more significant for larger datasets. \(\square \)

Theorem 2

The spatial complexity of QOPTICS is \(O(log_{2} n)\).

Proof

QOPTICS uses a QRAM to store the distances between pairs of distinct points in a register, which is a superposition of the distances. Its size is \(O(\log _{2} n^{2})\), as the number of pairwise distances is \(O(n^2)\); it is then \(O(\log _{2} n)\), as \(\log _{2} n^{2} = 2 \log _{2} n\). The reachability distance is calculated by an oracle and is stored in a memory of size \(\log _{2} n\), as the reachability distance is calculated for each point. Therefore, in total, the spatial complexity of QOPTICS is \(O(\log _2 n)\). \(\square \)

4 Deep self learning AOA and EHO

This section presents the proposed Deep Self-Learning approach on Artificial Orcas Algorithm (AOA) (Bendimerad and Drias 2020) and on Elephant Herding Optimization (EHO) (Wang et al. 2015). First, AOA is briefly described followed by its improved version using deep self-learning. The deep self-learning EHO is based on the same principle.

4.1 Artificial Orca algorithm (AOA)

Swarm intelligence algorithms have received a lot of attention in recent decades. In Bendimerad and Drias (2020), the authors proposed one of these algorithms, mimicking orcas in their living environment. Recently, in order to add the cultural dimension of orcas, AOA was hybridized with the cultural algorithm (CA) to develop an algorithm called OCA (Drias et al. 2021). The social organization of orcas includes several clans containing, in their turn, pods of individuals. Orcas practice echolocation to detect prey and perform various types of hunting strategies to reach their prey. All these phenomena are simulated in the Artificial Orcas Algorithm (AOA). The artificial orcas are directed by a matriarch, which is considered the fittest individual in the pod. In addition, each hierarchical structural level is distinguished by a degree of closeness: individuals within a pod are closer to each other than to the rest of their clan. AOA is characterized by an excellent balance between two very important phases of evolutionary algorithms: search intensification and search diversification. The original AOA is outlined in Algorithm 4.

Algorithm 4: The original AOA
$$\begin{aligned}&f_{group}=f_{min}+(f_{max}-f_{min})\times \alpha \end{aligned}$$
(6)
$$\begin{aligned}&v_{p_{i}}^{t}=v_{p_{i}}^{t-1}+f_{i}\times D_{p}+f_{c}\times D_{c}+ f_{pop}\times D_{pop} \end{aligned}$$
(7)
$$\begin{aligned}&x_{p_{i}}^{t}=x_{p_{i}}^{t-1}+v_{p_{i}}^{t} \end{aligned}$$
(8)
$$\begin{aligned}&x_{temp,p_{i}}^{t}=A\times \sin \left( \frac{2\times \pi }{L}\times x_{p_{i}}^{t-1}\right) \times \cos \left( \frac{2\times \pi }{T}\times t_{x}\right) \end{aligned}$$
(9)
$$\begin{aligned}&x_{m,p}^{t}=\frac{\sum _{j=1}^{j=n}x_{temp,p_{j}}}{n} \end{aligned}$$
(10)
$$\begin{aligned}&x_{p_{i}}^{t}=x_{p}^{*}-\beta \times x_{m,p}^{t} \end{aligned}$$
(11)
$$\begin{aligned}&x_{new, p_{i}}^{t}= \frac{\gamma \times x_{pop_{rand1}}+\omega \times x_{c_{rand2}}^{t}}{2} \end{aligned}$$
(12)

t indicates the current iteration. \(x_{p_i}^t\) is the individual at position i in the pod p. \(v_{p_i}^t\) is the velocity of the individual at position i in the pod p. \(f_{min}\) and \(f_{max}\) are respectively the minimum and the maximum frequencies and are used to generate a random frequency \(f_{group}\) in this range. Note that group corresponds either to p, c or pop in order to determine the frequency for the pod, the clan and the population to which the individual belongs. \(\alpha \), \(\gamma \) and \(\omega \) are random numbers in interval [0,1]. The population levels are respectively distant from each other by the distances \(D_p\), \(D_c\) and \(D_{pop}\) defined as follows.

  • \(D_p= \arrowvert x_{p_{i}}^{t-1}-x_{p}^{*} \arrowvert \) where \(x_{p}^{*}\) is the matriarch of the pod to which \(x_{p_{i}}^{t-1}\) belongs.

  • \(D_c= \arrowvert x_{p_{i}}^{t-1}-x_{c}^{*} \arrowvert \) where \(x_{c}^{*}\) is the matriarch of the clan to which \(x_{p_{i}}^{t-1}\) belongs.

  • \(D_{pop}= \arrowvert x_{p_{i}}^{t-1}-x_{pop}^{*} \arrowvert \) where \(x_{pop}^{*}\) is the matriarch of the population.

For the hunting strategies, A is a parameter depending on the problem modeling, L is a parameter representing the wave length and T is an empirical parameter that represents the wave period during the chasing activity. \(x_{new,p_i}^t\) is the new solution of the individual i in the pod p. \(x_{pop,rand1}^t\) is a random individual in the population at position rand1 and \(x_{c,rand2}^t\) is a random individual in the clan c of the current pod at position rand2.
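
To make the update rules concrete, the echolocation step of Eqs. 6–8 can be rendered in NumPy as follows; the frequency bounds, matriarch positions and dimensionality are placeholder assumptions, so this is only an illustrative sketch of the movement rule and not the published AOA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
f_min, f_max = 0.0, 2.0        # assumed frequency bounds

def frequency():
    """Eq. (6): random frequency drawn in [f_min, f_max]."""
    return f_min + (f_max - f_min) * rng.random()

def echolocation_update(x, v, x_pod_star, x_clan_star, x_pop_star):
    """Eqs. (7)-(8): velocity and position update of one artificial orca."""
    d_p = np.abs(x - x_pod_star)      # distance to the pod matriarch
    d_c = np.abs(x - x_clan_star)     # distance to the clan matriarch
    d_pop = np.abs(x - x_pop_star)    # distance to the population matriarch
    v_new = v + frequency() * d_p + frequency() * d_c + frequency() * d_pop
    return x + v_new, v_new

# One 2-dimensional individual moving towards its three matriarchs.
x = np.array([0.2, 0.8]); v = np.zeros(2)
x_new, v_new = echolocation_update(x, v,
                                   x_pod_star=np.array([0.5, 0.5]),
                                   x_clan_star=np.array([0.4, 0.6]),
                                   x_pop_star=np.array([0.45, 0.55]))
print(x_new, v_new)
```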

4.1.1 Analysis of AOA complexity

The number of operations of AOA, denoted \(TC_{AOA}\) and presented in Eq. 13, depends on the maximum number of iterations MaxIter, the population size n and the number of pods \(\#pods\). Note that the sorting method used in the algorithm is heapsort, whose computational complexity is \(O(n\log n)\).

$$\begin{aligned} TC_{AOA}=\sum _{i=1}^{MaxIter}(nlogn+n+\#pods + n) \end{aligned}$$
(13)

\(n\log n\) operations are needed for Instruction 3, n operations for Instruction 4 and Instruction 6, and \(\#pods\) operations for Instruction 5. As the size of the population is \(n=\#clans\times \#pods \times \#orcas \) and \(n > \#pods\), we conclude that:

\(TC_{AOA}<\sum _{i=1}^{MaxIter}(n\log n+3n)\), and therefore the computational complexity of AOA is \(O(MaxIter\times n\log n)\).

4.2 Elephant herding optimization (EHO)

Elephant Herding Optimization (EHO) is a swarm intelligence-based method inspired by the herding behavior of elephants and proposed to solve optimization problems. Since it was first proposed in 2015 by Wang et al. (2015), it has received significant attention from scholars worldwide (Li et al. 2020). Several EHO improvements and hybridizations have recently been published (Tuba et al. 2018; Li et al. 2019; Moayedi et al. 2020; Houacine and Drias 2021).

In nature, elephants have a social structure composed of clans, where the elephants of each clan are under the leadership of a matriarch. Females live in family clans, while male elephants leave their group once they grow up. These two behaviors are formalized into EHO through two main operators: Clan updating operator and Separating operator.

Also, EHO is characterized by a maximum number of generations (iterations), the influence rate of the matriarch on elephants (\(\alpha \)), and the influence rate of the clan’s gravity center on the matriarch (\(\beta \)). EHO algorithm is summarized in Algorithm 5.

Algorithm 5: EHO
$$\begin{aligned} x^{t+1}_{i,j}= & {} \beta \times x^{t+1}_{center,j} \end{aligned}$$
(14)
$$\begin{aligned} X_{center,j}= & {} \frac{1}{\#E} \times \sum _{i=1}^{\#E} X_{i,j} \end{aligned}$$
(15)
$$\begin{aligned} x^{t+1}_{worst,j}= & {} x_{min} + ( x_{max} - x_{min} ) \times rand \end{aligned}$$
(16)
$$\begin{aligned} x^{t+1}_{i,j}= & {} x^{t}_{i,j} + \alpha \times ( x^{t+1}_{best,j} - x^{t}_{i,j} ) \times r \end{aligned}$$
(17)

\(\#C\) and \(\#E\) are the number of clans and the number of elephants in each clan, respectively. \(x^{t}_{i,j}\) and \(x^{t+1}_{i,j}\) respectively represent the current and the newly updated solution of the i-th elephant of the j-th clan. \(x^{t}_{best,j}\) and \(x^{t}_{worst,j}\) are the best and the worst elephant solutions of the j-th clan. \(x_{max}\) and \(x_{min}\) are the upper and lower bounds of a solution, \(x_{center,j}\) gives the gravity center of clan j, rand and r are random numbers \(\in \) [0,1], and \(\alpha \), \(\beta \in \) [0,1].
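
As an illustration of Eqs. 14–17, a minimal NumPy sketch of the clan-updating and separating operators is given below; clan construction, fitness evaluation and the choice of the best/worst elephants are placeholders (our own assumptions), so it only renders the update rules, not the published EHO implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.5, 0.1
x_min, x_max = 0.0, 1.0

def clan_update(clan, best):
    """Eqs. (15), (17) and (14): move every elephant towards the clan's best;
    the matriarch (best elephant) is moved towards the clan's gravity center."""
    center = clan.mean(axis=0)                               # Eq. (15)
    r = rng.random(clan.shape)
    new_clan = clan + alpha * (best - clan) * r              # Eq. (17)
    new_clan[np.argmax((clan == best).all(axis=1))] = beta * center   # Eq. (14)
    return new_clan

def separate_worst(clan, worst_index):
    """Eq. (16): replace the worst elephant by a random individual."""
    clan[worst_index] = x_min + (x_max - x_min) * rng.random(clan.shape[1])
    return clan

clan = rng.random((5, 2))           # one clan of 5 elephants in 2 dimensions
best = clan[0]                      # assume elephant 0 is currently the fittest
clan = clan_update(clan, best)
clan = separate_worst(clan, worst_index=4)   # assume elephant 4 is the worst
print(clan)
```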

4.2.1 Analysis of EHO algorithm complexity

The number of operations of the EHO algorithm at worst is given in Eq. 18 and is based on the maximum number of iterations MaxIter and the population size n, which is equal to the product of the number of clans \(\#C\) and the number of elephants \(\#E\) per clan.

$$\begin{aligned} TC_{EHO} = \sum _{i=1}^{MaxIter} \left( \#C \times \#E \times \log (\#E) + 2n + const \right) \nonumber \\ \end{aligned}$$
(18)
Fig. 3: Gaussian distribution in interval \(x \in [-5,5]\)

\(\#C \times \#E \times \log (\#E)\) operations are needed for Instruction 3, n operations for Instruction 6 and Instruction 7 and a constant const for Instruction 4 and Instruction 5. We deduce that:

\(TC_{EHO} = MaxIter \times (nlog(\#E) + 2n+ const) \). The computational complexity of EHO algorithm is then \(O(MaxIter \times nlog(\#E))\).

4.3 Mutation based on Gaussian and Cauchy distributions

Evolutionary algorithms are known to have limited potential for solving very complex real-world problems (Kechid and Drias 2020). One possible way to strengthen their performance and to allow them to escape from local optima is to introduce self-learning strategies within the algorithm (Wang et al. 2020), which have recently proved their usefulness. The Gaussian and Cauchy distributions are two operators used for that purpose.

4.3.1 Mutation based on Gaussian operator

The theoretical basis of Gaussian mutation (Jakubik et al. 2021) is the normal distribution, whose cumulative distribution function is defined in Eq. 19 and depicted in Fig. 3.

$$\begin{aligned} P(x)=\frac{1}{2}\left( 1+erf\left( \frac{x-\mu }{\sigma \sqrt{2}}\right) \right) \end{aligned}$$
(19)

\(\mu \) represents the mean, \(\sigma \) the standard deviation and erf the error function, which is defined in Eq. 20.

$$\begin{aligned} erf(x)=\frac{2}{\sqrt{\pi }}\int _{0}^{x} e^{-t^{2}}\,dt \end{aligned}$$
(20)

4.3.2 Mutation based on Cauchy distribution function

Similarly, the Cauchy mutation operator (Paiva et al. 2017) is based on the Cauchy distribution, whose cumulative distribution function is presented in Eq. 21 and shown in Fig. 4, where \(x_{0}\) is the location parameter and \(\gamma \) the scale parameter.

$$\begin{aligned} P(x)=\frac{1}{2}+\frac{1}{\pi }\arctan \left( \frac{x-x_{0}}{\gamma }\right) \end{aligned}$$
(21)
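
Both cumulative distribution functions can be evaluated with the Python standard library; the short check below (our own illustration) reproduces Eqs. 19 and 21 and the symmetry around the center visible in Figs. 3 and 4.

```python
import math

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    """Eq. (19): P(x) = 1/2 * (1 + erf((x - mu) / (sigma * sqrt(2))))."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cauchy_cdf(x, x0=0.0, gamma=1.0):
    """Eq. (21): P(x) = 1/2 + (1/pi) * arctan((x - x0) / gamma)."""
    return 0.5 + math.atan((x - x0) / gamma) / math.pi

print(gaussian_cdf(0.0), cauchy_cdf(0.0))   # both 0.5 at the center
print(gaussian_cdf(5.0), cauchy_cdf(5.0))   # the Cauchy tail is much heavier
```
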
Fig. 4: Cauchy distribution in interval \(x \in [-5,5]\)

4.4 DSLAOA and DSLEHO

The deep self-learning versions designed for AOA and EHO are called Deep Self-Learning AOA (DSLAOA) and Deep Self-Learning EHO (DSLEHO), respectively. In what follows, the approach is detailed for AOA. The mutation operators are integrated in AOA after the fourth step of the algorithm, between the intensification and the diversification phases. The matriarch of each pod is deeply improved using Eq. 22, expressed in terms of two random individuals of this pod. In addition, a new parameter called Depth is introduced to define the depth of learning. For EHO, the matriarch of each clan is deeply improved using the same equation.

$$\begin{aligned} x_{new,p_{matriarch}}^{t}=MO \times (x_{p,rand1}^{t}-x_{p,rand2}^{t})+x_{p_{matriarch}}^{t}\nonumber \\ \end{aligned}$$
(22)

MO corresponds either to the Gaussian mutation operator or to the Cauchy mutation operator. Algorithm 6 outlines the Deep Self-Learning algorithm.

Algorithm 6: Deep Self-Learning (applied to AOA and EHO)
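
One possible reading of the mutation step of Eq. 22 is sketched below with NumPy, with MO taken as a value drawn from the Gaussian or Cauchy distribution; since the published pseudocode is given as a figure, this sampling choice, the placeholder fitness and the iterative improvement of the matriarch are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def deep_self_learning(matriarch, pod, depth, operator="cauchy"):
    """Repeat the mutation of Eq. (22) `depth` times and keep the best-found
    matriarch position ('best' is judged by a placeholder objective to minimize)."""
    fitness = lambda x: np.sum(x ** 2)          # placeholder objective (assumption)
    best = matriarch.copy()
    for _ in range(depth):
        r1, r2 = rng.choice(len(pod), size=2, replace=False)
        mo = rng.standard_cauchy() if operator == "cauchy" else rng.standard_normal()
        candidate = mo * (pod[r1] - pod[r2]) + best          # Eq. (22)
        if fitness(candidate) < fitness(best):
            best = candidate
    return best

pod = rng.random((6, 2))                         # a pod of 6 individuals in 2-D
print(deep_self_learning(pod[0], pod, depth=100, operator="gaussian"))
```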

4.4.1 Analysis of DSLAOA algorithm complexity

The computational complexity of Algorithm 6 depends greatly on the depth parameter Depth. This parameter involves adding a loop to AOA to repeatedly apply Eq. 22 to the matriarch of each pod of the population. The computational complexity of DSLAOA is then obtained by adding to the complexity of AOA the operations of the introduced loop, that is \(\#Clans \times \#pods\times Depth\). It is then \(O(MaxIter \times (n\log n+\#Clans \times \#pods\times Depth))\), which is at worst \(O(MaxIter \times n\log n \times Depth)\).

The Depth parameter, as its name suggests, is supposed to be a large number that allows the improvement of the best solution through several executions of the mutation operators (Gaussian or Cauchy). Accordingly, the complexity of DSLAOA is more likely to be greater than that of AOA.

4.4.2 Analysis of DSLEHO algorithm complexity

With a reasoning similar to that used for the computational complexity calculation of DSLAOA, we obtain that the computational complexity of the DSLEHO algorithm is at worst \(O(MaxIter \times n\log (\#E)\times Depth)\).

5 Adaptation of DSLAOA and DSLEHO for Covid-19 EMS transportation

In this section, the Covid-19 EMS transportation problem is described and formulated mathematically. Thereafter, the adaptation of DSLAOA and DSLEHO for the issue is presented.

5.1 Related work

Ambulance fleet management is an essential tool to avoid severe problems in emergency medical services (EMS). Early studies addressed the emergency vehicle dispatching problem with different static approaches based, among others, on the minimization of the total travel time in the system, the priority of the call and the first-come-first-served strategy (Bandara et al. 2014; Lee 2017). On the other hand, the demand covering problem has also been investigated using static models. These approaches are well reported in a recent survey (Belanger et al. 2019), which focuses on operations research (OR) approaches applied to EMS. Researchers are still interested in this issue, as it can be applied to fire departments, police stations and human rescue in natural disasters such as earthquakes. Very recent works (Belanger et al. 2020; Usanov et al. 2019) and (Carvalho et al. 2020) proposed new methods based on OR strategies such as Markov decision processes and heuristic information. Interesting case studies were realized for Amsterdam and Lisbon.

As far as we know, few studies have been dedicated to integrating the ambulance dispatching and demand covering problems jointly in the same model. Except for the work presented in Ibri et al. (2012), the majority of the efforts have adopted OR approaches with simple heuristics. None of the previous works have used sophisticated bio-inspired approaches such as DSLAOA or DSLEHO. Also, none of them have explored the decomposition of the territory into sub-territories with a data science technique such as the OPTICS clustering method.

5.2 EMS problem formulation

The main target of the problem is to assign an ambulance to each emergency call so that there is no unanswered call in due time. To tackle the issue, the problem is modeled as follows.

  • The day is a sequence of several time periods p, representing the call arrival times.

  • A list C of the calls that arrive at each period of time. The calls are specified by their positions and their priority.

  • A list S of stations defined by hospital positions.

  • Each station contains a list A of ambulances, among which only Z ambulances are available.

  • A list E of edges representing the distance between a call \(c \in C\) and a station \(s \in S\).

  • The travel time \(t_{a_s}\) that keeps an ambulance \(a_s\) belonging to Station s busy is expressed by Eq. 23.

$$\begin{aligned} t_{a_{s}}=2 \times \left( UnitPerHour \times \frac{Distance(a_{s},c)}{v_{s}}+p\right) \end{aligned}$$
(23)

c is the call to which \(a_s\) is responding. \(v_s\) represents the emergency vehicle speed (in km/h) and UnitPerHour is the number of time units in 1 h. p represents the time it takes to deal with the emergency, and distance(c, s) is the spherical distance between two geographical points, calculated by Eq. 24.

$$\begin{aligned}&distance(c,s)=2\times \arcsin \nonumber \\&\quad \sqrt{\sin \left( {\frac{\delta _{c}-\delta _{s}}{2}}\right) ^{2}+\cos \delta _{c} \times \cos \delta _{s} \times \sin \left( \frac{\gamma _{c}-\gamma _{s}}{2}\right) ^{2}}\times R\nonumber \\ \end{aligned}$$
(24)

\(\delta \) and \(\gamma \) represent, respectively, the latitude and longitude of each point in radians, and R is the Earth radius. Based on these assumptions, the problem can be defined by the multi-objective function of Eq. 25.

$$\begin{aligned} F = Obj_{1}+Obj_{2}+Obj_{3} \end{aligned}$$
(25)

\(Obj_{1}\) is to minimize the predicted travel time, which can be expressed as: \(\sum _{j\in S} \sum _{i\in C} t_{i,j} \times x_{i,j}^{t}\)

\(Obj_{2}\) is to minimize waiting calls, which can be expressed as:

\(\sum _{i\in C} q_{i}^{t}\)

\(Obj_{3}\) is to minimize unserved priorities, which can be formulated as:

\(\sum _{i\in C} Priority_{i} \times q_{i}^{t}\)

where:

$$\begin{aligned} x_{i,j}^{t}= & {} {\left\{ \begin{array}{ll} 1 &{} {\text {if an ambulance from Station }} j {\text { is assigned to Call }} i {\text { at time }} t\\ 0 &{} {\text {otherwise}} \end{array}\right. } \\ q_{i}^{t}= & {} {\left\{ \begin{array}{ll} 1 &{} {\text {if Call }} i {\text { is waiting for an assignment of an ambulance}}\\ 0 &{} {\text {otherwise}} \end{array}\right. } \end{aligned}$$

These objective functions are subject to the following constraints:

  1. Assignment uniqueness: \(\sum _{i\in C}x_{i,a}^{t} + \sum _{j\in C}x_{j,a}^{t} = 1\).

  2. Reachability constraint: \(E(i,a) \ne \phi \) where i represents the position of the incoming call and a an ambulance assigned to i.
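
To make the formulation concrete, a compact Python sketch of Eqs. 23–25 is given below; the Earth radius, the time-unit granularity (144 units per day, i.e., 6 per hour), the speed and the data layout are illustrative assumptions, and the sub-objectives are simply summed as in Eq. 25.

```python
import math

R = 6371.0          # Earth radius in km (assumption)
UNIT_PER_HOUR = 6   # 10-minute time units, i.e. 144 units per day (assumption)

def distance(c, s):
    """Eq. (24): great-circle (haversine) distance between call c and station s,
    both given as (latitude, longitude) in radians."""
    d_lat = (c[0] - s[0]) / 2.0
    d_lon = (c[1] - s[1]) / 2.0
    h = math.sin(d_lat) ** 2 + math.cos(c[0]) * math.cos(s[0]) * math.sin(d_lon) ** 2
    return 2.0 * math.asin(math.sqrt(h)) * R

def busy_time(ambulance_pos, call_pos, speed_kmh, handling_time):
    """Eq. (23): round-trip travel time plus on-site handling time, in time units."""
    return 2.0 * (UNIT_PER_HOUR * distance(ambulance_pos, call_pos) / speed_kmh
                  + handling_time)

def overall_fitness(assignments, waiting, priorities):
    """Eq. (25): F = Obj1 + Obj2 + Obj3 for one time period.
    assignments: list of (travel_time, assignment_flag x_ij) pairs,
    waiting:     list of q_i flags (1 if call i is still waiting),
    priorities:  list of call priorities aligned with `waiting`."""
    obj1 = sum(t * x for t, x in assignments)                # predicted travel time
    obj2 = sum(waiting)                                      # waiting calls
    obj3 = sum(p * q for p, q in zip(priorities, waiting))   # unserved priorities
    return obj1 + obj2 + obj3

# Example: one call served from a base roughly 12 km away at 60 km/h.
call = (math.radians(24.71), math.radians(46.67))
base = (math.radians(24.80), math.radians(46.60))
t = busy_time(base, call, speed_kmh=60.0, handling_time=1.0)
print(round(distance(base, call), 1), round(t, 2))
```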

5.3 Solution description

A solution S is a sequence of sub-solutions, where each sub-solution \(x_i\) represents a pair of points \((c_i,a_j)\), \(c_i\) belonging to C and \(a_j\) being an ambulance of Station j. A sub-solution (Call, Ambulance) is characterized by its geographical positions (Longitude, Latitude). AOA, DSLAOA, EHO and DSLEHO update the solution using arithmetic operators. Adapting these operators to the current solution consists in updating the latitude and longitude of the call and of the ambulance of each sub-solution.

5.4 DSLAOA and DSLEHO for Covid-19 EMS transportation

The Covid-19 EMS problem we are dealing with is dynamic, that is, the emergency calls arrive in real time at different periods of time. For this purpose, the system updates the data (incoming Covid-19 calls and ambulance occupation times) at each period of time. When calls arrive and ambulances are released, the developed algorithms calculate the best solution, consisting of an assignment of an ambulance to a call respecting the multi-objective fitness function defined in Eq. 25. Algorithm 7 highlights the dynamic aspect of the approach that addresses the DADECC-COVID19 problem.

Algorithm 7: Dynamic solving of DADECC-COVID19
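
Since Algorithm 7 is given as a figure, a high-level sketch of the dynamic loop it describes is shown below; `solve_with_dsl` stands for a call to DSLAOA or DSLEHO, and all helper names and the data flow are our own assumptions.

```python
def dispatch_day(periods, get_new_calls, release_ambulances, solve_with_dsl):
    """Dynamic DADECC-COVID19 loop: at every time period, refresh the data
    (incoming Covid-19 calls, freed ambulances) and re-optimize the assignment."""
    pending_calls, available = [], []
    for t in range(periods):
        pending_calls += get_new_calls(t)            # real-time incoming calls
        available += release_ambulances(t)           # vehicles that finished service
        if pending_calls and available:
            # One (call, ambulance) pair per sub-solution, chosen by DSLAOA/DSLEHO
            # so as to minimize the multi-objective fitness of Eq. (25).
            assignment = solve_with_dsl(pending_calls, available)
            for call, ambulance in assignment:
                pending_calls.remove(call)
                available.remove(ambulance)
    return pending_calls                             # calls still waiting at day end
```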

6 Experiments

QOPTICS is designed to help solve the NP-hard DADECC-COVID19 problem. An application has been developed to simulate the case of EMS transportation in KSA in the context of the Covid-19 crisis. QOPTICS is used in a first phase of the program to locate the zones that concentrate an important number of hospitals. The emergency calls have also been clustered in order to treat the issue within one local region, thereby reducing the complexity of problem solving. Once these regions are identified, DADECC-COVID19 has been addressed using the intelligent methods DSLAOA and DSLEHO presented previously.

6.1 Datasets construction

The objects to be clustered are the KSA hospitals, and the resulting outcome is a set of spatial clusters representing dense regions of hospitals. The dataset is built using the data available on the Open Data portal of Saudi Arabia (2021). The data underwent a preprocessing step in which redundant data, maternity centers and mental patient centers were removed, and the GPS position of each hospital according to Google Maps was added. This way, an instance of the dataset is composed of a hospital and its geographic coordinates. As we are dealing with geolocation data, Eq. 24 was used to calculate the distance between two hospitals. The dataset containing the distances was generated to be used as input for QOPTICS.
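
A short sketch of how such a distance matrix can be produced from (latitude, longitude) pairs is given below, using scikit-learn's haversine_distances; the toy coordinates and the output file name are assumptions for illustration only.

```python
import numpy as np
from sklearn.metrics.pairwise import haversine_distances

EARTH_RADIUS_KM = 6371.0

# Toy stand-in for the hospital dataset: (latitude, longitude) in degrees.
hospital_coords_deg = np.array([
    [24.71, 46.67],   # e.g. a hospital around Riyadh
    [21.54, 39.17],   # e.g. a hospital around Jeddah
    [26.43, 50.10],   # e.g. a hospital around Dammam
])

coords_rad = np.radians(hospital_coords_deg)
D = haversine_distances(coords_rad) * EARTH_RADIUS_KM   # n x n matrix, in km
np.savetxt("hospital_distances.csv", D, delimiter=",")  # input file for QOPTICS
print(np.round(D, 1))
```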

The second dataset provides emergency calls that we generated on the basis of the highest number of infected cases per day reported in Rami (2021). Figure 5a presents the evolution of the Covid-19 pandemic cases recorded until August 3, 2021. The peak happened on June 17, 2020, with 4919 new cases. The emergency calls were built from these cases while respecting their distribution across regions, as depicted in Fig. 5b. These calls are represented by their region, their geographical position and a call priority, as (Region, (Longitude, Latitude), Priority). The datasets we built can be found in Datasets for DADCC (2021).

Fig. 5: Dataset construction

6.2 Experimental results

The performed experiments consist of two parts. First, we present the clustering results to validate QOPTICS and then the outcomes of the execution of DSLAOA and DSLEHO applied to Covid-19 emergency transportation.

6.2.1 Clustering results

As all hospitals should be exploited for hosting the calling patients, outliers are not allowed. Therefore, MinPts is set to 1. To fix eps, the distance that determines the eps-neighbors, various tests were performed with eps values ranging from 5 to 40 km in steps of 5 km. The experimental outcomes are shown in Table 1.

Table 1 Parameters setting for clustering hospitals
Fig. 6: Hospitals clustering with \(MinPts=1\) hospital and \(eps =15\) km

Fig. 7: A mapping of a hospitals cluster between its geographic position and the reachability plot

The best setting is the one that minimizes both the number of clusters and eps, in order to make the call positions close to the hospitals. According to the results presented in Table 1, the most appropriate setting is \(eps = 15\) km, which allows the formation of 108 clusters. The dispersion of the 108 clusters on the KSA map is shown in Fig. 6a. Each cluster is represented by a distinct color. The reachability plot resulting from clustering with the best parameterization found is depicted in Fig. 6b, which shows the structure of the data in clusters with different densities. The correspondence between the representation of the 50th cluster (which includes 43 hospitals) on the map and on the reachability plot is illustrated in Fig. 7.

The second set of experiments concerns the clustering of the Covid-19 emergency calls. As for the hospitals clustering, we varied eps from 5 to 40 km in steps of 5 and tested the algorithm to tune this parameter. The results of the experiments are reported in Table 2. According to these outcomes, the most suitable clustering of the 4919 emergency calls is obtained with \(Minpts = 1\) and \(eps = 15\) km, which corresponds to 1237 clusters. Figure 8a and b show, respectively, the clusters on the geographic map and the reachability plot. The mapping between a cluster and the reachability plot is depicted in Fig. 9.

Table 2 Parameters setting for clustering emergency calls
Fig. 8: Calls clustering with \(MinPts=1\) and \(eps =15\) km

Fig. 9: A mapping of a calls cluster between its geographic position and the reachability plot

6.2.2 Results of deep self-learning AOA and EHO

The empirical parameters of AOA and EHO and the depth parameter of the Deep Self-learning have been first tuned. The values of these parameters for AOA are shown in Table 3 while those of EHO are presented in Table 4.

Table 3 Empirical parameters of deep self-learning AOA
Table 4 Empirical parameters of deep self-learning EHO

Recall that the Deep Self-Learning framework is based on two mutation operators: Cauchy and Gaussian. The algorithms that exploit the Cauchy operator are called DSLAOA-C and DSLEHO-C, while the algorithms that are based on the Gaussian operator are called DSLAOA-G and DSLEHO-G. To better describe the dispatching mechanism, we focused on the region with the largest cluster of calls. The following steps illustrate the whole process.

Step 1. The ambulance base formation.

Each cluster of hospitals is represented by a centroid to form a base. The resources of the hospitals in the same cluster are grouped together and shared among them. Thus, the center of gravity of each hospital cluster is used to represent an ambulance base. Figure 10 takes as an example the blue cluster of 37 hospitals, whose center of gravity is estimated at (Latitude: 20.44837719, Longitude: 40.879243798).

Fig. 10: Representation of each cluster of hospitals by an ambulance base

Fig. 11: Zoom in on the largest cluster of emergency calls

Fig. 12: Positions of the ambulance bases and the emergency calls of the prominent cluster

Step 2. Emergency call assignment to ambulance bases.

In order to decentralize and parallelize the dispatching system over the different KSA regions, it was necessary to distribute a certain number of bases per emergency call cluster. For this purpose, the bases and call clusters have been grouped together. As the dispatching at the level of the different clusters is independent and carried out in parallel, we have chosen to treat the largest cluster, shown in pink in Fig. 11. Figure 12 shows the result of regrouping the bases and emergency calls of the prominent cluster, which corresponds to 675 calls and 15 bases.

Step 3. DSLAOA/DSLEHO experimental results on the prominent cluster.

This step provides the results of the execution of DSLAOA/DSLEHO on the largest group (Calls, Ambulances). First, the experiments undertaken on the Depth parameter are exhibited, followed by the outcomes yielded by these algorithms. Finally, a comparative study with a state-of-the-art method is performed.

Step 3.1. Impact of the parameter Depth.

In this part, we carried out experiments on the impact of the Depth parameter. We fixed the number of ambulances to 5 per base and varied the Depth value in {50, 100, 200}.

Fig. 13: Evolution of DSLAOA’s overall fitness with different Depth values

Fig. 14: Evolution of DSLAOA’s execution time with different Depth values

The influence of the Depth parameter on DSLAOA-C and DSLAOA-G in terms of performance and execution time is illustrated in Figs. 13 and 14, respectively. Figure 14 shows that the greater the Depth value, the higher the execution time for all the DSLAOA algorithms. However, Fig. 13 reveals that \({ Depth} = 100\) yields the best results almost all the time, with very few instances where its performance equals that obtained with Depth equal to 200. Consequently, we found that the optimal Depth value is 100 in terms of both performance and execution time.

Fig. 15: Evolution of DSLEHO’s overall fitness with different Depth values

Fig. 16: Evolution of DSLEHO’s execution time with different Depth values

Figures 15 and 16 illustrate the impact of the Depth variation on DSLEHO-C and DSLEHO-G in terms of performance and execution time, respectively. Figure 16 shows that, as for the DSLAOA results, the execution time increases with the growth of the Depth value. From Fig. 15, it appears that \({ Depth} = 100\) and \({ Depth} = 200\) yield the best results, with a negligibly better fitness for \({ Depth} = 200\) until the 100th time unit. Then, from time unit 100 to 144, DSLEHO-C with \({ Depth} = 100\) and \({ Depth} = 200\) fluctuates in the same ranges, so both settings are equivalent. Thus, a Depth value of 200 does not improve EHO's performance enough to offset its growing time consumption. Therefore, setting Depth to 100 appears to offer the best performance-versus-time ratio.

According to these experiments concerning both DSLAOA and DSLEHO, we set Depth to the best value found (100) for the remainder of the experiments.

Step 3.2. Comparing DSLAOA/DSLEHO with their original models.

The original AOA and EHO as well as their Deep Self-Learning versions (DSLAOA-C, DSLAOA-G, DSLEHO-C, and DSLEHO-G) are compared on the basis of their execution time for the dispatching problem with 5 and 10 ambulances per base.

Fig. 17: Execution time of AOA and its improvement over 144 time units with 5 ambulances

Fig. 18: Execution time of AOA and its improvement over 144 time units with 10 ambulances

Fig. 19: Execution time of EHO and its improvement over 144 time units with 5 ambulances

Fig. 20: Execution time of EHO and its improvement over 144 time units with 10 ambulances

Figures 17, 18, 19, and 20 show the execution time of AOA and EHO and their respective deep self-learning enhanced versions, with both the Cauchy and Gaussian mutation operators, when dealing with 5 and 10 ambulances per base. It is clear that the deep approaches take much longer than the original versions of both AOA and EHO. These results are consistent with the theoretical study of computational complexity: the proposed deep self-learning approaches are at a disadvantage compared to the basic methods because they incur a higher computational time (a quantity of \(\frac{Depth}{\#orcas}\) for AOA and \(\frac{Depth}{\#E}\) for EHO). However, as long as this overhead remains polynomial, it can be easily circumvented either by switching to a GPU-based parallel implementation or by proposing a quantum approach.

Fig. 21: Evolution of overall fitness over 144 time units (24 h) with 5 ambulances per base

Fig. 22: Evolution of overall fitness over 144 time units (24 h) with 10 ambulances per base

Fig. 23: Performances of the different algorithms for “Travel time” with 5 ambulances per base

Step 3.3. Comparing the six algorithms with a state-of-the-art method.

The performances of the six algorithms (AOA, DSLAOA-C, DSLAOA-G, EHO, DSLEHO-C, DSLEHO-G) were compared with a classical state-of-the-art algorithm, the closest-first policy (Bandara et al. 2014). The executions were repeated 3 times on 5 different datasets. The overall fitness is calculated as the sum of the three sub-fitness values (Travel time, Unsatisfied calls and Priority of unsatisfied calls), each previously normalized to [0, 1] in order to balance the weight of each sub-fitness. The results of the minimization of the overall fitness over the 144 time units are presented in Figs. 21 and 22. It is clear that, in the long term, when resources are limited (5 ambulances per base), the “Closest-first” (Nearest) algorithm records the highest fitness scores, knowing that the latter should be minimized. When a large number of Covid-19 calls are received at the center level, the proposed Deep Self-Learning approaches perform better. These DSL-based approaches compete with each other: DSLAOA (Cauchy and Gaussian) achieves very good results up to time unit 79, after which DSLEHO (Cauchy and Gaussian) regains the upper hand.

Figure 21 shows, over the time units, that the Deep Self-Learning approaches with the two mutation operators (Cauchy and Gaussian) improve the results obtained by AOA and EHO. Also, we note peaks at certain time units, between the 40th and 60th and from the 118th unit, which correspond to periods of high demand (urgent calls). Looking closely at the performance of the different algorithms, we find that the two variants of Deep Self-Learning AOA outperform both the Closest-first approach and the EHO-based approaches, with a slight advantage for DSLAOA-G.

Figures 23 and 26 summarize the performances of the 7 algorithms with respect to “Travel time” over the 144 time units with 5 and 10 ambulances per base, respectively. From these figures, we notice that the three AOA-based algorithms and the three EHO-based algorithms follow the same trend, which is explained by the common strategy of updating their solutions. Also, when the number of ambulances is equal to 10, more than half of the “Travel time” scores recorded during the 144 time units by the Nearest algorithm (closest-first policy) vary over a considerably large interval, which denotes an inability to provide solutions in a more or less stable manner, unlike the other approaches.

Fig. 24: Performances of the different algorithms for “Unsatisfied calls” with 5 ambulances per base

Fig. 25: Performance of the different algorithms for “missed priorities” with 5 ambulances per base

Figures 24 and 27 represent the statistical distribution (boxplots) of “Unsatisfied calls” over the 144 time units. Clearly, with 10 ambulances, there are enough ambulances per hospital to cover all incoming calls. On the other hand, for 5 ambulances, we note that DSLAOA-C is the method that achieves the best scores in terms of the number of “Unsatisfied calls”, the smallest in the long term.

As 10 ambulances yield no “Unsatisfied call” with the different compared approaches, Fig. 28 likewise presents no “missed Priorities”. However, Fig. 25 shows that the Nearest algorithm does not prioritize the choice of the emergency calls and therefore obtains the worst results. Concerning the approaches based on AOA and EHO, it is clear that they take into account the impact that the priority of emergency calls can have. Nonetheless, we note that the ranges of variation of EHO, DSLEHO-C, and DSLEHO-G are lower than the others, which means better results for this sub-fitness.

Fig. 26: Performances of the different algorithms for “Travel time” with 10 ambulances per base

Fig. 27: Performances of the different algorithms for “Unsatisfied calls” with 10 ambulances per base

Fig. 28: Performances of the different algorithms for “missed Priorities” with 10 ambulances per base

7 Conclusion

In this paper, a quantum algorithm for Ordering Points To Identify the Clustering Structure is designed. Its computational complexity is calculated and shown to be reduced relative to that of its classical counterpart. Experiments have been performed for an emergency transportation application consisting in dispatching ambulances and covering urgent calls in the Kingdom of Saudi Arabia during the Covid-19 crisis. With the aim of gaining scalability for the problem-solving method, clustering of hospitals and of emergency calls has been performed in order to distribute the problem-solving execution over the different clusters. First, the parameters MinPts and eps were tuned by experiments for both clusterings. The obtained results helped to undertake the second part of the study very efficiently.

Afterwards, to cope with the hard issue of EMS transportation, we have proposed four novel algorithms, namely Deep Self-Learning Artificial Orcas Algorithm using the Cauchy operator (DSLAOA-C), Deep Self-Learning Artificial Orcas Algorithm using the Gaussian operator (DSLAOA-G), Deep Self-Learning EHO using the Cauchy operator (DSLEHO-C) and Deep Self-Learning EHO using the Gaussian operator (DSLEHO-G). A second set of experiments was carried out on the real-life EMS transportation problem. The empirical parameters were first fixed by extensive experiments prior to testing the performance of the proposed methods. The results proved the superiority of DSLAOA-C over both DSLAOA-G and the original AOA for the three objectives. The integration of the Gaussian mutation operator has not brought the expected improvement to the original AOA. Similar outcomes have been observed for DSLEHO-C, DSLEHO-G and the original EHO. When comparing DSLAOA-C and DSLEHO-C, we notice a better outcome for DSLAOA-C. Finally, the most significant result is that all the proposed algorithms outperform the state-of-the-art algorithm, namely the closest-first (Nearest) policy, in the illustrated curves.

For future work, we will integrate other quantum subroutines and operators into the proposed quantum clustering algorithm in order to further improve its overall efficiency. In this work, OPTICS was run using the Python programming language, as it computes the same results as QOPTICS. In order to appreciate the calculation-time gain in practice, we plan to code QOPTICS using Qiskit, the framework proper to quantum computing, which needs more investigation. Moreover, we will extend our research to develop other deep learning strategies to solve the Covid-19 EMS transportation problem. Another perspective would be to test the deep self-learning framework with larger values of the learning Depth parameter to improve the outcome performance. Also, quantum versions of DSLAOA and DSLEHO will be developed in order to reduce the calculation time. Another alternative would be to code the procedures using CUDA and execute them on a GPU.