1 Introduction

Quantum computing is a hot research topic in which academia, enterprises and government agencies are investing huge resources, due to its potential for solving problems that are intractable for classical computers (Nielsen and Chuang 2010). This advantage comes from the use of quantum mechanical principles, such as superposition and entanglement, which enable intrinsic and massive parallelism in computation. As demonstrated by some remarkable research (Biamonte et al. 2017; Tacchino et al. 2019; Acampora 2019; Pourabdollah et al. 2022), artificial and computational intelligence are among the research areas that could benefit most from this quantum revolution. In our vision, the field of evolutionary optimization is particularly well-suited to the quantum paradigm, because this kind of computation can support evolutionary algorithms in exploring multiple regions of a problem’s search space concurrently. This is the idea behind the hybrid algorithm known as HQGA (Acampora and Vitiello 2021), one of the first evolutionary computation approaches run on an actual quantum computer. HQGA is defined as a hybrid algorithm because it performs fitness function evaluations on classical computers, whereas it implements the whole genetic evolution on actual quantum computers. Throughout the evolutionary optimization process, HQGA represents the solutions of a problem as quantum chromosomes, each of which is a quantum state that embodies a superposition of classical individuals belonging to a genetic population. This quantum chromosome-based representation provides a potential computational advantage: a quantum chromosome composed of n qubits can embody a subset of the search space containing up to \(2^n\) classical individuals.

Unfortunately, the size of current quantum processors (around a few dozen qubits) does not allow HQGA to fully express its potential advantage. Indeed, the limited number of qubits equipping current quantum computers prevents HQGA from using enough quantum chromosomes to offer adequate degrees of exploration and exploitation in the genetic evolution, and hence from identifying good-quality near-optimal solutions. As a consequence, there is a strong need for innovative approaches to the design of quantum algorithms for evolutionary computation that can solve this issue, above all when dealing with continuous optimization problems, which are characterized by potentially infinite solution spaces.

The main goal of this paper is to address this critical challenge using granular computing which, as reported by Pedrycz (2001), can be used to break down a problem into a sequence of smaller, more manageable subtasks that reduce the overall (classical or quantum) computational effort. Over the years, granular computing has proven to be a good strategy for complex problem solving (Cheng et al. 2021) and for improving optimization and machine learning approaches (Pownuk and Kreinovich 2021; Song and Wang 2016; Wang et al. 2017). In our work, granular computing is used to induce a hierarchical navigation of the solution space of the problem to be solved, in order to identify nested granules of information that may contain good near-optimal solutions. This idea results in the design of a new algorithm named Hybrid and Granular Quantum Genetic Algorithm (HGQGA), which provides a good trade-off between exploration and exploitation: at the higher levels of the hierarchy, it uses the quantum processor to explore and identify the intervals that may contain the optimal solution, whereas at the lower levels it uses the quantum processor to refine the search around that solution. The suitability of the proposed algorithm has been evaluated in an experimental session, where it has been applied to well-known continuous optimization problems used in evolutionary computation. The experiments have been run using the family of quantum processors provided by the IBM Q Experience project. As shown by the results, HGQGA statistically enhances the performance of HQGA, laying the groundwork for making current small-sized quantum computers useful in solving real-world optimization problems.

The rest of the manuscript is organized as follows. Section 2 discusses the state-of-the-art approaches at the interplay between quantum evolutionary computation and granular computing. Section 3 provides the basic concepts of quantum computing needed to make the manuscript self-contained. The details of the proposed approach, HGQGA, are given in Sect. 4. Section 5 describes experiments and results, before concluding in Sect. 6.

2 Related works

The proposed approach aims at improving an existing evolutionary optimization algorithm, designed to be run on actual quantum computers, by means of granular computing. In the world of classical computation, some research efforts have been made to integrate evolutionary algorithms and granular computing, mainly in two different ways: (1) using evolutionary algorithms to optimize granular computing-based approaches; (2) using granular computing to improve the performance of evolutionary algorithms. An example belonging to the first category is reported in (Cimino et al. 2014), where a multilayer perceptron is used to model a particular type of information granules, namely interval-valued data, and trained using a genetic algorithm designed to fit data with different levels of granularity. Another example is reported in (Dong et al. 2018). In this work, a new feature selection algorithm based on granular information is presented to deal with redundant and irrelevant features in high-dimensional/low-sample and low-dimensional/high-sample data. This proposal uses a genetic algorithm to find the optimal hyper-parameters of the feature selection algorithm, such as the granular radius and the granularity \(\lambda\). Moreover, in (Melin and Sánchez 2019), an optimization procedure based on a hierarchical genetic algorithm is proposed to select the type of fuzzy logic, the granulation of each fuzzy variable and the fuzzy rules in order to design optimal fuzzy inference systems for combining the responses of modular neural networks. The optimization of granulation for fuzzy controllers is also proposed in (Lagunes et al. 2019). In this case, the optimization is carried out by the Firefly Algorithm, and the optimized fuzzy controllers are used in the context of autonomous mobile robots. As for the second category, an example is reported in (Gao-wei et al. 2011), where the data generated during the run of a Multi-Objective Evolutionary Algorithm (MOEA) are treated as an information system and granular computing is used to process it. Based on the dominance relationship in the information system, the proposed approach derives the dominance granule of the objective functions and adopts the granularity of the dominance granule as the criterion of individual superiority. The experiments carried out in this work show that the proposed granular computing-based method significantly improves the efficiency of MOEAs.

Analyzing the literature, we found no existing studies integrating granular computing and evolutionary algorithms in the context of quantum computation. This is surely also due to the fact that research on evolutionary algorithms runnable on quantum processors is still very limited. Indeed, most efforts in the literature have been devoted to the so-called quantum-inspired evolutionary approaches (Narayanan and Moore 1996; Ross 2019; Zhenxue et al. 2021; Dey et al. 2021), i.e., classical optimization methodologies that draw inspiration from quantum mechanics but remain founded on conventional concepts of digital computation and Boolean algebra. To the best of our knowledge, only one work (Acampora and Vitiello 2021) proposes a genetic algorithm, named HQGA, whose genetic evolution is runnable on a real quantum processor thanks to its capability of performing genetic operators by evolving vectors belonging to Hilbert spaces. In spite of the indisputable innovations introduced by HQGA in the field of evolutionary computation, the limited number of qubits characterizing current quantum processors does not yet allow an efficient execution of this kind of algorithm in terms of accuracy of the computed solution.

To bridge this gap, a new algorithm named HGQGA is proposed in this paper. It can be run on small quantum devices thanks to a granular computation scheme, which iteratively restricts the search space of a given problem to a subspace (information granule) that may contain a near-optimal solution of the problem being solved. As shown in the experimental results, the proposed approach outperforms HQGA in solving continuous optimization problems.

3 Basic concepts of quantum computing

This section introduces the main concepts related to quantum computing useful to understand the design of HGQGA.

Quantum computing is a fascinating new field at the intersection of computer science, mathematics and physics, which strives to harness some of the key aspects of quantum mechanics, such as superposition and entanglement, to broaden our computational horizons (Yanofsky and Mannucci 2008). This new computing paradigm uses the so-called qubit (short for quantum bit) to store and manage information. In detail, a qubit is a unit vector in a two-dimensional complex vector space (usually a Hilbert space) for which a particular basis has been fixed. Formally,

$$\begin{aligned} |\psi \rangle = \alpha |0 \rangle + \beta |1 \rangle \end{aligned}$$
(1)

where \(\alpha\) and \(\beta\) are complex numbers such that \(\vert \alpha \vert ^2 + \vert \beta \vert ^2 = 1\), and the Dirac notation \(|0 \rangle\) and \(|1 \rangle\) is a shorthand for the vectors encoding the two basis states of the two-dimensional vector space:

$$\begin{aligned} |0 \rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad |1 \rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \end{aligned}$$
(2)

Hence, the state of the qubit is the two-dimensional complex vector \(\begin{pmatrix} \alpha \\ \beta \end{pmatrix}\). The coefficients \(\alpha\) and \(\beta\) are known as the amplitudes of the \(|0 \rangle\) component and the \(|1 \rangle\) component, respectively.

Unlike the bit, i.e., the basic unit of information in classical computation, a qubit is not constrained to be wholly 0 or wholly 1 at a given instant, but can be in a superposition of both 0 and 1 simultaneously. For this reason, to gain information from a qubit, it is necessary to perform a so-called measurement. When a qubit is measured, the measurement changes its state to one of the basis states, resulting in only one of the two states \(|0 \rangle\) or \(|1 \rangle\). According to quantum physics, after measuring the qubit, it will be found in state \(|0 \rangle\) with probability \(\vert \alpha \vert ^2\) and in state \(|1 \rangle\) with probability \(\vert \beta \vert ^2\); hence the requirement in Eq. 1 that \(\vert \alpha \vert ^2 + \vert \beta \vert ^2\) be equal to 1.
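The qubit state of Eq. (1) and its measurement statistics can be checked numerically. The following Python/NumPy sketch is purely illustrative (it is not part of the original work) and uses an arbitrarily chosen pair of amplitudes:

```python
import numpy as np

# Basis states |0> and |1> as column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A generic qubit |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # arbitrary normalized amplitudes
psi = alpha * ket0 + beta * ket1

# Measurement probabilities are the squared moduli of the amplitudes.
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(p0, p1)                  # 0.5 and 0.5 for this choice of amplitudes
assert np.isclose(p0 + p1, 1.0)  # normalization constraint of Eq. (1)
```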

As useful as single qubits can be, they are much more powerful when grouped into a so-called quantum register. Indeed, just as a single qubit can be found in a superposition of the possible bit values it may assume, i.e., 0 and 1, so too an n-qubit quantum register can be found in a superposition of all the \(2^n\) possible bit strings 00...0, 00...1,..., 11...1 it may assume. Formally, an n-qubit quantum register is a quantum system comprising n individual qubits, where each qubit \(q_i\), with \(i \in \{0, \ldots , n-1\}\), is represented by a unit vector of a two-dimensional Hilbert space \({\mathcal {H}}_i\). The resulting quantum register is then represented by a unit vector of the \(2^n\)-dimensional Hilbert space:

$$\begin{aligned} {\mathcal {H}} = {\mathcal {H}}_{n-1} \otimes {\mathcal {H}}_{n-2} \otimes \ldots \otimes {\mathcal {H}}_{0} \end{aligned}$$

where the symbol \(\otimes\) denotes the tensor product of two vector spaces.
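The dimensionality of a register can be verified with the Kronecker product, which is the coordinate realization of the tensor product. A minimal illustrative sketch (not part of the original work):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A 3-qubit register |q2 q1 q0> = |1> (x) |0> (x) |1>, built with the
# Kronecker product; the resulting vector lives in a 2^3 = 8 dimensional space.
reg = np.kron(ket1, np.kron(ket0, ket1))
print(reg.shape)                    # (8,)
print(int(np.argmax(np.abs(reg))))  # 5, i.e., the basis state |101>
```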

Table 1 Some reversible quantum gates

Like classical computation, quantum computing uses logic gates, known as quantum gates, to change the state of qubits and transform input information into a desired output. For each quantum gate, there is a unitary operator U capable of formalizing its behavior (Acampora and Vitiello 2021). The unitary operator U acts on qubits as follows:

$$\begin{aligned} U|\psi \rangle = U[\alpha |0 \rangle + \beta |1 \rangle ] = \alpha U |0 \rangle + \beta U |1 \rangle \end{aligned}$$

An interesting consequence of the unitary nature of quantum transformations is that they are reversible, i.e., given an output, the corresponding input can be retrieved. The subset of quantum gates used in this paper is reported in Table 1. The first gate is the Hadamard gate (H), which is used to create quantum states in superposition. Its unitary operator is as follows:

$$\begin{aligned} H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \end{aligned}$$
(3)

For example, let us consider a qubit \(|\psi \rangle\) initialized in the state \(|0 \rangle\), i.e., \(|\psi \rangle = 1\cdot |0 \rangle + 0\cdot |1 \rangle\), where \(\alpha = 1\) and \(\beta = 0\) (this initial quantum state is the most effective for understanding the power of the Hadamard gate), and compute \(|\psi ' \rangle = H|\psi \rangle\) as follows:

$$\begin{aligned} |\psi ' \rangle&= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \left[ 1\cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} + 0\cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right] \\&= \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} \end{aligned}$$
(4)

After applying the quantum operator H, the qubit will be in a superposition state \(|\psi ' \rangle = \frac{1}{\sqrt{2}}\cdot |0 \rangle + \frac{1}{\sqrt{2}}\cdot |1 \rangle\), where \(\alpha = \frac{1}{\sqrt{2}}\) and \(\beta = \frac{1}{\sqrt{2}}\). Therefore, after measuring the qubit, the probability that it is in state \(|0 \rangle\) or \(|1 \rangle\) is the same, i.e., \(\vert \alpha \vert ^2 = \frac{1}{2}\) and \(\vert \beta \vert ^2 = \frac{1}{2}\).
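The Hadamard computation of Eq. (4) can be reproduced numerically; the sketch below is illustrative:

```python
import numpy as np

# Hadamard gate and the basis state |0>.
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)

# H|0> = (1/sqrt(2), 1/sqrt(2)): an equal superposition of |0> and |1>.
psi = H @ ket0
probs = np.abs(psi) ** 2
print(np.round(probs, 3))   # [0.5 0.5]
```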

The second quantum gate reported in Table 1 is known as Pauli-X. It acts on a single qubit and swaps the amplitudes, and hence the probabilities, of measuring 0 and 1 (for this reason, it is sometimes called bit-flip). The unitary matrix associated with this gate is as follows:

$$\begin{aligned} X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \end{aligned}$$

For instance, let us consider a qubit in the state \(|\psi \rangle = (0.866+0i)\cdot |0 \rangle + (0-0.5i)\cdot |1 \rangle\), where \(\alpha = 0.866+0i\), \(\beta = 0-0.5i\) and i is the imaginary unit, and compute \(|\psi ' \rangle = X|\psi \rangle\):

$$\begin{aligned} |\psi ' \rangle&= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \left[ (0.866+0i)\cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} + (0-0.5i)\cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right] \\&= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \cdot \begin{pmatrix} 0.866+0i \\ 0-0.5i \end{pmatrix} = \begin{pmatrix} 0-0.5i \\ 0.866+0i \end{pmatrix} \end{aligned}$$

Hence, the computed quantum state is \(|\psi ' \rangle =(0-0.5i) \cdot |0 \rangle + (0.866+0i)\cdot |1 \rangle\), where \(\alpha = 0-0.5i\) and \(\beta = 0.866+0i\). In other words, the probabilities of measuring the bits 0 and 1 are reversed from the quantum state \(|\psi \rangle\) to the quantum state \(|\psi ' \rangle\).

Several quantum gates can be used to change the state of a qubit. Among these, there are the rotation quantum gates \(R_x\), \(R_y\) and \(R_z\). In this paper, only the \(R_y\) gate is used. The unitary operator associated with this gate is

$$\begin{aligned} R_y(\theta ) = \begin{pmatrix} \cos \big (\frac{\theta }{2}\big ) & -\sin \big (\frac{\theta }{2}\big ) \\ \sin \big (\frac{\theta }{2}\big ) & \cos \big (\frac{\theta }{2}\big ) \end{pmatrix} \end{aligned}$$

The \(R_y\) rotation changes the amplitudes of the qubit and, as a consequence, the probabilities that it will collapse to 1 or 0 after the measurement. For instance, let us consider again the quantum state \(|\psi \rangle = (0.866+0i)\cdot |0 \rangle + (0-0.5i)\cdot |1 \rangle\) and \(\theta = \frac{\pi }{3}\), and compute \(|\psi ' \rangle = R_y(\frac{\pi }{3})|\psi \rangle\):

$$\begin{aligned} |\psi ' \rangle&= \begin{pmatrix} \cos \big (\frac{\pi }{6}\big ) & -\sin \big (\frac{\pi }{6}\big ) \\ \sin \big (\frac{\pi }{6}\big ) & \cos \big (\frac{\pi }{6}\big ) \end{pmatrix} \left[ (0.866+0i)\cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} + (0-0.5i)\cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right] \\&= \begin{pmatrix} \cos \big (\frac{\pi }{6}\big ) & -\sin \big (\frac{\pi }{6}\big ) \\ \sin \big (\frac{\pi }{6}\big ) & \cos \big (\frac{\pi }{6}\big ) \end{pmatrix} \cdot \begin{pmatrix} 0.866+0i \\ 0-0.5i \end{pmatrix} = \begin{pmatrix} 0.75+0.25i \\ 0.433-0.433i \end{pmatrix} \end{aligned}$$

Hence, in this example, the rotation gate \(R_y(\frac{\pi }{3})\) applied to the quantum state \(|\psi \rangle\) has changed the probability of measuring the classical bit 0 from 0.75 to 0.625 and the probability of measuring the classical bit 1 from 0.25 to 0.375.
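The \(R_y\) example above can be reproduced numerically; the following illustrative sketch checks both the resulting amplitudes and the measurement probabilities:

```python
import numpy as np

def ry(theta):
    """Ry rotation matrix, as defined in the text."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

# The state used in the text: alpha = 0.866, beta = -0.5i.
psi = np.array([0.866 + 0j, 0 - 0.5j])
psi2 = ry(np.pi / 3) @ psi

print(np.round(psi2, 3))                 # [0.75+0.25j  0.433-0.433j]
print(np.round(np.abs(psi2) ** 2, 3))    # [0.625 0.375]
```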

The last of the gates reported in Table 1 is the Controlled NOT (CNOT). It operates on two qubits, a control qubit and a target qubit. In detail, it works by applying the Pauli-X gate to the target qubit, in the case that the control qubit has the value 1. The unitary operator related to this gate is as follows:

$$\begin{aligned} \text {CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{pmatrix} \end{aligned}$$
(5)

The CNOT gate plays an interesting role when the control qubit is in a superposition state because, in this case, it enables quantum entanglement. In abstract terms, if two quantum systems \(Q_1\) and \(Q_2\) are entangled, the values of certain properties of system \(Q_1\) are correlated with the values that those properties will assume for system \(Q_2\). Bell states are the simplest form of quantum entanglement. As an example, let us consider two qubits \(q_0\) and \(q_1\), where \(q_0\) is initialized to the Hadamard superposition state and \(q_1\) is initialized to state \(|0 \rangle\):

$$\begin{aligned} q_0 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} \qquad q_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{aligned}$$
(6)

Then, let us suppose to have a relationship between \(q_0\) and \(q_1\) created by applying a CNOT gate, considering \(q_0\) as the control qubit and \(q_1\) as the target qubit, as shown in the following quantum circuit:

(7) [quantum circuit: CNOT with control \(q_0\) and target \(q_1\)]

The result is a superposition of \(|00 \rangle\) and \(|11 \rangle\). In detail, if \(q_0\) takes value \(|0 \rangle\), then no action occurs on \(q_1\), which remains in the state \(|0 \rangle\), leaving the two-qubit register in the total state \(|00 \rangle\). Vice versa, if \(q_0\) takes value \(|1 \rangle\), then a bit flip is applied to \(q_1\) and the two-qubit register moves to the state \(|11 \rangle\). In other words, the value of \(q_1\) is completely determined by the quantum measurement on \(q_0\). Quantum entanglement is a key ingredient in demonstrating the advantage of quantum computers over classical computers. Indeed, if a quantum system is not highly entangled, it can often be simulated efficiently on a classical computer (Acampora and Vitiello 2021).
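The Bell-state construction described above can be verified numerically. In the sketch below (illustrative only), the leftmost factor of the Kronecker product is taken as the control qubit, matching the CNOT matrix of Eq. (5):

```python
import numpy as np

H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start from |00>, put the control qubit q0 in superposition, then entangle.
ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1
bell = CNOT @ np.kron(H, I) @ ket00

probs = np.abs(bell) ** 2
print(np.round(probs, 3))   # [0.5 0. 0. 0.5]: only |00> and |11> are observed
```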

Currently, quantum computation is deployed by executing quantum circuits on so-called Noisy Intermediate-Scale Quantum (NISQ) devices, where “intermediate scale” refers to the limited number of qubits with which they are equipped (even if this number is larger than in the first generation of quantum devices), and “noisy” emphasizes that there is imperfect control over these qubits (Preskill 2018).

4 A hybrid and granular design of genetic algorithms for quantum computers

One of the first hybrid quantum evolutionary algorithms aimed at implementing a quantum version of evolutionary optimization has been presented by Acampora and Vitiello (2021), where completely new evolutionary concepts, such as quantum chromosomes, the entangled crossover, the \(R_y\) mutation, quantum selection and quantum elitism, have been introduced to demonstrate that a quantum computer can exhibit evolutionary optimization capabilities. In detail, this approach uses the concept of a quantum chromosome to embody a whole genetic population in a superposition. The entangled crossover is a quantum circuit used to perform a genetic crossover among quantum chromosomes; the superposed nature of quantum chromosomes allows a single application of the entangled crossover to act on a large collection of individual pairs and improve the computational performance of the genetic algorithm. The \(R_y\) mutation is the analogue of the mutation operator of classical genetic algorithms but, similar to the entangled crossover, its application to a single qubit affects a large part of the genetic population. The quantum selection makes the superposed genetic population coded by a quantum chromosome collapse into a single classical chromosome, whose quality with respect to the problem being solved is evaluated by a classical computer. Finally, quantum elitism is the equivalent of the elitism concept of classical genetic algorithms, and it is used to move the best solution from the current generation to the next evolutionary population expressed by quantum chromosomes.

However, in spite of the indisputable innovations introduced by the above method in the field of evolutionary computation, the limited number of qubits characterizing current quantum processors does not yet allow an efficient execution of this kind of algorithm in terms of accuracy of the computed solution. As a consequence, there is a pressing need for algorithmic strategies aimed at addressing the limitations of quantum hardware and improving the performance of current quantum evolutionary computation approaches. To bridge this gap, HGQGA has been designed by means of a granular computing approach that induces a hierarchical scheme, where a quantum computer iteratively restricts the search space of a given problem and identifies so-called information granules, i.e., sub-spaces of the problem search space that may contain the optimal solution of the problem to be solved (see Fig. 1).

Fig. 1
figure 1

Granular nature of HGQGA using a one-dimensional continuous minimization problem. The same approach can be used to solve problems with multi-dimensional search space

4.1 HGQGA: implementation

This section is devoted to presenting the above quantum evolutionary concepts and how to use them synergistically in a hybrid and granular evolutionary algorithm aimed at solving continuous optimization problems. For the sake of simplicity, the design of HGQGA will be described using a one-dimensional continuous minimization problem P, whose solution space is limited by the interval \([a_0, b_0]\). In this scenario, let us suppose to have a quantum computer equipped with N qubits to run HGQGA and solve the problem P using m quantum chromosomes, each coded by n qubits. It is important to note that, for HGQGA to work correctly, at least three quantum chromosomes are necessary to enable the quantum evolutionary process. Hence, the number of qubits equipping the quantum device must be at least three times n, namely \(N\ge 3n\).

As shown in Fig. 1, HGQGA iteratively computes a sequence of nested ranges (information granules) \([a_{i+1}, b_{i+1}] \subset [a_i, b_i]\) (initially, \(i=0\)) which may contain the optimal solution of the problem P, namely \(x^* \in [a_{i+1}, b_{i+1}]\). To achieve this goal, at the \((i+1)\)-th iteration, HGQGA divides the range \([a_{i}, b_{i}]\) into h sub-intervals \([a_{i}^0, b_{i}^0],\ldots , [a_{i}^{j}, b_{i}^j],\ldots , [a_{i}^{h-1}, b_{i}^{h-1}]\), where \(h = 2^n\), \(a_{i}^0 = a_{i}\) and \(b_{i}^{h-1} = b_{i}\). The set of potential solutions of the problem P is represented by the set of \(a_{i}^j\) values, with \(j = 0, \ldots , h-1\), which are embodied in a quantum chromosome using superposition. During the iteration, the m quantum chromosomes are measured to obtain classical chromosomes, so that HGQGA can evaluate their fitness values using a classical computer and identify the current best solution of the problem. Successively, the set of quantum chromosomes evolves by means of a quantum circuit implementing evolutionary operators and concepts, such as the entangled crossover and the \(R_y\) mutation. The cycle composed of the quantum measurement, the fitness function evaluation and the application of quantum evolutionary operators is repeated until a termination criterion, such as a maximum number of iterations, is reached. At the end of the iterations, the best solution \(a^*\) computed by the algorithm is used to identify a new search interval \([a_{i+1}, b_{i+1}] \subset [a_{i}, b_{i}]\) (where \(a_{i+1}=a^*\)) in which the algorithm will look for a new and more refined solution to the problem. HGQGA descends through the levels until a maximum number of levels k is reached; the output of the algorithm is the best solution found in the last interval \([a_{i+1}, b_{i+1}]\). It is worth noting that at each level the selected interval is divided into \(h=2^n\) sub-intervals, representing the number of candidate solutions at that level.
Therefore, at the k-th level, the search space of the problem is characterized by \(2^{kn}\) solutions in the initial interval \([a_0, b_0]\). The workflow of HGQGA is described in Fig. 2. Hereafter, more details about the main HGQGA steps are given.
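The granulation step can be sketched in code. The helper names below (`subintervals`, `decode`) are illustrative and assume that each measured bit string selects the left bound of one of the \(2^n\) equal sub-intervals, as described above:

```python
def subintervals(a, b, n_qubits):
    """Split [a, b] into h = 2**n_qubits equal sub-intervals (information granules)."""
    h = 2 ** n_qubits
    w = (b - a) / h
    return [(a + j * w, a + (j + 1) * w) for j in range(h)]

def decode(bits, a, b):
    """Map a measured bit string to the left bound a_i^j of its sub-interval."""
    j = int(bits, 2)
    w = (b - a) / 2 ** len(bits)
    return a + j * w

granules = subintervals(0.0, 1.0, 3)   # 8 granules of width 1/8
print(granules[0])                     # (0.0, 0.125)
print(decode('101', 0.0, 1.0))         # 0.625
```

At the next level, the selected granule becomes the new \([a_{i+1}, b_{i+1}]\) and is split again, so after k levels the initial interval is resolved into \(2^{kn}\) candidate left bounds.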

Fig. 2
figure 2

HGQGA: the flowchart

The first main step of HGQGA is the initialization of m quantum chromosomes. A quantum chromosome is a quantum state composed of n qubits, which embodies a set of potential solutions of a problem using quantum superposition. It is initialized by applying, to each qubit, a Hadamard gate followed by an \(R_y(\pm \delta )\) gate, where \(\delta\) is a so-called rotation parameter, usually chosen from the set \(\{\frac{\pi }{32}, \frac{\pi }{16}, \frac{\pi }{8}, \ldots \}\), and the sign, \(+\) or −, is set uniformly at random. Figure 3 shows an example of the initialization of a quantum chromosome composed of four qubits, whereas Fig. 4 shows the classical population corresponding to that quantum chromosome, together with the corresponding measurement probabilities. It is worth noting that the use of quantum superposition enables a strong parallelism in computation. Indeed, with a single quantum operation acting on a quantum state, it is possible to transform all the individuals of the classical population embodied in that state simultaneously. With respect to Fig. 4, quantum parallelism makes it possible to modify the probability distribution over the classical population in a single computational step.
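The initialization of a single qubit of a quantum chromosome (Hadamard followed by \(R_y(\pm \delta)\)) can be simulated as follows; the sketch is illustrative, with \(\delta = \pi/8\) and a fixed random seed chosen arbitrarily:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])
ket0 = np.array([1.0, 0.0])
rng = np.random.default_rng(0)   # arbitrary seed for reproducibility

# Initialize one 4-qubit quantum chromosome: H then Ry(+/-delta) on each qubit.
delta = np.pi / 8
for _ in range(4):
    sign = rng.choice([1, -1])          # random sign, as in the text
    q = ry(sign * delta) @ (H @ ket0)
    # Each qubit ends up slightly biased away from the uniform [0.5, 0.5].
    print(np.round(np.abs(q) ** 2, 3))
```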

Fig. 3
figure 3

Example of the initialization of a quantum chromosome composed of four qubits

Fig. 4
figure 4

Set of solutions embodied in a 4-qubit quantum chromosome with the corresponding measurement probabilities

Successively, a quantum measurement operator is used to collapse the collection of m initialized quantum chromosomes \(\{|q_0^0 q_1^0 \ldots q_{n-1}^0 \rangle , |q_0^1 q_1^1 \ldots q_{n-1}^1 \rangle , \ldots , |q_0^{m-1} q_1^{m-1} \ldots q_{n-1}^{m-1} \rangle \}\) into a collection of m classical chromosomes, which can be evaluated on a classical computer by means of a fitness function related to the problem P to be solved, as follows:

$$\begin{aligned} \begin{array}{rcccl} \text {Quantum state} & \rightarrow & \text {Classical state} & \rightarrow & \text {Fitness evaluation} \\ |q_0^0 q_1^0 \ldots q_{n-1}^0 \rangle & \rightarrow & a^0 & \rightarrow & f^0 = \text {fitness}(a^0) \\ |q_0^1 q_1^1 \ldots q_{n-1}^1 \rangle & \rightarrow & a^1 & \rightarrow & f^1 = \text {fitness}(a^1) \\ \ldots & & \ldots & & \ldots \\ |q_0^{m-1} q_1^{m-1} \ldots q_{n-1}^{m-1} \rangle & \rightarrow & a^{m-1} & \rightarrow & f^{m-1} = \text {fitness}(a^{m-1}) \\ \end{array} \end{aligned}$$

Let us suppose that \(a^* \in \{a^0, a^1, \ldots , a^{m-1}\}\) is the current best solution found by HGQGA. After a quantum measurement, it is necessary to reconstruct the quantum state that originated the solution \(a^*\), i.e., the best quantum chromosome. Let \(|q_0^* q_1^* \ldots q_{n-1}^* \rangle\) be the quantum state that originated \(a^*\); there are three possibilities for the reconstruction of the best quantum chromosome. The first and obvious choice, named quantum elitism, reconstructs the best quantum chromosome as a new quantum state \(|q'_0 q'_1\ldots q'_{n-1} \rangle\) by setting \(|q'_l \rangle = |q_l^* \rangle\), with \(l = 0, \ldots , n-1\). The second choice, named quantum elitism with reinforcement, reconstructs a new quantum state \(|q'_0 q'_1 \ldots q'_{n-1} \rangle\) so that the probability that \(|q'_l \rangle\), with \(l \in \{0, 1, \ldots , n-1\}\), will collapse to 1 is increased by a certain amount \(\rho\) if the l-th bit of \(a^*\) is equal to 1; analogously, the probability that \(|q'_l \rangle\) will collapse to 0 is increased by \(\rho\) if the l-th bit of \(a^*\) is equal to 0. The third and last choice, named deterministic elitism, reconstructs a new quantum state \(|q'_0 q'_1 \ldots q'_{n-1} \rangle\) so that \(|q'_l \rangle = |1 \rangle\) if the l-th bit of \(a^*\) is equal to 1 and, analogously, \(|q'_l \rangle = |0 \rangle\) if the l-th bit of \(a^*\) is equal to 0.
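Two of the three reconstruction strategies can be sketched numerically. The sketch below is an illustrative interpretation in which each qubit is kept as a real amplitude vector and the reinforcement is realized as an \(R_y(\pm \rho)\) rotation; the function names and this gate-level realization are assumptions, not the paper's exact circuits:

```python
import numpy as np

def deterministic_elitism(best_bits):
    """Reconstruct the best chromosome as basis states: '1' -> |1>, '0' -> |0>."""
    return [np.array([0.0, 1.0]) if b == '1' else np.array([1.0, 0.0])
            for b in best_bits]

def elitism_with_reinforcement(qubits, best_bits, rho):
    """Rotate each qubit so the probability of re-measuring the best bit grows.

    Illustrative sketch: an Ry(+rho) rotation nudges a real-amplitude qubit
    toward |1>, Ry(-rho) toward |0>, depending on the measured best bit.
    """
    def ry(theta):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])
    return [ry(rho if b == '1' else -rho) @ q
            for q, b in zip(qubits, best_bits)]

plus = np.array([1.0, 1.0]) / np.sqrt(2)              # equal superposition
new = elitism_with_reinforcement([plus, plus], '10', np.pi / 8)
print(np.round([np.abs(q[1]) ** 2 for q in new], 3))  # P(1) raised / lowered
```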

Fig. 5
figure 5

Example of entangled crossover and \(R_y\) mutation. The dashed line groups the four qubits of the best quantum chromosome representing the control qubits in the entangled crossover

Once the quantum state of the best solution \(a^*\) is correctly reconstructed, HGQGA “moves” the good features embodied in \(a^*\) toward the remaining \(m-1\) quantum chromosomes using the entangled crossover. This operator divides the qubits of the best quantum chromosome into \(m-1\) groups of consecutive qubits and entangles them to \(m-1\) randomly selected groups of qubits, each belonging to one of the remaining \(m-1\) quantum chromosomes (see Fig. 5). At the end of this crossover operation, some of the qubits belonging to the remaining \(m-1\) chromosomes will not be entangled to qubits of the best quantum chromosome. These qubits undergo, with a certain probability \(\mu\), a mutation operation implemented by means of an \(R_y\) rotation. The goal of this operator is to invert the probability that a given qubit will collapse to 0 or 1 after a quantum measurement. In particular, the mutation operator is applied by means of the quantum operator \(R_y(\theta _z)\), where \(\theta _z\) is an angle, properly computed starting from the initial quantum state to mutate, so as to invert the probabilities that the specific qubit \(|q_z \rangle\) will collapse to 0 or 1 after a quantum measurement. An example of entangled crossover and \(R_y\) mutation is shown in Fig. 5; here, the quantum state \(|q_4q_5q_6q_7 \rangle\) corresponds to the current best quantum chromosome. After the execution of the \(R_y\) mutation operator, the quantum chromosomes are made to collapse, through a quantum measurement operation, to a new set of classical chromosomes that are evaluated using the fitness function of the problem P on a classical computer. When a termination condition, such as a maximum number of iterations, is satisfied, the best solution identified by the algorithm is used to compute a new search interval in which the algorithm will look for a new and more refined solution to the problem. The algorithm ends after descending through k levels.
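For a qubit with real amplitudes, written as \(\cos(\phi/2)|0\rangle + \sin(\phi/2)|1\rangle\), the rotation \(R_y(\pi - 2\phi)\) swaps the two collapse probabilities. The sketch below illustrates this probability inversion under the real-amplitude assumption (the paper's states need not satisfy it):

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Qubit with real amplitudes: |q> = cos(phi/2)|0> + sin(phi/2)|1>.
phi = np.pi / 3
q = np.array([np.cos(phi / 2), np.sin(phi / 2)])
print(np.round(np.abs(q) ** 2, 3))        # [0.75 0.25]

# Ry(pi - 2*phi) maps the angle phi to pi - phi, swapping the probabilities.
q_mut = ry(np.pi - 2 * phi) @ q
print(np.round(np.abs(q_mut) ** 2, 3))    # [0.25 0.75]
```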

4.2 HGQGA: a case study

This section shows all the steps carried out by HGQGA to solve a continuous optimization problem. A well-known benchmark function, named Forrester (Forrester et al. 2008), is used. This is a one-dimensional multimodal function defined as follows:

$$\begin{aligned} f(x) = (6x - 2)^2 \cdot \sin (12x - 4) \end{aligned}$$
(8)

It is evaluated for \(x \in [0,1]\) (i.e., \(a_0=0\) and \(b_0=1\)) as reported in Fig. 6. The global optimum is \(x^* = 0.757249\) and the corresponding optimal fitness value is \(-6.020740\). The Forrester problem is solved by applying HGQGA with 3 quantum chromosomes, each one composed of 5 qubits. The values of the hyper-parameters are: quantum elitism with reinforcement, \(\delta =\frac{\pi }{8}\), \(\mu =0.15\), \(\rho = \frac{\pi }{8}\), number of levels \(k=3\) and maximum number of iterations \(\#iter=3\). The hyper-parameters are set arbitrarily for this case study. The first step of the algorithm is to run the quantum initialization circuit of the first level reported in Fig. 7a. The application of a quantum measurement operator collapses the three quantum chromosomes q0, q1 and q2 to three binary strings, ‘00000’, ‘00101’, ‘11000’, corresponding to three different intervals \([a^0_1, b^0_1]\), \([a^1_1, b^1_1]\), and \([a^2_1, b^2_1]\). The left bounds of these intervals, \(a^0_1\), \(a^1_1\), and \(a^2_1\), are used to compute the fitness function values on the classical side of HGQGA. According to the fitness values, the quantum chromosome q1 is identified as the current best solution (see Table 2). Then, in the first iteration of HGQGA, the qubits of the best solution q1 are suitably partitioned and entangled with the corresponding qubits belonging to quantum chromosomes q0 and q2; subsequently, an \(R_y\) mutation is applied to some of the non-entangled qubits in the circuit, as shown in Fig. 7b. After running this quantum circuit, a quantum measurement is carried out to obtain three classical chromosomes, namely ‘11100’, ‘00000’ and ‘00001’, to be evaluated. Figure 7c, d shows the circuits executed in the second and third iteration of the first level. The best range \([a_1, b_1]\) obtained after the third iteration is used to start the computation in the second level of HGQGA.
At this point, in the second and third levels, the initialization and three iterations are performed similarly to those of the first level (see Figs. 8 and 9, respectively). All the evolutions of HGQGA are reported in Table 2. As reported, the best solution obtained by HGQGA after performing all levels is 0.757376, with a fitness value of \(-6.020731\).
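The classical side of this case study can be sketched in a few lines of Python: the Forrester function itself and one plausible decoding of 5-bit strings into subintervals of \([a, b]\) (the `decode` mapping to \(2^n\) equal subintervals, evaluated at the left bound, is our assumption; the exact mapping used by HGQGA may differ):

```python
import math

def forrester(x: float) -> float:
    """Forrester et al. (2008) benchmark: f(x) = (6x-2)^2 * sin(12x-4)."""
    return (6.0 * x - 2.0) ** 2 * math.sin(12.0 * x - 4.0)

def decode(bits: str, a: float, b: float):
    """Map an n-bit string to one of 2^n equal subintervals of [a, b];
    the fitness is assumed to be evaluated at the left bound."""
    idx = int(bits, 2)
    width = (b - a) / 2 ** len(bits)
    return a + idx * width, a + (idx + 1) * width

# The three chromosomes measured after the level-1 initialization:
for bits in ("00000", "00101", "11000"):
    left, right = decode(bits, 0.0, 1.0)
    print(f"{bits} -> [{left:.5f}, {right:.5f}], f(left) = {forrester(left):.6f}")
```

Evaluating `forrester` at the reported optimum \(x^* = 0.757249\) recovers a value close to the reported \(-6.020740\).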

Fig. 6
figure 6

Forrester function and its minimum value

Table 2 HGQGA evolutions for the Forrester problem
Fig. 7
figure 7

Quantum circuits of the first level of HGQGA applied to the Forrester problem

Fig. 8
figure 8

Quantum circuits of the second level of HGQGA applied to the Forrester problem

Fig. 9
figure 9

Quantum circuits of the third level of HGQGA applied to the Forrester problem

5 Experiments and results

This section is devoted to showing the benefits of the proposed approach over state-of-the-art approaches. In detail, HGQGA is compared with HQGA in solving real-valued continuous benchmark functions typically used to assess the performance of evolutionary approaches. The hyper-parameters of HGQGA used in the experimentation have been set through a tuning procedure. The performance of the compared algorithms has been assessed by means of a consolidated quality measure, the average fitness value computed over a set of runs. Moreover, to investigate the significance of the obtained results, the non-parametric statistical test known as the Wilcoxon signed rank test (Wilcoxon 1992) has been applied. Finally, a discussion of the robustness of HGQGA with respect to the noise of current NISQ quantum devices is reported. Hereafter, more details about the benchmark functions, all the experimental settings including the tuning process, and the results are given.

5.1 Benchmark functions

The experimental study involves a set of continuous benchmark functions well known in the literature (Hussain et al. 2017). Due to the binary encoding used by our evolutionary approach, a discretization procedure has been implemented as reported in Acampora and Vitiello (2021). Table 3 shows the definitions of the used benchmark functions, the ranges of their variables (that is, the upper and lower bounds) and the optimal fitness values (taking the discretization into account). In detail, the functions \(f_1\) - \(f_{10}\) are one-dimensional continuous optimization problems, whereas \(f_{11}\) - \(f_{14}\) are multi-dimensional problems, treated here as bi-dimensional ones due to the limited number of qubits made available by current quantum hardware architectures. All functions are characterized by one or more global minima.
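As an illustration of such a discretization, the following sketch decodes a binary chromosome into one real value per variable (the even split of bits among variables and the left-bound subinterval mapping are our assumptions, not necessarily the exact procedure of Acampora and Vitiello (2021)):

```python
def discretize(bits: str, bounds):
    """Decode a binary chromosome into one real value per variable by
    splitting the bit string evenly among the variables and mapping each
    slice to the left bound of one of 2^n equal subintervals of its range.
    This mapping is an assumption made for illustration."""
    n = len(bits) // len(bounds)
    values = []
    for i, (a, b) in enumerate(bounds):
        idx = int(bits[i * n:(i + 1) * n], 2)
        values.append(a + idx * (b - a) / 2 ** n)
    return values
```

For a bi-dimensional function with both variables in \([-5, 5]\), a 10-bit chromosome (5 bits per variable, as in the experimental setup) decodes to a pair of real coordinates.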

Table 3 Properties of the benchmark functions

5.2 Experimental setup

The HGQGA algorithm has been implemented in Python, mainly exploiting the open-source quantum computing framework Qiskit developed by IBM. During the experiments, the HGQGA algorithm has been run on a real quantum computer made available by the IBM Quantum Experience platformFootnote 5, named IBM Q Guadalupe and equipped with 16 qubitsFootnote 6. The number of qubits of the used quantum processor has forced the use of a population of three quantum chromosomes (\(m=3\)) and a quantum register of five qubits for coding each quantum chromosome. It is important to note that a 16-qubit quantum processor is not enough to solve the bi-dimensional benchmark functions (\(f_{11}\)-\(f_{14}\)), which require 30 qubits (i.e., 10 qubits for each chromosome). Therefore, these functions have been solved using the IBM quantum simulator, known as the qasm simulator, which supports 32 qubits, executed on a classical computer equipped with an Intel i7 CPU and 16GB of RAM.

To select the best configuration of hyper-parameters of HGQGA for each benchmark function, a tuning process has been performed. The tuned hyper-parameters are: the \(\delta\) value used during the initialization procedure of a quantum chromosome; the \(\mu\) value representing the probability of applying the \(R_y\) mutation to the free qubits; the elitist selection representing the mechanism to “carry over” the best individual from one generation to the next; the \(\rho\) value used when the reinforcement elitism is selected; the number of levels k and the number of iterations \(\#iter\) for each level. In the tuning process, all the elitism strategies are considered. Moreover, the values \(\frac{\pi }{8}\) and \(\frac{\pi }{16}\) are investigated for \(\delta\) and \(\rho\), and the values 0.15 and 0.3 for \(\mu\). Finally, two combinations of the number of levels and the number of iterations per level are considered: (i) \(k=3\) and \(\#iter=7\) and (ii) \(k=4\) and \(\#iter=5\). These two combinations permit running the same number of fitness evaluations (i.e., 72) and, at the same time, investigating whether the performance of the algorithm is affected more by increasing the number of levels or the number of iterations. Obviously, increasing the number of iterations or the number of levels could strongly improve the performance, above all in the case of the multi-dimensional functions. By considering all the combinations of the hyper-parameters, 32 configurations have been obtained. Each configuration is denoted with a string composed of different parts separated by the symbol “_” in the following order: the letter D, P or R, indicating the deterministic, pure and with-reinforcement quantum elitism, respectively; the value of \(\delta\); the value of \(\mu\); the value of \(\rho\) in the case of the reinforcement elitism; the number of levels; and the number of iterations.
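The configuration space just described can be enumerated as follows (a sketch; the exact spelling of the configuration names is illustrative):

```python
from itertools import product

def hgqga_configurations():
    """Enumerate the 32 tuning configurations described above.
    Name format (illustrative): elitism_delta_mu[_rho]_k_#iter."""
    angles = ("pi/8", "pi/16")            # candidate values for delta and rho
    mus = ("0.15", "0.3")                 # candidate mutation probabilities
    schedules = (("3", "7"), ("4", "5"))  # (k, #iter); both yield 72 evaluations
    names = []
    for elitism, delta, mu, (k, it) in product("DPR", angles, mus, schedules):
        if elitism == "R":
            # reinforcement elitism additionally needs a rho value
            names.extend("_".join((elitism, delta, mu, rho, k, it))
                         for rho in angles)
        else:
            names.append("_".join((elitism, delta, mu, k, it)))
    return names
```

The count checks out against the text: 8 deterministic, 8 pure and 16 reinforcement configurations give 32 in total, and 32 configurations \(\times\) 15 runs \(\times\) 14 functions gives the 6720 runs of the tuning process.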
The tuning process consists of running each of the different configurations 15 times for each benchmark function. Hence, the tuning process implies performing 6720 runs of HGQGA. Since executing this process would have required several weeks on the real quantum computer made available via cloud by IBM, because of the long queue waits, all the runs of the tuning process are executed using the IBM qasm simulator. For each run, the obtained best fitness value is stored and used to evaluate the quality of the configurations. The fitness values obtained for each configuration on all benchmark functions are reported in Figs. 10, 11, 12 and 13 using the boxplot methodology. In detail, each box plot displays summary information related to the set of fitness values obtained by one configuration: the minimum fitness value (represented by the lowest point of the box), the maximum fitness value (represented by the highest point of the box), the first (Q1) and third (Q3) quartiles, the median fitness value (plotted as a red line) and the mean fitness value (plotted as a red point). Outliers are plotted as individual blue crosses. The configurations are evaluated in terms of the best mean fitness value. Therefore, the hyper-parameters selected for HGQGA for each benchmark function are highlighted in bold red in Figs. 10, 11, 12 and 13. To conclude, a Jupyter notebook is packaged and made available to allow the complete reproduction of the experimentsFootnote 7.

Fig. 10
figure 10

Parameter tuning study results for a \(f_1\), b \(f_2\), c \(f_3\) and d \(f_4\). Outliers are omitted for the sake of readability

Fig. 11
figure 11

Parameter tuning study results for a \(f_5\), b \(f_6\), c \(f_7\) and d \(f_8\). Outliers are omitted for the sake of readability

Fig. 12
figure 12

Parameter tuning study results for a \(f_9\) and b \(f_{10}\). Outliers are omitted for the sake of readability

Fig. 13
figure 13

Parameter tuning study results for a \(f_{11}\), b \(f_{12}\), c \(f_{13}\) and d \(f_{14}\). Outliers are omitted for the sake of readability

5.3 Comparison with the state of the art: HGQGA vs. HQGA

HGQGA is compared with the state-of-the-art approach named HQGA. HGQGA is run with the best configurations identified during the aforementioned tuning process for each benchmark function. HQGA is run with the same hyper-parameters (except for the hyper-parameter k, which is not present in HQGA). The use of the same hyper-parameters permits showing the benefits of introducing the levels in HGQGA. Both algorithms are run on IBM Q Guadalupe for solving the one-dimensional benchmark functions and on the IBM qasm simulator for the multi-dimensional ones. The comparison is performed in terms of fitness values and of average fitness values computed over several runs. For this comparison, the number of runs is set to 25. Figure 14 shows the results of the executed runs by means of the boxplot methodology for all one-dimensional benchmark functions. Figure 15 shows the same information for all bi-dimensional benchmark functions. As can be seen, the average fitness values (reported as red points) of HGQGA are always better (i.e., lower, since the functions are to be minimized) than those of HQGA, except for function \(f_7\), for which the performance is the same. Moreover, HGQGA provides more stable results, as highlighted by the length of the rectangular boxes, which is most often smaller than that of the boxes related to HQGA (except for functions \(f_7\), \(f_{11}\), \(f_{13}\) and \(f_{14}\)).

Fig. 14
figure 14

Comparison between HGQGA and HQGA in terms of fitness values obtained on the executed runs for all one-dimensional benchmark functions

Fig. 15
figure 15

Comparison between HGQGA and HQGA in terms of fitness values obtained on the executed runs for all bi-dimensional benchmark functions

To summarize, Table 4 shows the average fitness values for HGQGA and HQGA on all the considered benchmark functions and the relative improvement. The average relative improvement of HGQGA with respect to HQGA over all benchmark functions is about 14%. Moreover, to investigate the significance of the obtained results, the non-parametric statistical test known as the Wilcoxon signed rank test has been used. In general, this test aims to detect significant differences between two sample means, where the two samples represent the behavior of two algorithms. The underlying idea of this test is not just counting the wins of each compared algorithm but ranking the differences between the performances and developing the statistic over them (Conover and Iman 1981). In our statistical comparison, the samples related to the two compared algorithms, HGQGA and HQGA, are composed of the average fitness values obtained for the different benchmark functions (i.e., the values contained in Table 4). The p value resulting from the Wilcoxon test is 0.000122. Therefore, it is possible to state that HGQGA statistically outperforms HQGA at the 99% confidence level.
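For reference, the mechanics of the test can be sketched in pure Python (a simplified two-sided version with a normal approximation, for illustration only; in practice a library routine such as `scipy.stats.wilcoxon` would be used, and the paper's exact p value need not match this approximation):

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test, normal approximation.
    Simplified sketch: zero differences are dropped and tied absolute
    differences receive average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average 1-based ranks to blocks of tied values
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    # test statistic: the smaller of the positive and negative rank sums
    w = min(sum(r for d, r in zip(diffs, ranks) if d > 0),
            sum(r for d, r in zip(diffs, ranks) if d < 0))
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mu) / sigma
    return w, math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p value
```

When one algorithm's samples are consistently better, the rank sum of one sign collapses toward zero and the p value becomes small; when the differences are symmetric, the p value approaches 1.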

Table 4 Comparison between HGQGA and HQGA in terms of average fitness value and relative improvement (Impr.) in percentage

5.4 Robustness of HGQGA

As mentioned above, NISQ devices are equipped with noisy qubits that can lead to errors in quantum computation. As for publicly available quantum devices from IBM, the single-qubit instruction error rates are of the order of \(10^{-3}\), whereas for two-qubit instructions, such as CNOT, they are of the order of \(10^{-2}\) (Acampora et al. 2023). To investigate the robustness of HGQGA with respect to the noise characterizing real quantum devices, in this section a comparison is carried out between the performance of HGQGA executed without noise and that obtained when HGQGA is run on the quantum processor IBM Q Guadalupe. In fact, if the performance of HGQGA run on the real device is statistically equivalent to that of HGQGA executed without noise, it is possible to claim that HGQGA is robust to noise. The comparison involves the executions made for solving the one-dimensional benchmark functions, since only for these functions was it possible to carry out executions on both a simulator (i.e., the noiseless case) and the real device. The statistical comparison has been carried out using the Wilcoxon signed rank test applied to two populations of samples for each benchmark function, where the first population is composed of the results obtained from 15 executions in the noiseless case, and the second one is composed of the results obtained from 15 executions on the real device. Table 5 reports the p value obtained for each benchmark function. As can be seen, the p values for 8 out of 10 functions are larger than 0.01, a typical significance level. Therefore, for these functions, the Wilcoxon test does not reject the null hypothesis of equality between the two populations of samples. Hence, it can be concluded that HGQGA shows adequate robustness, as its performance with and without noise is equivalent on most of the considered benchmark functions.

Table 5 Results of the Wilcoxon signed rank test (in terms of p value) applied to investigate the robustness of HGQGA with respect to the noise of NISQ devices

6 Conclusions

The proposed research merges, for the very first time, granular computing, quantum computation and evolutionary computation to provide theoretical and practical benefits in the solution of continuous optimization problems. Indeed, from the theoretical point of view, HGQGA allows overcoming the limitations related to the low number of qubits available in a quantum computer and, as a consequence, defining a suitable number of quantum chromosomes to efficiently navigate the search space of a given problem. From a practical point of view, the granular computing approach used in this paper allows using current quantum computers and quantum evolutionary computation to solve real-world problems.

The main practical benefit of the proposed work is that HGQGA offers a significant improvement over HQGA. Indeed, HGQGA provides an average \(14\%\) improvement over HQGA in solving well-known continuous optimization problems. The significance of the obtained results is confirmed by the Wilcoxon test, which shows that HGQGA statistically outperforms HQGA at the 99% confidence level.

Although good results were obtained, the approach used in this research activity could be further improved. Indeed, currently, the exploration phase of HGQGA could identify incorrect solution ranges and completely alter the smooth running of the algorithm. As a consequence, future research activities are needed to solve this issue. In particular, three different lines of research will be conducted. In the first line of research, solution-space navigation techniques other than the hierarchical approach will be investigated to improve the accuracy of problem solving. In the second line of research, parallelization techniques for quantum evolutionary algorithms will be introduced to use multiple quantum processors simultaneously and enhance the ability of current algorithms to navigate search spaces. Finally, the third line of research will merge quantum/classical population-based optimization algorithms with quantum/classical local search strategies to improve the exploration and exploitation capabilities of current approaches.