Improving quantum genetic optimization through granular computing

Quantum computers promise to revolutionize the world of computing thanks to some features of quantum mechanics that can enable massive parallelism in computation. This benefit may be particularly relevant in the design of evolutionary algorithms, where the quantum paradigm could support the exploration of multiple regions of the search space in a concurrent way. Although some efforts in this research field are ongoing, the potential of quantum computing is not yet fully expressed due to the limited number of qubits of current quantum processors. This limitation is even more acute when one wants to deal with continuous optimization problems, where the search space is potentially infinite. The goal of this paper is to address this limitation by introducing a hybrid and granular approach to quantum algorithm design, specifically designed for genetic optimization. This approach is defined as hybrid, because it uses a digital computer to evaluate fitness functions, and a quantum processor to evolve the genetic population; moreover, it uses granular computing to hierarchically reduce the size of the search space of a problem, so that good near-optimal solutions can be identified even on small quantum computers. As shown in the experiments, where IBM Q family processors are used, the usage of a granular computation scheme statistically enhances the performance of the state-of-the-art evolutionary algorithm implemented on quantum computers, when it is run to optimize well-known benchmark continuous functions.


Introduction
Quantum computing is a hot research topic in which academia, enterprises and government agencies are investing huge resources due to its potential capabilities in solving problems that are intractable for classical computers (Nielsen and Chuang 2010). This advantage comes from the use of quantum mechanical principles, such as superposition and entanglement, which enable intrinsic and massive parallelism in computation. As demonstrated by some remarkable research (Biamonte et al. 2017; Tacchino et al. 2019; Acampora 2019; Pourabdollah et al. 2022), artificial and computational intelligence are among the research areas that could benefit most from this quantum revolution. In our vision, the field of evolutionary optimization is particularly well-suited to the quantum paradigm, because this kind of computation can support evolutionary algorithms in exploring multiple regions of a problem's search space concurrently. This is the idea behind the hybrid algorithm known as HQGA (Acampora and Vitiello 2021), one of the first evolutionary computation approaches run on an actual quantum computer. HQGA is defined as a hybrid algorithm because it performs fitness function evaluations on classical computers, whereas it implements the whole genetic evolution on actual quantum computers. Throughout the evolutionary optimization process, HQGA represents the solutions of a problem as quantum chromosomes, each of which is a quantum state that embodies a superposition of classical individuals belonging to a genetic population. This quantum chromosome-based representation provides a potential computational advantage: a quantum chromosome composed of n qubits can embody a subset of the search space containing up to 2^n classical individuals.
Unfortunately, the size of current quantum processors (around a few dozen qubits) does not allow HQGA to fully express its potential advantage. Indeed, the limited number of qubits equipping current quantum computers does not allow HQGA to use a sufficient number of quantum chromosomes to offer adequate degrees of exploration and exploitation in the genetic evolution, and to identify good-quality near-optimal solutions of the problem. As a consequence, there is a strong need for innovative approaches to the design of quantum algorithms for evolutionary computation that can solve the aforementioned issue, above all when dealing with continuous optimization problems, which are characterized by potentially infinite solution spaces.
The main goal of this paper is to address this critical challenge using granular computing which, as reported by Pedrycz (2001), can be used to break down a problem into a sequence of smaller, more manageable subtasks to reduce the overall (classical or quantum) computational effort. Over the years, granular computing has proven to be a good strategy for complex problem solving (Cheng et al. 2021) and for improving optimization and machine learning approaches (Pownuk and Kreinovich 2021; Song and Wang 2016; Wang et al. 2017). In our work, granular computing is used to induce a hierarchical navigation of the solution space of the problem to be solved, in order to identify nested granules of information that may contain good near-optimal solutions of the problem. This idea results in the design of a new algorithm named Hybrid and Granular Quantum Genetic Algorithm (HGQGA), which provides a good trade-off between exploration and exploitation: at the higher levels of the hierarchy, it uses the quantum processor to explore and identify the intervals that may contain the optimal solution, whereas at the lower levels it uses the quantum processor to refine the search around that solution. The suitability of the proposed algorithm has been evaluated in an experimental session, where it has been applied to well-known continuous optimization problems used in evolutionary computation. The experiments have been run using the family of quantum processors provided by the IBM Q Experience project. As shown by the results, HGQGA statistically enhances the performance of HQGA, laying the groundwork for making current small-sized quantum computers useful in solving real-world optimization problems.
The rest of the manuscript is organized as follows. Section 2 discusses the state-of-the-art approaches at the interplay between quantum evolutionary computation and granular computing. Section 3 provides details about the basic concepts of quantum computing to make the manuscript self-contained. The details about the proposed approach, HGQGA, are given in Sect. 4. Section 5 describes experiments and results, before concluding in Sect. 6.

Related works
The proposed approach aims at improving an existing evolutionary optimization algorithm, designed to be run on actual quantum computers, by means of granular computing. In the world of classical computation, some research efforts have been made to integrate evolutionary algorithms and granular computing, mainly in two different ways: (1) using evolutionary algorithms to optimize granular computing-based approaches; (2) using granular computing to improve the performance of evolutionary algorithms. An example belonging to the first category is reported in (Cimino et al. 2014), where a multilayer perceptron is used to model a particular type of information granules, namely interval-valued data, and is trained using a genetic algorithm designed to fit data with different levels of granularity. Another example is reported in (Dong et al. 2018). In this work, a new feature selection algorithm based on granular information is presented to deal with redundant and irrelevant features in high-dimensional/low-sample and low-dimensional/high-sample data. This proposal uses a genetic algorithm to find the optimal hyper-parameters of the feature selection algorithm, such as the granular radius and the granularity k. Moreover, in (Melin and Sánchez 2019), an optimization procedure based on a hierarchical genetic algorithm is proposed to select the type of fuzzy logic, the granulation of each fuzzy variable and the fuzzy rules, in order to design optimal fuzzy inference systems for combining the responses of modular neural networks. The optimization of granulation for fuzzy controllers is also proposed in (Lagunes et al. 2019). In this case, the optimization is carried out using the Firefly Algorithm, and the optimized fuzzy controllers are used in the context of autonomous mobile robots. As for the second category, an example is reported in (Gao-wei et al. 2011), where the data generated in the process of Multi-Objective Evolutionary Algorithms (MOEAs) are treated as an information system and granular computing is used to process it. Based on the dominance relationship in the information system, the proposed approach derives the dominance granule of the objective function and adopts the granularity of the dominance granule as the criterion of individual superiority. The experiments carried out in that work show that the proposed granular computing-based method significantly improves the efficiency of MOEAs.
Analyzing the literature, we find that there are no existing studies on integrating granular computing and evolutionary algorithms in the context of quantum computation. This is surely also due to the fact that research activities on evolutionary algorithms runnable on quantum processors are still very limited in number. Indeed, in the literature, several efforts have been devoted to developing so-called quantum-inspired evolutionary approaches (Narayanan and Moore 1996; Ross 2019; Zhenxue et al. 2021; Dey et al. 2021), i.e., classical optimization methodologies that draw inspiration from quantum mechanics but continue to be founded on conventional concepts from digital computation and Boolean algebra. To the best of our knowledge, only one work (Acampora and Vitiello 2021) proposes a genetic algorithm, named HQGA, whose genetic evolution is runnable on a real quantum processor thanks to its capability of performing genetic operators by evolving vectors belonging to Hilbert spaces. In spite of the indisputable innovations introduced by HQGA in the field of evolutionary computation, the limited number of qubits that characterizes current quantum processors does not yet allow an efficient execution of this kind of algorithm in terms of accuracy of the computed solution.
To bridge this gap, a new algorithm named HGQGA is proposed in this paper to be run on small quantum devices thanks to a granular computation scheme, which iteratively limits the search space of a given problem to a subspace (information granule) that may contain a near-optimal solution of the problem being solved. As shown in the experimental results, the proposed approach shows better performance than HQGA in solving continuous optimization problems.

Basic concepts of quantum computing
This section introduces the main concepts related to quantum computing useful to understand the design of HGQGA.
Quantum computing is a fascinating new field at the intersection of computer science, mathematics, and physics, which strives to harness some of the key aspects of quantum mechanics, such as superposition and entanglement, to broaden our computational horizons (Yanofsky and Mannucci 2008). This new computing paradigm uses the so-called qubit (short for quantum bit) to store and manage information. In detail, a qubit is a unit vector in a two-dimensional complex vector space (usually a Hilbert space) for which a particular basis has been fixed. Formally, the state of a qubit is written as

|ψ⟩ = a|0⟩ + b|1⟩,    (1)

where a and b are complex numbers such that |a|² + |b|² = 1, and the Dirac notation |0⟩ and |1⟩ is a shorthand for the vectors encoding the two basis states of the two-dimensional vector space:

|0⟩ = (1, 0)ᵀ,  |1⟩ = (0, 1)ᵀ.

Hence, the state of the qubit is the two-dimensional complex vector (a, b)ᵀ. The coefficients a and b are known as the amplitudes of the |0⟩ component and the |1⟩ component, respectively.
Unlike the bit, i.e., the basic unit of information in classical computation, a qubit is not constrained to be wholly 0 or wholly 1 at a given instant, but can be a superposition of both 0 and 1 simultaneously. For this reason, to gain information from a qubit, it is necessary to perform a so-called measurement. When a qubit is measured, the measurement changes its state to one of the basis states, yielding only one of the two states |0⟩ or |1⟩. According to quantum physics, after measuring the qubit, it will be found in state |0⟩ with probability |a|² and in state |1⟩ with probability |b|². Hence the requirement in Eq. 1 that |a|² + |b|² = 1.
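The amplitude constraint and the measurement probabilities above can be checked with a short snippet (a didactic sketch in plain Python, not part of HGQGA; the helper names are ours):

```python
# A qubit modelled as a pair of complex amplitudes (a, b).
def is_valid_qubit(a, b, tol=1e-9):
    """Check the normalization constraint |a|^2 + |b|^2 = 1 of Eq. 1."""
    return abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < tol

def measurement_probabilities(a, b):
    """Born rule: P(0) = |a|^2, P(1) = |b|^2."""
    return abs(a) ** 2, abs(b) ** 2

# The state |psi> = 0.866|0> - 0.5i|1> used in the gate examples below:
a, b = 0.866 + 0j, -0.5j
p0, p1 = measurement_probabilities(a, b)   # roughly (0.75, 0.25)
```

Note that `0.866` is a rounded value of 1/√2·√(3/2), so the normalization only holds approximately for this literal state.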
As useful as single qubits can be, they are much more powerful in groups, composing a so-called quantum register. Indeed, just as a single qubit can be found in a superposition of the possible bit values it may assume, i.e., 0 and 1, so too an n-qubit quantum register can be found in a superposition of all the 2^n possible bit strings 00...0, 00...1, ..., 11...1 it may assume. Formally, an n-qubit quantum register is a quantum system comprising n individual qubits, where each qubit q_i, with i ∈ {0, ..., n−1}, is represented by a unit vector of a two-dimensional Hilbert space H_i. Then, the resulting quantum register is represented by a unit vector of the 2^n-dimensional Hilbert space

H = H_0 ⊗ H_1 ⊗ ... ⊗ H_{n−1},

where the symbol ⊗ denotes the tensor product of two vector spaces.
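As an illustrative sketch (our code, not the paper's), the tensor-product composition of a register can be computed with a plain Kronecker product:

```python
def kron(u, v):
    """Tensor (Kronecker) product of two state vectors given as lists."""
    return [x * y for x in u for y in v]

def register_state(*qubits):
    """Compose single-qubit states into an n-qubit register of dimension 2^n."""
    state = [1.0]
    for q in qubits:
        state = kron(state, q)
    return state

# Three qubits all in |0> = (1, 0): the register is |000>, an 8-component vector.
reg = register_state([1, 0], [1, 0], [1, 0])
```

The dimension grows as 2^n, which is exactly why an n-qubit chromosome can embody up to 2^n classical individuals.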
Like classical computation, quantum computing uses logic gates, known as quantum gates, to change the state of qubits and transform input information into a desired output. For each quantum gate, there is a unitary operator U capable of formalizing its behavior (Acampora and Vitiello 2021). The unitary operator U acts on a qubit as follows:

|ψ'⟩ = U|ψ⟩.

An interesting consequence of the unitary nature of quantum transformations is that they are reversible, i.e., given an output, the corresponding input can be retrieved. The subset of quantum gates used in this paper is reported in Table 1. The first gate is known as the Hadamard gate (H). It is used to create quantum states in superposition. Its corresponding unitary operator is

H = (1/√2) [[1, 1], [1, −1]].

For example, let us consider a qubit |ψ⟩ initialized in the state |0⟩, i.e., |ψ⟩ = 1·|0⟩ + 0·|1⟩, where a = 1 and b = 0 (this initial quantum state is the most effective for understanding the power of the Hadamard gate), and compute |ψ'⟩ = H|ψ⟩. After applying the quantum operator H, the qubit will be in the superposition state (|0⟩ + |1⟩)/√2. Therefore, after measuring the qubit, the probability that it is in state |0⟩ or |1⟩ is the same, i.e., |a|² = 1/2 and |b|² = 1/2. The second quantum gate reported in Table 1 is known as the Pauli-X. Pauli-X is a gate acting on a single qubit that reverses the probabilities of measuring 0 and 1 (for this reason, it is sometimes called bit-flip). The unitary matrix associated with this gate is

X = [[0, 1], [1, 0]].

For instance, let us consider a qubit in the state |ψ⟩ = (0.866 + 0i)·|0⟩ + (0 − 0.5i)·|1⟩, where a = 0.866 + 0i, b = 0 − 0.5i and i is the imaginary unit, and compute |ψ'⟩ = X|ψ⟩. The computed quantum state is |ψ'⟩ = (0 − 0.5i)·|0⟩ + (0.866 + 0i)·|1⟩, where a = 0 − 0.5i and b = 0.866 + 0i. In other words, the probabilities of measuring the bits 0 and 1 are reversed between the quantum state |ψ⟩ and the quantum state |ψ'⟩.
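The two worked examples above can be reproduced numerically (a didactic sketch of ours; the gate matrices follow the definitions in the text):

```python
import math

def apply_gate(gate, state):
    """Multiply a 2x2 unitary (list of rows) by a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
X = [[0, 1],
     [1, 0]]

# H|0> yields the balanced superposition (|0> + |1>)/sqrt(2) ...
plus = apply_gate(H, [1, 0])
# ... and X swaps the amplitudes of |0> and |1> (the "bit-flip" behaviour).
flipped = apply_gate(X, [0.866 + 0j, -0.5j])
```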
Several quantum gates can be used to change the state of a qubit. Among these are the rotation gates R_x, R_y and R_z. In this paper, only the R_y gate is used. The unitary operator associated with this gate is

R_y(θ) = [[cos(θ/2), −sin(θ/2)], [sin(θ/2), cos(θ/2)]].

The R_y rotation mainly changes the amplitudes of the qubit and, as a consequence, the probabilities that it will collapse to 1 or 0 after the measurement. For instance, let us consider again the quantum state |ψ⟩ = (0.866 + 0i)·|0⟩ + (0 − 0.5i)·|1⟩ and θ = π/3, and compute |ψ'⟩ = R_y(π/3)|ψ⟩.
Hence, in this example, the rotation gate R_y(π/3) applied to the quantum state |ψ⟩ has changed the probability of measuring the classical bit 0 from 0.75 to 0.625 and the probability of measuring the classical bit 1 from 0.25 to 0.375.
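The R_y example can be verified numerically (a didactic sketch; the matrix follows the definition given above):

```python
import math

def ry(theta):
    """Matrix of the R_y(theta) rotation gate."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply_gate(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# Reproduce the text's example: R_y(pi/3) applied to 0.866|0> - 0.5i|1>.
state = [0.866 + 0j, -0.5j]
new_state = apply_gate(ry(math.pi / 3), state)
p0, p1 = abs(new_state[0]) ** 2, abs(new_state[1]) ** 2   # ~0.625 and ~0.375
```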
The last of the gates reported in Table 1 is the Controlled-NOT (CNOT). It operates on two qubits, a control qubit and a target qubit. In detail, it works by applying the Pauli-X gate to the target qubit when the control qubit has the value 1. The unitary operator related to this gate is

CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]].

The CNOT gate plays an interesting role when the control qubit is in a superposition state, because in this case it enables quantum entanglement. In an abstract way, if we have two quantum systems Q_1 and Q_2 in entanglement, the values of certain properties of system Q_1 are associated with the values that those properties will assume for system Q_2. Bell states are the simplest form of quantum entanglement. As an example, let us consider two qubits q_0 and q_1, where q_0 is initialized to the Hadamard superposition state (|0⟩ + |1⟩)/√2 and q_1 is initialized to the state |0⟩. Then, let us suppose that a relationship between q_0 and q_1 is created by applying a CNOT gate, with q_0 as the control qubit and q_1 as the target qubit. The result is the superposition (|00⟩ + |11⟩)/√2. In detail, if q_0 takes the value |0⟩, then no action occurs on q_1, which remains in the state |0⟩, leaving the two-qubit register in the total state |00⟩. Vice versa, if q_0 takes the value |1⟩, then a bit flip is applied to q_1 and the two-qubit register moves to the state |11⟩. In other words, the value of q_1 is completely connected to the quantum measurement on q_0. Quantum entanglement is a key ingredient in demonstrating an advantage of quantum computers over classical computers. Indeed, if a quantum system is not highly entangled, it can often be simulated efficiently on a classical computer (Acampora and Vitiello 2021).
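The Bell-state construction above can be simulated on a 4-component state vector (a didactic sketch; the amplitude ordering [|00⟩, |01⟩, |10⟩, |11⟩] is our convention):

```python
import math

def apply_h_on_q0(state):
    """Hadamard on the first qubit of a 2-qubit register."""
    r = 1 / math.sqrt(2)
    return [r * (state[0] + state[2]), r * (state[1] + state[3]),
            r * (state[0] - state[2]), r * (state[1] - state[3])]

def apply_cnot(state):
    """CNOT with q0 as control and q1 as target: swaps |10> and |11> amplitudes."""
    return [state[0], state[1], state[3], state[2]]

# Starting from |00>: H on q0, then CNOT, gives (|00> + |11>)/sqrt(2).
bell = apply_cnot(apply_h_on_q0([1, 0, 0, 0]))
```

After this circuit, measuring q0 fully determines q1, which is exactly the correlation exploited by the entangled crossover later in the paper.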
Currently, quantum computation can be deployed by executing quantum circuits on so-called Noisy Intermediate-Scale Quantum (NISQ) devices, where ''intermediate scale'' refers to the limited number of qubits with which they are equipped (even if this number is larger than in the first generation of quantum devices), and ''noisy'' emphasizes that there is imperfect control over these qubits (Preskill 2018).

A hybrid and granular design of genetic algorithms for quantum computers
A very first hybrid quantum evolutionary algorithm aimed at implementing a quantum version of evolutionary optimization has been presented by Acampora and Vitiello (2021), where completely new evolutionary concepts, such as quantum chromosomes, entangled crossover, R_y mutation, quantum selection and quantum elitism, have been introduced to demonstrate that a quantum computer can exhibit evolutionary optimization capabilities. In detail, this approach uses the concept of a quantum chromosome to embody a whole genetic population in a superposition. As for the entangled crossover, it is a quantum circuit used to perform a genetic crossover among quantum chromosomes; the superposed nature of quantum chromosomes allows a single application of the entangled crossover to act on a large collection of individual pairs, improving the computational performance of the genetic algorithm. The R_y mutation is the analogue of the mutation operator of classical genetic algorithms, but, similarly to the entangled crossover, the application of the R_y mutation to a single qubit affects a large part of the genetic population. The quantum selection allows the superposed genetic population coded by a quantum chromosome to collapse into a single classical chromosome, whose quality with respect to the problem being solved is evaluated by a classical computer. Finally, quantum elitism is the equivalent of the elitism concept of classical genetic algorithms, and it is used to move the best solution from the current generation to the next evolutionary population expressed by quantum chromosomes. However, in spite of the indisputable innovations introduced by the above method in the field of evolutionary computation, the limited number of qubits that characterizes current quantum processors does not yet allow an efficient execution of this kind of algorithm in terms of accuracy of the computed solution.
As a consequence, there is a strong need for algorithmic strategies aimed at addressing the limitations of quantum hardware and improving the performance of current quantum evolutionary computation approaches. To bridge this gap, HGQGA has been designed by means of a granular computing approach that induces a hierarchical scheme, where a quantum computer iteratively limits the search space of a given problem and identifies so-called information granules, i.e., sub-spaces of the problem search space that may contain the optimal solution of the problem to be solved (see Fig. 1).

HGQGA: implementation
This section is devoted to presenting the above quantum evolutionary concepts and how to use them synergistically in a hybrid and granular evolutionary algorithm aimed at solving continuous optimization problems. For the sake of simplicity, the design of HGQGA will be described using a one-dimensional continuous minimization problem P, whose solution space is limited by the interval [a_0, b_0]. In this scenario, let us suppose we have a quantum computer equipped with N qubits to run HGQGA and solve the problem P using m quantum chromosomes, where each quantum chromosome is coded by n qubits. It is important to note that, for HGQGA to work correctly, at least three quantum chromosomes are necessary to enable the quantum evolutionary process. Hence, the number of qubits equipping the quantum device must be at least three times the value of n, namely N ≥ 3n.
As shown in Fig. 1, HGQGA iteratively computes a sequence of nested ranges (information granules) that may contain the optimal solution of the problem P, namely x* ∈ [a_{i+1}, b_{i+1}]. To achieve this goal, at the (i+1)-th iteration, HGQGA divides the range [a_i, b_i] into h = 2^n sub-intervals, whose left bounds are denoted by a_i^j. The set of potential solutions of the problem P is represented by the set of a_i^j values, with j = 0, ..., h−1. These are embodied in a quantum chromosome using superposition. During the iteration, the m quantum chromosomes are measured, obtaining classical chromosomes, so that HGQGA evaluates their fitness values using a classical computer and identifies the current best solution of the problem. Subsequently, the set of quantum chromosomes evolves by means of a quantum circuit implementing evolutionary operators and concepts, such as the entangled crossover and the R_y mutation. The cycle composed of the quantum measurement, the fitness function evaluation and the application of the quantum evolutionary operators is repeated until a termination criterion, such as a maximum number of iterations, is reached. At the end of the iterations, the best solution a* computed by the algorithm is used to determine the new search interval in which the algorithm will look for a more refined solution to the problem. HGQGA goes down through the levels until a maximum number of levels k is reached. The solution of the algorithm is the best solution in the last interval [a_{i+1}, b_{i+1}]. It is worth noting that at each level the selected interval is divided into h = 2^n sub-intervals, representing the number of candidate solutions at that level. Therefore, at the k-th level, the search space of the problem is characterized by 2^{kn} solutions in the initial interval [a_0, b_0]. The workflow of HGQGA is described in Fig. 2. Hereafter, more details about the main HGQGA steps are given. The first main step of HGQGA is the initialization of the m quantum chromosomes.
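The interval subdivision described above can be sketched as follows (our illustrative code, assuming an n-bit measurement outcome selects one of the h = 2^n granules and that the left bound of each granule is used as the candidate solution):

```python
def subinterval(a, b, bits):
    """Return the sub-interval [a_j, b_j] of [a, b] selected by `bits`."""
    h = 2 ** len(bits)          # number of granules at this level
    j = int(bits, 2)            # index of the selected granule
    width = (b - a) / h
    return a + j * width, a + (j + 1) * width

# Level 1: 5 bits split [0, 1] into 32 granules; level 2 refines the winner.
lo, hi = subinterval(0.0, 1.0, "11000")   # granule [24/32, 25/32]
lo2, hi2 = subinterval(lo, hi, "00001")   # nested granule inside it
```

After k levels, the nested subdivision yields an effective resolution of 2^{kn} points in the initial interval, which is the source of HGQGA's precision on a small device.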
A quantum chromosome is a quantum state composed of n qubits, which embodies a set of potential solutions of a problem using quantum superposition. It is initialized by applying, to each qubit, a Hadamard gate followed by an R_y(±δ) gate, where δ is a so-called rotation parameter usually chosen from the set {π/32, π/16, π/8, ...}, and the sign, + or −, is set in a uniformly random way. Figure 3 shows an example of the initialization of a quantum chromosome composed of four qubits, whereas Fig. 4 shows the classical population corresponding to that quantum chromosome, together with the corresponding measurement probabilities. It is worth noting that the use of quantum superposition enables strong parallelism in computation. Indeed, thanks to a single quantum operation acting on a quantum state, it is possible to transform all the individuals of the classical population embodied in the quantum state simultaneously. With respect to Fig. 4, quantum parallelism makes it possible to modify the probability distribution defined over the classical population in a single computational step.
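The effect of the H + R_y(±δ) initialization on a single qubit's collapse probabilities can be sketched as follows (didactic code; starting state |0⟩, gate matrices as defined in Sect. 3):

```python
import math

def init_qubit_probs(delta, sign=+1):
    """Collapse probabilities after H then R_y(sign*delta), starting from |0>."""
    r = 1 / math.sqrt(2)                       # H|0> = (r, r)
    c = math.cos(sign * delta / 2)
    s = math.sin(sign * delta / 2)
    a = c * r - s * r                          # R_y applied to (r, r)
    b = s * r + c * r
    return a * a, b * b

# With delta = pi/8 and a + sign, the qubit is slightly biased toward 1.
p0, p1 = init_qubit_probs(math.pi / 8, sign=+1)
```

The random ± sign thus seeds each qubit with a small bias around the balanced 50/50 superposition, rather than leaving all chromosomes identical.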
Subsequently, a quantum measurement operator is used to collapse the collection of m initialized quantum chromosomes {|q_0^0 q_1^0 ... q_{n−1}^0⟩, |q_0^1 q_1^1 ... q_{n−1}^1⟩, ..., |q_0^{m−1} q_1^{m−1} ... q_{n−1}^{m−1}⟩} into a collection of m classical chromosomes, which can be evaluated on a classical computer by a fitness function related to the problem P to be solved. Let us suppose that a* ∈ {a_0, a_1, ..., a_{m−1}} is the current best solution found by HGQGA. After a quantum measurement, it is necessary to reconstruct the quantum state that originated the solution a*, i.e., the best quantum chromosome. Let |q*_0 q*_1 ... q*_{n−1}⟩ be the quantum state that originated a*; there are three possibilities for the reconstruction of the best quantum chromosome. The first and obvious choice, named quantum elitism, reconstructs the best quantum chromosome as a new quantum state |q'_0 q'_1 ... q'_{n−1}⟩ by setting |q'_l⟩ = |q*_l⟩, with l = 0, ..., n−1. The second choice, named quantum elitism with reinforcement, reconstructs a new quantum state |q'_0 q'_1 ... q'_{n−1}⟩ so that the probability that |q'_l⟩, with l ∈ {0, 1, ..., n−1}, will collapse to 1 is increased by a certain amount ρ if the l-th bit of a* is equal to 1; analogously, the probability that |q'_l⟩ will collapse to 0 is increased by the same amount ρ if the l-th bit of a* is equal to 0. The third and last choice, named deterministic elitism, reconstructs a new quantum state |q'_0 q'_1 ... q'_{n−1}⟩ so that |q'_l⟩ = |1⟩ if the l-th bit of a* is equal to 1, and |q'_l⟩ = |0⟩ if the l-th bit of a* is equal to 0.
Once the quantum state of the best solution a* is correctly reconstructed, HGQGA ''moves'' the good features embodied in a* toward the remaining m−1 quantum chromosomes using the entangled crossover. This operator divides the qubits of the best quantum chromosome into m−1 groups of consecutive qubits and entangles them with m−1 randomly selected groups of qubits, each belonging to one of the remaining m−1 quantum chromosomes (see Fig. 5). At the end of this crossover operation, some of the qubits belonging to the remaining m−1 chromosomes will not be entangled with qubits of the best quantum chromosome. These qubits undergo, with a certain probability μ, a mutation operation implemented by means of an R_y rotation. The goal of this operator is to invert the probabilities that a given qubit will collapse to 0 or 1 after a quantum measurement. In particular, the mutation operator is applied by means of the quantum operator R_y(θ_z), where θ_z is an angle value, properly computed starting from the initial quantum state to mutate, so as to invert the probabilities that the specific qubit |q_z⟩ will collapse to 0 or 1 after a quantum measurement. An example of entangled crossover and R_y mutation is shown in Fig. 5; here, the quantum state |q_4 q_5 q_6 q_7⟩ corresponds to the current best quantum chromosome. After the execution of the R_y mutation operator, the quantum chromosomes are made to collapse, through a quantum measurement operation, into a new set of classical chromosomes that will be evaluated using the fitness function of the problem P on a classical computer. When a termination condition is satisfied, such as reaching a maximum number of iterations, the best solution identified by the algorithm is used to calculate a new search interval in which the algorithm will look for a more refined solution to the problem. The algorithm ends after having gone down through k levels.
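For a qubit with real amplitudes (cos(t/2), sin(t/2)), the mutation angle θ_z that inverts the collapse probabilities can be derived explicitly: θ_z = π − 2t swaps P(0) and P(1). The closed form and helper names below are our illustration, not the paper's code:

```python
import math

def mutation_angle(a, b):
    """Angle theta_z such that R_y(theta_z) swaps P(0) and P(1), for real a, b."""
    t = 2 * math.atan2(b, a)       # recover the state angle t of (cos(t/2), sin(t/2))
    return math.pi - 2 * t

def apply_ry(theta, a, b):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * a - s * b, s * a + c * b

# A qubit biased toward 0: P(0) = cos^2(0.4) ~ 0.85, P(1) = sin^2(0.4) ~ 0.15.
a, b = math.cos(0.4), math.sin(0.4)
a2, b2 = apply_ry(mutation_angle(a, b), a, b)   # probabilities are now swapped
```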

HGQGA: a case study
This section shows all the steps carried out by HGQGA to solve a continuous optimization problem. A well-known benchmark function, named Forrester (Forrester et al. 2008), is used. This is a one-dimensional multimodal function defined as follows:

f(x) = (6x − 2)² · sin(12x − 4).

It is evaluated in x ∈ [0, 1] (i.e., a_0 = 0 and b_0 = 1), as reported in Fig. 6. The global optimum is x* = 0.757249 and the corresponding optimal fitness value is −6.020740. The Forrester problem is solved by applying HGQGA with 3 quantum chromosomes, each composed of 5 qubits. The values of the hyper-parameters are: quantum elitism with reinforcement, δ = π/8, μ = 0.15, ρ = π/8, number of levels k = 3 and maximum number of iterations #iter = 3. The hyper-parameters are set in an arbitrary way for this case study. The first step of the algorithm is to run the quantum initialization circuit of the first level, reported in Fig. 7a. The application of a quantum measurement operator collapses the three quantum chromosomes q0, q1 and q2 into three binary strings, '00000', '00101', '11000', corresponding to three different intervals [a_1^0, b_1^0], [a_1^1, b_1^1], and [a_1^2, b_1^2]. The left bounds of these ranges, a_1^0, a_1^1, and a_1^2, are used to compute the fitness function values on the classical side of HGQGA. According to the fitness values, the quantum chromosome q1 is identified as the current best solution (see Table 2). Then, in the first iteration of HGQGA, the qubits of the best solution q1 are suitably partitioned and entangled with the corresponding qubits belonging to the quantum chromosomes q0 and q2; subsequently, an R_y mutation is applied to some of the non-entangled qubits in the circuit, as shown in Fig. 7b. After running this quantum circuit, a quantum measurement is carried out to get three classical chromosomes, namely '11100', '00000', '00001', to be evaluated. The best solution obtained after the third iteration will be used to start the computation in the second level of HGQGA.
At this point, in the second and third levels, the initialization and three iterations are performed similarly to those of the first level (see Figs. 8 and 9, respectively). All the evolutions of HGQGA are reported in Table 2. As reported, the best solution obtained by HGQGA after performing all levels is 0.757376, characterized by a fitness value of −6.020731.
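The fitness values reported in this case study can be reproduced directly from the Forrester function (a didactic sketch):

```python
import math

def forrester(x):
    """One-dimensional Forrester benchmark: f(x) = (6x - 2)^2 * sin(12x - 4)."""
    return (6 * x - 2) ** 2 * math.sin(12 * x - 4)

# The global optimum reported in the text ...
f_opt = forrester(0.757249)      # ~ -6.020740
# ... and the solution returned by HGQGA after all levels.
f_hgqga = forrester(0.757376)    # ~ -6.020731
```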

Experiments and results
This section is devoted to showing the benefits of the proposed approach over state-of-the-art approaches. In detail, HGQGA is compared with HQGA in solving benchmark continuous functions typically used to assess the performance of evolutionary approaches. The hyper-parameters of HGQGA used in the experimentation have been set through a tuning procedure. The performance of the compared algorithms has been assessed by means of a consolidated quality measure, namely the average fitness value computed on a set of runs. Moreover, to investigate the significance of the obtained results, the non-parametric statistical test known as the Wilcoxon signed rank test (Wilcoxon 1992) has been applied. Finally, a discussion about the robustness of HGQGA with respect to the noise of current NISQ quantum devices is reported. Hereafter, more details about the benchmark functions, all the experimental settings including the tuning process, and the results are given.

Benchmark functions
The experimental study involves a set of continuous benchmark functions that are well known in the literature (Hussain et al. 2017).
Due to the binary encoding used by our evolutionary approach, a discretization procedure has been implemented, as reported in (Acampora and Vitiello 2021). Table 3 shows the definitions of the used benchmark functions, the ranges of their variables (that is, the upper and lower bounds) and the optimal fitness values (by considering the discretization). In detail, the functions f_1–f_10 are one-dimensional continuous optimization problems, whereas f_11–f_14 are multi-dimensional ones, considered as bi-dimensional due to the limitations related to the number of qubits made available by the current quantum hardware architecture. All functions are characterized by one or more global minima.

Experimental setup
The HGQGA algorithm has been implemented in Python by mainly exploiting the open-source quantum computing framework Qiskit™ developed by IBM. During the experiments, the HGQGA algorithm has been run on a real quantum computer made available by the IBM Quantum Experience platform 5 , named IBM Q Guadalupe and equipped with 16 qubits 6 . The number of qubits of the used quantum processor has forced the use of a population of three quantum chromosomes (m = 3) and a quantum register of five qubits for coding each quantum chromosome. It is important to note that a 16-qubit quantum processor is not enough to solve the bi-dimensional benchmark functions (f11–f14), which require 30 qubits (i.e., 10 qubits per chromosome). Therefore, these functions have been solved using the IBM quantum simulator, known as qasm simulator, which supports 32 qubits and was executed on a classical computer equipped with an Intel i7 CPU and 16 GB of RAM.
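The qubit arithmetic behind these choices can be made explicit with a small sanity check (a sketch; the helper name is ours):

```python
# Qubit budget: one quantum register per chromosome, all registers must
# fit on the device at once.
def qubit_budget(m_chrom: int, qubits_per_chrom: int) -> int:
    return m_chrom * qubits_per_chrom

qubit_budget(3, 5)    # 15 qubits: fits the 16-qubit IBM Q Guadalupe
qubit_budget(3, 10)   # 30 qubits: exceeds 16, hence the 32-qubit qasm simulator
2 ** 5                # each 5-qubit chromosome superposes up to 32 individuals
```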
To select the best configuration of hyper-parameters of HGQGA for each benchmark function, a tuning process has been performed. The tuned hyper-parameters are: the δ value used during the initialization procedure of a quantum chromosome; the λ value representing the probability of applying the Ry mutation to the free qubits; the elitist selection, representing the mechanism to ''carry over'' the best individual from one generation to the next; the θ value used when the reinforcement elitism is selected; the number of levels k and the number of iterations #iter for each level. In the tuning process, all the elitism strategies are considered. Moreover, the values π/8 and π/16 are investigated for δ and θ, and the values 0.15 and 0.3 for λ. Finally, two combinations for the number of levels and the number of iterations per level are considered: (i) k = 3 and #iter = 7, and (ii) k = 4 and #iter = 5. These two combinations allow running the same number of fitness evaluations (i.e., 72) and, at the same time, investigating whether the performance of the algorithm is affected more by increasing the number of levels or the number of iterations. Obviously, increasing the number of iterations or the number of levels could strongly improve the performance, above all in the case of the multi-dimensional functions. By considering all the combinations of the hyper-parameters, 32 configurations have been obtained. Each configuration will be denoted by a string composed of different parts separated by the symbol ''_'' in the following order: the letter D, P or R indicating the deterministic, pure and with-reinforcement quantum elitism, respectively; the value of δ; the value of λ; the value of θ in the case of the reinforcement elitism; the number of levels; and the number of iterations. The tuning process consists of running each of the different configurations 15 times for each benchmark function. Hence, the tuning process involves performing 6720 runs of HGQGA.
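For illustration, the configuration count can be reproduced by enumerating the grid (a sketch under our reading of the text: the two angle values are assumed to be π/8 and π/16, and all variable names are ours):

```python
from itertools import product
from math import pi

deltas = [pi / 8, pi / 16]        # initialization angle values
mut_probs = [0.15, 0.3]           # Ry mutation probabilities
reinf_angles = [pi / 8, pi / 16]  # used only with reinforcement elitism
schedules = [(3, 7), (4, 5)]      # (number of levels k, iterations per level)

configs = []
for elitism in ("D", "P", "R"):   # deterministic, pure, with reinforcement
    angle_opts = reinf_angles if elitism == "R" else [None]
    for d, m, q, (k, it) in product(deltas, mut_probs, angle_opts, schedules):
        configs.append((elitism, d, m, q, k, it))

len(configs)            # 32 configurations
len(configs) * 15 * 14  # 6720 tuning runs (15 repeats x 14 functions)
```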
Since executing this process on the real quantum computer made available via cloud by IBM would have required several weeks because of the long waits in queue, all the runs of the tuning process are executed using the IBM qasm simulator. For each run, the obtained best fitness value is stored and used to evaluate the quality of the configurations. The fitness values obtained by each configuration for all benchmark functions are reported in Figs. 10, 11, 12 and 13 using the boxplot methodology. In detail, each box plot displays summary information related to the set of fitness values obtained by one configuration: the minimum fitness value (represented by the lowest point of the box), the maximum fitness value (represented by the highest point of the box), the first (Q1) and third (Q3) quartiles, the median fitness value (plotted as a red line) and the mean fitness value (plotted as a red point). Outliers are plotted as individual blue crosses. The configurations are evaluated in terms of the best mean fitness value. Therefore, the hyper-parameters selected for HGQGA for each benchmark function are those highlighted in bold red in Figs. 10, 11, 12 and 13. To conclude, a Jupyter notebook is packaged and made available to allow the complete reproduction of the experiments 7 .

Comparison with the state of the art: HGQGA vs. HQGA
HGQGA is compared with the state-of-the-art approach named HQGA. HGQGA is run by considering the best configuration identified during the aforementioned tuning process for each benchmark function. HQGA is run with the same hyper-parameters (except for the hyper-parameter k, which is not present in HQGA). The use of the same hyper-parameters permits showing the benefits of the introduction of the levels in HGQGA. Both algorithms are run on IBM Q Guadalupe for solving the one-dimensional benchmark functions and on the IBM qasm simulator for the multi-dimensional ones. The comparison is performed in terms of average fitness values computed over several runs; the number of runs is set to 25. Figure 14 shows the results of the executed runs by means of the boxplot methodology for all one-dimensional benchmark functions. Figure 15 shows the same information for all bi-dimensional benchmark functions. As it is possible to see, the average fitness values (reported as red points) of HGQGA are always better (i.e., lower, these being functions to be minimized) than those of HQGA, except for function f7, for which the performance is the same. Moreover, HGQGA provides more stable results, as highlighted by the length of the rectangular boxes, which is most often smaller than that of the boxes related to HQGA (except for functions f7, f11, f13 and f14).
To summarize, Table 4 shows the average fitness values of HGQGA and HQGA for all the considered benchmark functions, together with the relative improvement. The average relative improvement of HGQGA with respect to HQGA over all benchmark functions is about 14%. Moreover, to investigate the significance of the obtained results, the non-parametric statistical test known as the Wilcoxon signed-rank test has been used. In general, this test aims to detect significant differences between two sample means, where the two samples represent the behavior of two algorithms. The underlying idea of this test is not just counting the wins of each compared algorithm, but ranking the differences between the performances and developing the statistic over them (Conover and Iman 1981). In our statistical comparison, the samples related to the two compared algorithms, HGQGA and HQGA, are composed of the average fitness values obtained for the different benchmark functions (i.e., the values contained in Table 4). The p-value resulting from Wilcoxon's test is 0.000122. Therefore, it is possible to state that HGQGA statistically outperforms HQGA at the 99% confidence level.
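The mechanics of the applied test can be sketched in pure Python (normal approximation, no zero/tie corrections; a production analysis would use scipy.stats.wilcoxon, which also provides the exact distribution for small samples):

```python
from math import erf, sqrt

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank p-value via the normal approximation."""
    diffs = [a - b for a, b in zip(x, y) if a != b]   # discard zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1              # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w = min(w_plus, n * (n + 1) / 2 - w_plus)
    mean = n * (n + 1) / 4
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mean) / sd
    return 1 + erf(z / sqrt(2))            # 2 * Phi(z), two-sided p-value
```

When one algorithm wins on every function, as HGQGA nearly does here, the statistic W collapses to 0 and the p-value falls well below 0.01.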

Robustness of HGQGA
As aforementioned, NISQ devices are equipped with noisy qubits that can lead to errors in quantum computation. As for the publicly available quantum devices from IBM, the single-qubit instruction error rates are of the order of 10⁻³, whereas for two-qubit instructions, such as CNOT, they are of the order of 10⁻² (Acampora et al. 2023). To investigate the robustness of HGQGA with respect to the noise characterizing real quantum devices, in this section a comparison is carried out between the performance of HGQGA executed without noise and that obtained when HGQGA is run on the quantum processor IBM Q Guadalupe. In fact, if the performance of HGQGA run on the real device is statistically equivalent to that of HGQGA executed without noise, it is possible to state that HGQGA is robust with respect to the noise. The comparison involves the executions made for solving the one-dimensional benchmark functions, as only for these functions was it possible to carry out executions on both a simulator (i.e., the noiseless case) and the real device. The statistical comparison has been carried out using the Wilcoxon signed-rank test, applied to two populations of samples for each benchmark function: the first population is composed of the results obtained from 15 executions in the noiseless case, whereas the second one is composed of the results obtained from 15 executions on the real device. Table 5 reports the p-value obtained for each benchmark function. As it is possible to see, the p-values for 8 out of 10 functions are larger than 0.01, a typical significance level. Therefore, for these functions, the Wilcoxon test does not reject the null hypothesis stating the equality of the two populations of samples.
Hence, it can be concluded that HGQGA shows adequate robustness, as its performance with and without noise is equivalent for most of the considered benchmark functions.
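A rough first-order fidelity estimate under the quoted error rates helps explain why the shallow circuits used by HGQGA remain usable on NISQ hardware (a sketch assuming independent gate errors, which is our simplification):

```python
# Probability that a circuit runs without any gate error, treating each
# gate error as independent: (1 - e1)^n_1q * (1 - e2)^n_2q.
def est_success(n_1q: int, n_2q: int, e1: float = 1e-3, e2: float = 1e-2) -> float:
    return (1 - e1) ** n_1q * (1 - e2) ** n_2q

# e.g. a circuit with 30 single-qubit gates and 10 CNOTs keeps roughly
# 88% of its runs error-free; deeper circuits decay quickly.
est_success(30, 10)
```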

Conclusions
The proposed research merges together, for the very first time, granular computing, quantum computing and evolutionary computation to provide theoretical and practical benefits in the solution of continuous optimization problems. Indeed, from the theoretical point of view, HGQGA allows overcoming the limitations related to the low number of qubits present in a quantum computer and, as a consequence, defining a suitable number of quantum chromosomes to efficiently navigate the search space of a given problem. From a practical point of view, the granular computing approach used in this paper allows using current quantum computers and quantum evolutionary computation to solve real-world problems.
The main practical benefit of the proposed work is that HGQGA offers a significant improvement over HQGA. Indeed, HGQGA provides an average 14% improvement over HQGA in solving well-known continuous optimization problems. The significance of the obtained results is confirmed by Wilcoxon's test, which states that HGQGA statistically outperforms HQGA at the 99% confidence level.
Although good results were obtained, the approach used in this research activity could be further improved. Indeed, currently, the exploration phase of HGQGA could identify incorrect solution ranges and completely compromise the smooth running of the algorithm. As a consequence, future research activities are needed to solve this issue. In particular, three different lines of research will be conducted. In the first line of research, solution-space navigation techniques other than the hierarchical approach will be investigated to improve the accuracy of problem solving. In the second line of research, parallelization techniques for quantum evolutionary algorithms will be introduced to use multiple quantum processors simultaneously and enhance the ability of current algorithms to navigate search spaces. Finally, the third line of research will merge quantum/classical population-based optimization algorithms with quantum/classical local search strategies to improve the exploration and exploitation capabilities of current approaches.
Acknowledgements The proposed quantum approach for evolutionary computation has been implemented on the quantum processor IBM Q Guadalupe, whose access has been provided in the context of the IBM Quantum Researchers Program Access Award (Agreement Number: W2177387).
Funding Open access funding provided by Università degli Studi di Napoli Federico II within the CRUI-CARE Agreement.

Declarations
Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.