Solving The Vehicle Routing Problem via Quantum Support Vector Machines

The Vehicle Routing Problem (VRP) is a combinatorial optimization problem that has attracted sustained academic attention due to its many practical applications. VRP aims to arrange vehicle deliveries to several sites in the most efficient and economical manner possible. Quantum machine learning offers a new way to obtain solutions by harnessing the natural speedups of quantum effects, although many solutions and methodologies are modified using classical tools to provide excellent approximations of the VRP. In this paper, we implement and test hybrid quantum machine learning methods for solving VRP in 3- and 4-city scenarios, which use 6- and 12-qubit circuits, respectively. The proposed method is based on quantum support vector machines (QSVMs) with a variational quantum eigensolver on a fixed or variable ansatz. Different encoding strategies are used in the experiment to transform the VRP formulation into a QSVM and solve it. Multiple optimizers from the IBM Qiskit framework are also evaluated and compared.


I. INTRODUCTION

A. Quantum Computing
Quantum computing has provided novel approaches for solving computationally complex problems over the last decade by leveraging the inherent speedups of quantum calculations over classical computing. Quantum superposition and entanglement are two key resources that give calculations in the quantum domain a massive speedup over their classical counterparts [1], [2], [3]. Because of this, addressing optimization problems with quantum computing is an appealing prospect. Multiple approaches, such as Grover's algorithm [4], adiabatic computation (AC) [5], and the quantum approximate optimization algorithm (QAOA) [6], have been proposed to exploit quantum effects and have served as the basis for solving mathematically complex problems on quantum computers. The performance of classical algorithms has generally been found to be subpar when applied to larger-dimensional problem spaces [7]. On a multidimensional problem, classical machine learning optimization techniques frequently require significant CPU and GPU resources and take a long time to compute, because such ML techniques are being used to tackle NP-hard optimization problems [8].

B. Vehicle Routing Problem
The vehicle routing problem is an intriguing optimization problem because of its many uses in routing and fleet management [9], but its computational complexity is NP-hard [10], [11]. The objective is always to move vehicles as quickly and cheaply as feasible. VRP has inspired a plethora of exact and heuristic approaches [9], [12], all of which struggle to provide fast and trustworthy solutions. The bare-bones version of the VRP comprises sending a single vehicle to deliver items to many client locations before returning to the depot to restock [13]. By optimizing over a collection of available routes that all begin and terminate at a single node called the depot, VRP seeks to maximize the reward, which is often the inverse of the total distance traveled or of the average service time. It is computationally difficult to find an optimum solution to this problem, even with just a few hundred customer nodes.
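To make the combinatorial blow-up concrete, the following sketch enumerates every solution of a tiny VRP instance by brute force. The weight matrix `w`, the helper names, and the depot-at-index-0 convention are illustrative assumptions, not part of the paper's formulation; the point is only that exhaustive search is feasible for 3 cities and hopeless beyond a handful.

```python
from itertools import permutations, combinations

def vrp_brute_force(w, k):
    """Brute-force VRP(n, k): split customers 1..n-1 into k ordered routes
    (each starting and ending at depot 0) and return the minimum total
    distance.  w is an n x n symmetric weight matrix."""
    n = len(w)
    customers = list(range(1, n))
    best = float("inf")
    # Try every ordering of the customers, then every way to cut the
    # ordering into k non-empty consecutive routes.
    for perm in permutations(customers):
        for cuts in combinations(range(1, len(perm)), k - 1):
            bounds = (0,) + cuts + (len(perm),)
            routes = [perm[a:b] for a, b in zip(bounds, bounds[1:])]
            cost = sum(_route_cost(w, r) for r in routes)
            best = min(best, cost)
    return best

def _route_cost(w, route):
    path = (0,) + route + (0,)          # depot -> customers -> depot
    return sum(w[a][b] for a, b in zip(path, path[1:]))

# 3-city instance (depot + 2 customers), one vehicle:
w = [[0, 2, 3],
     [2, 0, 1],
     [3, 1, 0]]
print(vrp_brute_force(w, 1))  # route 0 -> 1 -> 2 -> 0 costs 2 + 1 + 3 = 6
```

With two vehicles on the same instance the optimum rises to 10, since each customer then needs its own round trip from the depot.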
Explicitly, in every VRP(n, k), there are (n − 1) stations, k vehicles, and a depot D [14], [9]. The solution is a collection of paths whereby each vehicle makes exactly one journey, and all k vehicles start and conclude at the same location, D. The best solution is the one in which the k vehicles drive the fewest total miles. This problem may be thought of as a generalization of the well-known traveling salesman problem, in which a group of k salesmen must service an aggregate of (n − 1) sites, with exactly one visit to each site guaranteed [9]. In most practical settings, the VRP is further complicated by additional constraints, such as limited vehicle capacity or limited coverage time. As a consequence, several other approaches, both classical and quantum, have been proposed as potential ways forward. Currently available quantum approaches for optimization include the Quantum Approximate Optimization Algorithm (QAOA) [14], Quadratic Unconstrained Binary Optimization (QUBO) [15], and quantum annealing [16], [17], [18].

C. Support Vector Machines

The goal of the support vector machine (SVM) technique is to find the best decision boundary between two classes in n-dimensional space so that new data may be classified quickly. This optimum decision boundary is referred to as a hyperplane. The most extreme vectors and points that help construct the hyperplane are selected by the SVM; these extreme instances, on which the method is based, are called support vectors. Typically, data points cannot be separated by a hyperplane in their original space. In order to find such a hyperplane, a nonlinear transformation is applied to the data as a function. A feature map is a function that transforms the features of the provided data; the inner product of mapped data points is known as the kernel [19], [20], [21].
Quantum computing performs implicit calculations in high-dimensional Hilbert spaces through kernel techniques, by physically manipulating quantum systems. Feature vectors for SVM in the quantum realm are represented by density operators, which are themselves encodings of quantum states. The kernel of a quantum support vector machine (QSVM) is made up of the fidelities between different feature vectors; as opposed to a classical SVM, the kernel performs an encoding of classical input into quantum states [19], [22].

D. Novelty and Contribution
• In this work, we propose a new method to solve the VRP using a machine-learning approach through the use of QSVM.
• In this context, we reviewed recent and older works on QSVM [21], [23], [20] and VQE algorithms [24] that are used to solve optimization problems such as VRP. However, none of them use a hybrid approach to arrive at a solution.
• Our work implements this new approach to solving VRP in a detailed gate-based simulation of a 3-city or 4-city problem on a 6-qubit or 12-qubit system, respectively, using a parameterized circuit that is developed as a solution to VRP.
• We apply quantum encoding techniques such as amplitude encoding, angle encoding, higher-order encoding, and IQP encoding, and quantum algorithms such as QSVM, VQE, and QAOA, to construct circuits for VRP, analyze the effects, and consolidate our findings.
• We evaluate our solution using a variety of classical optimizers, as well as fixed and variable Hamiltonians, to draw statistical conclusions.

E. Organization
The paper is organized as follows. Sec. II discusses the fundamental mathematical concepts: QAOA, the Ising model, the quantum support vector machine, amplitude encoding, angle encoding, higher-order encoding, IQP encoding, and VQE. Sec. III discusses the formulation and solution of VRP using the concepts discussed in the previous section. Sub-Sec. III-B covers the basic building blocks of circuits to solve VRP using QSVM. Sec. IV covers the outcomes of the QSVM simulation and consists of two sub-sections: Sub-Sec. IV-A covers the simulation results of all the encoding schemes used; finally, in Sub-Sec. IV-B, we conclude by comparing the results of QSVM solutions using various optimizers in the Qiskit platform on the VRP circuit and discuss the feasibility of higher-qubit solutions as future directions of research.

II. BACKGROUND
Solving routing challenges rests on methods and processes for resolving combinatorial optimization problems. The mathematical model is transformed into a quantum-equivalent mathematical model, from which the objective function is created. By iteratively maximizing or minimizing this objective function, we arrive at the solution. In this section, we list the main ideas behind our solution approach.

A. QAOA
The Quantum Approximate Optimization Algorithm (QAOA) is a variational approach put forth by Farhi et al. in 2014 [5], [6], with adiabatic quantum computation as its foundation. It is a hybrid algorithm, since it applies both classical and quantum approaches. Simply described, quantum adiabatic computation involves switching from the eigenstate of the driver Hamiltonian to that of the problem Hamiltonian. The problem Hamiltonian can be expressed as

C = Σ_{α=1}^m C_α,

where each clause C_α acts on a subset of the bits z. The combinatorial optimization problem is resolved by finding the highest-energy eigenstate of C. Similarly, we employ the driver Hamiltonian

B = Σ_j σ_j^x,

where σ_j^x represents the Pauli-x operator on bit z_j and B is the mixing operator. Let us additionally define U_C(γ) = e^{−iγC} and U_B(β) = e^{−iβB}, which allow the system to evolve under C for time γ and under B for time β, respectively. Essentially, QAOA creates the state

|β, γ⟩ = U_B(β) U_C(γ) |s⟩,

where |s⟩ denotes the uniform superposition state of all input qubits. The expectation value of the cost function, Σ_{α=1}^m ⟨β, γ|C_α|β, γ⟩, gives the solution, or an approximate solution, to the problem [25].
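The single-layer recipe above can be traced end to end on two qubits with plain linear algebra. The cost C = (1 − Z⊗Z)/2 below is an illustrative toy choice (MaxCut on one edge), not the VRP cost; the grid scan stands in for the classical outer optimizer.

```python
import numpy as np

# One-layer QAOA on two qubits for a toy cost C = (1 - Z⊗Z)/2
# (MaxCut on a single edge -- an illustrative assumption).
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
C = (np.eye(4) - np.kron(Z, Z)) / 2                  # diagonal cost Hamiltonian
RX = lambda b: np.cos(b) * I - 1j * np.sin(b) * X    # e^{-i b X}

def qaoa_expectation(gamma, beta):
    psi = np.full(4, 0.5, dtype=complex)             # |s> = H⊗H |00>
    psi = np.exp(-1j * gamma * np.diag(C)) * psi     # U_C(γ) is diagonal
    psi = np.kron(RX(beta), RX(beta)) @ psi          # U_B(β) = e^{-iβB}
    return float(np.real(psi.conj() @ C @ psi))

# Scan a coarse parameter grid; for this toy cost, one QAOA layer
# can already reach the optimum C_max = 1.
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 25)
           for b in np.linspace(0, np.pi, 25))
print(round(best, 6))   # 1.0
```

Because B = X⊗I + I⊗X is a sum of commuting single-qubit terms, e^{−iβB} factors into the Kronecker product of two single-qubit rotations, which keeps the sketch free of any matrix-exponential routine.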

B. Ising Model
In statistical mechanics, the Ising model is a well-known mathematical depiction of ferromagnetism [26], [27]. In the model, discrete variables (+1 or −1) represent the magnetic dipole moments of "spins" in one of two states. Because the spins are organized in a network, commonly a lattice (a local structure repeated periodically in all directions), each spin can interact with its neighbors. The spins interact in pairs, with an energy that takes one value when the two spins are identical and a second value when they are dissimilar. Nevertheless, heat reverses this tendency, enabling alternate structural phases to arise. The model is a condensed representation of reality that enables the recognition of phase transitions. The following Hamiltonian gives the total spin energy:

H = −Σ_{⟨i,j⟩} J_ij s_i s_j − h Σ_i s_i,

where J_ij represents the interaction of adjacent spins i and j, and h represents an external magnetic field. If J is positive, the ground state at h = 0 is a ferromagnet; if J is negative, the ground state at h = 0 is an anti-ferromagnet on a bipartite lattice. For the purposes of this paper, we can write the quantum Hamiltonian as

H = −Σ_{⟨i,j⟩} J_ij σ_i^z σ_j^z − h Σ_i σ_i^x.

Here σ^z and σ^x represent the Pauli z and x operators. For simplification, we assume the ferromagnetic case (J_ij > 0) with no external influence on the spins (h = 0), so the Hamiltonian may be rewritten as

H = −Σ_{⟨i,j⟩} J_ij σ_i^z σ_j^z.
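The ferromagnetic h = 0 case can be checked directly on two spins: diagonalizing −J Z⊗Z should yield a doubly degenerate ground level at energy −J, shared by the aligned states |00⟩ and |11⟩. The value J = 1 is an arbitrary illustrative choice.

```python
import numpy as np

# Two-spin Ising Hamiltonian H = -J Z⊗Z at h = 0 (J = 1 for illustration).
# For J > 0 the aligned configurations |00> and |11> are the two
# degenerate ground states, as expected for a ferromagnet.
Z = np.diag([1., -1.])
J = 1.0
H = -J * np.kron(Z, Z)

energies = np.linalg.eigvalsh(H)   # sorted ascending
print(energies)                    # [-1. -1.  1.  1.]
```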
C. Quantum Support Vector Machine

The SVM [20], [21] is a supervised algorithm that constructs a hyperplane ⃗w · ⃗x + b = 0 such that ⃗w · ⃗x_i + b ≥ 1 for a training point ⃗x_i in the positive class, and ⃗w · ⃗x_i + b ≤ −1 for a training point ⃗x_i in the negative class. During training, the algorithm aims to maximize the gap between the two classes, which is intuitive: we want to separate the two classes as far as possible, in order to get a sharper estimate for the classification of new data samples such as ⃗x_0. Mathematically, the objective of the SVM is to find a hyperplane that maximizes the margin 2/|⃗w| subject to y_i(⃗w · ⃗x_i + b) ≥ 1. The normal vector can be written as ⃗w = Σ_{i=1}^M α_i ⃗x_i, where α_i is the weight of the i-th training vector ⃗x_i. Thus, obtaining the optimal parameters b and α_i is the same as finding the optimal hyperplane. Classifying a new vector ⃗x_0 amounts to knowing on which side of the hyperplane it lies, i.e., y(⃗x_0) = sign(⃗w · ⃗x_0 + b). Once the optimal parameters are known, classification becomes a linear operation. From the least-squares formulation of the SVM, the optimal parameters can be obtained by solving a linear system F(b, ⃗α)^T = (0, ⃗y)^T, where in the general form of F we adopt the linear kernel K_ij = ⃗x_i · ⃗x_j. Thus, to find the hyperplane parameters, we use the matrix inverse of F: (b, ⃗α)^T = F^{−1}(0, ⃗y)^T.
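A least-squares SVM of the kind described above reduces to one linear solve. In the sketch below, the bordered matrix F, the regularization parameter `gamma`, and the toy data set are all illustrative assumptions; the block only demonstrates that training is a matrix inversion and classification a sign of a linear function.

```python
import numpy as np

# Least-squares SVM with a linear kernel: the optimal (b, alpha) solve
#   F (b, alpha)^T = (0, y)^T,  with  F = [[0, 1^T], [1, K + I/gamma]],
# where K_ij = x_i . x_j  (gamma is a regularization assumption here).
def lssvm_train(X, y, gamma=10.0):
    M = len(y)
    K = X @ X.T                         # linear kernel (Gram matrix)
    F = np.zeros((M + 1, M + 1))
    F[0, 1:] = 1.0
    F[1:, 0] = 1.0
    F[1:, 1:] = K + np.eye(M) / gamma
    sol = np.linalg.solve(F, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]              # b, alpha

def lssvm_classify(X, alpha, b, x0):
    # y(x0) = sign( sum_i alpha_i x_i . x0 + b )
    return np.sign(alpha @ (X @ x0) + b)

# Toy two-class data, clearly separable:
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
b, alpha = lssvm_train(X, y)
print(lssvm_classify(X, alpha, b, np.array([3.0, 1.0])))   # positive side
```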

1) Quantum Kernels
The main inspiration for a quantum support vector machine is to consider quantum feature maps that lead to quantum kernel functions which are hard to simulate on classical computers. In this case, the quantum computer is only used to estimate the quantum kernel function, which can later be used in kernel-based algorithms. For simplicity, assume data points x, z ∈ X; the nonlinear feature map takes any data point x to a quantum state |ϕ(x)⟩. The kernel function can be computed as κ(x, z) = |⟨ϕ(x)|ϕ(z)⟩|². The state |ϕ(x)⟩ can be prepared by a unitary gate U(x), so that |ϕ(x)⟩ = U(x)|0⟩, and the kernel function becomes κ(x, z) = |⟨0|U†(x)U(z)|0⟩|². From the above, the kernel κ(x, z) is simply the probability of obtaining the all-zero string when the circuit U†(x)U(z)|0⟩ is measured; in other words, this kernel is a |0^n⟩-to-|0^n⟩ transition probability of a particular unitary quantum circuit on n qubits [19], [28]. This can be implemented using the kernel estimation circuit of Fig. 1.
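The transition-probability reading of the kernel can be verified on one qubit. The feature map U(x) = Ry(x) below is an illustrative assumption, not the paper's circuit; the sketch only checks the defining identity κ(x, z) = |⟨0|U†(x)U(z)|0⟩|².

```python
import numpy as np

# Quantum kernel as a |0>-to-|0> transition probability, using a
# one-qubit feature map U(x) = Ry(x) (an illustrative assumption).
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kernel(x, z):
    zero = np.array([1.0, 0.0])
    amp = zero @ ry(x).conj().T @ ry(z) @ zero   # <0| U†(x) U(z) |0>
    return abs(amp) ** 2

print(kernel(0.7, 0.7))    # identical inputs: kernel value is 1
print(kernel(0.0, np.pi))  # orthogonally encoded inputs: kernel value is 0
```

For this map the kernel works out to cos²((x − z)/2), so identical inputs give 1 and inputs encoded to orthogonal states give 0.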

D. Amplitude Encoding (AE)
In amplitude embedding [29], data is encoded into the amplitudes of a quantum state. An N-dimensional classical data point x is represented by the amplitudes of an n-qubit quantum state |ψ_x⟩ as

|ψ_x⟩ = Σ_{i=1}^N x_i |i⟩,

where N = 2^n, x_i is the i-th element of x, and |i⟩ is the i-th computational basis state. In order to encode an arbitrary data point x into an amplitude-encoded state, we must first normalize it, x → x/|x|, where |x| = (Σ_i |x_i|²)^{1/2}.
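The normalization step is a one-liner; the 4-dimensional data point below is an arbitrary illustrative value mapping onto two qubits.

```python
import numpy as np

# Amplitude encoding of a 4-dimensional datapoint into 2 qubits:
# normalize x, then use the entries directly as amplitudes.
x = np.array([3.0, 0.0, 4.0, 0.0])
psi = x / np.linalg.norm(x)         # |psi_x> = (1/|x|) Σ_i x_i |i>
print(psi)                          # [0.6 0.  0.8 0. ]
print(np.sum(np.abs(psi) ** 2))     # amplitudes are normalized (≈ 1.0)
```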

E. Angle Encoding (AgE)
While the amplitude encoding described above expands into a complicated quantum circuit of large depth, angle encoding employs N qubits and a quantum circuit of fixed depth, making it favorable for NISQ computers [30], [31]. We define angle encoding as a method of encoding classical information that employs rotation gates (the rotation may be chosen along the x, y, or z axis). In our scenario, the classical information consists of the node and edge weights assigned to the vehicle's nodes and pathways, which are in turn assigned as parameters to the ansatz. The encoded state is

|ψ_x⟩ = ⊗_{i=1}^N R(x_i)|0⟩,

where x_i represents the classical information stored in the angle parameter of the rotation operator R.
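One feature per qubit, one rotation per feature: the sketch below builds the tensor-product state for an assumed choice of Ry rotations and a two-feature input.

```python
import numpy as np

# Angle encoding: each feature x_i sets the angle of one Ry rotation,
# so N features need N qubits and only constant circuit depth.
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(x):
    state = np.array([1.0])
    for xi in x:                       # |psi> = ⊗_i Ry(x_i) |0>
        state = np.kron(state, ry(xi) @ np.array([1.0, 0.0]))
    return state

psi = angle_encode([np.pi / 2, 0.0])
print(psi)   # qubit 0 in (|0>+|1>)/sqrt(2), qubit 1 left in |0>
```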

F. Higher Order Encoding (HO)
Higher-order encoding is a variation of angle encoding in which we add an entangling layer and an additional sequential rotation by the product of the rotation angles of two entangled qubits [31]. This can be loosely defined as applying, after the angle-encoding layer, rotations R(x_i x_j) on entangled pairs (i, j). As with angle encoding, we are free to choose the rotation axis.
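One plausible reading of this loose definition, stated here as an assumption, is: angle-encode each feature, entangle the pair with a CNOT, then rotate by the product x₀x₁ on the entangled qubits via an Rzz rotation.

```python
import numpy as np

# Higher-order encoding sketch (one plausible reading, an assumption):
# angle layer, CNOT entangler, then Rzz(x0 * x1) on the pair.
Z = np.diag([1., -1.])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def higher_order_encode(x0, x1):
    psi = np.kron(ry(x0), ry(x1)) @ np.array([1., 0, 0, 0])   # angle layer
    psi = CNOT @ psi                                          # entangler
    zz = np.exp(-0.5j * x0 * x1 * np.diag(np.kron(Z, Z)))     # Rzz(x0*x1)
    return zz * psi                                           # diagonal gate

psi = higher_order_encode(np.pi / 2, np.pi / 3)
print(np.sum(np.abs(psi) ** 2))   # still a normalized state (≈ 1.0)
```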

G. IQP Encoding (IqpE)
IQP-style encoding is a relatively complicated encoding strategy. We encode classical information [32] as

|ψ_x⟩ = (U_Z(x) H^{⊗n})^r |0^n⟩,

where r is the depth of the circuit, indicating the number of repetitions of U_Z(x)H^{⊗n}. H^{⊗n} is a layer of Hadamard gates acting on all qubits. U_Z(x) is the key step in the IQP encoding scheme:

U_Z(x) = Π_{(j,k) ∈ S} R_{Z_j Z_k}(x_j x_k) Π_{j=1}^n R_Z(x_j),

where S is the set containing all pairs of qubits to be entangled using R_ZZ gates. First, consider the simple two-qubit gate R_{Z_1 Z_2}(θ). Its mathematical form, e^{−i(θ/2) Z_1⊗Z_2}, can be seen as a two-qubit rotation around ZZ, which entangles the two qubits.
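Since U_Z(x) is diagonal in the computational basis, one depth-r = 1 IQP layer on two qubits can be simulated by applying phases directly to the amplitudes produced by H⊗H. The sketch below, with arbitrary illustrative feature values, also shows why the encoding leaves all measurement probabilities uniform after a single layer.

```python
import numpy as np

# One layer of IQP-style encoding on 2 qubits: H⊗H, single-qubit
# Rz(x_i) phases, then the entangling Rzz(x0 * x1) rotation.
Z = np.diag([1., -1.])
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def iqp_encode(x0, x1):
    psi = np.kron(H, H) @ np.array([1., 0, 0, 0])
    # U_Z(x) is diagonal, so apply its phases directly:
    diag = (x0 * np.diag(np.kron(Z, I2))
            + x1 * np.diag(np.kron(I2, Z))
            + x0 * x1 * np.diag(np.kron(Z, Z)))
    return np.exp(-1j * diag) * psi

psi = iqp_encode(0.4, 1.1)
print(np.abs(psi) ** 2)   # phases only: all probabilities stay 0.25
```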

H. VQE
Another hybrid quantum-classical algorithm is the Variational Quantum Eigensolver (VQE), which is used to estimate the minimum eigenvalue of a large matrix or Hamiltonian H [33]. The basic goal of this method is to find a trial qubit state |ψ(⃗θ)⟩ that depends on a parameter set ⃗θ = (θ_1, θ_2, ...), known as the variational parameters. In quantum theory, the expectation of an observable or Hamiltonian H in the state |ψ(⃗θ)⟩ can be expressed as

E(⃗θ) = ⟨ψ(⃗θ)|H|ψ(⃗θ)⟩.

By spectral decomposition, H = Σ_i λ_i |ψ_i⟩⟨ψ_i|, where λ_i and |ψ_i⟩ are the eigenvalues and eigenstates of H. Because the eigenstates of H are orthogonal, ⟨ψ_i|ψ_j⟩ = 0 if i ≠ j, the wave function |ψ(⃗θ)⟩ can be expressed as a superposition of eigenstates, and the expectation becomes

E(⃗θ) = Σ_i λ_i |⟨ψ_i|ψ(⃗θ)⟩|².

Clearly, E(⃗θ) ≥ λ_min. In the VQE algorithm we therefore vary the parameters ⃗θ = (θ_1, θ_2, ...) until E(⃗θ) is minimized. This property of VQE is useful when attempting to solve combinatorial optimization problems: a parameterized circuit is used to prepare the trial state of the algorithm, and E(⃗θ), the expected value of the Hamiltonian in this state, is referred to as the cost function. The ground state of the desired Hamiltonian may be obtained by iterative minimization of the cost function. The optimization uses a classical optimizer that relies on the quantum computer to evaluate the cost function and its gradient at each optimization step.
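The whole VQE loop fits in a few lines for a one-parameter toy problem. The ansatz Ry(θ)|0⟩ and the Hamiltonian H = Z are illustrative assumptions; COBYLA (one of the classical optimizers used later in the paper) serves as the outer loop via SciPy.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal VQE sketch: ansatz |psi(t)> = Ry(t)|0> and Hamiltonian H = Z,
# whose minimum eigenvalue is -1, reached at t = pi.
Z = np.diag([1.0, -1.0])

def energy(theta):
    t = theta[0]
    psi = np.array([np.cos(t / 2), np.sin(t / 2)])   # Ry(t)|0>
    return psi @ Z @ psi                             # E(t) = cos(t)

res = minimize(energy, x0=[0.5], method="COBYLA")
print(res.fun)   # close to lambda_min = -1
```

On hardware, `energy` would be estimated from circuit measurements rather than computed exactly, but the classical outer loop is identical.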

III. FORMULATION AND SOLUTION OF VRP

A. Modelling VRP in QSVM
The vehicle routing problem can be solved by mapping the cost function to an Ising Hamiltonian H_c [34]; the answer to the problem is given by minimizing H_c. Consider an arbitrary connected graph with n vertices. Suppose we need to route vehicles between the vertices of the graph, and consider a binary decision variable x_ij whose value is 1 if the solution uses the edge from i to j, with edge weight w_ij > 0; otherwise its value is 0. The VRP then requires n × (n − 1) decision variables. For each node we define two sets: source[i], the set of nodes j to which node i sends an edge (j ∈ source[i]), and target[j], the set of nodes i from which node j receives an edge (i ∈ target[j]). The VRP is defined as follows [14], [35]:

VRP(n, k) = min_{x_ij ∈ {0,1}} Σ_{i≠j} w_ij x_ij,

where k is the number of vehicles and n is the total number of locations; taking the 0th location as the depot D, there are n − 1 locations for the vehicles to traverse. Noticeably, this is subject to the following restrictions [11]:

Σ_{j ∈ source[i]} x_ij = 1   ∀i ∈ {1, ..., n − 1},
Σ_{j ∈ target[i]} x_ji = 1   ∀i ∈ {1, ..., n − 1},
Σ_{j ∈ source[0]} x_0j = k,
Σ_{j ∈ target[0]} x_j0 = k,
u_i − u_j + Q x_ij ≤ Q − q_j   ∀i, j ∈ {1, ..., n − 1}, i ≠ j,
q_i ≤ u_i ≤ Q   ∀i ∈ {1, ..., n − 1}.

The first two restrictions establish that the delivering vehicles may visit each node only once. The middle two enforce the requirement that, after delivering the products, every vehicle must return to the depot. The last two impose the sub-tour elimination conditions and the bounds on u_i, with Q > q_j > 0 and u_i, Q, q_i ∈ R. From the VRP objective and its restrictions, the Hamiltonian of VRP can be expressed as follows [14].
H_VRP = Σ_{i≠j} w_ij x_ij + A Σ_{i=1}^{n−1} (1 − Σ_{j ∈ source[i]} x_ij)² + A Σ_{i=1}^{n−1} (1 − Σ_{j ∈ target[i]} x_ji)² + A (k − Σ_{j ∈ source[0]} x_0j)² + A (k − Σ_{j ∈ target[0]} x_j0)²,

where A > 0 represents a constant penalty weight. In vector form, the collection of all binary decision variables x_ij can be written as a vector ⃗z of length n(n − 1). Using this vector, we can build two new vectors for each node, ⃗z_{S[i]} and ⃗z_{T[i]} (at the beginning of the section we defined the two sets of source and target nodes; these two vectors represent them).
The vector ⃗z_{S[i]} has support on the entries x_ij with j ∈ source[i], and ⃗z_{T[i]} on the entries x_ji with j ∈ target[i]. These vectors aid in the development of the QUBO model of VRP [36], [15], [37], [38]. In general, the QUBO model on a connected graph G = (N, V) is specified as

H_QUBO = ⃗z^T Q ⃗z + ⃗g^T ⃗z + c,

where Q is a quadratic edge-weight coefficient matrix, ⃗g is a linear node-weight coefficient vector, and c is a constant. In order to find these coefficients in the QUBO formulation of H_VRP given in Eq. 23, we first express Eq. 26 in terms of H_B and H_C respectively, then expand and regroup the expression of H_VRP according to Eq. 27. For the QUBO formulation of Eq. (23) we thus obtain the coefficients Q (of size n(n − 1) × n(n − 1)), ⃗g (of size n(n − 1) × 1), and c, where J is the matrix of all ones and I is the identity matrix. The binary decision variable x_ij is converted to the spin variable s_ij ∈ {−1, 1} using x_ij = (1 + s_ij)/2. From the aforementioned equations, we may expand Eq. (27) to form the Ising Hamiltonian of VRP [15].
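The binary-to-spin substitution x = (1 + s)/2 can be expanded symbolically once and then trusted; the sketch below does the expansion for a generic QUBO and verifies the resulting Ising form on every assignment. The values of Q, g, and c are arbitrary illustrative data, not the VRP coefficients.

```python
import numpy as np

# Expand the QUBO z^T Q z + g^T z + c under x = (1 + s)/2 into the
# Ising form s^T J s + h . s + offset, then check exhaustively.
rng = np.random.default_rng(0)
n = 3
Q = rng.normal(size=(n, n))
g = rng.normal(size=n)
c = 0.7
one = np.ones(n)

# Coefficients obtained by substituting x = (1 + s)/2 and regrouping:
J = Q / 4
h = (Q @ one + Q.T @ one) / 4 + g / 2
offset = one @ Q @ one / 4 + g @ one / 2 + c

for bits in range(2 ** n):
    x = np.array([(bits >> i) & 1 for i in range(n)], dtype=float)
    s = 2 * x - 1                       # spin variables in {-1, +1}
    assert np.isclose(x @ Q @ x + g @ x + c, s @ J @ s + h @ s + offset)
print("QUBO and Ising forms agree on all", 2 ** n, "assignments")
```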
The terms J_ij, h_i, and d are defined as follows:

B. Analysis and Circuit Building

1) VRP
In this section, we create a gate-based circuit realizing the above formulation using the IBM gate model, implemented in the Qiskit framework [39]. For an arbitrary VRP instance, we begin with the state |+⟩^{⊗n(n−1)}, the ground state of H_mixer, obtained by applying Hadamard gates to all qubits initialized in the zero state, and we prepare the following state.
The energy E of the state |β, γ⟩ is calculated as the expectation of H_cost from Eq. (17). From the Ising model, the H_cost term can be written in terms of Pauli operators. Thus, for a single term of the state |β, γ⟩, with parameters β_0, γ_0, the expression reads e^{−iH_mixer β_0} e^{−iH_cost γ_0}. The first term involving H_cost expands to a diagonal matrix M. Applying CNOT gates before and after this matrix M, we can swap its diagonal elements. Observing the upper and lower blocks of the matrix, each 2 × 2 block can be rewritten as diag(1, e^{−2iJ_ij γ_0}), which is a phase gate. The second term of H_cost is treated analogously. Fig. 3b depicts the basic circuit with two qubits along with the gate selections for H_cost. Similarly, H_mixer is merely a rotation along the X axis, as depicted by the U gate in Fig. 3d.
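The CNOT conjugation step described above rests on a standard two-qubit identity, which can be checked numerically: sandwiching a single-qubit Rz between CNOTs reproduces the diagonal evolution e^{−iJγ Z⊗Z}. The values of J and γ are arbitrary illustrative choices.

```python
import numpy as np

# Verify the identity behind the cost-circuit construction:
#   CNOT · (I ⊗ Rz(2Jγ)) · CNOT  =  exp(-i J γ Z⊗Z)
Z = np.diag([1., -1.])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

J, gamma = 0.8, 0.3
lhs = CNOT @ np.kron(I2, rz(2 * J * gamma)) @ CNOT
rhs = np.diag(np.exp(-1j * J * gamma * np.diag(np.kron(Z, Z))))
print(np.allclose(lhs, rhs))   # True
```

Both sides are the diagonal diag(e^{−iJγ}, e^{iJγ}, e^{iJγ}, e^{−iJγ}); the CNOT pair merely routes the single-qubit phase onto the states of odd parity.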
The above sample circuits can be used to solve VRP with the combined VQE and QAOA approach. However, in this paper we focus on a machine learning solution of VRP through QSVM; thus we need to construct a QSVM circuit using various encoding schemes. Simple interpretations and implementations of these encoding schemes are described in the upcoming subsections.
2) Amplitude Encoding

As we look into AE, a single-qubit state is represented in amplitude form and extended to two qubits; applying controlled-U and anti-controlled-U gates on this state prepares the desired amplitudes. Combining the VRP and amplitude-encoding circuits eliminates the need for the Hadamard gates and the H_mixer components, and we end up with the skeleton circuit of Fig. 3(a).

3) Angle Encoding
For a 2-qubit scenario, angle encoding translates to the following example, where the R_y gate is defined as

R_y(θ) = [[cos(θ/2), −sin(θ/2)], [sin(θ/2), cos(θ/2)]].

4) Higher Order Encoding
For a 2-qubit scenario, HO encoding translates to the following, with the R_y gate defined as above.

5) IQP Encoding
For a 2-qubit scenario, IqpE translates to the following.

IV. RESULTS

A. VQE Simulation of QSVM and VRP
We build the Hamiltonian with weights drawn uniformly between 0 and 1, and then run it along with the ansatz through three of the VQE optimizers available in IBM Qiskit (COBYLA, L_BFGS_B, and SLSQP). We run the circuit with up to two layers and gather data using all of these optimizers. We run the experiment with a fixed Hamiltonian and, subsequently, with a set of variable Hamiltonians, to see whether the QSVM and encoding approach can effectively reach the classical minimum. Our results indicate that COBYLA is the most efficient optimizer, followed by SLSQP and L_BFGS_B. In the sections that follow, we examine the results obtained using the various QSVM encoding schemes. We define two terms, Accuracy and Error, for interpreting the outcomes: accuracy is defined as the number of times the solution reaches the classical minimum, whereas an error occurs when the solution deviates from the classical minimum rather than reaching it. Both terms are evaluated as percentages over the distribution of outcomes.
T = total number of simulation runs
N = number of times the solution reaches the classical minimum

1) Amplitude Encoding

With its large number of gates, the AE circuit has proven to be the most complex of all the encoding circuits; due to this complexity, we can simulate no more than six-qubit computations. Despite its complexity, AE has a nearly perfect accuracy rate (100%) and a very low error rate (0%) for 50-iteration fixed-Hamiltonian simulations. The trend is present in both the first and second layers. For a variable-Hamiltonian simulation, the first-layer accuracy is 96% and the second-layer accuracy is 94% across all optimizers. Figure 4 depicts the results of 50 iterations of simulating SVM with amplitude encoding on a VRP circuit with fixed and variable Hamiltonians. The decline in accuracy, however, can be attributed to simulation or computational errors, as the deviating runs miss the classical minimum by more than 100 percent and are therefore considered aberrations; most likely, the simulation hardware cannot accommodate the VQE procedure.

2) Angle Encoding
Angle encoding is the second encoding, after amplitude encoding, that we tested in the SVM VRP simulation, and it yields high accuracy and low error rates. Observing Tables I and II, angle encoding is the second most accurate encoding employed in our investigations. For fixed-Hamiltonian simulations over 50 iterations with 6-qubit angle encoding, the first layer achieves 100 percent accuracy and zero percent error for all optimizers. In the second-layer simulation (over 50 iterations), the accuracy decreases to 98% for COBYLA, 96% for SLSQP, and 86% for L_BFGS_B, a greater decrease than for the other two. These declines are attributable to optimizer-dependent statistical errors. Similarly, for 12-qubit simulations of SVM VRP, the accuracy rates are higher in the first layer, with COBYLA at 100%, SLSQP at 92%, and L_BFGS_B at 88%, reiterating that the accuracy is highly dependent on the optimizer. Moving to the second layer of 12-qubit simulations on a fixed Hamiltonian, we observe a decline in accuracy as the optimization depth rises: COBYLA winds up at 80%, L_BFGS_B at 70%, and SLSQP at 84%. Here, SLSQP's accuracy loss is smaller than that of the other two optimizers. The variable Hamiltonian with 12 qubits demonstrates a comparable trend. On the first layer, we observe high accuracy, with COBYLA at 96%, L_BFGS_B at 86%, and SLSQP at 90%. Moving to the second layer, the accuracy figures drop significantly, with COBYLA at 76% and L_BFGS_B at 62%, while SLSQP maintains excellent accuracy at 86%. In every scenario of our investigation, it is evident that over-optimization reduces accuracy rates.
3) Higher Order Encoding

After amplitude and angle encoding, higher-order encoding is the third encoding in our SVM VRP simulation experiment, and also the third most accurate. For both 6-qubit and 12-qubit simulations, HO encoding yields moderately accurate results; however, as the number of circuit layers increases, the accuracy of the HO encoding scheme deteriorates, rendering it inappropriate. Figure 5 depicts the statistics of the HO encoding scheme for fixed- and variable-Hamiltonian simulations of SVM VRP circuits over 50 iterations, for both 6-qubit and 12-qubit systems. COBYLA achieves 78% accuracy for a 6-qubit HO encoding circuit on a fixed Hamiltonian, while L_BFGS_B achieves 66% and SLSQP 70%. As we proceed to the second layer, the accuracy decreases considerably, with COBYLA at 34% and both SLSQP and L_BFGS_B at 16%. Similar trends can be observed in variable-Hamiltonian simulations of HO encoding with 6 qubits, with COBYLA at 76%, SLSQP at 62%, and L_BFGS_B at 58% for the first layer; for the second layer, the accuracy drops to 36%, 34%, and 36% for COBYLA, L_BFGS_B, and SLSQP, respectively. The 12-qubit simulation yields superior results to the 6-qubit simulation and improves COBYLA's accuracy: for fixed-Hamiltonian simulations, COBYLA achieves an accuracy of 92%, compared to 78% for 6 qubits, and for variable-Hamiltonian simulations, COBYLA records 76% for 6 qubits in the first layer versus 92% for 12 qubits. The trends for L_BFGS_B and SLSQP are ambiguous in both the fixed- and variable-Hamiltonian cases; nevertheless, we can conclude that increasing the number of layers decreases accuracy, and that COBYLA outperforms the other two optimizers while ensuring stable performance.

4) IQP Encoding
IQP encoding is the last and least accurate encoding in our experiment simulating an SVM VRP circuit. The results are plotted in Fig. 7 and listed in Tables I and II. As the figures and tables show, accuracy is consistently poor for fixed- and variable-Hamiltonian simulations in both 6-qubit and 12-qubit circuits, and it declines further as the number of layers increases. Hence this encoding is unsuitable for our SVM VRP circuits.

B. Inferences from Simulation
As we scan through the results of the SVM VRP simulations across the encoding schemes, we observe some clear and distinct trends. Tables I and II summarize the results obtained from the plots of all the encoding schemes used in this experiment. We list these trends as the outcomes of this experiment in the points below.
• The approach of solving VRP using machine learning is successful and is capable of accomplishing the same or a superior result compared with the conventional approach using VQE and QAOA.
• The use of encoding/decoding schemes can serve the purpose of creating superposition and entanglement, eliminating the additional effort required to construct the mixer Hamiltonian when solving the VRP with the standard QAOA/VQE approach.
• While the standard approach to solving VRP, or any combinatorial optimization problem, requires a few layers of circuit depth (2 in most cases), we are able to achieve the same result on the first layer with this approach, indicating that it is more efficient than the standard approach.
• We also observe a distinct trend that accuracy decreases as the number of layers increases, which can be used to determine where to limit the optimization depth.
• Encoding/decoding schemes reduce the number of optimization layers but increase the circuit's complexity by introducing more gates. Therefore, when selecting an encoding scheme, we must take into account the complexity of the generated circuit and the number of required gates, as well as the classical resources (memory, CPU) it will require. There must be a trade-off between circuit complexity and the desired problem accuracy.
• Despite the fact that amplitude encoding provided the greatest accuracy, it could not be used to simulate a 12-qubit VRP scenario due to the large number of gates required. Angle encoding, on the other hand, was found to be much simpler, with a significantly smaller number of gates, while still providing excellent accuracy (96% for COBYLA, and 92% for SLSQP and L_BFGS_B in the variable-Hamiltonian simulation) across all the available optimizers. This again demonstrates that circuit complexity and gate count are the most important considerations when choosing an encoding/decoding scheme.
• It can be noticed that AgE performs the best in terms of circuit complexity and accuracy rates due to its single layer of superposition. In the other encodings (HO, IqpE), we observe multi-layered complex superposition structures, which explains their fluctuations and error rates; likewise, increasing the number of layers increases the superposition structures and therefore decreases the accuracy.
• For HO encoding, COBYLA shows lower accuracy in circuits with fewer qubits (6 qubits) and higher accuracy in circuits with more qubits (12 qubits) for both fixed- and variable-Hamiltonian simulations; SLSQP and L_BFGS_B do not follow this trend. This demonstrates that the algorithm's performance is extremely dependent on the optimizer; therefore, when evaluating the algorithm's performance, the most efficient optimizer should be selected by comparing the available optimizers.
• The IQP encoding scheme performed the worst in this experiment, with the lowest accuracy and highest error rates among all the encodings for 1-layer, 2-layer, fixed-, and variable-Hamiltonian simulations. Therefore, the IqpE method is not suitable for solving VRP using QSVM.
• All of the optimizers used in the experiments performed well across the AE, AgE, and HO encodings; however, COBYLA outperformed the other two due to its consistently high accuracy, while SLSQP is more resistant to the accuracy fluctuations caused by increased optimization depth or multi-layered circuits.

C. Experimental Setup, data gathering, and statistics
This experiment was conducted within the Qiskit framework. While performing the experiment, we used a quantum instance object, and the ansatz runs inside this quantum instance. A random seed is added to the quantum instance to stabilize the VQE results. All experiments were run 50 + 50 times: once with a fixed Hamiltonian matrix and once with varying Hamiltonian matrices. The objective is to ensure that the results do not depend on a single Hamiltonian, and that the circuits reach the classical minimum, or come near it, regardless of the Hamiltonian used. Thus, apart from the plots, Tables I and II become the figures of merit. In addition to the many hours of testing and debugging, the results reported here amounted to 150 hours of CPU time on a 24-core AMD workstation using Qiskit's built-in simulators [39].
V. CONCLUSION

In this paper, we presented a novel technique for solving VRP through a 6- and 12-qubit circuit-based quantum support vector machine (QSVM) with a variational quantum eigensolver, for both fixed and variable Hamiltonians. In the experiment, multiple encoding strategies were used to convert the VRP formulation into a QSVM and solve it. In addition, we utilized multiple classical optimizers available within the Qiskit framework to measure the output variation and accuracy rates. Our machine learning-based approach to solving VRP has proven fruitful: using a QSVM to implement a gate-based simulation of a 3-city or 4-city VRP on a 6-qubit or 12-qubit system accomplishes the goal. The method not only solves VRP but also outperforms the conventional method of solving VRP via multiple optimization phases involving only VQE and QAOA. In addition, selecting appropriate encoding methods establishes the optimal balance between circuit complexity and optimization depth, thereby enabling multiple approaches to solving CO problems with machine learning techniques.

Fig. 2 :
Fig. 2: (a) Circuit example illustrating gate operations for H_cost. (b) Circuit example displaying gate selections with an additional U gate for H_mixer.

TABLE I :
For 6- and 12-qubit VRP circuits using SVM with 2 layers, the table above shows the accuracy and error with reference to the classical minimum (over 50 iterations) for VQE simulations over a fixed Hamiltonian, utilizing the Amplitude, Angle, Higher-Order, and IQP encoding schemes with the COBYLA, SLSQP, and L_BFGS_B optimizers.

TABLE II :
For 6- and 12-qubit VRP circuits using SVM with 2 layers, the table above shows the accuracy and error with reference to the classical minimum (over 50 iterations) for VQE simulations over variable Hamiltonians, utilizing the Amplitude, Angle, Higher-Order, and IQP encoding schemes with the COBYLA, SLSQP, and L_BFGS_B optimizers.