Extracting a function encoded in amplitudes of a quantum state by tensor network and orthogonal function expansion

There are quantum algorithms for finding a function f satisfying a set of conditions, such as solving partial differential equations, and these achieve exponential quantum speedup compared to existing classical methods, especially when the number d of variables of f is large. In general, however, these algorithms output a quantum state that encodes f in the amplitudes, and reading out the values of f as classical data from such a state can be so time-consuming that the quantum speedup is ruined. In this study, we propose a general method for this function readout task. Based on function approximation by a combination of a tensor network and orthogonal function expansion, we present a quantum circuit and its optimization procedure to obtain an approximating function of f that has a polynomial number of degrees of freedom with respect to d and is efficiently evaluable on a classical computer. We also conducted a numerical experiment to approximate a finance-motivated function to demonstrate that our method works.

However, when we consider these and other quantum solvers and the speedup they provide, we should pay attention to the meaning of the words "solve" and "speedup". That is, we must resolve the issue of how to extract the function values from the quantum state, which might otherwise ruin the quantum speedup.
The detail is as follows. As pointed out in [43], many quantum algorithms output the solution not as the figures that we eventually want, but as a quantum state in which the solution is encoded in the amplitudes. For example, quantum algorithms for solving a PDE output the quantum state of the form

|f⟩ = (1/C) Σ_{i=0}^{N_gr−1} f(⃗x_i) |i⟩,

where f is the solution function, ⃗x_0, ⃗x_1, ..., ⃗x_{N_gr−1} are the N_gr grid points in the domain of f, |0⟩, |1⟩, ..., |N_gr − 1⟩ are the computational basis states, and C is the normalization factor.
Reading out the values of the function as classical data from such a quantum state can often be a bottleneck. In the example of solving a PDE, the number of grid points N_gr is exponentially large with respect to the dimension d: naively, taking n_gr grid points in each dimension leads to N_gr = n_gr^d points in total. This makes the amplitude (1/C) f(⃗x_i) of each basis state in |f⟩ exponentially small unless the amplitude is localized on certain grid points. Consequently, when we try to retrieve the amplitude, and thus the function value f(⃗x_i) at the point ⃗x_i, using methods such as quantum amplitude estimation [44,45], an exponentially large time overhead is incurred. Therefore, even if a quantum algorithm generates the state |f⟩ exponentially faster than a classical algorithm outputs the value of f(⃗x_i), obtaining f(⃗x_i) through the quantum algorithm might not be faster than the classical one.
If we want to obtain the function values at many points, for example, to plot them as a graph, the situation worsens: to obtain the function values at M points, the quantum algorithm must be repeated M times.
Motivated by this background, this study focuses on how to efficiently extract the function encoded in the amplitudes of the quantum state |f⟩, given an oracle O_f to generate it. Although this issue can be resolved using the specific nature of the problem in some cases, such as the derivative pricing considered in [27], where the martingale property of the derivative price is used to calculate the function value at a point, it is desirable to devise methods that are generally applicable.
The method proposed in this study is twofold. First, we use an orthogonal function system. Orthogonal functions, such as trigonometric functions, Legendre polynomials, and Chebyshev polynomials, are sequences of functions orthogonal to each other with respect to some inner product, defined as either the weighted integral of the product of two functions over the domain or the sum of the values of the product at specific nodes. Orthogonal functions are often used for function approximation. Any function f satisfying some smoothness conditions is approximated as f ≈ Σ_l a_l P_l, a series in the orthogonal functions {P_l}_l, with the coefficient a_l given by the inner product of f and P_l. We expect that orthogonal function expansion can also be used in the quantum setting because, as explained later, the coefficient a_l is given by ⟨P_l|f⟩ multiplied by a known factor, where |P_l⟩ is the quantum state in which P_l is encoded like |f⟩. Thus, by estimating ⟨P_l|f⟩ for every l up to a sufficiently high order, we obtain the orthogonal function expansion f̃ of f, and then the approximate values of f at arbitrary points by evaluating f̃. This approach seems promising because we expect that ⟨P_l|f⟩ is not exponentially small, unlike the amplitudes of the computational basis states in |f⟩ (see Sec. II C).
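As a purely classical point of reference, the one-dimensional expansion step can be sketched as follows. This is our own illustrative example, not part of the quantum algorithm: we assume Chebyshev polynomials with Chebyshev-Gauss nodes (for which the discrete orthogonality holds exactly), and the names `chebyshev_expand` and `chebyshev_eval` are ours.

```python
import numpy as np

def chebyshev_expand(f, D, n_gr):
    """Approximate f on [-1, 1] by a degree-(D-1) Chebyshev series,
    using the discrete orthogonality of T_l at the Chebyshev-Gauss nodes."""
    j = np.arange(n_gr)
    x = np.cos(np.pi * (j + 0.5) / n_gr)      # Chebyshev-Gauss nodes
    fx = f(x)
    a = np.empty(D)
    for l in range(D):
        T_l = np.cos(l * np.arccos(x))        # T_l(x) = cos(l arccos x)
        c_l = n_gr if l == 0 else n_gr / 2    # discrete normalization constant
        a[l] = np.dot(T_l, fx) / c_l          # a_l = <T_l, f> / c_l
    return a

def chebyshev_eval(a, x):
    """Evaluate the truncated series sum_l a_l T_l(x)."""
    l = np.arange(len(a))
    return np.cos(np.outer(np.arccos(x), l)) @ a

a = chebyshev_expand(np.exp, D=12, n_gr=32)
xs = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(chebyshev_eval(a, xs) - np.exp(xs)))
```

Here the coefficients come from a discrete inner product of f with each basis function, the classical counterpart of estimating ⟨P_l|f⟩; for the smooth function exp, a few coefficients already give near machine-precision accuracy.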
However, in the high-dimensional case, the above approach still suffers from high complexity. If we use D orthogonal functions {P_l}_{l∈[D]_0}¹ for an accurate approximation in the one-dimensional case, the naive way to achieve similar accuracy in the d-dimensional case is to use the tensorized functions {P_⃗l}_{⃗l∈[D]^d_0}, where P_⃗l(x_1, ..., x_d) = Π_{i=1}^d P_{l_i}(x_i) for ⃗l = (l_1, ..., l_d). In this way, because the total number of P_⃗l's is D^d, obtaining the coefficients a_⃗l for all P_⃗l's, and then the orthogonal function expansion of f, exhibits exponential complexity, and so does evaluating the resultant expansion. Although this is less serious than reading out the amplitudes in |f⟩ because we take D < n_gr, the exponential dependence of the complexity on d remains.
Then, we make use of the second building block of our method: a tensor network, specifically the type called the matrix product state (MPS), which is simple and widely used (for reviews, see [46,47]). A tensor network is an approximation scheme for high-order tensors as a contraction of lower-order tensors. In some situations, it approximates the original tensor well while reducing the degrees of freedom (DOF) and data volume. It was originally invented in quantum many-body physics to approximate wave functions in intractably high-dimensional Hilbert spaces; however, it is currently used to reduce complexity in various fields, including function approximation [48-54]. A recent study [54] showed that, under certain conditions, the complexity of the tensor network approximation of a d-dimensional function does not scale exponentially in d, which indicates the powerful approximation ability of tensor networks. Another advantage of tensor networks is their compatibility with quantum computing: we can generate a quantum state in which a type of tensor network is amplitude-encoded using a simple quantum circuit [55]. Moreover, there is a general procedure for optimizing such a tensor network circuit to maximize the fidelity between the resulting state and a given state [56]. Therefore, we reach the following idea: given O_f, we find a tensor network approximation of the coefficients a_⃗l in the orthogonal function expansion of f, and thereby an approximate function of f, through quantum circuit optimization.

¹ For n ∈ N, we define [n]_0 := {0, 1, ..., n − 1}.
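To make the tensor-network idea concrete, a minimal classical sketch of MPS (tensor-train) compression by sequential truncated SVDs is given below; `tt_decompose` and the rank-1 test tensor are our own illustration of the general technique, not the paper's quantum-circuit procedure.

```python
import numpy as np

def tt_decompose(A, r):
    """Decompose a d-way tensor A (shape (D,)*d) into a tensor train
    (MPS) with bond dimension at most r, by sequential truncated SVDs."""
    d, D = A.ndim, A.shape[0]
    cores, rank = [], 1
    M = A.reshape(rank * D, -1)
    for _ in range(d - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        k = min(r, len(S))                       # truncate to bond dimension r
        cores.append(U[:, :k].reshape(rank, D, k))
        M = (S[:k, None] * Vt[:k]).reshape(k * D, -1)
        rank = k
    cores.append(M.reshape(rank, D, 1))
    return cores

def tt_reconstruct(cores):
    """Contract the cores back into a full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.reshape(T.shape[1:-1])

# A rank-1 tensor is exactly representable with bond dimension 1.
v = np.arange(1.0, 5.0)
A = np.einsum('i,j,k->ijk', v, v, v)
cores = tt_decompose(A, r=1)
err = np.max(np.abs(tt_reconstruct(cores) - A))
```

Each core holds at most r·D·r entries, so the stored DOF grows linearly in the order d of the tensor instead of exponentially.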
In the remainder of this paper, we show how this idea is concretely realized. We present the tensor-network-based quantum circuit and the optimization procedure for making the generated state close to |f⟩, based on [56]. The parameters in the tensor network are easily read out from the circuit. Note that this method can be used on both fault-tolerant quantum computers and noisy intermediate-scale quantum devices.
The remainder of this paper is organized as follows. Sec. II presents our method. Beginning with an explanation of the problem setting under consideration, we present the quantum circuit we use and the procedure to optimize it. Sec. III describes the numerical experiment we conducted to validate our method, which demonstrates that the method successfully approximates a finance-motivated multivariate function. Sec. IV summarizes this paper.

II. METHOD

A. Overview
Before providing a detailed explanation of the proposed method, we present a high-level overview. For the given function f, our goal is to obtain an approximating function efficiently computable on a classical computer. For this purpose, we use a quantum circuit, an overview of which is given in Fig. 1. It consists of two parts. In the first half, we encode the coefficients ã_⃗l of the orthogonal function series, written in the form of an MPS, into the amplitudes of a quantum state. In the latter half, we operate the quantum circuit that encodes the values of the orthogonal functions into the amplitudes. These two operations yield the quantum state |f̃⟩, in which the approximating function f̃ based on the MPS and orthogonal function expansion is encoded in the amplitudes. Then, we optimize the first half of the circuit such that |f̃⟩ approaches |f⟩, the state encoding f. As a result, we obtain the coefficients in the form of an MPS optimized to approximate f, and then the approximating function f̃, which is efficiently computable by virtue of the MPS.
The approximation problem addressed in this study is explained in more detail in Sec. II B, and the setting of the available oracles is shown in Sec. II C. The MPS approximation of the coefficients is explained in Sec. II D, and encoding it into the quantum state, that is, the first half of the circuit, is described in Sec. II E. The entire circuit, including the latter half, and the optimization of the MPS are explained in Sec. II F.

B. Problem description
As the approximation target, we consider a real-valued function f on the d-dimensional hyperrectangle

Ω := [L_1, U_1] × · · · × [L_d, U_d].    (1)

For any i ∈ [d], we also consider orthogonal functions {P^i_l}_l on [L_i, U_i] labeled by l ∈ N_0 := {0} ∪ N. These functions are characterized by the orthogonality relation that, for any l, l′ ∈ N_0,

∫_{L_i}^{U_i} w_i(x) P^i_l(x) P^i_{l′}(x) dx ∝ δ_{l,l′},    (2)

where the weight function w_i is defined on [L_i, U_i] and takes non-negative values, and δ_{l,l′} is the Kronecker delta. However, we hereafter assume that {P^i_l}_l satisfy the following discrete orthogonality relation: for any D ∈ N, there exist n_gr points x_{i,0} < x_{i,1} < ... < x_{i,n_gr−1} in [L_i, U_i], where n_gr ≥ D, such that, for any l, l′ ∈ [D]_0,

Σ_{j=0}^{n_gr−1} P^i_l(x_{i,j}) P^i_{l′}(x_{i,j}) = c^i_l δ_{l,l′}    (3)

holds for some c^i_l > 0. We can consider Eq. (3) as the discrete approximation of Eq. (2), with w_i absorbed into the spacing of the grid points x_{i,j}. For some orthogonal functions, such as trigonometric functions and Chebyshev polynomials, the relation in Eq. (3) holds exactly.
We define the tensorized orthogonal functions as follows: for any ⃗l = (l_1, ..., l_d) ∈ N^d_0 and ⃗x = (x_1, ..., x_d) ∈ Ω,

P_⃗l(⃗x) := Π_{i=1}^d P^i_{l_i}(x_i).    (4)

It follows from Eq. (3) that, with ⃗x_⃗j defined as ⃗x_⃗j := (x_{1,j_1}, ..., x_{d,j_d}) ∈ R^d for any ⃗j = (j_1, ..., j_d) ∈ [n_gr]^d_0, they satisfy the orthogonality relation

Σ_{⃗j ∈ [n_gr]^d_0} P_⃗l(⃗x_⃗j) P_{⃗l′}(⃗x_⃗j) = c_⃗l δ_{⃗l,⃗l′},  c_⃗l := Π_{i=1}^d c^i_{l_i},    (5)

for any ⃗l, ⃗l′ ∈ [D]^d_0. We can use these P_⃗l to approximate f by the orthogonal function series

f̃(⃗x) := Σ_{⃗l ∈ [D]^d_0} a_⃗l P_⃗l(⃗x),    (6)

with D such that max_⃗x |f(⃗x) − f̃(⃗x)| is less than the desired threshold ϵ. Here, for any ⃗l ∈ [D]^d_0, the coefficient a_⃗l is given by

a_⃗l = (1/c_⃗l) Σ_{⃗j ∈ [n_gr]^d_0} P_⃗l(⃗x_⃗j) f(⃗x_⃗j),    (7)

which follows from Eq. (3). It is known that this kind of series converges to f in the large-D limit for some orthogonal functions (see [57] for trigonometric series and [58] for Chebyshev series).

² For n ∈ N, we define [n] := {1, ..., n}.
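A classical two-dimensional sketch of the tensorized expansion and its coefficient formula may clarify the construction. This is our own illustration: we assume Chebyshev polynomials at Chebyshev-Gauss nodes in each dimension (one instance of a basis satisfying the discrete orthogonality), and the separable test function is arbitrary.

```python
import numpy as np

D, n = 10, 24
jj = np.arange(n)
x = np.cos(np.pi * (jj + 0.5) / n)                # nodes in each dimension
T = np.cos(np.outer(np.arange(D), np.arccos(x)))  # T[l, j] = T_l(x_j)
c = np.where(np.arange(D) == 0, float(n), n / 2.0)  # 1D normalization constants

f = np.exp(np.add.outer(x, x) / 2.0)              # f(x, y) = exp((x+y)/2) on the grid
# Coefficients of the tensorized expansion, the 2D analogue of Eq. (7):
# a_{l1,l2} = sum_j P_{l1}(x_{j1}) P_{l2}(x_{j2}) f(x_{j1}, x_{j2}) / (c_{l1} c_{l2})
a = (T @ f @ T.T) / np.outer(c, c)
f_rec = T.T @ a @ T                               # truncated series back on the grid
err = np.max(np.abs(f_rec - f))
```

Note that storing `a` already takes D² entries; in d dimensions the analogous array has D^d entries, which is exactly the exponential growth the MPS of the next sections is meant to tame.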

C. Oracles to generate the function-encoding states
In this study, we assume the availability of the oracle O_f mentioned in the introduction; we now define it formally. Hereafter, we assume that the grid number n_gr satisfies n_gr = 2^{m_gr} for some integer m_gr. Then, we consider the system S consisting of d m_gr-qubit registers and the unitary operator O_f that acts on S as

O_f |0⟩ = |f⟩ := (1/C) Σ_{⃗j ∈ [n_gr]^d_0} f(⃗x_⃗j) |j_1⟩ · · · |j_d⟩.    (8)

For any n ∈ N_0, we denote by |n⟩ the computational basis state on a quantum register with a sufficient number of qubits, in which the bit string on the register corresponds to the binary representation of n; |j⟩ with j ∈ [n_gr]_0 in Eq. (8) is the computational basis state on an m_gr-qubit register. Furthermore, C in Eq. (8) is defined as

C := ( Σ_{⃗j ∈ [n_gr]^d_0} (f(⃗x_⃗j))² )^{1/2}.    (9)

We also assume the availability of the oracles V^1_OF, ..., V^d_OF, each of which acts on an m_gr-qubit register as

V^i_OF |l⟩ = (1/√(c^i_l)) Σ_{j=0}^{n_gr−1} P^i_l(x_{i,j}) |j⟩    (10)

for any l ∈ [D]_0. V^i_OF |D⟩, ..., V^i_OF |n_gr − 1⟩ may be any states as long as V^i_OF is unitary. In principle, these oracles are constructible because of the orthogonality relation (3). We discuss their implementation in Appendix A.
Now, let us elaborate on why reading out f(⃗x_⃗j) for ⃗j ∈ [n_gr]^d_0 from |f⟩ is difficult, whereas obtaining it through orthogonal function expansion is more promising, as mentioned briefly in the introduction. We rewrite the amplitude in |f⟩ as

f(⃗x_⃗j)/C = (1/√(N_gr)) · f(⃗x_⃗j)/f̄,    (11)

where

f̄ := ( (1/N_gr) Σ_{⃗j ∈ [n_gr]^d_0} (f(⃗x_⃗j))² )^{1/2}

is the root mean square of f over the grid points. The amplitude is suppressed by the factor 1/√(N_gr), which is exponential with respect to d, and is thus exponentially small unless f is extremely localized at ⃗x_⃗j such that f(⃗x_⃗j)/f̄ is comparable to √(N_gr). However, we note that, for any ⃗l ∈ [D]^d_0, we have

a_⃗l = (C/√(c_⃗l)) ⟨P_⃗l|f⟩,    (12)

where |P_⃗l⟩ := (1/√(c_⃗l)) Σ_{⃗j ∈ [n_gr]^d_0} P_⃗l(⃗x_⃗j) |j_1⟩ · · · |j_d⟩. Because both C and √(c_⃗l) are √(N_gr) times the root mean square of some function over the grid points, we expect that their ratio is O(1), and that we can efficiently obtain the expansion coefficient a_⃗l, and then the approximation f̃ of f in Eq. (6), by estimating ⟨P_⃗l|f⟩, without suffering from an exponential suppression factor such as 1/√(N_gr). However, we do not pursue this direction in this study, because finding all the expansion coefficients suffers from an exponential increase in the number of coefficients in the high-dimensional case, as explained in the introduction.
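The contrast between the exponentially suppressed grid amplitudes and the O(1) overlaps can be checked numerically with a toy separable function. This sketch is our own illustration: the 1D factor g and the comparison against the uniform state (which encodes the constant function P_0 = 1) are arbitrary assumptions made only to exhibit the scaling.

```python
import numpy as np

def amplitude_scales(d, n_gr=8):
    """For a smooth separable f, compare the largest computational-basis
    amplitude of |f> with the overlap <P_0|f> as the dimension d grows."""
    x = np.linspace(0.0, 1.0, n_gr)
    g = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # a smooth, non-localized 1D factor
    f_vals = g
    for _ in range(d - 1):
        f_vals = np.multiply.outer(f_vals, g)  # f(x) = prod_i g(x_i) on the grid
    amp = f_vals.ravel() / np.linalg.norm(f_vals)          # amplitudes of |f>
    p0 = np.full(f_vals.size, 1.0 / np.sqrt(f_vals.size))  # uniform state |P_0>
    return np.max(np.abs(amp)), float(p0 @ amp)

max_amp_2, ov_2 = amplitude_scales(2)
max_amp_6, ov_6 = amplitude_scales(6)
```

The largest amplitude shrinks geometrically as d grows, while the overlap with |P_0⟩ stays of order one, mirroring the argument around Eqs. (11) and (12).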

D. Matrix product state
Thus, we consider using a tensor network. Among the various types of tensor networks, we use the MPS, also known as the tensor train, which is simple but powerful and therefore widely used in various fields; a basic introduction to the MPS is presented in Appendix B. In the MPS scheme, the order-d tensor a_⃗l is approximated as

ã_⃗l = Σ_{k_1, ..., k_{d−2} ∈ [r]_0} U^1_{l_1,k_1} U^2_{k_1,l_2,k_2} · · · U^{d−2}_{k_{d−3},l_{d−2},k_{d−2}} U^{d−1}_{k_{d−2},l_{d−1},l_d},    (13)

where r ∈ N is called the bond dimension, U^1 ∈ R^{D×r}, U^i ∈ R^{r×D×r} for i ∈ {2, ..., d−2}, and U^{d−1} ∈ R^{r×D×D}.³ We also impose the following conditions: for any i ∈ {2, ..., d−2} and k, k′ ∈ [r]_0,

Σ_{l ∈ [D]_0} Σ_{k″ ∈ [r]_0} U^i_{k,l,k″} U^i_{k′,l,k″} = δ_{k,k′}    (15)

holds, and, for any k, k′ ∈ [r]_0,

Σ_{l,l′ ∈ [D]_0} U^{d−1}_{k,l,l′} U^{d−1}_{k′,l,l′} = δ_{k,k′}    (16)

holds. We call this form of the MPS the right canonical form. It can always be imposed by the procedure explained in Appendix B.
The representation in Eq. (13) indeed reduces the DOF compared with the original a_⃗l as a d-dimensional tensor. The total number of components in U^1, ..., U^{d−1} is

Dr + (d−3) Dr² + D²r,

which is smaller than that in a_⃗l, namely D^d, unless r = O(D^{O(d)}). Furthermore, with the coefficients a_⃗l in Eq. (6) represented as in Eq. (13), the computation of the approximation of f becomes efficient. We now approximate f by

f̃(⃗x) := Σ_{⃗l ∈ [D]^d_0} ã_⃗l P_⃗l(⃗x).    (18)

³ Usually, the MPS representation of a d-dimensional tensor takes the form

ã_⃗l = Σ_{k_1, ..., k_{d−1} ∈ [r]_0} U^1_{l_1,k_1} U^2_{k_1,l_2,k_2} · · · U^{d−1}_{k_{d−2},l_{d−1},k_{d−1}} U^d_{k_{d−1},l_d},    (14)

where, in comparison to Eq. (13), U^{d−1} is in R^{r×D×r} rather than R^{r×D×D}, and U^d ∈ R^{r×D} is added. We can consider that U^{d−1} and U^d in Eq. (14) are contracted into U^{d−1} in Eq. (13). The reason for the form in Eq. (13) is that it corresponds to the quantum circuit considered in Sec. II E. In addition, although we can set the bond dimension r separately for each pair of U^i and U^{i+1}, for simplicity we set it to the same value.
This can be computed as

f̃(⃗x) = Σ_{k_1,...,k_{d−2}} ( Σ_{l_1} U^1_{l_1,k_1} P^1_{l_1}(x_1) ) ( Σ_{l_2} U^2_{k_1,l_2,k_2} P^2_{l_2}(x_2) ) · · · ( Σ_{l_{d−1},l_d} U^{d−1}_{k_{d−2},l_{d−1},l_d} P^{d−1}_{l_{d−1}}(x_{d−1}) P^d_{l_d}(x_d) ),    (19)

that is, we first contract U^i_{k_{i−1},l_i,k_i} and P^i_{l_i} with respect to l_i for each i ∈ [d], and then take the contractions with respect to k_1, ..., k_{d−2}. In this procedure, the number of arithmetic operations is O(dDr²), which is much smaller than the O(D^d) required to compute Eq. (6) with general a_⃗l not having any specific structure.

E. Quantum circuit to encode the MPS

Here, with D = 2^{m_deg} and r = 2^{m_BD}, we prepare d registers S_deg,1, ..., S_deg,d of m_deg qubits each. Let us denote by V_MPS the unitary that corresponds to the whole of the quantum circuit depicted in Fig. 2. Then, V_MPS generates the MPS-encoded state

|ã⟩ := Σ_{⃗l ∈ [D]^d_0} ã_⃗l |l_1⟩ · · · |l_d⟩,    (21)

which is close to the state that encodes the true coefficients a_⃗l if ã_⃗l in Eq. (13) approximates a_⃗l. Here, |l⟩ with l ∈ [D]_0 denotes the computational basis states on S_deg,1, ..., S_deg,d. In addition, we associate the entries of U^1, ..., U^{d−1} with those of the unitaries V_1, ..., V_{d−1}, respectively, as follows: for any l ∈ [D]_0 and k ∈ [r]_0,

U^1_{l,k} = (⟨l| ⊗ ⟨k|) V_1 |0⟩;    (23)

for any i ∈ {2, ..., d−2}, l ∈ [D]_0, and k, k′ ∈ [r]_0,

U^i_{k,l,k′} = (⟨l| ⊗ ⟨k′|) V_i (|k⟩ ⊗ |0⟩);    (24)

and for any l, l′ ∈ [D]_0 and k ∈ [r]_0,

U^{d−1}_{k,l,l′} = (⟨l| ⊗ ⟨l′|) V_{d−1} (|k⟩ ⊗ |0⟩),    (25)

where |n⟩ with n ∈ N_0 denotes the computational basis state on the corresponding registers. Note that each U^i, which is not a unitary matrix, is realized as a block of V_i, a component unitary of the circuit V_MPS in Fig. 2. Although V_1, ..., V_{d−1} have additional components that do not appear in Eqs. (23) to (25), those do not affect the state |ã⟩ because of the initialization of all qubits to |0⟩.
Note that U^2, ..., U^{d−1} in Eqs. (24) and (25) automatically satisfy the conditions (15) and (16) because of the unitarity of V_2, ..., V_{d−1}. However, the unitarity of V_1 imposes a normalization constraint on U^1 in Eq. (23). This implies that, with such a U^1, the MPS-based approximation f̃ in Eq. (18) does not have the DOF of an overall factor; therefore, although we can express the functional form of f by f̃, we cannot adjust the magnitude of f̃ so that it fits f. Conversely, if we have some estimate of the factor C, i.e., the ratio of f to f̃, we can approximate f by C f̃. This issue is addressed in Sec. II F.
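The classical evaluation of f̃ by the contraction order of Eq. (19) — first over the degree indices l_i, then over the bond indices — can be sketched as follows. This is our own minimal illustration: cores are stored as 3-way arrays of shape (r_in, D, r_out), and the rank-1 check tensor with monomial "basis" values is used only to verify the arithmetic.

```python
import numpy as np

def eval_mps_function(cores, P_vals):
    """Evaluate f~(x) = sum_l a~_l * prod_i P^i_{l_i}(x_i), with the
    coefficient tensor a~ stored as an MPS; P_vals[i] is the vector
    (P^i_0(x_i), ..., P^i_{D-1}(x_i))."""
    v = np.ones(1)
    for G, p in zip(cores, P_vals):
        M = np.tensordot(G, p, axes=([1], [0]))  # contract degree index l_i: O(D r^2)
        v = v @ M                                 # then the bond index: O(r^2)
    return float(v[0])

# Check on a rank-1 coefficient tensor a_l = c_{l1} c_{l2} c_{l3}.
D = 4
c = np.array([1.0, 0.5, 0.25, 0.125])
cores = [c.reshape(1, D, 1)] * 3
P_vals = [np.array([1.0, x, x ** 2, x ** 3]) for x in (0.3, 0.7, 0.9)]
direct = 1.0
for p in P_vals:
    direct *= float(c @ p)
err = abs(eval_mps_function(cores, P_vals) - direct)
```

The loop touches each core once, so the cost per evaluation point is O(dDr²) rather than the O(D^d) of a dense coefficient tensor.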

F. Optimization of the quantum circuit
We now consider how to optimize the quantum circuit V MPS and obtain an MPS-based function approximation.
First, we extend the circuit V_MPS in Sec. II E using {V^i_OF}_i as follows. For each i ∈ [d], we add m_gr − m_deg qubits after the i m_deg-th qubit of the original circuit and denote the system consisting of S_deg,i and the added qubits by S_gr,i. Note that the resultant system is the same as the system S for O_f, which consists of d m_gr-qubit registers. We then perform V^i_OF on S_gr,i for each i ∈ [d]. The circuit V_App obtained in this way generates the state

|f̃⟩ ∝ Σ_{⃗j ∈ [n_gr]^d_0} f̃(⃗x_⃗j) |j_1⟩ · · · |j_d⟩,    (27)

where |n⟩ with n ∈ N_0 now denotes the computational basis state on S_gr,1, ..., S_gr,d. That is, this is the quantum state that amplitude-encodes f̃ in Eq. (18), the approximation of f by the orthogonal function expansion with the MPS approximation of the coefficients. Therefore, if we obtain a V_App that generates |f̃⟩ close to |f⟩, we also obtain {U^i}_i for which f̃ in Eq. (18) well approximates f, at least on the grid points, up to the overall factor C, by reading out their entries from the quantum gates {V_i}_i in V_App.
Next, we consider how to obtain such a V_App, especially the {V_i}_i in it. We aim to maximize the fidelity

F := |⟨f|f̃⟩|².    (28)

Note that maximizing F is equivalent to minimizing the sum over the grid points of the squared differences between the two normalized functions, f/C and the correspondingly normalized f̃, which, in the large-n_gr limit, is equivalent to minimizing the squared L² norm of their difference, the common metric in function approximation.
The procedure for maximizing F is similar to that presented in [56]. We optimize the gates V_i alternately: each V_i is optimized with the others fixed, setting i to 1, 2, ..., d − 1 in turn, and this loop can be repeated an arbitrary number of times. The optimization step for V_i proceeds as follows. We define F_i as the environment of V_i, that is, the operator obtained by removing V_i from the overlap ⟨f|f̃⟩ and taking the partial trace over the qubits on which V_i does not act. Here, S̄′_deg,i denotes the system consisting of the qubits except those in S′_deg,i, the qubits on which V_i acts, and, for any subsystem s of S, Tr_s denotes the partial trace over the Hilbert space corresponding to s.

(Fig. 3 depicts the circuit V_App, which generates the state (27) in which the approximation f̃ in Eq. (18) of f is amplitude-encoded. V_MPS is the circuit depicted in Fig. 2; the lines that bypass it are not used in it. All the d m_gr qubits are initialized to |0⟩. The subsystems consisting of portions of the qubits, indicated at the left end, are described in the body text.)

We can regard F_i as an M × M matrix, where M = rD for i ∈ [d − 2] and M = D² for i = d − 1. Its entries are given by ⟨l| F_i |l′⟩, where |l⟩, |l′⟩ ∈ {|0⟩, |1⟩, ..., |M − 1⟩} are the computational basis states on S′_deg,i. Supposing that we know F_i, we perform its singular value decomposition (SVD)

F_i = X D Y†,    (34)

where X and Y are M × M unitaries and D is the diagonal matrix having the singular values of F_i as its diagonal entries. Finally, following [56], we update V_i by

V_i ← X Y†.

We obtain F_i through Pauli decomposition and the Hadamard test [56]. We set m = log₂ M and, for any k ∈ [m], denote the identity operator, the Pauli-X gate, the Pauli-Y gate, and the Pauli-Z gate on the k-th qubit in S′_deg,i by σ^k_0, σ^k_1, σ^k_2, and σ^k_3, respectively. We also define σ_⃗α := σ^1_{α_1} ⊗ · · · ⊗ σ^m_{α_m} for any ⃗α = (α_1, ..., α_m) ∈ {0, 1, 2, 3}^m. Then, we can always decompose F_i as

F_i = Σ_{⃗α ∈ A} F_{i,⃗α} σ_⃗α

with F_{i,⃗α} ∈ R, where, rather than over {0, 1, 2, 3}^m, the index tuple ⃗α runs over the subset A of tuples for which σ_⃗α is real and symmetric. We obtain each F_{i,⃗α} by estimating Tr[σ_⃗α F_i]; we can estimate this quantity using the Hadamard test, in which the circuit V_MPS with V_i replaced by σ_⃗α is used. In one update of each V_i, the total number of such estimations is |A| = M(M + 1)/2. We refer to [56] for the details of the estimations. After we optimize {V_i}_i and read out {U^i}_i from them, the remaining task is simply to multiply f̃, constructed from {U^i}_i, by the factor C. In this paper, we do not go into the details of estimating this factor but simply assume that some estimate of it is available. Some quantum algorithms for solving PDEs provide a method for estimating this factor, the square root of the sum of the squared function values on the grid points [19].
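The SVD-based update of V_i is an instance of the orthogonal Procrustes problem. A minimal real-matrix sketch follows; it is our own illustration of the linear-algebra step (the unitary case is analogous), not the circuit-level procedure of [56], and `best_orthogonal` and the random check are assumed names.

```python
import numpy as np

def best_orthogonal(F):
    """Return the orthogonal matrix V maximizing Tr(F V^T),
    via the SVD F = X S Y^T  ->  V = X Y^T (orthogonal Procrustes)."""
    X, _, Yt = np.linalg.svd(F)
    return X @ Yt

rng = np.random.default_rng(1)
F = rng.standard_normal((8, 8))  # stand-in for a (real) environment matrix F_i
V = best_orthogonal(F)
ok_orthogonal = bool(np.allclose(V @ V.T, np.eye(8), atol=1e-10))
score = float(np.trace(F @ V.T))  # equals the sum of singular values of F
# No random orthogonal matrix should achieve a larger value of Tr(F Q^T).
others = []
for _ in range(50):
    Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
    others.append(float(np.trace(F @ Q.T)))
never_beaten = max(others) <= score + 1e-9
```

The optimality follows from the von Neumann trace inequality: Tr(F V^T) ≤ Σσ_i for any orthogonal V, with equality at V = XY^T.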
The entire procedure to obtain an approximation of f is summarized as Algorithm 1.

G. Situation in which our method is useful
In the introduction, we mentioned solving PDEs using quantum algorithms as a main use case of our method. One might think that if the solution of the PDE can be approximated by an MPS as in Eq. (19), we can directly plug the ansatz (19) into the PDE and optimize the parameters {U^i_{k_{i−1},l_i,k_i}} using a classical computer. In fact, such methods have been studied in [59-62]. However, the scheme we are now considering, that is, generating the quantum state that encodes the solution in the amplitudes by a quantum algorithm and reading out the solution in an MPS-based approximation, is applicable to a broader range of problems. For example, in an initial value problem, it is possible that the MPS-based approximation is valid at the terminal time but not at some intermediate time points. In the context of quantum many-body physics, this applies to the wave function of a system that evolves to a low-entangled state via a highly entangled state, as in quantum annealing [63-65]. In such a case, finding the function at the terminal time using a quantum PDE solver and then reading it out by MPS-based approximation might work, whereas using MPS throughout would not.

III. NUMERICAL EXPERIMENT
We now confirm the feasibility of our method through a numerical experiment on approximating a finance-motivated function.
Algorithm 1 Proposed algorithm to obtain an approximation of the function f : Ω → R.
Input:
• D = 2^{m_deg} with m_deg ∈ N: the degree of the orthogonal functions
• r = 2^{m_BD} with m_BD ∈ N: the bond dimension
• The oracle O_f in Eq. (8)
• The initial values of the rD × rD unitaries V_1, ..., V_{d−2} and the D² × D² unitary V_{d−1}, which may be chosen randomly

8: Perform the SVD of F_i as per Eq. (34).

A. Problem setting
As a reasonable instance of the target function for the approximation, we take the price of a financial derivative (hereafter, simply a derivative).
Specifically, we consider a worst-of put option written on d underlying assets that obey the Black-Scholes (BS) model, and take its present price, as a function f(⃗s) of the asset prices ⃗s = (s_1, ..., s_d), as the approximation target. This price is the solution of the so-called Black-Scholes PDE and thus fits the current setting, in which a quantum PDE solver outputs the state |f⟩. Details are provided in Appendix C.
We approximated f(⃗s) by the series in Eq. (6), taking cosine functions as the orthogonal functions. Specifically, for any i ∈ [d] and l ∈ [D]_0, we set

P^i_l(s) := cos( lπ (s − L_i) / (U_i − L_i) ).    (41)

The settings for U_i and L_i are presented in Appendix C. The orthogonality relation (3) is satisfied with the grid points set as

s_{i,j} := L_i + (j + 1/2)(U_i − L_i)/n_gr    (42)

for j ∈ [n_gr]_0, and c^i_l given by c^i_l = n_gr for l = 0 and c^i_l = n_gr/2 for l ≥ 1. Let us comment on the choice of cosines as the orthogonal functions. We consider the case where the grid points are equally spaced in the hyperrectangle, which we expect to be the simplest and most common setting. In this setting, the cosine functions (41) satisfy the orthogonality relation (5) exactly; the choice is therefore natural in this case. In general, it is plausible to take orthogonal functions such that the orthogonality relation holds on the given grid points.
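The exact discrete orthogonality of cosines on an equally spaced midpoint grid can be verified directly; this is a minimal check (on [0, 1] for simplicity, which matches the general interval after the affine substitution in Eq. (41)).

```python
import numpy as np

n_gr, D = 16, 8
j = np.arange(n_gr)
x = (j + 0.5) / n_gr                    # equally spaced midpoint grid on [0, 1]
l = np.arange(D)
P = np.cos(np.pi * np.outer(l, x))      # P[l, j] = cos(l * pi * x_j)
G = P @ P.T                             # Gram matrix over the grid points
c = np.where(l == 0, float(n_gr), n_gr / 2.0)  # c_l: n_gr for l = 0, n_gr/2 otherwise
ortho_ok = bool(np.allclose(G, np.diag(c), atol=1e-10))
```

The Gram matrix is exactly diagonal with entries c_l, i.e., the relation (3) holds with no discretization error for this basis and grid.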

B. Algorithm modifications to run the numerical experiment
We used Algorithm 1 with the following modification, since we performed all calculations on a classical computer and thus faced a memory limitation. Rather than F in Eq. (28), we attempted to maximize the overlap ⟨a|ã⟩. Here, |ã⟩ is given in Eq. (21) and

|a⟩ := (1/∥a∥) Σ_{⃗l ∈ [D]^d_0} a_⃗l |l_1⟩ · · · |l_d⟩,

where we calculated the coefficients a_⃗l for each ⃗l ∈ [D]^d_0 using Eq. (7) and the values of f(⃗s) on the grid points computed by Monte Carlo integration (see Appendix C). We took the minimal number of grid points to calculate the coefficients for a given D, which means n_gr = D, although it is assumed that n_gr > D when our algorithm is run on a future quantum computer with the oracle O_f. With this modification, the quantum circuit under consideration becomes smaller, and the calculation becomes feasible on a classical computer. The gates {V^i_OF} do not appear in our experiment; their roles are absorbed into the aforementioned preprocessing to calculate {a_⃗l}. Most importantly, note that the {U^i} calculated under the above modification are the same as those output by our algorithm without the modification.

C. Result of the approximation
Next, we attempted to obtain an approximation of f(⃗s). We set the parameters as follows: d = 5, D = r = 16, and n_iter = 5. For C, we used the square root of the sum of the squared values of f(⃗s) on the grid points, calculated using Monte Carlo integration. We used the ITensor library [66] for the tensor calculations.
We now demonstrate the accuracy of the obtained MPS-based approximation of f(⃗s). Because we are considering a high-dimensional space, it is difficult to display the accuracy over the entire space. Therefore, we show the accuracy on the following two sets of selected points: (i) the "diagonal" line s_1 = ... = s_d and (ii) 10⁴ random points sampled according to the BS model (see Appendix C for details). Fig. 4 displays the results for the diagonal line, depicting the MPS-based approximation f̃(⃗s) of f(⃗s), along with the Monte Carlo integration values and the cosine expansion approximation as per Eq. (6). The MPS-based approximation fits the Monte Carlo values well, within an error of about 0.5 or less, except in the region near the lower end of the domain of the approximations. Note that the cosine expansion approximation already deviates from the Monte Carlo values, and this deviation acts as a lower bound on the error of the MPS-based approximation. The MPS-based approximation almost overlaps the cosine expansion approximation, which means that the MPS-based approximation worked well.
The results for the random sample points are summarized in Table I. At these points, the maximum differences among the values of f(⃗s) calculated by Monte Carlo integration, the MPS-based approximation, and the cosine expansion using the full coefficients without the MPS approximation are about 0.5 or less, similar to most regions in the diagonal-line case. This result supports the proposed method.
In the current MPS-based approximation, the number of DOF is 12544, which is smaller than the number of cosine expansion coefficients (1048576) by two orders of magnitude. Therefore, we achieved a large parameter reduction while maintaining the approximation accuracy.
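The quoted DOF counts follow directly from the core shapes in Sec. II D, as the following arithmetic check shows:

```python
# DOF count for the MPS cores with d = 5, D = r = 16:
# U^1 (D x r), U^2..U^{d-2} (r x D x r each), U^{d-1} (r x D x D).
d, D, r = 5, 16, 16
mps_dof = D * r + (d - 3) * r * D * r + r * D * D
full_dof = D ** d               # full tensor of cosine expansion coefficients
reduction = full_dof / mps_dof  # roughly two orders of magnitude
```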

D. Relationship between approximation accuracy and degrees of freedom
To investigate the relationship between the DOF and the approximation accuracy, we performed the following additional experiment. We replaced the circuit V_MPS in Fig. 2 with a different circuit V′_MPS having a different number of DOF. Here, with m_bl ∈ {2, ..., d m_deg − 1}, V′_MPS consists of the gates Ṽ_1, ..., Ṽ_{d m_deg − m_bl + 1}, each acting on a system of m_bl qubits, displaced one by one, as shown in Fig. 5. The DOF of this circuit is determined by m_bl. We alternately optimized the blocks {Ṽ_i} in a manner similar to Algorithm 1 so that ⟨a|ã′⟩ is maximized, where |ã′⟩ is the state generated by V′_MPS. Then, we obtained the approximation f̃′ of f(⃗s) accordingly. In Fig. 6(a), we display the maximum difference of f̃′ from the cosine expansion approximation on the diagonal line s_1 = ... = s_d, taking m_bl = 2, ..., 6, along with that of the MPS-based approximation. The figure shows a power-law relationship between the approximation accuracy and the DOF in the large-DOF region. This behavior is similar to that often observed when MPSs are applied to critical one-dimensional quantum systems or two-dimensional classical systems [67-73]. If this behavior also appears in the case of d ≫ 1, one might make an argument similar to the finite bond-dimension (entanglement) scaling developed in the study of MPS and evaluate f̃ with appropriate extrapolations with respect to the DOF.
We then show the relationship between the accuracy of the cosine expansion approximation and its DOF in Fig. 6(a). We plot the maximum error of the worst-of put option price approximated by cosine expansions of low degree D = 3, ..., 10 on the line s_1 = ... = s_d, taking that of degree D = 16 as the reference value. The error of the low-degree cosine expansion is much larger than that of f̃′ with comparable DOF. Fig. 6(b) is similar to Fig. 6(a) but for the random sample points. Again, the maximum difference of f̃′ from the cosine expansion with D = 16 follows a power law with respect to the DOF and is much smaller than the error of the low-degree cosine expansion.
These results provide additional evidence of the advantages of our MPS-based approximation and the approximation by the circuit V ′ MPS over the simple cosine expansion.

IV. SUMMARY
In this study, we have considered how to extract, as classical data, the function encoded in the amplitudes of a quantum state. Such a task necessarily accompanies quantum algorithms such as PDE solvers, but to date there has been no proposal of a general method to accomplish it, even though it risks ruining the quantum speedup. We proposed a method based on orthogonal function expansion and a tensor network. Orthogonal function expansion is widely used for function approximation but suffers from an exponential increase in the number of parameters, the expansion coefficients, with respect to the dimension, that is, the number of variables of the function. We therefore use an MPS, a type of tensor network, to approximate the coefficients, viewed as a high-order tensor, and reduce the DOF. We presented a quantum circuit that produces the state corresponding to such a function approximation. Such a circuit is constructible because an MPS can be encoded in a quantum state by a simple circuit, and so can orthogonal functions because of their orthogonality relation. We also presented the procedure to optimize the quantum circuit and the MPS-based approximation, based on the alternating method proposed in [56]. Finally, we conducted a numerical experiment to approximate a finance-motivated multivariate function and found that our method works in this case.
This study has scope for further investigation. For example, we did not present an explicit bound on the total query complexity of the oracles O_f and V^i_OF needed to obtain an approximating function with a given accuracy ϵ. Presenting such a complexity estimation is desirable but difficult at present. In general, function approximation in a high-dimensional space is a long-standing problem in computational mathematics. Although a recent study showed that there is an MPS-based approximation that achieves a given accuracy with DOF subexponential with respect to d for any function with a certain property [54], to the best of the authors' knowledge, there is no known algorithm that certainly outputs such an approximation. Moreover, the convergence of the alternating optimization method explained above is not well understood theoretically, even though its good performance is known empirically. Given these circumstances, we consider a quantitative discussion of the accuracy and complexity of our method beyond the scope of this study and leave it for future work, presenting some numerical evidence instead.
We expect that the proposed method can be used widely in combination with various quantum algorithms that output function-encoding states, whether in the FTQC or the NISQ setting. In future work, we will attempt to combine this method with concrete function-finding quantum algorithms on concrete problems and present a complete set of algorithms, along with a more quantitative analysis of complexity.
Then, we define the coefficients with n = 1, where n denotes the number of times the SVD has been applied.
Second, with n = 2, letting the degrees of freedom of {l_1, ..., l_{d−n−1}} be the row components and the degrees of freedom of {l_{d−n}, k_{d−n}} be the column components of Ψ^{n−1}, and applying the singular-value decomposition (SVD) again, we obtain the next step of the decomposition. Repeating the sequence of Eq. (B5) to Eq. (B9) until n = d − 2 and defining U accordingly, we can rewrite Ψ_{l_1,...,l_d} in the right canonical form of an MPS, and this form is equivalent to Eq. (13). In practical MPS calculations, the dimension ρ_{d−n−1} of the matrix Λ^n, which diverges exponentially with respect to d, is truncated to r, which we call the bond dimension. The numerical error ε incurred when this truncation is introduced at the n-th SVD of the repeated procedure is the error of the low-rank approximation in the SVD. Therefore, the decay of the singular values Λ^n_{k_{d−n−1}}, which are monotonically decreasing nonnegative real numbers in k_{d−n−1}, is directly related to the error. It is known that, in the critical region of one-dimensional quantum systems, this decay follows a power law; ε thus decays with a power law as we increase r and, accordingly, the DOF of the MPS representation, as seen in Figs. 6(a) and 6(b).
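As a concrete illustration of the procedure above, the following is a minimal NumPy sketch (ours, not the paper's code) that builds the right canonical MPS of a d-index tensor with all indices of equal dimension by repeated SVD, truncating each bond to the bond dimension r:

```python
import numpy as np

def tensor_to_mps(psi, r):
    """Rewrite a d-index tensor psi (all indices of equal dimension n) as a
    right canonical MPS by repeated SVD, truncating each bond to at most r."""
    d, n = psi.ndim, psi.shape[-1]
    cores = []
    mat = psi.reshape(-1, n)  # rows: first d-1 indices, columns: last index
    right_dim = 1
    for _ in range(d - 1):
        u, s, vh = np.linalg.svd(mat, full_matrices=False)
        chi = min(r, len(s))                           # truncated bond dimension
        u, s, vh = u[:, :chi], s[:chi], vh[:chi, :]
        cores.insert(0, vh.reshape(chi, n, right_dim))  # isometric (right) core
        right_dim = chi
        # absorb the singular values to the left and expose the next index
        mat = (u * s).reshape(-1, n * chi)
    cores.insert(0, mat.reshape(1, n, right_dim))
    return cores
```

With r large enough the contraction of the cores recovers the original tensor exactly; decreasing r introduces precisely the low-rank truncation error ε discussed above, governed by the decay of the discarded singular values.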
The relationship between MPS and quantum circuits is as described in Sec. II E. We refer to the review articles [46, 47] and references therein for more detailed properties of the MPS itself.

Appendix C: Derivative price as an approximation target function
Here, we explain the derivative price considered as the approximation target in the above numerical experiment.
A derivative is a contract between two parties in which amounts (payoffs) determined by the prices of some widely traded assets (underlying assets), such as stocks and bonds, are paid and/or received between the parties. Under some mathematical models that describe the random movement of the underlying asset prices, we can use the established theory to calculate the derivative price (see [80, 81]).
In this study, we consider d underlying assets whose prices at time t are denoted by ⃗S(t) = (S_1(t), ..., S_d(t)) and obey the Black-Scholes (BS) model [82, 83], characterized by the following stochastic differential equation in the risk-neutral measure: for i ∈ [d],

dS_i(t) = r_RF S_i(t) dt + σ_i S_i(t) dW_i(t). (C1)

Here, r_RF is a real parameter called the risk-free interest rate, and σ_1, ..., σ_d are positive parameters called volatilities. W_1, ..., W_d are Brownian motions satisfying dW_i dW_j = ρ_ij dt for i, j ∈ [d], where the correlation matrix (ρ_ij) is symmetric and positive definite and satisfies ρ_11 = ... = ρ_dd = 1 and −1 < ρ_ij < 1 if i ≠ j. Time t = 0 corresponds to the present. We consider the derivative in which one party A receives a payoff from the other party B at a predetermined time T > 0, whose amount f_pay(⃗S(T)) depends on the underlying asset prices at T. Under some technical assumptions, the price of this derivative for A at time t ∈ [0, T), with ⃗S(t) being ⃗s = (s_1, ..., s_d), is given by

V(t, ⃗s) = E[ e^{−r_RF(T−t)} f_pay(⃗S(T)) | ⃗S(t) = ⃗s ], (C2)

where E[·] denotes the (conditional) expectation in the risk-neutral measure. This expectation can be calculated by Monte Carlo integration, that is, by generating many sample paths of the time evolution of ⃗S(t) up to T and averaging e^{−r_RF(T−t)} f_pay(⃗S(T)) over the paths. It is also known that V(t, ⃗s) can be obtained by solving the BS PDE

∂V/∂t + (1/2) Σ_{i,j=1}^d σ_i σ_j ρ_ij s_i s_j ∂²V/(∂s_i ∂s_j) + r_RF Σ_{i=1}^d s_i ∂V/∂s_i − r_RF V = 0

backward from time T to t, with the boundary condition in the time direction being V(T, ⃗s) = f_pay(⃗s), and those in the ⃗s directions set according to the asymptotic behavior of V(t, ⃗s) in the small and large asset price limits. Solving the BS PDE by quantum computing has been considered in previous studies [27, 32, 36–38]. Specifically, we consider the worst-of put option, which has the payoff function

f_pay(⃗s) = max(K − min{s_1, ..., s_d}, 0)

and is often incorporated into exotic equity derivatives. Here, K is a positive constant called the strike.
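To make this concrete, here is a minimal NumPy sketch (illustrative only, not the paper's implementation) that estimates the worst-of put price at t = 0 by Monte Carlo integration, sampling the exact lognormal terminal distribution of the BS model; the function name and the choice of correlation matrix in the test are ours:

```python
import numpy as np

def worst_of_put_mc(s0, sigma, corr, K, T, r_rf=0.0, n_paths=10**5, seed=0):
    """Estimate V(0, s0) = E[e^{-r_RF T} max(K - min_i S_i(T), 0)] by averaging
    the discounted payoff over samples of the terminal asset prices."""
    rng = np.random.default_rng(seed)
    s0, sigma = np.asarray(s0, float), np.asarray(sigma, float)
    L = np.linalg.cholesky(np.asarray(corr, float))  # correlate the increments
    z = rng.standard_normal((n_paths, len(s0))) @ L.T
    # exact lognormal solution of the BS SDE at time T
    s_T = s0 * np.exp((r_rf - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(K - s_T.min(axis=1), 0.0)    # worst-of put payoff
    return np.exp(-r_rf * T) * payoff.mean()
```

For d = 1 this reduces to a vanilla put, which gives a quick sanity check against the closed-form Black-Scholes price.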
We then regard V(0, ⃗s) as the function f(⃗s) and attempt to approximate it. Because we cannot evaluate it analytically, we compute its values on the grid points by Monte Carlo integration with 10^5 sample paths and use these values in the numerical experiment. For this calculation, we used the TF Quant Finance library [84].
We set the upper bounds U_i and lower bounds L_i in the ⃗s space as follows. U_i is set with ϵ = 0.01, following [85] on the appropriate grid setting for solving the BS PDE. On the other hand, we simply set L_i = 0.01K. In the numerical experiment described in Sec. III, we set r_RF = 0, σ_1 = ... = σ_5 = 0.2, K = 100, and T = 1. The 10^4 random sample points in the ⃗s space used in the numerical experiment were sampled from the distribution of ⃗S(t) at t = 1 with the initial value ⃗S(0) set to (K, ..., K). Therefore, we can regard the worst-of put option under consideration as an option one year after its start, at which point it had a 2-year maturity and was at-the-money.

E. Quantum circuit to generate the tensor network state

The quantum circuit that generates an MPS-encoded quantum state is shown in Fig. 2. First, we prepare d m_deg qubits initialized to |0⟩, where we assume that D = 2^{m_deg} holds with some m_deg ∈ ℕ. Labeling them with the integers 1, ..., d m_deg, for each i ∈ [d], we denote the system of the ((i−1)m_deg + 1)-th to i m_deg-th qubits by S_deg,i. In addition, assuming that r = 2^{m_BD} also holds with some m_BD ∈ ℕ, for each i ∈ [d−2], we denote the system consisting of S_deg,i and the first m_BD qubits in S_deg,i+1 by S′_deg,i, and the system of the last 2 m_deg qubits by S′_deg,d−1. Then, we apply the quantum gates V_1, ..., V_{d−1} on S′_deg,1, ..., S′_deg,d−1, respectively, in this order.
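The qubit bookkeeping in this construction can be made explicit with a small sketch (ours, using the 1-indexed qubit labels of the text; the function name is hypothetical):

```python
def subsystems(d, m_deg, m_bd):
    """Return the qubit labels of S_deg,i (i = 1..d) and S'_deg,i (i = 1..d-1)
    for d registers of m_deg qubits each and bond-dimension width m_bd."""
    s_deg = {i: list(range((i - 1) * m_deg + 1, i * m_deg + 1))
             for i in range(1, d + 1)}
    s_prime = {}
    for i in range(1, d - 1):
        # S'_deg,i: S_deg,i plus the first m_bd qubits of S_deg,i+1
        s_prime[i] = s_deg[i] + s_deg[i + 1][:m_bd]
    # S'_deg,d-1: the last 2*m_deg qubits
    s_prime[d - 1] = s_deg[d - 1] + s_deg[d]
    return s_deg, s_prime
```

The gates V_1, ..., V_{d−1} then act on s_prime[1], ..., s_prime[d−1] in this order.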

FIG. 4: Five-asset worst-of put option prices f(⃗s) for s_1 = ... = s_5 = s, calculated in three different ways. The horizontal and vertical axes correspond to s and f(⃗s), respectively. The +, ×, and black line indicate the results obtained from the MPS-based approximation, the cosine expansion approximation, and Monte Carlo integration, respectively.

FIG. 5: Diagram of the quantum circuit V′_MPS used in the additional numerical experiment instead of V_MPS, for m_deg = 4 and m_bl = 3.
Fitted parameters: on the diagonal line, a = e^{7.27} and b = −0.87; on the random sample points, a = e^{8.75} and b = −1.07.

FIG. 6: Maximum difference ∆_max of the various approximations for the five-asset worst-of put option price from the cosine expansion approximation of degree D = 16. The circles indicate the approximations based on the circuit V′_MPS in Fig. 5; from left to right, the points correspond to m_bl = 2, ..., 6, respectively. The dotted lines represent the function ∆_max(x) = a x^b of the degrees of freedom x, with a and b fitted to the data for m_bl ∈ [4, 6] in (a) and m_bl ∈ [2, 6] in (b). The square indicates the maximum difference of the MPS-based approximation, and the triangles indicate those of the cosine expansion approximations of degree D = 3, ..., 10, corresponding to the points from left to right, respectively. The vertical and horizontal axes correspond to the maximum difference and the DOF of the approximations, respectively.
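Power-law fits of the form ∆_max(x) = a x^b, as used in Fig. 6, can be obtained by linear regression in log-log space. The following is a generic sketch (ours, demonstrated on synthetic data, not the experiment's):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by least squares on log y = log a + b * log x."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b
```

On exact power-law data the quoted exponent and prefactor are recovered up to floating-point error.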
FIG. 2: Diagram of the quantum circuit V_MPS that generates the MPS-encoded state (21) in the case of d = 4. All the d m_deg qubits are initialized to |0⟩. At the left end, we indicate the subsystems consisting of portions of the qubits: S_deg,i has m_deg qubits, the ((i−1)m_deg + 1)-th to i m_deg-th ones, and S′_deg,i has m_deg + m_BD qubits, the ((i−1)m_deg + 1)-th to (i m_deg + m_BD)-th ones, except for S′_deg,d−1, which has the last 2 m_deg qubits.

The resultant circuit V_App is shown in Fig. 3.

5: Estimate ⟨Φ_{i−1}| σ_⃗α |Ψ_{i+1}⟩ using the Hadamard test, and divide it by M to obtain F_⃗α.

TABLE I: Maximum differences among the values of the five-asset worst-of put option prices f(⃗s) calculated by Monte Carlo integration (MC), the MPS-based approximation (TN), and the cosine expansion using the full coefficients (COS), on the random sample points.