Abstract
The process of obtaining solutions to optimal control problems via mesh-based techniques suffers from the well-known curse of dimensionality. This issue is especially severe in quantum systems, whose dimension grows exponentially with the number of interacting elements (qubits) that they contain. In this article we develop a min-plus curse-of-dimensionality-free framework suited to a new class of problems that arise in the control of certain quantum systems. This method yields a much more manageable complexity growth that is related to the cardinality of the control set. The growth is attenuated through a \(\max \)-plus projection at each propagation step. The method’s efficacy is demonstrated by obtaining an approximate solution to a previously intractable problem on a two-qubit system.
Notes
The implementation of the min-plus algorithm for the results in this section may be found at the following URL: https://github.com/srsridharan/maxplusCODfree.
References
Dirr, G., Helmke, U.: Lie theory for quantum control. GAMM-Mitteilungen 31(1), 59–93 (2008)
Khaneja, N., Reiss, T., Luy, B., Glaser, S.J.: Optimal control of spin dynamics in the presence of relaxation. J. Magn. Reson. 162(9), 311–319 (2003)
Nielsen, M.A., Dowling, M.R., Gu, M., Doherty, A.C.: Optimal control, geometry, and quantum computing. Phys. Rev. A 73(6), 062323 (2006)
Schulte-Herbrüggen, T., Spörl, A., Khaneja, N., Glaser, S.J.: Optimal control-based efficient synthesis of building blocks of quantum algorithms: a perspective from network complexity towards time complexity. Phys. Rev. A 72(4), 042331 (2005)
Belavkin, V.P.: Measurement, filtering and control in quantum open dynamical systems. Rep. Math. Phys. 43(3), 405–425 (1999)
Belavkin, V.P., Negretti, A., Mølmer, K.: Dynamical programming of continuously observed quantum systems. Phys. Rev. A 79(2), 022123 (2009)
Gough, J., Belavkin, V.P., Smolyanov, O.G.: Hamilton–Jacobi–Bellman equations for quantum optimal feedback control. J. Opt. B 7, S237–S244 (2005)
Sridharan, S., James, M.R.: Minimum time control of spin systems via dynamic programming. In: Proceedings of the IEEE Conference on Decision and Control. IEEE, December 2008
Sridharan, S., Gu, M., James, M.R.: Gate complexity using dynamic programming. Phys. Rev. A 78(5), 052327 (2008)
Sridharan, S., James, M.R.: Dynamic programming and viscosity solutions for the optimal control of quantum spin systems. Syst. Control Lett. 60(9), 726–733 (2011)
Kushner, H.J., Dupuis, P.: Numerical Methods for Stochastic Control Problems in Continuous Time. Springer, New York (2001)
Bardi, M., Dolcetta, I.C.: Optimal Control and Viscosity Solutions of Hamilton–Jacobi–Bellman Equations. Birkhäuser, Boston (1997)
Crandall, M.G., Evans, L.C., Lions, P.L.: Some properties of viscosity solutions of Hamilton–Jacobi equations. Trans. AMS 282, 487–502 (1984)
McEneaney, W.M., Deshpande, A., Gaubert, S.: Curse-of-complexity attenuation in the curse-of-dimensionality-free method for HJB PDEs. In: Proceedings of the American Control Conference, pp. 4684–4690 (2008)
McEneaney, W.M.: A curse-of-dimensionality-free numerical method for solution of certain HJB PDEs. SIAM J. Control Optim. 46(4), 1239–1276 (2008)
McEneaney, W.M.: Max-Plus Methods for Nonlinear Control and Estimation. Birkhäuser, Boston (2006)
Nijmeijer, H., Van der Schaft, A.: Nonlinear Dynamical Control Systems. Springer, New York (1990)
Fleming, W.H., Soner, H.M.: Controlled Markov Processes and Viscosity Solutions. Springer, Berlin (2006)
Sontag, E.D.: Mathematical Control Theory: Deterministic Finite Dimensional Systems. Springer, New York (1998)
Ben-Tal, A., Nemirovski, A.S.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2001)
Löfberg, J.: YALMIP: a toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, Taipei, Taiwan (2004)
Pfeifer, W.: The Lie Algebras su\((N)\): An Introduction. Springer, Basel (2003)
Sepanski, M.R.: Compact Lie Groups. Springer, New York (2007)
Hall, B.C.: Lie Groups, Lie Algebras, and Representations: An Elementary Introduction. Springer, New York (2003)
Sorensen, O.W., Eich, G.W., Levitt, M.H., Bodenhausen, G., Ernst, R.R.: Product operator formalism for the description of NMR pulse experiments. Prog. Nucl. Magn. Reson. Spectrosc. 16(2), 163–192 (1983)
Khaneja, N., Brockett, R., Glaser, S.J.: Time optimal control in spin systems. Phys. Rev. A 63, 032308 (2001)
Brockett, R.: Nonlinear systems and differential geometry. Proc. IEEE 64(1), 61–72 (1976)
Acknowledgments
Research partially supported by AFOSR Grant FA9550-10-1-0233.
Appendices
Appendix 1: Proofs of Proposition 3.1 and Lemma 3.10
Proof
[Proposition 3.1] We prove two inequalities which, when combined, yield the result. Let \(T\ge T_0\). From the definition of \(C_{0}(\cdot )\) in Eqs. (2.5) and (3.5), we have that for all \({U_0}\in \mathbf {G}\) and all \(\delta >0\), there exists \(\hat{v}=\hat{v}(U_0,\delta ) \in \mathcal {V}^e_g\) such that
Combining this with the definition of \(C^{\epsilon }_0\), one sees that for all \(\epsilon >0\), \(C^{\epsilon }_0({U_0})< C_{0}({U_0}) + \delta \). Since this holds for all \(\delta >0\), we have that
To prove the reverse inequality to this, we proceed as follows. Let \(\epsilon \in (0,1]\). From the definition of \(C^{\epsilon }_0\), \(\forall {U_0}\in \mathbf {G}, \forall \delta >0, \exists \hat{v}=\hat{v}(U_0,\delta ) \in \mathcal {V}_{g}\), such that
From the assumption that \(\hat{\varphi }(\cdot ) \ge C_{0}(\cdot )\) we have that \(\forall \epsilon \in (0,1]\),
Using the dynamic programming equation for \(C_{0}\) on the right-hand side of the equation above, it follows that
From Eqs. (9.2) and (9.3) we see that \(\forall {U_0}\in \mathbf {G}, \forall \delta >0\), \(\forall \epsilon \in (0,1]\), one has \(C^{\epsilon }_0({U_0})+ \delta \ge C_{0}(U_{0})\). Since this holds for all \(\delta >0\) we have that
Combining (9.1) and (9.4), one has the desired result. \(\square \)
Proof
[Lemma 3.10: the dynamic programming equation for the relaxation of the optimal cost function]
Let \({U_0}\in \mathbf {G}\) and \(\tau \in [0, T-s]\). Consider any particular control \(\alpha (t), t \in [s, s+\tau ]\). By the definition of \(C^{\epsilon }_r\) in Eqs. (3.1) and (3.2), \(\forall {U_0}\in \mathbf {G}, \forall \delta >0, \exists \) control \(\beta (\cdot )\) such that
Constructing a composite control
we have from the dynamic programming principle that
Using Eq. (9.5) we obtain
Hence, taking the infimum over all possible controls \(\alpha (\cdot )\) and then letting \(\delta \downarrow 0\),
Now, to prove the reverse inequality: from the definition of \(C^{\epsilon }_s (\cdot )\) in Eqs. (3.1) and (3.2), it follows that \(\forall \delta >0\), \(\exists \bar{v}\) such that
The control \(\bar{v}\) is partitioned as
Hence from Eq. (9.8) and the definition of \(J^{\epsilon }_s(\cdot )\) we have
Now, the definition of \(C^{\epsilon }_s(\cdot )\) implies that
Therefore from Eqs. (9.9) and (9.10) we have
Taking the infimum over all \(\alpha (\cdot )\) and then letting \(\delta \downarrow 0\),
Thus from Eqs. (9.7) and (9.11) the result follows. \(\square \)
Appendix 2: Proof of Upper Bound on the Time to Reach the Identity
In this section we prove various results that lead up to the proof of Theorem 3.2. The intuition behind this consists of two main steps: (1) determining how to generate a desired control action via a sequence of propagation steps using the basis elements of the Lie algebra, and (2) obtaining the sequence of propagations from the available control set that would produce the desired propagation steps.
We also note that the following proof is for general \(\mathbf {G}\doteq SU(2^n)\) with \(n\in \mathbf{N}\), and in particular, \({\mathfrak {g}}\doteq \mathfrak {su}(2^n)\). We begin by introducing some notation. Let \(M={\tilde{N}}^2-1=2^{2n}-1\) (where \({\tilde{N}}\doteq 2^n\)) denote the dimension of \({\mathfrak {g}}\). Let \(\mathcal{M}=]1,M[\), where we remind the reader that \(]a,b[\) is used throughout to denote \(\{a,a+1,a+2,\ldots , b\}\) for integer pairs \(a<b\). Let \(\overline{\mathcal{H}}\doteq \{H_m\,\vert \, m\in \mathcal{M}\}\) be a set of orthonormal basis vectors for \({\mathfrak {g}}\). For some \(M_0\in ]1,M[\), let \(\mathcal{M}^0=]1,M_0[\) and
where \(\mathcal{H}^0\) is the set of basis directions for the available control. Let \(\mathcal{M}^1=]M_0+1,M[\) and
We suppose that for each \(k\in \mathcal{M}^1\), there exist \(m^1(k),m^2(k)\in \mathcal{M}^0\) and \(\alpha _k\in \mathbb {R}\) such that
As the \(\alpha _k\) only modify the result by a fixed, multiplicative constant, for simplicity of the already-technical proof, we suppose \(\alpha _k=1\) for all \(k\in \mathcal{M}^1\). Similarly, for simplicity, we are also assuming that the \(H_k\) are orthogonal.
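For concreteness, the assumed bracket relation \(H_k=\alpha _k[H_{m^1(k)},H_{m^2(k)}]\) can be checked in the smallest case \(n=1\), \({\mathfrak {g}}=\mathfrak {su}(2)\). The following sketch is illustrative only (it is not from the paper); it uses the orthonormal basis \(H_m = i\sigma _m/\sqrt{2}\) built from the Pauli matrices, for which the bracket of two basis elements returns the third up to a real constant \(\alpha _k\).

```python
import numpy as np

# Pauli matrices; H_m = i*sigma_m/sqrt(2) is an orthonormal basis of su(2)
# under <A,B> = sum_ij A_ij * conj(B_ij), so ||H_m|| = 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H1, H2, H3 = (1j * s / np.sqrt(2) for s in (sx, sy, sz))

def bracket(A, B):
    return A @ B - B @ A

# [sigma_x, sigma_y] = 2i*sigma_z gives [H1, H2] = -i*sigma_z,
# hence H3 = alpha * [H1, H2] with alpha = -1/sqrt(2).
alpha = -1 / np.sqrt(2)
print(np.allclose(H3, alpha * bracket(H1, H2)))   # True
print(np.isclose(np.linalg.norm(H1, 'fro'), 1.0))  # True: unit Frobenius norm
```

Note that with this normalization \(\alpha _k\ne 1\); as stated above, taking \(\alpha _k=1\) in the proof only changes fixed multiplicative constants.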
Let \({\overline{D}_\beta }<\infty \) denote the maximum magnitude of the control, and let \(\overline{B}_{\overline{D}_\beta }= \overline{B}_{\overline{D}_\beta }(0)\subset \mathbb {R}^{M_0}\) denote the closure of the ball of radius \({\overline{D}_\beta }\) centered at the origin. Let
The controlled dynamics are
We remind the reader that we use the norm on \(\mathbf {G}\) given by \( \Vert U\Vert =\Bigl [\sum _{i,j}|U_{i,j}|^2\Bigr ]^{1/2} = \Bigl \{\text{ tr }[UU^\dag ]\Bigr \}^{1/2} \).
Lemma 10.1
There exists \(\mathcal{D}_{\tilde{N}}<\infty \) such that \(\Vert \log (U)\Vert \le \mathcal{D}_{\tilde{N}}\Vert U-I\Vert \) for all \(U\in \mathbf {G}\).
Proof
This is a slight variant of a standard calculation. For \(U\in \mathbf {G}\) satisfying \(\Vert U-I\Vert \le {\frac{1}{2}}\), one has (cf. [24])
which implies
More generally, there exists \(\overline{\mathcal{D}}_{\tilde{N}}\) such that \(\Vert \log (U)\Vert \le \overline{\mathcal{D}}_{\tilde{N}}\) for all \(U\in \mathbf {G}\). Consequently, for \(U\in \mathbf {G}\) satisfying \(\Vert U-I\Vert \ge {\frac{1}{2}}\), one has
Combining (10.5) and (10.6), and taking \(\mathcal{D}_{\tilde{N}}=\max \{3/2,2 \overline{\mathcal{D}}_{\tilde{N}}\}\) yields the result. \(\square \)
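Lemma 10.1 is easy to probe numerically. The sketch below is our own illustration (sampling choices are ours, not the paper's): it draws elements of \(SU(2)\) near the identity and checks that the ratio \(\Vert \log (U)\Vert /\Vert U-I\Vert \) stays within the interval \([1,3/2]\) used for the case \(\Vert U-I\Vert \le 1/2\).

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
fro = lambda A: np.linalg.norm(A, 'fro')

ratios = []
for _ in range(200):
    a = 0.05 * rng.standard_normal(3)
    U = expm(1j * (a[0] * sx + a[1] * sy + a[2] * sz))  # element of SU(2)
    if 0 < fro(U - np.eye(2)) <= 0.5:                   # the near-identity case
        ratios.append(fro(logm(U)) / fro(U - np.eye(2)))

# For skew-Hermitian generators the ratio is >= 1 and, on this set, <= 3/2.
print(all(1.0 - 1e-9 <= r <= 1.5 for r in ratios))  # True
```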
The following is a standard calculation obtained using Taylor expansions; cf. [27].
Lemma 10.2
Suppose \(G,H\in \mathfrak {su}({\tilde{N}})\) with \({\tilde{N}}\in \mathbf{N}\) and \(\Vert G\Vert ,\Vert H\Vert \le D<\infty \). Suppose \(\delta \le 1\). Then, there exists \(c_0=c_0({\tilde{N}},D)\) such that
where \(\Vert F_0(G,H)\Vert \le c_0\).
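The display for Lemma 10.2 is not reproduced here, but the expansion underlying it is the standard group-commutator identity \(e^{\delta G}e^{\delta H}e^{-\delta G}e^{-\delta H} = e^{\delta ^2[G,H]}+O(\delta ^3)\), which is the mechanism by which bracket directions are generated from available control directions. A numerical sketch of the \(\delta ^3\) error scaling (illustrative only, in \(\mathfrak {su}(2)\)):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
G, H = 1j * sx, 1j * sy          # two elements of su(2)
comm = G @ H - H @ G             # their Lie bracket [G, H]

def group_comm_error(delta):
    lhs = expm(delta*G) @ expm(delta*H) @ expm(-delta*G) @ expm(-delta*H)
    return np.linalg.norm(lhs - expm(delta**2 * comm), 'fro')

# Third-order error: halving delta should shrink it by roughly 2^3 = 8.
e1, e2 = group_comm_error(0.05), group_comm_error(0.025)
print(6 < e1 / e2 < 10)  # True
```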
We are looking for an upper bound on the minimal time to travel from \(U^1\) to \(U^2\) in \(\mathbf {G}\) by application of a bounded control in directions spanned by \(\mathcal{H}^0\). By a rotation of coordinates, this problem is equivalent to that of finding an upper bound on the minimal time to travel from \(U^0\in \mathbf {G}\) to \(I\) (see Theorem 2.1). We will obtain an upper bound for a slightly more constrained problem, which will then provide an upper bound for this problem.
Specifically, we obtain the upper bound by considering the more restrictive case where, for all \(t\), \([v_t]_m=\alpha _t\hat{\delta }_{j_t,m}\) for some \(\alpha _t\in \{0,-{\overline{D}_\beta },{\overline{D}_\beta }\}\) and \(j_t\in \mathcal{M}_0\). Here \(\hat{\delta }\) denotes the Kronecker delta. We let
By the nature of \(\mathbf {G}\), there exists \(\overline{H}^0\in {\mathfrak {g}}\) such that
Let \(\overline{H}=\overline{H}^0/\Vert \overline{H}^0\Vert \). Let \({D_\beta }=\min \bigl \{{\overline{D}_\beta },{\overline{D}_\beta }^2\bigr \}\), so that \(\max \bigl \{{D_\beta },\sqrt{{D_\beta }}\bigr \}={\overline{D}_\beta }\), and let \(T~=~\Vert \overline{H}^0\Vert /{D_\beta }\). Then, \(U^0=e^{{D_\beta }\overline{H}T}\). As \({\overline{\mathcal{H}}}\) is an orthonormal basis for \({\mathfrak {g}}\), it follows that we can generate a linear combination satisfying
where \(\sum _{m\in \mathcal{M}}(\beta ^0_m)^2={D_\beta }^2\). We have
Remark 10.3
Note that we have chosen \({D_\beta }\) such that when we apply control for motion in the \(\mathcal{H}^1\) directions via bracket operations (such as indicated in Lemma 10.2), the actual applied control will not exceed \({\overline{D}_\beta }\) in magnitude.
We will choose \(\delta >0\) sufficiently small (the required size will be made clear below) so that certain estimates hold. If \(T\) is sufficiently small, we simply take \(\delta =T\). Otherwise, we take \(T=\hat{N}\delta \) where \(\hat{N}\) is sufficiently large that \(\delta \) satisfies the estimates. Then,
Let \(U^\kappa \doteq \left[ e^{\sum _{m\in \mathcal{M}}\beta ^0_m H_m \delta }\right] ^{\hat{N}-\kappa }\) for \(\kappa \in ]0,\hat{N}[\). We will drive the system from \(U_0=U^0\) to \(I\) by repeatedly driving the system from \(U^\kappa \) to \(U^{\kappa +1}\), that is, by repeatedly applying \(e^{-\sum _{m\in \mathcal{M}}\beta ^0_m H_m \delta }\). Our goal is to obtain transition matrix \(e^{-\sum _{m\in \mathcal{M}}\beta ^0_m H_m \delta }\) via some control in \(\mathcal{V}^e_{0,{\overline{D}_\beta }}\); this will lead to our upper bound on the travel time to the identity element. We begin with another Taylor series based calculation.
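The stepping scheme just described can be sketched in a few lines. With a fixed generator playing the role of \(\sum _{m}\beta ^0_m H_m\), applying the factor \(e^{-\sum _{m}\beta ^0_m H_m \delta }\) exactly \(\hat{N}\) times drives \(U^0\) to the identity when \(T=\hat{N}\delta \). The subsequent lemmas are about approximating this factor by admissible controls; the sketch below (ours, for illustration) assumes the exact factor is available.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = 1j * (0.4 * sx + 0.7 * sz)     # plays the role of sum_m beta^0_m H_m

T, N_hat = 1.0, 8
delta = T / N_hat
U = expm(A * T)                    # U^0 = e^{A T}
for _ in range(N_hat):             # each step drives U^kappa to U^{kappa+1}
    U = expm(-A * delta) @ U
print(np.allclose(U, np.eye(2)))   # True: U^{N_hat} = I
```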
Lemma 10.4
Suppose \(\widetilde{H}_1,\widetilde{H}_2\in {\mathfrak {g}}\) with \(\Vert \widetilde{H}_1\Vert ,\Vert \widetilde{H}_2\Vert =1\), \(\langle \widetilde{H}_1,\widetilde{H}_2\rangle =0\) (where we note that the appropriate inner product is given by \(\langle \widetilde{H}_1,\widetilde{H}_2\rangle \doteq \sum _{i,j}[\widetilde{H}_1]_{i,j}[\bar{\widetilde{H}}_2]_{i,j}\)). Suppose \(\tilde{\beta }_1^2+\tilde{\beta }_2^2\le {D_\beta }^2\). Then, there exists \(c_1=c_1({\tilde{N}},{D_\beta })<\infty \), \(\widehat{H}_1 \in {\mathfrak {g}}\) such that for any \(\delta >0\),
where \(\Vert \widehat{H}_1\Vert \le \delta ^2 c_1\).
Proof
We simply expand the exponentials in their Taylor series. First, note that
where \(\Vert \overline{F}_1\Vert \le \bar{c}_1({\tilde{N}},{D_\beta })\). Combining this with another expansion for \({e^{-\delta \tilde{\beta }_2\widetilde{H}_2}}\) yields
where \(\Vert F_1\Vert \le \tilde{c}_1({\tilde{N}},{D_\beta })<\infty \). Then, by Lemma 10.1,
Consequently, letting \(\widehat{H}_1\doteq \log (I+\delta ^2 F_1)\), one has the desired result. \(\square \)
We define the ordered product notation
Lemma 10.5
Suppose \(\sum _{m\in \mathcal{M}}(\beta ^0_m)^2 \le {D_\beta }^2\). There exists \(c_1=c_1({\tilde{N}},{D_\beta })>0\) such that for any \(\delta >0\),
where \(\Vert \widehat{H}_m\Vert \le \delta ^2 c_1\) for all \(m\in \mathcal{M}_0\), and \(c_{1}\) is as indicated in Lemma 10.4.
Proof
Suppose \(M_0>1\) (the case \(M_0 = 1\) has been dealt with in Lemma 10.4). Let
and \(\tilde{\beta }_2=\bigl [\sum _{m=2}^M(\beta ^0_m)^2\bigr ]^{\frac{1}{2}}\). Then,
with \(\Vert \widetilde{H}_2\Vert =1\). We have
which, by Lemma 10.4 yields
with \(\Vert \widehat{H}_1\Vert \le \delta ^2 c_1\).
Suppose that for some \({\widehat{M}}\in ]1,M_0-1[\),
where \(\Vert \widehat{H}_m\Vert \le \delta ^2 c_1\) for all \(m\in ]1,{\widehat{M}}[\), and note that by Eq. (10.13) this is true for \({\widehat{M}}=1\). Now let \(\widetilde{H}_1=H_{{\widehat{M}}+1}\) and \(\tilde{\beta }_1=\beta ^0_{{\widehat{M}}+1}\). Let
and \(\tilde{\beta }_2=\bigl [\sum _{m={\widehat{M}}+2}^M(\beta ^0_m)^2\bigr ]^{\frac{1}{2}}\), where we are ignoring the trivial case \(M_0=M\), \({\widehat{M}}=M_0-1\). Then,
which, by Lemma 10.4, gives
where \(\Vert \widehat{H}_{{\widehat{M}}+1}\Vert \le \delta ^2 c_1\). Rearranging, we have
Substituting (10.19) into (10.14), and rearranging, yields
with \(\Vert \widehat{H}_m\Vert \le \delta ^2 c_1\) for all \(m\in ]1,{\widehat{M}}+1[\). By induction, one has the desired result. \(\square \)
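Lemma 10.5 is a first-order splitting: the ordered product of single-direction exponentials reproduces \(e^{\delta \sum _m \beta ^0_m H_m}\) up to corrections of order \(\delta ^2\). A numerical check of this \(O(\delta ^2)\) scaling (our own illustration, in \(\mathfrak {su}(2)\) with unit-norm basis elements):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [1j * s / np.sqrt(2) for s in (sx, sy, sz)]   # orthonormal H_m
beta = [0.3, -0.5, 0.2]                               # beta^0_m

def split_error(delta):
    exact = expm(delta * sum(b * Hm for b, Hm in zip(beta, basis)))
    prod = np.eye(2, dtype=complex)
    for b, Hm in zip(beta, basis):                    # ordered product
        prod = prod @ expm(delta * b * Hm)
    return np.linalg.norm(exact - prod, 'fro')

# Second-order error: halving delta should shrink it by roughly 4.
e1, e2 = split_error(0.1), split_error(0.05)
print(3 < e1 / e2 < 5)  # True
```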
We have now dealt with all the \(m\in \mathcal{M}_0\), and must consider the slightly more difficult case of \(m\in \mathcal{M}_1\). For \(m\in \mathcal{M}_1\), we define
where \(m^1\) and \(m^2\) are given in (10.2). We recall that for all \(m\), \(|\beta ^0_m|\le {D_\beta }\), where we chose \({D_\beta }\) such that \(\sqrt{{D_\beta }}\le {\overline{D}_\beta }\) where \({\overline{D}_\beta }\) is the maximum control magnitude.
Lemma 10.6
There exists \(c_2=c_2({\tilde{N}},{D_\beta })<\infty \) such that for any \(\delta \in (0,1]\) and any \(m\in \mathcal{M}_1\), there exists \(H^\prime _m\) such that
where \(\Vert H^\prime _m\Vert \le \delta ^{3/2} c_2\).
Proof
Let \(m\in \mathcal{M}_1\). Then, using Lemma 10.2 in the case
we obtain
for \(\delta \in (0,1]\) where \(\Vert \bar{F}_0(H_{m^1(m)},H_{m^2(m)},\beta ^0_{m},{\tilde{N}})\Vert \le c_0({\tilde{N}},{D_\beta })\), which by the definition of \(m^1,m^2\), yields
Combining this with the Taylor expansion for \(e^{\delta {\beta ^0_m}H_m}\), one has
where \(\Vert F_2(H_{m^1(m)},H_{m^2(m)},\beta ^0_{m},{\tilde{N}})\Vert \le c'_2=c'_2({\tilde{N}},{D_\beta })\). Then, by Lemma 10.1,
Letting \(H^\prime _m\doteq \log (I+\delta ^{3/2} F_2)\), we have
with \(\Vert H^\prime _m\Vert \le \delta ^{3/2} c_2\). \(\square \)
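Lemma 10.6 says that the missing direction \(H_m\), \(m\in \mathcal{M}_1\), is realized up to \(O(\delta ^{3/2})\) by a four-factor group commutator with applied amplitudes \(\sqrt{\delta \beta ^0_m}\). A numerical sketch of this \(\delta ^{3/2}\) scaling (ours; here \(\alpha _m=1\) is imposed by defining \(H_m\) as the exact bracket):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
H1 = 1j * sx / np.sqrt(2)          # H_{m^1(m)}, a controlled direction
H2 = 1j * sy / np.sqrt(2)          # H_{m^2(m)}, a controlled direction
Hm = H1 @ H2 - H2 @ H1             # H_m = [H_{m^1}, H_{m^2}] (alpha_m = 1)
beta = 0.6                         # beta^0_m, with 0 < beta <= D_beta

def phi_error(delta):
    s = np.sqrt(delta * beta)      # applied amplitude; cf. Remark 10.3
    phi = expm(s*H1) @ expm(s*H2) @ expm(-s*H1) @ expm(-s*H2)
    return np.linalg.norm(phi - expm(delta * beta * Hm), 'fro')

# Error of order delta^(3/2): quartering delta shrinks it by roughly 8.
e1, e2 = phi_error(0.04), phi_error(0.01)
print(6 < e1 / e2 < 10)  # True
```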
The proof of the following lemma is similar to the calculations above, and we omit it. We note only that \(F_3\in {\mathfrak {g}}\) below follows immediately from the fact that the left-hand side of (10.23) lies in \(\mathbf {G}\).
Lemma 10.7
Let \(\delta \le 1\), \(D^1<\infty \) and \(H,G\in {\mathfrak {g}}\) with \(\Vert H\Vert ,\Vert G\Vert \le D^1\). Then, there exists \(c_3=c_3({\tilde{N}},D^1)\) such that for any \(k\in \mathbf{N}\) and \(m,n\in [0,\infty )\),
where \(\Vert F_3\Vert \le \delta ^{2m} c_3\) and \(F_{3} \in {\mathfrak {g}}\).
Theorem 10.8
There exist \(c_1({\tilde{N}},{D_\beta }),c_2({\tilde{N}},{D_\beta }),\bar{c}_3({\tilde{N}},{D_\beta })<\infty \) such that for all \(\delta \in (0,1]\),
where \(H^\prime _m, H^{\prime \prime }_m, \widehat{H}_m \in {\mathfrak {g}}\), \(\Vert H^\prime _m\Vert \le \delta ^{3/2} c_2\) for all \(m\in \mathcal{M}_1\), \(\Vert H^{\prime \prime }_m\Vert \le \delta ^2 \bar{c}_3\) for all \(m\in \mathcal{M}_1\), and \(\Vert \widehat{H}_m\Vert \le \delta ^2 c_1\) for all \(m\in \mathcal{M}\).
Proof
Suppose there exists
such that
where the \(\widehat{H}_m\), \(H^\prime _m\), \(H^{\prime \prime }_m\) satisfy the conditions in the theorem statement. Note that by Lemma 10.5, this is true for \(M^\prime =M_0\). If, by an induction argument, we show that (10.25) holds for \(M^\prime +1\), we will be done up to \(M^\prime +1=M-1\).
Let \(\tilde{\beta }_1=\beta ^0_{M^\prime +1}\), \(\widetilde{H}_1=H_{M^\prime +1}\),
and \(\tilde{\beta }_2=\left[ \sum _{m=M^\prime +2}^M ({\beta ^0_m})^2\right] ^{\frac{1}{2}}\). Then,
which, by Lemma 10.4, yields
where \(\Vert \widehat{H}_{M^\prime +1}\Vert \le \delta ^2 c_1=\delta ^2 c_1({\tilde{N}},{D_\beta })\). Also, assuming that \(\delta \le 1\), one has from Lemma 10.6 that
where \(\Vert H^\prime _{M^\prime +2}\Vert \le \delta ^{3/2} c_2\). Now, using (10.26) to substitute on the right-hand side in Eq. (10.25) yields
and substituting for the first term on the left-hand side of this using (10.27) yields
Finally, using Lemma 10.7 to reorder the first two terms on the right-hand side of Eq. (10.29) yields
where \(\Vert H^{\prime \prime }_{M^\prime +1}\Vert \le \delta ^2 \bar{c}_3({\tilde{N}},{D_\beta })<\infty \). By induction, we obtain
where the \(\widehat{H}_m\), \(H^\prime _m\), \(H^{\prime \prime }_m\) satisfy the conditions in the theorem statement.
It only remains to work on the term \(e^{\delta \beta ^0_M H_M}\). Note that Eq. (10.30) is
which, by Lemma 10.6, yields
Theorem 10.9
There exists \(c_4({\tilde{N}},{D_\beta })<\infty \) and \(\check{H}^0 \in {\mathfrak {g}}\) such that for any \(\delta \in (0,1]\),
where \(\Vert \check{H}^0\Vert \le \delta ^{3/2} c_4\).
Proof
We need only make a substitution on the right-hand side of (10.24). By Lemma 10.1,
Let
With this notation, Eq. (10.31) becomes
which is
For \(m\in \mathcal{M}_0\), using Theorem 10.8,
Also, for all \(m\in \mathcal{M}\), using Theorem 10.8 again,
and
for proper choice of \(c_5({\tilde{N}},{D_\beta })<\infty \). Combining Eqs. (10.32)–(10.35) yields
for a proper choice of \(c_6=c_6({\tilde{N}},{D_\beta })<\infty \).
Now, for \(m\in ]M_0+1,M-1[\), using Theorem 10.8,
which implies that for \(m\in ]M_0+1,M[\),
Applying Eq. (10.39) in Eq. (10.37) yields
which by Eqs. (10.35) and (10.36), yields
Now, as above,
which, using (10.35) again,
which, in a similar fashion to (10.35), (10.36),
where \(c_8=c_8({\tilde{N}},{D_\beta })<\infty \). Substituting (10.42) into (10.41) yields
where \(c_4=c_4({\tilde{N}},{D_\beta })<\infty \). Letting \(\check{H}^0=\log \left[ \prod _{m=1}^{M} \Delta _m\right] \) yields the result. \(\square \)
We now describe a result regarding the generation of a control signal via a series of propagation steps. For \(m\in \mathcal{M}\), let
We have
where \(\Vert \check{H}^0\Vert \le \delta ^{3/2} c_4\). Let
One has \(\Vert \check{H}^0\Vert \le \frac{\delta }{2}{D_\beta }\). Consequently, \(\check{H}^0 \) may be written as a linear combination of the basis vectors of the algebra \({\mathfrak {g}}\) as follows
where \(\sum _{m\in \mathcal{M}}({\beta ^1_m})^2 \le 1\). Using this in (10.43), we have
For \(m\in \mathcal{M}\), let
where \(\phi ^\cdot _m(\cdot )\) is given by (10.21). By Theorem 10.9 with \(\delta /2\) replacing \(\delta \) and \({\beta ^1_m}\) replacing \({\beta ^0_m}\), we have
or
where \(\Vert \check{H}^1\Vert \le c_4\left( \frac{\delta }{2}\right) ^{3/2}\). From inequality (10.44), we have
Substituting (10.46) into (10.45) yields
where, from above, \(\Vert \check{H}^1\Vert \) is bounded by the right-hand side of (10.47). We see that by repeating this process, one obtains the following.
Theorem 10.10
Let \(\delta _5=\min \{1,{D_\beta }/(2c_4)\}\) (where \(c_4\) was indicated in Theorem 10.9). Then,
for all \(\delta \in (0,\delta _5]\) and all \(\{{\beta ^0_m}\}_{m\in \mathcal{M}}\) such that \(\sum _{m\in \mathcal{M}}({\beta ^0_m})^2\le 1\), where the \({\tilde{\phi }}^{\delta /2^k}_m\) are constructed as above.
The above result indicates that one may achieve control \(e^{-\delta \sum _{m\in \mathcal{M}}{\beta ^0_m}H_m}\) by applying the sequence of sequences of propagation mappings
The associated controls drive the system from \(U^\kappa \) to \(U^{\kappa +1}\), where \(U^\kappa \doteq \left[ {e^{\delta \sum _{m\in \mathcal{M}}\beta ^0_m H_m}}\right] ^{\hat{N}-\kappa }\) for any \(\kappa \in ]0,\hat{N}[\). Recall that the goal is to drive the system from \(U^0\) to \(U^{\hat{N}}=I\).
We now consider the time required to drive the system in this manner from \(U^0\) to \(I\). Note that \(U^0=e^{\sum _{m\in \mathcal{M}}{\beta ^0_m}H_m T}\), and that we can expect the required time to be greater than \(T\) when \(M_0<M\). There are two cases to be considered: \(T<\delta _5\) and \(T\ge \delta _5\). First, we compute the time required for the control sequence corresponding to (10.48), (10.49). For \(m\in \mathcal{M}_0\), the time required for transition \({\tilde{\phi }}^{\delta /2^k}_m\) is \(\delta /2^k\). (Note that the actual control applied during this interval is \({\beta ^0_m}H_m\).) For \(m\in \mathcal{M}_1\), the time required for transition \({\tilde{\phi }}^{\delta /2^k}_m\) is \(4\sqrt{\delta /2^k}\). Therefore, the total time required for (10.48), (10.49) is
Consequently, in the case \(T<\delta _5\), the minimal required time to drive the system from \(U^0\) to \(I\) is bounded above by
In the case where \(T\ge \delta _5\), we let \(\bar{N}=\lceil T/\delta _5\rceil \) and \(\delta =T/\bar{N}\). That is, we will have \(\bar{N}\) steps of (10.48), (10.49), where the \(\kappa \)th step drives the system from \(U^\kappa \) to \(U^{\kappa +1}\). The total required time bound is
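The geometric series behind these time bounds is elementary but worth making explicit. Under the per-transition costs stated above (\(\delta /2^k\) for each \(m\in \mathcal{M}_0\) and \(4\sqrt{\delta /2^k}\) for each \(m\in \mathcal{M}_1\)), and assuming the refinement over \(k\) continues indefinitely, the total time per sweep sums in closed form. The sketch below is our own arithmetic with illustrative parameter values:

```python
import math

def total_time(delta, M0, M, K):
    # Sum over refinement levels k = 0..K: each level applies all M0
    # directly controlled directions (time delta/2^k each) and all
    # M - M0 bracket-generated directions (time 4*sqrt(delta/2^k) each).
    return sum(M0 * delta / 2**k + (M - M0) * 4 * math.sqrt(delta / 2**k)
               for k in range(K + 1))

def total_time_limit(delta, M0, M):
    # Geometric series: sum_k delta/2^k = 2*delta, and
    # sum_k sqrt(delta/2^k) = sqrt(delta) / (1 - 1/sqrt(2)).
    return 2 * M0 * delta + 4 * (M - M0) * math.sqrt(delta) / (1 - 1 / math.sqrt(2))

delta, M0, M = 0.25, 3, 15   # illustrative values only
print(abs(total_time(delta, M0, M, 80) - total_time_limit(delta, M0, M)) < 1e-6)
```

The \(\sqrt{\delta }\) term dominates for small \(\delta \), which is why the required time exceeds \(T\) whenever \(M_0<M\).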
Summarizing, we have now obtained Theorem 3.2.
Lastly, note that from Lemma 10.1, one has \(\Vert \log U^0\Vert \le \frac{3}{2}\Vert U^0-I\Vert \). Using this, one may obtain Corollary 3.3.
Cite this article
Sridharan, S., McEneaney, W.M., Gu, M. et al. A Reduced Complexity Min-Plus Solution Method to the Optimal Control of Closed Quantum Systems. Appl Math Optim 70, 469–510 (2014). https://doi.org/10.1007/s00245-014-9247-3