
Robust Route Realization by Linear-Quadratic Tracking

Journal of Optimization Theory and Applications

Abstract

The problem of realizing a prescribed discrete route by a controlled linear system in the presence of an unknown bounded disturbance is considered. The problem is solved based on an auxiliary cheap-control linear-quadratic differential game. Novel solvability conditions are established. A numerical example is presented.


References

  1. Turetsky, V., Glizer, V., Shinar, J.: Robust trajectory tracking: differential game/cheap control approach. Int. J. Syst. Sci. 45(11), 2260–2274 (2014)


  2. Turetskij, V.Y.: Moving a linear system to a neighborhood of the origin. Autom. Remote Control 50(1), 75–81 (1989)


  3. Tretyakov, V.E., Turetsky, V.Y.: Using an auxiliary differential game to synthesize a pursuit strategy. J. Comput. Syst. Sci. Int. 33, 140–145 (1995)


  4. Shinar, J., Turetsky, V., Glizer, V., Ianovsky, E.: Solvability of linear-quadratic differential games associated with pursuit-evasion problems. Int. Game Theory Rev. 10, 481–515 (2008)


  5. Bernhard, P.: Linear-quadratic, two-person, zero-sum differential games: necessary and sufficient conditions. J. Optim. Theory Appl. 27, 51–69 (1979)


  6. Al-Hasan, S., Vachtsevanos, G.: Intelligent route planning for fast autonomous vehicles operating in a large natural terrain. Rob. Auton. Syst. 40(1), 1–24 (2002)


  7. Van Vuren, T., Van Vliet, D.: Route Choice and Signal Control: The Potential for Integrated Route Guidance. Ashgate Publishing Company, Burlington (1992)


  8. Michelin, A., Idan, M., Speyer, J.L.: Merging of air traffic flows. AIAA J. Guid. Control Dyn. 34(1), 13–28 (2011)


  9. Krasovskii, N., Subbotin, A.: Game-Theoretical Control Problems. Springer, New York (1988)


  10. Bellman, R.: Introduction to Matrix Analysis. McGraw-Hill Book Co., Inc., New York (1960)



Author information

Corresponding author

Correspondence to Vladimir Turetsky.

Additional information

Communicated by Meir Pachter.

Appendices

Appendix 1

Proof of Theorem 3.1

By Theorem 1 of [4], it suffices to prove that, for a fixed \(\beta >0\) and a sufficiently small \(\alpha >0\), the inequality (22) is valid.

First, let us establish the auxiliary propositions.

Proposition 6.1

The matrices \({\mathcal F}_u(t)\) and \({\mathcal F}_v(t)\), \(t\in [t_0, t_f]\), are symmetric and positive semi-definite.

Proof

Due to (10)–(13),

$$\begin{aligned} F_u^{(j,i)}(t)=\left( F_u^{(i,j)}(t)\right) ^T,\quad F_v^{(j,i)}(t)=\left( F_v^{(i,j)}(t)\right) ^T. \end{aligned}$$
(45)

Thus, by (9), the matrices \({\mathcal F}_u(t)\) and \({\mathcal F}_v(t)\), \(t\in [t_0, t_f]\), are symmetric.

Let \(t\in [t_0,t_f]\) be fixed and \(m_t^{(i)}\in \mathbb {R}^n\), \(i=q(t),\ldots ,K\), be arbitrary n-vectors, constituting an arbitrary vector \(m_t\in \mathbb {R}^{p(t)}\):

$$\begin{aligned} m_t^T=\left[ {m_t^{(q(t))}}^T \ldots {m_t^{(K)}}^T\right] . \end{aligned}$$
(46)

By direct calculation,

$$\begin{aligned} m_t^T{\mathcal F}_u(t) m_t \nonumber= & {} \displaystyle \int \limits _t^{t_{q(t)}}\left| B^T(\tau )X^T(t_{q(t)},\tau )g_t^{(q(t))}\right| ^2\mathrm{{d}}\tau \nonumber \\&\quad +\sum \limits _{i=q(t)}^{K-1}\displaystyle \int \limits _{t_i}^{t_{i+1}}\left| B^T(\tau )X^T(t_{i+1},\tau )g_t^{(i+1)}\right| ^2\mathrm{{d}}\tau , \end{aligned}$$
(47)
$$\begin{aligned} m_t^T{\mathcal F}_v(t) m_t= & {} \displaystyle \int \limits _t^{t_{q(t)}}\left| C^T(\tau )X^T(t_{q(t)},\tau )g_t^{(q(t))}\right| ^2\mathrm{{d}}\tau \nonumber \\&\quad +\sum \limits _{i=q(t)}^{K-1}\displaystyle \int \limits _{t_i}^{t_{i+1}}\left| C^T(\tau )X^T(t_{i+1},\tau )g_t^{(i+1)}\right| ^2\mathrm{{d}}\tau , \end{aligned}$$
(48)

where

$$\begin{aligned} g_t^{(i)}:= \sum \limits _{j=i}^K X^T(t_j,t_i)D_j^T m_t^{(j)},\quad i=q(t),\ldots , K. \end{aligned}$$
(49)

Thus, the matrices \({\mathcal F}_u(t)\) and \({\mathcal F}_v(t)\), \(t\in [t_0, t_f]\), are positive semi-definite. \(\square \)
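
As a numerical illustration of the argument above, the following minimal sketch (a toy double-integrator system with assumed data, not taken from the paper) verifies that a matrix built as an integral of the form appearing in (47) and (48), namely a Gramian, is symmetric and positive semi-definite:

    import numpy as np
    from scipy.linalg import expm

    # Toy illustration (assumed data): matrices of the Gramian type
    #   W = int_a^b X(b,tau) B B^T X^T(b,tau) d tau
    # are symmetric and positive semi-definite, since for any vector m,
    #   m^T W m = int_a^b |B^T(tau) X^T(b,tau) m|^2 d tau >= 0,
    # which is exactly the structure of the quadratic forms (47) and (48).
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])          # double integrator
    B = np.array([[0.0],
                  [1.0]])

    def gramian(a, b, steps=2000):
        """Midpoint-rule approximation of the Gramian on [a, b]."""
        h = (b - a) / steps
        taus = np.linspace(a, b, steps, endpoint=False) + h / 2
        W = np.zeros((2, 2))
        for tau in taus:
            XB = expm(A * (b - tau)) @ B  # X(b, tau) B(tau)
            W += XB @ XB.T * h
        return W

    W = gramian(0.0, 1.0)
    print("symmetric:", np.allclose(W, W.T))
    print("eigenvalues:", np.linalg.eigvalsh(W))  # both non-negative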

Proposition 6.2

If Condition A holds, then

$$\begin{aligned} \mathrm{Ker}{\mathcal F}_u(t)= & {} \left\{ m_t\in \mathbb {R}^{p(t)}:\ m_t^T=\left[ {m_t^{(q(t))}}^T \ldots {m_t^{(K)}}^T\right] ,\right. \nonumber \\&\quad \left. \ m_t^{(i)}\in \mathbb {R}^n,\ D_im_t^{(i)}=0,\quad i=q(t),\ldots , K\right\} , \end{aligned}$$
(50)

while

$$\begin{aligned} \mathrm{Ker}{\mathcal F}_v(t)\supseteq \mathrm{Ker}{\mathcal F}_u(t). \end{aligned}$$
(51)

Proof

Assume that \(m_t\in \mathrm{Ker}{\mathcal F}_u(t)\), i.e., \({\mathcal F}_u(t)m_t=0\), which yields

$$\begin{aligned} {m}_t^T{\mathcal F}_u(t)m_t=0. \end{aligned}$$
(52)

Note that Condition A guarantees the positive definiteness of the matrix

$$\begin{aligned} W(t):= \displaystyle \int \limits _t^{t_{q(t)}}X(t_{q(t)},\tau )B(\tau )B^T(\tau )X^T(t_{q(t)},\tau )\mathrm{{d}}\tau , \end{aligned}$$
(53)

and the matrices

$$\begin{aligned} W_i:= \displaystyle \int \limits _{t_i}^{t_{i+1}}X(t_{i+1},\tau )B(\tau )B^T(\tau )X^T(t_{i+1},\tau )\mathrm{{d}}\tau , \quad i=q(t),\ldots ,K-1. \end{aligned}$$
(54)

Therefore, due to the representation (47) and the positive definiteness of the matrices (53) and (54), Eq. (52) yields

$$\begin{aligned} g_t^{(i)}=0,\quad i=q(t),\ldots ,K, \end{aligned}$$
(55)

which, by virtue of (49), leads to

$$\begin{aligned} D_im_t^{(i)}=0,\quad i=q(t),\ldots ,K. \end{aligned}$$
(56)

This proves (50). The inclusion (51) is obtained by similar considerations, based on the representation (48). \(\square \)

Let us denote

$$\begin{aligned} \varphi _u(t):= & {} \min \limits _{m_t\in {\mathcal N}_t}\Big [m_t^T{\mathcal F}_u(t) m_t\Big ], \end{aligned}$$
(57)
$$\begin{aligned} \varphi _v(t):= & {} \max \limits _{m_t\in {\mathcal N}_t}\Big [m_t^T{\mathcal F}_v(t) m_t\Big ], \end{aligned}$$
(58)

where

$$\begin{aligned} {\mathcal N}_t:= & {} {\mathcal M}_t\setminus \mathrm{Ker} {\mathcal F}_v(t), \end{aligned}$$
(59)
$$\begin{aligned} {\mathcal M}_t:= & {} \Big \{m_t\in \mathbb {R}^{p(t)}:\ \ m_t^T m_t=1\Big \}. \end{aligned}$$
(60)

In other words, \(\varphi _u(t)\) is the minimal positive eigenvalue of the matrix \({\mathcal F}_u(t)\), while \(\varphi _v(t)\) is the maximal positive eigenvalue of the matrix \({\mathcal F}_v(t)\).
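
Numerically, these two quantities are easy to extract from symmetric positive semi-definite matrices; the sketch below (with an assumed tolerance and toy matrices, not related to the paper's example) shows one way to do it:

    import numpy as np

    def min_positive_eig(F, tol=1e-10):
        """Minimal positive eigenvalue of a symmetric PSD matrix
        (the role of phi_u(t), applied to F_u(t))."""
        eigs = np.linalg.eigvalsh(F)      # ascending, real for symmetric F
        return eigs[eigs > tol][0]        # skip (numerically) zero modes

    def max_eig(F):
        """Maximal eigenvalue of a symmetric PSD matrix
        (the role of phi_v(t), applied to F_v(t))."""
        return np.linalg.eigvalsh(F)[-1]

    # Toy PSD matrices with nontrivial kernels (assumed data),
    # chosen so that Ker F_v contains Ker F_u, in line with (51).
    Gu = np.array([[1.0, 0.0, 0.0],
                   [2.0, 1.0, 0.0]])
    Gv = np.array([[0.5, 0.0, 0.0]])
    F_u = Gu.T @ Gu                       # Ker F_u = span{e3}
    F_v = Gv.T @ Gv                       # Ker F_v = span{e2, e3}
    print(min_positive_eig(F_u), max_eig(F_v))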

Proposition 6.3

The function \(\varphi _v(t)\) is continuous and decreases monotonically for \(t\in [t_0, t_f]\). Moreover,

$$\begin{aligned} \lim \limits _{t\to t_f-0}\varphi _v(t)=0. \end{aligned}$$
(61)

Proof

Due to the representation (48) and the definition (58), the function \(\varphi _v(t)\) is continuous for all \(t\in [t_0, t_f]\), \(t\ne t_i\), \(i=1,\ldots ,K\). Moreover,

$$\begin{aligned} \lim \limits _{t\to t_i-0}\varphi _v(t)= & {} \lim \limits _{t\to t_i+0}\varphi _v(t)\nonumber \\= & {} \max \limits _{m_t\in {\mathcal N}_t} \left[ \sum \limits _{i=q(t)}^{K-1}\displaystyle \int \limits _{t_i}^{t_{i+1}}\left| C^T(\tau )X^T(t_{i+1},\tau )g_t^{(i+1)} \right| ^2\mathrm{{d}}\tau \right] . \end{aligned}$$
(62)

Thus, the function \(\varphi _v(t)\) is continuous.

Due to (48), \(m_t\in \mathrm{Ker}{\mathcal F}_v(t)\) if and only if

$$\begin{aligned} C^T(\tau )X^T(t_{q(t)},\tau )g_t^{(q(t))}\equiv 0,\quad \tau \in [t,t_{q(t)}[,\end{aligned}$$
(63)

and

$$\begin{aligned} C^T(\tau )X^T(t_{i},\tau )g_t^{(i)}\equiv 0,\ \ \tau \in [t_{i-1},t_{i}[,\quad i=q(t)+1,\ldots , K.\end{aligned}$$
(64)

This implies that, if \(\tau _1<\tau _2\), \(\tau _1, \tau _2\in [t_0, t_f[\), then

$$\begin{aligned} \mathrm{Ker}{\mathcal F}_v(\tau _1)\subseteq \mathrm{Ker}{\mathcal F}_v(\tau _2), \end{aligned}$$
(65)

i.e., due to (59),

$$\begin{aligned} {\mathcal N}_{\tau _1}\supseteq {\mathcal N}_{\tau _2}. \end{aligned}$$
(66)

By virtue of the inclusion (66) and the definition (58), the function \(\varphi _v(t)\) decreases monotonically. \(\square \)

Proposition 6.4

The function \(\varphi _u(t)\) is continuous on the open intervals \(]t_i, t_{i+1}[\), \(i=1,\ldots , K\), and it is right-continuous for \(t=t_i\), \(i=1,\ldots , K\). Moreover,

$$\begin{aligned} \lim \limits _{t\to t_i-0}\varphi _u(t)= & {} 0, \end{aligned}$$
(67)
$$\begin{aligned} \lim \limits _{t\to t_i+0}\varphi _u(t)> & {} 0. \end{aligned}$$
(68)

Proof

The continuity of the function \(\varphi _u(t)\) on the intervals \(]t_i, t_{i+1}[\) is proved by means of the representation (47), similarly to the continuity of the function \(\varphi _v(t)\) on \([t_0, t_f]\), which was established by means of (48). Due to (47),

$$\begin{aligned}&\lim \limits _{t\to t_i-0}\varphi _u(t)\nonumber \\&\quad =\min \left\{ 0,\min \limits _{m_t\in {\mathcal N}_t} \left[ \sum \limits _{i=q(t)}^{K-1}\displaystyle \int \limits _{t_i}^{t_{i+1}}\left| B^T(\tau )X^T(t_{i+1},\tau )g_t^{(i+1)} \right| ^2\mathrm{{d}}\tau \right] \right\} =0, \end{aligned}$$
(69)

while, by virtue of (47) and (59),

$$\begin{aligned}&\lim \limits _{t\to t_i+0}\varphi _u(t)\nonumber \\&\quad =\min \limits _{m_t\in {\mathcal N}_t} \left[ \sum \limits _{i=q(t)}^{K-1}\displaystyle \int \limits _{t_i}^{t_{i+1}}\left| B^T(\tau )X^T(t_{i+1},\tau )g_t^{(i+1)} \right| ^2\mathrm{{d}}\tau \right] >0. \end{aligned}$$
(70)

\(\square \)

Now, let us proceed to the proof of the theorem.

Due to (15), Propositions 6.1 and 6.2, and [10],

$$\begin{aligned} \lambda _{\alpha \beta }(t)= \max \limits _{m_t\in {\mathcal M}_t}g(m_t), \end{aligned}$$
(71)

where the set \({\mathcal M}_t\) is given by (60) and

$$\begin{aligned} g(m_t):= \dfrac{1}{\beta }m_t^T{\mathcal F}_v(t) m_t -\dfrac{1}{\alpha }m_t^T{\mathcal F}_u(t) m_t. \end{aligned}$$
(72)

If \(m_t\in \mathrm{Ker}{\mathcal F}_v(t)\), then, due to Propositions 6.1 and 6.2, \(g(m_t)\le 0\). Thus, the function \(g(m_t)\) can admit positive values only for \(m_t\in {\mathcal N}_t\), defined by (59). Therefore, in order to satisfy the condition (22), the inequality

$$\begin{aligned} \max \limits _{m_t\in {\mathcal N}_t}g(m_t)<1 \end{aligned}$$
(73)

should be valid for all \(t\in [t_0, t_f]\).

Due to (15), (57)–(58), (71) and (72),

$$\begin{aligned} \max \limits _{m_t\in {\mathcal N}_t}g(m_t)\le \dfrac{1}{\beta }\varphi _v(t)-\dfrac{1}{\alpha }\varphi _u(t). \end{aligned}$$
(74)

Now, let us show that, for any fixed \(\beta >0\), one can choose a sufficiently small \(\alpha >0\) such that the following inequality holds:

$$\begin{aligned} \dfrac{1}{\beta }\varphi _v(t)- \dfrac{1}{\alpha }\varphi _u(t)<1. \end{aligned}$$
(75)

The inequality (75), along with (74), guarantees (73).

The inequality (75) can be rewritten as

$$\begin{aligned} \dfrac{\varphi _u(t)}{\alpha }>\dfrac{\varphi _v(t)-\beta }{\beta }. \end{aligned}$$
(76)

Recall that, due to Proposition 6.3, the function \(\varphi _v(t)\) is decreasing. Hence, if \(\varphi _v(t_0)<\beta \), then the inequality (76) holds for all \(t\in [t_0, t_f]\) and for any \(\alpha >0\).

Let \(\varphi _v(t_0)\ge \beta \). Since the function \(\varphi _v(t)\) is continuous and due to (61), there exists a moment \(t^*=t^*(\beta )\in ]t_0, t_f[\) such that \(\varphi _v(t^*)=\beta \). Then, the condition (76) holds for all \(t\in ]t^*, t_f]\) and for any \(\alpha >0\). Thus, it is enough to prove that for a sufficiently small \(\alpha >0\), (76) holds for all \(t\in [t_0, t^*]\).
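
Since \(\varphi _v\) is continuous, decreasing and vanishes at \(t_f\), the moment \(t^*\) can be located numerically by any bracketing root finder. The sketch below uses a hypothetical stand-in for \(\varphi _v\) (the function and all numbers are assumed for illustration only):

    from scipy.optimize import brentq

    def phi_v(t, t0=0.0, tf=5.0):
        """Hypothetical stand-in for phi_v: any continuous, decreasing
        function vanishing at t = tf serves the illustration."""
        return 3.0 * (tf - t) / (tf - t0)

    beta, t0, tf = 1.0, 0.0, 5.0
    if phi_v(t0) >= beta:
        # phi_v(t0) - beta >= 0 and phi_v(tf) - beta < 0: a root is bracketed.
        t_star = brentq(lambda t: phi_v(t) - beta, t0, tf)
        print("t* =", t_star)  # here t* = 10/3; (76) then holds on ]t*, t_f]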

First, let us show that \(\alpha >0\) can be chosen in such a way that for all \(i=1,\ldots ,q(t^*)\),

$$\begin{aligned} \lambda _{\alpha \beta }(t_i)=0. \end{aligned}$$
(77)

Indeed, by virtue of (9)–(11), (15), (71) and (74),

$$\begin{aligned} \lambda _{\alpha \beta }(t_i)=\max \left\{ 0,\dfrac{1}{\beta }\varphi _v(t_i+0)- \dfrac{1}{\alpha }\varphi _u(t_i+0)\right\} . \end{aligned}$$
(78)

Due to (70) and the positivity of \(\varphi _v\) on \([t_0, t^*]\), the following values are well defined and positive:

$$\begin{aligned} \alpha _i:=\beta \dfrac{\varphi _u(t_i+0)}{\varphi _v(t_i+0)},\ \quad i=1,\ldots ,s(t^*)-1. \end{aligned}$$
(79)

If

$$\begin{aligned} \alpha \le \alpha _i,\quad i=1,\ldots , s(t^*)-1, \end{aligned}$$
(80)

then \(\alpha \varphi _v(t_i+0)\le \beta \varphi _u(t_i+0)\), and division by \(\alpha \beta >0\) yields

$$\begin{aligned} \dfrac{1}{\beta }\varphi _v(t_i+0)- \dfrac{1}{\alpha }\varphi _u(t_i+0)\le 0, \end{aligned}$$
(81)

which, along with (78), guarantees (77). Moreover, Eq. (71) and the inequalities (74) and (81) lead to

$$\begin{aligned} \lambda _{\alpha \beta }(t_i+0)\le 0. \end{aligned}$$
(82)

Similarly to Propositions 6.3 and 6.4, it can be proved that the function \(\lambda _{\alpha \beta }(t)\) is continuous on the open intervals \(]t_i, t_{i+1}[\), \(i=1,\ldots , K\), is left-continuous at \(t=t_i\), \(i=1,\ldots , K\), and

$$\begin{aligned} \lim \limits _{t\to t_f-0}\lambda _{\alpha \beta }(t)=0. \end{aligned}$$
(83)

Therefore, due to (77) and (82), there exist neighborhoods

$$\begin{aligned} d_i:= \ ]t_i-\varepsilon _i,t_i+\varepsilon _i[,\quad \varepsilon _i>0,\quad i=1,\ldots ,s(t^*)-1, \end{aligned}$$
(84)

such that, subject to (80),

$$\begin{aligned} \lambda _{\alpha \beta }(t)<1, \end{aligned}$$
(85)

for \(t\in T_1:=\bigcup \limits _{i=1}^{s(t^*)-1}d_i\). Let us denote

$$\begin{aligned} T_2:= [t_0,t^*]\setminus T_1. \end{aligned}$$
(86)

The set (86) consists of a finite number of closed intervals, where, due to Proposition 6.4, \(\varphi _u(t)>0\). Therefore, we can define

$$\begin{aligned} \varphi ^*:=\max \limits _{t\in T_2}\left[ \dfrac{\varphi _v(t)-\beta }{\varphi _u(t)}\right] >0. \end{aligned}$$
(87)

If

$$\begin{aligned} \alpha <\dfrac{\beta }{\varphi ^*}, \end{aligned}$$
(88)

then the inequality (85) is valid for \(t\in T_2\). Now, let us define

$$\begin{aligned} \tilde{\alpha }:=\tilde{\alpha }(\beta )=\min \left\{ \alpha _i, i=1,\ldots ,s(t^*)-1;\ \dfrac{\beta }{\varphi ^*}\right\} . \end{aligned}$$
(89)

Then, for all \(\alpha >0\) satisfying (23), the inequality (85) holds for all \(t\in [t_0, t^*]\), yielding (22). The saddle point (24)–(27) is derived based on Theorem 1 of [4]. This completes the proof of the theorem. \(\square \)
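
The choice of \(\alpha \) made in the proof can be summarized as a simple computational recipe: evaluate the thresholds (79) at the route moments, evaluate the bound (88) on \(T_2\), and take the minimum (89). A schematic sketch with assumed sample values of \(\varphi _u\) and \(\varphi _v\) (none of the numbers come from the paper) is given below:

    # Schematic recipe for alpha_tilde(beta), following (79), (87) and (89).
    # All sampled values of phi_u and phi_v are assumed for illustration.
    def alpha_tilde(beta, phi_u_at_ti, phi_v_at_ti, phi_u_on_T2, phi_v_on_T2):
        # (79): alpha_i = beta * phi_u(t_i + 0) / phi_v(t_i + 0)
        alphas = [beta * pu / pv for pu, pv in zip(phi_u_at_ti, phi_v_at_ti)]
        # (87): phi_star = max over T_2 of (phi_v(t) - beta) / phi_u(t)
        phi_star = max((pv - beta) / pu
                       for pu, pv in zip(phi_u_on_T2, phi_v_on_T2))
        # (89): alpha_tilde = min{ alpha_i ; beta / phi_star }
        return min(alphas + [beta / phi_star])

    print(alpha_tilde(beta=0.5,
                      phi_u_at_ti=[0.8, 1.2], phi_v_at_ti=[2.0, 1.5],
                      phi_u_on_T2=[0.9, 1.1], phi_v_on_T2=[3.0, 2.5]))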

Appendix 2

Proof of Theorem 4.1

Let the trajectory \(x_{\alpha \beta }(t)\) be generated by the optimal strategy \(u_{\alpha \beta }^0(t,x)\), given by (25) for some \(\beta >0\) and \(\alpha <\alpha ^*(\beta )\), and by an arbitrary \(v(t)\in L_2^{m_v}[t_0,t_f]\) satisfying (4). Then, by the definition of the game value,

$$\begin{aligned} J_{\alpha \beta }\le J^0_{\alpha \beta }(t_0,x_0), \end{aligned}$$
(90)

leading, due to (2), to

$$\begin{aligned} G(x_{\alpha \beta }(\cdot ))\le J^0_{\alpha \beta }(t_0,x_0)+\beta \displaystyle \int \limits _{t_0}^{t_f}|v(t)|^2\mathrm{{d}}t. \end{aligned}$$
(91)

Let us define

$$\begin{aligned} \beta ^*(\zeta ,\nu ):=\dfrac{\zeta }{2\nu }.\end{aligned}$$
(92)

If \(\beta <\beta ^*\), then by (4),

$$\begin{aligned} \beta \displaystyle \int \limits _{t_0}^{t_f}|v(t)|^2\mathrm{{d}}t<\dfrac{\zeta }{2}. \end{aligned}$$
(93)

Let \(\beta <\beta ^*(\zeta ,\nu )\) be fixed. Now, let us show that the game value \(J^0_{\alpha \beta }(t_0,x_0)\) can also be made arbitrarily small by a proper choice of \(\alpha \). Due to Theorem 3.1 and Remark 3.1, there exists \(\tilde{\alpha }=\tilde{\alpha }(\beta )>0\) such that, for \(\alpha \in ]0, \tilde{\alpha }[\), the matrix \(I_{Kn}-{\mathcal F}_{\alpha \beta }(t_0)\) is non-singular and

$$\begin{aligned} J_{\alpha \beta }^0(t_0,x_0)=w_{t_0}^T\left( I_{Kn}-{\mathcal F}_{\alpha \beta }(t_0)\right) ^{-1}w_{t_0}. \end{aligned}$$
(94)

By virtue of (15) and by extracting \(\alpha \) from the inverse matrix in (94), the latter can be rewritten as

$$\begin{aligned} J_{\alpha \beta }^0(t_0,x_0)=\alpha w_{t_0}^T\left( \alpha I_{Kn}-\dfrac{\alpha }{\beta }{\mathcal F}_v(t_0)+{\mathcal F}_u(t_0)\right) ^{-1}w_{t_0}. \end{aligned}$$
(95)

Due to (29)–(30) and (50), the condition (44) means that

$$\begin{aligned} w_{t_0}\in \left( \mathrm{Ker}\left( {\mathcal F}_u(t_0)\right) \right) ^\bot . \end{aligned}$$
(96)

The latter, along with (95), yields

$$\begin{aligned} \lim \limits _{\alpha \downarrow 0}J_{\alpha \beta }^0(t_0,x_0)=\lim \limits _{\alpha \downarrow 0}\left[ \alpha w_{t_0}^T\left( {\mathcal F}_u(t_0)\right) ^\dag w_{t_0}\right] =0, \end{aligned}$$
(97)

where \({\mathcal F}^\dag \) denotes the pseudo-inverse of the matrix \({\mathcal F}\). The limiting relation (97) implies that, for a fixed \(\beta >0\) and an arbitrary \(\zeta >0\), there exists \(\bar{\alpha }=\bar{\alpha }(\beta ,\zeta )\in ]0, \tilde{\alpha }[\) such that, for \(\alpha <\bar{\alpha }\),

$$\begin{aligned} J_{\alpha \beta }^0(t_0, x_0)<\dfrac{\zeta }{2}. \end{aligned}$$
(98)
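
The limiting relation (97) is easy to reproduce numerically: if \(w\) is orthogonal to \(\mathrm{Ker}\,{\mathcal F}_u(t_0)\), then \(\alpha w^T(\alpha I+{\mathcal F}_u(t_0))^{-1}w\rightarrow 0\) as \(\alpha \downarrow 0\). The sketch below uses toy data and, for simplicity, omits the vanishing term \(\frac{\alpha }{\beta }{\mathcal F}_v(t_0)\) from (95); nothing in it comes from the paper's numerical example:

    import numpy as np

    # Toy data: F_u is PSD with kernel span{e3}; w is orthogonal to the kernel.
    G = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])
    F_u = G.T @ G                        # diag(1, 4, 0)
    w = np.array([1.0, 1.0, 0.0])

    for alpha in [1e-1, 1e-3, 1e-5]:
        # alpha * w^T (alpha*I + F_u)^{-1} w, cf. (95) with the F_v term omitted
        val = alpha * w @ np.linalg.solve(alpha * np.eye(3) + F_u, w)
        print(alpha, val)                # tends to 0, in line with (97)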

By setting

$$\begin{aligned} \alpha ^*(\zeta ,\nu )=\bar{\alpha }(\beta ^*(\zeta ,\nu ),\zeta ), \end{aligned}$$
(99)

and by using the inequalities (91), (93) and (98), we obtain that for \(\alpha <\alpha ^*(\zeta ,\nu )\), \(\beta <\beta ^*(\zeta ,\nu )\),

$$\begin{aligned} G(x_{\alpha \beta }(\cdot ))<\zeta , \end{aligned}$$
(100)

meaning that the strategy \(u_{\alpha \beta }^0(t,x)\) solves the Route Realization Problem. This completes the proof of the theorem. \(\square \)


Cite this article

Turetsky, V. Robust Route Realization by Linear-Quadratic Tracking. J Optim Theory Appl 170, 977–992 (2016). https://doi.org/10.1007/s10957-016-0931-0
