
CVaR-based optimization of environmental flow via the Markov lift of a mixed moving average process

  • Research Article
  • Published:
Optimization and Engineering

Abstract

Environmental flow is the minimum threshold value of streamflow discharge required to guarantee the sustainable development of river environments. In this study, we propose an optimization approach to establish a balance between water resource supply and environmental conservation based on the mixed moving average (MMA) process, formulated as an infinite-dimensional superposition of stochastic differential equations. We show that a certain jump-driven MMA process is suitable for describing non-Gaussian streamflow discharge time series with a sub-exponential autocorrelation. Furthermore, we introduce a Markov lift to efficiently approximate the MMA process and its characteristic function, from which we obtain the stationary probability density of the process through a Fourier inversion. Then, we formulate a convex optimization problem of intaking river water to meet a prescribed target intake, subject to a conditional value-at-risk (CVaR) constraint as a risk measure of small water depth. Based on the Markov lift and a regularized CVaR, the optimization problem is numerically solved using a gradient descent method. Notably, the CVaR constraint is essential for endogenously deriving an optimal intake policy that guarantees a positive minimum discharge, which is the environmental flow, without explicitly imposing it as a constraint. Lastly, we identify the MMA process for different river environments and solve the associated optimization problems to analyze the sensitivity of the optimal intake policy and the environmental flow.





Acknowledgements

This study was supported by the Japan Society for the Promotion of Science (22K14441, 22H02456), Environmental Research Projects from the Sumitomo Foundation (203160), the Kurita Water and Environment Foundation (21K018), and a Grant from MLIT Japan (B4R202002).

Author information

Corresponding author

Correspondence to Hidekazu Yoshioka.

Ethics declarations

Competing interests

The authors have no competing interests to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hidekazu Yoshioka and Ayumi Hashiguchi moved from Shimane University to the Japan Advanced Institute of Science and Technology and Okayama University, respectively, during the review and publication processes of this work.

Appendices

Appendix A: Parameter values of the supCBI process at Stations 1 and 2

This appendix summarizes the parameter values of the supCBI process at Station 1 (Table 2 for Period 1 and Table 3 for Period 2) and Station 2 (Table 4), which were identified using the least-squares method minimizing the sum of the squares of the relative errors of the first- to fourth-order moments of the discharge (Yoshioka et al. 2022c; Yoshioka 2022b). The supCBI process was identified for different \(D = 1 - BM_{1}\) as explained in the main text. Table 5 presents the parameters of the water depth functions at the two stations. Tables 6, 7 and 8, concerning the goodness-of-fit in terms of the statistics, demonstrate that the fitted models reasonably reproduce the empirical statistics reported in Table 1. These tables also show that the fitted models with different values of \(D\) exhibit comparable performance for each data set. The performance of the fitted models is satisfactory, as they accurately reproduce not only the average and variance but also the positive skewness and large kurtosis.
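As an illustration only, the following minimal Python sketch fits model parameters by minimizing the sum of the squares of the relative errors of the first- to fourth-order moments. The empirical moments and the moment map `model_moments` (here a simple Gamma-type stand-in) are hypothetical and are not the parameterization of the supCBI process used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical empirical first- to fourth-order raw moments of the discharge.
emp_moments = np.array([12.3, 310.0, 1.1e4, 5.2e5])

def model_moments(theta):
    """Stand-in moment map: raw moments of a Gamma(a, scale=b) distribution.
    The actual supCBI moment formulas would be substituted here."""
    a, b = np.exp(theta)  # log-parameterization keeps a, b positive
    return np.array([a * b,
                     a * (a + 1) * b**2,
                     a * (a + 1) * (a + 2) * b**3,
                     a * (a + 1) * (a + 2) * (a + 3) * b**4])

def objective(theta):
    # Sum of the squares of the relative errors of the first- to fourth-order moments.
    rel_err = (model_moments(theta) - emp_moments) / emp_moments
    return np.sum(rel_err**2)

res = minimize(objective, x0=np.log([1.0, 10.0]), method="Nelder-Mead")
print("fitted (a, b):", np.exp(res.x), " objective:", res.fun)
```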

Appendix B: Convergence of Markov lift

In this appendix, the convergence speed of the Markov lift is analyzed, as its computational performance has not been well studied in previous work. We show that the approximation of the Markov lift is first-order accurate for the first-order moment \(\int_{0}^{ + \infty } {r\rho \left( {{\text{d}}r} \right)}\) and slightly less than first-order accurate for the square root of the second-order centered moment \(\sqrt {\int_{0}^{ + \infty } {\left( {r - \int_{0}^{ + \infty } {r\rho \left( {{\text{d}}r} \right)} } \right)^{2} \rho \left( {{\text{d}}r} \right)} }\). Here, the convergence speed at the discretization level \(N\) is computed as \(\log_{2} \left( {e_{N/2} /e_{N} } \right)\), where \(e_{N}\) and \(e_{N/2}\) are the relative errors at the discretization levels \(N\) and \(N/2\), respectively. Note that we do not use stochastic simulations, as the Markov lift is a quadrature-based and hence deterministic method.
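For concreteness, the observed convergence speed \(\log_{2} \left( {e_{N/2} /e_{N} } \right)\) can be tabulated from the relative errors at successive discretization levels as in the following sketch; the error values are hypothetical placeholders, not the values reported in Tables 9 and 10.

```python
import numpy as np

# Hypothetical relative errors of a statistic at discretization levels N = 4, 8, ..., 64.
levels = np.array([4, 8, 16, 32, 64])
errors = np.array([4.1e-2, 2.0e-2, 1.0e-2, 5.1e-3, 2.5e-3])

# Observed convergence speed between consecutive levels: log2(e_{N/2} / e_N).
orders = np.log2(errors[:-1] / errors[1:])
for N, p in zip(levels[1:], orders):
    print(f"N = {N:3d}: observed order = {p:.2f}")
```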

Tables 9 and 10 show the computational results for different values of \(N\) in the Markov lift for the model at Station 2. Without loss of generality, we used the normalization \(\beta_{\pi } = 1\), since modulating \(\beta_{\pi }\) does not change the functional shape of the Gamma distribution and our Markov lift is based on quantiles. The discretization based on the Markov lift has been proved to converge, in the sense that the characteristic function of the finite-dimensional supCBI process converges locally uniformly to that of the original supCBI process (Yoshioka et al. 2022c); the convergence speed of the inverse moment \(\int_{0}^{ + \infty } {r^{ - 1} \rho \left( {{\text{d}}r} \right)}\) was approximately 0.5 when \(\alpha_{\pi }\) was approximately 2, as in this study.
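As a generic illustration of a quantile-based discretization (not the exact construction of the Markov lift used here), one may place equally weighted nodes at the quantile midpoints of the Gamma distribution with shape \(\alpha_{\pi }\) and rate \(\beta_{\pi } = 1\):

```python
import numpy as np
from scipy.stats import gamma

def quantile_nodes(alpha_pi, N, beta_pi=1.0):
    """Place N equally weighted nodes at the quantile midpoints of a
    Gamma(shape=alpha_pi, rate=beta_pi) distribution."""
    probs = (np.arange(N) + 0.5) / N                       # midpoints of N equal-probability bins
    nodes = gamma.ppf(probs, a=alpha_pi, scale=1.0 / beta_pi)
    weights = np.full(N, 1.0 / N)
    return nodes, weights

nodes, weights = quantile_nodes(alpha_pi=2.0, N=8)
print(nodes)           # reversion-speed nodes r_1 < ... < r_N
print(weights.sum())   # weights sum to one
```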

Appendix C: PDFs of discharge in a logarithmic scale

Figures 15, 16, and 17 compare the empirical and computed PDFs in a common logarithmic scale for Station 1 (Periods 1 and 2) and Station 2, respectively. For the computed PDFs, we use those with \(D = 0.6\) as demonstrative examples, since the other values of \(D\) used in the main text give comparable results.

Fig. 15 Empirical (circles) and computed (curve) PDFs at Station 1 in Period 1 in a common logarithmic plot

Fig. 16 Empirical (circles) and computed (curve) PDFs at Station 1 in Period 2 in a common logarithmic plot

Fig. 17 Empirical (circles) and computed (curve) PDFs at Station 2 in a common logarithmic plot
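The computed PDFs follow from the characteristic function of the lifted process through Fourier inversion. A generic sketch of such an inversion by direct quadrature is given below; the standard normal characteristic function is used as a stand-in, since the supCBI characteristic function is not reproduced in this appendix.

```python
import numpy as np

def pdf_from_cf(cf, x_grid, s_max=50.0, n_s=20001):
    """Recover a real-valued density from its characteristic function via
    f(x) = (1/pi) * Integral_0^s_max Re[exp(-i*s*x) * cf(s)] ds (trapezoidal rule)."""
    s = np.linspace(0.0, s_max, n_s)
    ds = s[1] - s[0]
    integrand = np.real(np.exp(-1j * np.outer(x_grid, s)) * cf(s))
    return (integrand.sum(axis=1) - 0.5 * (integrand[:, 0] + integrand[:, -1])) * ds / np.pi

# Stand-in: characteristic function of the standard normal distribution.
cf_normal = lambda s: np.exp(-0.5 * s**2)
x = np.linspace(-4.0, 4.0, 9)
print(pdf_from_cf(cf_normal, x))  # close to the standard normal density on x
```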

Appendix D: Impacts of regularization

Figure 18 compares the efficient frontiers with different regularization parameter values for Station 1 in Period 2. It demonstrates that the parameter value \(\varepsilon = 0.0001\) used in the main text is justified, since the frontier is found to have converged sufficiently at this level of regularization. A smaller regularization parameter is theoretically preferable; however, it degrades computational efficiency. We empirically found that the increment of the gradient descent should be smaller for weaker regularization. Indeed, we encountered a computational failure of the gradient descent with \(\varepsilon = 0.00001\).

Fig. 18 Comparison of the efficient frontiers with different regularization parameter values for Station 1 in Period 2. The plots for \(\varepsilon = 0.0001\) and \(\varepsilon = 0.00005\) are difficult to distinguish in the figure
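As a rough, generic illustration of the interplay between the regularization parameter and the gradient descent, the following sketch minimizes a smoothed Rockafellar–Uryasev CVaR objective over the auxiliary variable only, with the positive part replaced by a softplus-type surrogate \(m_{\varepsilon }\). The loss samples, the specific surrogate, and the step size are assumptions for illustration and do not reproduce the objective of Eq. (34).

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(0)
losses = rng.gamma(2.0, 1.0, size=1000)  # hypothetical loss samples (e.g., depth shortfall)
alpha = 0.05                              # CVaR confidence level
eps = 1e-4                                # regularization (smoothing) parameter

def m_eps(x):
    """Smooth surrogate of max(x, 0); its curvature grows like 1/eps as eps -> 0."""
    return eps * np.logaddexp(0.0, x / eps)

def smoothed_cvar(u):
    # Rockafellar-Uryasev form with the positive part replaced by m_eps.
    return u + np.mean(m_eps(losses - u)) / alpha

u, lr = np.mean(losses), 5e-3  # smaller eps empirically calls for a smaller step size
for _ in range(2000):
    grad = 1.0 - np.mean(expit((losses - u) / eps)) / alpha  # dF/du
    u -= lr * grad

print("u* (~VaR):", u, " smoothed CVaR:", smoothed_cvar(u))
```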

Appendix E: On a convexity result

We show that the regularized optimization problem Eq. (34) is (strictly) convex. Firstly, the admissible set of decision variables \({\mathfrak{U}}_{n}\) is convex. For simplicity, write \(m_{\varepsilon ,i} = m_{i}\) and \(h\left( {Q_{i} - q_{i} } \right) = h_{i}\). We then have

$$A_{00} \equiv \frac{{\partial^{2} F}}{{\partial u^{2} }} = \frac{\lambda }{\alpha }\sum\limits_{i = 1}^{n} {p_{i} m^{\prime\prime}_{i} } > 0,\;A_{ij} \equiv \frac{{\partial^{2} F}}{{\partial q_{i} \partial q_{j} }} = 0,\;i \ne j\;{\text{with}}\;1 \le i,j \le n$$
(42)
$$A_{ii} \equiv \frac{{\partial^{2} F}}{{\partial q_{i}^{2} }} = p_{i} \left( {2 + \frac{\lambda }{\alpha }\left( {h^{\prime}_{i} } \right)^{2} m^{\prime\prime}_{i} - \frac{\lambda }{\alpha }h^{\prime\prime}_{i} m^{\prime}_{i} } \right) > 0,\;1 \le i \le n$$
(43)
$$A_{0i} = A_{i0} \equiv \frac{{\partial^{2} F}}{{\partial u\partial q_{i} }} = p_{i} \frac{\lambda }{\alpha }h^{\prime}_{i} m^{\prime\prime}_{i} ,\,1 \le i \le n$$
(44)

The regularized problem has a global minimizer in \({\mathfrak{U}}_{n}\) (Sect. 4.2.2 in Boyd et al. (2004)). Indeed, by formally setting \(q_{0} = u\), we obtain

$$\begin{aligned} \sum\limits_{i,j = 0}^{n} {A_{ij} q_{i} q_{j} } &= A_{00} q_{0} q_{0} + 2\sum\limits_{i = 1}^{n} {A_{0i} q_{0} q_{i} } + \sum\limits_{i = 1}^{n} {A_{ii} q_{i} q_{i} } \\ &= \sum\limits_{i = 1}^{n} {p_{i} \frac{\lambda }{\alpha }m^{\prime\prime}_{i} } q_{0} q_{0} + 2\sum\limits_{i = 1}^{n} {p_{i} \frac{\lambda }{\alpha }h^{\prime}_{i} m^{\prime\prime}_{i} q_{0} q_{i} } + \sum\limits_{i = 1}^{n} {p_{i} \left( {2 + \frac{\lambda }{\alpha }\left( {h^{\prime}_{i} } \right)^{2} m^{\prime\prime}_{i} - \frac{\lambda }{\alpha }h^{\prime\prime}_{i} m^{\prime}_{i} } \right)q_{i} q_{i} } \\& = \sum\limits_{i = 1}^{n} {p_{i} \frac{\lambda }{\alpha }m^{\prime\prime}_{i} } \left( {q_{0} q_{0} + 2h^{\prime}_{i} q_{0} q_{i} } \right) + \sum\limits_{i = 1}^{n} {p_{i} \frac{\lambda }{\alpha }m^{\prime\prime}_{i} \left( {h^{\prime}_{i} } \right)^{2} q_{i} q_{i} } + \sum\limits_{i = 1}^{n} {p_{i} \left( {2 - \frac{\lambda }{\alpha }h^{\prime\prime}_{i} m^{\prime}_{i} } \right)q_{i} q_{i} } \\ &= \sum\limits_{i = 1}^{n} {p_{i} \frac{\lambda }{\alpha }m^{\prime\prime}_{i} } \left( {q_{0} q_{0} + 2h^{\prime}_{i} q_{0} q_{i} + \left( {h^{\prime}_{i} } \right)^{2} q_{i} q_{i} } \right) + \sum\limits_{i = 1}^{n} {p_{i} \left( {2 - \frac{\lambda }{\alpha }h^{\prime\prime}_{i} m^{\prime}_{i} } \right)q_{i} q_{i} } \\ &= \sum\limits_{i = 1}^{n} {p_{i} \frac{\lambda }{\alpha }m^{\prime\prime}_{i} } \left( {q_{0} + h^{\prime}_{i} q_{i} } \right)^{2} + \sum\limits_{i = 1}^{n} {p_{i} \left( {2 - \frac{\lambda }{\alpha }h^{\prime\prime}_{i} m^{\prime}_{i} } \right)q_{i} q_{i} } \\& > 0 \\ \end{aligned}$$
(45)

by the positivity \(p_{i} , - h^{\prime\prime}_{i} ,m^{\prime\prime}_{i} > 0\), provided that \(\left( {q_{0} ,q_{1} , \ldots ,q_{n} } \right) \ne 0\). Therefore, the Hessian matrix \(A\) of \(F = F\left( {u,\left\{ {q_{i} } \right\}_{1 \le i \le n} } \right)\) is positive definite, \(F\) is strictly convex, and the regularized problem has exactly one global minimizer in \({\mathfrak{U}}_{n}\) (e.g., Theorem 3.3 of Okelo 2019). Note that the convexity is not affected by the regularization parameter \(\varepsilon > 0\), whereas the positive definiteness may fail if \(h\) is not concave. Additionally, as in Theorem 1 and Propositions 1–2 of Luna et al. (2016), each solution to Eq. (34) accumulates to that of Eq. (29) by the convexity of the objective and constraint of the regularized optimization problem Eq. (34).
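As a numerical sanity check of the arrow-shaped Hessian structure in Eqs. (42)–(44), the following sketch assembles \(A\) from hypothetical coefficients satisfying the stated sign conditions and verifies its positive definiteness; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, alpha = 5, 1.0, 0.05

# Hypothetical coefficients satisfying p_i > 0, h''_i < 0 (h concave), m'_i, m''_i > 0.
p = rng.uniform(0.1, 0.3, n); p /= p.sum()
h1 = rng.uniform(0.5, 1.5, n)          # h'_i
h2 = -rng.uniform(0.1, 1.0, n)         # h''_i < 0
m1 = rng.uniform(0.1, 1.0, n)          # m'_i > 0
m2 = rng.uniform(0.1, 1.0, n)          # m''_i > 0

# Assemble the (n+1) x (n+1) Hessian of F(u, q_1, ..., q_n) following Eqs. (42)-(44).
A = np.zeros((n + 1, n + 1))
A[0, 0] = (lam / alpha) * np.sum(p * m2)
A[0, 1:] = A[1:, 0] = p * (lam / alpha) * h1 * m2
A[1:, 1:] = np.diag(p * (2.0 + (lam / alpha) * h1**2 * m2 - (lam / alpha) * h2 * m1))

print("smallest eigenvalue:", np.linalg.eigvalsh(A).min())  # positive => positive definite
```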

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yoshioka, H., Tanaka, T., Yoshioka, Y. et al. CVaR-based optimization of environmental flow via the Markov lift of a mixed moving average process. Optim Eng 24, 2935–2972 (2023). https://doi.org/10.1007/s11081-023-09800-4


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11081-023-09800-4

Keywords
