
A performance measure approach for risk optimization

  • Research Paper
  • Published in: Structural and Multidisciplinary Optimization

A Correction to this article was published on 17 May 2019

Abstract

In recent years, several approaches have been proposed for solving reliability-based design optimization (RBDO), where the probability of failure is a design constraint. The same cannot be said about risk optimization (RO), where probabilities of failure are part of the objective function. In this work, we propose a performance measure approach (PMA) for RO problems. We first demonstrate that RO problems can be solved as a sequence of RBDO sub-problems. The main idea is to take the target reliability indexes (equivalently, the target probabilities of failure) as design variables. This allows existing RBDO algorithms to be used to solve RO problems, and it extends results from the RBDO literature to the context of RO. Here, we solve the resulting RBDO sub-problems using the PMA. Sensitivity expressions required by the proposed approach are also presented. The proposed approach is compared to an algorithm that employs the first-order reliability method (FORM) for evaluation of the probabilities of failure. The numerical examples show that the proposed approach is efficient and more stable than direct employment of FORM. This behavior has also been observed in the context of RBDO and was the main reason for the development of the PMA. Consequently, the proposed approach can be seen as an extension of PMA approaches to RO, which results in more stable optimization algorithms.
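As a reading aid, the decomposition described above can be sketched as follows. This is a minimal conceptual sketch under our own naming, not the authors' implementation: the helpers solve_rbdo_pma and total_cost_gradient are hypothetical placeholders for an RBDO solver based on the PMA and for the sensitivity expressions derived in the appendices.

```python
# Conceptual sketch: risk optimization solved as a sequence of RBDO
# sub-problems, with the target reliability indexes beta as outer design
# variables. The helpers solve_rbdo_pma and total_cost_gradient are
# hypothetical placeholders for an RBDO solver based on the PMA and for
# the sensitivity expressions derived in the appendices.

def risk_optimization(beta, step=0.1, tol=1e-6, max_iter=100):
    design = None
    for _ in range(max_iter):
        # Inner problem: RBDO with fixed target indexes, solved via the PMA.
        design, multipliers = solve_rbdo_pma(beta)
        # Outer update: descend along the gradient of the total expected
        # cost with respect to beta (cf. (68) and (75) in the appendices).
        grad = total_cost_gradient(design, multipliers, beta)
        if max(abs(g) for g in grad) < tol:
            break
        beta = [b - step * g for b, g in zip(beta, grad)]
    return design, beta
```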



Change history

  • 17 May 2019

    The original article unfortunately contained an error in one of the equations and in the results of the last example. These corrections do not affect the conclusions of the paper.

Notes

  1. In this work, the term “surrogate model” is employed if the relation between the probability of failure (alternatively the objective function/constraint of the optimization problem) and the design variables is approximated (i.e., the approximation occurs at the upper-level optimization problem). The term “stochastic expansion” is employed when the relation between the probability of failure (or statistical moments) and the random variables is approximated (i.e., the approximation occurs at the lower-level reliability analysis problem). Note that the classification is not related to the approximation technique itself. Kriging and artificial neural networks, for example, can be employed for both purposes.

References


Funding

This research was financially supported by CNPq-Brazil.

Author information


Corresponding author

Correspondence to André Jacomel Torii.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Replication of results

The results obtained with the proposed approach are presented in Section 5.

Additional information

Responsible Editor: Xu Guo

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Sensitivity of the RBDO problem

The proposed approach requires the sensitivity of the optimal cost of RBDO problems with respect to the target reliability indexes, i.e., ∇βc. This information is required to update the target reliability indexes. For this purpose, we first evaluate the sensitivity considering the RBDO problem from (6)–(7), with constraints imposed directly on the target reliability indexes β. In Appendix B, the expressions are extended to the case where the RBDO problem is solved with the PMA from (12)–(13).

The procedure employed to obtain the sensitivity of the optimization problem is similar to the one described by Luenberger and Ye (2008). Consider the following optimization problem: find \(\mathbf{d} \in \mathbb{R}^{n}\) that solves

$$ \min c(\mathbf{d},\mathbf{b}) $$
(60)

subject to

$$ \mathbf{g}(\mathbf{d}) \leq \mathbf{b}. $$
(61)

In the RBDO problem from (6)–(7), we have b = β.

The KKT conditions for this problem yield (Luenberger and Ye 2008)

$$ \nabla_{\mathbf{d}} c + \boldsymbol{\lambda}^{T} \nabla_{\mathbf{d}} \mathbf{g} = \mathbf{0}. $$
(62)

By the chain rule, we have

$$ \nabla_{\mathbf{b}} c = \nabla_{\mathbf{d}} c \nabla_{\mathbf{b}} {\mathbf{d}} + \frac{d c}{d \mathbf{b}}, $$
(63)

where dc/db is the ordinary derivative of c with respect to its explicit dependence on b. Assuming active constraints, we also have the following:

$$ \nabla_{\mathbf{b}} \mathbf{g} = \nabla_{\mathbf{d}} \mathbf{g} \nabla_{\mathbf{b}} {\mathbf{d}} = \mathbf{I}. $$
(64)

Thus, the sensitivity of the Lagrangian is given by (Luenberger and Ye 2008)

$$ \nabla_{\mathbf{b}} c + \boldsymbol{\lambda}^{T} \nabla_{\mathbf{b}} \mathbf{g} = \nabla_{\mathbf{d}} c \nabla_{\mathbf{b}} {\mathbf{d}} + \frac{d c}{d \mathbf{b}} + \boldsymbol{\lambda}^{T} \nabla_{\mathbf{d}} \mathbf{g} \nabla_{\mathbf{b}} {\mathbf{d}}, $$
(65)

which can be written as follows:

$$ \nabla_{\mathbf{b}} c + \boldsymbol{\lambda}^{T} \mathbf{I} = \left( \nabla_{\mathbf{d}} c + \boldsymbol{\lambda}^{T} \nabla_{\mathbf{d}} \mathbf{g} \right) \nabla_{\mathbf{b}} {\mathbf{d}} + \frac{d c}{d \mathbf{b}}. $$
(66)

From (62), we get

$$ \nabla_{\mathbf{b}} c + \boldsymbol{\lambda}^{T} = \frac{d c}{d \mathbf{b}} $$
(67)

and finally,

$$ \nabla_{\mathbf{b}} c = \frac{d c}{d \mathbf{b}} - \boldsymbol{\lambda}^{T}. $$
(68)

Note that this result is essentially the same as the one presented in the literature (Luenberger and Ye 2008), with a correction term dc/db that accounts for the explicit dependence of the objective function on b. Further mathematical requirements are discussed in detail by Luenberger and Ye (2008).
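As a sanity check of (68), consider a hypothetical toy problem (our own illustration, not from the paper) in which everything is available in closed form: minimize c(d, b) = d² + b² subject to g(d) = 1 − d ≤ b. For b < 1 the constraint is active, so d* = 1 − b, and the KKT condition (62) gives 2d − λ = 0, i.e., λ = 2(1 − b). Equation (68) then predicts dc*/db = 2b − 2(1 − b), which the snippet below compares against a finite difference of the optimal cost:

```python
# Toy verification of (68): min_d c(d, b) = d**2 + b**2  s.t.  1 - d <= b.
def c_star(b):
    d = max(1.0 - b, 0.0)        # optimal design for a given bound b
    return d ** 2 + b ** 2

b = 0.3
lam = 2.0 * (1.0 - b)            # Lagrange multiplier of the active constraint
predicted = 2.0 * b - lam        # equation (68): explicit dc/db minus lambda

h = 1e-6                         # central finite difference of c*(b)
fd = (c_star(b + h) - c_star(b - h)) / (2.0 * h)
print(predicted, fd)             # both approximately -0.8
```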

Appendix B: Sensitivity for the PMA

In order to employ the results from Appendix A for the PMA (12)–(13), we first write a single constraint of the PMA as follows:

$$ g_{i}(\mathbf{d},\mathbf{x}_{i},\beta_{i}) \leq b_{i}. $$
(69)

For active and inactive constraints, we have, respectively,

$$ \frac{\partial g_{i}}{\partial b_{j}} = \left\lbrace \begin{array}{ll} 1, & \textrm{ if } i = j \textrm{ and } g_{i} = b_{i}\\ 0, & \textrm{ otherwise } \end{array} \right.. $$
(70)

If the reliability constraint is inactive, then ∂gi/∂βi = 0. Otherwise,

$$ \frac{\partial g_{i}}{\partial \beta_{i}} = \frac{\partial g_{i}}{\partial b_{i}} \frac{\partial b_{i}}{\partial \beta_{i}}. $$
(71)

Consequently,

$$ \frac{\partial g_{i}}{\partial \beta_{j}} = \left\lbrace \begin{array}{ll} \partial b_{i}/\partial \beta_{i}, & \textrm{ if } i = j \textrm{ and } g_{i} = b_{i}\\ 0, & \textrm{ otherwise }\\ \end{array} \right. $$
(72)

From the sensitivity results presented in Appendix A, we have

$$ \nabla_{\mathbf{b}} c = \frac{d c}{d \mathbf{b}} - \boldsymbol{\lambda}^{T}, $$
(73)

where λ are Lagrange multipliers related to the PMA constraints. By multiplying the above sensitivity by ∇βb, we get the following:

$$ \nabla_{\mathbf{b}} c \nabla_{\boldsymbol{\beta}} \mathbf{b} = \left( \frac{d c}{d \mathbf{b}} - \boldsymbol{\lambda}^{T} \right) \nabla_{\boldsymbol{\beta}} \mathbf{b} $$
(74)

and, from (72),

$$ \nabla_{\boldsymbol{\beta}} c = \frac{d c}{d \boldsymbol{\beta}} - \boldsymbol{\lambda}^{T} \nabla_{\boldsymbol{\beta}} \mathbf{g}. $$
(75)

From this result, we conclude that, in the PMA, the sensitivity expressions from Appendix A must be corrected by the factor ∇βg. We thus require the derivatives of the constraints gi with respect to βi.

Considering a single constraint g = gi, the first-order optimality condition for the inverse reliability analysis problem of (14)–(15) is known to be (Wu et al. 1990; Yi et al. 2008)

$$ \mathbf{u} = -\beta \frac{\nabla_{\mathbf{u}} g}{\Vert \nabla_{\mathbf{u}} g \Vert}. $$
(76)

Since the unit vector ∇ug/∥∇ug∥ may be treated as locally constant with respect to β at the MPP (this holds exactly for limit states that are linear in u, and to first order otherwise), we conclude that

$$ \frac{\partial \mathbf{u}}{\partial \beta} = -\frac{\nabla_{\mathbf{u}} g}{\Vert \nabla_{\mathbf{u}} g \Vert} = \frac{1}{\beta} \mathbf{u} $$
(77)

and

$$ \frac{\partial \mathbf{x}}{\partial \beta} = \nabla_{\mathbf{u}} \mathbf{x} \frac{\partial \mathbf{u}}{\partial \beta}, $$
(78)

where ∇ux is evaluated from the transformation of (17). Note that ∇ux can be evaluated numerically with little computational effort, if necessary. We finally have the following:

$$ \frac{\partial g}{\partial \beta} = \left( \nabla_{\mathbf{x}} g \right)^{T} \frac{\partial \mathbf{x}}{\partial \beta} = \frac{1}{\beta} \left( \nabla_{\mathbf{x}} g \right)^{T} \left( \nabla_{\mathbf{u}} \mathbf{x} \right) \mathbf{u}. $$
(79)
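As noted above, ∇ux can be evaluated numerically when the transformation of (17) is not available in closed form. The following is a minimal sketch of such an evaluation (our own illustration, not part of the original article), assuming only a generic callable T that maps standard normal space to physical space:

```python
import numpy as np

# Central-difference approximation of the Jacobian of x = T(u), for use in
# (78)-(79) when the transformation is not available in closed form. T is
# any callable mapping standard normal space to physical space (e.g., a
# Nataf or Rosenblatt transformation).
def jacobian_x_u(T, u, h=1e-6):
    u = np.asarray(u, dtype=float)
    n = u.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (np.asarray(T(u + e)) - np.asarray(T(u - e))) / (2.0 * h)
    return J
```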

If X is a vector of independent normal random variables, we have the following:

$$ \mathbf{x} = \boldsymbol{\mu} + \boldsymbol{\sigma} \mathbf{u}, $$
(80)

where μ is the vector of expected values and σ is a diagonal matrix of standard deviations of the random variables. In this case, we get the following:

$$ \nabla_{\mathbf{u}} \mathbf{x} = \boldsymbol{\sigma}, $$
(81)
$$ \mathbf{u} = \boldsymbol{\sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu}), $$
(82)

and (79) yields the following:

$$ \frac{\partial g}{\partial \beta} = \frac{1}{\beta} \left( \nabla_{\mathbf{x}} g \right)^{T} (\mathbf{x} - \boldsymbol{\mu}). $$
(83)
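As an illustration (our own hypothetical example with arbitrary numbers, not one of the paper's examples), (83) can be checked numerically for a limit state that is linear in x, since in that case the inverse-FORM MPP of (76) is available in closed form:

```python
import numpy as np

# Numerical check of (83) for the linear limit state g(x) = a @ x - b0 with
# independent normal variables X. For linear g the MPP direction alpha is
# constant, so the inverse-FORM solution u = -beta * alpha of (76) is exact.
a = np.array([2.0, -1.0, 0.5])    # gradient of the limit state (arbitrary)
b0 = 1.0                          # threshold (arbitrary)
mu = np.array([10.0, 3.0, 4.0])   # means of X (arbitrary)
sig = np.diag([1.0, 0.5, 2.0])    # diagonal matrix of standard deviations

alpha = sig @ a / np.linalg.norm(sig @ a)

def mpp_x(beta):
    """MPP in x-space: transformation (80) applied to u = -beta * alpha."""
    return mu + sig @ (-beta * alpha)

beta = 3.0
x = mpp_x(beta)
dg_eq83 = (1.0 / beta) * a @ (x - mu)        # equation (83)

h = 1e-6                                      # finite-difference reference
g = lambda b: a @ mpp_x(b) - b0
dg_fd = (g(beta + h) - g(beta - h)) / (2.0 * h)
print(dg_eq83, dg_fd)                         # both equal -||sigma @ a||
```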


About this article


Cite this article

Torii, A.J., Lopez, R.H., Beck, A.T. et al. A performance measure approach for risk optimization. Struct Multidisc Optim 60, 927–947 (2019). https://doi.org/10.1007/s00158-019-02243-5


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s00158-019-02243-5
