1 Introduction

Portfolio selection problems are usually tackled with mean-risk models that characterize the uncertain returns by two scalar characteristics: the mean, which is the expected return, and the risk, a scalar measure of the variability of returns. In the original Markowitz model (Markowitz 1952) the risk is measured by the standard deviation or variance. Several other risk measures have later been considered, thus creating the entire family of mean-risk (Markowitz-type) models. In particular, downside risk measures (Fishburn 1977) are frequently considered instead of the symmetric measurement by the standard deviation. While the original Markowitz model forms a quadratic programming problem, many attempts have been made to linearize the portfolio optimization procedure (cf. Mansini et al. 2014, and references therein). LP solvability is very important for applications to real-life financial decisions where the constructed portfolios have to meet numerous side constraints including minimum transaction lots, transaction costs and mutual funds characteristics (Mansini et al. 2015). A risk measure can be LP computable in the case of discrete random variables, i.e., in the case of returns defined by their realizations under specified scenarios. This applies to a wide class of risk measures recently called polyhedral (Eichhorn and Römisch 2005), as they are represented by convex polyhedral functions for discrete random variables. Several such polyhedral risk measures have been applied to portfolio optimization (Mansini et al. 2014).

Typical risk measures are of the deviation type. The simplest polyhedral risk measures are dispersion measures similar to the variance. Konno and Yamazaki (1991) introduced the portfolio selection model with the mean absolute deviation (MAD). Young (1998) presented the Minimax model, while earlier Yitzhaki (1982) introduced the mean-risk model using Gini's mean (absolute) difference (GMD) as the risk measure. Gini's mean difference turns out to be a special aggregation technique of the multiple criteria LP model (Ogryczak 2000) based on the pointwise comparison of the absolute Lorenz curves. The latter makes the quantile shortfall risk measures directly related to the dual theory of choice under risk (Quiggin 1982). Recently, second order quantile risk measures have been introduced in different ways by many authors (Artzner et al. 1999; Rockafellar and Uryasev 2002). The measure, usually called the Conditional Value at Risk (CVaR) or Tail VaR, represents the mean shortfall at a specified confidence level. Maximization of the CVaR measure is consistent with the second order stochastic dominance (Ogryczak and Ruszczyński 2002a). The LP computable portfolio optimization models are capable of dealing with non-symmetric distributions. Some of the measures, like the downside mean semideviation (half of the MAD), can be combined with the mean itself into optimization criteria (safety or underachievement measures) that remain in harmony with the second order stochastic dominance. Some, like the conditional value at risk (CVaR) (Rockafellar and Uryasev 2002), which has had a great impact on new developments in portfolio optimization, may be interpreted as such a combined functional while allowing one to distinguish the corresponding deviation type risk measure.

Some polyhedral risk measures, like the mean below-target deviation, Minimax or CVaR, represent a typical downside risk focus. On the other hand, measures like MAD or GMD are rather symmetric. Nevertheless, all of them can be extended to enhance the downside risk focus (Michalowski and Ogryczak 2001; Krzemienowski and Ogryczak 2005) while preserving their polyhedrality.

Given the risk-free rate of return \(r_0\), a risky portfolio \({\mathbf x}\) may be sought that maximizes the ratio between the increase of the mean return \(\mu ({\mathbf x})\) over \(r_0\) and the corresponding increase of the risk measure \(\varrho ({\mathbf x})\), compared to the risk-free investment opportunity. Namely, the reward-risk ratio performance measure \({(\mu ({\mathbf x}) - r_0})/{\varrho ({\mathbf x})}\) is maximized. The optimal solution of the corresponding problem is usually called the tangency portfolio, as it corresponds to the tangency point of the so-called capital market line drawn from the intercept \(r_0\) and passing tangent to the risk/return frontier. For polyhedral risk measures the reward-risk ratio optimization problem can be converted into an LP form (Mansini et al. 2003). The reward-risk ratio is well defined for deviation type risk measures. Therefore, when dealing with the CVaR or Minimax risk model, we must replace the corresponding performance measure (coherent risk measure) \(C({\mathbf x})\) with its complementary deviation representation \(\mu ({\mathbf x}) - C({\mathbf x})\) (Chekhlov et al. 2002; Mansini et al. 2003). The reward-risk ratio optimization with polyhedral risk measures can be transformed into LP formulations. Such an LP model, for instance for the MAD type risk measure, then contains T auxiliary variables as well as T corresponding linear inequalities. Actually, the number of structural constraints in the LP model (matrix rows) is proportional to the number of scenarios T, while the number of variables (matrix columns) is proportional to the total of the number of scenarios and the number of instruments, \(T + n\). Hence, its dimensionality is proportional to the number of scenarios T. This does not cause any computational difficulties for a few hundred scenarios, as in computational analysis based on historical data. However, real-life financial analysis must usually be based on more advanced simulation models employed for scenario generation (Carino et al. 1998). One may then get several thousand scenarios (Pflug 2001), thus leading to an LP model with a huge number of auxiliary variables and constraints that is hardly solvable by general LP tools. A similar difficulty for the standard minimum risk portfolio selection has been effectively resolved by taking advantage of LP duality to reduce the number of structural constraints to the number of instruments (Ogryczak and Śliwiński 2011a, b). For the linearized reward-risk ratio models such an approach does not work. However, for the CVaR risk measure we have shown (Ogryczak et al. 2015) that, by taking advantage of the possible inverse formulation of the reward-risk ratio optimization as the ratio \({\varrho ({\mathbf x})}/{(\mu ({\mathbf x}) - r_0)}\) to be minimized, and of the LP dual of the linearized problem, one can limit the number of constraints to the number of instruments.

In this paper we analyze efficient optimization of the reward-risk ratio for various polyhedral risk measures by taking advantage of the possible inverse formulation of the reward-risk ratio optimization as a risk-reward ratio \({\varrho ({\mathbf x})}/{(\mu ({\mathbf x}) - r_0)}\) to be minimized. We show that (under natural assumptions) this ratio optimization is consistent with the SSD rules, even though the ratio does not represent a coherent risk measure (Artzner et al. 1999). Further, while transforming this ratio optimization to an LP model, we use the LP duality equivalence to get a model formulation providing much higher computational efficiency. The number of structural constraints in the introduced model is proportional to the number of instruments n, while only the number of variables is proportional to the number of scenarios T, which does not seriously affect the efficiency of the simplex method. Therefore, the model can effectively be solved with general LP solvers even for very large numbers of scenarios. Indeed, the computation time for the case of fifty thousand scenarios and one hundred instruments is then below a minute.

2 Portfolio optimization and risk measures

The portfolio optimization problem considered in this paper follows the original Markowitz formulation and is based on a single period model of investment. At the beginning of a period, an investor allocates the capital among various securities, thus assigning a nonnegative weight (share of the capital) to each security. Let \(J = \{ 1,2,\ldots ,n \} \) denote a set of securities considered for an investment. For each security \(j \in J\), its rate of return is represented by a random variable \(R_j\) with a given mean \(\mu _j = {\mathbb E}\{ R_j \}\). Further, let \({\mathbf x}= (x_j)_{j=1,2,\ldots ,n}\) denote a vector of decision variables \(x_j\) expressing the weights defining a portfolio. The weights must satisfy a set of constraints to represent a portfolio. The simplest way of defining a feasible set Q is by the requirement that the weights must sum to one and be nonnegative (short sales are not allowed), i.e.

$$\begin{aligned} Q = \left\{ {\mathbf x}\ : \ \sum _{j=1}^{n}x_j =1, \quad x_j \ge 0 \quad \text{ for }\ j=1,\ldots ,n\right\} \end{aligned}$$
(1)

Hereafter, we perform the detailed analysis for the set Q given by constraints (1). Nevertheless, the presented results can easily be adapted to a general LP feasible set given as a system of linear equations and inequalities, thus allowing one to include short sales, upper bounds on single shares or portfolio structure restrictions which may be faced by a real-life investor.

Each portfolio \(\mathbf{x}\) defines a corresponding random variable \(R_{{\mathbf x}} = \sum _{j=1}^{n}R_j x_j\) that represents the portfolio rate of return, while the expected value can be computed as \(\mu ({\mathbf x}) = \sum _{j=1}^{n}\mu _j x_j\). We consider T scenarios with probabilities \(p_t\) (where \(t=1,\ldots ,T\)). We assume that for each random variable \(R_j\) its realization \(r_{jt}\) under the scenario t is known. Typically, the realizations are derived from historical data, treating T historical periods as equally probable scenarios (\(p_t=1/T\)), although the models we analyze do not rely on this simplification. The realizations of the portfolio return \(R_{{\mathbf x}}\) are given as \(y_t = \sum _{j=1}^n r_{jt} x_j\).

The portfolio optimization problem is modeled as a mean-risk bicriteria optimization problem where the mean \(\mu (\mathbf{x})\) is maximized and the risk measure \(\varrho (\mathbf{x})\) is minimized. In the original Markowitz model, the standard deviation \(\sigma (\mathbf{x}) = [ {\mathbb E}\{ ( R_\mathbf{x} - \mu (\mathbf{x}) )^2 \}]^{1/2}\) was used as the risk measure. Actually, in most optimization models the variance \(\sigma ^2(\mathbf{x}) = {\mathbb E}\{ ( R_\mathbf{x} - \mu (\mathbf{x}) )^2 \}\) can be used equivalently. The latter simplifies the computations, as

$$\begin{aligned} \sigma ^2({\mathbf x}) = \sum _{t=1}^{T}\left( \mu ({\mathbf x})- \sum _{j=1}^{n}r_{jt}x_j\right) ^2 p_t = \sum _{t=1}^{T} \left( \sum _{j=1}^{n}(\mu _j - r_{jt})x_j\right) ^2 p_t = {\mathbf x}^T \mathbf{Cx} \end{aligned}$$

where \(\mathbf{C}\) is the \(n\times n\) symmetric matrix with elements

$$\begin{aligned} c_{jk} = \sum _{t=1}^{T} (\mu _j - r_{jt})(\mu _k - r_{kt}) p_t \end{aligned}$$

i.e. the covariance matrix of the random variables \(R_j\). It can be argued that the variability of the rate of return above the mean should not be penalized, since investors are concerned about underperformance rather than overperformance of a portfolio. This led Markowitz (1959) to propose downside risk measures, such as the downside standard deviation and the semivariance \(\bar{\sigma }^2({\mathbf x}) = \sum _{t=1}^{T} ( \max \{0, \mu ({\mathbf x})- \sum _{j=1}^{n}r_{jt}x_j\})^2 p_t\), to replace the variance as the risk measure. Consequently, one observes a growing popularity of downside risk models for portfolio selection (Sortino and Forsey 1996). Several other risk measures \(\varrho ({\mathbf x})\) have later been considered, thus creating the entire family of mean-risk models (cf., Mansini et al. 2014, 2015). These risk measures, similar to the standard deviation, are not affected by any shift of the outcome scale and are equal to 0 in the case of a risk-free portfolio while taking positive values for any risky portfolio. Several risk measures applied to portfolio optimization belong to a wide class of risk measures recently called polyhedral (Eichhorn and Römisch 2005), as they are represented by convex polyhedral (piece-wise linear) functions for discrete random variables. Such risk measures are LP computable in the case of discrete random variables, i.e., in the case of returns defined by their realizations under specified scenarios. The LP solvability of the portfolio optimization problems is very important for applications to real-life financial decisions where the constructed portfolios have to meet numerous side constraints including minimum transaction lots, transaction costs and mutual funds characteristics (Mansini et al. 2015).
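For returns given by scenario realizations, all of these quantities reduce to probability-weighted sums. A minimal numpy sketch (with hypothetical scenario data; the names are illustrative only):

```python
import numpy as np

# Scenario returns: r[t, j] is the return of security j under scenario t,
# with scenario probabilities p[t] (hypothetical data for illustration).
rng = np.random.default_rng(0)
T, n = 500, 4
r = rng.normal(0.005, 0.02, size=(T, n))
p = np.full(T, 1.0 / T)

x = np.full(n, 1.0 / n)        # an equally weighted portfolio
mu = p @ r                     # mean returns mu_j
y = r @ x                      # portfolio realizations y_t
mean = mu @ x                  # portfolio mean mu(x)

# Covariance matrix c_{jk} = sum_t (mu_j - r_{jt})(mu_k - r_{kt}) p_t
d = mu - r                     # deviations from the means, per scenario
C = d.T @ (d * p[:, None])
variance = x @ C @ x           # sigma^2(x) = x^T C x

# The semivariance penalizes only below-mean outcomes
semivariance = p @ np.maximum(mean - y, 0.0) ** 2
```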

Typical risk measures are deviation type and most of them are not directly consistent with the stochastic dominance order or other axiomatic models of risk-averse preferences (Rothschild and Stiglitz 1970) and coherent risk measurement (Artzner et al. 1999). In stochastic dominance, uncertain returns (modeled as random variables) are compared by pointwise comparison of some performance functions constructed from their distribution functions. The first performance function \(F^{(1)}_{{\mathbf x}}\) is defined as the right-continuous cumulative distribution function: \(F^{(1)}_{{\mathbf x}}(\eta ) = F_{{\mathbf x}}(\eta ) = {\mathbb P}\{ R_{{\mathbf x}} \le \eta \}\) and it defines the First order Stochastic Dominance (FSD). The second function is derived from the first as \( F^{(2)}_{{\mathbf x}}(\eta ) = \int _{-\infty }^{\eta } F_{{\mathbf x}}(\xi ) \ d\xi \) and it defines the Second order Stochastic Dominance (SSD). We say that portfolio \( {{\mathbf x}^\prime }\) dominates \({{\mathbf x}^{{\prime \prime }}}\) under the SSD (\( R_{{\mathbf x}^\prime } \succ _{_{SSD}} R_{{\mathbf x}^{{\prime \prime }}}\)), if \(F^{(2)}_{{\mathbf x}^\prime }(\eta ) \le F^{(2)}_{{\mathbf x}^{{\prime \prime }}}(\eta )\) for all \(\eta \), with at least one strict inequality. A feasible portfolio \({\mathbf x}^0 \in Q\) is called SSD efficient if there is no \({\mathbf x}\in Q\) such that \(R_{{\mathbf x}} \succ _{_{SSD}} R_{{\mathbf x}^0}\). Stochastic dominance relates the notion of risk to a possible failure of achieving some targets. As shown by Ogryczak and Ruszczyński (1999), function \(F^{(2)}_{{\mathbf x}}\), used to define the SSD relation, can also be presented as follows:

$$\begin{aligned} F^{(2)}_{{\mathbf x}}(\eta ) = {\mathbb E}\left\{ \max \{ \eta - R_{{\mathbf x}}, 0 \}\right\} \end{aligned}$$
(2)

and thereby its values are LP computable for returns represented by their realizations \(y_t\).

The simplest shortfall criterion for a specific target value \(\tau \) is the mean below-target deviation

$$\begin{aligned} \bar{\delta }_{\tau }({\mathbf x}) = {\mathbb E}\{ \max \{ \tau - R_{{\mathbf x}}, 0 \} \} = F^{(2)}_{{\mathbf x}}(\tau ) \end{aligned}$$
(3)

The mean below-target deviation, when minimized, is LP computable for returns represented by their realizations \(y_t\) as:

$$\begin{aligned} \bar{\delta }_\tau = \min \sum _{t=1}^T d_t^- p_t \quad \text{ subject } \text{ to } \quad d_t^- \ge \tau - y_t , \ d_t^- \ge 0 \quad \text{ for } t=1,\ldots ,T \end{aligned}$$
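For a discrete distribution this LP has the obvious closed-form value \({\mathbb E}\{\max \{\tau - R_{\mathbf x},0\}\}\), which makes it easy to sanity-check an LP implementation. A minimal sketch using scipy's linprog on hypothetical realizations (any LP solver could be used):

```python
import numpy as np
from scipy.optimize import linprog

y = np.array([0.03, -0.01, 0.02, -0.04, 0.01])   # portfolio realizations y_t
p = np.full(5, 0.2)                              # scenario probabilities p_t
tau = 0.0                                        # target return

# Direct evaluation of (3): E{ max(tau - R_x, 0) }
delta_direct = p @ np.maximum(tau - y, 0.0)

# The same value via the LP above: d_t >= tau - y_t  <=>  -d_t <= y_t - tau,
# with d_t >= 0 as the default variable bounds of linprog.
res = linprog(c=p, A_ub=-np.eye(len(y)), b_ub=y - tau, method="highs")
assert np.isclose(res.fun, delta_direct)
```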

When the mean \(\mu ({\mathbf x})\) is used instead of the fixed target the value \(F^{(2)}_{{\mathbf x}}(\mu ({\mathbf x}))\) defines the risk measure known as the downside mean semideviation from the mean

$$\begin{aligned} \bar{\delta }({\mathbf x}) = {\mathbb E}\{ \max \{ \mu ({\mathbf x}) - R_{{\mathbf x}}, 0 \} \} = F^{(2)}_{{\mathbf x}}(\mu ({\mathbf x})). \end{aligned}$$
(4)

The downside mean semideviation is always equal to the upside one, and therefore we refer to it hereafter as the mean semideviation (SMAD). The mean semideviation is half of the mean absolute deviation (MAD) from the mean (Ogryczak and Ruszczyński 1999): \(\delta ({\mathbf x}) = {\mathbb E}\{ | R_\mathbf{x} - \mu (\mathbf{x}) | \}= 2 \bar{\delta }({\mathbf x}).\) Hence, the corresponding portfolio optimization model is equivalent to the MAD optimization model. However, to avoid misunderstanding, we will use the notion SMAD for the mean semideviation risk measure and the corresponding optimization models. Many authors have pointed out that the SMAD model opens up opportunities for more specific modeling of the downside risk (Speranza 1993). Since \(\bar{\delta }({\mathbf x}) = F^{(2)}_{{\mathbf x}}(\mu ({\mathbf x}))\), the mean semideviation (4) is LP computable (when minimized) for a discrete random variable represented by its realizations \(y_t\). However, due to the use of the distribution-dependent target value \(\mu ({\mathbf x})\), the mean semideviation cannot be directly considered an SSD consistent risk measure. SSD consistency (Ogryczak and Ruszczyński 1999) and coherency (Mansini et al. 2003) of the SMAD model can be achieved with maximization of the complementary risk measure \(\mu _\delta ({\mathbf x}) =\mu ({\mathbf x}) - \bar{\delta }({\mathbf x})= {\mathbb E}\{ \min \{ \mu ({\mathbf x}), R_{{\mathbf x}} \} \}\), which also remains LP computable for a discrete random variable represented by its realizations \(y_t\).

An alternative characterization of the SSD relation can be achieved with the so-called Absolute Lorenz Curves (ALC) (Ogryczak and Ruszczyński 2002a), i.e. the second order quantile functions:

$$\begin{aligned} F^{(-2)}_{{\mathbf x}}(p) = \int _0^p F_{{\mathbf x}}^{(-1)}(\alpha ) d\alpha \quad \text{ for } 0 < p \le 1 \quad \text{ and } \quad F^{(-2)}_{{\mathbf x}}(0) = 0 , \end{aligned}$$
(5)

where the quantile function \(F_{{\mathbf x}}^{(-1)}(p) = \inf \ \{ \eta : F_{{\mathbf x}}(\eta ) \ge p \}\) is the left-continuous inverse of the cumulative distribution function \(F_{{\mathbf x}}\). As \(F_{{\mathbf x}}\) and \(F_{{\mathbf x}}^{(-1)}\) represent a pair of nondecreasing inverse functions, their integrals \(F_{{\mathbf x}}^{(2)}\) and \(F^{(-2)}_{{\mathbf x}}\) are a pair of convex conjugate functions (Rockafellar 1970), i.e.

$$\begin{aligned} F^{(-2)}_{{\mathbf x}}(\beta ) = \max _{\eta \in R}\ \left[ \beta \eta - F_{{\mathbf x}}^{(2)}(\eta ) \right] . \end{aligned}$$
(6)

Therefore, pointwise comparison of ALCs is equivalent to the SSD relation (Ogryczak and Ruszczyński 2002a) in the sense that \(R_{{\mathbf x}^\prime } \succeq _{_{SSD}} R_{{\mathbf x}^{{\prime \prime }}}\) if and only if \(F^{(-2)}_{{\mathbf x}^\prime }(\beta ) \ge F^{(-2)}_{{\mathbf x}^{\prime \prime }}(\beta )\) for all \(0 < \beta \le 1\). Moreover, following (2) and (6), the ALC values can be computed by optimization:

$$\begin{aligned} F^{(-2)}_{{\mathbf x}}(\beta ) = \max _{\eta \in R}\ \left[ \beta \eta - {\mathbb E}\{ \max \{ \eta - R_{{\mathbf x}}, 0 \} \} \right] \end{aligned}$$
(7)

where for a discrete random variable represented by its realizations \(y_t\) problem (7) becomes an LP.
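A minimal sketch of this LP using scipy's linprog on hypothetical data (the function name is illustrative); for a discrete distribution the maximum over \(\eta \) is attained at a \(\beta \)-quantile of the return distribution, so the LP value equals the probability-weighted return mass of the worst \(\beta \)-tail:

```python
import numpy as np
from scipy.optimize import linprog

def alc(y, p, beta):
    """Value F^(-2)(beta) via the LP form of (7):
    max over eta of [beta*eta - sum_t p_t * max(eta - y_t, 0)]."""
    T = len(y)
    # Variables: [eta, d_1, ..., d_T]; linprog minimizes, so negate.
    c = np.concatenate(([-beta], p))
    A_ub = np.hstack([np.ones((T, 1)), -np.eye(T)])   # eta - d_t <= y_t
    bounds = [(None, None)] + [(0, None)] * T         # eta free, d_t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=y, bounds=bounds, method="highs")
    return -res.fun

y = np.array([0.03, -0.01, 0.02, -0.04, 0.01])
p = np.full(5, 0.2)
# beta = 0.4 covers the two worst scenarios: 0.2*(-0.04) + 0.2*(-0.01) = -0.01
assert np.isclose(alc(y, p, 0.4), -0.01)
```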

For any real tolerance level \(0 < \beta \le 1\), the normalized value of the ALC defined as

$$\begin{aligned} M_{\beta }({\mathbf x})=F^{(-2)}_{{\mathbf x}}(\beta )/\beta \end{aligned}$$
(8)

is the Worst Conditional Expectation or Tail VaR, which is now commonly called the Conditional Value-at-Risk (CVaR). This name was introduced by Rockafellar and Uryasev (2000) who considered (similarly to the Expected Shortfall of Embrechts et al. (1997)) the measure CVaR defined as \({\mathbb E}\,\{ R_{{\mathbf x}} | R_{{\mathbf x}}\le F^{(-1)}_{{\mathbf x}}(\beta ) \}\) for continuous distributions, showing that it could then be expressed by a formula analogous to (7) and thus be potentially LP computable. The approach has been further expanded to general distributions in Rockafellar and Uryasev (2002). For additional discussion of the relations between various definitions of the measures we refer to Ogryczak and Ruszczyński (2002b). The CVaR measure is an increasing function of the tolerance level \(\beta \), with \(M_1({\mathbf x})=\mu ({\mathbf x})\). For any \(0< \beta < 1\), the CVaR measure is SSD consistent (Ogryczak and Ruszczyński 2002a) and coherent (Artzner et al. 1999). In contrast to deviation type risk measures, larger values of coherent measures are preferred, and therefore the measures are sometimes called safety measures (Mansini et al. 2003). Due to (7), for a discrete random variable represented by its realizations \(y_t\) the CVaR measures are LP computable. It is important to notice that although the quantile risk measures (VaR and CVaR) were introduced in banking as extreme risk measures for very small tolerance levels (like \(\beta =0.05\)), in portfolio optimization good results have been obtained with rather larger tolerance levels (Mansini et al. 2003). For \(\beta \) approaching 0, the CVaR measure tends to the Minimax measure

$$\begin{aligned} M({\mathbf x}) = \min _{t=1,\ldots ,T} y_{t} \end{aligned}$$
(9)

introduced to portfolio optimization by Young (1998). Note that the maximum (downside) semideviation

$$\begin{aligned} \Delta ({\mathbf x}) = \mu ({\mathbf x}) - M({\mathbf x}) = \max _{t=1,\ldots ,T} (\mu ({\mathbf x}) - y_{t} ) \end{aligned}$$
(10)

and the conditional \(\beta \)-deviation

$$\begin{aligned} \Delta _{\beta }({\mathbf x}) = \mu ({\mathbf x}) - M_{\beta }({\mathbf x}) \quad \text{ for }\ 0 < \beta \le 1 , \end{aligned}$$
(11)

respectively, represent the corresponding deviation type polyhedral risk measures. They may be interpreted as drawdown measures (Chekhlov et al. 2002). For \(\beta =0.5\) the measure \(\Delta _{0.5}({\mathbf x})\) represents the mean absolute deviation from the median (Mansini et al. 2003).
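All of these measures are directly computable from sorted scenario realizations, which is convenient for verifying the LP models discussed later. A minimal sketch (hypothetical data; the cvar helper is illustrative):

```python
import numpy as np

def cvar(y, p, beta):
    """M_beta(x) = F^(-2)(beta)/beta: the mean of the worst beta-tail of
    a discrete return distribution (direct evaluation, not the LP form)."""
    order = np.argsort(y)                  # worst outcomes first
    y, p = y[order], p[order]
    cum = np.cumsum(p)
    k = np.searchsorted(cum, beta)         # scenarios fully inside the tail
    tail = p[:k] @ y[:k] + (beta - (cum[k - 1] if k > 0 else 0.0)) * y[k]
    return tail / beta

y = np.array([0.03, -0.01, 0.02, -0.04, 0.01])
p = np.full(5, 0.2)
mu = p @ y
M_beta = cvar(y, p, 0.4)     # CVaR measure M_beta(x)
M = y.min()                  # Minimax measure (9)
Delta = mu - M               # maximum semideviation (10)
Delta_beta = mu - M_beta     # conditional beta-deviation (11)
```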

Fig. 1 Tangency portfolio \(TP_\tau \) by reward-risk ratio maximization

The commonly used approach to implement the Markowitz-type mean-risk models is based on the use of a specified lower bound \(\mu _0\) on the expected return while minimizing the risk criterion. An alternative approach to portfolio optimization looks for a risky portfolio offering the maximum increase of the mean return compared to the risk-free target \(\tau \). Namely, given the risk-free rate of return \(\tau \), a risky portfolio \({\mathbf x}\) that maximizes the ratio \({(\mu ({\mathbf x}) - \tau )}/{\varrho ({\mathbf x})}\) is sought. This leads us to the following ratio optimization problem:

$$\begin{aligned} \max \ \left\{ \frac{\mu ({\mathbf x}) - \tau }{\varrho ({\mathbf x})}\ : \ {\mathbf x}\in Q\right\} . \end{aligned}$$
(12)

We illustrate the ratio optimization (12) in Fig. 1. The approach is appealing with respect to preference modeling and has been applied to standard portfolio selection as well as to (extended) index tracking problems (with a benchmark as the target). For the classical Markowitz modeling of risk with the standard deviation one gets the Sharpe ratio (Sharpe 1966):

$$\begin{aligned} \max \ \left\{ \frac{\mu ({\mathbf x}) - \tau }{\sigma ({\mathbf x})}\ : \ {\mathbf x}\in Q\right\} . \end{aligned}$$
(13)

Many approaches have replaced the standard deviation in the Sharpe ratio with an alternative risk measure. First of all, the Sortino ratio (Sortino and Price 1994) replaces the standard deviation with the downside standard deviation:

$$\begin{aligned} \max \ \left\{ \frac{\mu ({\mathbf x}) - \tau }{\bar{\sigma }({\mathbf x})}\ : \ {\mathbf x}\in Q\right\} . \end{aligned}$$
(14)

Further, the reward-risk ratio may be optimized for various polyhedral risk measures. For the latter, the reward-risk ratio optimization problem can be converted into an LP form (Mansini et al. 2003).

When the risk-free return \(r_0\) is used instead of the target \(\tau \), then the ratio optimization (12) corresponds to the classical Tobin's model (Tobin 1958) of the modern portfolio theory (MPT), where the capital market line (CML) is the line drawn from the risk-free rate at the intercept that passes tangent to the mean-risk efficient frontier. Any point on this line provides the maximum return for each level of risk. The tangency (tangent, super-efficient) portfolio is the portfolio of risky assets on the efficient frontier at the point where the CML is tangent to the frontier. It is a risky portfolio offering the maximum increase of the mean return compared to the risk-free investment opportunity. Namely, given the risk-free rate of return \(r_0\), one seeks a risky portfolio \({\mathbf x}\) that maximizes the ratio \({(\mu ({\mathbf x}) -r_0})/{\varrho ({\mathbf x})}\).

Fig. 2 Tangency portfolio \(TP_\tau \) by risk-reward ratio minimization

Instead of the reward-risk ratio maximization one may consider an equivalent model of the risk-reward ratio minimization (see Fig. 2):

$$\begin{aligned} \min \ \left\{ \frac{\varrho ({\mathbf x})}{\mu ({\mathbf x}) - \tau }\ : \ {\mathbf x}\in Q\right\} . \end{aligned}$$
(15)

Actually, this is a classical model for the tangency portfolio as considered by Markowitz (1959) and used in statistics books (Pratt et al. 1995).

Both ratio optimization models (12) and (15) are theoretically equivalent. However, the risk-reward ratio optimization (15) enables easy control of the denominator positivity by the simple inequality \({\mu ({\mathbf x}) \ge \tau + \varepsilon }\) added to the problem constraints. In the standard reward-risk ratio optimization the corresponding inequality \({\varrho ({\mathbf x}) \ge \varepsilon }\) with convex risk measures requires a complex discrete model even for LP computable risk measures (Guastaroba et al. 2016). Model (15) may also be additionally regularized for the case of multiple risk-free solutions. The regularization \(({\varrho ({\mathbf x}) + \varepsilon })/({\mu ({\mathbf x}) - \tau })\) guarantees that the risk-free portfolio with the highest mean return will then be selected.

Moreover, since

$$\begin{aligned} -1 + \frac{\varrho ({\mathbf x})}{\mu ({\mathbf x}) - \tau } = \frac{\tau - (\mu ({\mathbf x}) - \varrho ({\mathbf x}))}{\mu ({\mathbf x}) - \tau } \end{aligned}$$

the risk-reward ratio formulation allows us to justify SSD consistency of many ratio optimization approaches. Indeed, the following theorem is valid (Ogryczak et al. 2015):

Theorem 1

If the risk measure \(\varrho ({\mathbf x})\) is mean-complementary SSD consistent, i.e.

$$\begin{aligned} R_{{\mathbf x}^\prime } \succeq _{_{SSD}} R_{{\mathbf x}^{{\prime \prime }}} \quad \Rightarrow \quad \mu ({\mathbf x}^\prime ) - \varrho ({\mathbf x}^\prime ) \ge \mu ({\mathbf x}^{\prime \prime }) - \varrho ({\mathbf x}^{\prime \prime }) \end{aligned}$$

then the risk-reward ratio optimization (15) and equivalently (12) is SSD consistent, i.e.

$$\begin{aligned} R_{{\mathbf x}^\prime } \succeq _{_{SSD}} R_{{\mathbf x}^{{\prime \prime }}} \quad \Rightarrow \quad \frac{\varrho ({\mathbf x}^\prime )}{\mu ({\mathbf x}^\prime ) - \tau } \le \frac{\varrho ({\mathbf x}^{\prime \prime })}{\mu ({\mathbf x}^{\prime \prime }) - \tau } \quad \text{ and } \quad \frac{\mu ({\mathbf x}^\prime ) - \tau }{\varrho ({\mathbf x}^\prime )} \ge \frac{\mu ({\mathbf x}^{\prime \prime }) - \tau }{\varrho ({\mathbf x}^{\prime \prime })} \end{aligned}$$

provided that \(\mu ({\mathbf x})> \tau > \mu ({\mathbf x}) - \varrho ({\mathbf x})\).

The risk-reward ratio models can usually be reformulated into simpler and computationally more efficient forms. Note that both the Sharpe ratio optimization (13) and the Sortino ratio optimization (14) represent complex nonlinear problems. When switched to the risk-reward ratio form, they can easily be reformulated as classical quadratic optimization problems with linear constraints. Actually, the risk-reward ratio model equivalent to the Sharpe ratio maximization takes the form:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle \frac{\displaystyle \sum _{j=1}^{n}\sum _{k=1}^{n} c_{jk}x_j x_k}{\displaystyle (z - \tau )^2} \\ \text{ s.t. } &{} \displaystyle z= \sum _{j=1}^{n}\mu _j x_j,\ \sum _{j=1}^n x_j = 1,\ z \ge \tau +\varepsilon \\ &{} x_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(16)

Applying the Charnes and Cooper (1962) transformation with the substitutions \(\displaystyle \tilde{x}_j={x_j}/({z-\tau })\), \(\displaystyle v={z}/({z-\tau })\) and \(\displaystyle v_0={1}/({z-\tau })\), the problem can be reformulated into the following convex quadratic program (Cornuejols and Tütüncü 2007):

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle {\sum _{j=1}^{n}\sum _{k=1}^{n} c_{jk}\tilde{x}_j \tilde{x}_k} \\ \text{ s.t. } &{} v -v_0 \tau =1,\ v_0 \le 1/\varepsilon \\ &{} \displaystyle v= \sum _{j=1}^{n}\mu _j \tilde{x}_j, \ \displaystyle \sum _{j=1}^n \tilde{x}_j = v_0\\ &{} \tilde{x}_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(17)
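For illustration, problem (17) can be stated almost verbatim in a convex optimization modeling package. A minimal sketch using cvxpy with hypothetical data (the package choice is an assumption; any convex QP solver could be used):

```python
import cvxpy as cp
import numpy as np

# Hypothetical inputs: mean returns mu, covariance C, target tau, small eps
rng = np.random.default_rng(1)
n = 4
mu = np.array([0.06, 0.05, 0.07, 0.04])
A = rng.normal(size=(n, n))
C = A @ A.T / 100                 # a positive semidefinite covariance matrix
tau, eps = 0.03, 1e-6

xt = cp.Variable(n, nonneg=True)  # transformed weights x_tilde
v, v0 = cp.Variable(), cp.Variable()
prob = cp.Problem(
    cp.Minimize(cp.quad_form(xt, C)),
    [v - v0 * tau == 1,
     v0 <= 1 / eps,
     v == mu @ xt,
     cp.sum(xt) == v0],
)
prob.solve()
x = xt.value / v0.value           # recover the original portfolio weights
```

The Sortino reformulation (19) below follows the same pattern, with the additional variables \(\tilde{d}_t\) and the T corresponding linear constraints added to the problem.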

The risk-reward ratio model equivalent to the Sortino ratio maximization takes the form:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle \frac{\displaystyle \sum _{t=1}^{T} p_t d_{t}^2}{\displaystyle (z - \tau )^2} \\ \text{ s.t. } &{} \displaystyle d_{t} \ge z - \sum _{j=1}^{n}r_{jt} {x}_j, \ d_{t} \ge 0 \quad t=1,\ldots ,T \\ &{} \displaystyle z= \sum _{j=1}^{n}\mu _j x_j,\ \sum _{j=1}^n x_j = 1,\ z \ge \tau +\varepsilon \\ &{} x_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(18)

By substitutions: \(\displaystyle \tilde{d}_{t} = {d_{t}}/({z-\tau })\), \(\displaystyle \tilde{x}_j={x_j}/({z-\tau })\), \(\displaystyle v={z}/({z-\tau })\) and \(\displaystyle v_0={1}/({z-\tau })\) it can be reformulated into the following convex quadratic program:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle {\sum _{t=1}^{T} p_t \tilde{d}_{t}^2} \\ \text{ s.t. } &{} \displaystyle \tilde{d}_{t} \ge v - \sum _{j=1}^{n}r_{jt} \tilde{x}_j, \ \tilde{d}_{t} \ge 0 \quad t=1,\ldots ,T \\ &{} v -v_0 \tau =1,\ v_0 \le 1/\varepsilon \\ &{} \displaystyle v= \sum _{j=1}^{n}\mu _j \tilde{x}_j, \ \displaystyle \sum _{j=1}^n \tilde{x}_j = v_0\\ &{} \tilde{x}_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(19)

Note that the possible formulation of the Sortino ratio optimization as the corresponding risk-reward ratio based on the standard semideviation allows us to justify its SSD consistency. Indeed, as the standard semideviation is a mean-complementary SSD consistent risk measure (Ogryczak and Ruszczyński 1999), applying Theorem 1 we get the following assertion.

Corollary 1

The Sortino ratio optimization is SSD consistent provided that \(\mu ({\mathbf x})> \tau > \mu ({\mathbf x}) - \bar{\sigma }({\mathbf x})\).

In the following section we analyze efficient computation of the tangency portfolio for various polyhedral risk measures, taking advantage of the possible inverse formulation as the risk-reward ratio optimization model

$$\begin{aligned} \min \ \left\{ \frac{\varrho ({\mathbf x}) + \varepsilon }{\mu ({\mathbf x}) - \tau }\ : \ \mu ({\mathbf x}) \ge \tau + \varepsilon ,\ {\mathbf x}\in Q\right\} . \end{aligned}$$
(20)

Note that due to Theorem 1 (under natural assumptions) this ratio optimization is consistent with the SSD rules in the following sense.

Corollary 2

Let \({\mathbf x}^0\) be an optimal portfolio to the risk-reward ratio optimization problem (20) that satisfies condition \(\mu ({{\mathbf x}^0}) - \varrho ({{\mathbf x}^0}) \le \tau \). For any deviation risk measure \(\varrho ({\mathbf x})\) which is mean-complementary SSD consistent, portfolio \({\mathbf x}^0\) is SSD nondominated with the exception of alternative optimal portfolios having the same values of mean return \(\mu ({{\mathbf x}^0})\) and risk measure \(\varrho ({{\mathbf x}^0})\).

We analyze the risk-reward ratio optimization model (20) for the most popular polyhedral risk measures: SMAD, CVaR, Minimax and the mean below-target deviation. We show that this ratio optimization is consistent with the SSD rules, even though the ratio does not represent a coherent risk measure (Artzner et al. 1999). Most importantly, we demonstrate that by transforming this ratio optimization to an LP model and taking advantage of LP duality we get model formulations providing high computational efficiency and scalability with respect to the number of scenarios. Actually, the number of structural constraints in the introduced LP models is proportional to the number of instruments n, while only the number of variables is proportional to the number of scenarios T, which does not seriously affect the efficiency of the simplex method. Therefore, the models can effectively be solved with general LP solvers even for very large numbers of scenarios. The simplex method applied to a primal problem actually solves both the primal and the dual (Vanderbei 2014). Since the dual of the dual is the primal, applying the simplex method to the dual also solves both the primal and the dual problem. Therefore, one can take advantage of the smaller dimensionality of the dual model without the need for any additional computations to recover the primal solution (the optimal portfolio).

3 Computational LP models for basic polyhedral risk measures

3.1 SMAD model

For the SMAD model, the downside mean semideviation from the mean \(\bar{\delta }({\mathbf x})\) may be considered the basic risk measure. The risk-reward ratio model (20) for the SMAD risk measurement may be formulated as follows:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle \frac{\displaystyle \sum _{t=1}^{T} p_t d_{t} + \varepsilon }{\displaystyle z - \tau } \\ \text{ s.t. } &{} \displaystyle d_{t} \ge z - \sum _{j=1}^{n}r_{jt} {x}_j, \ d_{t} \ge 0 \quad t=1,\ldots ,T \\ &{} \displaystyle z= \sum _{j=1}^{n}\mu _j x_j,\ \sum _{j=1}^n x_j = 1,\ z \ge \tau +\varepsilon \\ &{} x_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(21)

Note that the downside mean semideviation is a mean-complementary SSD consistent risk measure (Ogryczak and Ruszczyński 1999); thus, applying Theorem 1, one gets:

Corollary 3

The SMAD risk-reward ratio optimization (21) is SSD consistent provided that \(\mu ({\mathbf x})> \tau > \mu ({\mathbf x}) - \bar{\delta }({\mathbf x})\).

The ratio model (21) can be linearized by the Charnes and Cooper substitutions (Charnes and Cooper 1962; Williams 2013) \(\displaystyle \tilde{d}_{t} = {d_{t}}/({z-\tau })\), \(\displaystyle \tilde{x}_j={x_j}/({z-\tau })\), \(\displaystyle v={z}/({z-\tau })\) and \(\displaystyle v_0={1}/({z-\tau })\), leading to the following LP formulation:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle \sum _{t=1}^{T} p_t \tilde{d}_{t} +\varepsilon v_0\\ \text{ s.t. } &{} \displaystyle \tilde{d}_{t} \ge v - \sum _{j=1}^{n}r_{jt} \tilde{x}_j, \ \tilde{d}_{t} \ge 0 \quad t=1,\ldots ,T \\ &{} v -\tau v_0 =1,\ v_0 \le 1/\varepsilon \\ &{} \displaystyle v= \sum _{j=1}^{n}\mu _j \tilde{x}_j, \ \displaystyle \sum _{j=1}^n \tilde{x}_j = v_0\\ &{} \tilde{x}_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(22)

The original values of \(x_j\) can then be recovered by dividing \(\tilde{x}_j\) by \(v_0\).

After eliminating variables v and \(v_0\) defined by equations, one gets the most compact formulation:

$$\begin{aligned} \min&\displaystyle \sum _{j=1}^{n}\varepsilon \tilde{x}_j + \sum _{t=1}^{T} p_t \tilde{d}_{t}&\end{aligned}$$
(23a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \tilde{d}_{t} \ge \sum _{j=1}^{n}(\mu _j - r_{jt}) \tilde{x}_j&\quad t=1,\ldots ,T \end{aligned}$$
(23b)
$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j -\tau )\tilde{x}_j =1&\end{aligned}$$
(23c)
$$\begin{aligned}&\sum _{j=1}^n \tilde{x}_j \le \frac{1}{\varepsilon }&\end{aligned}$$
(23d)
$$\begin{aligned}&\tilde{x}_j \ge 0&\quad j=1,\ldots ,n \end{aligned}$$
(23e)
$$\begin{aligned}&\tilde{d}_{t} \ge 0&\quad t=1,\ldots ,T \end{aligned}$$
(23f)

with \(T+n\) variables and \(T+2\) constraints.
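A minimal sketch of the compact primal model (23) using scipy's linprog (hypothetical inputs; the function name is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def smad_ratio_primal(r, p, tau, eps=1e-6):
    """Compact SMAD risk-reward ratio LP (23); r[t, j] are scenario returns.
    Variables are stacked as [x_tilde (n), d_tilde (T)]."""
    T, n = r.shape
    mu = p @ r
    c = np.concatenate([np.full(n, eps), p])                  # objective (23a)
    A_ub = np.block([[mu - r, -np.eye(T)],                    # rows (23b)
                     [np.ones((1, n)), np.zeros((1, T))]])    # row (23d)
    b_ub = np.concatenate([np.zeros(T), [1.0 / eps]])
    A_eq = np.concatenate([mu - tau, np.zeros(T)])[None, :]   # row (23c)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  method="highs")                             # x, d >= 0 default
    xt = res.x[:n]
    return xt / xt.sum()          # normalize x_tilde back to portfolio weights
```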

Taking the LP dual to model (23) one gets the model:

$$\begin{aligned} \max&\ \displaystyle q - h&\end{aligned}$$
(24a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \sum _{t=1}^{T} ( r_{jt} - \mu _j ) u_{t} + (\mu _j -\tau )q -\varepsilon h \le \varepsilon&\quad j=1,\ldots ,n\end{aligned}$$
(24b)
$$\begin{aligned}&\displaystyle h \ge 0 \end{aligned}$$
(24c)
$$\begin{aligned}&\ 0 \le u_t \le {p_t}&\quad t=1,\ldots ,T \end{aligned}$$
(24d)

containing T variables \(u_{t}\) corresponding to inequalities (23b), an unbounded (free) variable q corresponding to equation (23c) and a nonnegative variable h corresponding to inequality (23d), but only n structural constraints. Indeed, the T constraints corresponding to variables \(d_{t}\) from (23) take the form of simple upper bounds (SUB) on \(u_{t}\) (24d), thus not affecting the problem complexity. Hence, similarly to the standard portfolio optimization model based on the SMAD measure (Ogryczak and Śliwiński 2011b), the number of constraints in (24) is proportional to the portfolio size n and is thus independent of the number of scenarios. The same remains valid for the model transformed to the so-called canonical form (Dantzig 1963) or computational form (Maros 2003) with explicit slack variables:

$$\begin{aligned} \max&\ \displaystyle q - h&\end{aligned}$$
(25a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \sum _{t=1}^{T} ( r_{jt} - \mu _j ) u_{t} + (\mu _j -\tau )q -\varepsilon h + s_j = \varepsilon&\quad j=1,\ldots ,n\end{aligned}$$
(25b)
$$\begin{aligned}&0 \le u_t \le {p_t}&\quad t=1,\ldots ,T \end{aligned}$$
(25c)
$$\begin{aligned}&\displaystyle h \ge 0 \end{aligned}$$
(25d)
$$\begin{aligned}&\displaystyle s_j \ge 0&\quad j=1,\ldots ,n\end{aligned}$$
(25e)

Precisely, there are \(T +n +2\) variables and n constraints in the computational form. This guarantees a high computational efficiency of the simplex method for the dual model, even for a very large number of scenarios.

The optimal portfolio weights are easily recovered from the dual variables (shadow prices) corresponding to the inequalities (24b) or equations (25b). Precisely, the dual variables of (24b) represent the optimal values of \(\tilde{x}_j\) from (23), while the optimal values of the original variables \(x_j\) from model (21) can be recovered as normalized values of \(\tilde{x}_j\), thus dividing \(\tilde{x}_j\) by \(\sum _{j=1}^n \tilde{x}_j\).
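A minimal sketch of the dual model (24) using scipy's linprog, assuming the HiGHS interface exposes the constraint marginals (res.ineqlin.marginals); the function name and the sign handling are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def smad_ratio_dual(r, p, tau, eps=1e-6):
    """Dual SMAD model (24): only n structural rows; the T variables u_t
    enter through simple bounds 0 <= u_t <= p_t."""
    T, n = r.shape
    mu = p @ r
    # Variables: [u_1..u_T, q, h]; maximize q - h  ->  minimize -q + h.
    c = np.concatenate([np.zeros(T), [-1.0, 1.0]])
    A_ub = np.column_stack([(r - mu).T, mu - tau, np.full(n, -eps)])  # (24b)
    b_ub = np.full(n, eps)
    bounds = [(0, pt) for pt in p] + [(None, None), (0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    # The shadow prices of the n rows (24b) are the x_tilde values
    # (up to the solver's sign convention); normalization removes the sign.
    xt = np.abs(res.ineqlin.marginals)
    return xt / xt.sum()
```

With scenario counts of the size reported in the introduction (tens of thousands), it is this dual form that stays tractable, while the primal sketch above grows its constraint matrix with T.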

To find a primal solution when only a dual solution is known, or vice versa, the following Complementary Slackness Theorem (see, e.g., Vanderbei 2014) is generally used.

Theorem 2

(Complementary Slackness) Suppose that \({\mathbf x}= (x_1,\ldots , x_n )\) is primal feasible and that \({\mathbf y}= (y_1 ,\ldots , y_m )\) is dual feasible. Let \((w_1 , w_2 ,\ldots , w_m )\) denote the corresponding primal slack variables (residuals), and let \((s_1 , s_2 , \ldots , s_n )\) denote the corresponding dual slack variables (residuals). Then \({\mathbf x}\) and \({\mathbf y}\) are optimal for their respective problems if and only if

$$\begin{aligned} x_j s_j = 0, \quad j=1,\ldots ,n\quad \text{ and } \quad w_i y_i = 0, \quad i=1,\ldots ,m\end{aligned}$$
(26)

Following Theorem 2, a primal solution \(\tilde{x}_j\) may be found as a solution to the system:

$$\begin{aligned}&\sum _{j=1}^{n}(\mu _j - r_{jt}) \tilde{x}_j \le 0&\quad t:\ u_t < p_t \end{aligned}$$
(27a)
$$\begin{aligned}&\sum _{j=1}^{n}(\mu _j - r_{jt}) \tilde{x}_j \ge 0&\quad t:\ u_t > 0 \end{aligned}$$
(27b)
$$\begin{aligned}&\sum _{j=1}^n \tilde{x}_j = \frac{1}{\varepsilon }&\quad \text{ if } h>0 \end{aligned}$$
(27c)
$$\begin{aligned}&\tilde{x}_{j} = 0&\quad j: s_j>0 \end{aligned}$$
(27d)
$$\begin{aligned}&\text{ constraints } (23\hbox {c})-(23\hbox {e}) \end{aligned}$$
(27e)

where inequalities (27a) and (27b) lead to the linear system

$$\begin{aligned} \sum _{j=1}^{n}(\mu _j - r_{jt}) \tilde{x}_j = 0 \quad t:\ 0< u_t < p_t \end{aligned}$$
(28)

In the case of a unique primal optimal solution, it is simply given as the solution of the linear equations (23c), (27c) and (28). In general, finding a solution of the system (27) is more complex, and thereby finding a primal solution from a known dual solution alone can be a quite complex problem. Solving such a system of linear inequalities and equations may, in general, require the use of the simplex method restricted to finding a feasible solution (the so-called Phase I, e.g. Vanderbei 2014) or with an additional objective if a special property of solutions is sought.

The problem is much simpler, however, for the basic solutions (vertices) used in the simplex method. Basic solutions are characterized by a structure of basic variables with linearly independent columns of coefficients and nonbasic variables fixed at their bounds (either lower or upper). The simplex method, while generating a sequence of basic solutions, takes advantage of this, and when applied to a primal problem it actually solves both the primal and the dual (Maros 2003; Vanderbei 2014). Since the dual of the dual is the primal, applying the simplex method to the dual also solves both the primal and the dual problem. Precisely, in the simplex method a sequence of basic feasible solutions and complementary dual vectors (satisfying the complementary slackness conditions (26)) is built. A complementary basic dual vector may be infeasible during the solution process, while its feasibility is reached at optimality. Hence, having the basic optimal solution of the dual problem (24) defined by the basis index set B consisting of variable q and \(n-1\) of the variables \(u_t\), h and the corresponding slacks \(s_j\), one has the primal optimal solution defined as the unique solution of the linear system:

$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j -\tau )\tilde{x}_j =1&\end{aligned}$$
(29a)
$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j - r_{jt}) \tilde{x}_j = 0&\quad t:\ u_t \in B \end{aligned}$$
(29b)
$$\begin{aligned}&\displaystyle \sum _{j=1}^n \tilde{x}_j = \frac{1}{\varepsilon }&\quad \text{ if } h \in B \end{aligned}$$
(29c)
$$\begin{aligned}&\tilde{x}_{j} = 0&\quad j:\ s_j \in B \end{aligned}$$
(29d)

of n equations with n variables. Recall that in the simplex method this solution is already found as the so-called simplex multipliers (Maros 2003) during the pricing step. Therefore, the dual solutions are directly provided by any linear programming solver (see e.g. IBM 2016). Thus one can take advantage of the smaller dimensionality of the dual model without the need for any additional computations to recover the primal solution (the optimal portfolio).
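For illustration, system (29) is just an \(n\times n\) linear system assembled from the basis information, so the recovery step amounts to a single call to a linear solver. A sketch under the stated assumptions (the argument names describing the basis are hypothetical):

```python
import numpy as np

def primal_from_dual_basis(r, mu, tau, eps, basic_u, basic_h, basic_s):
    """Solve (29) for x_tilde: one row per basic dual variable plus the
    normalization row (29a). basic_u: scenario indices t with u_t in B;
    basic_h: True if h is in B; basic_s: asset indices j with s_j in B."""
    n = len(mu)
    rows, rhs = [mu - tau], [1.0]                 # (29a)
    for t in basic_u:                             # (29b)
        rows.append(mu - r[t]); rhs.append(0.0)
    if basic_h:                                   # (29c)
        rows.append(np.ones(n)); rhs.append(1.0 / eps)
    for j in basic_s:                             # (29d)
        e = np.zeros(n); e[j] = 1.0
        rows.append(e); rhs.append(0.0)
    return np.linalg.solve(np.array(rows), np.array(rhs))
```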

The downside mean semideviation is a risk relevant measure. Therefore, assuming that only risky assets are considered for a portfolio, the corresponding reward-risk ratio (12) does not need any nonconvex constraint to enforce positive values of the risk measure. Nevertheless, the reward-risk ratio optimization for the SMAD measure results in an LP model that contains \(T+n\) variables and \(T+2\) constraints (Mansini et al. 2003), and its dimensionality cannot be reduced with any dual reformulation.

3.2 CVaR model

In the CVaR model, one needs to use the complementary deviation risk measure, the drawdown measure \(\varrho ({\mathbf x})=\Delta _{\beta }({\mathbf x})\), for the risk-reward ratio optimization. Hence, the risk-reward ratio model (20) based on the CVaR risk measure takes the following form:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle \frac{\displaystyle {z} - y + \frac{1}{\beta }\sum _{t=1}^{T} p_t d_{t} + \varepsilon }{\displaystyle z - \tau } \\ \text{ s.t. } &{} \displaystyle d_{t} \ge y - \sum _{j=1}^{n}r_{jt} {x}_j, \ d_{t} \ge 0 \quad t=1,\ldots ,T \\ &{} \displaystyle z= \sum _{j=1}^{n}\mu _j x_j,\ \sum _{j=1}^n x_j = 1,\ z \ge \tau +\varepsilon \\ &{} x_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(30)

Note that the CVaR measure is an SSD consistent measure (the drawdown measure is mean-complementary SSD consistent) (Ogryczak and Ruszczyński 2002a); thus, following Theorem 1, we get:

Corollary 4

The CVaR risk-reward ratio optimization (30) is SSD consistent provided that \(\mu ({\mathbf x})> \tau > M_{\beta }({\mathbf x})\).

The fractional ratio model (30) can be linearized by the substitutions \(\displaystyle \tilde{d}_{t} = {d_{t}}/({z-\tau })\), \(\displaystyle \tilde{y} = {y}/({z - \tau })\), \(\displaystyle \tilde{x}_j={x_j}/({z-\tau })\), \(\displaystyle v={z}/({z-\tau })\) and \(\displaystyle v_0={1}/({z-\tau })\), leading to the following LP formulation:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle {v} - \tilde{y} + \frac{1}{\beta }\sum _{t=1}^{T} p_t \tilde{d}_{t} +\varepsilon v_0\\ \text{ s.t. } &{} \displaystyle \tilde{d}_{t} \ge \tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j, \ \tilde{d}_{t} \ge 0 \quad t=1,\ldots ,T \\ &{} v - \tau v_0 =1,\ v_0 \le 1/\varepsilon \\ &{} \displaystyle v= \sum _{j=1}^{n}\mu _j \tilde{x}_j, \ \displaystyle \sum _{j=1}^n \tilde{x}_j = v_0\\ &{} \tilde{x}_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(31)

After eliminating variables v and \(v_0\) defined by equations, one gets the most compact formulation:

$$\begin{aligned} \min&\displaystyle \sum _{j=1}^{n}(\mu _j + \varepsilon ) \tilde{x}_j - \tilde{y} + \frac{1}{\beta }\sum _{t=1}^{T} p_t \tilde{d}_{t}&\end{aligned}$$
(32a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \tilde{d}_{t} \ge \tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j&\quad t=1,\ldots ,T \end{aligned}$$
(32b)
$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j -\tau )\tilde{x}_j =1&\end{aligned}$$
(32c)
$$\begin{aligned}&\sum _{j=1}^n \tilde{x}_j \le \frac{1}{\varepsilon }&\end{aligned}$$
(32d)
$$\begin{aligned}&\tilde{x}_j \ge 0&\quad j=1,\ldots ,n \end{aligned}$$
(32e)
$$\begin{aligned}&\tilde{d}_{t} \ge 0&\quad t=1,\ldots ,T \end{aligned}$$
(32f)

with \(T+n+1\) variables and \(T+2\) constraints.

Taking the LP dual to model (32) we get the following model:

$$\begin{aligned} \max&\ \displaystyle q - h&\end{aligned}$$
(33a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \sum _{t=1}^{T} u_t = 1&\end{aligned}$$
(33b)
$$\begin{aligned}&\displaystyle \sum _{t=1}^{T} r_{jt} u_{t} + (\mu _j -\tau )q -\varepsilon h \le \mu _j + \varepsilon&\quad j=1,\ldots ,n\end{aligned}$$
(33c)
$$\begin{aligned}&0 \le u_t \le \frac{p_t}{\beta }&\quad t=1,\ldots ,T \end{aligned}$$
(33d)
$$\begin{aligned}&\displaystyle h \ge 0 \end{aligned}$$
(33e)

containing T variables \(u_{t}\) corresponding to inequalities (32b), an unbounded (free) variable q corresponding to equation (32c) and a nonnegative variable h corresponding to inequality (32d), but only \(n+1\) structural constraints. Indeed, the T constraints corresponding to variables \(d_{t}\) from (32) take the form of simple upper bounds on \(u_{t}\) (33d), not affecting the problem complexity. Thus, similarly to the standard portfolio optimization with the CVaR measure (Ogryczak and Śliwiński 2011a), the number of constraints in (33) is proportional to the number of instruments n and is thus independent of the number of scenarios. The same remains valid for the model transformed to the canonical form with explicit slack variables:

$$\begin{aligned} \max&\ \displaystyle q - h&\end{aligned}$$
(34a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \sum _{t=1}^{T} u_t = 1&\end{aligned}$$
(34b)
$$\begin{aligned}&\displaystyle \sum _{t=1}^{T} r_{jt} u_{t} + (\mu _j -\tau )q -\varepsilon h + s_j = \mu _j + \varepsilon&\quad j=1,\ldots ,n\end{aligned}$$
(34c)
$$\begin{aligned}&0 \le u_t \le \frac{p_t}{\beta }&\quad t=1,\ldots ,T \end{aligned}$$
(34d)
$$\begin{aligned}&\displaystyle h \ge 0 \end{aligned}$$
(34e)
$$\begin{aligned}&\displaystyle s_j \ge 0&\quad j=1,\ldots ,n\end{aligned}$$
(34f)

Precisely, there are \(T +n +2\) variables and \(n+1\) equations in the canonical form. This guarantees a high computational efficiency of the simplex method for the dual model, even for a very large number of scenarios.
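The CVaR dual follows the same pattern as the SMAD dual sketch above, with the extra equality row (33b) and the upper bounds \(p_t/\beta \). A minimal scipy sketch (function name illustrative; extracting the marginals as before is an assumption about the HiGHS interface):

```python
import numpy as np
from scipy.optimize import linprog

def cvar_ratio_dual(r, p, tau, beta, eps=1e-6):
    """Dual CVaR model (33): n+1 structural rows regardless of the
    number of scenarios T."""
    T, n = r.shape
    mu = p @ r
    c = np.concatenate([np.zeros(T), [-1.0, 1.0]])             # max q - h
    A_ub = np.column_stack([r.T, mu - tau, np.full(n, -eps)])  # rows (33c)
    b_ub = mu + eps
    A_eq = np.concatenate([np.ones(T), [0.0, 0.0]])[None, :]   # row (33b)
    bounds = [(0, pt / beta) for pt in p] + [(None, None), (0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    xt = np.abs(res.ineqlin.marginals)     # shadow prices of rows (33c)
    return xt / xt.sum()
```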

The optimal portfolio weights, i.e., the optimal values of the original \(x_j\) variables from model (30), can be recovered as normalized values \(\tilde{x}_j\) of the dual variables corresponding to inequalities (33c) (or equations (34c)), thus dividing \(\tilde{x}_j\) by \(\sum _{j=1}^n \tilde{x}_j\). The simplex method, when applied to a primal problem, actually solves both the primal and the dual (Maros 2003; Vanderbei 2014). Since the dual of the dual is the primal, applying the simplex method to the dual also solves both the primal and the dual problem. Therefore, the dual solutions are directly provided by any linear programming solver (see e.g. IBM 2016). Thus one can take advantage of the smaller dimensionality of the dual model without the need for any additional computations to recover the primal solution (the optimal portfolio).

When the simplex method generates the basic optimal solution of the dual problem (33) defined by the basis index set B consisting of variable q and n of the variables \(u_t\), h and the corresponding slacks \(s_j\), the provided primal optimal solution is defined as the unique solution of the linear system:

$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j -\tau )\tilde{x}_j =1&\end{aligned}$$
(35a)
$$\begin{aligned}&\displaystyle \tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j = 0&\quad t:\ u_t \in B \end{aligned}$$
(35b)
$$\begin{aligned}&\displaystyle \sum _{j=1}^n \tilde{x}_j = \frac{1}{\varepsilon }&\quad \text{ if } h \in B \end{aligned}$$
(35c)
$$\begin{aligned}&\tilde{x}_{j} = 0&\quad j:\ s_j \in B \end{aligned}$$
(35d)

of \(n+1\) equations with \(n+1\) variables: \(\tilde{y}\) and \(\tilde{x}_j\) for \(j=1,\ldots ,n\).

If one needs to find other primal solutions, which is usually not the case in the portfolio optimization we consider, then the full complementary slackness system must be solved. Following Theorem 2, a primal solution \(\tilde{x}_j\) (together with \(\tilde{y}\)) may be found as a solution to the system:

$$\begin{aligned}&\tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j \le 0&\quad t:\ u_t < p_t/\beta \end{aligned}$$
(36a)
$$\begin{aligned}&\tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j \ge 0&\quad t:\ u_t > 0 \end{aligned}$$
(36b)
$$\begin{aligned}&\sum _{j=1}^n \tilde{x}_j = \frac{1}{\varepsilon }&\quad \text{ if } h>0 \end{aligned}$$
(36c)
$$\begin{aligned}&\tilde{x}_{j} = 0&\quad j: s_j>0 \end{aligned}$$
(36d)
$$\begin{aligned}&\text{ constraints } (32\hbox {c})-(32\hbox {e}) \end{aligned}$$
(36e)

Solving such a system of linear inequalities and equations, in general, may require the use of the simplex method restricted to finding a feasible solution (Phase I) or with an additional objective if a special property of solutions is sought.

3.3 Minimax model

The Minimax portfolio optimization model, representing the limiting case of the CVaR model as \(\beta \) tends to 0, is even simpler than the general CVaR model. The risk-reward ratio optimization model (20) using the maximum downside deviation \(\Delta ({\mathbf x})\) can be written as the following problem:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle \frac{\displaystyle {z} - y + \varepsilon }{\displaystyle z - \tau } \\ \text{ s.t. } &{} \displaystyle y \le \sum _{j=1}^{n}r_{jt} {x}_j \quad t=1,\ldots ,T \\ &{} \displaystyle z= \sum _{j=1}^{n}\mu _j x_j,\ \sum _{j=1}^n x_j = 1,\ z \ge \tau +\varepsilon \\ &{} x_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(37)

The Minimax risk measure is SSD consistent (the maximum downside deviation measure is mean-complementary SSD consistent) (Mansini et al. 2003), thus applying Theorem 1 we get the following assertion.

Corollary 5

The Minimax risk-reward ratio optimization (37) is SSD consistent provided that \(\mu ({\mathbf x})> \tau > M({\mathbf x})\).

The fractional model (37) is linearized by the substitutions \(\displaystyle \tilde{y} = {y}/({z - \tau })\), \(\displaystyle \tilde{x}_j={x_j}/({z-\tau })\), \(\displaystyle v={z}/({z-\tau })\) and \(\displaystyle v_0={1}/({z-\tau })\), leading to the following LP formulation:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle {v} - \tilde{y} +\varepsilon v_0 \\ \text{ s.t. } &{} \displaystyle \tilde{y} \le \sum _{j=1}^{n}r_{jt} \tilde{x}_j \quad t=1,\ldots ,T \\ &{} v -\tau v_0 =1,\ v_0 \le 1/\varepsilon \\ &{} \displaystyle v= \sum _{j=1}^{n}\mu _j \tilde{x}_j, \ \displaystyle \sum _{j=1}^n \tilde{x}_j = v_0 \\ &{} \tilde{x}_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(38)

After eliminating variables v and \(v_0\) defined by equations, one gets the most compact formulation:

$$\begin{aligned} \min&\displaystyle \sum _{j=1}^{n}(\mu _j + \varepsilon ) \tilde{x}_j - \tilde{y}&\end{aligned}$$
(39a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \tilde{y} \le \sum _{j=1}^{n}r_{jt} \tilde{x}_j&\quad t=1,\ldots ,T \end{aligned}$$
(39b)
$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j -\tau )\tilde{x}_j =1&\end{aligned}$$
(39c)
$$\begin{aligned}&\sum _{j=1}^n \tilde{x}_j \le \frac{1}{\varepsilon }&\end{aligned}$$
(39d)
$$\begin{aligned}&\tilde{x}_j \ge 0&\quad j=1,\ldots ,n \end{aligned}$$
(39e)

with \(n+1\) variables and \(T+2\) constraints.

The LP dual to model (39), built with T variables \(u_{t}\) corresponding to inequalities (39b), an unbounded (free) variable q corresponding to equation (39c) and a nonnegative variable h corresponding to inequality (39d), takes the following form:

$$\begin{aligned} \max&\ \displaystyle q - h&\end{aligned}$$
(40a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \sum _{t=1}^{T} u_t = 1&\end{aligned}$$
(40b)
$$\begin{aligned}&\displaystyle \sum _{t=1}^{T} r_{jt} u_{t} + (\mu _j -\tau )q -\varepsilon h \le \mu _j + \varepsilon&\quad j=1,\ldots ,n\end{aligned}$$
(40c)
$$\begin{aligned}&\displaystyle h \ge 0 \end{aligned}$$
(40d)
$$\begin{aligned}&u_t \ge 0&\quad t=1,\ldots ,T \end{aligned}$$
(40e)

containing only \(n+1\) structural constraints. Thus, similarly to the standard Minimax optimization (Ogryczak and Śliwiński 2011b), the number of constraints in (40) is proportional to the portfolio size n while being independent of the number of scenarios. The same remains valid for the model transformed to the canonical form with explicit slack variables:

$$\begin{aligned} \max&\ \displaystyle q - h&\end{aligned}$$
(41a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \sum _{t=1}^{T} u_t = 1&\end{aligned}$$
(41b)
$$\begin{aligned}&\displaystyle \sum _{t=1}^{T} r_{jt} u_{t} + (\mu _j -\tau )q -\varepsilon h + s_j = \mu _j + \varepsilon&\quad j=1,\ldots ,n\end{aligned}$$
(41c)
$$\begin{aligned}&\displaystyle h \ge 0 \end{aligned}$$
(41d)
$$\begin{aligned}&u_t \ge 0&\quad t=1,\ldots ,T \end{aligned}$$
(41e)
$$\begin{aligned}&\displaystyle s_j \ge 0&\quad j=1,\ldots ,n\end{aligned}$$
(41f)

Precisely, there are \(T +n +2\) variables and \(n+1\) equations in the canonical form. This guarantees a high computational efficiency of the dual model even for a very large number of scenarios.

The optimal portfolio, i.e., the optimal values of the original \(x_j\) variables from model (37), can be recovered as normalized values \(\tilde{x}_j\) of the dual variables corresponding to inequalities (40c) (or equations (41c)), thus dividing \(\tilde{x}_j\) by \(\sum _{j=1}^n \tilde{x}_j\). The simplex method, when applied to an LP problem, actually solves both the primal and the dual (Maros 2003; Vanderbei 2014). Hence, applying the simplex method to the dual also solves both the primal and the dual problem. Therefore, the dual solutions are directly provided by any linear programming solver (see e.g. IBM 2016). Thus one can take advantage of the smaller dimensionality of the dual model without the need for any additional computations to recover the primal solution (the optimal portfolio).

When the simplex method generates the basic optimal solution of the dual problem (40) defined by the basis index set B consisting of variable q and n of the variables \(u_t\), h and the corresponding slacks \(s_j\), the provided primal optimal solution is defined as the unique solution of the linear system:

$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j -\tau )\tilde{x}_j =1&\end{aligned}$$
(42a)
$$\begin{aligned}&\displaystyle \tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j = 0&\quad t:\ u_t \in B \end{aligned}$$
(42b)
$$\begin{aligned}&\displaystyle \sum _{j=1}^n \tilde{x}_j = \frac{1}{\varepsilon }&\quad \text{ if } h \in B \end{aligned}$$
(42c)
$$\begin{aligned}&\tilde{x}_{j} = 0&\quad j:\ s_j \in B \end{aligned}$$
(42d)

of \(n+1\) equations with \(n+1\) variables: \(\tilde{y}\) and \(\tilde{x}_j\) for \(j=1,\ldots ,n\).

If one needs to find other primal solutions, which is usually not the case in the portfolio optimization we consider, then the full complementary slackness system must be solved. Following Theorem 2, a primal solution \(\tilde{x}_j\) (together with \(\tilde{y}\)) may be found as a solution to the system:

$$\begin{aligned}&\tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j = 0&\quad t:\ u_t > 0 \end{aligned}$$
(43a)
$$\begin{aligned}&\sum _{j=1}^n \tilde{x}_j = \frac{1}{\varepsilon }&\quad \text{ if } h>0 \end{aligned}$$
(43b)
$$\begin{aligned}&\tilde{x}_{j} = 0&\quad j: s_j>0 \end{aligned}$$
(43c)
$$\begin{aligned}&\text{ constraints } (39\hbox {b})-(39\hbox {e}) \end{aligned}$$
(43d)

Solving such a system of linear inequalities and equations may, in general, require the use of the simplex method restricted to finding a feasible solution (Phase I), or with an additional objective if a solution with a special property is sought.
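
Such a pure feasibility (Phase I style) solve can be delegated to any LP solver by passing a zero objective; a minimal sketch, assuming the constraint matrices of the system have already been assembled:

```python
# Sketch: a Phase-I-style feasibility solve for a system of linear equations
# and inequalities such as (43): pass a zero objective to the LP solver.
# A_ub, b_ub, A_eq, b_eq and bounds are assumed assembled from the system.
import numpy as np
from scipy.optimize import linprog

def find_feasible(A_ub, b_ub, A_eq, b_eq, bounds):
    m = A_eq.shape[1] if A_eq is not None else A_ub.shape[1]
    res = linprog(np.zeros(m), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x if res.success else None  # a feasible point, or None
```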

3.4 Mean below-target deviation and Omega ratio

The risk-reward ratio model based on the mean below-target deviation risk measure \(\bar{\delta }_{\tau }({\mathbf x})\) (the downside mean semideviation from the threshold \(\tau \)) turns out to be equivalent to the so-called Omega ratio optimization. The Omega ratio is a relatively recent performance measure introduced by Keating and Shadwick (2002), which captures both the downside and upside potential of a portfolio. It can be defined as the ratio between the expected value of the profits and the expected value of the losses, where, for a predetermined threshold \(\tau \), portfolio returns over the target \(\tau \) are considered profits, whereas returns below the threshold are considered losses. Following Ogryczak and Ruszczyński (1999), for any target value \(\tau \) the following equality holds:

$$\begin{aligned} {\mathbb E}\{ (R_{\mathbf x}- \tau )_+ \} = \mu (R_{\mathbf x}) - (\tau -{\mathbb E}\{(R_{\mathbf x}- \tau )_-\}). \end{aligned}$$

Indeed, since \((R_{\mathbf x}-\tau )_+ - (R_{\mathbf x}-\tau )_- = R_{\mathbf x}-\tau \), taking the expectation of both sides yields the above identity. Thus, noting that \({\mathbb E}\{(R_{\mathbf x}- \tau )_-\} = \bar{\delta }_\tau ({\mathbf x})\), we can write:

$$\begin{aligned} \varOmega _\tau ({\mathbf x}) = \frac{{\mathbb E}\{ \max \{ R_{{\mathbf x}} - \tau , 0 \} \}}{ {\mathbb E}\{ \max \{ \tau - R_{{\mathbf x}}, 0 \} \} } = \frac{\mu ({\mathbf x}) - (\tau - \bar{\delta }_\tau ({\mathbf x})) }{\bar{\delta }_\tau ({\mathbf x}) } =1+ \frac{\mu ({\mathbf x}) - \tau }{\bar{\delta }_\tau ({\mathbf x})} \end{aligned}$$

Therefore, the Omega ratio maximization is equivalent to the risk-reward ratio model based on the mean below-target deviation, i.e., to minimizing \(\frac{\bar{\delta }_\tau ({\mathbf x})}{\mu ({\mathbf x}) - \tau }\). Hence, the Omega ratio model (20) takes the form:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle \frac{\displaystyle \sum _{t=1}^{T} p_t d_{t} + \varepsilon }{\displaystyle z - \tau } \\ \text{ s.t. } &{} \displaystyle d_{t} \ge \tau - \sum _{j=1}^{n}r_{jt} {x}_j, \ d_{t} \ge 0 \quad t=1,\ldots ,T \\ &{} \displaystyle z= \sum _{j=1}^{n}\mu _j x_j,\ \sum _{j=1}^n x_j = 1,\ z \ge \tau +\varepsilon \\ &{} x_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(44)

The mean below-target deviation is a mean-complementary SSD consistent risk measure (Ogryczak and Ruszczyński 1999). Hence, from Theorem 1 we get the following assertion.

Corollary 6

The Omega ratio optimization (44) is SSD consistent provided that \(\mu ({\mathbf x})> \tau > \mu ({\mathbf x}) - \bar{\delta }_\tau ({\mathbf x})\).
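
For illustration only (not part of the model development), a minimal Python sketch of the empirical Omega ratio for a fixed portfolio \({\mathbf x}\), assuming the scenario returns \(r_{jt}\) are stored in a \(T\times n\) NumPy array r with scenario probabilities p:

```python
# Sketch: empirical Omega ratio of a fixed portfolio x under T scenarios
# with returns r (T x n), probabilities p and target tau.
import numpy as np

def omega_ratio(r, p, x, tau):
    port = r @ x                               # portfolio return per scenario
    gains = p @ np.maximum(port - tau, 0.0)    # E{(R_x - tau)_+}
    losses = p @ np.maximum(tau - port, 0.0)   # E{(tau - R_x)_+}
    return gains / losses                      # equals 1 + (mean - tau)/losses
```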

Fractional model (44) can be linearized by the substitutions \(\displaystyle \tilde{d}_{t} = {d_{t}}/({z-\tau })\), \(\displaystyle \tilde{x}_j={x_j}/({z-\tau })\), \(\displaystyle v={z}/({z-\tau })\) and \(\displaystyle v_0={1}/({z-\tau })\), leading to the formulation:

$$\begin{aligned} \begin{array}{rl} \min &{} \displaystyle \sum _{t=1}^{T} p_t \tilde{d}_{t} +\varepsilon v_0\\ \text{ s.t. } &{} \displaystyle \tilde{d}_{t} \ge \tau v_0 - \sum _{j=1}^{n}r_{jt} \tilde{x}_j, \ \tilde{d}_{t} \ge 0 \quad t=1,\ldots ,T \\ &{} v -\tau v_0 =1,\ v_0 \le 1/\varepsilon \\ &{} \displaystyle v= \sum _{j=1}^{n}\mu _j \tilde{x}_j, \ \displaystyle \sum _{j=1}^n \tilde{x}_j = v_0\\ &{} \tilde{x}_j \ge 0 \quad j=1,\ldots ,n \end{array} \end{aligned}$$
(45)

After eliminating the variables v and \(v_0\) defined by the corresponding equations, one gets the following compact formulation:

$$\begin{aligned} \min&\displaystyle \sum _{j=1}^{n}\varepsilon \tilde{x}_j + \sum _{t=1}^{T} p_t \tilde{d}_{t}&\end{aligned}$$
(46a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \tilde{d}_{t} \ge \sum _{j=1}^{n}(\tau - r_{jt}) \tilde{x}_j&\quad t=1,\ldots ,T \end{aligned}$$
(46b)
$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j -\tau )\tilde{x}_j =1&\end{aligned}$$
(46c)
$$\begin{aligned}&\sum _{j=1}^n \tilde{x}_j \le \frac{1}{\varepsilon }&\end{aligned}$$
(46d)
$$\begin{aligned}&\tilde{x}_j \ge 0&\quad j=1,\ldots ,n \end{aligned}$$
(46e)
$$\begin{aligned}&\tilde{d}_{t} \ge 0&\quad t=1,\ldots ,T \end{aligned}$$
(46f)

with \(T+n\) variables and \(T+2\) constraints.
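
A minimal sketch (under the same assumptions as the previous sketches, and again not the authors' code) of building and solving the compact model (46) directly with scipy.optimize.linprog; the optimal portfolio is obtained by normalizing the \(\tilde{x}_j\) part of the solution:

```python
# Sketch: the compact Omega LP (46) solved directly; variables are
# [x_tilde (n), d_tilde (T)].
import numpy as np
from scipy.optimize import linprog

def solve_omega_46(r, p, mu, tau, eps):
    T, n = r.shape
    c = np.concatenate([eps * np.ones(n), p])               # objective (46a)
    A_ub = np.vstack([
        np.hstack([tau - r, -np.eye(T)]),                   # (46b)
        np.concatenate([np.ones(n), np.zeros(T)])[None, :]  # (46d)
    ])
    b_ub = np.concatenate([np.zeros(T), [1.0 / eps]])
    A_eq = np.concatenate([mu - tau, np.zeros(T)])[None, :]  # (46c)
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    x_tilde = res.x[:n]
    return x_tilde / x_tilde.sum()  # optimal portfolio weights x_j
```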

Taking the LP dual to model (46), one gets the following model:

$$\begin{aligned} \max&\ \displaystyle q - h&\end{aligned}$$
(47a)
$$\begin{aligned} \text{ s.t. }&\ \displaystyle \sum _{t=1}^{T} ( r_{jt} - \tau ) u_{t} + (\mu _j -\tau )q -\varepsilon h \le \varepsilon&\quad j=1,\ldots ,n\end{aligned}$$
(47b)
$$\begin{aligned}&\ 0 \le u_t \le {p_t}&\quad t=1,\ldots ,T \end{aligned}$$
(47c)
$$\begin{aligned}&\ \displaystyle h \ge 0 \end{aligned}$$
(47d)

containing T variables \(u_{t}\) corresponding to inequalities (46b), the free (unbounded) variable q corresponding to equation (46c) and the nonnegative variable h corresponding to inequality (46d), but only n structural constraints. Indeed, the T constraints corresponding to variables \(d_{t}\) from (46) take the form of simple upper bounds on \(u_{t}\) (47c), thus not affecting the problem complexity. Hence, similarly to the standard portfolio optimization model based on the SMAD measure (Ogryczak and Śliwiński 2011b), the number of constraints in (47) is proportional to the portfolio size n and thus independent of the number of scenarios. The same remains valid for the model transformed to the canonical or computational form (Maros 2003) with explicit slack variables:

$$\begin{aligned} \max&\ \displaystyle q - h \end{aligned}$$
(48a)
$$\begin{aligned} \text{ s.t. }&\displaystyle \sum _{t=1}^{T} ( r_{jt} - \tau ) u_{t} + (\mu _j -\tau )q -\varepsilon h + s_j = \varepsilon&\quad j=1,\ldots ,n\end{aligned}$$
(48b)
$$\begin{aligned}&0 \le u_t \le {p_t}&\quad t=1,\ldots ,T \end{aligned}$$
(48c)
$$\begin{aligned}&\displaystyle h \ge 0 \end{aligned}$$
(48d)
$$\begin{aligned}&\displaystyle s_j \ge 0&\quad j=1,\ldots ,n\end{aligned}$$
(48e)

More precisely, there are \(T+n+2\) variables and n equality constraints in the computational form. This guarantees high computational efficiency of the simplex method for the dual model, even for a very large number of scenarios.

The optimal portfolio weights are easily recovered from the dual variables (shadow prices) corresponding to inequalities (47b) or equations (48b). More precisely, the dual variables of (47b) represent the optimal values of \(\tilde{x}_j\) from (46), while the optimal values of the original \(x_j\) variables from model (44) are recovered as the normalized values, i.e., each \(\tilde{x}_j\) divided by \(\sum _{j=1}^n \tilde{x}_j\). As noted earlier, the simplex method applied to the dual also solves the primal problem, so the dual solutions are directly provided by any linear programming solver (see, e.g., IBM 2016) and one can take advantage of the smaller dimensionality of the dual model without any additional computations to recover the primal solution (the optimal portfolio).
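
A minimal sketch of this dual route: the T scenario constraints enter the solver only as the bounds \(0 \le u_t \le p_t\) of (47c), so the LP has just n structural rows, and the portfolio is recovered from the marginals of (47b). As before, the code is an illustration, not the authors' implementation, and the sign of the marginals is solver-dependent.

```python
# Sketch: the dual Omega model (47); the only structural rows are (47b),
# one per asset, while the scenario variables u_t enter through bounds.
import numpy as np
from scipy.optimize import linprog

def solve_omega_dual_47(r, p, mu, tau, eps):
    T, n = r.shape
    c = np.concatenate([np.zeros(T), [-1.0, 1.0]])  # minimize -(q - h)
    A_ub = np.hstack([(r - tau).T,                  # (47b), one row per j
                      (mu - tau)[:, None],
                      -eps * np.ones((n, 1))])
    b_ub = eps * np.ones(n)
    bounds = [(0, pt) for pt in p] + [(None, None), (0, None)]  # (47c), q, (47d)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    x_tilde = np.abs(res.ineqlin.marginals)  # shadow prices of (47b)
    return x_tilde / x_tilde.sum()           # normalized weights x_j of (44)
```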

When the simplex method generates the basic optimal solution of the dual problem (47), defined by the basis index set B consisting of the variable q and n of the variables \(u_t\), h and the corresponding slacks \(s_j\), the provided primal optimal solution is the unique solution of the linear system:

$$\begin{aligned}&\displaystyle \sum _{j=1}^{n}(\mu _j -\tau )\tilde{x}_j =1&\end{aligned}$$
(49a)
$$\begin{aligned}&\displaystyle \tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j = 0&\quad t:\ u_t \in B \end{aligned}$$
(49b)
$$\begin{aligned}&\displaystyle \sum _{j=1}^n \tilde{x}_j = \frac{1}{\varepsilon }&\quad \text{ if } h \in B \end{aligned}$$
(49c)
$$\begin{aligned}&\tilde{x}_{j} = 0&\quad j:\ s_j \in B \end{aligned}$$
(49d)

of \(n+1\) equations with \(n+1\) variables: \(\tilde{y}\) and \(\tilde{x}_j\) for \(j=1,\ldots ,n\).

If one needs to find other primal solutions, which is usually not the case in the portfolio optimization we consider, the full complementary slackness system must be solved. Following Theorem 2, a primal solution \(\tilde{x}_j\) (together with \(\tilde{y}\)) may be found as a solution to the system:

$$\begin{aligned}&\tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j \le 0&\quad t:\ u_t < p_t \end{aligned}$$
(50a)
$$\begin{aligned}&\tilde{y} - \sum _{j=1}^{n}r_{jt} \tilde{x}_j \ge 0&\quad t:\ u_t > 0 \end{aligned}$$
(50b)
$$\begin{aligned}&\sum _{j=1}^n \tilde{x}_j = \frac{1}{\varepsilon }&\quad \text{ if } h>0 \end{aligned}$$
(50c)
$$\begin{aligned}&\tilde{x}_{j} = 0&\quad j: s_j>0 \end{aligned}$$
(50d)
$$\begin{aligned}&\text{ constraints } (46\hbox {c})-(46\hbox {e}) \end{aligned}$$
(50e)

Solving such a system of linear inequalities and equations may, in general, require the use of the simplex method restricted to finding a feasible solution (Phase I), or with an additional objective if a solution with a special property is sought.

The mean below-target deviation is not a risk relevant measure. Therefore, the corresponding reward-risk ratio (12) and the standard Omega ratio optimization models need a nonconvex constraint to enforce positive values of the risk measure. The latter requires the introduction of additional constraints and auxiliary binary variables to deal with the critical situations where the risk measure in the denominator of the reward-risk ratio may vanish (Guastaroba et al. 2016). Moreover, the reward-risk ratio optimization for the mean below-target deviation measure results in a Mixed Integer LP model that contains \(T+n\) variables and \(T+2\) constraints (Mansini et al. 2003), and its dimensionality cannot be reduced by any dual reformulation.

3.5 Computational tests

In order to empirically test the efficiency gains from taking advantage of the inverse ratio models with dual reformulations, we have run computational tests on the large-scale test instances developed by Lim et al. (2009). They were originally generated from a multivariate normal distribution for 50, 100 or 200 securities with 50,000 scenarios, thus providing an adequate approximation of the underlying unknown continuous price distribution. All computations were performed with the primal simplex algorithm of the CPLEX 12.6.3 package on a PC with an Intel Xeon E5-2680 v2 CPU and 32 GB RAM. All computations were run in a single thread in order to be independent of the scalability of the CPLEX algorithms and to yield time results that are easier to compare.

Our earlier experience has shown that the computational performance of the primal LP model for the risk-reward ratio optimization is similar to that of the LP models for reward-risk ratio optimization. Actually, they have almost identical numbers of variables and constraints. However, for the mean below-target deviation measure, which is not strictly risk relevant, the latter ratio optimization cannot always be represented by LP models, since the positivity of the risk measure (in the denominator) may be an active constraint. Explicit use of this constraint generates additional integer programming relations and dramatically increases the computational complexity. We experienced this phenomenon while studying the standard approach to the Omega ratio maximization (Guastaroba et al. 2016), which corresponds to the reward-risk maximization for the mean below-target deviation. Generally, the reward-risk models are computationally no simpler than the primal risk-reward models. Therefore, in our experimental study we have focused on the computational advantages of using the dual models, comparing them to the primal ones.

Table 1 presents the computation times for all the above primal and dual models. All results are presented as averages over 10 different test instances of the same size. For the large-scale test problems, the solution times of the dual CVaR models (33), ranging from 7 to 110 s, are 20 to 60 times shorter than those for the primal models.

CPLEX was explicitly prevented from automatically selecting the best LP optimizer, in order to guarantee comparable results across all the LP problems, as CPLEX otherwise tried to solve different problems (depending mostly on their size) with different algorithms. However, in general, the dual simplex algorithm offered similar or worse performance, while the barrier algorithm provided better performance overall.

Table 1 Computational times (in seconds) for the dual versus primal LP models for problems with 50,000 scenarios

The Minimax models are computationally very easy. In the computational tests we were able to solve the large-scale test instances of the dual model (40) in under 15 s. In fact, even the primal model could be solved in reasonable time, up to 50 s, for the large-scale test instances.

The SMAD and Omega models are computationally very similar to each other, and both are similar to the CVaR models. Indeed, the dual models can be solved in no more than 92 s for the large-scale test instances, and those times are 18–35 times shorter than for the primal models. Additionally, in the case of the primal problem for the SMAD risk measure, CPLEX applied an automatic dual problem formulation during the presolve step, which resulted in exactly the same performance figures as for the dual problem. That behavior was blocked by turning off the corresponding settings during the comparison computations.

4 Conclusions

We have presented the reward-risk ratio optimization models for several polyhedral risk measures and analyzed their properties. Taking advantage of the possible inverse formulation of the risk-reward ratio optimization (15), we get a model that is well defined and SSD consistent under a natural restriction on the target value selection. Thus, this ratio optimization is consistent with the SSD rules, despite the fact that the ratio does not represent a coherent risk measure (Artzner et al. 1999).

We have shown that, while transforming the risk-reward ratio optimization to an LP model, we can take advantage of LP duality to get a model formulation providing higher computational efficiency. In the introduced dual model, similarly to the direct polyhedral risk measures optimization (Ogryczak and Śliwiński 2011b), the number of structural constraints is proportional to the number of instruments, while only the number of variables is proportional to the number of scenarios, thus not affecting the simplex method efficiency so severely. The model can be effectively solved with general-purpose LP solvers even for very large numbers of scenarios. Actually, the dual portfolio optimization problems with fifty thousand scenarios and two hundred instruments can be solved with general-purpose LP solvers in less than 2 min. On the other hand, such efficiency cannot be achieved with the LP models corresponding to the standard reward-risk ratio optimization, even when these LP models can be applied without additional discrete constraints.

We have studied in detail the risk-reward ratio models for the basic polyhedral risk measures: the SMAD, CVaR, Minimax and mean below-target deviation (Omega ratio optimization). Similar computationally efficient dual reformulations may also be achieved for more complex polyhedral risk measures, in particular for the measures with enhanced downside risk focus (Michalowski and Ogryczak 2001; Krzemienowski and Ogryczak 2005; Mansini et al. 2007).