1 Introduction

In finance, the term index fund refers to a fund whose management strategy has the objective of tracking the performance of a specific market index (the so-called benchmark), attempting to match its returns as closely as possible. This investment strategy, usually called indexing or index tracking, is a passive form of fund management where the manager has a low degree of flexibility, and the fund is expected to reproduce the performance of the benchmark by properly choosing a representative selection of securities. The index tracking problem aims at minimizing a function, called the tracking error, which measures how closely the portfolio mimics the performance of the benchmark. Several authors have studied the index tracking problem, proposing different optimization models, mainly based on different formulations of the tracking error, and different solution methods.

The term enhanced index tracking refers to an investment strategy that, while still attempting to track the market index, is specifically designed to find a portfolio that outperforms the benchmark. In other words, the manager of an enhanced index fund enjoys some leeway, trying to achieve a higher return than the benchmark while incurring only minimal additional risk, as measured by the tracking error. The enhanced index tracking problem (EITP) aims at minimizing the tracking error, while simultaneously maximizing the excess return above the benchmark. A number of studies highlight that the amount invested in enhanced index funds has steadily increased over the last three decades. The figures reported in Ahmed and Nanda (2005) indicate that a sharp increase occurred in the middle of the ’90s in both the number of enhanced index funds available and the total net assets under enhanced fund management. The same trend is pointed out by Jorion (2002), who reports the outcomes of a survey conducted among fund managers of US institutional tax-exempt assets indicating that, from 1994 to 2000, enhanced index funds grew from 33 to 365 billion USD, more than a ten-fold increase. The same author also notes that, over the same period, passively managed funds grew more slowly than enhanced index funds. Koshizuka et al. (2009) mention that on the Tokyo Stock Exchange a significant amount of funds is managed using enhanced index tracking approaches. The growing popularity of enhanced index funds is experienced not only in mature financial markets, but also in emerging ones. Weng and Wang (2017) report a substantial increase in the importance of enhanced index funds in the Chinese market from 2008 to 2015.
Given this increasing spread of enhanced index funds, it is not surprising that the topic is attracting growing attention, although the number of papers addressing the EITP, compared to those on the index tracking problem, is still limited, and almost all the contributions appeared in the literature only in the last decade. Indeed, although the first papers applying OR techniques to the index tracking problem date back to the late ’80s and the ’90s (some of the earliest papers on the topic include Meade and Salkin 1989, 1990; Kwiatkowski 1992; Roll 1992), this subject still attracts steady interest among academics. We refer to Beasley et al. (2003), Gaivoronski et al. (2005), Canakgoz and Beasley (2009), and Guastaroba and Speranza (2012) for expositions of the earlier literature, and to Sant’Anna et al. (2017) for an overview of the recent contributions. On the other hand, the study of the EITP, which is the main topic addressed in the present paper, is a relatively more recent and less mature research area. Most of the research proposing optimization models or solution methods for the latter problem dates from 2005 or later. Since Canakgoz and Beasley (2009) provide an overview of the early literature on the EITP and Guastaroba et al. (2016) detail the recent research, in the following we briefly mention only the foremost papers on the EITP and focus on additional articles not included in the above references. To the best of our knowledge, Beasley et al. (2003) are the first to formalize the EITP. They propose a generalized formulation that allows the decision maker to control the trade-off between minimizing the index tracking error and maximizing the excess return over the benchmark through a parameter in the objective function. Drawing on the latter formulation, Dose and Cincotti (2005) employ a two-step solution approach for the EITP.
The method starts by constructing a tracking portfolio, selecting a subset of stocks designed to statistically represent the index. Subsequently, the stock weights in the portfolio are determined as the result of an optimization process. Lejeune and Samatlı-Paç (2013) propose a stochastic mixed-integer non-linear model for the EITP where asset returns and the return covariance terms are treated as random variables. Based on approximated stochastic dominance conditions, Bruni et al. (2012) formulate the EITP as a linear programming (LP) model with an exponential number of constraints. The formulation is solved using a separation procedure for the latter family of constraints. Along similar lines, Sharma et al. (2017a) propose an LP model for the EITP that aims at maximizing the mean portfolio return subject to constraints that limit the violation of the second-order stochastic dominance criterion. Roman et al. (2013) devise two models for the EITP that aim at selecting a portfolio whose return distribution dominates that of the benchmark with respect to the second-order stochastic dominance relation. Canakgoz and Beasley (2009) devise two mixed-integer linear programming models that adopt a regression-based view of index tracking and enhanced indexation, respectively. Konno and Hatagi (2005) propose a scheme to construct an index-plus-alpha (i.e., an enhanced) portfolio with minimal transaction costs. Kwon and Wu (2017) propose a mixed-integer second-order cone programming formulation for the EITP that maximizes the expected portfolio return subject to a limit on the portfolio risk and a bound on the tracking error. The portfolio risk is measured using the standard deviation of the portfolio returns, whereas the tracking error is defined as the standard deviation of the excess return of the portfolio over the benchmark. Furthermore, the authors devise a robust counterpart of the above model.
Mezali and Beasley (2013) apply quantile regression to index tracking and enhanced indexation. While studying the problem of determining an absolute return portfolio, Valle et al. (2014) discuss how their approach can be extended to address the EITP.

We mentioned above that, at its core, the EITP has a bi-objective nature, like any other mean-risk portfolio optimization model. Despite this observation, very few authors explicitly address the EITP as a bi-objective optimization problem. Among them, it is worth citing the paper by Li et al. (2011), where the EITP is formulated as a bi-objective mixed-integer non-linear optimization model that minimizes the tracking error, given by the downside standard deviation of the portfolio return from the benchmark, and maximizes the portfolio excess return. Their model is solved by means of an immunity-based multi-objective algorithm. Bruni et al. (2015) model the EITP as a bi-objective linear program that maximizes the average excess return of the portfolio over the benchmark, and minimizes the maximum downside deviation of the portfolio return from the market index. Filippi et al. (2016) cast the EITP as a bi-objective mixed-integer LP model which maximizes the excess return of the portfolio over the benchmark, and minimizes the tracking error, here defined as the absolute deviation between the portfolio and benchmark values. The authors devise a bi-objective heuristic framework for its solution.

Like any other multi-objective approach, the methods devised in the aforementioned papers do not provide a single optimal solution, but rather a set of (Pareto) optimal solutions, or a set of near-optimal solutions if the method used is a heuristic. As a consequence, these approaches provide the decision maker with a, possibly wide, range of alternative solutions. However, this can be seen as a drawback rather than a strength, since the choice of the specific solution to implement is left to the subjectivity of the decision maker. To overcome this limitation, some authors propose to cast the two objective functions of the EITP as a single objective expressed as a reward-risk ratio. In general terms, these ratios are performance measures that compare the expected returns of an investment (i.e., the reward) to the amount of risk undertaken to achieve those returns, and stem from the observation that there exists an inherent trade-off between the risk and the return of an investment. Nowadays, reward-risk ratios like the Sharpe ratio (see Sharpe 1966) and the Sortino ratio (see Sortino and Price 1994) are widely used to evaluate, compare and rank different investment strategies. To the best of our knowledge, Meade and Beasley (2011) are the first to use a reward-risk ratio in the context of enhanced indexation. The authors introduce a non-linear optimization model, based on the maximization of a modified Sortino ratio, and solve it by means of a genetic algorithm. However, the non-linearity of this model may represent an undesirable limitation to its use in financial practice, especially when portfolios have to meet several side constraints (such as cardinality constraints or buy-in thresholds) or when large-scale instances have to be solved since, in most cases, the inclusion of these features requires the introduction of binary and integer variables (see the survey by Mansini et al. 2014). Based on this observation, Guastaroba et al. 
(2016) introduce two mathematical formulations for the EITP based on the Omega ratio. The Omega ratio is a performance measure introduced by Keating and Shadwick (2002) which, broadly speaking, can be defined as the ratio between the expected value of the gains, defined as the portfolio returns above a predetermined target \(\tau \), and the expected value of the losses, that is, the portfolio returns below \(\tau \). The first formulation introduced in Guastaroba et al. (2016) applies a standard definition of the Omega ratio, computing the ratio with respect to a given target, whereas the second model, called the Extended Omega Ratio model, formulates the Omega ratio with respect to a random target. The authors show that both formulations, despite being non-linear in nature, can be transformed into LP models. The computational results point out that the portfolios selected by the Extended Omega Ratio model consistently outperform, in terms of out-of-sample performance, those optimized with the former model.
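For discrete, equally probable scenarios, the standard Omega ratio with a fixed target can be computed directly from the realizations as the ratio of the expected gains above \(\tau \) to the expected losses below \(\tau \). The following Python sketch is purely illustrative; the return data and target are hypothetical, not taken from the paper:

```python
import numpy as np

def omega_ratio(returns, tau):
    """Standard Omega ratio for equally probable discrete scenarios:
    expected gains above tau divided by expected losses below tau."""
    r = np.asarray(returns, dtype=float)
    gains = np.maximum(r - tau, 0.0).mean()   # E[(R - tau)_+]
    losses = np.maximum(tau - r, 0.0).mean()  # E[(tau - R)_+]
    return gains / losses

# Example: portfolio returns over T = 5 scenarios, target tau = 0.01
print(omega_ratio([0.03, -0.02, 0.05, 0.00, 0.02], 0.01))  # 1.75
```

A value above 1 indicates that, relative to the target, expected gains outweigh expected losses.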

Since their introduction, quantile risk measures have had a crucial impact on the development of new risk measures in finance. Conditional value-at-risk (CVaR), also known as mean excess loss, expected shortfall, worst conditional expectation, or tail VaR, is one such measure. The name CVaR was introduced in Rockafellar and Uryasev (2000), where the risk measure is developed for continuous distributions, and later extended to general distributions (i.e., with a possibly discontinuous distribution function) in Rockafellar and Uryasev (2002). The interest in this measure is continuously growing, as witnessed by the large number of recently published contributions, including continuous-time portfolio optimization problems (see Gao et al. 2017), applications within a data envelopment analysis framework (see Branda 2015; Branda and Kopa 2014), and many other applications beyond optimization in finance (see the survey by Filippi et al. 2020). A relevant advantage of the CVaR is that for discrete random variables, i.e., when probabilities can be represented using scenarios rather than densities, it can be optimized by means of LP methods. The success of CVaR as a measure of risk is related to the theoretical properties it satisfies and to some practical considerations that make it attractive also among practitioners. From a theoretical point of view, CVaR is a coherent risk measure, as shown in Pflug (2000) (see Artzner et al. 1999 for the definition of coherent risk measures), and is consistent with second-degree stochastic dominance, as detailed in Ogryczak and Ruszczyński (2002b). From a practical viewpoint, it is a downside risk measure in the sense that it does not penalize upside deviations, i.e., deviations of the portfolio returns above a given target, which any rational investor perceives as profits. Mansini et al. 
(2007) suggest that the concept of CVaR can be extended to improve the risk-averse modeling capabilities of the measure. Indeed, the authors show that a more detailed risk-aversion modeling can be achieved by considering simultaneously multiple CVaR measures, each one specified by a given tolerance level, and then combining them, as a weighted sum, into a single risk measure. The resulting measure is called the weighted multiple CVaR (WCVaR) and is also LP computable.
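For equally probable discrete scenarios, the CVaR at a tolerance level \(\beta \) can be computed as the average of the \(\beta T\) worst losses (when \(\beta T\) is an integer), and a weighted multiple CVaR as a convex combination of CVaRs at several tolerance levels. The sketch below is illustrative only; the return data, tolerance levels, and weights are hypothetical, and it follows the general weighted-combination idea rather than the exact formulation in Mansini et al. (2007):

```python
import numpy as np

def cvar(returns, beta):
    """CVaR at tolerance level beta for T equally probable scenarios,
    computed on losses L = -R. When beta*T is an integer, this equals
    the average of the beta*T worst (largest) losses."""
    losses = -np.asarray(returns, dtype=float)
    T = len(losses)
    k = int(round(beta * T))          # assumes beta*T is an integer
    worst = np.sort(losses)[-k:]      # the k largest losses
    return worst.mean()

def wcvar(returns, betas, weights):
    """Weighted multiple CVaR: a convex combination of CVaR measures
    at several tolerance levels."""
    return sum(w * cvar(returns, b) for b, w in zip(betas, weights))

r = [0.03, -0.05, 0.01, -0.02, 0.04, 0.00, 0.02, -0.01, 0.05, -0.03]
print(cvar(r, 0.2))                      # mean of the 2 worst losses
print(wcvar(r, [0.1, 0.2], [0.5, 0.5]))  # blends two tail levels
```

Choosing the weights and tolerance levels lets the modeler shape how deeply into the loss tail the measure looks.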

Additionally, it is worth mentioning the paper by Sharma et al. (2017b), where the concepts of Omega ratio optimization and CVaR are combined in the general context of portfolio optimization and, hence, not directly related to the EITP addressed in the current paper. In particular, Sharma et al. (2017b) reformulate the original Omega ratio by computing the target \(\tau \) (which is predetermined in its classical form) as the CVaR of a benchmark market portfolio. Goel et al. (2018) propose a formulation of the EITP that maximizes the ratio obtained by using the WCVaR in a STARR-like ratio (see “Appendix A” for further details on the STARR). Finally, another critical aspect concerns the relevance of the proposed approaches as efficient and fast tools for a financial manager’s desk. In recent decades, several contributions have focused on the development of optimization tools able to provide reliable solutions for the index tracking problem and other portfolio problems in a real-time environment (see Adcock and Meade 1994; Zenios 2008). Nowadays, optimization models can be considered decision-aiding tools that can be successfully applied also by users who are not familiar with optimization theory. This is especially true for the class of LP models to which the models we propose in this work belong.

Contributions This paper provides the following contributions. Firstly, we introduce a theoretical framework for risk-reward ratio models and employ it in the context of the EITP. Formulating the EITP as a single-objective risk-reward ratio model overcomes the main disadvantage, mentioned above, that afflicts the classical multi-objective approaches. Indeed, the latter produce an (often overwhelming) number of solutions for the decision maker, who then has to identify the most appropriate solution given her/his preferences. Conversely, the proposed approach provides the decision maker with a single optimal solution, the one minimizing the risk-reward ratio. We propose a novel class of bi-criteria optimization models expressed in terms of risk-reward ratios, where the risk measurement is based on the WCVaR. The latter measure is, as mentioned before, an extension of the classical CVaR that enables more detailed risk modeling by considering concurrently multiple CVaR measures, and encompasses the classical CVaR as a special case. In more detail, within a risk-reward ratio setting, we consider a deviation measure which is the counterpart of the WCVaR as a risk measure (see Rockafellar and Uryasev 2013). Following the findings reported in Guastaroba et al. (2016), the class of optimization models introduced here is also designed with respect to a random target. We show that the resulting formulation, non-linear in nature, can be reformulated as an LP model. In terms of decision-aiding tools, the proposed models have the advantage of including in a unique framework the whole gamut of risk attitudes, making them an intuitive, custom-built decision tool. More precisely, by simply deciding the weights associated with the multiple CVaR structure and by selecting appropriate values for the parameter controlling the excess return over the benchmark, much room is left to the user in deciding her/his risk tolerance.
From a computational viewpoint, the proposed models are LP formulations, which can be directly solved with an off-the-shelf solver in negligible computing times. This property makes them suitable for use in financial practice, where decisions have to be taken, more and more frequently, in a real-time environment. To validate the performance of the optimal portfolios selected by the proposed formulation, we conducted extensive computational experiments on benchmark instances taken from the literature, and compared their out-of-sample behavior with that of the portfolios constructed by solving a reformulation of the Extended Omega Ratio model introduced in Guastaroba et al. (2016). Indeed, in the current paper, we express the Extended Omega Ratio model in terms of a risk-reward minimization, rather than a reward-risk maximization as originally proposed in Guastaroba et al. (2016). Since, at least theoretically, the deviation measure in the denominator of the reward-risk ratio can take the value zero, Guastaroba et al. (2016) introduced additional constraints to guarantee its positivity and keep the problem always solvable. Our reformulation avoids such modeling issues.

Despite the extensive experiments carried out, the outcomes do not seem to clearly favor one model over the others. On the other hand, the results indicate a quite satisfactory ex-post performance of the optimal portfolios: all the optimal portfolios track the behavior of the benchmark very closely over the out-of-sample period, often achieving better returns.

Structure of the paper The remainder of the paper is organized as follows. In Sect. 2, we introduce the basic notation and some preliminary concepts that will be used throughout the rest of the paper. Section 3 is devoted to the introduction of the mathematical formulation for the EITP based on the WCVaR. Computational experiments are reported in Sect. 4, where an extensive evaluation and comparison of the out-of-sample performance of the optimal portfolios is provided. Finally, some concluding remarks are drawn in Sect. 5.

2 Basic notation and preliminary concepts

2.1 Basic notation

We consider an investor whose aim is to optimally select a portfolio of securities and hold it until the end of a specific investment horizon, i.e., the investor follows a so-called buy-and-hold strategy. Let \(J = \{ 1,2,\ldots ,n \} \) be the set of securities available for the investment. For each security \(j \in J\), its rate of return is represented by a random variable (r.v.) \(R_j\) with a given mean \(\mu _j = {\mathbb {E}}\{ R_j \}\). Let \(\mathbf{x}= (x_j)_{j=1,\ldots ,n}\) be the vector of decision variables \(x_j\) representing the shares (weights) that define a portfolio of securities. In any feasible portfolio the weights must sum to one, i.e., \(\sum ^n_{j=1} x_j =1\), and short sales are not allowed, i.e., \(x_j \ge 0\) for \(j=1,\ldots ,n\). Such basic constraints form a feasible set \(\mathcal{P}\). Each portfolio \(\mathbf{x}\) defines a corresponding r.v. \(R_\mathbf{x} = \sum ^n_{j=1} R_j x_j\) that represents the portfolio rate of return. The mean rate of return for portfolio \(\mathbf{x}\) is given as \(\mu (R_\mathbf{x}) = {\mathbb {E}}\{ R_\mathbf{x} \} = \sum ^n_{j=1} \mu _j x_j\). We consider T scenarios, each one with probability \(p_t\), where \(t=1,\ldots ,T\). We assume that for each r.v. \(R_j\) its realization \(r_{jt}\) under scenario t is known and that, for each security j, with \(j \in J,\) its mean rate of return is computed as \(\mu _j = \sum _{t=1}^T r_{jt} p_t\). The realization of the portfolio rate of return \(R_{\mathbf{x}}\) under scenario t is given by \(y_t = \sum _{j=1}^n r_{jt} x_j\). Although the optimization models that we are going to describe remain valid for any arbitrary set of scenarios or discrete probability distribution function, we assume that the T scenarios are treated as equally probable, i.e., we set \(p_t=1/T\) for \(t=1,\ldots ,T\), and that these scenarios are represented by historical data observed on a stock exchange market.
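The notation above can be illustrated with a small numerical sketch. The scenario data and portfolio weights below are hypothetical and serve only to show how \(\mu _j\), \(y_t\), and \(\mu (R_\mathbf{x})\) are computed from the realizations:

```python
import numpy as np

# Hypothetical data: T = 4 equally probable scenarios, n = 3 securities.
# r[t, j] is the realization r_jt of security j under scenario t.
r = np.array([[ 0.02,  0.01, -0.01],
              [ 0.00,  0.03,  0.02],
              [-0.01,  0.02,  0.04],
              [ 0.03, -0.02,  0.01]])
T, n = r.shape
p = np.full(T, 1.0 / T)          # p_t = 1/T

x = np.array([0.5, 0.3, 0.2])    # weights: sum to 1, no short sales
assert abs(x.sum() - 1.0) < 1e-12 and (x >= 0).all()

mu = p @ r    # security means mu_j = sum_t r_jt p_t
y = r @ x     # portfolio realizations y_t = sum_j r_jt x_j
mu_x = p @ y  # portfolio mean mu(R_x); equals mu @ x
print(mu_x)
```

Note that the portfolio mean can equivalently be computed scenario-wise (`p @ y`) or security-wise (`mu @ x`), as in the text.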

Regarding the benchmark, we denote the r.v. representing its rate of return as \(R^I\), whereas its realization under scenario t is denoted as \(r^I_t\), with \(t=1,\ldots ,T\), and its mean rate of return as \(\mu ^I = \sum _{t=1}^T r^I_t p_t\). In enhanced indexation, the investor is interested in determining an optimal portfolio that outperforms the rate of return of the benchmark. This situation can be modeled using as a target some reference r.v. \(R^\alpha = R^I + \alpha \) rather than simply the benchmark rate of return \(R^I\). In these terms, \(R^\alpha \) represents the rate of return beating the benchmark by a given excess return equal to \(\alpha \). Its realization under scenario t is denoted as \(r^\alpha _t = r^I_t + \alpha \), with \(t=1,\ldots ,T\), and its mean rate of return as \(\mu ^\alpha = \sum _{t=1}^T r^\alpha _t p_t\). Note that the value of \(\alpha \) should be chosen according to the market behavior. Finally, in the following the notation \((\cdot )_+\) will denote the non-negative part of a quantity, that is, \((Q)_+=\max \{ Q,0 \}\).

2.2 Risk, deviation and ratio measures

In his seminal work, Markowitz (1952) suggests modeling portfolio optimization problems as mean-risk bi-criteria problems, where the mean portfolio return \(\mu (R_\mathbf{x})\) is maximized and its standard deviation \(\sigma (R_\mathbf{x})\) is minimized. Since then, a number of other (generalized) deviation measures have been considered (see Konno and Yamazaki 1991; Mansini et al. 2003a; Rockafellar et al. 2006a for further details). A deviation measure is defined as a functional \(\varrho \) that satisfies the following axioms (Rockafellar et al. 2006b; Rockafellar and Uryasev 2013):

  • shift invariance: \(\varrho (R_\mathbf{x}+C)=\varrho (R_\mathbf{x})\), for all \(R_\mathbf{x}\) and constants C;

  • positive homogeneity: \(\varrho (0)=0\) and \(\varrho (\lambda R_\mathbf{x})= \lambda \varrho (R_\mathbf{x})\), for all \(R_\mathbf{x}\) and all \(\lambda > 0\);

  • subadditivity: \(\varrho (R_{\mathbf{x}^\prime } + R_{\mathbf{x}^{\prime \prime }}) \le \varrho (R_{\mathbf{x}^\prime }) + \varrho (R_{\mathbf{x}^{\prime \prime }})\), for all \(R_{\mathbf{x}^\prime }\) and \(R_{\mathbf{x}^{\prime \prime }}\);

  • risk relevance: \(\varrho (R_\mathbf{x})>0 \) for all nonconstant \(R_{\mathbf{x}}\), and \(\varrho (R_\mathbf{x})=0\) for constant \(R_{\mathbf{x}}\).
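These axioms can be verified numerically for a concrete LP-computable deviation measure. The sketch below uses the mean absolute deviation on hypothetical return vectors; it checks each axiom on a sample rather than proving it in general:

```python
import numpy as np

def mad(r):
    """Mean absolute deviation, an LP-computable deviation measure."""
    r = np.asarray(r, dtype=float)
    return np.abs(r - r.mean()).mean()

r1 = np.array([0.02, -0.01, 0.03, 0.00])
r2 = np.array([0.01, 0.02, -0.02, 0.01])

# shift invariance: rho(R + C) = rho(R)
assert np.isclose(mad(r1 + 0.05), mad(r1))
# positive homogeneity: rho(lambda * R) = lambda * rho(R), lambda > 0
assert np.isclose(mad(2.5 * r1), 2.5 * mad(r1))
# subadditivity: rho(R' + R'') <= rho(R') + rho(R'')
assert mad(r1 + r2) <= mad(r1) + mad(r2) + 1e-12
# risk relevance: positive for nonconstant R, zero for constant R
assert mad(r1) > 0 and np.isclose(mad(np.full(4, 0.01)), 0.0)
print("all deviation-measure axioms hold on this sample")
```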

A relevant drawback of deviation measures is that their minimization is not consistent with the stochastic dominance order paradigms (e.g., see Whitmore and Findlay 1978). In stochastic dominance, uncertain returns (modeled as random variables) are compared pointwise through some performance functions constructed from their distribution functions. The first performance function is defined as the right-continuous cumulative distribution function \(F^{(1)}_{R_\mathbf{x}}(\eta ) = F_{R_\mathbf{x}}(\eta ) = {\mathbb {P}}\{ {R_\mathbf{x}} \le \eta \}\) and defines the first-degree stochastic dominance. The second function is derived from the first as \( F^{(2)}_{R_\mathbf{x}}(\eta ) = \int _{-\infty }^{\eta } F_{R_\mathbf{x}}(\xi ) \ d\xi \), and defines the second-degree stochastic dominance (SSD). We say that portfolio \( \mathbf{x}^\prime \) dominates \(\mathbf{x}^{{\prime \prime }}\) under the SSD criterion (denoted as \( \mathbf{x}^\prime \succ _{_{SSD}} \mathbf{x}^{{\prime \prime }}\)) if \(F^{(2)}_{R_{\mathbf{x}^\prime }}(\eta ) \le F^{(2)}_{R_{\mathbf{x}^{\prime \prime }}}(\eta )\) for all \(\eta \), with at least one strict inequality. The latter relation can be expressed in a weaker form, which states that portfolio \( \mathbf{x}^\prime \) dominates \(\mathbf{x}^{{\prime \prime }}\) under the weak SSD criterion (\( \mathbf{x}^\prime \succeq _{_{SSD}} \mathbf{x}^{{\prime \prime }}\)) if \(F^{(2)}_{R_{\mathbf{x}^\prime }}(\eta ) \le F^{(2)}_{R_{\mathbf{x}^{\prime \prime }}}(\eta )\) for all \(\eta \). Furthermore, a feasible portfolio \(\mathbf{x}^\prime \in \mathcal{P}\) is said to be SSD efficient if there is no other feasible portfolio \(\mathbf{x}\in \mathcal{P}\) such that \(\mathbf{x}\succ _{_{SSD}} \mathbf{x}^\prime \). The concept of stochastic dominance relates the notion of risk to a possible failure of achieving some targets. 
As shown by Ogryczak and Ruszczyński (1999), the values of the function \(F^{(2)}_{R_\mathbf{x}}\) used to define the SSD relation can also be presented as first-order lower partial moments (\(\hbox {LPM}_1\)), that is, \(F^{(2)}_{R_\mathbf{x}}(\eta ) = {\mathbb {E}}\{ (\eta - R_{\mathbf{x}})_+ \}\). The latter is the simplest downside risk criterion (cf. Fishburn 1977) that, when computed for a specific value \(\tau \), will be denoted as \(\delta _{\tau }\):

$$\begin{aligned} \delta _{\tau }(R_\mathbf{x}) = {\mathbb {E}}\{ ( \tau - R_{\mathbf{x}} )_+ \} = F^{(2)}_{R_\mathbf{x}}(\tau ). \end{aligned}$$
(1)

For discrete rates of return represented by their realizations, \(\delta _{\tau }(R_\mathbf{x})\) is LP computable.
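For equally probable scenarios, both \(\delta _{\tau }\) and a weak SSD check based on \(F^{(2)}\) reduce to simple computations on the realizations. The sketch below uses hypothetical return vectors; checking \(F^{(2)}\) only at the union of realization points suffices here because both functions are piecewise linear with kinks only at those points and have identical slopes beyond them:

```python
import numpy as np

def lpm1(returns, eta):
    """First-order lower partial moment E[(eta - R)_+], which equals
    F2(eta) for equally probable discrete scenarios."""
    r = np.asarray(returns, dtype=float)
    return np.maximum(eta - r, 0.0).mean()

def ssd_dominates(r1, r2):
    """Weak SSD check: F2 of r1 <= F2 of r2 at every realization point
    (sufficient for piecewise-linear F2 functions)."""
    grid = np.union1d(r1, r2)
    return all(lpm1(r1, eta) <= lpm1(r2, eta) + 1e-12 for eta in grid)

rA = [0.02, 0.01, 0.03, 0.02]
rB = [0.01, 0.00, 0.03, 0.02]
print(lpm1(rA, 0.02))          # delta_tau with tau = 0.02
print(ssd_dominates(rA, rB))   # rA weakly SSD-dominates rB
```

Here `rA` dominates `rB` because its sorted realizations are scenario-by-scenario at least as large, so its \(F^{(2)}\) curve lies below that of `rB` everywhere.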

A deviation measure \({\varrho }(R_\mathbf{x})\) is said to be mean-complementary SSD consistent if \({\mathbf{x}^\prime } \succeq _{_{SSD}} {\mathbf{x}^{\prime \prime }}\) implies that \(\mu (R_{\mathbf{x}^\prime })-{\varrho }(R_{\mathbf{x}^\prime }) \ge \mu (R_{\mathbf{x}^{\prime \prime }})-{\varrho }(R_{\mathbf{x}^{\prime \prime }})\). If a deviation measure is mean-complementary SSD consistent then, except for portfolios with identical values of \(\mu (R_\mathbf{x})\) and \({\varrho }(R_\mathbf{x})\), every efficient solution of the bi-criteria problem \( \max \{ (\mu (R_\mathbf{x}),\mu (R_\mathbf{x})-{\varrho }(R_\mathbf{x})) \ : \ \mathbf{x}\in \mathcal{P}\}\) is an SSD efficient portfolio (see Ogryczak and Ruszczyński 1999 for further details).

Note that for any deviation measure \({\varrho }(R_\mathbf{x})\), \(\mu (R_\mathbf{x})-{\varrho }(R_\mathbf{x})\) is the negative of the corresponding risk measure \({\mathscr {R}}(R_\mathbf{x})={\mathbb {E}}\{-R_\mathbf{x}\} + \varrho (R_\mathbf{x}) = - (\mu (R_\mathbf{x})-{\varrho }(R_\mathbf{x}))\) (Rockafellar et al. 2006b; Rockafellar and Uryasev 2013). In particular, assuming the standard deviation as the deviation measure, the corresponding risk measure is defined as \({\mathscr {R}}_{\sigma }(R_\mathbf{x})={\mathbb {E}}\{-R_\mathbf{x}\} + \sigma (R_\mathbf{x})\). A risk measure \({\mathscr {R}}\) is a functional that satisfies the following axioms (Rockafellar et al. 2006b; Rockafellar and Uryasev 2013):

  • translation invariance: \({\mathscr {R}}(R_\mathbf{x}+C)={\mathscr {R}}(R_\mathbf{x}) - C\), for all \(R_\mathbf{x}\) and constants C;

  • positive homogeneity: \({\mathscr {R}}(0)=0\) and \({\mathscr {R}}(\lambda R_\mathbf{x})= \lambda {\mathscr {R}}(R_\mathbf{x})\), for all \(R_\mathbf{x}\) and all \(\lambda > 0\);

  • subadditivity: \({\mathscr {R}}(R_{\mathbf{x}^\prime } + R_{\mathbf{x}^{\prime \prime }}) \le {\mathscr {R}}(R_{\mathbf{x}^\prime }) + {\mathscr {R}}(R_{\mathbf{x}^{\prime \prime }})\), for all \(R_{\mathbf{x}^\prime }\) and \(R_{\mathbf{x}^{\prime \prime }}\);

  • risk relevance: \({\mathscr {R}}(R_\mathbf{x})>{\mathbb {E}}\{-R_\mathbf{x}\} \) for all nonconstant \(R_{\mathbf{x}}\), and \({\mathscr {R}}(R_\mathbf{x})={\mathbb {E}}\{-R_\mathbf{x}\}\) for constant \(R_{\mathbf{x}}\).

Several portfolio performance measures have been introduced as such risk measures \({\mathscr {R}}\). As prominent examples, we recall the worst realization (maximum loss) studied by Young (1998) and the CVaR measure introduced by Rockafellar and Uryasev (2000). The corresponding deviation measure is then defined as \({\varrho }(R_\mathbf{x})={\mathscr {R}}(R_\mathbf{x}-\mu (R_\mathbf{x}))\).

The common approach used to tackle a Markowitz-type mean-risk model is to transform the objective of maximizing the mean portfolio return into a constraint by imposing a minimum acceptable mean return \(\mu _0\), while minimizing the risk criterion. An alternative approach is to seek a risky portfolio that offers the maximum increase of the portfolio mean return over a given target \(\tau \), per unit of risk incurred. The target \(\tau \) is often represented by the mean return of a risk-free asset. The latter approach leads to the following optimization problem expressed as a ratio:

$$\begin{aligned} \max \left\{ \frac{\mu (R_\mathbf{x}) - \tau }{\varrho (R_\mathbf{x})}\ : \ \mathbf{x}\in \mathcal{P}\right\} . \end{aligned}$$
(2)

The optimal solution of problem (2) is usually called the tangency portfolio or the market portfolio. Mansini et al. (2003b) show that for LP computable risk measures, the reward-risk ratio optimization problem (2) can be converted into an LP form. When the risk-free rate of return \(r_0\) is used instead of the target \(\tau \), the ratio optimization problem (2) corresponds to the classical Tobin model (cf. Tobin 1958) of modern portfolio theory, where the capital market line is the line drawn from the intercept corresponding to \(r_0\) tangent to the mean-risk efficient frontier. Any point on this line provides the maximum return for each level of risk. The tangency portfolio \({\textit{TP}}_{r_0}\) is the portfolio of risky assets corresponding to the point where the capital market line is tangent to the efficient frontier.

Instead of the reward-risk ratio maximization (2), one may formulate the same problem in terms of risk-reward ratio minimization as follows:

$$\begin{aligned} \min \left\{ \frac{\varrho (R_\mathbf{x})}{\mu (R_\mathbf{x}) - \tau }\ : \ \mathbf{x}\in \mathcal{P}\right\} . \end{aligned}$$
(3)

Even though both ratio optimization models (2) and (3) are theoretically equivalent, the risk-reward formulation (3) enables an easier control of the denominator positivity by simply introducing the additional inequality \({\mu (R_\mathbf{x}) - \tau \ge \varepsilon _1}\), with \(\varepsilon _1 > 0\).
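To illustrate the kind of linearization involved, the following sketch applies the standard homogenization (Charnes–Cooper type) substitution to (3), assuming \(\varrho \) is positively homogeneous and LP computable and that \(\mu (R_\mathbf{x}) - \tau \ge \varepsilon _1\) holds on the feasible set; the exact reformulations used later in the paper may differ in details. Setting \(v = 1/(\mu (R_\mathbf{x}) - \tau )\) and \(\tilde{x}_j = v x_j\), positive homogeneity gives \(\varrho (R_{\tilde{\mathbf{x}}}) = v \varrho (R_\mathbf{x})\), so that problem (3) becomes:

$$\begin{aligned} \min \left\{ \varrho (R_{\tilde{\mathbf{x}}}) \ : \ \mu (R_{\tilde{\mathbf{x}}}) - \tau v = 1,\ \sum _{j=1}^n \tilde{x}_j = v,\ \tilde{x}_j \ge 0,\ 0 < v \le 1/\varepsilon _1 \right\} , \end{aligned}$$

from which an optimal portfolio is recovered as \(x_j = \tilde{x}_j / v\). Note that \(\sum _{j=1}^n \tilde{x}_j = v\) encodes \(\sum _{j=1}^n x_j = 1\), and \(v \le 1/\varepsilon _1\) encodes the denominator-positivity constraint.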

Note that two feasible portfolios having zero risk are both optimal for the risk-reward ratio model (3), even if they are characterized by different mean returns. This shortcoming can be overcome by regularizing the numerator, leading to the following formulation:

$$\begin{aligned} \min \left\{ \frac{\varrho (R_\mathbf{x}) + \varepsilon _2}{\mu (R_\mathbf{x}) - \tau }\ : \ \mu (R_\mathbf{x}) - \tau \ge \varepsilon _1,\ \mathbf{x}\in \mathcal{P}\right\} . \end{aligned}$$
(4)

This regularization of the numerator is useful when the deviation measure \(\varrho (R_\mathbf{x})\) takes the value zero for multiple portfolios. In these cases, an optimal solution to problem (4) is the portfolio with the largest mean return. Furthermore, the following theorem holds.

Theorem 1

Let \(\mathbf{x}^0\) be an optimal portfolio to the risk-reward ratio optimization problem (4) that satisfies condition \(\mu (R_{\mathbf{x}^0}) - \varrho (R_{\mathbf{x}^0}) \le \tau \). For any deviation measure \(\varrho (R_\mathbf{x})\) which is mean-complementary SSD consistent, portfolio \(\mathbf{x}^0\) is nondominated in terms of the bi-criteria optimization \(\max \{ \mu (R_\mathbf{x}), - \varrho (R_\mathbf{x})\}\) and is SSD nondominated with the exception of alternative (and equivalent) optimal portfolios having the same values of mean return \(\mu (R_{\mathbf{x}^0})\) and deviation measure \(\varrho (R_{\mathbf{x}^0})\).

Proof

Suppose that there exists a feasible portfolio \(\mathbf{x}\), i.e., \(\mathbf{x}\in \mathcal{P}\) and \(\mu (R_\mathbf{x}) \ge \tau + \varepsilon _1\), such that \(\mathbf{x}\succeq _{_{SSD}} \mathbf{x}^0\) or \( \mu (R_\mathbf{x}) \ge \mu (R_{\mathbf{x}^0})\) and simultaneously \(\varrho (R_\mathbf{x}) \le \varrho (R_{\mathbf{x}^0})\). Each of these dominance relations implies that \(\mu (R_\mathbf{x}) - \varrho (R_\mathbf{x}) \ge \mu (R_{\mathbf{x}^0}) - \varrho (R_{\mathbf{x}^0})\) and \(\mu (R_\mathbf{x})\ge \mu (R_{\mathbf{x}^0})\). Note that the objective function in problem (4) can be written as:

$$\begin{aligned} \frac{\varrho (R_\mathbf{x}) + \varepsilon _2}{\mu (R_\mathbf{x}) - \tau } = \frac{\tau - (\mu (R_\mathbf{x}) - \varrho (R_\mathbf{x})) + \varepsilon _2}{\mu (R_\mathbf{x}) - \tau } +1. \end{aligned}$$

Due to the optimality of \(\mathbf{x}^0\) and the additional condition \(\mu (R_{\mathbf{x}^0}) - \varrho (R_{\mathbf{x}^0}) \le \tau \), in the above ratio both numerator and denominator are positive for solution \(\mathbf{x}^0\), whereas the denominator is positive for any feasible portfolio \(\mathbf{x}\). Hence, whenever \(\mu (R_\mathbf{x}) - \varrho (R_\mathbf{x}) > \mu (R_{\mathbf{x}^0}) - \varrho (R_{\mathbf{x}^0})\) or \(\mu (R_\mathbf{x}) > \mu (R_{\mathbf{x}^0})\), the following inequality holds:

$$\begin{aligned} \frac{\varrho (R_\mathbf{x}) + \varepsilon _2}{\mu (R_\mathbf{x}) - \tau }= & {} \frac{\tau - (\mu (R_\mathbf{x}) - \varrho (R_\mathbf{x})) + \varepsilon _2}{\mu (R_\mathbf{x}) - \tau } + 1 < \frac{\tau - (\mu (R_{\mathbf{x}^0}) - \varrho (R_{\mathbf{x}^0})) +\varepsilon _2}{\mu (R_{\mathbf{x}^0}) - \tau } + 1 \\= & {} \frac{\varrho (R_{\mathbf{x}^0}) + \varepsilon _2}{\mu (R_{\mathbf{x}^0}) - \tau } \end{aligned}$$

which contradicts the optimality of \(\mathbf{x}^0\). Therefore, \(\mu (R_\mathbf{x}) = \mu (R_{\mathbf{x}^0})\) and \(\varrho (R_\mathbf{x}) = \varrho (R_{\mathbf{x}^0})\), which means \(\mathbf{x}\) is an equivalent optimal solution to (4). \(\square \)

Note that condition \(\mu (R_{\mathbf{x}^0}) - \varrho (R_{\mathbf{x}^0}) \le \tau \) is equivalent to imposing that the value of ratio \((\varrho (R_{\mathbf{x}^0})+\varepsilon _2)/(\mu (R_{\mathbf{x}^0}) - \tau )\) is greater than 1. Consequently, the risk-reward ratio model (4) is well-defined only if this condition is not violated.

To apply the risk-reward ratio model (4) directly in the domain of enhanced indexation, one should replace the target value \(\tau \) with the mean rate of return \(\mu ^\alpha \), the latter as defined above in Sect. 2.1. As already mentioned, Guastaroba et al. (2016) have shown that the performance of the portfolios selected by a ratio optimization model (in their paper the Omega ratio is expressed in terms of a reward-risk ratio model) can be significantly improved if the models are modified so as to take into account whether the portfolio tracks, falls below, or beats the benchmark under multiple scenarios. To this aim, one should formulate the risk-reward ratio model for a random benchmark return \(R^\alpha \), rather than for the mean rate of return \(\mu ^\alpha \). In other words, the optimization model is applied to the distribution of the difference \((R_\mathbf{x}-R^\alpha )\), thus taking the following form:

$$\begin{aligned} \min \left\{ \frac{\varrho (R_\mathbf{x}-R^\alpha ) + \varepsilon _2}{\mu (R_\mathbf{x}- R^\alpha )}\ : \ \mu (R_\mathbf{x}-R^\alpha ) \ge \varepsilon _1,\ \mathbf{x}\in \mathcal{P}\right\} . \end{aligned}$$
(5)

Note that applying model (5) to the deterministic target \(\tau \), i.e., setting \(R^\alpha =\tau \), one gets exactly the standard risk-reward ratio model (4), since \(\mu (R_\mathbf{x}- \tau )= \mu (R_\mathbf{x}) - \tau \) and, for a deviation measure, \(\varrho (R_\mathbf{x}-\tau )=\varrho (R_\mathbf{x})\).

It is worth highlighting that, in the literature, some authors have proposed ratio performance measures based on the CVaR. In “Appendix A”, we discuss some of these ratio measures and point out their similarities to the ones considered in the present paper.

2.3 Weighted CVaR risk measures

We consider the CVaR defined directly on the distribution of returns \(R_\mathbf{x}\). Hence, for any real \(0 < \beta \le 1\), the CVaR at level \(\beta \) is defined as (see Rockafellar et al. 2006a):

$$\begin{aligned} CVaR_\beta (R_\mathbf{x}) = - [\text{expectation of } R_\mathbf{x} \text{ in its lower } \beta \text{ tail distribution}]. \end{aligned}$$

Thus, formally, the CVaR is the negative of the normalized value of the Absolute Lorenz Curve, i.e., \(CVaR_\beta (R_\mathbf{x}) = - M_{\beta }(R_\mathbf{x})\), where \(M_{\beta }\) is given by the following formula (see Ogryczak and Ruszczyński 2002b):

$$\begin{aligned} M_{\beta }(R_\mathbf{x}) = \frac{1}{\beta } \int _0^\beta F_{R_\mathbf{x}}^{(-1)}(\xi ) d\xi , \end{aligned}$$
(6)

where \(F_{R_\mathbf{x}}^{(-1)}\) is the quantile function for the portfolio return \(R_\mathbf{x}\). It is defined as \(F_{R_\mathbf{x}}^{(-1)}(\xi ) = \inf \{ \eta : F_{R_\mathbf{x}}(\eta ) \ge \xi \}\) for \(0 < \xi \le 1\), i.e., the left-continuous inverse of the right-continuous cumulative distribution function \(F_{R_\mathbf{x}}(\eta )= {\mathbb {P}}\{ R_\mathbf{x}\le \eta \}\). According to Rockafellar et al. (2006b) and Rockafellar and Uryasev (2013), the CVaR measure (6) can be classified as a risk measure and the corresponding deviation measure is \(\varDelta _{\beta }(R_\mathbf{x}) = CVaR_\beta (R_\mathbf{x}- \mu (R_\mathbf{x}))= \mu (R_\mathbf{x}) - M_{\beta }(R_\mathbf{x})\) (see Rockafellar et al. 2006b; Mansini et al. 2003b). For a discrete random variable represented by its realizations \(y_t\), with \(t = 1, \dots , T\), both the CVaR measure and its corresponding deviation measure \(\varDelta _{\beta }(R_\mathbf{x})\) are LP computable when minimized. In particular:

$$\begin{aligned} \varDelta _{\beta }(R_\mathbf{x}) = \min _{\mathbf{d}, \eta } \left\{ \mu (R_\mathbf{x}) - \eta + \frac{1}{\beta } \sum _{t=1}^{T} p_t d_t : d_t \ge \eta - y_t,\ d_t \ge 0, \ t=1,\ldots ,T\right\} , \end{aligned}$$
(7)

where \(\eta \) is an unbounded variable taking, at the optimum, the value of the \(\beta \)-quantile.
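Since the minimand in (7) is piecewise linear in \(\eta \), with the optimum attained at the \(\beta \)-quantile, for a small discrete distribution the deviation measure can be evaluated without an LP solver by scanning the realizations as candidate values of \(\eta \). The following Python sketch, on made-up scenario data, illustrates this:

```python
def cvar_deviation(y, p, beta):
    """Evaluate the deviation measure of Eq. (7): the minimum over eta of
    mu - eta + (1/beta) * sum_t p_t * max(eta - y_t, 0).
    Since the optimal eta is the beta-quantile, it suffices to scan the
    realizations y_t as candidates."""
    mu = sum(pt * yt for pt, yt in zip(p, y))
    return min(
        mu - eta + sum(pt * max(eta - yt, 0.0) for pt, yt in zip(p, y)) / beta
        for eta in y
    )

# Hypothetical scenario data: five equiprobable weekly returns
y = [0.02, -0.01, 0.03, -0.04, 0.01]
p = [0.2] * 5
# For beta = 0.2 the lower tail is the single worst scenario (-0.04),
# so Delta = mu - M_beta = 0.002 - (-0.04) = 0.042
print(cvar_deviation(y, p, 0.2))
```

In the LP formulation (7) itself, the scan over candidates is replaced by the variables \(d_t\) and \(\eta \), which is what makes the measure usable inside a larger optimization model.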

Although the CVaR is risk relevant for \(0< \beta <1\), it captures only the mean within a part (tail) of the distribution of returns. Therefore, such a single criterion may be limiting when it is important to model various risk-aversion preferences by treating more and less extreme events differently. To strengthen its modeling capabilities, Mansini et al. (2007) show that a more detailed risk-aversion modeling can be achieved by considering simultaneously multiple CVaR measures, each one specified by a given tolerance level, and combining them, as a weighted sum, into a single risk measure. They proposed to consider several, say m, levels \(0< \beta _1< \beta _2< \cdots< \beta _m < 1\) and to combine the corresponding CVaR measures in a weighted sum, leading to the Weighted CVaR (WCVaR) measure. Note that larger losses, which appear in the CVaR measures for several tolerance levels, are thereby taken into account with larger accumulated weights within the WCVaR measure. The WCVaR can be expressed in terms of the deviation measure as a weighted sum of several \(\varDelta _{\beta _k}(R_\mathbf{x})\) measures combined using positive (and normalized) weights, thus leading to the following form:

$$\begin{aligned} \varDelta ^{(m)}_\mathbf{w}(R_\mathbf{x}) = \sum _{k=1}^m w_k \varDelta _{\beta _k}(R_\mathbf{x}), \quad \sum _{k=1}^m w_k = 1, \quad w_k > 0, \quad k=1,\ldots ,m. \end{aligned}$$
(8)

\(\varDelta _{\beta _k}(R_\mathbf{x})\) is a convex deviation measure. Since, as mentioned above, the CVaR is coherent and SSD consistent, the same applies to the WCVaR. In particular, \(\mathbf{x}^\prime \succeq _{_{SSD}} \mathbf{x}^{{\prime \prime }}\) implies that \(\mu (R_{\mathbf{x}^\prime }) - \varDelta ^{(m)}_{\mathbf{w}}(R_{\mathbf{x}^\prime }) \ge \mu (R_{\mathbf{x}^{\prime \prime }}) - \varDelta ^{(m)}_{\mathbf{w}}(R_{\mathbf{x}^{\prime \prime }})\) (see Ogryczak and Ruszczyński 2002a).

Mansini et al. (2007) identified two main classes of WCVaR measures, which primarily differ in the set of weights \(w_k\) used. More precisely, they considered the Wide WCVaR measures, providing an approximation to the Gini mean difference (GMD) measure \(\varGamma (R_\mathbf{x}) = 2 \int _0^{1}(\mu (R_\mathbf{x}) \alpha - F_{R_\mathbf{x}}^{(-2)}(\alpha )) d\alpha \) (see Yitzhaki 1982), and the Tail WCVaR measures, providing an approximation to the Tail GMD measure \(\varGamma _{\beta _m}(R_\mathbf{x}) = \frac{2}{\beta _m^2} \int \nolimits _0^{\beta _m} (\mu (R_\mathbf{x}) \alpha - F_{R_\mathbf{x}}^{(-2)}(\alpha )) d\alpha \) (see Ogryczak and Ruszczyński 2002a), where \(F_{R_\mathbf{x}}^{(-2)}(\alpha )= \int _0^\alpha F_{R_\mathbf{x}}^{(-1)}(\nu ) d\nu \). In both classes of models, once the tolerance levels \(\beta _k\) have been decided, the corresponding weights \(w_k\) are automatically defined. In both cases, it is not necessary to consider a dense grid with a large number of tolerance levels to properly model risk-averse preferences: \(m=3\) and \(m=2\) may be enough. Based on the above definitions of the weights and on the computational results reported in Mansini et al. (2007), where the Wide WCVaR models turned out to be dominated by the Tail WCVaR models, we decided to concentrate on the latter. In more detail, given a grid of m tolerance levels \(0< \beta _1< \dots< \beta _k< \dots < \beta _m = \beta \), the weights are defined as follows:

$$\begin{aligned} \begin{array}{rl} w_k = \frac{\beta _k (\beta _{k+1} - \beta _{k-1})}{\beta ^2} \quad k =1,\ldots ,m-1,\ \text{ and } \ w_m = \frac{\beta _m (\beta _{m} - \beta _{m-1})}{\beta ^2}, \end{array} \end{aligned}$$
(9)

where \(\beta _0 = 0\). For the sake of brevity, in the remainder of the paper we refer to the Tail WCVaR measure with weights defined as in (9) simply as WCVaR.
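The weights in (9) are fully determined by the chosen grid of tolerance levels. A short Python sketch computing them, using as examples the grids adopted later in the experiments:

```python
def tail_wcvar_weights(betas):
    """Weights of Eq. (9) for the Tail WCVaR, given an increasing grid
    0 < beta_1 < ... < beta_m = beta, with beta_0 = 0 by convention."""
    m = len(betas)
    beta = betas[-1]
    ext = [0.0] + list(betas)    # ext[k] holds beta_k, with ext[0] = beta_0
    w = [ext[k] * (ext[k + 1] - ext[k - 1]) / beta**2 for k in range(1, m)]
    w.append(ext[m] * (ext[m] - ext[m - 1]) / beta**2)   # weight for k = m
    return w

# Grids used in Sect. 4: weights are approx [0.2, 0.8] and [0.05, 0.45, 0.5],
# and in both cases they sum to 1, as required by (8)
print(tail_wcvar_weights([0.05, 0.25]))
print(tail_wcvar_weights([0.05, 0.25, 0.50]))
```

Note how the weights grow toward the largest tolerance level, so the CVaR measures covering the larger tails receive the larger accumulated weights.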

For returns represented by their realizations \(y_t\), with \(t = 1, \dots , T\), the WCVaR deviation measure can be represented by the following LP problem:

$$\begin{aligned} \begin{array}{rl} \varDelta ^{(m)}_\mathbf{w}(R_\mathbf{x}) = &{} \displaystyle \min _{\mathbf{d}, \eta } \left\{ \mu (R_\mathbf{x}) - \mathop \sum \limits _{k=1}^{m} w_k \eta _k + \mathop \sum \limits _{k=1}^{m} \frac{w_k}{\beta _k} \mathop \sum \limits _{t=1}^{T} p_t d_{tk} \right\} \\ \text{ s.t. } &{} d_{tk} \ge \eta _k - y_{t}, \quad d_{tk} \ge 0 \quad t=1,\ldots ,T;\ k=1,\ldots ,m, \end{array} \end{aligned}$$
(10)

where \(\eta _k\), with \(k=1,\ldots ,m\), are unbounded variables taking, at the optimum, the values of the corresponding \(\beta _k\)-quantiles.

3 Optimization models for the enhanced index tracking problem

The present section introduces the optimization models tested in the computational experiments. In Sect. 3.1, we introduce in detail the new optimization model based on the WCVaR. The model described in Sect. 3.2 is the risk-reward version of the formulation devised in Guastaroba et al. (2016) based on the Omega ratio. For the sake of brevity, we simply derive the LP formulation of our model and highlight the differences compared to the model in Guastaroba et al. (2016).

3.1 Extended WCVaR ratio model

As risk-reward ratio models are well-defined for deviation measures, in a CVaR-based risk-reward ratio model one must use the deviation measure \(\varDelta ^{(m)}_\mathbf{w}(R_\mathbf{x})\). Therefore, the risk-reward ratio model for the EITP based on the WCVaR is the following:

$$\begin{aligned} \min \!\left\{ \! \frac{\varDelta ^{(m)}_{\mathbf{w}}(R_\mathbf{x}- R^\alpha ) + \varepsilon _2}{\mu (R_\mathbf{x}- R^\alpha )} \ : \ \mu (R_\mathbf{x}- R^\alpha ) \ge \varepsilon _1, \mathbf{x}\in \mathcal{P}\right\} , \end{aligned}$$
(11)

where we replaced \(\varrho (R_\mathbf{x}-R^\alpha )\) in (5) with \(\varDelta ^{(m)}_{\mathbf{w}}(R_\mathbf{x}- R^\alpha )\) as defined in (8). Since the deviation measure \(\varDelta ^{(m)}_{\mathbf{w}}\) is mean-complementary SSD consistent, applying Theorem 1 to the distribution of the difference \((R_\mathbf{x}-R^\alpha )\) with \(\tau =0\), one gets the following corollary.

Corollary 1

Let \(\mathbf{x}^0\) be an optimal solution to the optimization problem (11) that satisfies condition \(\mu (R_{\mathbf{x}^0} - R^\alpha ) - \varDelta ^{(m)}_{\mathbf{w}}(R_{\mathbf{x}^0} - R^\alpha ) \le 0\). Then, portfolio \(\mathbf{x}^0\) is SSD nondominated with the exception of alternative (and equivalent) optimal portfolios having the same values of mean \(\mu (R_{\mathbf{x}} -R^\alpha )\) and deviation measure \(\varDelta ^{(m)}_{\mathbf{w}}(R_{\mathbf{x}} - R^\alpha )\).

Under the assumption of security returns described by discrete random variables having, for each security \(j \in J\), realization \(r_{jt}\) under scenario t, with \(t = 1, \dots , T\), one obtains the following non-linear optimization model:

$$\begin{aligned} \min _{(\mathbf{x}, \mathbf{y }, \mathbf{d }, {\varvec{\eta }}, z, z_1)}&\displaystyle \frac{z- \mu ^\alpha - z_1 + \varepsilon _2}{z - \mu ^\alpha } \end{aligned}$$
(12)
$$\begin{aligned} \text{ s.t. }&z - \mu ^\alpha \ge \varepsilon _1 \end{aligned}$$
(13)
$$\begin{aligned}&\mathop \sum \limits _{j=1}^{n}x_j =1, \quad x_j \ge 0 \quad \text{ for } j=1,\ldots ,n\end{aligned}$$
(14)
$$\begin{aligned}&\mathop \sum \limits _{j=1}^{n}r_{jt} x_j = y_t \quad \text{ for } t=1,\ldots ,T \end{aligned}$$
(15)
$$\begin{aligned}&\mathop \sum \limits _{j=1}^{n}\mu _j x_j = z \end{aligned}$$
(16)
$$\begin{aligned}&\mathop \sum \limits _{k=1}^{m} w_k \eta _k - \mathop \sum \limits _{k=1}^{m} \frac{w_k}{\beta _k}\mathop \sum \limits _{t=1}^{T} p_t d_{tk} = z_1 \end{aligned}$$
(17)
$$\begin{aligned}&d_{tk} \ge \eta _k - y_{t} + r_t^\alpha , \quad d_{tk} \ge 0 \quad \text{ for }\ t=1,\ldots ,T;\ k=1,\ldots ,m. \end{aligned}$$
(18)

Constraint (13) imposes the positivity of the ratio denominator. Constraints (14) ensure that in any feasible portfolio the sum of the non-negative weights must be equal to one, whereas for each scenario t, with \(t=1, \dots , T\), constraint (15) defines the corresponding realization of the portfolio rate of return \(y_t\). Subsequently, constraint (16) defines z as the mean portfolio rate of return. Finally, constraint (17), along with (18), defines variable \(z_1\) that allows us to express \(\varDelta ^{(m)}_\mathbf{w}(R_{\mathbf{x}} - R^\alpha )\) as \(z- \mu ^\alpha - z_1\). Hence, objective function (12) minimizes the risk-reward ratio in (11).
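To illustrate the role of constraints (15)–(18), the following Python sketch evaluates, for a fixed candidate portfolio on entirely made-up data, the quantities \(y_t\), z, and the optimal \(z_1\); each \(\eta _k\) is found by scanning the tracking differences \(y_t - r_t^\alpha \) as candidates, since at the optimum \(\eta _k\) is the corresponding \(\beta _k\)-quantile:

```python
# Hypothetical instance: n = 2 securities, T = 4 equiprobable scenarios,
# benchmark realizations r_alpha, and a fixed candidate portfolio x
r = [[0.01, 0.03, -0.02, 0.02],    # realizations r_jt of security 1
     [0.02, -0.01, 0.01, 0.03]]    # realizations r_jt of security 2
r_alpha = [0.015, 0.01, -0.005, 0.02]
p = [0.25] * 4
x = [0.6, 0.4]

# Constraint (15): portfolio realizations y_t
y = [sum(x[j] * r[j][t] for j in range(2)) for t in range(4)]
# Constraint (16): mean portfolio return z; benchmark mean mu_alpha
z = sum(p[t] * y[t] for t in range(4))
mu_alpha = sum(p[t] * r_alpha[t] for t in range(4))

# Constraints (17)-(18): optimal z_1 for the grid (0.05, 0.25), whose
# weights according to (9) are (0.2, 0.8); each eta_k is scanned over
# the tracking differences y_t - r_alpha_t
betas, w = [0.05, 0.25], [0.2, 0.8]
diffs = [y[t] - r_alpha[t] for t in range(4)]
z1 = sum(
    wk * max(eta - sum(p[t] * max(eta - diffs[t], 0.0) for t in range(4)) / bk
             for eta in diffs)
    for wk, bk in zip(w, betas)
)
# The WCVaR deviation of the tracking difference, as in the text
delta = z - mu_alpha - z1
print(delta)
```

This only evaluates the model quantities for one fixed portfolio; the LP (12)–(18) additionally optimizes over x.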

The non-linear optimization model (12)–(18), with the quasi-linear objective function (12), can be linearized using the Charnes and Cooper (1962) transformation. Specifically, we apply the substitutions \(v_0 = 1 / (z - \mu ^\alpha )\), \(v_1 = z_1 / (z - \mu ^\alpha )\), \(v = z / (z - \mu ^\alpha )\), \(\displaystyle {\tilde{x}}_j={x_j}/({z- \mu ^\alpha })\), \(\displaystyle {\tilde{d}}_{tk} = {d_{tk}}/({z- \mu ^\alpha })\), \(\displaystyle {\tilde{\eta }}_k = {\eta }_k/({z - \mu ^\alpha })\), and \({\tilde{y}}_{t} = y_{t} / (z - \mu ^\alpha )\), divide all the constraints by \((z -\mu ^\alpha )\), and add the constraint required by the transformation. The resulting formulation is the following LP model:

$$\begin{aligned} \displaystyle \min _{({\tilde{\mathbf{x}} }, {\tilde{\mathbf{y}} }, {\tilde{\mathbf{d}} }, {\tilde{\mathbf{\eta }} }, v, v_0, v_1)}&\displaystyle v - v_1 + (\varepsilon _2 - \mu ^\alpha ) v_0 \\ \text{ s.t. } \ \&v - \mu ^\alpha v_0 = 1\\&v_0 \le 1 / \varepsilon _1\\&\mathop \sum \limits _{j=1}^{n}{\tilde{x}}_j = v_0, \quad {\tilde{x}}_j \ge 0 \quad \text{ for } j=1,\ldots ,n\\&\mathop \sum \limits _{j=1}^{n}r_{jt} {\tilde{x}}_j = {\tilde{y}}_t \quad \text{ for } t=1,\ldots ,T\\&\mathop \sum \limits _{j=1}^{n}\mu _j {\tilde{x}}_j = v,\ \ \mathop \sum \limits _{k=1}^{m} w_k {\tilde{\eta }}_k - \mathop \sum \limits _{k=1}^{m} \frac{w_k}{\beta _k}\mathop \sum \limits _{t=1}^{T} p_t {\tilde{d}}_{tk} = v_1 \\&{\tilde{d}}_{tk} \ge {\tilde{\eta }}_k - {\tilde{y}}_t + r_t^\alpha v_0, \ {\tilde{d}}_{tk} \ge 0 \quad t=1,\ldots ,T;\ k=1,\ldots ,m, \end{aligned}$$

where the first constraint is a transformed form of the substitution \(v_0 = 1 / (z - \mu ^\alpha )\). After eliminating variables \({\tilde{y}}_t\), v, \(v_0\), and \(v_1\), which are defined by equations, one obtains the following more compact LP formulation (EWCVaR model):

$$\begin{aligned} \begin{array}{rl} \displaystyle \min _{({\tilde{\mathbf{x}} }, {\tilde{\mathbf{d}} }, {\tilde{\mathbf{\eta }} })} &{} \displaystyle \mathop \sum \limits _{j=1}^{n}(\mu _j -\mu ^\alpha + \varepsilon _2) {\tilde{x}}_j - \mathop \sum \limits _{k=1}^{m} w_k {\tilde{\eta }}_k + \mathop \sum \limits _{k=1}^{m} \frac{w_k}{\beta _k}\mathop \sum \limits _{t=1}^{T} p_t {\tilde{d}}_{tk}\\ \text{ s.t. } &{} \displaystyle \mathop \sum \limits _{j=1}^{n}(\mu _j -\mu ^\alpha ){\tilde{x}}_j =1,\ \sum _{j=1}^n {\tilde{x}}_j \le \frac{1}{\varepsilon _1}, \ {\tilde{x}}_j \ge 0 \quad j=1,\ldots ,n\\ &{} \displaystyle {\tilde{d}}_{tk} \ge {\tilde{\eta }}_k - \mathop \sum \limits _{j=1}^{n}(r_{jt}-r_t^\alpha ) {\tilde{x}}_j, \ {\tilde{d}}_{tk} \ge 0 \quad t=1,\ldots ,T;\ k=1,\ldots ,m.\\ \end{array} \end{aligned}$$
(19)

After solving the transformed EWCVaR model (19), the original values of variables \(x_j\) can be determined dividing \({\tilde{x}}_j\) by \(\mathop \sum \limits _{j=1}^{n}{\tilde{x}}_j\).
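As a numerical sanity check on the Charnes–Cooper substitutions, one can verify that the transformed objective \(v - v_1 + (\varepsilon _2 - \mu ^\alpha ) v_0\) reproduces the original ratio (12) for arbitrary feasible values of the original variables. A minimal sketch with made-up numbers:

```python
# Made-up feasible values for the original variables of model (12)-(18)
z, z1, mu_alpha, eps2 = 0.012, 0.001, 0.010, 1e-5
denom = z - mu_alpha                      # positive, by constraint (13)

# Original quasi-linear objective (12)
original = (z - mu_alpha - z1 + eps2) / denom

# Charnes-Cooper substitutions
v0 = 1.0 / denom
v = z * v0
v1 = z1 * v0
transformed = v - v1 + (eps2 - mu_alpha) * v0

assert abs(original - transformed) < 1e-9
print(original)
```

The identity holds because \(v - \mu ^\alpha v_0 = 1\), so the constant 1 in the ratio decomposition is absorbed into the linear objective.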

3.2 Extended Omega ratio model

In its standard form, the Omega ratio is defined as the ratio between the expected value of the profits and the expected value of the losses where, for a predetermined threshold \(\tau \), portfolio returns over the target \(\tau \) are considered as profits, whereas returns below the threshold are considered as losses. Ogryczak and Ruszczyński (1999) prove that for any target value \(\tau \) the following chain of equalities holds:

$$\begin{aligned} {\mathbb {E}}\{ (R_{\mathbf{x}} - \tau )_+ \} = \mu (R_\mathbf{x}) - (\tau -{\mathbb {E}}\{( \tau - R_{\mathbf{x}} ) _+\}) = \mu (R_\mathbf{x}) - \tau + \delta _\tau (R_\mathbf{x}), \end{aligned}$$
(20)

where the last equality is related to the definition of the first-order lower partial moment (\(\hbox {LPM}_1\)) expressed in (1). Thus, we can formulate the (standard) Omega ratio as follows:

$$\begin{aligned} \varOmega (\tau ,R_\mathbf{x}) = \frac{{\mathbb {E}}\{ ( R_{\mathbf{x}} - \tau )_+ \}}{ {\mathbb {E}}\{ ( \tau - R_{\mathbf{x}} )_+ \} } = \frac{\mu (R_\mathbf{x}) - \tau + \delta _\tau (R_\mathbf{x}) }{{\delta }_\tau (R_\mathbf{x}) } =1+ \frac{\mu (R_\mathbf{x}) - \tau }{{\delta }_\tau (R_\mathbf{x})}. \end{aligned}$$

Hence, the maximization of the above Omega ratio, with the additional restriction requiring \(\mu (R_\mathbf{x}) - \tau \ge \varepsilon _1\), is equivalent to the minimization of the \(\hbox {LPM}_1\) based ratio \(\frac{{\delta }_\tau (R_\mathbf{x})}{\mu (R_\mathbf{x}) - \tau }\). Restriction \(\mu (R_\mathbf{x}) - \tau \ge \varepsilon _1\) along with (20) imply that \({\mathbb {E}}\{ ( R_{\mathbf{x}} - \tau )_+ \} > {\mathbb {E}}\{ ( \tau - R_{\mathbf{x}} ) _+ \}\), thus limiting the Omega ratio to take only values greater than 1.
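The chain of equalities (20), and hence the rewriting of the Omega ratio above, can be checked numerically on sample data. A small Python sketch with hypothetical returns:

```python
# Hypothetical equiprobable portfolio returns and a target tau
returns = [0.05, 0.01, -0.02, 0.03, -0.01]
p = 1.0 / len(returns)
tau = 0.0

gains = p * sum(max(r - tau, 0.0) for r in returns)    # E[(R - tau)+]
losses = p * sum(max(tau - r, 0.0) for r in returns)   # E[(tau - R)+] = LPM_1
mu = p * sum(returns)

omega = gains / losses                        # standard Omega ratio
omega_rewritten = 1.0 + (mu - tau) / losses   # 1 + (mu - tau)/delta_tau

assert abs(omega - omega_rewritten) < 1e-9
print(omega)
```

For these data the mean exceeds the target, so the Omega ratio is greater than 1, consistent with the restriction \(\mu (R_\mathbf{x}) - \tau \ge \varepsilon _1\) discussed above.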

In the domain of enhanced indexation, the ratio optimization is formulated with respect to the random target \(R^\alpha \), instead of a deterministic value \(\tau \). Replacing in the numerator of model (5) \(\varrho (R_\mathbf{x}- R^\alpha )\) with the measure \(\delta _0(R_\mathbf{x}- R^\alpha )\), one obtains the following problem:

$$\begin{aligned} \min \left\{ \frac{\delta _0(R_\mathbf{x}- R^\alpha ) + \varepsilon _2}{\mu (R_\mathbf{x}- R^\alpha )}\ : \ \mu (R_\mathbf{x}-R^\alpha ) \ge \varepsilon _1,\ \mathbf{x}\in \mathcal{P}\right\} . \end{aligned}$$
(21)

As the \(\hbox {LPM}_1\) minimization is consistent with the SSD, and thereby the Omega ratio optimization is SSD consistent (Balder and Schweizer 2017), the following corollary is valid.

Corollary 2

Let \(\mathbf{x}^0\) be an optimal solution to the risk-reward optimization problem (21). Then, portfolio \(\mathbf{x}^0\) is nondominated in terms of the bi-criteria optimization \(\max \{ \mu (R_\mathbf{x}- R^\alpha ),- \delta _0(R_{\mathbf{x}} - R^\alpha ) \}\), and is SSD nondominated with the exception of alternative (and equivalent) optimal portfolios having the same values of mean return \(\mu (R_{\mathbf{x}^0} -R^\alpha )\) and \(\hbox {LPM}_1\) value \(\delta _0(R_{\mathbf{x}^0}- R^\alpha )\).

For security returns described by discrete random variables having, for each security \(j \in J\), realization \(r_{jt}\) under scenario t, with \(t = 1, \dots , T\), one obtains the following non-linear optimization model:

$$\begin{aligned} \displaystyle \min _{(\mathbf{x}, \mathbf{y }, \mathbf{d }, z, z_1)}&\displaystyle \frac{z_1 + \varepsilon _2}{z - \mu ^\alpha } \end{aligned}$$
(22)
$$\begin{aligned} \text{ s.t. }&\ (13){-}(15)\nonumber \\&\mathop \sum \limits _{t=1}^{T} p_{t} d_{t} = z_1 \end{aligned}$$
(23)
$$\begin{aligned}&d_{t} \ge r_t^\alpha - y_{t}, \quad d_{t} \ge 0 \quad \text{ for }\ t=1,\ldots ,T. \end{aligned}$$
(24)

Objective function (22) minimizes the risk-reward ratio in (21). In each scenario t, with \(t = 1, \dots , T\), constraint (24), along with (23) and objective function (22), forces the non-negative variable \(d_t\) to take value equal to \(\max \{r_t^\alpha - y_{t}, 0\}\). As a consequence, constraint (23) defines variable \(z_1\) as the first lower partial moment \(\delta _0(R_\mathbf{x}- R^\alpha )\).

Compared to the optimization model proposed in Guastaroba et al. (2016), the main difference is that the objective function in model (22)–(24) is expressed in terms of a risk-reward minimization, rather than a reward-risk maximization. Although this modification is conceptually of minor importance, it avoids the introduction of additional constraints and auxiliary binary variables to deal with those critical situations where the \(\hbox {LPM}_{{1}}\) may take a null value at the denominator (see Guastaroba et al. 2016).

The non-linear optimization model (22)–(24) can also be linearized by applying the following substitutions: \(v_0 = 1 / (z - \mu ^\alpha )\), \(v_1 = z_1 / (z - \mu ^\alpha )\), \(v = z / (z - \mu ^\alpha )\), \({\tilde{x}}_{j} = x_{j} / (z - \mu ^\alpha )\), \({\tilde{d}}_{t} = d_{t} / (z - \mu ^\alpha )\) and \({\tilde{y}}_{t} = y_{t} / (z - \mu ^\alpha )\), dividing all the constraints by \((z -\mu ^\alpha )\), and adding the constraint required by the Charnes–Cooper transformation, leading to the following LP formulation:

$$\begin{aligned} \begin{array}{rl} \displaystyle \min _{({\tilde{\mathbf{x}} }, {\tilde{\mathbf{y}} }, {\tilde{\mathbf{d}} }, v, v_0, v_1)} &{} \displaystyle v_1 + \varepsilon _2 v_0 \\ \text{ s.t. } &{} \displaystyle v - \mu ^\alpha v_0 = 1, \quad \displaystyle v_0 \le 1 / \varepsilon _1\\ &{} \displaystyle \mathop \sum \limits _{j=1}^{n}{\tilde{x}}_j = v_0, \quad {\tilde{x}}_j \ge 0 \quad \text{ for } j=1,\ldots ,n\\ &{} \displaystyle \mathop \sum \limits _{j=1}^{n}\mu _j {\tilde{x}}_j = v,\ \mathop \sum \limits _{t=1}^{T} p_{t} {\tilde{d}}_{t} = v_1 \\ &{} \displaystyle \mathop \sum \limits _{j=1}^{n}r_{jt} {\tilde{x}}_j = {\tilde{y}}_t \quad \text{ for } t=1,\ldots ,T\\ &{} \displaystyle {\tilde{d}}_{t} \ge r_t^\alpha v_0 - {\tilde{y}}_{t}, \quad {\tilde{d}}_{t} \ge 0 \quad \text{ for }\ t=1,\ldots ,T, \end{array} \end{aligned}$$
(25)

where the first constraint is a transformed form of the substitution \(v_0 = 1 / (z - \mu ^\alpha )\) whose introduction is required by the Charnes-Cooper transformation. A more compact formulation can be obtained eliminating variables \({\tilde{y}}_t\), v, \(v_0\), and \(v_1\), which are defined by equations, leading to the following LP formulation (EOR model):

$$\begin{aligned} \begin{array}{rl} \displaystyle \min _{({\tilde{\mathbf{x}} }, {\tilde{\mathbf{d}} })} &{} \displaystyle \mathop \sum \limits _{t=1}^{T} p_t {\tilde{d}}_{t} + \varepsilon _2 \mathop \sum \limits _{j=1}^{n}{\tilde{x}}_j \\ \text{ s.t. } &{} \displaystyle \mathop \sum \limits _{j=1}^{n}(\mu _j - \mu ^\alpha ){\tilde{x}}_j =1, \ \sum _{j=1}^n {\tilde{x}}_j \le \frac{1}{\varepsilon _1}, \ {\tilde{x}}_j \ge 0 \quad j=1,\ldots ,n\\ &{} \displaystyle {\tilde{d}}_{t} \ge \mathop \sum \limits _{j=1}^{n}(r_t^\alpha - r_{jt}) {\tilde{x}}_j, \ {\tilde{d}}_{t} \ge 0 \quad t=1,\ldots ,T. \end{array} \end{aligned}$$
(26)

As for the EWCVaR model, after solving the transformed EOR model (26), the original values of \(x_j\) can be determined dividing \({\tilde{x}}_j\) by \(\mathop \sum \limits _{j=1}^{n}{\tilde{x}}_j\).

The above optimization models can be applied in any financial setting where the values of the return rates of the securities under different scenarios are available or can be generated. In Sect. 4.1.1 of the following experimental analysis, some guidelines are provided on how to choose the values of the model parameters.

Finally, in practical investment situations, an investor might desire that the portfolio composition complies with some trading requirements. One of the most fundamental is that, in order to control transaction and management costs, investors often prefer to hold a portfolio comprising a limited number of assets. Furthermore, investors desire well-diversified portfolios, and want to avoid portfolios where very small weights are invested in some assets or very large weights are invested in one or few assets. These requirements can be incorporated into both the EWCVaR and the EOR models by introducing a cardinality constraint limiting the maximum number of assets in the optimal portfolio, along with lower and upper bounds on the asset weights (see Guastaroba et al. 2016 for a description of how to introduce such features in the EOR model). Nevertheless, their incorporation requires introducing binary variables into the mathematical formulations, thus transforming the LP models into mixed-integer LP problems, whose solution can be computationally challenging when the investment universe is very large.

4 Experimental analysis

This section is dedicated to the presentation and discussion of the computational experiments. They were conducted on a PC with an Intel XEON 3.33 GHz 64-bit processor, 12 GB of RAM, and Windows 7 64-bit as the operating system. Optimization models were implemented in Java, compiled within NetBeans 8.0.2, and solved by means of CPLEX 12.6. After preliminary experiments, we decided to use the default values for all CPLEX parameters.

The discussion of the results is organized as follows. In Sect. 4.1, we consider a static investor who applies a (single-period) buy-and-hold investment strategy. In contrast, in Sect. 4.2 we consider an investor who desires to rebalance the portfolio composition. To this aim, we use a rolling time window approach, that is, we shift the in-sample observations (and consequently the out-of-sample ones) over the entire time frame covered by each data set.

4.1 Single-period evaluation

In this section, we consider a static investor who is not interested in modifying the portfolio composition when new information on the market trend becomes available, but quietly waits for the end of the chosen investment horizon. This investment strategy implies only one optimization, which corresponds to solving one of the proposed optimization models using a set of in-sample data as scenarios, and then evaluating the optimal portfolio over the following out-of-sample period. In Sect. 4.1.1, we briefly describe the instances and the optimization models we solved in the computational experiments, whereas in Sect. 4.1.2 we report on the in-sample characteristics of the optimal portfolios and provide an extensive validation of their out-of-sample performance.

4.1.1 Data sets and tested optimization models

In the computational experiments, we used the two data sets tested in Guastaroba et al. (2016), with some small differences described below. To make the paper self-contained, we provide here a brief description of these instances and refer to the above paper for further details.

The first data set was introduced in Guastaroba et al. (2009) and, from the name of the authors, is referred to as data set GMS. No change has been made to this data set. The set consists of 4 instances created from historical rates of return of the 100 securities composing the FTSE 100 Index. These instances were intentionally selected to span four different market trends. In particular, the first instance, hereafter called GMS-UU, considers an increasing trend of the benchmark (i.e., the market index is moving Up) in both the in-sample and the out-of-sample period. The second instance, from now on referred to as GMS-UD, has an increasing trend of the benchmark in the in-sample period and a decreasing one (i.e., it is moving Down) in the out-of-sample period. The third instance, henceforth called GMS-DU, is characterized by a decreasing trend in the in-sample period and by an increasing one in the out-of-sample period. Finally, the last instance, referred to as GMS-DD in the following, is characterized by a decreasing trend in both the in-sample and the out-of-sample periods. The temporal positioning of each instance in data set GMS is illustrated in Fig. 1.

Fig. 1 The four different market periods in data set GMS

Guastaroba et al. (2016) used a second data set, which was generated from the 8 benchmark instances for the index tracking problem currently belonging to the OR-Library (available at http://people.brunel.ac.uk/~mastjjb/jeb/orlib/indtrackinfo.html). These instances consider the securities included in eight different stock market indices: the Hang Seng market index (related to the Hong Kong stock exchange), the DAX 100 (Germany), the FTSE 100 (United Kingdom), the S&P 100 (USA), the Nikkei 225 (Japan), the S&P 500 (USA), the Russell 2000 (USA) and the Russell 3000 (USA). The number of securities included in these instances ranges from 31, composing the Hang Seng index, to 2151, composing the Russell 3000 index. We found that in the two largest instances some securities achieved extremely large weekly returns (even larger than 1000%) in one or very few observations. Since rates of return of this magnitude have a strong impact on the average return of a security, even if realized in very few observations, we decided to remove these securities from the instances. In the following, this modified data set is called ORL, and each instance is referred to as ORL-IT\(\kappa \), \(\kappa =1, \dots , 8\). In the end, we removed two securities from both the ORL-IT7 and the ORL-IT8 instances. Regarding the time frames adopted, for instances ORL-IT1–ORL-IT5, originally introduced in Beasley et al. (2003), the in-sample period spans from March 1992 to February 1994, whereas the out-of-sample period spans from March 1994 to February 1995. Instances ORL-IT6–ORL-IT8 were originally introduced in Canakgoz and Beasley (2009), and the authors did not provide any detail regarding the time frame the data refer to.

Each of the above instances comprises 2 years of in-sample weekly observations (i.e., 104 scenarios) and 1 year of out-of-sample ones (i.e., 52 realizations). For each instance, the optimal portfolio composition is first decided by solving one of the optimization models described in the following and using the in-sample 104 scenarios. Then, the performance of the portfolios is evaluated by observing their behaviors over the 52 weeks following the date of portfolio selection.

Although in our experiments we used historical data as scenarios, the proposed optimization models can also be employed with a set of scenarios generated by more elaborate techniques. We refer the interested reader to Guastaroba et al. (2009), where different scenario generation techniques are compared when embedded into portfolio selection models similar in structure to those proposed here. The reader should also be aware that using historical data might affect the performance of any strategy. As a consequence, the results, especially those concerning the out-of-sample validation of the proposed optimization models, must be interpreted with care, since they might contain noise that is difficult to separate from the intrinsic performance of the proposed methodologies.

The optimization models that we considered in our computational experiments are the following. To provide some insights on the effectiveness of the EWCVaR model (19), we solved it using four different sets of values for the tolerance levels \(\{ \beta _k\}_{k=1, \dots , m}\). More specifically, the first model considers two tolerance levels (i.e., \(m=2\)) equal to \(\beta _1 = 0.05\) and \(\beta _2 = 0.25\), respectively. This model is henceforth referred to as EWCVaR(.05, .25). The second model, from now on denoted as EWCVaR(.05, .25, .50), is based on the choice of three tolerance levels (i.e., \(m=3\)). We set these three values equal to \(\beta _1 = 0.05\), \(\beta _2 = 0.25\), and \(\beta _3 = 0.50\), respectively. The remaining two models consider only one tolerance level (i.e., \(m = 1\)). These models are hereafter called ECVaR(.05) and ECVaR(.50), since they correspond to setting the tolerance level \(\beta _1\) equal to 0.05 and 0.50, respectively. For each of the above models, weights \(w_k\) were computed according to (9). Finally, as a basis for comparison with the literature, we also solved the EOR model (26) on the aforementioned instances. The value used for parameters \(\varepsilon _1\) and \(\varepsilon _2\) is .00001 for all the tested optimization models.
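The tolerance levels above enter the models through CVaR terms. For equally probable scenarios, the CVaR at tolerance level \(\beta \) of a discrete loss distribution is the average of the worst \(\beta \)-fraction of outcomes (with the boundary scenario weighted fractionally), and a weighted CVaR combines several such levels. A minimal sketch, for illustration only: the weights \(w_k\) in the paper follow formula (9), whereas here they are free inputs.

```python
import numpy as np

def cvar(losses, beta):
    """CVaR at tolerance level beta for T equally probable loss scenarios:
    the average of the worst beta*T outcomes, the boundary scenario being
    counted fractionally when beta*T is not an integer."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # worst losses first
    k = beta * len(losses)          # size of the tail, possibly fractional
    full = int(k)                   # number of fully counted scenarios
    tail = losses[:full].sum()
    if k > full:                    # add the fractional piece of the next scenario
        tail += (k - full) * losses[full]
    return tail / k

def wcvar(losses, betas, weights):
    """Weighted CVaR: a weighted combination of CVaR measures at several
    tolerance levels, as in the EWCVaR models above."""
    return sum(w * cvar(losses, b) for b, w in zip(betas, weights))
```

For instance, with four scenarios, `cvar([4, 3, 2, 1], 0.5)` averages the two worst losses.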

As mentioned above, the EWCVaR model (19) is valid only if the value of the ratio (11) is not smaller than 1. To guarantee that this condition is satisfied, we devised the following pre-processing procedure to choose the value of \(\alpha \) used in the experiments. For each instance, we solved each of the aforementioned EWCVaR models separately, starting with an initial value of \(\alpha \) equal to 0. Then, we solved that optimization model iteratively, increasing the value of \(\alpha \) by 1% (on a yearly basis), as long as ratio (11) took a value smaller than 1. Finally, the maximum among the final values taken by \(\alpha \) over all the models is chosen, so as to guarantee that the above condition is satisfied for any of the tested EWCVaR models. Computing times to solve the EWCVaR models, and hence to carry out this pre-processing procedure, are negligible, as described in the following section.
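The pre-processing loop lends itself to a compact sketch. Here `solve_and_ratio` is a hypothetical caller-supplied callback, not part of the paper, that solves one EWCVaR model for a given \(\alpha \) and returns the resulting value of ratio (11):

```python
def calibrate_alpha(solve_and_ratio, step=0.01, alpha0=0.0, alpha_max=1.0):
    """Increase alpha in 1% (yearly) steps, re-solving the model each time,
    as long as ratio (11) stays below 1; return the first alpha for which
    the ratio is at least 1 (or alpha_max as a safeguard)."""
    alpha = alpha0
    while solve_and_ratio(alpha) < 1.0 and alpha < alpha_max:
        alpha += step
    return alpha
```

The value used in the experiments is then the maximum of `calibrate_alpha(...)` over the four EWCVaR models, so that the condition holds for all of them.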

Table 1 summarizes the main characteristics of all the tested instances, including the average in-sample return of the benchmark (column with header \(\mu ^I \%\)) and the value of \(\alpha \) used in the experiments (column \(\alpha \%\)). For the sake of readability, we expressed the latter two values in percentage and on a yearly basis, even though they are expressed on a weekly basis in the instances.

Table 1 The main characteristics of the tested data sets

All instances in the two data sets are publicly available on the website of the Operational Research Group at the University of Brescia (http://or-brescia.unibs.it), in section “Benchmark Instances”.

Table 2 Optimal portfolios: in-sample and out-of-sample statistics for the GMS data set
Table 3 Optimal portfolios: in-sample and out-of-sample statistics for the ORL data set

4.1.2 Comparing the performance of the optimal portfolios

In Tables 2 and 3, we provide some in-sample and out-of-sample statistics summarizing the computational results obtained by solving all the tested models with the GMS and ORL data sets, respectively. Both tables have the same structure, and the meaning of each column header is as follows. Regarding the in-sample period, we report the following statistics:

\(\checkmark \):

DI: the diversification index computed as the complement of the Herfindahl index, i.e., DI \(= 1 - \sum _{j=1}^{n}{x_j^2}\) (see Woerheide and Persson 1993);

\(\checkmark \):

Div.: the number of securities selected in the optimal portfolio;

\(\checkmark \):

Min %: the minimum portfolio share (in percentage);

\(\checkmark \):

Max %: the maximum portfolio share (in percentage).

As out-of-sample statistics, we report the following:

\(\checkmark \):

\(y_t > r^I_t\) %: the percentage of weeks (out of the 52 out-of-sample observations) in which the portfolio rate of return outperformed that of the benchmark;

\(\checkmark \):

\(r_{av}\) %: the average portfolio return on a yearly basis (in percentage);

\(\checkmark \):

Excess Ret. %: the out-of-sample average excess return of the portfolio over the benchmark, on yearly basis and in percentage. It is computed as [\(r_{av}\)] - [average benchmark return];

\(\checkmark \):

s-std: the downside semi-standard deviation of the portfolio return compared to the benchmark return, computed as \(\sqrt{\frac{1}{52}\sum _{t=1}^{52} (y_t - r^I_t)_-^2}\);

\(\checkmark \):

Sortino index: the average excess return divided by the semi-standard deviation s-std.
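The statistics above are straightforward to compute from the optimal weights and the realized returns. A minimal sketch, under our own annualization assumption (simple scaling by 52 weeks; the paper's exact convention may differ):

```python
import numpy as np

def portfolio_stats(x, y, r_bench, eps=1e-9):
    """In-sample diversification and out-of-sample tracking statistics.
    x: optimal portfolio weights; y: weekly out-of-sample portfolio returns;
    r_bench: the matching weekly benchmark returns."""
    x, y, r = np.asarray(x), np.asarray(y), np.asarray(r_bench)
    held = x > eps                                   # securities actually selected
    excess = y - r
    s_std = np.sqrt(np.mean(np.minimum(excess, 0.0) ** 2))  # downside semi-std
    return {
        "DI": 1.0 - np.sum(x ** 2),                  # complement of the Herfindahl index
        "Div.": int(held.sum()),
        "Min %": 100 * x[held].min(),
        "Max %": 100 * x.max(),
        "y>r %": 100 * np.mean(y > r),
        "Excess Ret. %": 100 * 52 * excess.mean(),   # yearly, simple scaling assumed
        "s-std": s_std,
        "Sortino": excess.mean() / s_std if s_std > 0 else float("inf"),
    }
```

The dictionary keys mirror the column headers used in Tables 2 and 3.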

The above statistics provide a synthetic and clear assessment of both the main in-sample characteristics and the out-of-sample performance of the optimal portfolios. In both tables, for each instance we highlighted in bold the model(s) that achieved the best value of the Sortino index (the larger, the better). As already mentioned, the computing times required to solve the tested models to optimality are always negligible (in the order of fractions of a second), and thus they are not reported here. This is one of the strengths of the proposed optimization models. The reader should be aware that solving other optimization models over very large investment universes can lead to intractable problems. In these cases, one possible strategy is to preselect the set of securities available for investment, for example based on a risk-adjusted performance ratio. One can note that for both the EOR model (26) and the EWCVaR model (19) the numbers of variables and constraints increase with the number of scenarios. Hence, finding an optimal solution for these models may become computationally challenging when the number of scenarios employed is very large. In “Appendix B”, we show that, taking advantage of LP duality, one can obtain computationally more efficient formulations for use with a large number of scenarios. Finally, to evaluate and easily compare the out-of-sample performance of the optimal portfolios over time, we plot in Figs. 2, 3, 4, 5, 6 and 7 the ex-post cumulative returns yielded by all the selected portfolios and by the respective benchmark in each of the 12 tested instances.

Table 2 summarizes the results for the GMS data set. We recall that DI takes the value zero for a portfolio with absolutely no diversification (a one-security portfolio), whereas 1.0 represents the ultimate in diversification. Looking at the in-sample results, it is evident that all the portfolios have a similar diversification, as captured by the index DI, which ranges from .94 to .95. Quoting Woerheide and Persson (1993), “Portfolios with index values greater than .91 probably are adequately diversified”, so we can conclude that all the portfolios achieve satisfactory results in terms of diversification. Similar conclusions can be drawn by analyzing the portfolio cardinality (column Div.), as well as the minimum and maximum portfolio shares (columns Min % and Max %, respectively). The latter statistic takes very similar values in all the instances, with the only exception of instance GMS-UU, where a slightly larger deviation can be identified. Note that the maximum portfolio share never exceeds 14.5%, indicating that in all the optimal portfolios the available budget has been, in a broad sense, well-diversified among the securities. We stress that, as pointed out at the end of Sect. 3, if the investor desires to limit the portfolio cardinality, this can be incorporated into the proposed optimization models by introducing binary variables along with a cardinality constraint. As far as the out-of-sample performance is concerned, after analyzing the figures reported in Table 2 and the cumulative returns depicted in Figs. 2 and 3, one can conclude that all the optimal portfolios perform similarly and well: they closely mimic the benchmark, often outperform it (even if not for the entire out-of-sample period), and show limited performance deviations from one another. Some differences are evident for instance GMS-UD. In this case, all the optimized portfolios clearly outperform the benchmark, although to different degrees. In more detail, Fig. 2b shows that the optimal portfolios selected by models EOR and EWCVaR(.05, .25, .50) achieve the highest cumulative returns in the first part of the ex-post period, whereas they are clearly outperformed by the portfolio selected by model ECVaR(.05) in the last part of the ex-post period. Analyzing the figures reported in Table 2 in more depth, one can also notice that for the GMS-UU instance the portfolio selected with the EOR model is the one that yielded the best ex-post cumulative return, and the only one that achieved a (slightly) positive average excess return (see column Excess Ret. %).

Fig. 2

Out-of-sample cumulative returns: a comparison among the optimization models and the benchmark on the GMS-UU and GMS-UD instances

Fig. 3

Out-of-sample cumulative returns: a comparison among the optimization models and the benchmark on the GMS-DU and GMS-DD instances

Fig. 4

Out-of-sample cumulative returns: a comparison among the optimization models and the benchmark on the ORL-IT1 and ORL-IT2 instances

Fig. 5

Out-of-sample cumulative returns: a comparison among the optimization models and the benchmark on the ORL-IT3 and ORL-IT4 instances

Fig. 6

Out-of-sample cumulative returns: a comparison among the optimization models and the benchmark on the ORL-IT5 and ORL-IT6 instances

Fig. 7

Out-of-sample cumulative returns: a comparison among the optimization models and the benchmark on the ORL-IT7 and ORL-IT8 instances

We now turn our attention to the results for the ORL data set, which are summarized in Table 3 and illustrated in Figs. 4, 5, 6 and 7. As far as the four smallest instances of this data set are concerned, Figs. 4a and 5a show that all the optimal portfolios replicate quite closely the ex-post behavior of their benchmark. In more detail, regarding instances ORL-IT1 through ORL-IT3, the portfolios that perform best are those obtained by solving model ECVaR(.50), achieving an average excess return that ranges from 1.99% to 3.62%, with values of statistic s-std slightly smaller than those of the other optimal portfolios. Conversely, the portfolio selected by the ECVaR(.50) model is the one that performs worst in instance ORL-IT4 (compare the values of the Sortino index). For this instance, the only portfolios that achieved a positive excess return are those determined by solving the EOR and, in particular, the EWCVaR(.05, .25, .50) models. Regarding instance ORL-IT5, the ex-post cumulative returns yielded by the benchmark are always better than those achieved by the optimized portfolios, although the differences are not very large (the average excess return of the portfolios ranges from −4.04% to −2.74%). In this instance, the portfolio selected using the EWCVaR(.05, .25) and the ECVaR(.05) models is the one that loses the least compared to the benchmark. In the three largest-scale instances of the ORL data set, all the optimized portfolios considerably outperform the benchmark (see Figs. 6b, 7a and 7b). Regarding instance ORL-IT6, the portfolio selected by model EOR provides the best out-of-sample results, achieving an average excess return equal to 5.44%, beating the return yielded by the benchmark 57.69% of the time (out of the 52 ex-post observations), and with the smallest downside risk (statistic s-std takes a value approximately equal to 0.0044).
Nevertheless, the performance of the portfolio obtained by the ECVaR(.05) and the EWCVaR(.05, .25) models is only slightly worse, achieving an average excess return roughly equal to 4.65%, beating the benchmark return 55.77% of the time, and with a downside risk around 0.0049. As far as instance ORL-IT7 is concerned, the portfolio constructed by solving the EOR model achieves the best ex-post performance: it yields an average excess return of approximately 16.41% and a value of the Sortino index equal to 0.3993. It is worth noting that, with the exception of the ECVaR(.50) model, all the CVaR-based models select the same optimal portfolio. Finally, regarding instance ORL-IT8, both the ECVaR(.05) and the EWCVaR(.05, .25) models find the same optimal portfolio. More importantly, the latter portfolio is the one performing best ex-post, yielding an average excess return roughly equal to 28.98% (which is considerably larger than that achieved by the other portfolios), while being only slightly riskier (compare the figures reported in column s-std). Note that the portfolio with the worst value of the Sortino index for instance ORL-IT8 is the one selected by the EOR model. It is worth highlighting that in both the ORL-IT7 and ORL-IT8 instances, all the optimal portfolios largely outperform the benchmark over the entire out-of-sample period, yielding much larger cumulative returns than those achieved by the market index. A similar finding also holds in instance ORL-IT6 for most of the optimal portfolios, with the exception of model ECVaR(.50). Actually, the latter clearly outperforms the benchmark over most of the out-of-sample period, but achieves similar cumulative returns towards the end of the period.
Interestingly enough, although the models treat more or less extreme events differently, for several instances the ECVaR(.05) and EWCVaR(.05, .25) models find the same optimal portfolios, perhaps indicating that in these cases a larger number of in-sample observations, or additional tolerance levels and weight settings, might help.

Summarizing the previous discussion, the experiments that we conducted indicate that no optimization model shows a clear dominance over the others. Indeed, considering all the instances we tested, it is possible to identify neither a “winning model” nor a “losing model”. It is, however, possible to draw the following general guidelines. Firstly, optimization models can be a valuable tool to support investment decisions. Indeed, it is worth noting that in 10 out of the 12 instances, at least one optimal portfolio outperforms ex-post the respective benchmark in terms of average return yielded. More interestingly, in 8 out of the 12 instances all the optimal portfolios outperform their benchmark. Secondly, analyzing the results reported in Table 2 in more detail, one can notice that the instances where most of (or even all) the optimal portfolios yielded a negative excess return are those in which the market trend is increasing out-of-sample (i.e., instances GMS-UU and GMS-DU). Considering that in these two instances the optimal portfolios yielded an average return at least equal to 42.84% and 31.55%, respectively, we believe that in such situations outperforming the benchmark is, from a practical perspective, less relevant to an investor. On the other hand, outperforming the benchmark becomes crucial when the market trend is decreasing ex-post, as for instances GMS-UD and GMS-DD. Note that, in these two cases, all the optimal portfolios yielded a positive excess return, at least equal to 6.15% and to 1.90%, respectively. Regarding the excess returns yielded ex-post by the optimized portfolios, it is worth highlighting that the values reported in Tables 2 and 3 are sometimes considerably different from the in-sample values shown in Table 1 (see column \({\alpha } \%\)). This outcome is likely caused by a remarkable change in the market trend from the in-sample to the out-of-sample period.
To provide some further insights into the performance of the optimal portfolios, Table 4 summarizes the ranking of the five models according to the Sortino index values. More precisely, this table reports the number of times (out of the 12 instances) that the portfolio selected by each optimization model was ranked from the first to the fifth position, based on the Sortino index. Column Top/Bot. shows the ratio between the number of times the portfolio selected by each optimization model was ranked in one of the first two positions (i.e., either first or second) and the number of times it was ranked in one of the last two positions (i.e., either fourth or last). Finally, column Aver. reports the average position achieved by the portfolios selected by each optimization model. Although the ECVaR(.50) model achieved the first position the highest number of times (4), it slips to the worst performance when one considers the cumulative sum of the first two positions (the total number remains equal to 4, which is the same result attained by model EWCVaR(.05, .25, .50)). It is worth noting that the ECVaR(.50) model is also the one ranked last the highest number of times. If one considers the sum of first and second positions, models EOR and EWCVaR(.05, .25) are the best ones, with a cumulative sum equal to 7. Analyzing the figures in more detail, one can notice that the EWCVaR(.05, .25) model has never been in the last position, whereas the EOR model attained the last position twice, hence making the former preferable to the latter. Although the EWCVaR(.05, .25) and the ECVaR(.05) models often select the same optimal portfolios, the rankings indicate that in the remaining instances model EWCVaR(.05, .25) performs better than model ECVaR(.05). Thus, the EWCVaR(.05, .25) model tends to produce reasonably good results more consistently than models ECVaR(.50) and ECVaR(.05).
Finally, model EWCVaR(.05, .25, .50) is the most conservative, attaining the first and the last positions only once each, and hence ranking in the middle positions for most of the instances. The considerations above are confirmed by the values of the Top/Bot. ratios and the average positions. According to these two statistics, the best performing model is EWCVaR(.05, .25), which achieved the same Top/Bot. ratio as model EOR but a slightly better average position. On the other hand, the worst performance is yielded by model ECVaR(.50), which performed considerably worse than any of the other optimization models.

Table 4 Optimization models: a summary of their out-of-sample rankings
Table 5 Optimal portfolios: in-sample and out-of-sample statistics for strategy RS-S

4.2 Rolling time window evaluation

In the previous section, we observed the performance of the optimal portfolios in the 52 weeks following the date of portfolio selection. Nevertheless, in real-life investment situations such a holding period is unrealistically long, as investors tend to rebalance their portfolios much more frequently in response to market changes. To gain some insights into this issue, we now consider more dynamic investors who decide to rebalance, at regular intervals, the portfolio composition during the 52 weeks following the date of the initial portfolio selection. In the financial literature, this type of investment strategy is often called “calendar-based rebalancing” (e.g., see Eakins and Stansell 2007). We consider three different levels of dynamism, corresponding to three investors who decide to rebalance the portfolio composition every 4 weeks (monthly, hereafter denoted as RS-M), every 12 weeks (quarterly, from now on indicated as RS-Q), and every 24 weeks (semiannually, denoted as RS-S in the following). From an optimization viewpoint, this simply requires solving each of the proposed optimization models sequentially, using a rolling time window of 104 in-sample observations as scenarios, and then evaluating each optimal portfolio over the out-of-sample period that elapses from one optimization to the next.
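From an implementation standpoint, the rolling evaluation is a simple loop. A sketch, where `optimize` stands for any of the models above (a hypothetical callback mapping a 104-scenario window to portfolio weights):

```python
import numpy as np

def rolling_rebalance(returns, optimize, window=104, hold=24):
    """Calendar-based rebalancing: re-optimize on a rolling window of
    `window` weekly scenarios and hold each portfolio for `hold` weeks
    (hold = 24, 12, 4 for RS-S, RS-Q, RS-M, respectively)."""
    T, _ = returns.shape
    weights, realized = [], []
    for start in range(window, T, hold):
        x = optimize(returns[start - window:start])  # in-sample scenario window
        weights.append(x)
        out = returns[start:start + hold]            # out-of-sample slice until next solve
        realized.extend(out @ x)                     # realized weekly portfolio returns
    return np.array(weights), np.array(realized)
```

With 104 in-sample and 52 out-of-sample weeks, `hold=24` triggers the initial selection plus 2 rebalancings, as in strategy RS-S.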

Table 6 Optimal portfolios: in-sample and out-of-sample statistics for strategy RS-Q
Table 7 Optimal portfolios: in-sample and out-of-sample statistics for strategy RS-M

Tables 5, 6 and 7 summarize the in-sample and out-of-sample results for the three strategies RS-S, RS-Q, and RS-M, respectively. In these tables, the in-sample statistics introduced in Sect. 4.1.2 are computed as averages over all the portfolios determined by applying each investment strategy. In addition, to provide some information on how much the portfolio composition changes by implementing each rebalancing strategy, we compute the average turnover index as follows:

\(\checkmark \):

TI \( = \frac{1}{L} \sum _{l=0}^{L - 1} \sum _{j=1}^{n} | x_{j, l+1} - x_{j, l}|\),

where \(l=0\) corresponds to the initial portfolio selection, L denotes the number of times the portfolio is rebalanced in each strategy (L is equal to 2, 4, and 12 for RS-S, RS-Q, and RS-M, respectively), and \(x_{j, l}\) is the weight of security j in the initial portfolio (if \(l=0\)) or after the \(l\)-th rebalancing (otherwise).
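Under the same notation, the average turnover index can be computed as follows (a minimal sketch):

```python
import numpy as np

def turnover_index(weight_history):
    """TI from the formula above: weight_history is an (L+1) x n array,
    row 0 being the initial portfolio and row l the composition after
    the l-th rebalancing; returns the average absolute weight change
    per rebalancing."""
    W = np.asarray(weight_history, dtype=float)
    L = W.shape[0] - 1                          # number of rebalancings
    return np.abs(np.diff(W, axis=0)).sum() / L
```

A full swap between two securities, for example, yields TI = 2, the maximum per-rebalancing turnover of a fully invested long-only portfolio.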

Analyzing the results in more depth, one can notice that the general guidelines highlighted in the previous section carry over to the portfolios selected by applying the considered rebalancing strategies. Firstly, and most importantly, they confirm, once more, that optimization models can be a valuable support for investment decisions. Indeed, in the majority of the instances, most of the portfolios outperform the respective benchmark in terms of out-of-sample average return yielded. In particular, the value of statistic “Excess Ret. %” ranges from a minimum value of −6.21% (see in Table 5 the excess return yielded in instance ORL-IT5 by model ECVaR(.50)) to a maximum value larger than 70% (e.g., see in Table 6 the results related to all the models in instance ORL-IT8). The in-sample statistics indicate that the optimal portfolios are sufficiently well-diversified, as are those selected using the single-period strategy, and that, in general, no remarkable differences arise when the rebalancing frequency is modified. The only exception is the turnover index “TI”, which, on average, takes smaller values as the rebalancing frequency increases (cf. the values of this statistic for strategy RS-S with those for RS-M).

Table 8 Investment strategies: a summary of their out-of-sample rankings

To better evaluate the impact of the different rebalancing strategies on the out-of-sample performance of the optimal portfolios, in Table 8 we summarize, for each optimization model, the ranking of each strategy according to the Sortino index value (strategy SP refers to the single-period strategy detailed in Sect. 4.1). In particular, this table reports the number of times (out of the 12 instances) that the portfolio selected by each optimization model was ranked from the first to the fourth position, based on the Sortino index. As a general conclusion, the figures reported in Table 8 indicate that the investment strategy ranked first the highest number of times is RS-Q (20 times, summed over all the optimization models), immediately followed by strategy RS-S (18 times). These two strategies also achieved the second position the highest number of times, and hence performed best if one considers the cumulative sum of the first two positions. Conversely, strategy RS-M achieved the first position the smallest number of times: 10, in contrast to the 12 times achieved by strategy SP. The latter two strategies are also those that were ranked last the highest number of times. Summarizing these results, investors can achieve substantially better out-of-sample performance by periodically rebalancing the portfolio composition. This observation is consistent with the practice observed among financial investors. As far as the best frequency of rebalancing is concerned, the results indicate that a strategy based on frequent portfolio rebalancing (i.e., RS-M) is often outperformed by investment strategies based on less frequent portfolio rebalancing (i.e., RS-S and RS-Q). Given these findings, strategy RS-M is excluded from the following analysis.

Table 9 Strategy RS-S: a summary of the out-of-sample rankings
Table 10 Strategy RS-Q: a summary of the out-of-sample rankings

Tables 9 and 10 summarize the ranking of the five optimization models according to the Sortino index values for strategies RS-S and RS-Q, respectively. The meaning of the values reported in each column of these tables is akin to that explained above for Table 4. Analyzing in detail the results achieved for strategy RS-S (cf. the figures in Table 9), the best performing model is EWCVaR(.05, .25, .50): it achieved the highest Top/Bot. ratio and the smallest average ranking. The second best is model EOR, which yielded a considerably worse Top/Bot. ratio, along with a slightly worse average position, compared to model EWCVaR(.05, .25, .50). Models ECVaR(.50) and ECVaR(.05) achieve a middling performance, whereas model EWCVaR(.05, .25) yielded the worst performance.

Turning our attention to the results produced by strategy RS-Q (cf. the figures in Table 10), it is quite evident that the two WCVaR models are those yielding the best performance. In particular, model EWCVaR(.05, .25) achieved the best Top/Bot. ratio, as well as the best average position, immediately followed by model EWCVaR(.05, .25, .50), which achieved slightly worse values in both statistics. Also for strategy RS-Q, models ECVaR(.50) and ECVaR(.05) yielded a middling performance, whereas EOR is the model that achieved the worst performance.

As a final attempt to provide some insights, in Table 11 we show the rankings of each of the five models employed within both rebalancing strategies. Note that, altogether, we are comparing ten different approaches. More specifically, this table reports the number of times (out of the 12 instances) that the portfolio selected by a given model and rebalancing strategy was ranked from the first to the tenth position, according to the Sortino index values. Column Top/Bot. here indicates the ratio between the number of times the portfolio selected by an optimization model within a given rebalancing strategy was ranked in one of the first five positions and the number of times it was ranked in one of the last five positions. Column Aver. shows the average position achieved. Looking at the results reported in Table 11, model EWCVaR(.05, .25, .50) employed within strategy RS-S yielded the best Top/Bot. ratio. The second best performance is achieved by model EOR, followed by ECVaR(.05), both still applied within strategy RS-S. The latter model yielded the same value of the Top/Bot. ratio as the former, but a worse average position. Finally, all the models using strategy RS-Q, with the exception of model ECVaR(.05), together with model ECVaR(.50) applied within strategy RS-S, achieve the same worst performance in terms of the Top/Bot. ratio. One may conclude that, with respect to the 12 instances, the WCVaR models tend to produce reasonably good results more consistently than the corresponding CVaR models.
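The rankings and Top/Bot. ratios used throughout Tables 4 and 8-11 can be reproduced with a few lines of code. A sketch; ordinal tie-breaking is our own assumption:

```python
import numpy as np

def rank_summary(sortino, top, bottom):
    """Given an (instances x approaches) array of Sortino index values,
    return per approach the Top/Bot. ratio (counts of ranks within the
    first `top` positions over counts within the last `bottom` positions)
    and the average position; rank 1 is the best in each instance."""
    s = np.asarray(sortino, dtype=float)
    n_inst, n_app = s.shape
    order = np.argsort(-s, axis=1)              # best approach first in each row
    ranks = np.empty_like(order)
    ranks[np.arange(n_inst)[:, None], order] = np.arange(1, n_app + 1)
    top_cnt = (ranks <= top).sum(axis=0)
    bot_cnt = (ranks > n_app - bottom).sum(axis=0)
    top_bot = np.where(bot_cnt > 0, top_cnt / np.maximum(bot_cnt, 1), np.inf)
    return top_bot, ranks.mean(axis=0)
```

Tables 4, 9 and 10 correspond to `top=bottom=2` over 5 models, and Table 11 to `top=bottom=5` over the 10 model-strategy combinations.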

Table 11 Strategies RS-S and RS-Q: a comparison of their out-of-sample rankings

5 Conclusions

In recent years, shortfall or quantile risk measures have been playing a central role in financial applications. The Conditional Value-at-Risk (CVaR) is one of such measures. In this paper, we have contributed to the literature by:

  • introducing a theoretical framework for risk-reward ratio models, and showing how it can be applied to the Enhanced Index Tracking Problem (EITP);

  • proposing, in the context of the EITP, a class of risk-reward ratio optimization models, where the risk measure is based on the Weighted CVaR (WCVaR), defined as a weighted combination of a few CVaR measures. It allows a more detailed modeling of risk aversion while preserving the simplicity of the CVaR, and encompasses the classical CVaR as a special case;

  • showing that, using standard linearization techniques, the risk-reward ratio optimization models can be reformulated as LP solvable models;

  • showing that some modeling issues of the formulation introduced in Guastaroba et al. (2016) can be overcome by reformulating it as a risk-reward model.

The performance of the portfolios optimized by means of the proposed approach has been compared to that of the portfolios constructed using the reformulation of the Extended Omega Ratio (EOR) model presented in Guastaroba et al. (2016). All optimization models were solved using CPLEX.

We conducted extensive computational experiments on two different sets of benchmark instances, exploring different market trends both in-sample and out-of-sample. In the experimental analysis we considered both a static investor who applies a single-period buy-and-hold investment strategy and, by means of a rolling time window approach, an investor who rebalances the portfolio composition. The results indicate that:

  • all the optimal portfolios yield quite satisfactory out-of-sample performance, tracking the benchmark very closely over the out-of-sample period and often achieving considerably better returns;

  • all the optimized portfolios are adequately well-diversified;

  • no optimization model clearly dominates all the others in terms of out-of-sample performance;

  • a ranking based on the values of the Sortino index yielded ex-post by the optimal portfolios suggests that adding further CVaR measures within a WCVaR framework tends to produce better performance;

  • considering the rolling time window evaluation, investors can achieve substantially better out-of-sample performance by periodically rebalancing the portfolio composition; moreover, semiannual rebalancing produced better performance than quarterly rebalancing, which in turn performed better than monthly rebalancing;

  • computing times required to solve the proposed models, even for very large investment universes, are negligible, making them suitable for financial practice, where decisions increasingly have to be taken in a real-time environment.

The results suggest that optimization models can provide investors with a valuable quantitative tool to support investment decisions.