Introduction

For a very long time, the optimal allocation of wealth has been an essential topic for mankind and is still developing. DeMiguel et al. (2009) cite a rabbi from the fourth century who already proposed a rule on how to split wealth across assets. Since then, many researchers have tried to find the best portfolio strategy. A remarkable step was the introduction of Modern Portfolio Theory (“MPT”) in 1952 with Markowitz’s mean–variance analysis, see Markowitz (1952). Fifty years later, Fabozzi et al. (2002) conclude that the MPT’s importance will not vanish and that it will have a permanent presence in financial research and practice. However, they also note that the MPT is a normative theory, explaining optimal investing under perfect information. As DeMiguel et al. (2009) note, the strategy suffers from low out-of-sample performance. Such results are due to the MPT’s tendency to produce extreme weights, which are highly sensitive to changes in its inputs. Furthermore, the estimation of these inputs is itself error-prone, leading to the continuous development of portfolio optimization, for example in Jorion (1986), Clarke et al. (2013), Bessler et al. (2017), DeMiguel et al. (2009), Estrada (2010), Trimborn et al. (2019) and Petukhina et al. (2020).

This paper compares and evaluates different optimization techniques applied to an investment universe diversified along the following dimensions: multiple asset classes, geographic markets and input estimators, with a specific comparative view on the naive (equally weighted) portfolio.

An important focus of this paper is the application of different asset classes in the empirical section. The general motivation for diversifying portfolios is to exploit correlation structures among assets, markets or asset classes, which can help increase the return or decrease the portfolio risk, see Elton et al. (2003). Due to the rise of alternative investments such as cryptocurrencies, this paper includes cryptocurrencies (CCs) as a new asset class. As Glaser et al. (2014) note, there is still a discussion on whether CCs should be treated as alternative assets or currencies. Trimborn et al. (2019) and Petukhina et al. (2020) already deal with them as assets. Both include them in portfolios together with traditional investments such as stocks, bonds and commodities. Klein et al. (2018) studied the properties of Bitcoin as an asset and specifically compared it to gold. They found that, though there is a similarity to major precious metals in its response to market shocks, Bitcoin as an asset differs from other conventional investments. Thus, Bitcoin and presumably other cryptocurrencies can be characterized as alternative assets.

The investment universe in this research includes commodities, some of which can be treated as traditional assets and others as alternative assets. For instance, gold is one of the most traditional alternatives to stocks and bonds in portfolios. It is generally known as a “safe haven”, meaning that investors tend to buy gold if they fear bear markets or crashes, see Baur and McDermott (2010). Furthermore, we include palladium, silver, corn, wheat, oil, coriander, pork bellies, platinum and diamonds as commodities. Whereas palladium and silver are generally categorized as precious metals and as options for portfolio diversification, this is not the case for diamonds, which are considered an alternative investment. One possible reason might be that, whereas silver and palladium are also widely used in industry, the main demand for diamonds comes from the wedding industry, see Scott and Yelowitz (2010). Bessler and Wolff (2015) analyzed the benefits of adding commodities to a stock-bond portfolio, and Bosch (2017) studied trading and speculation in commodity markets. The lack of studies discussing corn and wheat in a portfolio context motivates their inclusion in our investment set. Therefore, our analysis is applied to stocks, CCs as new assets, and commodities as both traditional and alternative assets.

Our contribution to current research is that a wider investment universe is considered in the context of portfolio analysis. Multiple asset classes are included, namely stocks, precious metals, commodities, diamonds and cryptocurrencies. Furthermore, a broader spectrum of strategies is investigated in a comparative manner using several different success measures. Additionally, various estimators for the input parameters are employed.

The paper is structured in the following way: Sect. 2 covers the empirical analysis data. Section 3 explains the methodology of the investing strategies, Sect. 4 presents the empirical results and Sect. 5 gives a brief conclusion.

Data

The investment universe used for the empirical analysis of the strategies includes German stocks, commodities and cryptocurrencies. The data set of CCs contains the ten largest by market capitalization as of 11.05.2020, obtained from coingecko.com. The daily observations span 29.04.2013 to 31.12.2018. In contrast to the German stocks, CCs also have observations on weekend days. The prices/exchange rates are measured in USD per unit of cryptocurrency (i.e. a price of 6 USD means that with 6 USD one could buy 1 BTC). The trading volume is also measured in USD.

Due to limited data availability (e.g., several cryptocurrencies were issued later than others), the set contains many missing values (NA). To have at least some CCs but also a large enough set for the calculations, the minimum number of CCs with complete observations was set to six in this research. The list of the included cryptocurrencies can be found in the appendix.

Thus, after removing weekend days and keeping only the complete series, the final set contains six cryptocurrencies with daily observations from 11.02.2016 to 31.12.2018 (753 daily observations).

The stocks were obtained from the German SDAX index in its composition as of 08.02.2019 (70 stocks) for the same timespan as the CCs: from 11.02.2016 to 31.12.2018. Incomplete time series for the analyzed period were filtered out; thus, 62 German stocks were included in the investment universe for the empirical analysis. The SDAX is a German stock market index for small-cap stocks: it contains the 70 largest companies that are not large enough to be listed in the DAX (large-cap index) or the MDAX (mid-cap index). Nevertheless, these companies have to fulfill the criteria of the Prime Standard regulation of the Frankfurt stock exchange. The stock prices and the trading volume are measured in USD, and the frequency of the observations is daily. The source of the data is Bloomberg.

The following commodities are also constituents of the portfolio analysis: WTI crude oil, Brent crude oil, coriander, pork bellies, three types of diamonds, corn, wheat, gold, palladium, silver and platinum. The source is Thomson Reuters Eikon. The unit for the respective prices is USD. Data on their trading volume are not available.

The final data contain 753 daily observations for 62 German stocks, six CCs and 13 commodities. For portfolio optimization, one is interested not in the prices themselves but in the returns. Furthermore, returns have the convenient property of being “standardized”, because they share the same unit (percentage). Thus, daily returns are calculated and used in our analysis, which reduces the data set to 752 daily observations. The trading volume data were adjusted accordingly by removing the first observation of the time-adjusted set. Hence, the volume data contain 752 observations for 62 German stocks and six CCs.

Outliers in the data were not removed; these are extreme events, and a strategy should be robust against such tail events instead of neglecting the data points in empirical research. Especially the CCs contain these extreme values. The data were double-checked, and independent sources reported the outlying values.

Furthermore, we construct different samples of 70 stocks, drawn as stratified random samples from the SDAX, DAX30, FTSE 100, Nasdaq 100, Nikkei 225, TOPIX small, CSI 500 and S&P 600. We use subsampling to avoid dimensionality problems. The data from the DAX30, FTSE 100, Nasdaq 100, Nikkei 225 and TOPIX small were obtained on 11.05.2020 and the data from the CSI 500 and S&P 600 on 03.06.2020, all from Thomson Reuters Eikon. We use these samples to check the robustness of our results in the presence of different, geographically more diversified assets. The performance metrics for them are reported in the appendix. Also, we note that our selection of alternatives might seem ad hoc to the reader; however, we seek to conduct a comparative methodological study with exemplary data rather than a specific trading report.

The formula to obtain daily returns from prices is the following:

$$\begin{aligned} r_{i,t} = \frac{P_{i,t}-P_{i,t-1}}{P_{i,t-1}} = \frac{P_{i,t}}{P_{i,t-1}} - 1. \end{aligned}$$
(1)

The index i denotes asset i of a given set of assets, t is the time index which corresponds to a day and P is the respective price.
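
Applied to a price series, Eq. (1) can be sketched as follows; the prices below are hypothetical and serve only to illustrate the computation:

```python
import numpy as np

# Hypothetical price series for one asset.
prices = np.array([100.0, 102.0, 99.96, 101.0])

# Eq. (1): r_t = P_t / P_{t-1} - 1. The return series has one observation
# fewer than the price series, which is why the data set shrinks from
# 753 to 752 observations.
returns = prices[1:] / prices[:-1] - 1
```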

Methodology

In this section, the theoretical background and methodology are explained. It starts with a brief overview of the notation used, followed by the estimators for the necessary parameters. Afterwards, a description of the strategies takes place. For each allocation rule, the intuition, as well as the mathematical definition, is given. Lastly, the performance measurements used to evaluate the allocation techniques are explained.

  • T is the number of available observations, which is equivalent to the total number of days in the data.

  • N is the number of risky assets.

  • \(\mu\) is a \(N \times 1\) vector of expected returns of these assets and r the true, realized returns.

  • \(\Sigma\) is the \(N \times N\) variance–covariance matrix of the same risky assets.

  • \(r_{f}\) is a scalar representing the risk-free rate.

  • \(\varvec{1_{N}}\) represents a vector of ones of length N: \((1_1, 1_2, \ldots , 1_i, \ldots , 1_N)^{\intercal }\).

  • x is a \(N \times 1\) vector of the weights: \((x_1, x_2, \ldots , x_i, \ldots , x_N)^{\intercal }\).

  • M is a scalar representing the window size for moving-window estimations.

Parameter estimation

The parameters for the respective strategies are not known a priori. To implement the strategies, estimators are necessary, which are described in this section. The window size is essential, as all parameters are estimated on a rolling-window basis. That means new information is continuously included in the parameters and data points older than M drop out of the estimation. The abbreviations in brackets will be used in the empirical section to denote which estimator was used. The estimators are time-dependent, that is, for every point in time t, a parameter is estimated based on the respective window.

Arithmetic mean (AM)

The first parameter refers to the unknown \(\mu\) which represents a vector of expected returns \((\mu _1, \ldots , \mu _i, \ldots , \mu _N)^{\intercal }\) and has to be estimated for every time t. Here, this will be the arithmetic mean, described by the following formula:

$$\begin{aligned} \hat{\mu }_{i,t} = \frac{1}{M}\sum _{j = t-M}^{t-1}{r_{i,j}}. \end{aligned}$$
(2)

There, i corresponds to asset i of the set, t to a time, r is the realized return.

Geometric mean (GM)

However, the arithmetic mean might not be suitable as an estimator for mean growth rates. By the definition in Eq. (1), returns are growth rates of prices. Thus, we include the geometric mean as another estimator, especially as prices follow a geometric series. Furthermore, the geometric mean is likely a more conservative estimator and therefore might lead to better results. The GM is conservative because, for positive real numbers, the GM is never greater than the arithmetic mean of the same sample. Furthermore, Jacquier et al. (2003) showed that compounding the arithmetic average yields an upwardly biased estimator. The following formula represents the computation of the geometric mean:

$$\begin{aligned} \hat{\mu }_{i,t} = \root M \of {\prod _{j = t-M}^{t-1}{(1+r_{i,j})}}-1. \end{aligned}$$
(3)

Again, i corresponds to asset i of the set, t to a time, r is the realized return and M the window size.
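
Both mean estimators of Eqs. (2) and (3) can be sketched on a rolling window; the return series and window size below are hypothetical:

```python
import numpy as np

# Hypothetical return history for one asset; M is the rolling window size.
r = np.array([0.01, -0.02, 0.015, 0.005, -0.01, 0.02])
M = 4

def am(returns, t, M):
    # Eq. (2): arithmetic mean over the window [t-M, t-1] (0-based slicing).
    return returns[t - M:t].mean()

def gm(returns, t, M):
    # Eq. (3): geometric mean of gross returns over the same window, minus 1.
    return np.prod(1 + returns[t - M:t]) ** (1 / M) - 1

t = len(r)  # estimate for the day following the sample
mu_am, mu_gm = am(r, t, M), gm(r, t, M)
```

By the arithmetic-geometric mean inequality, `mu_gm` never exceeds `mu_am`, which is the conservativeness mentioned above.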

Variance–covariance matrix (AM/GM)

Besides the parameter \(\mu\), the variance–covariance matrix \(\Sigma\) is used. The usual estimator takes the following form:

$$\begin{aligned} \hat{e}_{ij,t}&= \frac{1}{M-1}\sum _{h=t-M}^{t-1}{(r_{i,h}-\hat{\mu }_{i,t})(r_{j,h}-\hat{\mu }_{j,t})} \end{aligned}$$
(4)
$$\begin{aligned} \hat{\Sigma }_{t}^{u}&= [\hat{e}_{ij,t}]. \end{aligned}$$
(5)

The respective window size M will always be the same as for the \(\mu\) estimator here.

The superscript “u” assigns a name to this estimator (“u” stands for usual).
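
A minimal sketch of the rolling covariance estimation of Eqs. (4)-(5), using simulated returns; `np.cov` applies the same 1/(M-1) normalization:

```python
import numpy as np

# Hypothetical T x N return matrix (rows: days, columns: assets).
rng = np.random.default_rng(0)
R = rng.normal(0, 0.01, size=(300, 3))
M = 250

def cov_usual(R, t, M):
    # Eqs. (4)-(5): sample covariance over the window [t-M, t-1];
    # np.cov normalizes by M-1 (ddof=1) by default.
    return np.cov(R[t - M:t].T)

Sigma = cov_usual(R, len(R), M)
```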

Bayes–Stein shrinkage estimator (BS)

Jorion (1986) states that portfolio analysis, especially in the MPT framework, is usually separated into two steps. In the first, the moments and other necessary parameters are estimated, and in the second step, these are plugged into the optimization as if they were the true values. He argues that such a separation impairs the portfolio analysis through estimation error, especially in the first moment, as the variance appears to be more robust as the sample gets larger. Thus, the Bayesian approach aims to minimize the utility loss coming from the use of sample estimates. It shrinks the sample mean towards a common value, which is, in his case, the mean of the global minimum variance portfolio. Jorion (1986) showed in a simulation analysis that his shrinkage procedure reduces estimation error significantly.

Following Stein (1955), James and Stein (1961), Jorion (1986) and DeMiguel et al. (2009), the equations take the following form:

$$\begin{aligned} \hat{\mu }_{t}&= (1-\hat{\phi }_{t})\bar{\mu }_{t} + \hat{\phi }_{t}\hat{\mu }_{t}^{\min }, \end{aligned}$$
(6)
$$\begin{aligned} \hat{\phi }_{t}&=\frac{N+2}{(N+2) + M(\hat{\mu }_{t}-\mu _{t}^{\min })^{\intercal }\hat{\Sigma }_{t}^{-1}(\hat{\mu }_{t}-\mu _{t}^{\min })}, \end{aligned}$$
(7)
$$\begin{aligned} 0&< \hat{\phi }_{t} \quad < 1, \end{aligned}$$
(8)
$$\begin{aligned} \hat{\Sigma }_{t}&= \frac{M - 1}{M - N - 2}\hat{\Sigma }_{t}^{u}. \end{aligned}$$
(9)

Again, M represents the window size. \(\hat{\mu }_{t}^{\min }\) is the estimated return of the global minimum variance portfolio and \(\bar{\mu }_{t}\) the sample mean (arithmetic).

Furthermore, Jorion (1986) provides an estimator for the variance–covariance matrix:

$$\begin{aligned} \hat{\Sigma }_{t}^{bs}&= \hat{\Sigma }_{t}\left( 1 + \frac{1}{M+\lambda }\right) + \frac{\lambda }{M(M + 1 + \lambda )}\frac{\varvec{1_{N}}\varvec{1_{N}}^{\intercal }}{\varvec{1_{N}}^{\intercal }\hat{\Sigma }_{t}^{-1}\varvec{1_{N}}}. \end{aligned}$$
(10)

In that formula, \(\lambda\) denotes the prior precision.
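
Eqs. (6)-(9) can be sketched as follows; the return sample is simulated, and the mean of the global minimum variance portfolio serves as the shrinkage target as in Jorion (1986):

```python
import numpy as np

# Hypothetical window of M daily returns for N assets.
rng = np.random.default_rng(1)
M, N = 250, 5
R = rng.normal(0.0005, 0.01, size=(M, N))

mu_hat = R.mean(axis=0)                      # sample (arithmetic) mean
Sigma_u = np.cov(R.T)                        # usual covariance, eq. (5)
Sigma = (M - 1) / (M - N - 2) * Sigma_u      # eq. (9)

inv = np.linalg.inv(Sigma)
ones = np.ones(N)
x_min = inv @ ones / (ones @ inv @ ones)     # global minimum variance weights
mu_min = x_min @ mu_hat                      # shrinkage target (scalar)

d = mu_hat - mu_min
phi = (N + 2) / ((N + 2) + M * d @ inv @ d)  # eq. (7)
mu_bs = (1 - phi) * mu_hat + phi * mu_min    # eq. (6)
```

Each component of `mu_bs` lies between the sample mean and the common target, with shrinkage intensity `phi` in (0, 1) as required by eq. (8).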

Portfolio optimization

In this subsection, the different allocation strategies are discussed. For better readability, the time index has been omitted. However, it should be noted that for each rebalancing the weights and necessary parameters are estimated. Thus, it is \(x_t\), \(\mu _t\) and \(\Sigma _t\) everywhere in this subsection. In the empirical section, the rebalancing has been conducted daily. If not stated otherwise explicitly, short selling and leverage are allowed.

Equally weighted (Naive)

The equally weighted portfolio (Naive portfolio) is one of the most straightforward strategies for an investor. The idea is to assign the same weight to each asset in a portfolio (e.g.: 20% for each of 5 stocks in a portfolio). The naive rule has a few convenient features: it is easy to implement and non-parametric; it is diversified and characterized by low trading costs. The following formula describes this approach:

$$\begin{aligned} x_{i}^{EW} = \frac{1}{N}\quad \forall i \in [1,N]. \end{aligned}$$
(11)

In this equation, i is the index for the ith asset.

This approach will serve as a benchmark in the empirical section.

Mean–variance (modern portfolio theory—MPT)

The MPT was one of the milestones in the history of financial economics, founded on the work of Nobel Prize laureate Harry Markowitz, see Markowitz (1952). Today, different variations of strategies exist in this framework, but they share the property of optimization along the efficient frontier. It is the line of portfolios that are at the same time feasible and optimal. Portfolios below the frontier are possible but not optimal, because combinations exist that offer at least the same return with lower risk or a higher return at the same level of risk. Portfolios above the efficient frontier are not feasible, see Markowitz (1952) and Elton et al. (2003).

Target return This approach is the central idea of the MPT, see Markowitz (1952).

$$\begin{aligned}&\min _{x}\quad x^{\intercal }\Sigma x \nonumber \\&\text { s.t. } \quad x^{\intercal }\mu = \mu ^{target} \nonumber \\&\text { s.t. }\quad x^{\intercal }\varvec{1_{N}} = 1. \end{aligned}$$
(12)
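
Since problem (12) has only equality constraints, it can be solved directly via its first-order (KKT) conditions; the covariance matrix, means and target return below are hypothetical, and short selling is allowed as in the unconstrained setting:

```python
import numpy as np

# Hypothetical inputs for three assets.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
mu = np.array([0.05, 0.08, 0.12])
mu_target = 0.10

N = len(mu)
ones = np.ones(N)
# Stationarity: 2*Sigma x + l1*mu + l2*1 = 0, stacked with the two
# equality constraints x'mu = mu_target and x'1 = 1.
A = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
              [mu[None, :], np.zeros((1, 2))],
              [ones[None, :], np.zeros((1, 2))]])
b = np.concatenate([np.zeros(N), [mu_target, 1.0]])
x = np.linalg.solve(A, b)[:N]
```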

Sharpe ratio maximization (tangency portfolio) Another version within the MPT family is the Sharpe ratio, named after William Sharpe, who introduced this measure to compare funds’ performance, see Sharpe (1966). Thus, one can also build a portfolio based on it, known as the tangency portfolio, which is expressed in the following way:

$$\begin{aligned}&\max _{x}\quad \frac{x^{\intercal }(\mu -r_{f})}{(x^{\intercal }\Sigma x)^{\frac{1}{2}}} \nonumber \\&\text { s.t. }\quad x^{\intercal }\varvec{1_{N}} = 1, \end{aligned}$$
(13)

which is the same as maximizing the certainty equivalent.

Certainty equivalent maximization Similar to the Sharpe ratio, the certainty equivalent serves as a measure of portfolio performance. This metric can be interpreted as the return an investor would require from a risk-free asset to be indifferent between the risk-free asset and the portfolio. The maximization problem is denoted as follows, see DeMiguel et al. (2009):

$$\begin{aligned}&\max _{x}\quad x^{\intercal }\mu - \frac{\gamma }{2}x^{\intercal }\Sigma x \nonumber \\&\text { s.t. }\quad x^{\intercal }\varvec{1_{N}} = 1. \end{aligned}$$
(14)

Here, \(\gamma\) is a parameter representing the risk aversion of the investor. The solution is identical to the one in Sect. 3.2.2. Thus, we skip the results for the certainty equivalent maximization in the empirical section.

Global minimum variance This strategy is also an option within the MPT and can be seen as a special case of mean–variance with only one input parameter, the variance. Ignoring the mean in the optimization problem is the same as assuming that all means are equal (i.e., \(\mu \propto \varvec{1_{N}}\)), see DeMiguel et al. (2009):

$$\begin{aligned}&\min _{x}\quad x^{\intercal }\Sigma x\nonumber \\&\text { s.t. }\quad x^{\intercal }\varvec{1_{N}} = 1. \end{aligned}$$
(15)

This approach is the most risk-averse of all MPT strategies as it is the lowest point on the efficient frontier, see Elton et al. (2003).
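
Problem (15) admits the well-known closed form \(x = \hat{\Sigma }^{-1}\varvec{1_{N}} / (\varvec{1_{N}}^{\intercal }\hat{\Sigma }^{-1}\varvec{1_{N}})\), sketched here with a hypothetical covariance matrix:

```python
import numpy as np

# Hypothetical covariance matrix for two assets.
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
ones = np.ones(2)

# Closed-form global minimum variance weights of problem (15).
w = np.linalg.solve(Sigma, ones)
x_gmv = w / (ones @ w)
```

Any other fully invested portfolio, e.g. the equally weighted one, has at least this variance.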

Geometric mean maximization

In contrast to the MPT family, the maximization of the geometric mean is a dynamic approach. Whereas the mean–variance strategies are static in the sense that they only take one future period into account, the GMM considers a large number of periods. To achieve the maximum terminal wealth, the growth rate of wealth should be maximized, which in such a multi-period model is the geometric mean of the portfolio returns, see Estrada (2010). The mathematical formulation takes the following form:

$$\begin{aligned}&\max _{x}\quad \root L \of {\prod _{t = 1}^{L}{(1+x_{t}^{\intercal }\mu _{t})}}-1\nonumber \\&\text { s.t. }\quad x^{\intercal }\varvec{1_{N}} = 1 . \end{aligned}$$
(16)

In the equation above, L represents the last considered period. It is possible to consider a finite L as well as the limit \(L\rightarrow \infty\). As Estrada (2010) shows, a second-order Taylor expansion leads to the following approximation:

$$\begin{aligned}&\max _{x}\quad {\text {exp}}\left\{ \ln {(1+x^{\intercal }\mu )} - \frac{x^{\intercal }\Sigma x}{2(1+x^{\intercal }\mu )^2}\right\} -1\nonumber \\&\text { s.t. }\quad x^{\intercal }\varvec{1_{N}} = 1. \end{aligned}$$
(17)

As problem (17) is not solvable analytically, numerical methods have to be used. The Nelder–Mead algorithm is applied in the empirical study, see Nelder and Mead (1965). Furthermore, for the empirical part, a short-sale-constrained version of the problem is analyzed. That means that additional constraints are imposed under which every weight \(x_i\) has to be non-negative. For this problem, the L-BFGS-B algorithm is used, see Byrd et al. (1995).
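
As a rough illustration of the short-sale-constrained case (the paper itself uses Nelder–Mead and L-BFGS-B), the approximate objective of (17) can be evaluated over random normalized weight vectors; all inputs below are hypothetical:

```python
import numpy as np

# Hypothetical daily mean vector and (diagonal) covariance matrix.
rng = np.random.default_rng(2)
mu = np.array([0.0004, 0.0006, 0.0002])
Sigma = np.diag([0.0001, 0.0004, 0.0002])

def gmm_objective(x, mu, Sigma):
    # Approximate geometric mean of eq. (17).
    m = x @ mu
    return np.exp(np.log1p(m) - x @ Sigma @ x / (2 * (1 + m) ** 2)) - 1

# Crude random search over non-negative, fully invested weight vectors.
best_x, best_val = None, -np.inf
for _ in range(20000):
    x = rng.uniform(0, 1, size=3)
    x /= x.sum()                     # budget constraint x'1 = 1
    val = gmm_objective(x, mu, Sigma)
    if val > best_val:
        best_x, best_val = x, val
```

A proper implementation would replace the random search with a derivative-free or quasi-Newton optimizer, as in the paper.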

Conditional value-at-risk minimization (CVaR)

Usually, investors want to control the risks of their portfolio. However, the strategies of the MPT family include risk only in terms of expected portfolio variance. Krokhmal et al. (2003) note that distributions such as the normal or log-normal distribution are often assumed. However, such assumptions are contrary to stylized facts, as returns usually exhibit heavy tails, see Petukhina et al. (2020). The conditional value-at-risk (CVaR) was introduced to overcome such problems, see Artzner et al. (1999). The CVaR does not rely on estimators derived under a normal distribution, as it includes higher moments and takes the actual distribution into account. The following problem has to be solved to find a CVaR-optimal portfolio, see Rockafellar and Uryasev (2000):

$$\begin{aligned}&\min _{x}\quad \mathrm{CVaR}_{\alpha }(x) \nonumber \\&\text { s.t. }\quad x^{\intercal }\varvec{1_{N}} = 1 \nonumber \\&\text { s.t. }\quad x_{i} \ge 0, \end{aligned}$$
(18)

where

$$\begin{aligned} \mathrm{CVaR}_{\alpha }(x)&= -\frac{1}{1 - \alpha } \int _{x^{\intercal }\mu \le -VaR_{\alpha }(x)} \! x^{\intercal }\mu f(x^{\intercal }\mu | x) \mathrm {d}x^{\intercal }\mu . \end{aligned}$$
(19)

VaR\(_{\alpha }(x)\) is the \(\alpha\)-quantile of the return distribution. The term \(f(x^{\intercal }\mu | x)\) represents the probability density function of the portfolio return given the weights x. It is possible to impose additional constraints, such as a target return that the portfolio has to reach at least. However, an analytical solution to the problem is not possible. Additionally, the distribution is not known a priori and is therefore estimated by its empirical counterpart based on the window M.

A simulation method is applied to find the weights that minimize the CVaR. That is, \(\eta\) sets of N weights, independently sampled from the continuous uniform distribution U(0, 1), are created. Each simulated set of weights is then used to calculate historical portfolio returns, from which the empirical cumulative distribution function is estimated. In the next step, the simulation selects the set of weights with the smallest CVaR among all simulated portfolios.
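
The simulation can be sketched as follows; the return window is simulated, and we additionally normalize each sampled weight vector to satisfy the budget constraint, which the text leaves implicit:

```python
import numpy as np

# Hypothetical window of daily returns (rows: days, columns: assets).
rng = np.random.default_rng(3)
R = rng.normal(0.0005, 0.01, size=(500, 4))
alpha, eta = 0.95, 5000

def empirical_cvar(port_returns, alpha):
    # Negative mean of the returns at or below the (1 - alpha) quantile.
    var = np.quantile(port_returns, 1 - alpha)
    tail = port_returns[port_returns <= var]
    return -tail.mean()

best_x, best_cvar = None, np.inf
for _ in range(eta):
    x = rng.uniform(0, 1, size=R.shape[1])
    x /= x.sum()                               # long-only, fully invested
    cvar = empirical_cvar(R @ x, alpha)
    if cvar < best_cvar:
        best_x, best_cvar = x, cvar
```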

LIBRO

All strategies mentioned before assume that an investor can buy or sell any quantity at any time. However, this might not reflect reality, where trading depends on supply and demand in the markets. Thus, Trimborn et al. (2019) proposed a portfolio strategy that controls for the liquidity aspect. The idea is to create an upper boundary for each weight, dependent on the liquidity of the respective asset, which is imposed as an additional constraint. Formally, this approach is expressed as follows:

$$\begin{aligned} x_{i} \le \frac{\mathrm{TV}_{i}f_{i}}{W}. \end{aligned}$$
(20)

There, i stands for the ith asset, TV represents the sample median trading volume as a proxy for liquidity, f the speed at which an investor intends to be able to clear the current position and W the wealth of the investor.

The beauty of this strategy lies in its simplicity and universality, as it can be implemented easily on top of any other allocation method. In this research, we apply LIBRO to the mean–variance strategy of Eq. (12) and the CVaR strategy of Eq. (18). Adding the LIBRO constraint to the MPT approach results in a quadratic programming problem, which is solved with the quadprog package in R, see Goldfarb and Idnani (1983). To find the optimal CVaR-LIBRO strategy, a simulation is conducted analogously to the original CVaR above.
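
The LIBRO bound of Eq. (20) itself is straightforward to compute; the trading volumes, clearing speed f and wealth W below are hypothetical:

```python
import numpy as np

# Hypothetical median daily trading volumes in USD (proxy for liquidity),
# fraction f of the position to be clearable per day, and investor wealth W.
TV = np.array([2_000_000.0, 150_000.0, 40_000.0])
f = 0.05
W = 1_000_000.0

upper = TV * f / W                 # per-asset weight caps, eq. (20)
caps = np.minimum(upper, 1.0)      # a cap above 1 is never binding here

# A candidate weight vector is feasible under LIBRO if it respects the caps.
x = np.array([0.08, 0.007, 0.0015])
feasible = bool(np.all(x <= caps))
```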

Performance metrics

To assess and compare the performance of the different strategies, it is necessary to introduce comparable evaluation metrics. Let \(\hat{\Psi }\) denote the estimator of the respective success measure.

\(\hat{\mu }_{k}\) and \(\hat{\sigma }_{k}^2\) denote the arithmetic mean return and variance of the respective kth strategy.

Certainty equivalent

The certainty equivalent was already mentioned in Sect. 3.2.2. This metric gives the minimum rate a risk-free asset must return such that an investor would be indifferent between the respective kth portfolio and the risk-free asset.

The certainty equivalent takes the form:

$$\begin{aligned} \hat{\Psi }_{k, \gamma }^\mathrm{ceq}&= \hat{\mu }_{k} - \frac{\gamma }{2}\hat{\sigma }_{k}^2. \end{aligned}$$
(21)

There, \(\gamma\) denotes the risk aversion of an investor.

(Adjusted) Sharpe ratio

The Sharpe ratio was already mentioned in Sect. 3.2.2. Economically, it can be interpreted as how much return an investor receives per unit of risk. Said colloquially, it is how much return \(\mu\) an investor could “buy” by paying an additional unit of risk \(\sigma\). Formally, it is defined as:

$$\begin{aligned} \hat{\Psi }_{k}^\mathrm{sr}&= \frac{\hat{\mu }_{k} - r_f}{\sqrt{\hat{\sigma }_{k}^2}}. \end{aligned}$$
(22)

However, investors might be interested in skewness and kurtosis as well; thus, to assess the performance properly, Pézier and White (2008) proposed the adjusted Sharpe ratio:

$$\begin{aligned} \hat{\Psi }_{k}^\mathrm{asr}&= \hat{\Psi }_{k}^\mathrm{sr}\left[ 1 + \left( \frac{\hat{S}}{6}\right) \hat{\Psi }_{k}^\mathrm{sr} - \left( \frac{\hat{K}}{24}\right) (\hat{\Psi }_{k}^\mathrm{sr})^2\right] . \end{aligned}$$
(23)

In this formula, \(\hat{S}\) and \(\hat{K}\) represent the sample skewness and sample excess kurtosis. This measure incorporates the preference for positive skewness and negative excess kurtosis, as it penalizes the respective opposite. This is important because a distribution with negative skewness and positive excess kurtosis increases the tail risks of a portfolio, which investors do not desire.
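
A small sketch of Eqs. (22)-(23) for a simulated return series, with \(r_f = 0\) as in the empirical section:

```python
import numpy as np

# Hypothetical daily return series for one strategy.
rng = np.random.default_rng(4)
r = rng.normal(0.0004, 0.01, size=750)

mu, sigma = r.mean(), r.std(ddof=1)
sr = mu / sigma                                     # eq. (22) with r_f = 0

S = np.mean((r - mu) ** 3) / np.std(r) ** 3         # sample skewness
K = np.mean((r - mu) ** 4) / np.std(r) ** 4 - 3     # sample excess kurtosis
asr = sr * (1 + (S / 6) * sr - (K / 24) * sr ** 2)  # eq. (23)
```

For a near-normal sample, skewness and excess kurtosis are close to zero, so the adjusted Sharpe ratio stays close to the plain one.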

Turnover

Another important dimension of portfolio performance is the amount of trading fees necessary to implement a strategy. We use the turnover as a proxy for the transaction costs of a strategy, computed as follows:

$$\begin{aligned} \hat{\Psi }_{k}^{to}&= \frac{1}{T-M}\sum _{t = M +1}^{T}\sum _{i = 1}^{N}{(|\hat{x}_{k, i, t+1} - \hat{x}_{k, i, t+}|)}. \end{aligned}$$
(24)

Here, \(\hat{x}_{k, i, t+1}\) denotes the weight on asset i of the kth strategy after rebalancing at time \(t+1\), and \(\hat{x}_{k, i, t+}\) the weight right before rebalancing. The formula thus calculates the average absolute sum of changes in the weights, so the larger this metric, the higher the implementation cost of the strategy.
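
The turnover of Eq. (24) can be sketched as follows; the weight paths are simulated, with the pre-rebalancing weights obtained by letting the post-rebalancing weights drift with hypothetical returns:

```python
import numpy as np

# Hypothetical weight paths for N assets over a number of rebalancing days.
rng = np.random.default_rng(5)
days, N = 100, 4
W_after = rng.dirichlet(np.ones(N), size=days)     # weights after rebalancing
drift = 1 + rng.normal(0, 0.01, size=(days, N))
W_before = W_after * drift                          # weights drift with returns
W_before /= W_before.sum(axis=1, keepdims=True)

# Eq. (24): average daily sum of absolute weight changes caused by
# rebalancing, comparing weights after rebalancing at t+1 with the
# drifted weights right before (t+).
turnover = np.abs(W_after[1:] - W_before[:-1]).sum(axis=1).mean()
```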

Terminal return

The last metric used is the terminal return, sometimes also called terminal wealth. It is an essential factor because it denotes a strategy’s outcome at the end of the investing period, although it does not control for risk in any way. This metric’s importance lies in the fact that, for example, a fund manager’s performance may be measured by the wealth she created for her investors. The following formula is used for the computation:

$$\begin{aligned} \hat{\Psi }_{k}^\mathrm{tr}&= \prod _{t = M+1}^{T}(1+r_{t,k}). \end{aligned}$$
(25)

Here, r denotes the realized portfolio return, k the kth strategy and t is the time index. M is the window size. This formula thus represents the cumulative performance at the final time T and can also be used to calculate the daily compound return.

Significance

To compare the results, it is necessary to look not only at the metrics themselves but also at whether they are significant. For instance, the certainty equivalent relies on the first moment of the final portfolios; we compare these and test for significant differences. As the naive portfolio serves as a benchmark in this paper, a classical one-sample t-test is conducted. However, the test requires that the random variable, in this case \(\hat{\mu }_{k}\), be normally distributed. To assess whether the portfolio returns follow a normal distribution, the Shapiro–Wilk test is conducted for each of them, as it delivers the greatest power among normality tests, see Razali and Yap (2011). However, Stonehouse and Forrester (1998) demonstrated that the t-test is robust against violations of the normality assumption, especially if the skewness is not too extreme. To interpret the sample skewness \(\hat{S}\), a rule-of-thumb is used, see Bulmer (1979). By this rule-of-thumb, a distribution is approximately symmetric when \(|\hat{S} |\le 0.5\), moderately skewed when \(0.5 \le |\hat{S} |\le 1\) and highly skewed when \(1 \le |\hat{S} |\).

One may argue that a two-sample test is more appropriate, such as the Welch test; however as the strategies are compared with a benchmark, this “benchmark-mean” is considered as an externality. Furthermore, Stonehouse and Forrester (1998) also showed that the Welch test is not robust against normality violations.

Empirical analysis

In this section, we analyze and discuss the empirical results. The non-parametric Naive Portfolio serves as a benchmark. The central question is whether an approach can outperform such a simple asset allocation, but also whether it can do so efficiently. That means the performance has to be significantly different such that it is worth the effort a potential investor has to make to implement the respective model. For all parametric strategies, three different estimators are employed, abbreviated in brackets as follows: (AM) stands for the arithmetic mean, (GM) for the geometric mean and (BS) for the Bayes–Stein estimators. The (AM) and (GM) strategies both use the same variance–covariance matrix and only differ in the mean. The (BS) portfolios have their own first and second moments as inputs, see Sect. 3.1.4.

Besides discussing the strategies themselves, this section also deals with whether the usage of different estimators has some impact on the portfolios’ performance.

First of all, values for the other parameters, explained in the methodological section, need to be assigned, see Table 1.

Table 1 Parameters values used in the empirical analysis

The chosen target return might seem very low, but the observations and the calculated (expected) returns are daily. The denoted return corresponds to an approximate return of \(10\%\) per year. Furthermore, the assumption of \(r_f = 0\) implies that returns and excess returns are equal.

For some of the assets, data on their trading volume were not available. To use the LIBRO strategy on these assets anyway, the missing values were imputed in the following way: for every asset with non-missing values, the sample median trading volume over the window was calculated, resulting in a vector of median sample TVs at every t. Let this vector be denoted by \(\zeta\); missing values are then replaced with \(\min (\zeta )\).

Table 2 Values of performance metrics for all strategies and all estimators (AM = arithmetic mean, GM = geometric mean, BS = Bayes–Stein) with respect to the out-of-sample timeframe from 28.07.2016 to 31.12.2018, using stocks from SDAX, alternative assets, precious metals and cryptocurrencies

Table 2 demonstrates that the global minimum variance strategy and the constrained minimum variance strategy have a certainty equivalent of approximately zero for all three estimators except for the Bayes–Stein constrained minimum variance. Interestingly, the Naive Portfolio reports a CEQ of \(0.05\%\), which is only exceeded by the constrained geometric mean maximization with the arithmetic mean estimator and the three CVaR strategies. Additionally, it is notable that the three Sharpe ratio strategies are the only ones to report a negative CEQ, and indeed very extreme values. The interpretation of, for example, the \(-0.5123\) of the Sharpe ratio (AM) is that if the risk-free rate \(r_\mathrm{f}\) is more than \(-51.23\%\), an investor would choose the risk-free asset over this portfolio. However, it is already known that MPT strategies tend to extreme weights and outlying results, see DeMiguel et al. (2009). Nonetheless, the Sharpe ratio approach is the only one that demonstrates extreme results, supported by other metrics, for example the turnover, which is several thousand times higher than for the other portfolios. The rest of the MPT strategies have moderate results compared to the Sharpe ratio maximization.

In terms of the (adjusted) Sharpe ratio, not only the three CVaR strategies, the LIBRO-CVaR (GM) and the constrained geometric mean maximization using the arithmetic mean estimator perform at least as well as the Naive portfolio, but so do the three unconstrained geometric mean maximization approaches. For example, an investor would earn approximately 0.175 percentage points more return per unit of risk by choosing the LIBRO-CVaR (GM) instead of the Naive portfolio.
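The two ratios can be sketched as follows; Pezier's skewness-and-kurtosis adjustment used below is an assumption, as the paper's exact adjusted-Sharpe-ratio variant is defined elsewhere in the text.

```python
import numpy as np

def sharpe_ratio(excess_returns):
    """Sample Sharpe ratio: mean excess return per unit of sample standard deviation."""
    r = np.asarray(excess_returns, dtype=float)
    return r.mean() / r.std(ddof=1)

def adjusted_sharpe_ratio(excess_returns):
    """Pezier-style adjustment: penalize negative skewness and excess kurtosis
    (this particular formula is an illustrative assumption)."""
    r = np.asarray(excess_returns, dtype=float)
    sr = sharpe_ratio(r)
    z = (r - r.mean()) / r.std(ddof=0)
    skew = (z ** 3).mean()
    ex_kurt = (z ** 4).mean() - 3.0
    return sr * (1.0 + skew / 6.0 * sr - ex_kurt / 24.0 * sr ** 2)
```

The adjustment matters here because daily returns of the assets considered, especially cryptocurrencies, are far from normally distributed.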

Besides the outlying Sharpe ratio strategies, three clusters can be identified in terms of turnover. The first covers a range from 0 to circa 0.013, the second from roughly 0.22 to 0.23 and the last from 0.51 to 0.64. As expected, the naive portfolio falls into the first cluster, together with the unconstrained geometric mean maximization. The second cluster contains the constrained geometric mean maximization. Within the first cluster, the minimum variance strategies lie at the lower bound of the range, the LIBRO strategies cover its middle, and the classic CVaR strategies sit at its upper bound.
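Turnover in the spirit of DeMiguel et al. (2009) can be sketched as the average absolute weight change across rebalancing dates; whether the pre-rebalancing weights are measured before or after price drift is an assumption here.

```python
import numpy as np

def turnover(weights_before, weights_after):
    """Average turnover: mean over rebalancing dates t of sum_j |w_after - w_before|,
    where weights_before are the holdings just prior to rebalancing."""
    wb = np.asarray(weights_before, dtype=float)  # shape (T, N)
    wa = np.asarray(weights_after, dtype=float)   # shape (T, N)
    return np.abs(wa - wb).sum(axis=1).mean()
```

Under this metric, a buy-and-hold-like strategy scores near zero, while a strategy that re-estimates extreme weights each period, like the Sharpe ratio maximization, scores orders of magnitude higher.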

The last metric reported in the table above is the terminal return. As mentioned in Sect. 3, it measures the total wealth accumulated at the end of the period. The three CVaR strategies achieve the highest outcome, closely followed by the constrained geometric mean maximization using the arithmetic mean estimator. The LIBRO-CVaR approaches also perform slightly better than the naive portfolio, together with the unconstrained geometric mean maximization and the constrained version using the geometric mean as an estimator. All other strategies perform worse than the equally weighted portfolio. Again, the Sharpe ratio strategies report outlying values. Figure 1 shows the cumulative performance of a few chosen allocation methods. The cumulative performance starts at one, since an investor begins with \(100\%\) of her wealth. Thus, the metric in Table 2 is to be interpreted as “an investor ends up with \(x\%\) of her initial wealth”.
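The cumulative performance path and the terminal-return metric follow directly from the daily return series:

```python
import numpy as np

def cumulative_performance(returns):
    """Cumulative wealth path starting at 1 (i.e. 100% of initial wealth)."""
    r = np.asarray(returns, dtype=float)
    return np.cumprod(1.0 + r)

def terminal_return(returns):
    """Fraction of initial wealth the investor holds at the end of the sample."""
    return cumulative_performance(returns)[-1]
```

A terminal return of 0.99, for instance, means the investor ends up with 99% of her initial wealth.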

In Appendix D, we report these performance metrics for the diversified stock universe (see Tables 7, 8). We find similar patterns, although the results differ slightly. For all metrics but the turnover, CVaR and LIBRO-CVaR perform best by a noteworthy margin. In terms of turnover, the results are quite similar, although the LIBRO-CVaR strategies perform slightly better than in the case where only the SDAX stocks are used.

Also, in Appendix E, we report results for all datasets with explicit trading costs of \(0.2\%\) for cryptocurrencies and \(0.05\%\) for the other assets (see Tables 9, 10, 11), as well as trading costs of \(0.22\%\) for all assets (see Tables 12, 13, 14). The percentage refers to the market value of the position to be bought or sold. Although the results change slightly, we see similar patterns, in line with what the reported turnover metric in Table 2 suggests. Generally, the equally weighted portfolio, the geometric mean maximization and the LIBRO-CVaR still rank above the other strategies, although CVaR and LIBRO-MPT perform quite well, too. The classical strategies of modern portfolio theory are among the worst. As mentioned, and in line with the high turnover of the CVaR-based investment methods, their returns decrease drastically with increasing trading costs. The Naive portfolio and the geometric mean maximization, by contrast, are relatively unaffected due to their low turnover. This underlines that the choice of a portfolio optimization strategy depends strongly on actual trading costs as well as on the reallocation frequency.
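Proportional trading costs of this kind can be sketched as a per-period deduction of the fee times the fraction of wealth traded; the exact cost accounting used in the paper is not restated here, so this is an illustrative simplification.

```python
def wealth_after_costs(returns, turnovers, fee):
    """Cumulative wealth starting at 1, deducting fee * turnover each period.
    fee: proportional cost, e.g. 0.002 for cryptocurrencies, 0.0005 otherwise;
    turnovers: fraction of portfolio value traded at each rebalancing date.
    Illustrates why high-turnover strategies suffer most as fees rise."""
    wealth = 1.0
    for r, to in zip(returns, turnovers):
        wealth *= 1.0 + r - fee * to
    return wealth
```

With identical gross returns, a strategy trading half its portfolio each day loses a multiple of what a near-buy-and-hold strategy loses, which is exactly the pattern reported for the CVaR-based methods versus the Naive portfolio.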

Furthermore, it is of interest whether the strategies have a mean significantly different from the benchmark portfolio. As already described, a simple t test is used to test the mean difference. Table 3 reports the absolute sample skewness, the p values of the Shapiro–Wilk test of normality and the p values of the t test. The sample skewness and Shapiro–Wilk p values are reported because the t test requires normality but is robust against violations if the skewness is not too extreme, see Stonehouse and Forrester (1998).

Table 3 Absolute sample skewness and p values for normality and t test for all strategies and all estimators (AM = arithmetic mean, GM = geometric mean, BS = Bayes–Stein) with respect to the out-of-sample timeframe from 28.07.2016 to 31.12.2018, using stocks from SDAX, alternative assets, precious metals and cryptocurrencies

Given the table, the hypothesis of a normal distribution has to be rejected in all cases. However, excluding the Sharpe ratio maximization, an absolute sample skewness below one is reported for all strategies, although some lie quite close to 1. Following the rule of thumb of Bulmer (1979), the t test remains valid for these, though the results should be interpreted carefully. The NA in the first row of the t-test column arises because the Naive portfolio is the benchmark; testing it against itself is redundant, as the two series are identical by construction. At the \(5\%\) level, only the constrained minimum variance approach is significant; at the \(10\%\) level, all six classical MPT strategies are. However, since the arithmetic mean is not perfectly suited to portfolio returns, these results should not be regarded as definitive.
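The testing pipeline can be sketched as below; the paired form of the t test (on daily return differences against the benchmark) is an assumption, and the Shapiro–Wilk step would in practice come from a statistics library such as scipy.stats.shapiro.

```python
import math

def sample_skewness(x):
    """Moment-based sample skewness: m3 / m2^(3/2)."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    return m3 / m2 ** 1.5

def paired_t_statistic(strategy, benchmark):
    """t statistic for the mean of daily return differences (paired test);
    whether the paper uses a paired or two-sample variant is an assumption."""
    d = [s - b for s, b in zip(strategy, benchmark)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)
```

Checking \(|\text{skewness}| < 1\) before trusting the t test mirrors the Bulmer (1979) rule of thumb applied in the text.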

Fig. 1
figure 1

Cumulative out-of-sample performance from 28.07.2016 to 31.12.2018, using stocks from SDAX, alternative assets, commodities and cryptocurrencies for the following strategies: equally weighted, . Performance starts at 1, since an investor starts with \(100\%\) of her wealth

One can see that the Naive portfolio and the unconstrained geometric mean maximization are approximately equal to each other, as are the global minimum variance strategy and the constrained minimum variance portfolio. Up to some point, the LIBRO-CVaR portfolio closely tracks the Naive portfolio and then outperforms it for some time, ending up with a slightly higher terminal wealth. Given the metrics and the figure above, it should be checked whether the LIBRO-CVaR portfolio is cost-efficient; if it is not, an investor should choose either the Naive portfolio or one of the geometric mean maximizations. Figure 2 is a \(\mu\)-\(\sigma\)-diagram of the portfolios, using different symbols and colors for the strategy and estimator used. The Sharpe ratio portfolios have been excluded as they contain extreme outliers.

Fig. 2
figure 2

Mean–variance diagram of strategies for the out-of-sample timeframe from 28.07.2016 to 31.12.2018, using stocks from SDAX, alternative assets, commodities and cryptocurrencies. Colors: . Symbols: \(\square\) constrained minimum variance, \(\bigcirc\) global minimum variance, \(\bigtriangleup\) unconstrained geometric mean maximization, \(+\) constrained geometric mean maximization, \(\times\) LIBRO-MPT, \(\nabla\) LIBRO-CVaR, \(*\) CVaR, \(\bullet\) Naive portfolio

As already seen in the tables and figures before, the LIBRO-CVaR and CVaR are the best among the strategies depicted, closely followed by the unconstrained geometric mean maximization. Furthermore, no clear relationship between performance and estimator can be seen.

Besides the individual quality, it is also of interest whether the choice of estimator has an impact on portfolio performance. Figure 4 shows how the mean and standard deviation of the strategies are distributed with respect to their estimator. The Sharpe ratio maximizations have been removed from the computation due to their outlying values.

It can be seen that the median of the mean returns of the Bayes–Stein strategies lies slightly below those of the geometric mean and arithmetic mean. However, its standard deviation is also lower at the median. For both parameters, the range of the distribution is approximately equal across all three estimators. Since the portfolios might contain outliers, the boxplots of Fig. 5 depict robust location and dispersion parameters, namely the median and the interquartile range.

Using robust parameters, the location of returns among the strategies changes. The median of the median returns is close to zero for the arithmetic mean estimations as well as for the Bayes–Stein estimation. However, the median return for the (GM) is even higher than the mean return (ca. 0.001 compared to ca. 0.0007). The median interquartile ranges appear approximately equal to the standard deviations and to each other, although outliers complicate the interpretation. As pointed out before, the geometric mean may be a better measure for assessing the performance of a portfolio. Since the final wealth is also of interest to an investor, these two metrics are shown in the boxplots of Fig. 6.
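The robust location and dispersion parameters used in Fig. 5 can be sketched as:

```python
import numpy as np

def robust_summary(returns):
    """Median and interquartile range as robust alternatives to mean and
    standard deviation: both are insensitive to a few extreme daily returns."""
    r = np.asarray(returns, dtype=float)
    q1, med, q3 = np.percentile(r, [25, 50, 75])
    return med, q3 - q1
```

Unlike the first two moments, these statistics are not dragged around by the occasional extreme cryptocurrency return, which is precisely why they are reported alongside mean and standard deviation.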

The results for the (GM) as a location parameter are similar to those of the arithmetic mean. Again, Bayes–Stein yields the lowest mean, whereas the arithmetic mean strategies and geometric mean strategies have only slightly different median returns. Given these figures, the portfolios which use the geometric mean as an estimator seem to have the most stable performance. When it comes to terminal return, the portfolios which use the arithmetic mean as an input parameter perform nearly identically to the geometric mean strategies. The Bayes–Stein portfolios end up with a terminal return slightly lower than the other two groups.

It was mentioned several times before that the Sharpe ratio strategies were removed from the computations and diagrams. Figure 3 demonstrates the outlying behavior for the arithmetic mean estimator; all three estimators show a similar pattern. The exhibit shows the cumulative performance of the portfolio. DeMiguel et al. (2009) already pointed out that MPT strategies are empirically known to estimate extreme weights and thus take extreme values. Within this research, however, only the Sharpe ratio approaches do so. The data set of this paper contains some outliers; nonetheless, the other strategies handled the data well, and there is no apparent reason why specific assets, for example corn, are assigned extreme weights in this strategy. Cumulative performances of 5, which equals \(500\%\), or \(-30\), which equals \(-3000\%\), are uncommon and unrealistic. The weights in this set contain values such as \(28048\%\) and \(-16623\%\). Thus, the investor would take extreme leverage and extreme, levered short positions in specific assets.

Fig. 3
figure 3

Cumulative performance of the tangency portfolio from 28.07.2016 to 31.12.2018, using stocks from SDAX, alternative assets, commodities and cryptocurrencies, demonstrating the outlying behavior of this strategy

Conclusion

The process of asset allocation and portfolio analysis can usually be divided into three parts. The first one is the parameter and input estimation, the second one the allocation of wealth and the third one risk management. The focus of our investigation is the allocation and optimization; however, the other two aspects are tackled as well.

We investigate the performance of standard asset-allocation models based on historical prices and trading volumes of 6 CCs, 70 stocks diversified along geographical and size dimensions, and 13 commodities ranging from traditional to more exotic investments. We extend the analysis by applying different estimators of the input parameters for every portfolio allocation rule. We assess out-of-sample performance with five metrics. Furthermore, we explicitly incorporate transaction costs to challenge the considered investment strategies realistically, as an additional robustness check of our findings.

The empirical results demonstrate that the CVaR strategies outperform the other considered rules. This finding holds for the CVaR-LIBRO combinations as well, even though the incorporation of liquidity constraints reduces investors’ gains. The robustness check with the diversified stock data confirms this result. Six CVaR variations outperform the naive benchmark rule, underscoring the importance of using the actual return distribution and of bounding the weights of less liquid assets. The only performance metric on which CVaR or LIBRO-CVaR is not among the winning strategies is the turnover, where the geometric mean maximization has the advantage, besides the Naive portfolio.

The additional analysis of performance with trading costs included demonstrates that the CVaR and LIBRO-CVaR strategies still perform well. However, increasing these trading costs up to 2% makes the investment gains vanish, and the active portfolio rebalancing then makes the CVaR methods perform worse than the Naive portfolio or the geometric mean maximization rule.

The MPT strategies do not perform well, which is already commonly known, see DeMiguel et al. (2009). The CVaR and the LIBRO-CVaR, by contrast, do not rely on such assumptions about distributions and moments and therefore cope well with this lack of information.

Furthermore, we explore the performance of the portfolios with respect to the parameter estimators. We cannot claim that any of the three estimators analyzed has a significant advantage over the other two. The (GM) shows the most stable performance across the different measurements (mean, median, geometric mean), while the (BS) portfolios seem to have the worst investment results. Nonetheless, a clear relationship between performance and estimator cannot be established.

This paper’s contribution is a detailed evaluation of several asset allocation methods in the context of new digital assets such as CCs and of different parameter estimators. This research can be developed further in several directions. For example, the impact of individual asset classes could be investigated in more detail, that is, comparing how portfolios perform against each other when one asset class is dropped from the optimization. Furthermore, more detailed studies of parameter estimation should be conducted to improve portfolio performance. For instance, Jacquier et al. (2003) suggest a weighted average of the geometric and arithmetic mean. Also, robust parameters such as the median and median-based dispersion measures could be considered instead of the first two moments. Finally, we suggest in-depth research on the properties of CVaR portfolios to assess their performance against their main competitors in this paper, namely the Naive portfolio and the unconstrained geometric mean maximization.

As a follow-up extension of the current research, one could consider an even broader investment universe for portfolio optimization. Bonds and several alternative assets such as real estate, art, wine, musical instruments or toys could be included. The latter are especially fascinating, as a recent study showed that Lego has, at least in part, attractive properties for a portfolio: good returns compared to stock markets and, simultaneously, low correlation with them, see Dobrynskaya and Kishilova (2018). Furthermore, financial instruments such as options, warrants and futures could be considered, too.