Selection of investment portfolios with social responsibility: a multiobjective model and a Tabu search method

In this study, a model for the selection of investment portfolios is proposed with three objectives. In addition to the traditional objectives of maximizing profitability and minimizing risk, the maximization of social responsibility is also considered. Moreover, to control transaction costs, a limit is placed on the number of assets that can be selected. To the best of our knowledge, this specific model has not been considered in the literature to date. The model is NP-hard, and therefore only very small instances can be solved exactly. This paper proposes a method based on tabu search and multiobjective adaptive memory programming (MOAMP) strategies. With this method, it is possible to obtain sets of nondominated solutions in short computational times. To check the performance of our method, it is compared with adaptations of the nondominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA-II) and multiobjective particle swarm optimization (MOPSO). The results of different computational experiments show that our tabu search-MOAMP method performs best. The quality of the solution sets obtained and the speed of execution mean that our tabu search-MOAMP method can be used as a tool for financial assessment and analysis (including online services). As the examples in this work show, this tool can take into account the social concerns of many clients and their overall risk profile (very conservative, conservative, moderate, or fearless). This approach is also in line with current legal regulations that oblige financial advisors to take the client profile into account to provide greater protection and good financial advice.


Motivation
Social responsibility is increasingly being taken into account in portfolio selection. Indeed, investors are increasingly aware of issues such as environmental sustainability, human rights, inclusive labor practices, fair trade, cooperation, and social improvement in general. On the other hand, current European regulations, including the MIFID (Markets in Financial Instruments Directive), require financial institutions and financial advisors to carry out pretests on clients to identify their profiles (conservative, audacious, moderate, etc.) and to take these profiles into account when advising them on portfolio alternatives. Therefore, it is important for these entities and/or advisors to have adequate tools to help choose the portfolios they propose to their clients, and equally important for these tools to be fast and available online.
This paper proposes a method that can serve as a basis for these tools to help propose portfolios to clients. Specifically, this method generates sets of nondominated solutions for a portfolio selection model. In this model social responsibility is considered a third objective alongside the classic objectives of expected return and risk. Additionally, different constraints (such as cardinality constraints and the minimum and maximum amounts to invest in each asset) are included since they incorporate important advantages as explained in detail below. Once the set of efficient solutions (portfolios) has been obtained, the most suitable solution is selected according to the investor's preferences.
The proposed method is based on tabu search and MOAMP strategies. Different computer tests with data from the IBEX-35 (Spain), EUROSTOXX-50, DAX-30 and DOW JONES markets show that the proposed method produces high-quality sets of solutions when compared with other strategies for multiobjective optimization, such as the well-known evolutionary algorithms NSGA-II, SPEA-II and MOPSO. Moreover, our tabu search-MOAMP method obtains these solution sets in real time (in only a few seconds), which allows it to be used as an online consulting tool.

Related literature
An investment portfolio is defined as a set of financial assets into which the total investment can be divided. Its construction is equivalent to conducting an acceptable selection of the investment assets that compose it, as well as determining the percentage allotted to each of these assets. The origin of investment portfolio construction and administrative techniques can be found in the theory of portfolio selection [51], in which the earliest portfolio selection models are defined and analyzed, considering the classic objectives of profit (or return) maximization and risk reduction.
Many other works have contributed to the improvement and development of investment portfolio construction and administrative techniques, such as The Separation Theorem of Tobin [68], The Market Model developed by Sharpe [62], The Capital Asset Pricing Model (CAPM) [63], and Arbitrage Pricing Theory (APT) [59]. An analysis of the theoretical framework of portfolio management may be seen in greater depth in Suárez [65].
In any case, all of these mathematical models ignore other realistic constraints, which makes them inefficient in real-life applications. Thus, studies considering practical constraints in real-life financial markets began to appear. Usually, portfolio managers want to restrict the number of assets in the portfolio (the "cardinality constraint") to a fixed integer bound [49,61,74,75]. In addition, lower bounds on the amount invested in each asset are considered [61]. The aim of these constraints is to control transaction costs. On the other hand, cardinality can also be considered an objective of portfolio optimization, as in Anagnostopoulos and Mamanis [3].
Setting a maximum number of assets favors the supervision, diversification, and control of transaction costs. In addition, portfolios are then not formed by a large number of assets and, as a result, by an extremely high number of combinations [64]. Cardinality constraints play an important role as a regularization term in the optimization problem, allowing for the identification of portfolios that are robust with respect to small variations in the input data (i.e., the expected returns on the individual assets and the correlations among the asset returns [23]). In addition to controlling robustness, another advantage of imposing cardinality constraints is that they shorten runtime: although the cardinality constraint transforms the optimization problem into a mixed-integer program, the search space is reduced at the same time [15,40,78]. Another advantage of considering cardinality constraints is related to index tracking, since the index tracking model normally includes almost all available assets in the market. This leads to large transaction costs and a portfolio that is very difficult to manage because of its diversity [31,38]. A tracking portfolio with fewer assets can avoid small fractional holdings and reduce transaction costs compared with a fund that purchases all of the stocks that make up the index. A tracking portfolio with a cardinality constraint can also simplify the complexity of asset management and reduce administrative overhead and administration costs. In addition, small portfolios seem to reduce estimation errors for variances and covariances, leading to better out-of-sample performance [1,9,13]. Other papers, such as Chen et al. [16,17] and Chen and Xu [18], consider cardinality constraints to solve investment portfolio selection problems by using uncertain variables and modifying the initial mathematical model with uncertainty or fuzzy set theories. Finally, Utz et al.
[71] and Liagkouras and Metaxiotis [47] state that there are both practical and regulatory reasons that justify the imposition of lower and upper limits on the number of assets that compose a portfolio. The lower limit to the number of assets held in a portfolio is usually imposed to avoid excessive exposure to the idiosyncrasies of any particular asset, even though the portfolio's overall risk may appear acceptable. On the other hand, the upper limit to the number of assets held in a portfolio is imposed to avoid excessive administrative and monitoring costs.
Cardinality constraints turn the convex nature of the Markowitz problem into a nonlinear combinatorial problem [53]. Such constraints better capture the real-world trading system but make the problem more difficult to solve with exact methods. Therefore, given the computational difficulty of tackling the problem exactly, using heuristics in these cases is imperative [77]. The first works that employed metaheuristic techniques in portfolio optimization problems were Perold [58] and Takehara [66]. Chang et al. [14] proposed three metaheuristic algorithms based on genetic algorithms, tabu search, and simulated annealing. Following the work of Chang et al. [14], papers using heuristic algorithms can be divided into two classes: single-objective and multiobjective optimization algorithms. Focusing on single-objective optimization, there are genetic algorithms [72], memetic algorithms [60], tabu search [72], simulated annealing [26], particle swarm [27], and artificial bee colonies [70]. For multiobjective optimization, the works of Liagkouras and Metaxiotis (2015), Macedo et al. [50], and Kar et al. [45] must be highlighted.
In addition, metaheuristic algorithms have been applied to other fields of finance. For example, they have been used in the prediction of business insolvency [57], in the assessment of credit lines [2], and in financial planning [73].
In addition to all of the above, corporate social responsibility (CSR) has become a new objective to maximize. Investors' prioritization of ethical, social, and environmental aspects as decision criteria has given rise to new financial products that comply with the requirements of social responsibility. A socially responsible investment allows investors to integrate their personal values and social concerns with their purely financial objectives. Therefore, CSR is another objective that must be satisfied to make an appropriate selection of an investment portfolio [54]. Furthermore, two important surveys with a vast amount of theoretical research on portfolio selection with social responsibility or with ESG (environment, social, and governance) criteria can be found in Friede et al. [29] and Zopounidis et al. [81].
Faced with this situation, a real model for portfolio selection is proposed in this work wherein cardinality restrictions are included and CSR is considered a third objective in addition to the classic objectives of profitability and risk. To the best of our knowledge, there are no models with these specific characteristics in the literature that has been reviewed. Nevertheless, we have found models with a certain similarity: for example, those that consider CSR as a restriction and establish different levels for compliance [7,24], those that focus on designing CSR composite indicators [12,69], those that use CSR as an objective but treat risk as a restriction rather than an objective [4,39], and those that consider the three objectives indicated but not the cardinality constraint [41,71]. In the latter work, these objectives are aggregated into a single objective function. Table 1 summarizes the differences between our approach and these previous approaches.
In this paper, the problem of portfolio selection is analyzed, considering social responsibility as a third objective in addition to the classic objectives of risk and return. Furthermore, due to the advantages mentioned above, this model contemplates constraints on cardinality and minimum investment per asset. To our knowledge, this specific model has not been previously discussed in the literature.

Contribution
The main contributions of this work are as follows:
- Analysis of a portfolio selection model that considers three objectives: along with the classic objectives of profitability and risk, the maximization of social responsibility is considered. The model also includes constraints (such as cardinality constraints and minimum and maximum amounts to invest in each asset) that incorporate important advantages. Computationally, the resulting model is a complex (NP-hard) problem.
- Design and development of a method to generate a set of nondominated solutions for this problem. This method is based on the tabu search and MOAMP strategies and has the following characteristics: a) it generates better quality solution sets than other strategies, such as the well-known NSGA-II, SPEA-II and MOPSO algorithms; b) these solution sets are obtained using very little computation time.
These characteristics allow the method to be used as a basis for tools that help choose a good portfolio according to the characteristics and profile of the investor (avid, conservative, socially concerned, etc.). Some examples of this use are shown in Section 6. These tools can be used by the investor himself or by the financial institutions and banks that advise their clients. This is in line with European regulations (such as MIFID) that oblige financial institutions to take into account the profile of their clients when recommending asset portfolios. In addition, the short time taken to obtain the solution sets allows them to be used online.

Table 1. Main differences between previous approaches and the proposed approach

[24]; [7]: They consider CSR as a restriction and establish different levels for compliance. In our approach, CSR is considered an objective.
[69]; [12]: They use aggregate models to define the objective function, and CSR is designed as a composite indicator. We use a multiobjective programming model to find the set of efficient solutions (also known as the Pareto front), and CSR is obtained from the S&P Global ESG Score, which is prepared by the S&P Global Company.
[4]; [39]: CSR is used as an objective, but risk is considered a restriction. In our approach, both CSR and risk are objectives.
[41]; [71]: They consider the three objectives (return, risk, CSR) but not the cardinality constraint. We consider the three objectives (return, risk, CSR) and the cardinality constraint.
In short, both the model analyzed in this paper and the method designed and developed for its resolution take into account 1) the growing importance of social responsibility; 2) the convenience of financial institutions having advisory tools for their customers that take their profile into account. This convenience is motivated by European regulations.
The rest of this work is structured in the following way: in Section 2, the notation is established, and the problem is modeled; Section 3 is devoted to describing our MOAMP method as well as its main procedures; in Section 4, the adaptations of the NSGA-II, SPEA-II and MOPSO algorithms are described; in Section 5, computational experiments with real data are described and the results are evaluated with different metrics; in Section 6, some examples are described to demonstrate how to use the method as a tool for financial assessment while taking into account client profiles and their social concerns. Finally, the conclusions are presented in Section 7.

Problem description
As stated previously, the problem of selecting an investment portfolio is analyzed. In this work, three objectives are considered: profit maximization, minimization of risk, and maximization of social responsibility. Additionally, in an attempt to control transaction costs, a limit on the number of assets to be selected is imposed. The problem can be mathematically formalized in the following way:

maximize f_1(w) = Σ_{i ∈ V} R_i · w_i (1)
minimize f_2(w) = w^T A w (2)
maximize f_3(w) = Σ_{i ∈ V} RS_i · w_i (3)

subject to:

Σ_{i ∈ V} z_i ≤ k_max (4)
Σ_{i ∈ V} w_i = 1 (5)
w_i ≥ Q_min · z_i, ∀i ∈ V (6)
w_i ≤ Q_max, ∀i ∈ V (7)
z_i ∈ {0, 1}, ∀i ∈ V (8)
w_i ≥ 0, ∀i ∈ V (9)

where w = (w_1, w_2, …, w_n)^T.

In the previous formulation, the following notation is used for the input:

n: number of available assets (the first asset is referred to as 1, the second as 2, and so on)
V: set of assets, in other words V = {1, 2, …, n}
R_i: expected profitability/return of asset i
A: variance/covariance matrix of asset returns
RS_i: social responsibility of asset i
Q_min, Q_max: minimum and maximum proportion of the budget to invest in each selected asset
k_max: maximum number of selected assets

Section 5 explains how the values of the inputs used in this work are obtained. In addition, the following decision variables are defined:

w_i: proportion of the budget invested in asset i, ∀i ∈ V
z_i: binary variable that takes the value 1 if asset i is selected and 0 otherwise, ∀i ∈ V; it must hold that z_i = 1 ⇔ w_i > 0

Expressions (1), (2), and (3) correspond to the three objectives of the model: maximizing the expected return, minimizing the variance, and maximizing social responsibility. Constraint (4) refers to the maximum number of assets that may make up a portfolio ("cardinality"). Constraint (5) refers to the total quantity to invest (in other words, the budgetary restriction). Constraint (6) obliges an investment of at least Q_min monetary units in an asset if it is selected. Constraint (7) refers to the maximum quantity to invest in an asset (Q_max). In addition, constraints (6) and (8), together with the condition z_i = 1 ⇔ w_i > 0, link the continuous variables w_i with the binary variables z_i. Objective functions (1) and (2) correspond to the expected return and risk in the classic Markowitz model. On the other hand, objective function (3), related to social responsibility, uses the ESG scores/measures provided by S&P Global (more details in Section 5). As stated earlier, the cardinality constraint (4) and the minimum investment constraint (6) aim to control transaction costs.
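For concreteness, the three objective functions and the main feasibility conditions of the model can be sketched in plain Python (the data below are hypothetical, and asset indices start at 0 rather than 1):

```python
def evaluate(w, R, A, RS):
    """Return (f1, f2, f3): expected return, variance (risk), social responsibility."""
    n = len(w)
    f1 = sum(R[i] * w[i] for i in range(n))                              # objective (1)
    f2 = sum(w[i] * A[i][j] * w[j] for i in range(n) for j in range(n))  # objective (2)
    f3 = sum(RS[i] * w[i] for i in range(n))                             # objective (3)
    return f1, f2, f3


def feasible(w, q_min, q_max, k_max, eps=1e-9):
    """Check the cardinality (4), budget (5) and box (6)-(7) constraints."""
    selected = [wi for wi in w if wi > eps]          # assets with z_i = 1
    if len(selected) > k_max:                        # constraint (4)
        return False
    if abs(sum(w) - 1.0) > eps:                      # constraint (5)
        return False
    # constraints (6) and (7) for the selected assets
    return all(q_min - eps <= wi <= q_max + eps for wi in selected)
```

For instance, with w = [0.6, 0.4, 0.0], R = [0.05, 0.08, 0.03], A = 0.1·I and RS = [70, 50, 90], the sketch returns f1 = 0.062, f2 = 0.052 and f3 = 62.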
Regarding the use of the variance as a measure of risk, it should be noted that this measure is sensitive to perturbations in the input data [5,19]. For this reason, more sophisticated risk measures, such as value-at-risk (VaR) and conditional value-at-risk (CVaR), have been used. Applications of these risk measures can be found in Lotfi and Zenios [48]. However, when the data used for estimation are long time series, such as the data used in this paper, the effect of the disturbances is smaller. In fact, the variance is still used today, as described in Zhang et al. [77].
Each solution is defined by the values of the decision variables w_i, ∀i ∈ V (which implicitly determine the values of the variables z_i). Moreover, dominance relations between solutions are used, as is usual in the multiobjective approach. Given a solution w, we denote by f_1(w) the value of the objective function that measures the expected return (1), by f_2(w) the value of the objective function that measures risk (2), and by f_3(w) the value of the objective function that measures social responsibility (3). Let w and w′ be two feasible solutions; w is said to dominate w′ if f_1(w) ≥ f_1(w′), f_2(w) ≤ f_2(w′), and f_3(w) ≥ f_3(w′), with at least one of these inequalities being strict. A solution is said to be efficient if there is no other solution that dominates it. The problem that we propose consists of finding the set of efficient solutions (also known as the Pareto front) or, in this case, an approximation to this set.
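The dominance relation can be expressed directly in code; a minimal sketch, where objective vectors are tuples (f_1, f_2, f_3) with f_1 and f_3 maximized and f_2 minimized:

```python
def dominates(fw, fv):
    """True if the solution with objectives fw dominates the one with fv:
    fw is at least as good in every objective and strictly better in one."""
    no_worse = fw[0] >= fv[0] and fw[1] <= fv[1] and fw[2] >= fv[2]
    strictly = fw[0] > fv[0] or fw[1] < fv[1] or fw[2] > fv[2]
    return no_worse and strictly
```

Note that identical objective vectors do not dominate each other, so both corresponding solutions can coexist in a nondominated set.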
Let:
f_r^min: minimum value known for the r-th objective function (r = 1, 2, 3) in the set of efficient solutions
f_r^max: maximum value known for the r-th objective function (r = 1, 2, 3) in the set of efficient solutions

Solution method
As commented on in the introduction, the model analyzed in this work is an NP-hard problem and therefore can only be solved exactly for small instances within reasonable calculation times. Therefore, a heuristic method is proposed in this work with the objective of quickly obtaining good approximations to the set of efficient solutions for instances of different sizes. Specifically, the method proposed to solve our model is an adaptation of the MOAMP strategy for multiobjective problems. This strategy was initially proposed by Caballero et al. [10], and its first practical application may be found in Caballero et al. [11]. Further applications can be found in García et al. [32], Pacheco et al. [56], Gómez et al. (2015), and Martínez-Puras and Pacheco [52]. These papers describe in detail the basic principles on which the MOAMP strategy is based.
It should be noted that MOAMP is a strategy, that is, a conceptual structure that defines what to do in general but not the specific implementations in each case. For example, in some works, tabu search is used as a "search engine" [36,52,56]; in other works, a variable neighborhood search is used [32]. In contrast, Molina et al. [55] use scatter search. Some of the differences in our adaptation of MOAMP compared to previous adaptations are the following: 1) previous adaptations deal with logistic, location and routing problems, some very similar to each other, but all different from our problem. Therefore, construction procedures, neighborhood movements, etc. that have been designed are logically different; 2) through various experiments we have determined that obtaining optimal or near-optimal solutions to the single-objective problems in the initial steps has an important positive effect on overall performance. These solutions become high-quality "anchors" for approximation of the efficient frontier. For this reason, we have implemented specific constructive procedures to obtain good solutions to the three initial objectives (steps 2-4 pseudocode 1). 3) A fine neighborhood has been added, which increases the size of the set of nondominated solutions (steps 13-14 pseudocode 1), as shown in Section 5.
Next, we provide a high-level description of our MOAMP approach, a four-phase method summarized in pseudocode 1. The output of the method is an approximation of the efficient frontier. This approximation (SetND) is composed of the nondominated solutions found during the process. Phase 1 starts by setting SetND to the empty set (step 1). Then, the ReturnGenerator procedure is executed to obtain a solution with the maximum f_1 value (step 2). The RiskGenerator procedure is executed (step 3) with the aim of finding a solution with the minimum risk (i.e., with the minimum f_2 value). In the same way, the SocialGenerator procedure is executed (step 4) to obtain a solution with maximum social responsibility (i.e., with the maximum f_3 value). We denote the solutions obtained in steps 2, 3, and 4 by w^1, w^2, and w^3, respectively. A set of tabu search executions (steps 5 and 6) is launched in an attempt to link all the pairs of solutions in {w^1, w^2, w^3}. The aim of these executions is to update SetND with all the solutions visited (step 7).
Hence, when Phase 1 terminates, f_1^max is the objective function value of the solution obtained in step 2, f_2^min is the objective function value of the best solution found in this phase considering objective f_2, and f_3^max is the objective function value of the solution obtained in step 4.

Pseudocode 1. Adaptation of MOAMP method
Phase 2 consists of a series of executions of tabu search. Each execution uses F_λ, a scalarized function that combines the three objectives (normalized with the bounds f_r^min and f_r^max) according to a weight vector λ, as the objective function (step 9), but with different λ vectors. The value λ_r of each component is generated randomly with a uniform distribution U(0, 1) (step 8). The first execution uses the last solution obtained in Phase 1 as the initial solution. Subsequent executions start with the solution reached in the previous execution. Phase 2 ends when maxPhase consecutive executions pass without any changes in SetND. The purpose of Phase 3 is to intensify the search by exploring the neighborhood of each of the solutions in SetND. The first iteration of step 11 explores the neighborhoods of all the solutions in SetND. The new solutions that have entered SetND are then identified (step 12). In the following iterations of step 11, only the neighborhoods of these new solutions are explored. This phase ends when an iteration produces no new solutions in SetND. The mechanism for generating neighborhoods in step 11 is the same as that used in the TabuSearch procedure. Finally, Phase 4 works very similarly to Phase 3 but with a different mechanism for generating neighborhoods (step 13), so that SetND becomes the final output of the whole four-phase MOAMP adaptation. It should be noted that, to avoid an excessive number of solutions, the size of SetND is limited to 10,000. This limit is taken into account in the update of SetND.
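The update of SetND that all phases rely on can be sketched as follows. This is a simplified pure-Python version in which the 10,000-solution cap is enforced by refusing further insertions once it is reached (one possible interpretation; the text does not specify the exact capping rule):

```python
def dominates(fw, fv):
    """fw dominates fv: no worse in every objective (f1, f3 max; f2 min), better in one."""
    no_worse = fw[0] >= fv[0] and fw[1] <= fv[1] and fw[2] >= fv[2]
    strictly = fw[0] > fv[0] or fw[1] < fv[1] or fw[2] > fv[2]
    return no_worse and strictly


def update_setnd(setnd, cand, cap=10000):
    """Try to insert cand = (objective_vector, solution) into the archive SetND."""
    f = cand[0]
    # discard the candidate if it is dominated by, or duplicates, an archived solution
    if any(dominates(g[0], f) or g[0] == f for g in setnd):
        return setnd
    # remove archived solutions that the candidate dominates
    kept = [g for g in setnd if not dominates(f, g[0])]
    if len(kept) < cap:                  # size cap on SetND
        kept.append(cand)
    return kept
```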
The constructive procedures ReturnGenerator and SocialGenerator are very simple and make use of the linear nature of the objective functions and of the fact that the constraints allow the whole available budget to be assigned to the best assets. These procedures determine the assets with the best unitary value in the corresponding function and assign them all of the available budget (QR) while meeting the constraints determined by Q_min and Q_max. As stated earlier, each solution w is determined by the values w_i obtained ∀i ∈ V. The procedure SocialGenerator is described in an analogous way, changing R_i to RS_i. In the case of the RiskGenerator procedure, the design and description are more complicated because the risk function f_2 is quadratic. As commented in Section 2, the values w_i determine the corresponding values z_i (z_i = 0 ⇔ w_i = 0). Therefore, every change in the values of w_i that occurs in the different procedures or methods described in this work implies that the corresponding z_i values are also updated, although this is sometimes not explicitly indicated in the description of these procedures. In the two following subsections, the procedures RiskGenerator and TabuSearch, as well as the "fine" neighborhood used in step 13, are described in detail.
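The greedy logic behind ReturnGenerator can be sketched as follows (the function name and representation are ours; the sketch ignores the corner case in which the leftover budget falls strictly between 0 and Q_min, which the actual procedure must handle):

```python
def return_generator(R, q_min, q_max, k_max):
    """Greedily assign the budget (1.0) to the assets with the highest R_i,
    giving each at most q_max and at least q_min, up to k_max assets."""
    w = {}                # asset index -> proportion of budget (only selected assets)
    remaining = 1.0
    for i in sorted(range(len(R)), key=lambda i: -R[i]):   # best unitary value first
        if remaining <= 1e-12 or len(w) == k_max:
            break
        alloc = min(q_max, remaining)
        if alloc < q_min:            # cannot meet the minimum investment; stop (simplified)
            break
        w[i] = alloc
        remaining -= alloc
    return w
```

Replacing R_i by RS_i gives the analogous sketch of SocialGenerator.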

RiskGenerator procedure
To describe the RiskGenerator procedure, we use some definitions and notations. For each set S ⊂ V, with |S| = p, and for each pair of values q and q_1 such that 0 ≤ q ≤ 1/p and q < q_1 ≤ 1, the following quadratic problem is defined:

minimize d^T B d
subject to:
Σ_{i=1}^{p} d_i = 1
q ≤ d_i ≤ q_1, i = 1, …, p

where d = (d_1, d_2, …, d_p)^T and B is the variance/covariance matrix of the returns of the assets of S. We denote this problem by P(S, q, q_1). For each S ⊂ V such that |S| ≤ k_max, we denote the value of the optimal solution of the problem P(S, Q_min, Q_max) by h(S). Each P(S, q, q_1) problem has an optimal solution that can be found by quadratic programming methods, as described in Appendix 1. The RiskGenerator procedure is outlined in pseudocode 3.

Pseudocode 3. RiskGenerator procedure

In step 1, the problem P(V, 0, Q_max) is solved. This problem is defined by function (2), i.e., f_2, and constraints (5), (7), and (9) of Section 2. In step 2, it is checked whether the optimal solution d* fulfills constraints (4) and (6). If it does, d* is the optimum for f_2 considering all the constraints of Section 2, (4)-(9); we set w^2 = d*, and it is not necessary to continue. If d* does not fulfill constraints (4) and (6), then in step 3, a set S ⊂ V is built from set C. This set S is the initial set of an iterative procedure that searches for the set S with the best value h(S). In each iteration, three types of movements are considered: adding an element (steps 6 and 7), removing an element (steps 8 and 9), and exchanging an inside element for an outside element (steps 10 and 11). The process ends when an iteration produces no improvement in the best h(S) value found. From the final set S obtained and the optimal solution of P(S, Q_min, Q_max), the solution w^2 is formed (step 13).
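For intuition, when the box constraints q ≤ d_i ≤ q_1 are not binding, P(S, q, q_1) reduces to minimizing d^T B d subject to Σ d_i = 1, whose closed-form optimum is d* = B^{-1}1 / (1^T B^{-1}1). A NumPy sketch of this simplification (the actual procedure must also enforce the bounds, e.g., via the quadratic programming methods of Appendix 1):

```python
import numpy as np


def min_variance_unbounded(B):
    """Closed-form minimizer of d^T B d subject to sum(d) == 1 (box bounds ignored)."""
    ones = np.ones(B.shape[0])
    x = np.linalg.solve(B, ones)     # x = B^{-1} 1
    return x / ones.dot(x)           # normalize so the weights sum to 1
```

For a diagonal B, this assigns each asset a weight inversely proportional to its variance.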

TabuSearch procedure and "fine" neighborhood
The TabuSearch procedure is based on the tabu search strategy [33,34]. This is a very well-known metaheuristic strategy with thousands of applications in the literature. The principles behind this strategy are explained in great detail in Glover et al. [35]. Given a function G to optimize (either one of the initial objective functions f_1, f_2, and f_3 or the mixed function F_λ), the TabuSearch procedure is described in pseudocode 4. It must be noted that in this description, it is assumed, without loss of generality, that the objective is to minimize G.

Pseudocode 4.TabuSearch procedure
In pseudocode 4, N(w) is the set of neighboring solutions of w, i.e., those feasible solutions that can be reached by (neighbor) movements from w. The current solution (w) and the best solution found (w*) may be different since the procedure permits movements to worse solutions. To avoid cycles, some movements are declared "tabu". However, a tabu movement may be permitted if it results in a better solution than w*. The set of neighboring solutions of w, N(w), is determined by four types of movements (A, B, C, D), described as follows:

- A) Go down/Go up: There are two assets i and i′ in the portfolio (i.e., Q_min ≤ w_i, w_{i′} ≤ Q_max). The movement consists of reducing w_i as much as possible but without eliminating i from the portfolio and increasing w_{i′} by the same amount. More formally, the movement consists of executing the following three steps: 1) Inc = min(w_i − Q_min, Q_max − w_{i′}); 2) w_{i′} = w_{i′} + Inc; and 3) w_i = w_i − Inc.
- B) Remove/Go up: There are two assets i and i′ in the portfolio; the movement consists of deleting i from the portfolio (so that w_i = 0) and increasing w_{i′} accordingly. In other words, the movement consists of implementing the following two steps: 1) w_{i′} = w_{i′} + w_i and 2) w_i = 0. The following initial condition must be checked to ensure the feasibility of the movement: w_{i′} + w_i ≤ Q_max.
- C) Go down/Introduce: There are two assets i and i′, with i in the portfolio and i′ outside of it. The movement consists of decreasing w_i as much as possible but without eliminating i from the portfolio (that is, making w_i = Q_min) and introducing i′ into the portfolio. It therefore consists of taking the following two steps: 1) w_{i′} = w_i − Q_min and 2) w_i = Q_min. The following initial conditions have to be checked to ensure the feasibility of the movement: 1) w_i − Q_min ≥ Q_min (or w_i ≥ 2 · Q_min), because w_i − Q_min is the value that w_{i′} will take, and this value should be no less than Q_min; and 2) the number of assets in the current solution w should be strictly less than k_max, as another asset is added in the movement without removing any other.
- D) Remove/Introduce: There are two assets i and i′, with i in the portfolio and i′ outside of it. The movement consists of removing i from the portfolio (w_i = 0) and introducing i′ into the portfolio. It therefore consists of implementing the following two steps: 1) w_{i′} = w_i and 2) w_i = 0.
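The first two movement types can be sketched as follows (a dict-of-weights representation is assumed for illustration; moves C and D are analogous):

```python
def move_a(w, i, j, q_min, q_max):
    """Type A (Go down/Go up): shift as much budget as possible from asset i
    to asset j, keeping i in the portfolio and j within its upper bound."""
    inc = min(w[i] - q_min, q_max - w[j])
    v = dict(w)
    v[i] -= inc
    v[j] += inc
    return v


def move_b(w, i, j, q_max):
    """Type B (Remove/Go up): delete asset i and add its weight to asset j."""
    assert w[j] + w[i] <= q_max          # feasibility condition of the move
    v = dict(w)
    v[j] += v.pop(i)                     # i leaves the portfolio
    return v
```

Note that both moves preserve the budget constraint (the weights still sum to the same total).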
As noted earlier, to avoid cycles and returns to recently visited solutions, some movements are declared tabu during a series of iterations. The different types of movements that have been defined entail two simultaneous actions: the reduction of some value w_i (including the removal of assets from the solution) and the increase of some value w_{i′} (including the introduction of assets). Therefore, movements that imply an increase in some value w_{i′} that has recently decreased, as well as a reduction in some value w_i that has recently increased, are declared tabu so as not to return to recent solutions. The following tabu vectors are defined to check the tabu status of the different movements: tabuUp(i), the last iteration in which the value of w_i decreased, and tabuDown(i), the last iteration in which the value of w_i increased. In the same way, the parameters tenureUp and tenureDown are defined, respectively, as the number of iterations during which it is tabu to increase a value of w_i that has just decreased and the number of iterations during which it is tabu to decrease a value of w_i that has just increased. Specifically, increasing w_i is tabu if and only if iter ≤ tabuUp(i) + tenureUp, where iter is the iteration counter. Analogously, reducing a value w_i is tabu if and only if iter ≤ tabuDown(i) + tenureDown.
Thus, a movement is tabu if it implies a "tabu" increase, a "tabu" reduction, or both. The tabu vectors are initialized in step 2 of the procedure, as shown in pseudocode 4, with the values tabuUp(i) = −tenureUp and tabuDown(i) = −tenureDown, for i = 1, …, n. In the same way, when a movement is executed, the values of the corresponding tabu vectors are updated (step 6).
In Phase III, the same neighborhoods N(w) defined above are used. Finally, the "fine" neighborhood used in Phase IV (step 13) of the MOAMP algorithm is based on a variant of the type A movements, described as follows: for every pair of assets i and i′ in the portfolio, the movement reduces w_i by an amount determined by a parameter IncFin, without eliminating i from the portfolio, and increases w_i′ by the same amount. More formally, let Inc = min{IncFin, w_i − Q_min, Q_max − w_i′}; the movement then consists of the two following steps: 1) w_i′ = w_i′ + Inc and 2) w_i = w_i − Inc. The "fine" neighborhood therefore depends on the parameter IncFin, which has been set here to IncFin = 0.01. In Appendix 2, the computational complexity of the entire MOAMP method is analyzed.
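The fine movement can be sketched as follows (a minimal sketch with names of our choosing):

```python
def fine_move(w_i, w_ip, q_min, q_max, inc_fin=0.01):
    """Transfer Inc = min(IncFin, w_i - Q_min, Q_max - w_i') from asset i to i'.
    The cap keeps i in the portfolio (w_i >= Q_min) and i' below Q_max."""
    inc = min(inc_fin, w_i - q_min, q_max - w_ip)
    return w_i - inc, w_ip + inc

a, b = fine_move(0.30, 0.20, q_min=0.05, q_max=0.40)
assert abs(a - 0.29) < 1e-12 and abs(b - 0.21) < 1e-12      # full IncFin moved
a, b = fine_move(0.055, 0.20, q_min=0.05, q_max=0.40)       # capped by w_i - Q_min
assert abs(a - 0.05) < 1e-12 and abs(b - 0.205) < 1e-12
```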

Adaptation of evolutionary strategies
To check the performance of our MOAMP-tabu search method, adaptations of three evolutionary strategies for multiobjective optimization problems are designed: NSGA-II, SPEA-II and MOPSO. These adaptations are designed ad hoc for this specific problem. Subsequently, computational tests are carried out to compare the results obtained by our method and by these evolutionary methods. The three strategies, NSGA-II [22], SPEA-II [80] and MOPSO [20], are recognized and accepted in the field of multiobjective optimization, and a large number of applications can be found in the related literature; among the most recent, we highlight the work of Jalili et al. [43] and Ghanbari and Alaei [30]. Although these strategies were conceived as general-purpose methods, our adaptations incorporate elements specific to this problem. Other interesting and recent evolutionary methods for multiobjective problems can be found in Gong et al. [37], Hu et al. [42], Zhang et al. [76], Jiang and Chen [44] and Tian et al. [67].
Basically, NSGA-II is a genetic algorithm, and it therefore operates on a set of solutions called the "population"; the solutions are also called "individuals". It is an iterative process with the following scheme: initially, a population is created; subsequently, in each iteration ("generation"), a set of parents is selected from the population by tournament; from this set of parents, a population of descendants ("offspring") of the same size as the current population is obtained (by applying the crossover and mutation operators); then, both populations are joined and their individuals are sorted; finally, the best half is selected, giving rise to a new population. The process is repeated in each iteration, and the algorithm ends when a stop criterion is reached (a maximum number of iterations without change in the set of nondominated solutions, a maximum calculation time, etc.). Moreover, NSGA-II makes use of the concepts of "nondominance" fronts and the "crowding distance" of each solution: between two solutions, the one belonging to the better dominance front is preferred; if both belong to the same front, the one with the greater crowding distance is preferred. This sorting is known as the "partial order" and is denoted by <_C. The partial order is used in the different selection processes. The concepts of nondominance fronts, crowding distance, and partial order are explained in Deb et al. [22].
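The partial order <_C can be sketched as a comparison of (front rank, crowding distance) pairs (a minimal sketch; names are ours):

```python
def crowded_better(a, b):
    """Partial order <_C of NSGA-II: a and b are (front_rank, crowding_distance)
    pairs. A lower front rank wins; within the same front, a larger crowding
    distance wins (Deb et al.)."""
    if a[0] != b[0]:
        return a[0] < b[0]
    return a[1] > b[1]

assert crowded_better((1, 0.2), (2, 9.9))   # better front beats any crowding
assert crowded_better((1, 0.8), (1, 0.2))   # same front: larger crowding wins
assert not crowded_better((3, 0.5), (1, 0.1))
```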
In what follows, the procedure used to generate initial random solutions (ConstructiveRandom) is described, as well as the crossover and mutation operators. After the crossover and mutation operations, the resulting solution may not be feasible. Specifically, the crossover and mutation operators can produce four cases of infeasibility, labeled a) to d) below (the last of which is ∑ w_i < 1). The procedure SolutionToFeasibility was designed to recover feasibility. It is executed after the mutation operator and successively corrects, where necessary, each of these four cases. This procedure is described in pseudocode 6.
Pseudocode 6. Procedure SolutionToFeasibility. If case a) applies in step 1, procedure ConstructiveRandom is executed to generate a feasible solution, and procedure SolutionToFeasibility ends. If case b) applies, steps 2-3 correct it, although case c) could apply after this correction. Case c) is corrected in steps 4-5, and finally case d) is corrected in steps 6-7. Observe, specifically for cases c) and d), that the redefinition of the values w_i in steps 4-5 (6-7) ensures, on the one hand, that ∑_{i=1}^{n} w_i = 1 and, on the other hand, that Q_min ≤ w_i ≤ Q_max. We consider a simple example to illustrate the functioning of steps 4-5: n = k_max = 2, Q_min = 0.2, Q_max = 0.8, w_1 = 0.7 and w_2 = 0.4. In this example, case c) applies because ∑_{i=1}^{n} w_i = 1.1 > 1.

Let N be the size of the initial population. The adaptation of the NSGA-II algorithm to our problem is described in pseudocode 7.
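One plausible rescaling rule for case c), consistent with the worked example above (the paper's exact rule is the one in steps 4-5 of pseudocode 6, which we do not reproduce), is to remove the excess proportionally to each weight's slack above Q_min:

```python
def repair_excess(w, q_min):
    """Reduce weights proportionally to their slack above Q_min until they
    sum to 1. A hypothetical repair for the case sum(w) > 1; it preserves
    the lower bound Q_min by construction."""
    excess = sum(w) - 1.0
    slack = [wi - q_min for wi in w]
    total = sum(slack)
    return [wi - excess * s / total for wi, s in zip(w, slack)]

# Worked example from the text: Q_min = 0.2, Q_max = 0.8, w = (0.7, 0.4).
w = repair_excess([0.7, 0.4], q_min=0.2)
assert abs(sum(w) - 1.0) < 1e-12
assert all(0.2 <= wi <= 0.8 for wi in w)
```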
Pseudocode 7. NSGA-II Adaptation. In this NSGA-II adaptation, a set of solutions H_t with N elements is generated as follows: 2·N elements of P_t are randomly selected with replacement; N pairs are formed with these 2·N elements; the best element of each pair according to the partial order <_C is chosen; with the N elements chosen, N/2 pairs of parents are formed; finally, for each pair of parents, two "offspring" solutions are obtained through the crossover operator. In this way, the N elements of H_t are obtained.
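The construction of the parents of H_t can be sketched as follows (the `better` comparator stands for the partial order <_C; names are ours):

```python
import random

def tournament_parents(population, better):
    """Sample 2N individuals with replacement and keep the winner of each of
    the N pairwise tournaments, as in the H_t construction described above."""
    n = len(population)
    sampled = [random.choice(population) for _ in range(2 * n)]
    return [a if better(a, b) else b
            for a, b in zip(sampled[::2], sampled[1::2])]

random.seed(0)
pop = [(1, 0.5), (2, 0.1), (3, 0.9)]        # (front, crowding) stand-ins
better = lambda a, b: a[0] < b[0] or (a[0] == b[0] and a[1] > b[1])
winners = tournament_parents(pop, better)
assert len(winners) == len(pop)             # exactly N tournament winners
assert all(w in pop for w in winners)
```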
SPEA-II is also, basically, a genetic algorithm, with a structure similar to that of NSGA-II. In addition to the population P_t of solutions, a set P′_t of "elite" solutions is used, and both sets interact with each other. Let N be the size of the current population P_t, and let N′ be the size of the elite set P′_t. The adaptation of the SPEA-II algorithm to our problem is described in pseudocode 8.

Pseudocode 8.SPEA − II Adaptation
In this SPEA-II adaptation, the fitness function in step 4 is calculated, for each solution w, as FIT(w) = R(w) + 1/(σ_w + 2), where: σ_w is the distance, in the space of the objective functions, from w to its closest solution (other than itself) in P_t ∪ P′_t; R(w) is the sum of the strengths S(w″) of the solutions w″ ∈ P_t ∪ P′_t that dominate w; and the strength S(w″) is the number of solutions of P_t ∪ P′_t that are dominated by w″.
The fitness function in step 8 is calculated in the same way but replacing P_t ∪ P′_t with R_t. The solutions with the lowest FIT value are chosen; therefore, the least dominated solutions and those most distant from other solutions are favored. In the case of a tie, the solution with the greatest distance to its second closest solution is chosen. The set of solutions H_t (step 6) is generated in a similar way to step 6 of the NSGA-II adaptation, but using the fitness value (calculated in step 4) to choose the best element of each pair.
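A sketch of this type of fitness computation, assuming all objectives are expressed in minimization form and using the standard SPEA-II density term (the paper's exact adaptation may differ in details), is:

```python
def dominates(a, b):
    """Pareto dominance for minimization objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def spea2_style_fitness(points):
    """SPEA-II-style fitness: raw fitness = sum of the strengths of the
    dominators, plus a density term 1/(sigma + 2). Lower is better."""
    strength = [sum(dominates(p, q) for q in points) for p in points]
    fit = []
    for i, p in enumerate(points):
        raw = sum(strength[j] for j, q in enumerate(points) if dominates(q, p))
        sigma = min((sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                     for j, q in enumerate(points) if j != i), default=0.0)
        fit.append(raw + 1.0 / (sigma + 2.0))
    return fit

fit = spea2_style_fitness([(1, 1), (2, 2), (0, 3)])
assert fit[0] < fit[1]     # (1,1) dominates (2,2), so it gets a lower fitness
```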
Finally, the notation used to describe the MOPSO algorithm is slightly different from that of the previous algorithms. Let P be the current population of solutions and N its size. The solutions of P are denoted by w_j, j = 1..N. In addition, we define an auxiliary set of solutions pw_j, j = 1..N, and a set of vectors v_j ∈ R^n, j = 1..N. The adaptation of the MOPSO algorithm to our problem is described in pseudocode 9.
Pseudocode 9. MOPSO Adaptation. The output of the algorithm is the set ND of nondominated solutions. In particle swarm optimization methods [25], the solutions w_j are referred to as "particles", each solution pw_j as the "best individual position" of particle w_j, each vector v_j as the "velocity" of particle w_j, and the solution pg as the "best global position".
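For reference, a textbook particle swarm update reads as follows; this is only the generic scheme [25], not the paper's exact MOPSO adaptation, which must additionally repair infeasible weights and manage the nondominated archive:

```python
import random

def pso_step(w, v, pw, pg, omega=0.7, c1=1.5, c2=1.5):
    """Generic PSO update: the velocity blends inertia, attraction to the
    particle's best position pw, and attraction to the global best pg."""
    new_v = [omega * vi
             + c1 * random.random() * (pwi - wi)
             + c2 * random.random() * (pgi - wi)
             for wi, vi, pwi, pgi in zip(w, v, pw, pg)]
    new_w = [wi + nvi for wi, nvi in zip(w, new_v)]
    return new_w, new_v

random.seed(1)
w, v = pso_step([0.5, 0.5], [0.0, 0.0], [0.6, 0.4], [0.7, 0.3])
assert len(w) == 2 and len(v) == 2
```

In the portfolio setting, each updated position would then be passed through a feasibility repair of the kind described for the evolutionary adaptations.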

Computational experiments
This section is devoted to the comparison of the MOAMP, NSGA-II, SPEA-II and MOPSO methods through different computational tests, described below. The procedures were coded and compiled in Object Pascal using RAD Studio (version 10 and later). All the tests were performed on an Intel Core i7-7700 computer with a 4.20 GHz processor. Four sets of 25 instances were defined. The first set corresponds to real data from the Spanish stock market (Ibex-35), the second to the European stock market (EuroStoxx-50), the third to the German stock market (Dax-30) and the fourth to the American stock market (Dow-Jones). Asset prices for each market can be found on the investing.com website; in our case, we took the values from April 1 to June 30, 2021. From the table of quotations of each market, the variance-covariance matrix of that market (matrix A) and the expected return of each asset, R_i (the difference between the last value and the first, divided by the first), are obtained. Regarding social responsibility, the RS_i values were obtained from the website https://www.spglobal.com/esg/scores/ (ESG scores corresponding to June 2021). These indexes, prepared by S&P Global, are widely recognized and used in the financial and academic fields. For more information on the S&P Global ESG Score, please refer to the S&P ESG Index Series Methodology (https://www.spglobal.com/spdji/en/documents/methodologies/methodology-sp-esg-index-series.pdf). Companies that have not been assigned an ESG index have not been considered. Thus, for each database, we have considered the number of assets shown in Table 2.
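With toy price data, the computation of R_i and of matrix A described above can be sketched as follows (the array values are illustrative, not market data):

```python
import numpy as np

# prices[t, i]: closing price of asset i on day t (toy data standing in for
# the investing.com quotations of April-June 2021).
prices = np.array([[10.0, 20.0],
                   [11.0, 19.0],
                   [12.0, 21.0]])

# Expected return as in the text: (last value - first value) / first value.
R = (prices[-1] - prices[0]) / prices[0]

# Variance-covariance matrix A, here estimated from the daily returns.
daily = np.diff(prices, axis=0) / prices[:-1]
A = np.cov(daily, rowvar=False)

assert np.allclose(R, [0.2, 0.05])
assert A.shape == (2, 2)
assert np.allclose(A, A.T)        # a covariance matrix is symmetric
```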
Therefore, instances of the same database use the same values of n, R_i and RS_i, i ∈ {1..n}, and the same matrix A. The value of Q_max is fixed at 0.7. Within each group, the instances are distinguished by the values of the parameters Q_min and k_max. Specifically, 25 combinations of the parameters Q_min and k_max have been defined, the same for all the sets. These combinations are shown in Table 3.
Preliminary fine-tuning experiments resulted in the following parameter settings for the MOAMP approach: maxPhase = 10, tenureDown = ⌊n/2⌋, tenureUp = ⌊n/2⌋, and maxTS = 10·n. On the other hand, previous works [36,56] recommended the use of pmut = 0.05. Next, the comparison between the results of MOAMP and those of the other three methods (NSGA-II, SPEA-II and MOPSO) is presented. Initially, the MOAMP procedure is executed for each instance, and the number of nondominated solutions (|Set_ND|) and the computational time used (CTime) are recorded. Then, the NSGA-II, SPEA-II and MOPSO procedures are executed, taking the population size N as 2·|Set_ND| (NSGA-II, SPEA-II and MOPSO) and N′ = N (SPEA-II). The stop criterion of the NSGA-II, SPEA-II and MOPSO methods is reaching a computation time of 10·CTime. As a preliminary result, Table 4 and Fig. 1 show, for each of the four data sets, the average number of solutions obtained in each phase of the MOAMP method as well as by the NSGA-II, SPEA-II and MOPSO adaptations.
From Table 4 and Fig. 1, it can be seen that phase III and, mainly, phase IV greatly enlarge the set of solutions obtained in phases I and II: in all databases, phase IV multiplies the number of solutions of phase II by nearly 150. Furthermore, the MOAMP method obtains approximately 20-35 times more solutions than the NSGA-II, SPEA-II and MOPSO methods. Both Table 4 and Fig. 1 clearly show the contribution of phase IV and the fine neighborhood to obtaining dense nondominated solution sets. This increases the information provided to the decision maker and thus their decision-making capacity.
Three metrics will be used to compare these results. The first is based on cardinality and was proposed in Zitzler and Thiele [79]. In this particular case, cardinality is defined as the number of solutions contributed by each method to the approximation of the efficient frontier (E*). We build this approximation E* by combining all the solutions obtained by the methods under comparison and identifying the distinct nondominated solutions.
The second metric is based on the concept of coverage of two sets [79]: it measures the percentage of one set that is dominated by another set. Specifically, let R(X, Y) be the ratio of solutions found by method Y that are dominated by solutions found by method X. The larger the values of df(X), the greater the dominance of the solutions obtained by method X. Finally, the third metric is the hypervolume [79]. For every set of solutions, this indicator measures the ratio of the volume of the region of the objective function space that is dominated by this set. This ratio is calculated with respect to the hypercube whose extremes are the ideal and anti-ideal points, which are obtained by taking, respectively, the minimum and maximum values of each objective function over all the solutions in the sets under consideration. More details on how to calculate the hypervolume can be found in Beume et al. [6] and Bradstreet [8].
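For instance, the coverage ratio R(X, Y) can be sketched as follows for minimization objectives (function names are ours):

```python
def dominates(a, b):
    """Pareto dominance, all objectives minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(X, Y):
    """R(X, Y): fraction of the solutions of Y that are dominated by at least
    one solution of X."""
    return sum(any(dominates(x, y) for x in X) for y in Y) / len(Y)

X = [(1, 1), (0, 2)]
Y = [(2, 2), (0, 3), (3, 0)]
assert coverage(X, Y) == 2 / 3     # (2,2) and (0,3) are dominated, (3,0) is not
```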
The results for each instance are shown in Appendix 3 (Tables 6, 7, 8 and 9). The means of these metrics for each database are shown in Table 5. Columns 2 to 6 refer to the cardinality metric: the second column indicates the number of solutions of the approximation of the efficient frontier (|E*|), as previously defined, and columns 3 to 6 indicate the percentage of elements of E* found by each of the four methods; columns 7 to 10 refer to the coverage metric; finally, columns 11 to 14 indicate the hypervolume of the sets obtained by each method. Table 5 shows that the MOAMP method performed better than the NSGA-II, SPEA-II and MOPSO methods for all considered instances and metrics in all databases. The percentage contribution of solutions obtained by MOAMP to the construction of E* is much higher, indicating that MOAMP is better with regard to cardinality. The coverage value (df) is always higher for MOAMP than for NSGA-II, SPEA-II and MOPSO, showing greater dominance of the set obtained by MOAMP. The hypervolume generated by the set obtained by MOAMP was also greater than that generated by the sets obtained by NSGA-II, SPEA-II and MOPSO. The differences in hypervolume are somewhat smaller in the Dow-Jones database; however, this metric is usually the one that shows the smallest differences between methods, as seen in Colmenar et al. [21]. In summary, in all four data sets the MOAMP method achieves better results, even though NSGA-II, SPEA-II and MOPSO use 10 times more computing time. Figure 2 shows the sets of points obtained by the MOAMP, NSGA-II, SPEA-II and MOPSO methods in instance 12 of the IBEX-35 data. For a better understanding of the graphics, only the nondominated solutions with respect to f_1 and f_2 appear. The expected return (f_1) is shown on the x-axis, and the risk (f_2) is shown on the y-axis.
The set of solutions obtained by MOAMP dominates most of those obtained by the other methods. Moreover, the curve obtained by MOAMP reaches solutions with lower risk, while the solution with the highest profitability is the same for all four methods. This example illustrates the better properties of the sets of solutions obtained by MOAMP.

Financial assessment tool
In brief, in the different cases under analysis the MOAMP method achieves a broad set of nondominated solutions within a short calculation time (less than 25 s in all instances). In addition, these sets have good characteristics compared with those obtained by other multiobjective optimization methods.
This means that the MOAMP method may be used as a calculation engine in financial assessment tools, even for online use. To illustrate its use as a financial assessment tool, some examples will be given. With the set of nondominated solutions that have been generated and with the help of simple filtering and sorting procedures, graphics can be rapidly generated on the basis of which the financial advisor can recommend the portfolio solution according to the preferences of the client.
Let us take the case of a client who approaches a financial entity or a bank to make an investment. If this client expresses great social concern, and therefore has a special interest in obtaining a portfolio with a high value of social responsibility, he or she could consider only, for example, the 5% of solutions with the highest values of this objective function. Therefore, the solutions obtained by MOAMP that satisfy this requirement can be filtered, and subsequently, among these, the solutions that are nondominated with respect to the other two objectives may be selected. Figure 3 shows an example with the solutions selected after this process is applied to the set of solutions corresponding to instance 12 of the IBEX-35 data. The expected return (f_1) is shown on the x-axis, and the risk (f_2) on the y-axis. On the basis of Fig. 3, the entity can recommend, according to the client profile (risky, conservative, etc.), the most suitable portfolio (solution).
We assume there is a second client with a very conservative profile who is interested in portfolios with low risks. This client could consider only, for example, the 5% of solutions with the lowest values of risk. On the basis of the set of solutions obtained by MOAMP, through a process analogous to the earlier example, a subset of solutions may be selected with the required risk values that are nondominated with regard to the other two objectives. Figure 4 shows an example with the selected solutions after this process. The value of the expected return (f 1 ) is shown on the x-axis, and the value of social responsibility (f 3 ) is shown on the y-axis. On the basis of this figure, the entity can recommend, in response to the social concerns of the client, the most appropriate portfolio (solution).
Finally, let us suppose a third case: a client who seeks the portfolios with the highest return (bold profile). This client could consider only, for example, the 5% of solutions with the best values of expected return. Following the same steps as in the earlier examples, a subset of solutions may be found, which is shown in Fig. 5. According to the social concerns of the client, the appropriate portfolio (solution) is then selected. In short, the MOAMP method allows us to obtain a broad set of quality solutions. We consider this very positive, because this set provides the client and/or advisor with more possibilities. In effect, taking into account the preferences and the profile of each client, a smaller set of solutions may be filtered and selected, on the basis of which an appropriate choice may be made.
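The filter-then-select procedure used in the three examples above can be sketched as follows (a hypothetical helper with field names of our choosing; the paper only requires simple filtering and sorting tools):

```python
def dominates(a, b):
    """Pareto dominance, all components minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def filter_for_client(solutions, key, frac=0.05):
    """Keep the top `frac` of solutions by `key` (e.g. social responsibility),
    then keep those nondominated in the two remaining objectives."""
    k = max(1, int(round(frac * len(solutions))))
    short = sorted(solutions, key=key)[:k]
    rest = lambda s: (-s["return"], s["risk"])   # maximize return, minimize risk
    return [s for s in short
            if not any(dominates(rest(t), rest(s)) for t in short if t is not s)]

sols = [{"return": r, "risk": k, "sr": sr}
        for r, k, sr in [(0.10, 0.02, 9), (0.08, 0.01, 8), (0.05, 0.03, 7)]]
picked = filter_for_client(sols, key=lambda s: -s["sr"], frac=0.7)
assert all(s["sr"] >= 8 for s in picked)     # only the socially best survive
```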

In this sense, it should be taken into account that, increasingly, regulations govern the making of recommendations on investment portfolios. For example, the MIFID 1 regulations were enacted for European Union countries on 1st November 2007. Under these regulations, to complete an investment proposal it is obligatory to determine and take into account the global profile of the investor; specifically, their knowledge, experience, financial situation, and investment objectives are assessed. In this way, the client is classified as "Very conservative", "Conservative", "Moderate", "Bold or fearless", etc. If the information necessary to determine the client profile is lacking, the regulations prohibit making any investment proposal whatsoever.

Conclusions
The selection of an investment portfolio is an increasingly complex process. In addition to the expected return and risk, other criteria are increasingly considered when assessing different investment possibilities. For many investors, although return is still important it is no longer the principal criterion. Social, ethical, and environmental concerns are increasingly taken into account. Therefore, in recent times, social responsibility has assumed greater importance. Various works that have appeared in recent years, as cited in the introduction, have demonstrated this. Nevertheless, in the majority of these works social responsibility is treated as a constraint (establishing minimum levels or previously selecting assets already known to be socially responsible).
In this work, unlike earlier models, a multiobjective portfolio selection model is analyzed in which social responsibility is introduced as a third objective, in addition to the expected return and risk. Moreover, the model includes other realistic constraints, such as fixing a maximum number of assets to select and a minimum amount to invest in each selected asset; these constraints attempt to avoid high transaction costs. The resulting model has evident real applications. By considering social responsibility as an objective rather than a constraint, the model is more flexible than earlier models that included social responsibility.
Specifically, we designed an algorithm for this model that is capable of generating sets of nondominated solutions with good properties with short calculation times (only a few seconds). Therefore, our algorithm could be used as the engine of financial analysis and advisory tools, including real-time and online tools.
Furthermore, our algorithm takes into account the social concerns of each client and their overall risk profile (very conservative, conservative, moderate, or fearless). Indeed, after obtaining the set of nondominated solutions, the preferences, profile, and/or social concerns of the client can be used to make the appropriate selection with simple filtering and sorting tools. This approach is also in line with recent legal regulations (e.g., MIFID regulations) that oblige financial advisors to take the client profile into account to provide greater protection and propose good financial advice.
Specifically, the algorithm proposed in this paper is based on the MOAMP strategy. This algorithm uses a tabu search procedure, applicable with different objective functions, as its main tool. In addition, to validate the algorithm design, we have compared its results with those of adaptations of the well-known NSGA-II, SPEA-II and MOPSO algorithms for multiobjective problems. The different tests show that the sets of solutions obtained with the MOAMP-based method were better than those obtained with the NSGA-II-, SPEA-II- and MOPSO-based methods, for all the metrics under consideration. The MOAMP-based method also uses very little calculation time, allowing it to be used as an online financial assessment tool. Examples of how to give advice in a simple manner have been shown, taking into account the preferences of each client.
Finally, it should be noted that the MOAMP strategy is not a "plug-and-play" algorithm that can be easily adapted to different problems. MOAMP sets out basic ideas from which an ad hoc development must be devised for each model or problem. This ad hoc development requires a greater design and implementation effort in comparison with "plug-and-play" strategies. However, at least in this case, this effort yields better performance.
Appendix 1: Solving the quadratic problem P(S, q, q_1) of Section 3.2

In the problem defined by (11)-(14) in Section 3.1, constraint (12) can be redefined as an inequality because, B being positive definite, constraint (13) is always saturated at the optimum. On the other hand, multiplying the objective function by 1/2 and changing variables t_i = d_i − q, i = 1..p, the problem is defined as: subject to: where t = (t_1, t_2, …, t_p)^T, 1 = (1, 1, …, 1)^T ∈ R^p, q_0 = 1 − p·q and q_2 = q_1 − q. The Lagrangian function is defined as follows: where u = (u_1, u_2, …, u_p)^T ∈ R^p, e = (e_1, e_2, …, e_p)^T ∈ R^p and v ∈ R are the penalty parameters. Since the objective function is convex (because the matrix B is positive definite) and the constraints are linear, the Kuhn-Tucker conditions are necessary and sufficient for optimality. To find a solution that fulfills these conditions, the Frank-Wolfe method [28] proposes solving a linear programming problem with complementarity conditions, where the constraints are the Kuhn-Tucker conditions. These constraints are as follows: so that g = (g_1, g_2, …, g_p)^T ∈ R^p. Expressions (16), (21) and (22) can be rewritten as follows: The aim is therefore to find a solution that verifies (24)-(26) and constraints (17)-(19) (the complementarity conditions), with all the variables involved being positive (constraints (20) and (23)). Expressions (24)-(26) consist of 2·p+1 equality constraints and 4·p+2 variables. Since the variables u, x and g are multiplied by the identity matrix, they can form the initial base in the Simplex table. However, since among the constant terms there is at least one negative value (−q_0), we must add an artificial variable y. Specifically, we start with the nonfeasible basic solution in which the basic variables u, x and g take the values of the constant terms and the rest of the variables are equal to 0. Next, the artificial variable is introduced into the base so that all the variables of the new base are positive.
It then continues to pivot, as in the Simplex method, until the artificial variable y leaves the base. To choose the variable that enters at each step, the complementarity rule is followed: the variable that enters the base is the complement of the one eliminated in the previous step; if u_i left, its complement t_i enters, and vice versa; if v left, its complement x enters, and vice versa; if e_i left, its complement g_i enters, and vice versa. This method is an adaptation of Lemke's method, or "complementary pivot" method [46]. The following is an example of how it works. Assume the following data: p = 2, q = 0.4, q_1 = 0.55 and a given matrix B. The initial table is as follows: The artificial variable y is entered into the base with the value y = 0.2. To do this, we change the sign of the third row: As x has left the base, in the following iteration its complement v enters.
As u_1 has left the base, in the following iteration its complement t_1 enters.
As g_1 has left the base, in the following iteration its complement e_1 enters.
As u_2 has left the base, in the following iteration its complement t_2 enters.
The artificial variable y leaves the base, and therefore the method ends. The solution obtained is t_1 = 0.15 and t_2 = 0.05; therefore, d_1 = 0.55 and d_2 = 0.45.

Appendix 2: Analysis of the complexity of the MOAMP method
The MOAMP method starts with the ReturnGenerator, RiskGenerator, and SocialGenerator procedures. The ReturnGenerator and SocialGenerator procedures require θ(k_max · n) calculations, as can easily be seen in pseudocode 2. The RiskGenerator procedure is a local search procedure, as seen in pseudocode 3. In each iteration (steps 5-12), θ(k_max · (n − k_max)) neighboring solutions (corresponding to the sets S′) are analyzed. To determine the objective function of these solutions, the corresponding P(S′, Q_min, Q_max) problems are solved. As shown in Appendix 1, each such problem is solved in θ(k_max^3) steps. Therefore, each iteration of the RiskGenerator procedure requires θ(k_max^4 · n) calculations.

The TabuSearch procedure (pseudocode 4) is an iterative neighborhood search procedure. Each neighborhood consists of θ(k_max · (n − k_max)) solutions. The evaluation of the return and social responsibility objectives requires θ(k_max) calculations, while the evaluation of the risk objective requires θ(k_max^2) calculations. Therefore, each iteration requires θ(k_max^3 · n) calculations, and the complexity of the TabuSearch procedure is θ(k_max^3 · n · maxTS). The number of times the TabuSearch procedure is executed is θ(maxPhase).

In Phases III and IV of the MOAMP procedure, the same neighboring solutions are explored as in the TabuSearch procedure, so every iteration of these phases requires θ(k_max^3 · n · |SetND|) calculations. As indicated in Section 3, in this implementation the size of SetND has been limited to 10,000. On the other hand, maxTS = 10·n and maxPhase = 10. Thus, the overall complexity of our MOAMP method is θ(k_max^3 · n^2).
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work was partially supported by FEDER funds and the Spanish Ministry of Economy and Competitiveness (Projects PID2019-104263RB-C44 and PDC2021-121021-C22), the Regional Government of "Castilla y León", Spain (Projects BU329U14 and BU071G19), and the Regional Government of "Castilla y León" and FEDER funds (Projects BU062U16, COV2000375 and BU056P20).

Declarations
Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.
Conflict of interest All authors declare that they have no conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.