Evaluation of the absolute forms of cost functions in optimization using a novel evolutionary algorithm


Abstract

Many ordinary approaches in optimization are mathematically based. Due to the limitations of such methods, optimization problems have commonly been defined in single-objective and quadratic forms, leading to solutions that are non-optimal with respect to the main design requirements. In order to address these issues, this paper presents a new type of genetic programming (GP) called “multi-objective archived-based genetic programming (MAGP)”. The absolute forms of cost functions, which are unsolvable using the conventional methods, can be assessed through the suggested algorithm. It is shown that, in addition to the possibility of obtaining optimal single-objective solutions, many fruitful non-dominant solutions on the Pareto fronts are acquired, providing the designer with a set of optimal analytical solutions that can be selected depending on the design requirements. For example, in the case studies examined in this paper, around 30,000 and 14,000 non-dominant solutions were obtained for the linear and nonlinear problems, respectively, indicating the high performance of the proposed MAGP algorithm in acquiring non-dominant solutions in optimization problems. Generally, it is observed that obtaining and comparing optimal solutions in the quadratic form is fundamentally unreliable and, as a consequence, an optimal control problem must be analyzed with its indices in the absolute form.


Data availability

Some or all data, models, or codes that support the findings of this study are available from the corresponding author upon reasonable request.

Abbreviations

GP:

Genetic programming

MAGP:

Multi-objective archived-based genetic programming

CE:

Control effort

Trxi :

Trajectory

MR:

Measurement region: The whole region between an arbitrary time function and the horizontal axis

NS:

Non-dominant solutions: A set of solutions (control signals) whose objective function vectors are non-dominant in a particular form

QNS:

Quadratic non-dominant solutions

ANS:

Absolute non-dominant solutions

CV:

Cross-validation: Calculation of a set of NS in the other form

QCV:

Quadratic cross-validation: \({\text{CE}}_{\text{QCV}}= {\int }_{0}^{{t}_{f}}|{U}_{\text{QNS}}(t)|\text{d}t, {\text{Tr}}_{{\text{QCV}}_{{x}_{i}}}={\int }_{0}^{{t}_{f}}|{{x}_{i}}_{\text{QNS}}(t)|\text{d}t\)

ACV:

Absolute cross-validation: \({\text{CE}}_{\text{ACV}}= {\int }_{0}^{{t}_{f}}{({U}_{\text{ANS}}\left(t\right))}^{2}\text{d}t, {\text{Tr}}_{{\text{ACV}}_{{x}_{i}}}={\int }_{0}^{{t}_{f}}{({{x}_{i}}_{\text{ANS}}(t))}^{2}\text{d}t\)

TS:

Transported solutions: A set of NS with its CV

QTS:

Quadratic transported solutions: QTS = [QNS, QCV]

ATS:

Absolute transported solutions: ATS = [ANS, ACV]

References

  • Abbasi S, Choukolaei HA (2023) A systematic review of green supply chain network design literature focusing on carbon policy. Decis Anal J 100189

  • Abbasi S, Erdebilli B (2023) Green closed-loop supply chain networks’ response to various carbon policies during COVID-19. Sustainability 15(4):3677


  • Abbasi S, Daneshmand-Mehr M, Ghane Kanafi A (2021) The sustainable supply chain of CO2 emissions during the coronavirus disease (COVID-19) pandemic. J Ind Eng Int 17(4):83–108


  • Abbasi S, Daneshmand-Mehr M, Ghane Kanafi A (2022a) Designing sustainable recovery network of end-of-life product during the COVID-19 pandemic: a real and applied case study. Discrete Dyn Nature Soc 2022a

  • Abbasi S, Khalili HA, Daneshmand-Mehr M, Hajiaghaei-Keshteli M (2022b) Performance measurement of the sustainable supply chain during the COVID-19 pandemic: a real-life case study. Found Comput Decis Sci 47(4):327–358


  • Abbasi S, Daneshmand-Mehr M, Ghane Kanafi A (2023a) Green closed-loop supply chain network design during the coronavirus (COVID-19) pandemic: a case study in the Iranian Automotive Industry. Environ Model Assess 28(1):69–103


  • Abbasi S, Daneshmand-Mehr M, Ghane K (2023b) Designing a tri-objective, sustainable, closed-loop, and multi-echelon supply chain during the COVID-19 and lockdowns. Found Comput Decis Sci 48(1)

  • Abbasi S, Sıcakyüz Ç, Erdebilli B (2023c) Designing the home healthcare supply chain during a health crisis. J Eng Res 100098

  • Abdolazimi O, Esfandarani MS, Salehi M, Shishebori D (2020a) Robust design of a multi-objective closed-loop supply chain by integrating on-time delivery, cost, and environmental aspects, case study of a Tire Factory. J Clean Prod 264:121566


  • Abdolazimi O, Salehi Esfandarani M, Salehi M, Shishebori D (2020b) A comparison of solution methods for the multi-objective closed loop supply chains. Adv Ind Eng 54(1):75–98


  • Abdolazimi O, Esfandarani MS, Shishebori D (2021) Design of a supply chain network for determining the optimal number of items at the inventory groups based on ABC analysis: a comparison of exact and meta-heuristic methods. Neural Comput Appl 33:6641–6656


  • Abraham A, Jain L (2005) Evolutionary multiobjective optimization. Springer, London, pp 1–6


  • Ali MMA, Jamali A, Asgharnia A, Ansari R, Mallipeddi R (2022) Multi-objective Lyapunov-based controller design for nonlinear systems via genetic programming. Neural Comput Appl 1–13

  • Aliyu BK, Osheku CA, Adetoro LM, Funmilayo AA (2012) Optimal solution to matrix Riccati equation for Kalman filter implementation. Edited by Vasilios N. Katsikis, p 97

  • Ayala HVH, dos Santos Coelho L (2012) Tuning of PID controller based on a multiobjective genetic algorithm applied to a robotic manipulator. Expert Syst Appl 39(10):8968–8974


  • Bermudez A (2002) Some applications of optimal control theory of distributed systems. ESAIM Control Optim Calc Var 8:195–218


  • Bertsekas D (2012) Dynamic programming and optimal control (vol 1). Athena Scientific, Nashua

  • Bittanti S, Laub AJ, Willems JC (eds) (2012) The Riccati equation. Springer, Berlin


  • Brameier M, Banzhaf W (2001) A comparison of linear genetic programming and neural networks in medical data mining. IEEE Trans Evol Comput 5(1):17–26


  • Brusset X, Jebali A, La Torre D, Liuzzi D (2023) Production optimization in the time of pandemic: an SIS-based optimal control model with protection effort and cost minimization. Ann Oper Res 1–24

  • Burakov SV, Semenkin ES (2013) Solving variational and Cauchy problems with self-configuring genetic programming algorithm. Int J Innov Comput Appl 5(3):152–162


  • Cao J, Li L (2022) The control mode study of PPP project financing management information system. Soft Comput 26(16):7669–7675


  • Clarke F (2013) Functional analysis, calculus of variations and optimal control, vol 264. Springer, London, p 591


  • Coello CAC, Pulido GT (2001) A micro-genetic algorithm for multiobjective optimization. In: International conference on evolutionary multi-criterion optimization. Springer, Berlin, pp 126–140

  • Coello CAC, Zacatenco CSP (2005) Twenty years of evolutionary multi-objective optimization: a historical view of the field. CINVESTAV-IPN Evolutionary Computing Group

  • Chang YF, Lee TT (1986) Application of general orthogonal polynomials to the optimal control of time-varying linear systems. Int J Control 43(4):1283–1304


  • Cheng Y, Zhao L (2021) Dynamical behaviors and control measures of rumor-spreading model in consideration of the infected media and time delay. Inf Sci 564:237–253


  • Çimen T (2011) On the existence of solutions characterized by Riccati equations to infinite-time horizon nonlinear optimal control problems. IFAC Proc 44(1):9618–9626


  • Cui Y, Geng Z, Zhu Q, Han Y (2017) Multi-objective optimization methods and application in energy saving. Energy 125:681–704


  • Deng W, Yao R, Zhao H, Yang X, Li G (2019) A novel intelligent diagnosis method using optimal LS-SVM with improved PSO algorithm. Soft Comput 23:2445–2462


  • Dhiman G, Singh KK, Soni M, Nagar A, Dehghani M, Slowik A et al (2021) MOSOA: a new multi-objective seagull optimization algorithm. Expert Syst Appl 167:11415


  • Gholaminezhad I, Jamali A, Assimi H (2014) Automated synthesis of optimal controller using multi-objective genetic programming for two-mass-spring system. In: 2014 Second RSI/ISM international conference on robotics and mechatronics (ICRoM). IEEE, pp 041–046

  • Ghorbani MA, Khatibi R, Mehr AD, Asadi H (2018) Chaos-based multigene genetic programming: a new hybrid strategy for river flow forecasting. J Hydrol 562:455–467


  • Hsiao CH, Wang WJ (1999) Optimal control of linear time-varying systems via Haar wavelets. J Optim Theory Appl 103:641–655


  • Hull DG (2013) Optimal control theory for applications. Springer, Berlin


  • Hussain K, Mohd Salleh MN, Cheng S, Shi Y (2019) Metaheuristic research: a comprehensive survey. Artif Intell Rev 52:2191–2233


  • Hwang C, Chen MY (1985) Analysis and optimal control of time-varying linear systems via shifted Legendre polynomials. Int J Control 41(5):1317–1330


  • Jamali A, Ghamati M, Ahmadi B, Nariman-Zadeh N (2013) Probability of failure for uncertain control systems using neural networks and multi-objective uniform-diversity genetic algorithms (MUGA). Eng Appl Artif Intell 26(2):714–723


  • Jamali A, Mallipeddi R, Salehpour M, Bagheri A (2020) Multi-objective differential evolution algorithm with fuzzy inference-based adaptive mutation factor for Pareto optimum design of suspension system. Swarm Evol Comput 54:100666


  • Kalman RE (1960) Contributions to the theory of optimal control. Bol Soc Mat Mexicana 5(2):102–119


  • Kamalapurkar R, Walters P, Rosenfeld J, Dixon W (2018) Reinforcement learning for optimal feedback control. Springer, Berlin


  • Keller AA (2019) Multi-objective optimization in theory and practice II: metaheuristic algorithms. Bentham Science Publishers, Sharjah


  • Kirk DE (2004) Optimal control theory: an introduction. Courier Corporation, North Chelmsford


  • Konak A, Coit DW, Smith AE (2006) Multi-objective optimization using genetic algorithms: a tutorial. Reliab Eng Syst Saf 91(9):992–1007


  • Koza JR (1990a) Concept formation and decision tree induction using the genetic programming paradigm. In: International conference on parallel problem solving from nature. Springer, Berlin, pp 124–128

  • Koza JR (1990b) Genetic programming: a paradigm for genetically breeding populations of computer programs to solve problems, vol 34. Stanford University, Department of Computer Science, Stanford


  • Koza JR (2010) Human-competitive results produced by genetic programming. Genet Program Evolvable Mach 11:251–284


  • La Torre D, Kunze H, Ruiz-Galan M, Malik T, Marsiglio S (2015) Optimal control: theory and application to science, engineering, and social sciences. In: Abstract and applied analysis, vol 2015. Hindawi

  • Langdon DW, Amato MP, Boringa J, Brochet B, Foley F, Fredrikson S et al (2012) Recommendations for a brief international cognitive assessment for multiple sclerosis (BICAMS). Multiple Sclerosis J 18(6):891–898


  • Lewis FL, Vrabie D, Syrmos VL (2012) Optimal control. Wiley, New York


  • Li T, Xiao Y (2023) Optimal strategies for coordinating infection control and socio-economic activities. Math Comput Simul 207:533–555


  • Li R, Noack BR, Cordier L, Borée J, Harambat F (2017) Drag reduction of a car model by linear genetic programming control. Exp Fluids 58:1–20


  • Liberzon D (2011) Calculus of variations and optimal control theory: a concise introduction. Princeton University Press, Princeton


  • Maher RA, Mohamed MJ (2013) An enhanced genetic programming algorithm for optimal controller design

  • Maier HR, Razavi S, Kapelan Z, Matott LS, Kasprzyk J, Tolson BA (2019) Introductory overview: optimization using evolutionary algorithms and other metaheuristics. Environ Model Softw 114:195–213


  • Mezura-Montes E, Coello CAC (2011) Constraint-handling in nature-inspired numerical optimization: past, present and future. Swarm Evol Comput 1(4):173–194


  • Mohammadi A, Nariman-Zadeh N, Jamali A (2020) The archived-based genetic programming for optimal design of linear/non-linear controllers. Trans Inst Meas Control 42(8):1475–1491


  • Pizzuti C (2011) A multiobjective genetic algorithm to find communities in complex networks. IEEE Trans Evol Comput 16(3):418–430


  • Radhoush S, Samavat M, Vali MA (2014) Optimal control of linear time-varying systems using the Chebyshev wavelets (a comparative approach). Syst Sci Control Eng Open Access J 2(1):691–698


  • Sanaye S, Hajabdollahi H (2010) Thermal-economic multi-objective optimization of plate fin heat exchanger using genetic algorithm. Appl Energy 87(6):1893–1902


  • Sardahi Y (2016) Multi-objective optimal design of control systems. University of California, Merced


  • Sethi SP (2019) What is optimal control theory? Springer, Berlin, pp 1–26


  • Shao L, Liu L, Li X (2013) Feature learning for image classification via multiobjective genetic programming. IEEE Trans Neural Netw Learn Syst 25(7):1359–1371


  • Soldatenko S, Yusupov R (2017) On the application of optimal control theory to climate engineering. arXiv preprint arXiv:1709.05597

  • Tiwari S, Fadel G, Deb K (2011) AMGA2: improving the performance of the archive-based micro-genetic algorithm for multi-objective optimization. Eng Optim 43(4):377–401


  • Tu PN (2013) Introductory optimization dynamics: optimal control with economics and management science applications. Springer, Berlin


  • Tröltzsch F (2010) Optimal control of partial differential equations: theory, methods, and applications, vol 112. American Mathematical Soc, Providence


  • Türk S, Deveci M, Özcan E, Canıtez F, John R (2021) Interval type-2 fuzzy sets improved by simulated annealing for locating the electric charging stations. Inf Sci 547:641–666


  • Vanneschi L, Henriques R, Castelli M (2017) Multi-objective genetic algorithm with variable neighbourhood search for the electoral redistricting problem. Swarm Evol Comput 36:37–51


  • Yang Y, Nazir S, Khalil W (2022) A probabilistic approach toward evaluation of Internet rumor on COVID. Soft Comput 26(16):8077–8088


  • Zavala GR, Nebro AJ, Luna F, Coello Coello CA (2014) A survey of multi-objective metaheuristics applied to structural optimization. Struct Multidiscip Optim 49:537–558



Funding

No funding was received for conducting this study.

Author information

Contributions

AM contributed to conceptualization, numerical analysis, data analysis and writing the first draft. NN was involved in conceptualization, data analysis, supervision and revising the manuscript. MP contributed to conceptualization, data analysis, supervision and revising the manuscript. AJ was involved in conceptualization, data analysis, supervision and revising the manuscript.

Corresponding author

Correspondence to Ali Jamali.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Participants of the study were the authors; therefore, informed consent was not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix: Theoretical background

1.1 Optimal control

In an optimal control problem, designers try to find an optimal control u(t) for a control system \(\dot{X}\left(t\right)=f(X\left(t\right),u\left(t\right),t)\) which drives the system while simultaneously optimizing several objectives (such as energy usage and the convergence rate of the system state variables toward their respective final conditions). In other words, optimal control problems aim to find a control input signal u(t) which makes the system states follow a specific trend toward the desired final conditions and simultaneously causes a performance index (PI) to be minimized. The objective functions of energy consumption (control effort), the trajectory of state variables (state trajectories) and the desired final conditions of state variables constitute the PIs (commonly denoted as J) in optimal control problems.

Figure 14 shows the diagram of a control system for both classic and modern problems. As can be observed, the main focus of the benchmarks in this study is the structure of the input control signal (either \(u(s)\) or \(u(t)\)) to be implemented in any type of optimal control problem for both classic and modern control systems. It should be noted that for any closed-loop control system, the output of the controller (in other words, the input of the plant) is what is commonly examined in all optimal control problems.

Fig. 14: Diagram of a control system. a Classic control system, b modern control system

1.1.1 Quadratic Performance Index (QPI)

In optimal control studies, different cost functions, which are commonly combined in the quadratic form using proper coefficients, are typically called the quadratic performance index (QPI). Considering the state vector and input control, as \(X(t)_{n \times 1} = \left[ {x_{1} (t),x_{2} (t),...,x_{n} (t)} \right]^{T}\) and u(t), respectively, QPI is defined as follows:

$${J}_{\text{QPI}}=\frac{1}{2}X{\left({t}_{f}\right)}_{1\times n}^{T}\cdot {H}_{n\times n}\cdot X{\left({t}_{f}\right)}_{n\times 1}+\frac{1}{2}\underset{{t}_{0}}{\overset{{t}_{f}}{\int }}\left(X{\left(t\right)}_{1\times n}^{T}\cdot {Q\left(t\right)}_{n\times n}\cdot X{\left(t\right)}_{n\times 1}+u\left(t\right)\cdot R\left(t\right)\cdot u\left(t\right)\right)\text{d}t$$
(10)

where JQPI is the value of the QPI, R(t) and the positive semi-definite diagonal matrices Q(t) and H are the weighting coefficients of the QPI, and n is the number of system states. The first, second and third terms represent the desired final condition, the convergence rate of the state variables X(t) and the energy consumption of the control input u(t), respectively. It is also worth noting that the first matrix multiplication inside the integral yields a scalar (i.e., \({X(t)}_{1\times n}^{T}\cdot {Q\left(t\right)}_{n\times n}\cdot {X(t)}_{n\times 1}\) = scalar), just like the second term (i.e., \(u\left(t\right)\cdot R\left(t\right)\cdot u(t)\) = scalar), so that JQPI itself is a scalar.

1.1.2 Absolute Performance Index (API)

In a recent study conducted by Mohammadi et al. (2020), a new type of performance index, named the absolute performance index (API), was introduced; it combines the two optimal control objectives (control effort and state trajectory) in absolute form using appropriate weighting coefficients. For a given optimal control problem, the API is defined as follows:

$${J}_{\text{API}}={H}_{1\times n}^{T}.\left|X{\left({t}_{f}\right)}_{n\times 1}\right|+\underset{{t}_{0}}{\overset{{t}_{f}}{\int }}\left({Q}_{1\times n}^{T}\left(t\right). \left|X{\left(t\right)}_{n\times 1}\right|+R\left(t\right).\left|u\left(t\right)\right|\right)\text{d}t$$
(11)

where R(t) and the vectors H and Q(t) are the weighting coefficients of the API. Quite similarly to the previous section, the first, second and third terms represent the desired final condition, the convergence rate of the state variables X(t) and the energy consumption of the control input u(t), respectively. It has been shown that, for a unique design goal and a single-objective problem, the responses obtained with the absolute form are more desirable than the quadratic ones (Mohammadi et al. 2020).
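To make the practical difference between Eqs. (10) and (11) concrete, the following minimal Python sketch evaluates both indices for a hypothetical sampled trajectory of a scalar system; the time grid, signals and (scalar) weighting coefficients are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical sampled response of a scalar example system (illustrative only).
t = np.linspace(0.0, 5.0, 5001)      # time grid on [t0, tf]
dt = t[1] - t[0]
x = 10.0 * np.exp(-t)                # assumed state response x(t)
u = -2.0 * np.exp(-t)                # assumed control signal u(t)
q, r, h = 1.0, 1.0, 0.0              # assumed scalar weighting coefficients

# Quadratic performance index, Eq. (10): squaring magnifies values above one
# and shrinks values below one before they are integrated.
J_QPI = 0.5 * h * x[-1] ** 2 + 0.5 * np.sum(q * x ** 2 + r * u ** 2) * dt

# Absolute performance index, Eq. (11): only the sign is removed, so the
# measurement region keeps its original proportions.
J_API = h * abs(x[-1]) + np.sum(q * np.abs(x) + r * np.abs(u)) * dt

print(f"J_QPI = {J_QPI:.3f}, J_API = {J_API:.3f}")
```

Replacing the plain Riemann sums with a higher-order quadrature rule would change only the accuracy of the approximation, not the comparison between the two forms.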

1.1.3 Linear quadratic regulator (LQR) problem

A special case of the optimal control problem, which frequently arises for linear time-invariant (LTI) control systems, is the linear quadratic regulator (LQR) problem, in which all of the following conditions are satisfied:

$${t}_{0}=0,$$
(12a)
$$X{\left({t}_{f}\right)}_{N\times 1}\to 0,$$
(12b)
$$A, B, Q, R=\text{constant},$$
(12c)
$$\text{elements}\left(Q\right)\ge 0, R>0.$$
(12d)

Upon satisfying these conditions, an LQR problem could be defined as:

Find u*(t) minimizing a QPI:

$${J}_{\text{QPI}}=\frac{1}{2}\underset{0}{\overset{{t}_{f}}{\int }}\left(X{\left({t}\right)}_{1\times n}^{T}\cdot {Q}_{n\times n}\cdot X{(t)}_{n\times 1}+u\left(t\right)\cdot R\cdot u\left(t\right)\right)\text{d}t$$
(13)

Or find u*(t) minimizing an API:

$${J}_{\text{API}}=\underset{{t}_{0}}{\overset{{t}_{f}}{\int }}\left({Q}_{1\times n}^{T}. \left|X{\left(t\right)}_{n\times 1}\right|+R.\left|{u}^{*}\left(t\right)\right|\right)\text{d}t$$
(14)

Subject to the system equations:

$$\dot{X}\left(t\right)=A.X\left(t\right)+B.u\left(t\right)$$
(15)

It has been shown that in an optimal control problem, the API solutions are closer to the design requirements as compared to those of QPI (Mohammadi et al. 2020).

1.1.4 Control effort (CE)

Control effort (CE) is one of the main objectives that have been utilized since the emergence of the first optimal control problems. For an arbitrary optimal control problem, the CE index is a measure whose value represents a controller’s energy consumption while performing a specific task. In numerical terms, the whole area between a control signal u(t) and the time axis is indicative of the control effort. In order to estimate this area mathematically, CE is commonly defined (\(\triangleq \)) in the following quadratic form:

$$\text{{CE}}\triangleq \underset{{t}_{0}}{\overset{{t}_{f}}{\int }}{u}^{2}\left(t\right)\text{d}t$$
(16)

where u(t) is the system control input typically known as control.

1.1.5 State trajectory (Tr)

Similar to the previous section, the state trajectory (Tr) is another well-known objective in optimal control problems. For an arbitrary optimal control problem, the Tr index, typically known as the trajectory, is a measure whose value represents the convergence rate of a specific state (or states) toward its final desired condition(s). Indeed, the whole area between the state time response, x(t), and the final desired state condition is the state trajectory value, which is usually estimated over the interval [t0, tf] using the following equation:

$${Tr}_{{x}_{i}}=\underset{{t}_{0}}{\overset{{t}_{f}}{\int }}{({x}_{i}-{r}_{i})}^{2}\text{d}t$$
(17)

where xi(t) and ri are the time response and the final desired condition of the ith state, respectively.
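As a small worked example of Eqs. (16) and (17), the sketch below approximates CE and Tr numerically for an assumed control signal and state response; both signals and the time horizon are illustrative, not taken from the case studies.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 10001)    # time grid on [t0, tf]
dt = t[1] - t[0]
u = 3.0 * np.exp(-0.5 * t)           # assumed control signal u(t)
x_i = 10.0 * np.exp(-0.5 * t)        # assumed response of the i-th state
r_i = 0.0                            # desired final condition of that state

CE = np.sum(u ** 2) * dt             # Eq. (16): quadratic control effort
Tr = np.sum((x_i - r_i) ** 2) * dt   # Eq. (17): quadratic state trajectory

print(f"CE = {CE:.3f}, Tr = {Tr:.3f}")
```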

1.2 Multi-objective optimization

This study is based on the concepts of the Pareto front and non-dominant optimal solutions, whose credibility in multi-objective analysis has been established in previous optimization studies (Coello and Zacatenco 2005). Unlike single-objective problems, many optimal solutions are provided, from which the desired ones can then be selected on the Pareto front according to the design requirements. In other words, in the single-objective problem the weighting factors must be predefined, leading to several trial-and-error solving processes to find the desired solution. In the multi-objective analysis, however, the problem is solved and then, based on the design requirements, the best solutions are selected with just one execution of the evolutionary algorithm. The different concepts of multi-objective optimization are elaborated in the following sections.

1.2.1 Multi-objective optimization problems

The main procedure to solve a given multi-objective optimization problem can be described as follows:

Find the optimal design variable vector X* = [x1,x2,…,xn]T minimizing the objective function vector:

$$F=\left[{f}_{1}\left(X\right),{f}_{2}\left(X\right),\dots ,{f}_{m}\left(X\right)\right]$$
(18)

subject to problem constraints:

$${g}_{i}\left(X\right)\ge 0, i=\text{1,2},\dots ,p$$
(19a)
$${h}_{j}\left(X\right)=0, j=\text{1,2},\dots ,q$$
(19b)

1.2.2 Pareto dominance

An arbitrary vector U = [u1,u2,…,uk] dominates another vector V = [v1,v2,…,vk] (denoted as \(U\prec V\)) if and only if \({u}_{i}\le {v}_{i},\ \forall i\in \{1,2,\dots ,k\}\wedge \exists j\in \{1,2,\dots ,k\}: {u}_{j}<{v}_{j}\). It means that there exists at least one uj which is strictly smaller than vj, while none of the remaining u’s is greater than its corresponding v.
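A minimal sketch of this dominance test (minimization of all objectives assumed):

```python
def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v (minimization):
    u is no worse than v in every objective and strictly better in at least one."""
    return all(ui <= vi for ui, vi in zip(u, v)) and any(ui < vi for ui, vi in zip(u, v))

# The first vector dominates the second; the third is a trade-off, so neither dominates.
print(dominates((1.0, 2.0), (1.5, 2.0)))   # True
print(dominates((1.0, 2.0), (0.5, 3.0)))   # False
```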

1.2.3 Pareto optimality

A point \({X}^{*}\in\Omega \) (\(\Omega \) is the feasible region in Rk) is Pareto optimal if and only if there exists no \(X\in\Omega \) such that \(F\left(X\right)\prec F\left({X}^{*}\right)\). In other words, a vector of design variables X* is known as Pareto optimal if no other feasible solution X exists that dominates X* in the sense of the Pareto dominance definition.

1.2.4 Pareto set

The Pareto set (\({\mathcal{P}}^{*}\)) is the set in the design variable space that contains all Pareto optimal vectors, i.e., those \(X\in {\mathcal{P}}^{*}\) for which no other \(X^{\prime}\in\Omega \) exists that dominates them. In other words, the Pareto set can be defined as follows:

$${\mathcal{P}}^{*}=\left\{X\in\Omega | \nexists {\text{X}}^{\prime}\in\Omega :F\left({X}^{\prime}\right)\prec F\left(X\right)\right\}.$$
(20)

1.2.5 Pareto front

Pareto front (\(\mathcal{P}{\mathcal{F}}^{*}\)) is a set of vectors of objective functions which are obtained using the vectors of design variables in the Pareto set (\({\mathcal{P}}^{*}\)). In other words, Pareto front could be defined as follows:

$$\mathcal{P}{\mathcal{F}}^{*}=\left\{F\left(X\right)| X\in {\mathcal{P}}^{*}\right\}.$$
(21)
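Following Eqs. (20) and (21), the non-dominated filtering step can be sketched as below; `objectives` is a hypothetical list of objective function vectors F(X) that have already been evaluated for a set of candidate solutions.

```python
def pareto_front(objectives):
    """Return the indices of the non-dominated objective vectors (minimization)."""
    def dominates(u, v):
        return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

    return [i for i, fi in enumerate(objectives)
            if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i)]

# Toy example with three objective vectors: the first two form the Pareto front,
# the third is dominated by the first.
objectives = [(1.0, 4.0), (2.0, 1.0), (3.0, 5.0)]
print(pareto_front(objectives))   # [0, 1]
```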

Detailed information of the MAGP algorithms

The pseudo-code of the GP algorithm for solving optimal control problems is depicted in Fig. 15.

Fig. 15: Pseudo-code of the GP algorithm for solving optimal control problems

The flowchart of the AGP algorithm is presented in Fig. 16. The efficiency of the AGP algorithm in finding the solutions of single-objective optimal control problems, both linear and nonlinear, has been rigorously examined in the study carried out by Mohammadi et al. (2020). The pseudo-code of the AGP process in solving optimal control problems is illustrated in Fig. 17.

Fig. 16: AGP algorithm flowchart for the single-objective optimal control problems

Fig. 17: Pseudo-code of the AGP algorithm for solving optimal control problems

The pseudo-code of the MAGP algorithm is presented in Fig. 18.

Fig. 18: Pseudo-code of the MAGP algorithm

Case studies for single-objective problems (from Mohammadi et al. 2020)

3.1 Linear time-invariant (LTI) system (the linear quadratic regulator problem)

To demonstrate the contribution of the weighting coefficients to the system responses, a spacecraft control system in linear time-invariant (LTI) form was modeled and simulated by Kirk (2004). In a recent study, Mohammadi et al. (2020) examined how the form of the performance index (absolute or quadratic) affects how well the system responses align with the main design requirements. The mentioned LTI problem is as follows:

$$ \begin{aligned} {\text{LTI system}}\quad & \dot{x}_{1} \left( t \right) = x_{2} \left( t \right), \\ & \dot{x}_{2} \left( t \right) = u\left( t \right), \\ \end{aligned} $$
(22a)
$$ \begin{aligned} {\text{QPI}}\quad & J_{\text{QPI}} = \mathop \int \limits_{0}^{\infty } \left[ {q_{11} \cdot x_{1}^{2} \left( t \right) + q_{22} \cdot x_{2}^{2} \left( t \right) + R \cdot u_{{{\text{QPI}}}}^{2} \left( t \right)} \right]{\text{d}}t, \\ & \left( {q_{11} = 4 , q_{22} = 0 , R = 0.1, 1, 10, 50} \right), \\ \end{aligned} $$
(22b)
$$ {\text{Initial condition}}\quad x_{1} \left( 0 \right) = 10, \, \quad x_{2} \left( 0 \right) = 0. $$
(22c)

With a comparison between Eqs. (22b) and (10), it can be seen that the main goal of the designer is to obtain an optimal control signal u(t) which optimally leads the system states from the initial states of the problem to the final design conditions (x1(tf) = 0, x2(tf) = 0). According to the specified weighting coefficients Q and R, the optimal convergence of the first state (the trajectory of x1) and the optimal usage of the control signal u(t) (control effort) are the main goals of the mentioned LTI problem. The designer also wants to investigate the impact of changing the control effort priority (\(\text{CE}=\int {u}^{2}(t)\text{d}t\)) on the responses. For the four different R coefficients specified in Eq. (22b), four JQPI indices are obtained, and the solution algorithm must be applied to each of them separately. For such LTI problems in optimal control with a given weighting coefficient, there exists a unique analytical optimal solution (known as the Riccati equation solution), which is considered the exact solution. Because the Riccati equation provides an analytical solution to LTI problems of this kind (in which the state equations are linear and the performance index is quadratic), they are given the special name linear quadratic regulator (LQR) problems.
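For reference, the Riccati-based exact solution of the LQR benchmark in Eq. (22) can be reproduced with standard numerical tools. The sketch below, assuming SciPy is available, solves the algebraic Riccati equation for the four R values and prints the optimal state-feedback gains; it is only a cross-check of the classical baseline, not part of the AGP procedure.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator spacecraft model of Eq. (22a): x1' = x2, x2' = u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([4.0, 0.0])            # q11 = 4, q22 = 0 from Eq. (22b)

for R_val in (0.1, 1.0, 10.0, 50.0):
    R = np.array([[R_val]])
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)        # optimal gain, u*(t) = -K x(t)
    print(f"R = {R_val:5.1f}  ->  K = {K.ravel()}")
```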

Mohammadi et al. (2020) introduced a specific type of GP called archived-based genetic programming (AGP) to solve the corresponding LTI problem and examined the efficiency of the algorithm in solving such problems with a unique analytical solution. In the worst case, the AGP solutions converged to the exact solutions of the problem with a maximum error of 0.0460%. The AGP generations were also observed to converge to the final solution faster than the conventional GP algorithm.

With the AGP algorithm validated on the LQR problem, the API index was proposed and solved with AGP according to the aforementioned main goals of the design requirements. Since the conventional approaches (such as the well-established Riccati equation) have a purely mathematical basis, they are unable to solve the API. Accordingly, the AGP algorithm was configured and run as in the QPI case, and it was observed that the AGP generations converged faster than the GP ones in all cases.

In order to investigate the impact of the correct form of the performance index on the obtained optimal control signal aligned with the actual design requirements, the obtained API and QPI responses for each weighting coefficient R were compared. It was shown that the API responses in all cases are more compatible with the main design goals (Mohammadi et al. 2020).

3.2 Linear time-variant (LTV) system

The following problem is one of the most fundamental LTV problems, commonly used to validate the efficacy of different solution methods in optimal control (Radhoush et al. 2014).

$$\text{LTV system}:\quad \dot{x}\left(t\right)=t\cdot x\left(t\right)+u\left(t\right),$$
(23a)
$${\text{QPI}}:\quad {J}_{\text{QPI}}= \frac{1}{2} {\int }_{0}^{1}\left[{x}^{2}\left(t\right)+{u}_{\text{QPI}}^{2}(t)\right]{\text{d}}t,$$
(23b)
$$ {\text{Initial condition}}:\quad x\left( 0 \right) = 1. $$
(23c)

According to Eq. (23b), the main purpose of the design is to find the optimal control signal that brings the system state variable from the initial condition (x(0) = 1) to the desired final state (x(tf) = 0). Also, based on the coefficients selected by the designer (R = 1, q = 1), the trajectory and the control effort have the same priority from the designer’s viewpoint. Although the Riccati solution for LTV problems (as for LTI problems) is the exact solution, it is only numerical (not analytical). Other algorithms for solving LTV problems likewise provide only numerical solutions. The responses obtained from different methods are presented in the study by Mohammadi et al. (2020). As shown there, AGP rendered a simplified linear analytical solution for the given LTV problem, and the corresponding JQPI exhibits only 0.8% error compared with the Riccati exact solution.
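The numerical Riccati solution referred to above can be sketched as follows. For the scalar problem of Eq. (23), the Riccati differential equation reduces to \(\dot{p}(t)=-2tp(t)+p^{2}(t)-1\) with \(p(1)=0\), and the optimal feedback is \(u^{*}(t)=-p(t)x(t)\); the SciPy-based snippet below integrates it backward in time (an illustrative cross-check, not the implementation used in the cited studies).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar Riccati ODE for Eq. (23): x' = t*x + u, J = 0.5*int_0^1 (x^2 + u^2) dt.
def riccati(t, p):
    return -2.0 * t * p + p ** 2 - 1.0

# Integrate backward from the terminal condition p(1) = 0 down to t = 0.
sol = solve_ivp(riccati, (1.0, 0.0), [0.0], rtol=1e-9, atol=1e-12)
p0 = sol.y[0, -1]

# For x(0) = 1 the minimum cost is 0.5 * p(0) * x(0)^2, with u*(t) = -p(t) * x(t).
print(f"p(0) = {p0:.6f},  minimum J_QPI = {0.5 * p0:.6f}")
```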

It is worth mentioning that, in order to evaluate the superiority of AGP over the usual GP, their solutions and evolutionary convergence were compared in Mohammadi et al. (2020), which shows the superiority of AGP in terms of simple solution structure as well as a considerably higher rate of evolutionary convergence (only 5 generations). After validating AGP against the established solution methods for the LTV problem, the API index was introduced based on the same main goals of the designer. It was observed that the API provides an impulse input as the optimal control signal for this problem, which transfers the state variable to zero in a fraction of a second. It is worth noting that the QPI response not only failed to meet the main goal of the final condition, but its control input signal also did not reach zero until the last moment.

3.3 Nonlinear system

Nonlinear systems are one of the main challenges in optimization problems. Although nonlinear problems most closely resemble the conditions found in nature, their solutions, either numerical or analytical, cannot be obtained through the well-known approaches (such as the Riccati equation). It is worth recalling that the linear time-variant and nonlinear problems examined in this study cannot be solved, and their respective analytical solutions cannot be presented, using the ordinary optimization approaches.

Nonlinear optimal problems are commonly solved using numerical gradient-based methods. Maher and Mohamed (2013) showed that not only are the GP solutions for these kinds of optimal problems more efficient, but analytical solutions can also be obtained. In order to examine the superiority of GP over other numerical methods, the following van der Pol equation (a nonlinear equation) was assessed:

$$ \ddot{x}\left( t \right) - \mu \left( {1 - k \cdot x^{2} } \right)\dot{x}\left( t \right) + x\left( t \right) = u\left( t \right), $$
(24)

where \(x\left(t\right)\) is the position coordinate, \(u(t)\) is the input of the equation, and \(\mu \) and \(k\) are constant coefficients. Being used to describe many systems in nature (engineering, physics and biological systems), the above equation has been one of the most important nonlinear equations in various sciences. The state-space form of the van der Pol problem can be obtained by considering \(\mu =1\), \(k=1\), \({x}_{2}(t)\triangleq x(t)\) and \({x}_{1}(t)\triangleq \dot{x}(t)\), as follows:

$$ \begin{aligned} {\text{nonlinear system }}\quad & \dot{x}_{1} \left( t \right) = \left( {1 - x_{2}^{2} \left( t \right)} \right) \cdot x_{1} \left( t \right) - x_{2} \left( t \right) + u\left( t \right), \\ & \dot{x}_{2} \left( t \right) = x_{1} \left( t \right), \\ \end{aligned} $$
(25a)
$${\text{QPI}}\quad {J}_{\text{QPI}}= {\int }_{0}^{10}\left({x}_{1}^{2}\left(t\right)+{x}_{2}^{2}\left(t\right)+{u}_{\text{QPI}}^{2}\left(t\right)\right){\text{d}}t, \quad ( {x}_{1}\left(10\right)=0, {x}_{2}\left(10\right)=0 )$$
(25b)
$${\text{Initial condition}}\quad {x}_{1}\left(0\right)=0, {x}_{2}\left(0\right)=3.$$
(25c)
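For illustration, the sketch below simulates the state-space model of Eq. (25a) under a hypothetical candidate control law (a simple linear feedback chosen only for demonstration, not one of the GP/AGP solutions) and evaluates the quadratic index of Eq. (25b) numerically; SciPy is assumed to be available.

```python
import numpy as np
from scipy.integrate import solve_ivp

def candidate_u(x1, x2):
    # Hypothetical candidate control law, used only for demonstration.
    return -2.0 * x1 - 1.0 * x2

def vdp(t, x):
    x1, x2 = x
    u = candidate_u(x1, x2)
    return [(1.0 - x2 ** 2) * x1 - x2 + u, x1]                    # Eq. (25a)

t_eval = np.linspace(0.0, 10.0, 2001)
sol = solve_ivp(vdp, (0.0, 10.0), [0.0, 3.0], t_eval=t_eval)      # Eq. (25c)

x1, x2 = sol.y
u = candidate_u(x1, x2)
dt = t_eval[1] - t_eval[0]
J_QPI = np.sum(x1 ** 2 + x2 ** 2 + u ** 2) * dt                   # Eq. (25b)
print(f"J_QPI = {J_QPI:.3f}, x1(10) = {x1[-1]:.3f}, x2(10) = {x2[-1]:.3f}")
```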

For more clarification, the diagram of the nonlinear example considered herein (the simplified van der Pol equation) is presented in Fig. 19. As identified in this figure, the main focus of the current study is to obtain the input control signal (\(u(t)\)).

Fig. 19: Diagram of the simplified model of the van der Pol equation

According to Eq. (25b), the main goal of the designer is to transfer the state variables from the initial conditions to the desired final design values (x1(tf) = 0, x2(tf) = 0). The AGP algorithm was shown to be more efficient than the GP algorithm (whose solutions are in turn more optimal than those of the numerical methods) in nonlinear problems (Mohammadi et al. 2020). After validation, AGP was configured for solving the API based on the aforementioned main goals. The API solution was a semi-impulse signal which was shown to lead the state variables to their desired final conditions faster than the QPI. Furthermore, it is worth noting that the evolutionary convergence of AGP for both the QPI and the API was faster than that of GP (as in all the LTI and LTV cases).

Case studies for multi-objective problems

It should be noted that when using GA algorithms (including all versions of NSGA) in the multi-objective analysis, the general form of the control signal has to be specified first, and only then can the NSGA algorithms be run for an optimal problem. Since the main form of the solution for LTI problems is known (a linear feedback of the state variables, known as the control law), both MAGP and the NSGAs provide exactly the same non-dominant solutions for such problems. However, in the LTV and nonlinear problems assessed later in this section, where there is no specified form of solution, only MAGP can provide the truly optimal non-dominant solutions.

4.1 Linear time-invariant (LTI) optimal problem

As an example, for the LTI problem mentioned in the manuscript, the general form of the optimal solution is known to be:

$${u}^{*}\left({x}_{1}\left(t\right),{x}_{2}\left(t\right)\right)=A\cdot {x}_{1}\left(t\right)+B.{x}_{2}\left(t\right),\quad (A, B=\text{constant})$$
(26)

Substituting the above general form of the optimal solution into the NSGA algorithms leads to exactly the same Pareto front as the one presented by the MAGP algorithm. Unlike the LTI problem discussed, each optimal non-dominant solution has a different form for the LTV and nonlinear optimal problems; therefore, the limitation of NSGA (accommodating only a specified form of solution) renders it inefficient for solving such multi-objective problems.
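The statement above can be checked numerically: for the LTI problem, every candidate of the form in Eq. (26) is fully described by the gain pair (A, B), so a population-based multi-objective search only has to explore that two-dimensional space. The sketch below (gain grid, horizon and step size are assumptions made for illustration) simulates the closed loop for a grid of gains, evaluates the three quadratic objectives of Eq. (27b) and keeps the non-dominated pairs.

```python
import numpy as np

def objectives(a, b, t_f=30.0, dt=0.01):
    """Simulate x1' = x2, x2' = u with u = a*x1 + b*x2 from (10, 0) and
    return the three quadratic objectives of Eq. (27b)."""
    x1, x2 = 10.0, 0.0
    j1 = j2 = j3 = 0.0
    for _ in range(int(t_f / dt)):
        u = a * x1 + b * x2
        j1 += u ** 2 * dt
        j2 += x1 ** 2 * dt
        j3 += x2 ** 2 * dt
        x1, x2 = x1 + x2 * dt, x2 + u * dt        # explicit Euler step
    return (j1, j2, j3)

def dominates(u, v):
    return all(p <= q for p, q in zip(u, v)) and any(p < q for p, q in zip(u, v))

# Stabilizing feedback for this double integrator requires a < 0 and b < 0.
gains = [(a, b) for a in np.linspace(-4.0, -0.5, 8) for b in np.linspace(-4.0, -0.5, 8)]
objs = [objectives(a, b) for a, b in gains]
front = [g for g, fi in zip(gains, objs)
         if not any(dominates(fj, fi) for fj in objs if fj is not fi)]
print(f"{len(front)} non-dominated gain pairs out of {len(gains)}")
```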

Since the main aim of this manuscript is to implement the absolute form of objectives as well as multi-objectivity of an optimization problem, the LTI optimal control problem (the simplest well-studied problem) is elaborated in detail herein so as to thoroughly illustrate the concepts. The LTV and nonlinear problems are also presented in subsequent sections in a more general form.

4.1.1 Problem definition

Kirk (2004) and Mohammadi et al. (2020) investigated the problem of spacecraft path control with the state equation and single-objective index described below, respectively:

$$ \begin{aligned} & \dot{x}_{1} \left( t \right) = x_{2} \left( t \right), \\ & \dot{x}_{2} \left( t \right) = u\left( t \right), \\ \end{aligned} $$
(27a)
$$ \begin{aligned} J_{\text{QPI}} & = \mathop \int \limits_{0}^{\infty } \left[ {q_{11} \cdot x_{1}^{2} \left( t \right) + q_{22} \cdot x_{2}^{2} \left( t \right) + R \cdot u_{\text{QPI}}^{2} \left( t \right)} \right]{\text{d}}t \\ & = \left[ {R, q_{11} , q_{22} } \right] \cdot \left[ {\mathop \int \limits_{0}^{\infty } u_{\text{QPI}}^{2} \left( t \right){\text{d}}t, \mathop \int \limits_{0}^{\infty } x_{1}^{2} \left( t \right){\text{d}}t,\mathop \int \limits_{0}^{\infty } x_{2}^{2} \left( t \right){\text{d}}t} \right]^{T} , \\ & \left( {q_{11} = 4, q_{22} = 0, R = 0.1, 1, 10, 50} \right). \\ \end{aligned} $$
(27b)

where \({J}_{\text{QPI}}\) is the single quadratic performance index, \(u(t)\) is the input of the system, and \({x}_{1}(t)\) and \({x}_{2}(t)\) are the pitch angle and angular velocity of the spacecraft, respectively.

As presented in Eq. (27b), the single-objective performance index of the proposed case study includes three objectives (\({J}_{1}= {\int }_{0}^{\infty }{u}_{\text{QPI}}^{2}\left(t\right){\text{d}}t,{ J}_{2}= {\int }_{0}^{\infty }{x}_{1}^{2}\left(t\right){\text{d}}t,{ J}_{3}={\int }_{0}^{\infty }{x}_{2}^{2}\left(t\right)\text{d}t\)), which are combined with the weighting coefficients (R, q11, q22) to form the quadratic single-objective performance index JQPI. Furthermore, with a comparison between Eqs. (27b) and (10), it can be concluded that the desired final condition for this case study is zero (x1(tf) = 0, x2(tf) = 0).

In the single-objective setting, the weighting coefficients must first be defined based on the design requirements and then the solving process has to be carried out. Indeed, from a single-objective viewpoint, based on the value of R (varying from 0.1 to 50), this case study contains four single-objective problems, to each of which a solving process must be applied. After running the single-objective algorithm, only one solution is obtained for each problem, namely the one that minimizes its corresponding performance index. As investigated by Kirk (2004) and Mohammadi et al. (2020), the responses of each solution were observed to be completely different from the other solutions even for a small change in the weighting factor R. However, with the multi-objective methods presented in this study, such weighting coefficients are eliminated and the problem can be solved without any predefined values of the weighting factors.

The significance of adopting the absolute form in the single-objective problem (for specific weighting factors) has been thoroughly examined in the previous study by the authors (Mohammadi et al. 2020). In the current study, the AGP algorithm (whose superiority over other methods was validated) is upgraded to the MAGP algorithm, leading to a large number of non-dominant points and thereby enabling a comprehensive comparison between the absolute and quadratic solutions.

4.1.2 MAGP efficacy in evaluating QNS and QTS

By removing the weighting coefficients R and Q from QPI, the single-objective performance index \({J}_{\text{QPI}}\) converts to the objective function vector defined as:

$$ \begin{aligned} & F_{{{\text{QNS}}_{3 \times 1} }} = \left[ {J_{{{\text{QNS}}_{1} }} , J_{{{\text{QNS}}_{2} }} , J_{{{\text{QNS}}_{3} }} } \right]^{T} , \\ & \left( {J_{{{\text{QNS}}_{1} }} = \mathop \int \limits_{0}^{\infty } u_{QNS}^{2} \left( t \right){\text{d}}t, J_{{{\text{QNS}}_{2} }} = \mathop \int \limits_{0}^{\infty } x_{1}^{2} \left( t \right){\text{d}}t, J_{{{\text{QNS}}_{3} }} = \mathop \int \limits_{0}^{\infty } x_{2}^{2} \left( t \right){\text{d}}t} \right). \\ \end{aligned} $$
(28)

The MAGP algorithm for the spacecraft problem is configured with a population of 200 over 200 generations, considering \(T=\left\{X1, X2, \text{constant values}\right\}\) and \(F=\left\{\text{Times},\text{ Minus},\text{ Plus}\right\}\) as the terminal and function sets, respectively. It should be noted that various other types of functions (such as cosine, sine, exponential and logarithmic) have been examined for the configuration of MAGP. However, it was noticed that using these functions would not lead to more optimal values of the objectives; moreover, they would result in considerably more complex structures for the final solutions. In addition, most of such functions can be represented by Taylor series built from the T and F sets mentioned above.
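To give a concrete picture of what an MAGP individual looks like under these settings, the snippet below grows a random expression tree over the terminal set T = {x1, x2, constant} and the function set F = {Times, Minus, Plus} and evaluates it as a candidate control law; the depth limit, constant range and growth probability are illustrative assumptions and not the actual GP parameters of the paper.

```python
import random
import operator

FUNCTIONS = [('+', operator.add), ('-', operator.sub), ('*', operator.mul)]

def random_tree(depth=3):
    """Grow a random expression tree over T = {x1, x2, constant} and F = {+, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x1', 'x2', round(random.uniform(-5.0, 5.0), 2)])
    name, _ = random.choice(FUNCTIONS)
    return (name, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(node, x1, x2):
    """Evaluate a tree as a candidate control signal u(x1, x2)."""
    if node == 'x1':
        return x1
    if node == 'x2':
        return x2
    if isinstance(node, float):
        return node
    name, left, right = node
    return dict(FUNCTIONS)[name](evaluate(left, x1, x2), evaluate(right, x1, x2))

random.seed(0)
tree = random_tree()
print(tree)                        # e.g. ('-', ('*', 'x1', 2.31), 'x2')
print(evaluate(tree, 10.0, 0.0))   # candidate u at the initial state of Eq. (27)
```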

In the final stage of the MAGP algorithm, after the individuals of the construction container accumulated every five generations are pooled and non-dominated sorting is applied, 29,800 quadratic non-dominant solutions (QNS) are obtained. Now, substituting the obtained QNS control signals into the system equations (Eq. 27a) and calculating the state variables, the numerical values of the AI are computed for the QNS solutions using the following equation:

$$ \begin{aligned} & F_{\text{QTS}} = \left[ {J_{{\text{QTS}_{1} }} , J_{{\text{QTS}_{2} }} , J_{{\text{QTS}_{3} }} } \right], \\ & \left( {J_{{\text{QTS}_{1} }} = \mathop \int \limits_{0}^{\infty } |u_{QNS} \left( t \right)|{\text{d}}t, J_{{\text{QTS}_{2} }} = \mathop \int \limits_{0}^{\infty } |x_{1} \left( t \right)|{\text{d}}t, J_{{\text{QTS}_{3} }} = \mathop \int \limits_{0}^{\infty } |x_{2} \left( t \right)|{\text{d}}t} \right). \\ \end{aligned} $$
(29)

All members of QNS together with the objective functions in the absolute form (FQCV) are denoted as the QTS set.

4.1.3 MAGP efficacy in evaluating ANS and ATS

Similar to QNS, in order to obtain the ANS set, the weighting coefficients R and Q are removed from the API definition and the index \({J}_{\text{API}}\) is converted into the objective function vector defined as:

$$ \begin{aligned} & F_{{{\text{ANS}}_{3 \times 1} }} = \left[ {J_{{{\text{ANS}}_{1} }} , J_{{{\text{ANS}}_{2} }} , J_{{{\text{ANS}}_{3} }} } \right]^{T} , \\ & \quad \left( {J_{{{\text{ANS}}_{1} }} = \mathop \int \limits_{0}^{\infty } |u_{\text{ANS}} \left( t \right)|{\text{d}}t, J_{{{\text{ANS}}_{2} }} = \mathop \int \limits_{0}^{\infty } |x_{1} \left( t \right)|{\text{d}}t, J_{{{\text{ANS}}_{3} }} = \mathop \int \limits_{0}^{\infty } |x_{2} \left( t \right)|{\text{d}}t} \right). \\ \end{aligned} $$
(30)

As in the previous configuration, the MAGP algorithm is executed for the AI and, after the completion of the algorithm, 29,275 absolute non-dominant solutions (ANS) are obtained. Now, substituting the obtained ANS control signals into the state-space system equations and calculating the state variables, the numerical values of the QI are evaluated for the ANS solutions using the following equation:

$$ \begin{aligned} F_{\text{ATS}} & = \left[ {J_{{{\text{ATS}}_{1} }} , J_{{{\text{ATS}}_{2} }} , J_{{{\text{ATS}}_{3} }} } \right], \\ & \left( {J_{{{\text{ATS}}_{1} }} = \mathop \int \limits_{0}^{\infty } u_{\text{ANS}}^{2} \left( t \right){\text{d}}t, J_{{{\text{ATS}}_{2} }} = \mathop \int \limits_{0}^{\infty } x_{1}^{2} \left( t \right){\text{d}}t, J_{{{\text{ATS}}_{3} }} = \mathop \int \limits_{0}^{\infty } x_{2}^{2} \left( t \right){\text{d}}t} \right). \\ \end{aligned} $$
(31)

All members of ANS together with the objective functions in the quadratic form (FACV) are denoted as the ATS set.

4.1.4 Quadratic Indices (QI) Pareto front

So far, the objective functions of the different control signals have been calculated in the quadratic form. These control signals are mapped into points whose coordinates are their objective function values in the QI Pareto front. For better comparison, the quadratic objective function vectors of the QPI and API solutions obtained previously (Kirk 2004; Mohammadi et al. 2020) are computed and shown along with the QNS and ATS objective functions as numerous points in the two- and three-dimensional views of the QI Pareto front (Fig. 20). It is worth noting that, for a detailed illustration of the points, the axis of the quadratic control effort in this Pareto front has been plotted logarithmically. The QNS points in the QI Pareto front form an optimal region (OR), which is denoted as the quadratic optimal region (QOR) in this study, and any point outside QOR is considered a non-optimal solution in the QI Pareto front.

Fig. 20: Illustration of QNS and ATS on the QI Pareto front for the spacecraft problem

As can be seen in Fig. 20, since the ATS points are all inside QOR, all the optimal signals obtained from the AI Pareto front are also optimal for the QI Pareto front. Furthermore, as the single-objective index (Eq. 27b) contains only two indices, namely the control effort and the first state trajectory, the single-objective points were expected to be located at the lower boundary of the OR in the \({J}_{{Q}_{1}}- {J}_{{Q}_{2}}\) view, and they are indeed correctly placed there. Moreover, since the second-state trajectory was not considered in the single-objective performance index (JQPI), the single-objective solution points were expected to be located outside QOR in the views where the second-state trajectory is present. However, only one of the quadratic single-objective solution points, namely R = 0.1, is correctly outside QOR.

4.1.5 Absolute Indices (AI) Pareto front

The objective functions of the different solutions have been calculated in the absolute form in the previous sections. In this section, these solutions are mapped into the points with the coordinates of their objective functions in the AI Pareto front. Once again, for better demonstration, the absolute objective function vectors of the QPI and API solutions, obtained previously (Mohammadi et al. 2020), are evaluated and shown together with the ANS and QTS objective functions as numerous points in the two- and three-dimensional views of AI Pareto front (Fig. 21). It is worth noting that for a detailed illustration of the points, the axis of the absolute control effort in the Pareto front has been plotted logarithmically. Moreover, it is worth mentioning that the ANS points in the AI Pareto front form an OR which is denoted as absolute optimal region (AOR) and any point outside AOR is considered as a non-optimal solution in the AI Pareto front.

Fig. 21: Illustration of ANS and QTS on the AI Pareto front for the spacecraft problem

Unlike the ATS set, which covered the whole of QOR (Fig. 20), the QTS set does not provide comparable coverage of AOR. This lack of proper AOR coverage by the QTS set indicates that a range of the QNS solutions are non-optimal in the views of the AI Pareto front. Another important point to be drawn from Fig. 21 is that, due to the absence of the second-state trajectory objective (JA3) in the definition of the single-objective index (JAPI), the points corresponding to the single-objective solutions are all correctly outside AOR in the JA3 views. From the QI view, however, most single-objective points were mistakenly located inside QOR.

4.1.6 Comparison of the single-objective solution with a representative point from the Pareto front

As mentioned in Kirk (2004) and Mohammadi et al. (2020), the single-objective solutions obtained from the Riccati equation are considered the exact solutions of LTI optimal control problems. In this section, the non-dominant solutions of the MAGP algorithm are compared with both the Riccati equation solutions (the exact solutions of the LTI quadratic single-objective problems) and the absolute single-objective solutions (Mohammadi et al. 2020) in order to validate the efficiency of MAGP. It is worth noting that the efficiency of the suggested algorithm in the single-objective analysis (archived-based genetic programming) was completely validated through comprehensive comparisons with other methods in Mohammadi et al. (2020). Here, MAGP is validated against the single-objective exact solutions for several arbitrary weighting coefficients.

In order to compare the non-dominant solutions with the ones obtained from the single-objective analysis, the nearest point among the NS and TS sets to the single-objective point is defined as the representative point. As an instance, a point is selected around the API single-objective solution with a weighting factor R equal to 50 (APIR=50). Since the definition of the single-objective index solely contains the control effort and the first state trajectory, the representative point is selected in the view of these two indices. The 10,282th point of QNS, denoted as QNS (10,282), is the closest point to the single-objective point APIR=50, and its location is represented in the QI and AI Pareto fronts in Fig. 22. As can be seen, the relative states of the two points APIR=50 and QNS (10,282) in the AI and QI Pareto fronts are different, indicating the shift of the points between these fronts.

Fig. 22: Location of point QNS (10,282) as the closest point to the point APIR=50 for the spacecraft problem. a QI Pareto front, b AI Pareto front

In order to examine the difference between the selected and the single-objective solutions, the numerical values of the indices and their time responses are compared in Table 7 and Fig. 23, respectively. The similarity between the time responses of the selected and single-objective solutions is evident in Fig. 23. It is worth noting that most transient responses obtained by applying the acquired optimal solutions settle within about 15 s. Therefore, in order to maintain consistency and to magnify the details, all responses from here on are plotted up to 15 s. Practically, it can be stated that in the multi-objective analysis, once the program is executed, not only can the designer find solutions similar to those of single-objective optimal control without engaging in weighting factor determination, but many other solutions are also presented for a particular problem, which can be selected based on the design requirements.

Fig. 23: Comparison between the time responses of the APIR=50 and QNS (10,282) solutions for the spacecraft problem

The question that arises here is which Pareto front (AI or QI) shows the correct location of the points for a more accurate evaluation of the solutions. This will be discussed in the following sections.

4.1.7 Comparison of non-dominant solutions in the QI and AI Pareto fronts

By comparing Figs. 20 and 21, the OR deformation is evident in both the AI and QI Pareto fronts, indicating that the relative states of the QNS and ANS points in these two Pareto fronts are not the same. In order to determine which form provides a better criterion for measuring the objective functions, two points are selected from each Pareto front and their locations in the other Pareto front as well as their time responses will be examined and compared.

A point is initially selected from the QI Pareto front, then its location in the AI Pareto front is examined and its time responses are assessed. For this purpose, the nearest point to the QPI single-objective point with the weighting factor R equal to 50 (QPIR=50) is selected within the QI Pareto front from the JQ1–JQ2 view (Fig. 20). The selected point is the 21,827th individual of the ANS set, which is denoted as ATS (21,827). The relative states of the selected and QPIR=50 points are shown in Fig. 24 from the various QI Pareto views. As can be seen, from the view of the QI Pareto front, the selected and QPIR=50 points approximately coincide, indicating the similarity of these solutions in the QI view.

Fig. 24: Relative states of QPIR=50 and ATS (21,827) points on the QI Pareto front for the spacecraft problem

The relative states of the selected point in the AI Pareto front (point 21,827 of the ANS set) and the QPIR=50 point are shown in Fig. 25 from the various AI Pareto views. One point to be noted is the apparent reduction of the control effort in the AI Pareto front. Furthermore, unlike in the QI view, these points are not coincident in the AI Pareto front.

Fig. 25: Relative states of QPIR=50 and ANS (21,827) points on the AI Pareto front for the spacecraft problem

In order to investigate how the numerical values of the indices vary between the two Pareto fronts, the values of the indices and the time responses of the QPIR=50 and ANS (21,827) solutions are presented in Table 8 and Fig. 26, respectively. According to Table 8, the most significant difference between the two solutions in the two Pareto fronts is the numerical value of the control effort: the QI Pareto front shows a 6.7% increase (worse) for the selected point, while the AI Pareto front shows a 17.6% decrease (more optimal). As can be observed in Fig. 26a, except for the first 0.46 s, the behavior and convergence of the control signal for ANS (21,827) are more optimal compared with the QPIR=50 response.

Fig. 26: Time responses of the QPIR=50 and ANS (21,827) solutions for the spacecraft problem

As stated in Sect. 3.1, adopting the quadratic form leads to a deformation of the original MR. Indeed, the quadratic form is overly sensitive to values greater than one (it magnifies them) and, in turn, renders values smaller than one negligible. To overcome this issue, the absolute form of the indices has been proposed, which only turns negative areas positive without deforming the main MR. To investigate the MR deformation caused by the quadratic form, the main and the quadratic/absolute MRs for QPIR=50 and ANS (21,827) are plotted in Figs. 27 and 28, respectively.

Fig. 27: Main MR of the control effort for the QPIR=50 and ANS (21,827) solutions for the spacecraft problem

Fig. 28: Comparison of MR for the control signal between the QPIR=50 and ANS (21,827) solutions for the spacecraft problem; a absolute form, b quadratic form

As can be observed in these figures, the single-hatched regions indicate the numerical difference between the two control signals. As previously shown, the numerical values of the main and absolute MRs are the same, and according to Table 8, the total area of the green single-hatched region is smaller than that of the black one by about 17.6%. However, according to Fig. 28b, in addition to the MR deformation of the two solutions in the quadratic form (e.g., the quadratic MR reaches approximately zero after the first second), the extreme sensitivity of the quadratic MR to the distance from unity is evident. To be more specific, with an increase of about 1.4 units (from 2.8 to 4.2 in the absolute form) in the initial value of the control signal, its quadratic form suddenly increases by 9.8 units (from 7.8 to 17.6). From a practical point of view, this sensitivity of the quadratic form gives rise to a misevaluation of the actual control effort (energy consumption). In other words, an optimal control signal whose control effort is actually reduced is mistakenly presented in the QI Pareto front as a solution with higher control effort.
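To make this sensitivity concrete, the following minimal Python sketch (not taken from the study; the two exponential test signals and their decay rates are purely hypothetical) integrates a sampled control signal in both the absolute and the quadratic forms. With these assumed signals, the one with the larger initial value but faster convergence obtains the smaller absolute measure yet the larger quadratic measure, mirroring the misranking discussed above.

```python
import numpy as np

# Hypothetical sampled control signals on a common time grid. The exponential
# shapes and decay rates are assumptions made purely for illustration.
t = np.linspace(0.0, 15.0, 1501)
u_slow = 2.8 * np.exp(-1.5 * t)   # smaller initial value, slower convergence
u_fast = 4.2 * np.exp(-3.0 * t)   # larger initial value, faster convergence

def absolute_measure(u, t):
    """MR in the absolute form: integral of |u(t)| dt."""
    return np.trapz(np.abs(u), t)

def quadratic_measure(u, t):
    """MR in the quadratic form: integral of u(t)^2 dt."""
    return np.trapz(u ** 2, t)

for name, u in [("slow signal", u_slow), ("fast signal", u_fast)]:
    print(f"{name}: absolute = {absolute_measure(u, t):.3f}, "
          f"quadratic = {quadratic_measure(u, t):.3f}")

# Pointwise sensitivity quoted in the text: raising |u(0)| from 2.8 to 4.2
# (an increase of 1.4) raises u(0)^2 from 7.84 to 17.64 (an increase of ~9.8).
print(2.8 ** 2, 4.2 ** 2)
```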

4.2 Linear time-variant (LTV) optimal problem

In section A4.2, the following single-objective LTV optimal problem was examined.

$$ \begin{aligned} {\text{LTV system}}:\quad & \dot{x}\left( t \right) = t \cdot x\left( t \right) + u\left( t \right), \\ & \left( {x\left( 0 \right) = 1} \right), \\ \end{aligned} $$
(32a)
$${\text{QPI}}: \quad J_{\text{QPI}}= \frac{1}{2}{\int }_{0}^{1}\left[{x}^{2}\left(t\right)+{u}_{\text{QPI}}^{2}\left(t\right)\right]{\text{d}}t,$$
(32b)
$${\text{API}:}\quad J_{\text{API}}= \frac{1}{2}{\int }_{0}^{1}\left[\left|x\left(t\right)\right|+\left|{u}_{\text{API}}\left(t\right)\right|\right]{\text{d}}t.$$
(32c)

As mentioned, this problem is one of the basic LTV problems with which different optimal approaches have been validated throughout the literature (Hwang and Chen 1985; Hsiao and Wang 1999). Similar to the previous section, the weighting coefficients are eliminated and, therefore, the single objectives JQPI and JAPI are converted to the following objective vectors:

$${F}_{{\text{QNS}}_{2\times 1}}={\left[{J}_{{\text{QNS}}_{1}}={\int }_{0}^{1}{u}_{\text{QPI}}^{2}\left(t\right){\text{d}}t, {J}_{{\text{QNS}}_{2}}={\int }_{0}^{1}{x}^{2}\left(t\right){\text{d}}t\right]}^{T},$$
(33a)
$${F}_{{\text{ANS}}_{2\times 1}}={\left[{J}_{{\text{ANS}}_{1}}={\int }_{0}^{1}{|u}_{\text{ANS}}\left(t\right)|{\text{d}}t, {J}_{{\text{ANS}}_{2}}={\int }_{0}^{1}\left|x\left(t\right)\right|{\text{d}}t\right]}^{T}.$$
(33b)
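As a rough sketch of how a candidate control law could be scored against these objective vectors (this is not the MAGP implementation; the explicit-Euler integration, step count, and the simple state-feedback candidate are all assumptions made for illustration), the following Python code forward-simulates Eq. (32a) and accumulates the quadratic and absolute indices of Eqs. (33a) and (33b):

```python
import numpy as np

def evaluate_candidate(u_of, n_steps=2000, tf=1.0, x0=1.0):
    """Simulate xdot(t) = t*x(t) + u(t) (Eq. 32a) with explicit Euler and
    accumulate the quadratic (Eq. 33a) and absolute (Eq. 33b) objective vectors."""
    dt = tf / n_steps
    t, x = 0.0, x0
    f_qns = np.zeros(2)   # [integral of u^2 dt, integral of x^2 dt]
    f_ans = np.zeros(2)   # [integral of |u| dt, integral of |x| dt]
    for _ in range(n_steps):
        u = u_of(t, x)
        f_qns += np.array([u * u, x * x]) * dt
        f_ans += np.array([abs(u), abs(x)]) * dt
        x += (t * x + u) * dt     # Euler step of the LTV dynamics
        t += dt
    return f_qns, f_ans

# Hypothetical candidate control law (a plain state feedback, not an MAGP output).
f_qns, f_ans = evaluate_candidate(lambda t, x: -3.0 * x)
print("F_QNS =", f_qns, " F_ANS =", f_ans)
```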

The MAGP algorithm was configured with 200 generations and the function set F = {Times, Minus, Plus}. By running the MAGP algorithm, 11,072 QNS and 861 ANS were obtained as the non-dominant solutions of the LTV problem. In order to cross-validate each set of NS in the other Pareto front, the QCV and ACV vectors have been calculated using Eqs. (6) and (7). The NS and TS sets plotted in the AI and QI Pareto fronts are depicted in Fig. 29.

Fig. 29: Illustration of the Pareto front of different non-dominant solutions for the LTV problem. a QI Pareto front, b AI Pareto front

Unlike the LTI problems, the lack of proper coverage of the OR by the TS sets is evident in Fig. 29. In fact, not only are the cross-solutions not optimal in the view of either the absolute or the quadratic form, but the designer also cannot obtain extensive non-dominant solutions from the other form. As shown previously, the QI form gives rise to the MR deformation, which in turn results in an underestimation of the MR values. In order to shed light on this issue, the relative states of the two points QPI and ANS (786) are presented in Fig. 30.

Fig. 30: Location of the point ANS (786) as the closest point to the point QPI for the LTV problem. a QI Pareto front, b AI Pareto front

As shown in Fig. 30b, the non-dominant point ANS (786) is suggested in place of the single-objective QPI solution in the view of the AI Pareto front. As can be observed, the most significant difference between the AI and QI Pareto fronts lies in the estimation of the control effort.

For further investigation, the numerical values of the objectives and the control responses are compared in Table 9 and Fig. 31, respectively. It can be observed that the single-hatched regions (Fig. 31) and the numerical values of the MRs (Table 9) are fairly similar, with a difference of about −6.1%. It is evident that the control signal of ANS (786) converges to zero faster than that of QPI. By generalizing the results of this comparison to other points, it can be concluded that most of the optimal ANS solutions cannot be identified using the quadratic form.

Table 7 Comparison of optimal control indices between APIR=50 and QNS (10,282) solutions in the AI and QI Pareto fronts
Fig. 31: Main MR of the control effort for the ANS (786) and QPI solutions for the LTV problem
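The claim that most ANS solutions are not recoverable from the quadratic form can be tested mechanically: re-measure each ANS solution with the quadratic indices (its ACV values) and check whether any member of the native QNS front dominates it. The sketch below shows one way such a check could be written; the two objective arrays are random placeholders standing in for the actual fronts, so only the dominance logic itself is meant literally.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed)."""
    return bool(np.all(a <= b) and np.any(a < b))

def undominated_fraction(cross_points, native_front):
    """Fraction of cross-validated points that no member of the native front dominates."""
    kept = sum(1 for c in cross_points
               if not any(dominates(q, c) for q in native_front))
    return kept / len(cross_points)

# Random placeholders: rows are objective vectors [control effort, trajectory index]
# measured in the quadratic form. In the study these would be the native QNS front
# and the ACV re-measurement of the ANS set, respectively.
rng = np.random.default_rng(0)
qns_front = rng.random((100, 2))
acv_points = rng.random((50, 2)) + 0.2
print(f"undominated fraction: {undominated_fraction(acv_points, qns_front):.2f}")
```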

4.3 The nonlinear optimal problem

As mentioned in section A4.3, nonlinear problems are of utmost importance among all types of optimization problems. The nonlinear system presented in Eq. (25a) is investigated in the current section. For this problem, two types of single objectives (QPI and API) have been assessed previously.

$${J}_{\text{QPI}}={\int }_{0}^{10}\left[{x}_{1}^{2}\left(t\right) + {x}_{2}^{2}\left(t\right) +{u}_{\text{QPI}}^{2}\left(t\right)\right]{\text{d}}t,$$
(34a)
$$ \begin{aligned} J_{\text{API}} & = \mathop \int \limits_{0}^{10} \left( {\left| {x_{1} \left( t \right)} \right| + \left| {x_{2} \left( t \right)} \right| + \left| {u_{\text{API}} \left( t \right)} \right|} \right){\text{d}}t, \\ & \left( {x_{1} \left( {10} \right) = 0, x_{2} \left( {10} \right) = 0} \right). \\ \end{aligned} $$
(34b)

In section A4.3, for a given weighting factor, the API solution was shown to better meet the main design requirement. To extend the single-objective results, the nonlinear problem is investigated here in the multi-objective form. Accordingly, the weighting coefficients are removed, and the multi-objective vectors (FQNS and FANS) are obtained as follows:

$${F}_{{\text{QNS}}_{3\times 1}}={\left[{J}_{{\text{QNS}}_{1}}={\int }_{0}^{10}{u}_{\text{QPI}}^{2}\left(t\right){\text{d}}t, {J}_{{\text{QNS}}_{2}}={\int }_{0}^{10}{x}_{1}^{2}\left(t\right){\text{d}}t, {J}_{{\text{QNS}}_{3}}={\int }_{0}^{10}{x}_{2}^{2}\left(t\right){\text{d}}t\right]}^{T},$$
(35a)
$${F}_{{\text{ANS}}_{3\times 1}}={\left[{J}_{{\text{ANS}}_{1}}={\int }_{0}^{10}{|u}_{\text{ANS}}\left(t\right)|{\text{d}}t, {J}_{{\text{ANS}}_{2}}={\int }_{0}^{10}\left|{x}_{1}\left(t\right)\right|{\text{d}}t, {J}_{{\text{ANS}}_{3}}={\int }_{0}^{10}\left|{x}_{2}\left(t\right)\right|{\text{d}}t\right]}^{T}.$$
(35b)

The MAGP algorithm was configured with T = {X1, X2, t, Constant} and F = {Times, Minus, Plus} as the terminal and function sets, respectively. By running MAGP, 13,969 and 2951 non-dominant solutions were obtained as QNS and ANS, respectively. The positions of these non-dominant solutions in the QI and AI Pareto fronts are presented in Figs. 32 and 33.

Fig. 32: Illustration of QNS and ATS on the QI Pareto front for the nonlinear problem

Fig. 33: Illustration of ANS and QTS on the AI Pareto front for the nonlinear problem

Similar to the LTV problem, the OR cannot be covered by the cross-solutions (the TS sets). In fact, this shows that the optimal non-dominant solutions obtained from one form cannot be identified by the other.
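For readers who want a concrete picture of how such terminal and function sets are encoded, the sketch below uses the DEAP library's gp module purely as a stand-in (MAGP is the authors' own archive-based multi-objective GP, and its internals are not reproduced here): T = {X1, X2, t, Constant} becomes three named arguments plus an ephemeral constant, and F = {Times, Minus, Plus} becomes three binary primitives from which candidate control laws u = f(x1, x2, t) are grown.

```python
import operator
import random
from deap import gp

# Function set F = {Times, Minus, Plus} and terminal set T = {X1, X2, t, Constant}.
pset = gp.PrimitiveSet("CONTROL_LAW", 3)            # three variables: x1, x2, t
pset.renameArguments(ARG0="x1", ARG1="x2", ARG2="t")
pset.addPrimitive(operator.mul, 2)                  # Times
pset.addPrimitive(operator.sub, 2)                  # Minus
pset.addPrimitive(operator.add, 2)                  # Plus
pset.addEphemeralConstant("const", lambda: random.uniform(-5.0, 5.0))  # Constant

# Grow a random candidate control law and evaluate it at one sample state.
expr = gp.PrimitiveTree(gp.genHalfAndHalf(pset, min_=1, max_=3))
u = gp.compile(expr, pset)
print(str(expr), "->", u(0.5, -0.2, 1.0))
```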

In order to further elaborate on this issue, the relative states of the two points QPI and ANS (10,502) are presented in Fig. 34.

Fig. 34: Location of the point ANS (10,502) as the closest point to the point QPI for the nonlinear problem. a QI Pareto front, b AI Pareto front

According to this figure, ANS (10,502), as the nearest ANS point to QPI in the AI Pareto front, is suggested in place of the single-objective solution obtained in the quadratic form. As can be observed, the relative states of the ANS (10,502) and QPI points in the QI and AI Pareto fronts are completely different.

The numerical values of the objectives as well as their responses are depicted in Table 10 and Fig. 35 in the Appendix.

Fig. 35: Main MR of the ANS (10,502) and QPI solutions for the nonlinear problem

According to Fig. 35a, the control response of the suggested point ANS (10,502) resembles an impulse signal (similar to the API solution pointed out in section A4.2). As previously noted, such impulse-like control signals cannot be identified in the quadratic form. Thus, it can be stated that by applying ANS (10,502) to the nonlinear system, the first trajectory (which converges to zero in almost 4 s) and the control effort render more optimal responses compared with QPI.

Table 8 Comparison of optimal control indices between QPIR=50 and ANS (21,827) solutions in the QI and AI Pareto fronts
Table 9 Comparison of optimal control indices between the QPI and ANS (786) solutions for the LTV problem
Table 10 Comparison of optimal control indices between the QPI and ANS (10,502) solutions in the AI and QI Pareto fronts

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Mohammadi, A., Nariman-zadeh, N., Payan, M. et al. Evaluation of the absolute forms of cost functions in optimization using a novel evolutionary algorithm. Soft Comput 27, 16843–16879 (2023). https://doi.org/10.1007/s00500-023-09020-z

