Abstract
In this chapter, the basics of optimization problems used in this book are briefly introduced. The organization is as follows: (1) an overview of optimization problems, which gives their general forms and classifications, and illustrates some frequently used models; (2) an introduction to optimization problems with uncertainties, including stochastic optimization, robust optimization, and interval optimization; (3) an introduction to convex optimization, including semidefinite programming (SDP), second-order cone programming (SOCP), and some convex relaxation techniques; (4) frequently used optimization frameworks, including two-stage optimization and bilevel optimization. Six examples and the corresponding case studies, all reformulated from the classic knapsack problem, are given to show the different types of optimization problems. The above models are all important topics, used not only in this book but also in many engineering scenarios. In summary, this chapter covers the basics of optimization problems. Readers who are interested in the details can refer to the mathematical literature listed in the references, and those who are already familiar with these basic optimization techniques can skip this chapter without any inconvenience.
2.1 Overview of Optimization Problems
2.1.1 General Forms
In different engineering scenarios, maximizing or minimizing some function over some set is a common problem. The set often represents a range of choices available in a certain situation, and the "solution" refers to the "best" or "optimal" choice in this scenario. Common applications include minimal cost, maximal profit, minimal error, optimal design, and optimal management. This type of problem has the following general form.
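The body of (2.1) is not reproduced in this version of the chapter; from the definitions that follow, the standard general form can be reconstructed as

\[ \min_{x} \ (\text{or } \max_{x}) \; f(x) \quad \text{s.t.} \quad h(x) \le 0, \quad g(x) = 0, \quad x \in S \quad (2.1) \]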
In (2.1), \( f(x) \) is the objective function and represents the management task; "min" and "max" represent minimizing and maximizing \( f(x) \), respectively; \( x \) is the vector of decision variables and represents the choices of the administrator; \( h(x) \le 0 \) and \( g(x) = 0 \) are the inequality and equality constraints that limit the decision variables, representing the limitations imposed on the administrator's choices by different operating scenarios; \( S \) is the original set of the decision variables, such as continuous variables, binary variables, integer variables, and so on. The model seeks the "best" or "optimal" solution \( x^{*} \) which minimizes or maximizes \( f(x) \); in reality, this may represent the minimization of costs or the maximization of profits. Here we give a simple case, Example 2.1.
Example 2.1: Knapsack Problem
Assume we have \( n \) types of goods, indexed by \( i \in \{1,2,3, \ldots ,n\} \). Each good has value \( W_{i} \) and size \( S_{i} \), and we have a knapsack with capacity \( C \). The problem is: how can we pack the goods with the highest total value? The model of this problem is shown as follows.
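The body of (2.2) is missing from this version; based on the description that follows, it can be reconstructed as

\[ \max_{x} \; \sum_{i=1}^{n} W_{i} \cdot x_{i} \quad \text{s.t.} \quad \sum_{i=1}^{n} S_{i} \cdot x_{i} \le C, \quad x_{i} \in \{0,1\} \quad (2.2) \]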
In (2.2), \( x_{i} \) is the "decision variable" and represents whether the ith good is chosen. If the ith good is put into the knapsack, \( x_{i} = 1 \); otherwise, \( x_{i} = 0 \). \( x_{i} \in \{0,1\} \) is the original set of the "decision variables". \( \sum_{i = 1}^{n} W_{i} \cdot x_{i} \) is the "objective function" and represents the total value of the selected goods, and \( \sum_{i = 1}^{n} S_{i} \cdot x_{i} \le C \) is the "constraint", which requires the total size of the selected goods to be smaller than or equal to the capacity of the knapsack. The "best" or "optimal" solution \( \{x_{i}^{*}, i \in 1,2,3 \ldots ,n\} \) achieves the maximization of \( \sum_{i = 1}^{n} W_{i} \cdot x_{i} \).
Case Study for Example 2.1
Here we test a simple case with the following parameters: \( n = 5 \), \( \{W_{i} \mid i = 1,2, \ldots ,5\} = [2,3,1,4,7] \), \( \{S_{i} \mid i = 1,2, \ldots ,5\} = [2,2,1,2,3] \), and \( C = 6 \). The simulation results are shown in Fig. 2.1.
From Fig. 2.1, the best solution for Example 2.1 is to select the 3rd, 4th and 5th goods; the maximal total value is 12, and the total size of the goods is 6.
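The reported result can be checked with a minimal brute-force sketch (the function name and structure below are ours, not from the chapter); exhaustive enumeration is exponential in \( n \) but exact for such a small case.

```python
from itertools import combinations

def knapsack_brute_force(values, sizes, capacity):
    """Enumerate all subsets and return (best_value, best_items).

    Items are returned as 1-based indices to match the chapter's numbering.
    Exponential in n, so only suitable for small illustrative cases."""
    n = len(values)
    best_value, best_items = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(sizes[i] for i in subset) <= capacity:
                value = sum(values[i] for i in subset)
                if value > best_value:
                    best_value, best_items = value, tuple(i + 1 for i in subset)
    return best_value, best_items

# Parameters of the case study for Example 2.1
W = [2, 3, 1, 4, 7]
S = [2, 2, 1, 2, 3]
C = 6

value, items = knapsack_brute_force(W, S, C)
print(value, items)  # 12 (3, 4, 5): goods 3, 4 and 5
```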
2.1.2 Classifications of Optimization Problems

(1)
Classifications by decision variables
The decision variable \( x \) may consist of different number types, such as continuous variables, binary variables, and integer variables, and the combination of different number types produces different optimization problems. For example, if \( x \) only consists of continuous variables, then problem (2.1) is "continuous optimization", and if \( x \) only consists of binary variables or integer variables, then problem (2.1) is "binary optimization" or "integer optimization". If \( x \) simultaneously has continuous variables and integer variables, then problem (2.1) is "mixed-integer optimization".

(2)
Classifications by the objective function
Generally, the objective function \( f(x) \) can be a scalar or a vector; based on this, problem (2.1) is "single-objective optimization" when \( f(x) \) is a scalar and "multi-objective optimization" or "vector optimization" when \( f(x) \) is a vector.

(3)
Linear optimization and nonlinear optimization
In practical cases, \( f(x), h(x) \) and \( g(x) \) may have different mathematical characteristics. If \( f(x), h(x) \) and \( g(x) \) are all linear functions, problem (2.1) is "linear optimization (LP)", and it is "nonlinear optimization (NLP)" if any of \( f(x), h(x) \) and \( g(x) \) is nonlinear. Specifically, if \( f(x) \) is nonlinear while \( h(x) \) and \( g(x) \) are both linear, then problem (2.1) is "linearly constrained nonlinear optimization (LCNLP)", and if \( f(x) \) is linear while \( h(x) \) and \( g(x) \) are both nonlinear, then problem (2.1) is "nonlinearly constrained linear-objective optimization". Here we give some typical cases: if \( f(x), h(x) \) and \( g(x) \) are all polynomials of degree at most two, then problem (2.1) is "quadratic optimization (QP)". Similarly, we can define "quadratically constrained quadratic optimization (QCQP)", "linearly constrained quadratic optimization (LCQP)", and so on.

(4)
Convex optimization and nonconvex optimization
Before introducing convex and nonconvex optimization, the convex function and the convex set should be described first. A convex function should satisfy (2.3) for any \( x_{1} \) and \( x_{2} \) in the domain of \( f(x) \) [1].
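The body of (2.3) is missing here; it is the standard convexity inequality

\[ f(\alpha \cdot x_{1} + (1-\alpha) \cdot x_{2}) \le \alpha \cdot f(x_{1}) + (1-\alpha) \cdot f(x_{2}), \quad \forall \alpha \in [0,1] \quad (2.3) \]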
Then a convex set \( S \) should satisfy: for any two points in \( S \), denoted as \( s_{1} \) and \( s_{2} \), their convex combination \( \alpha \cdot s_{1} + (1 - \alpha) \cdot s_{2} \) with \( \alpha \in [0,1] \) is still within \( S \) [1]. Illustrations of a convex function and a convex set are shown in Fig. 2.2a and b, respectively.
From Fig. 2.2a, \( x_{3} = \alpha \cdot x_{1} + (1 - \alpha) \cdot x_{2} \) and \( f_{3}(x_{3}) < \alpha \cdot f_{3}(x_{1}) + (1 - \alpha) \cdot f_{3}(x_{2}) \), thus \( f_{3}(x) \) is a convex function. Similarly, \( f_{1}(x) \) is a concave function, and \( f_{2}(x) \) is both convex and concave. From Fig. 2.2b, any convex combination of \( s_{1} \) and \( s_{2} \) belongs to the same set, which represents convexity. For the nonconvex set, at least one combination of \( s_{1} \) and \( s_{2} \) lies outside the set, as shown in Fig. 2.2b.
With the above definitions, (2.1) is a convex optimization problem when the following two conditions are satisfied: (1) \( f(x) \) is convex in the case of minimization, or concave in the case of maximization; (2) the feasible region \( \{x \in S \mid h(x) \le 0, g(x) = 0\} \) is a convex set. The main characteristic of convex optimization, compared with nonconvex optimization, is that a local optimal solution of a convex optimization problem is also its global optimal solution [1]. This characteristic greatly benefits the applications of convex optimization: in practice, if we can model or reformulate a problem as a convex one, then the global optimal solution is obtained once any local optimal solution is found. This is one of the main reasons why "the great watershed in optimization is not between linearity and nonlinearity, but convexity and nonconvexity" [1].
In summary, the classification methods can be combined to characterize different optimization problems, such as "mixed-integer linear optimization (MILP)", "mixed-integer nonlinear optimization (MINLP)", "mixed-integer quadratically constrained optimization (MIQCP)", and so on.
2.2 Optimization Problems with Uncertainties
Uncertainties are inevitable in reality since both measurement and control have errors. To ensure safety and reliability, it is necessary to consider uncertainties in optimization problems, and stochastic optimization, robust optimization, and interval optimization are three main approaches.
2.2.1 Stochastic Optimization
A general form of stochastic optimization is shown as (2.4) [2].
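The body of (2.4) is not reproduced here; from the symbol definitions that follow, the standard two-stage stochastic form is

\[ \min_{x \in X} \; g(x) + E_{\xi}\left[ \min_{y \in Y(x,\xi)} f(y) \right] \quad (2.4) \]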
In stochastic optimization (2.4), \( x \) denotes the first-stage decision variables, which are not affected by the uncertainties; \( X \) is the feasible region of \( x \); \( g(x) \) is the objective function of the first stage; \( \xi \) denotes the uncertain variables, and \( Y(x,\xi) \) is the feasible region of \( y \), determined by \( x \) and \( \xi \); \( f(y) \) is the objective function of the second stage; \( E(\cdot) \) is the expectation. In this model, the uncertain variable \( \xi \) is described by a probability distribution, such as the probability distribution of equipment failures, or of renewable energy output, and so on. Stochastic optimization then seeks the optimal solution with respect to these probability distributions. To clearly show stochastic optimization, Example 2.2 is formulated as follows.
Example 2.2: Stochastic Knapsack Problem
Based on all the assumptions of Example 2.1, we further assume that for \( \forall i \in \{1,2, \ldots ,n_{f}\} \), \( W_{i} \) is a constant, and for \( \forall i \in \{n_{f}+1, \ldots ,n\} \), \( W_{i} = W_{c} + \Delta W_{i} \), where \( W_{c} \) is a constant and \( \Delta W_{i} \) follows a pre-given distribution \( \psi \). Then the original knapsack problem becomes (2.5).
where \( (\sum_{i = 1}^{n_{f}} W_{i} \cdot x_{i} + \sum_{i = n_{f}+1}^{n} W_{c} \cdot x_{i}) \) is "\( -g(x) \)", in which the "\( - \)" transforms the maximization of (2.5) into the minimization form of (2.4), and this term is not influenced by the uncertainties; \( \sum_{i = n_{f}+1}^{n} \Delta W_{i} \cdot x_{i} \) corresponds to \( f(y) \), which is influenced by the uncertainties; and \( x = \{x_{i} \mid i = 1,2, \ldots ,n_{f}\} \) and \( y = \{x_{i} \mid i = n_{f}+1, \ldots ,n\} \).
Case Study for Example 2.2
Here we test a simple case with the following parameters: \( n = 5 \), \( \{W_{i} \mid i = 1,2,3\} = [2,3,1] \), \( \{W_{c} \mid i = 4,5\} = [4,7] \), \( \Delta W_{i} \sim {\mathcal{N}}(0,1) \) for \( i = 4,5 \), \( \{S_{i} \mid i = 1,2, \ldots ,5\} = [2,2,1,2,3] \), and \( C = 6 \). The simulation results are shown in Fig. 2.3a and b.
From Fig. 2.3a, the best solution for Example 2.2 is also to select the 3rd, 4th and 5th goods; the expected maximal total value is 12.23, and the total size of the goods is 6. The main difference between the stochastic optimization (2.5) and the conventional deterministic problem (2.2) is that the uncertainty of \( \Delta W_{i} \) causes uncertainty in the objective function, as shown in Fig. 2.3b.
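A minimal sample-average sketch of this case is shown below (function names and structure are ours). Since \( E[\Delta W_{i}] = 0 \), the exact expected value of the optimal selection is 12; a finite sample average such as the 12.23 reported above depends on the particular samples drawn.

```python
import random
from itertools import combinations

def stochastic_knapsack_saa(values, sizes, capacity, uncertain,
                            n_samples=5000, seed=0):
    """Sample-average approximation (SAA) of the stochastic knapsack.

    `uncertain` maps 0-based item indices to samplers for Delta W_i; the
    same pre-drawn scenarios are shared by all candidate solutions."""
    rng = random.Random(seed)
    scenarios = [{i: draw(rng) for i, draw in uncertain.items()}
                 for _ in range(n_samples)]
    n = len(values)
    best_value, best_items = float("-inf"), ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(sizes[i] for i in subset) > capacity:
                continue
            nominal = sum(values[i] for i in subset)
            noise = sum(sum(v for i, v in s.items() if i in subset)
                        for s in scenarios) / n_samples
            if nominal + noise > best_value:
                best_value = nominal + noise
                best_items = tuple(i + 1 for i in subset)
    return best_value, best_items

# Case study of Example 2.2: Delta W_i ~ N(0, 1) for goods 4 and 5
W = [2, 3, 1, 4, 7]   # nominal values (W_c = 4 and 7 for the uncertain goods)
S = [2, 2, 1, 2, 3]
samplers = {3: lambda r: r.gauss(0, 1), 4: lambda r: r.gauss(0, 1)}

value, items = stochastic_knapsack_saa(W, S, 6, samplers)
print(items)  # (3, 4, 5); the sample-average value is close to 12
```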
2.2.2 Robust Optimization
A general form of robust optimization is shown as (2.6) [3].
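The body of (2.6) is missing in this version; from the description below, the standard two-stage robust counterpart of (2.4) is

\[ \min_{x \in X} \; g(x) + \max_{\xi \in U} \min_{y \in Y(x,\xi)} f(y) \quad (2.6) \]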
In robust optimization (2.6), the main difference is that the uncertain variable \( \xi \) is described by an uncertainty set \( U \), including the upper/lower limits and the uncertainty budget. Robust optimization then seeks the optimal solution under the worst case within the defined uncertainty set, and therefore brings conservatism. Accordingly, the primary problem of uncertainty modeling is how to determine the feasible region of the uncertainty, i.e., the probability distributions in stochastic optimization and the uncertainty set in robust optimization. Similarly, we can give a robust knapsack problem as Example 2.3.
Example 2.3: Robust Knapsack Problem
Based on all the assumptions of Examples 2.1 and 2.2, we further assume that \( \Delta W_{i} \) lies within a range denoted as \( [W_{L}, W_{U}] \); the robust knapsack problem is then shown as (2.7). The meaning of each part is similar to that of the stochastic model (2.5).
Case Study for Example 2.3
Here we test a simple case with the following parameters: \( n = 5 \), \( \{W_{i} \mid i = 1,2,3\} = [2,3,1] \), \( \{W_{c} \mid i = 4,5\} = [4,7] \), \( \Delta W_{i} \in [-2,1] \) for \( i = 4,5 \), \( \{S_{i} \mid i = 1,2, \ldots ,5\} = [2,2,1,2,3] \), and \( C = 6 \). The simulation results are shown in Fig. 2.4.
From Fig. 2.4, the best solution for Example 2.3 under robust optimization becomes the 2nd, 3rd and 5th goods, and the value of the objective function is 9. This change is due to the risk of the 4th good: in the worst case, its value becomes 2, and it is not worth selecting. From the above results, we can see that the results of robust optimization are conservative.
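For this maximization problem, the worst case puts every uncertain good at its lowest deviation, so the robust solution reduces to a deterministic knapsack over the worst-case values. A minimal sketch (names are ours):

```python
from itertools import combinations

def robust_knapsack(values, sizes, capacity, uncertain, dw_low):
    """Worst-case (robust) knapsack for a maximization problem.

    `uncertain` holds 0-based indices of the goods whose value deviation
    lies in [dw_low, dw_high]; only dw_low matters in the worst case."""
    worst = [v + (dw_low if i in uncertain else 0) for i, v in enumerate(values)]
    n = len(values)
    best_value, best_items = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(sizes[i] for i in subset) <= capacity:
                value = sum(worst[i] for i in subset)
                if value > best_value:
                    best_value, best_items = value, tuple(i + 1 for i in subset)
    return best_value, best_items

# Case study of Example 2.3: Delta W in [-2, 1] for goods 4 and 5
value, items = robust_knapsack([2, 3, 1, 4, 7], [2, 2, 1, 2, 3], 6,
                               uncertain={3, 4}, dw_low=-2)
print(value, items)  # 9 (2, 3, 5): goods 2, 3 and 5
```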
2.2.3 Interval Optimization
Interval optimization can be viewed as an extension of robust optimization and consists of a lower subproblem and an upper subproblem, shown as (2.8) [4]; the upper subproblem is similar to the robust optimization (2.6). It should be noted that, for a maximization problem, the lower subproblem is a robust optimization problem. The main advantage of interval optimization is that the obtained interval can be used to analyze the influences of the uncertainties on the system. A case is given as Example 2.4.
Example 2.4: Interval Knapsack Problem
Case Study for Example 2.4
The parameters of Example 2.4 are the same as those of Example 2.3, and the decision variables remain the same as in Example 2.3, as shown in Fig. 2.5; the range of the objective function is [9, 12]. From this, we can see that interval optimization gives both the pessimistic and the optimistic scenarios.
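The reported interval can be checked directly: with the robust selection of Example 2.3 fixed (goods 2, 3 and 5), the objective interval follows from the extreme deviations of the uncertain goods. This is a sketch of the evaluation only, not of the full two-subproblem model (2.8).

```python
# Interval evaluation for Example 2.4, with the selection fixed at the
# robust solution of Example 2.3.
W = [2, 3, 1, 4, 7]
uncertain = {3, 4}        # 0-based indices of goods 4 and 5
chosen = {1, 2, 4}        # 0-based indices of goods 2, 3 and 5
dw_low, dw_high = -2, 1   # Delta W range of the uncertain goods

lower = sum(W[i] + (dw_low if i in uncertain else 0) for i in chosen)
upper = sum(W[i] + (dw_high if i in uncertain else 0) for i in chosen)
print([lower, upper])  # [9, 12]
```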
In summary, how to obtain the range of the uncertain variables, i.e., the probability distribution function or the uncertainty set of \( \xi \), is the basic problem of the uncertainty model. Nowadays, with the development of measurement and communication technology, more operating data can be transmitted and stored in the control center in real-time. How to use this type of massive data to model the feasible region of uncertainty has become a hot topic, and various methods have been proposed. This topic will be discussed in Chap. 4.
2.3 Convex Optimization
The importance of convex optimization has been emphasized above, and in practical cases we always want to model or reformulate a complex problem as a convex one. Semidefinite programming (SDP) and second-order cone programming (SOCP) are two classic types that have been well studied and have attracted attention from both academia and industry.
2.3.1 Semidefinite Programming
The general form of SDP is given as (2.11) [5].
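The body of (2.11) is not reproduced here; with the symbols defined below, the standard SDP form is

\[ \min_{X \in S^{n}} \; \text{tr}(A_{0} X) \quad \text{s.t.} \quad \text{tr}(A_{p} X) = b_{p}, \; p = 1, \ldots, m, \quad X \succeq 0 \quad (2.11) \]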
where \( A_{0}, A_{p} \) are coefficient matrices; \( X \) is the decision matrix, which should be positive semidefinite; \( b_{p} \) is a coefficient vector; \( S^{n} \) is the space of \( n \times n \) real symmetric matrices. Conventional linear optimization (LP) and quadratic optimization (QP) can both be formulated as SDP by defining \( X = x \cdot x^{T} \) [6]; then many solvers, such as SeDuMi, can be used to solve the reformulated SDP for the global optimal solution.
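As a toy illustration (ours, not the chapter's) of why the lifting \( X = x \cdot x^{T} \) always yields a positive semidefinite matrix, the 2x2 case can be checked through its principal minors:

```python
import random

def outer_product_2x2(x):
    """Entries (a, b, c) of the symmetric matrix X = x * x^T for a 2-vector x."""
    return (x[0] * x[0], x[0] * x[1], x[1] * x[1])

def is_psd_2x2(a, b, c):
    """Positive semidefiniteness of [[a, b], [b, c]] via principal minors:
    a >= 0, c >= 0 and det = a*c - b*b >= 0 (small floating-point tolerance)."""
    return a >= 0 and c >= 0 and a * c - b * b >= -1e-8

random.seed(4)
for _ in range(1000):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    assert is_psd_2x2(*outer_product_2x2(x))
print("every lifted matrix X = x x^T is positive semidefinite")
```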
2.3.2 Second-Order Cone Programming
The general form of SOCP is given as (2.12) [7].
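The body of (2.12) is not reproduced here; with the symbols defined below, the standard SOCP form is

\[ \min_{x} \; f^{T} x \quad \text{s.t.} \quad \|A_{i} x + b_{i}\|_{2} \le c_{i}^{T} x + d_{i}, \; i = 1, \ldots, m, \quad F x = g \quad (2.12) \]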
where \( f^{T} \), \( A_{i}, b_{i}, c_{i}^{T}, d_{i}, F, g \) are coefficient vectors or matrices; \( x \) is the vector of decision variables. It should be noted that the objective function need not be linear; a quadratic objective function can also be handled like a conventional SOCP. Similarly, many types of optimization problems can be reformulated as SOCP, and several cases are given below to show the usage of SOCP.

(1)
Quadratic terms
A quadratic term such as \( x^{2} \) can be relaxed by the following (2.13) [8].
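The body of (2.13) is not reproduced here; a standard SOC representation of the relaxed constraint \( x^{2} \le t \) (an assumption on what (2.13) contains) is \( \|(2x, t-1)\|_{2} \le t+1 \). The sketch below verifies numerically that the two constraints describe the same set:

```python
import math
import random

def quad_leq(x, t):
    # The original quadratic constraint x^2 <= t
    return x * x <= t

def soc_leq(x, t):
    # Second-order cone form: ||(2x, t - 1)||_2 <= t + 1
    # Squaring both sides gives 4x^2 + (t-1)^2 <= (t+1)^2, i.e. x^2 <= t.
    return math.hypot(2 * x, t - 1) <= t + 1

random.seed(1)
for _ in range(10000):
    x = random.uniform(-5, 5)
    t = random.uniform(-1, 25)
    assert quad_leq(x, t) == soc_leq(x, t)
print("x^2 <= t and its SOC form agree on all samples")
```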

(2)
Bilinear terms
A bilinear term such as \( x \cdot y \) can be relaxed by the following (2.14) [8].
In (2.15), \( -\frac{1}{2}(x^{2} + y^{2}) \) and \( -\frac{1}{2}(x + y)^{2} \) are concave, and the following convex-concave procedure can be used to convexify them [9].
where \( \left( {\bar{x},\bar{y}} \right) \) is a constant reference point.
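The bodies of (2.14)-(2.16) are not reproduced here; a sketch of the convex-concave step (our reconstruction) writes \( x \cdot y = \frac{1}{2}(x+y)^{2} - \frac{1}{2}(x^{2}+y^{2}) \) and replaces the concave part by its tangent plane at \( (\bar{x},\bar{y}) \), which overestimates a concave function everywhere:

```python
import random

def ccp_upper_bound(x, y, x_ref, y_ref):
    """Convexified overestimate of the bilinear term x*y.

    Uses the exact identity x*y = 0.5*(x + y)**2 - 0.5*(x**2 + y**2) and
    replaces the concave part -0.5*(x**2 + y**2) by its tangent plane at
    the reference point (x_ref, y_ref)."""
    tangent = (-0.5 * (x_ref ** 2 + y_ref ** 2)
               - x_ref * (x - x_ref) - y_ref * (y - y_ref))
    return 0.5 * (x + y) ** 2 + tangent

random.seed(2)
for _ in range(10000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    xr, yr = random.uniform(-5, 5), random.uniform(-5, 5)
    assert ccp_upper_bound(x, y, xr, yr) >= x * y - 1e-9
# The bound is exact at the reference point itself:
assert abs(ccp_upper_bound(1.5, -2.0, 1.5, -2.0) - (1.5 * -2.0)) < 1e-9
```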

(3)
Exponential terms
An exponential term such as \( e^{x} \) can be relaxed by the following (2.17).
Then, at a reference point \( \bar{y} \), (2.17) can similarly be reformulated as (2.18) by the convex-concave procedure [9].
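The bodies of (2.17)-(2.18) are not reproduced here; the linearization step relies on the fact that \( e^{x} \) is convex, so its first-order Taylor expansion at a reference point underestimates it everywhere. A numerical check of this property (ours):

```python
import math
import random

def exp_tangent(x, x_ref):
    """First-order Taylor (tangent) approximation of e^x at x_ref; since
    e^x is convex, the tangent underestimates it everywhere, which is what
    the convex-concave linearization relies on."""
    return math.exp(x_ref) * (1.0 + x - x_ref)

random.seed(3)
for _ in range(10000):
    x, x_ref = random.uniform(-3, 3), random.uniform(-3, 3)
    assert math.exp(x) >= exp_tangent(x, x_ref) - 1e-9
print("the tangent underestimates e^x at every sampled point")
```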
2.4 Optimization Frameworks
2.4.1 Two-Stage Optimization
In reality, there are many cases where the decision variables cannot be determined at the same time, and this is the main motivation for two-stage optimization. The stochastic and robust optimization models in (2.4) and (2.6) are both two-stage optimizations. Here we give a general form of two-stage optimization as (2.19) [10].
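The body of (2.19) is missing here; from the description below, the standard two-stage form is

\[ \min_{x} \; g(x) + \min_{y} f(y) \quad \text{s.t.} \quad l(x) \le 0, \quad h(y) \le 0, \quad G(x,y) \le 0 \quad (2.19) \]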
In the above formulation, \( g(x) \) and \( f(y) \) are the objective functions of the first stage and the second stage, respectively, and \( x, y \) are the corresponding decision variables; \( l(x) \le 0 \) and \( h(y) \le 0 \) are the corresponding constraints, and \( G(x,y) \le 0 \) is the coupling constraint. It should be noted that "two-stage" means \( x \) and \( y \) cannot be determined at the same time. To clarify this problem, Example 2.5 is given below.
Example 2.5: Two-Stage Knapsack Problem
Based on all the assumptions of Example 2.1, we assume that the ith good for i = 1,2,...,\( n_{1} \) is available now and the ith good for i = \( n_{2}, \ldots ,n \) will be available after some time, with \( n_{2} \le n_{1} \). The objective is still the maximization of the total value, but each good can only be selected once. Then the optimization problem becomes (2.20).
In the above formulation, \( \sum_{i = 1}^{n_{1}} W_{i} \cdot x_{i} \) and \( \sum_{j = n_{2}}^{n} W_{j} \cdot y_{j} \) are the objective functions of the first stage and the second stage, respectively, \( \sum_{i = 1}^{n_{1}} S_{i} \cdot x_{i} \le C \) and \( \sum_{j = n_{2}}^{n} S_{j} \cdot y_{j} \le C \) are their corresponding constraints, and \( x_{i} + y_{i} \le 1, i \in \{n_{2}, \ldots ,n_{1}\} \) is the coupling constraint.
Case Study for Example 2.5
Here we test a simple case with the following parameters: \( n = 5 \), \( n_{1} = 3 \) and \( n_{2} = 2 \), \( \{W_{i} \mid i = 1,2, \ldots ,5\} = [2,3,1,4,7] \), \( \{S_{i} \mid i = 1,2, \ldots ,5\} = [2,2,1,2,3] \), and \( C = 6 \). The simulation results are shown in Fig. 2.6.
From Fig. 2.6, the final objective function is 13, with the final selection of the 1st, 2nd, 3rd, and 5th goods. In the first stage, the capacity of the knapsack is 4 and the 1st and 2nd goods are selected; in the second stage, the 3rd and 5th goods are selected, and no good is selected twice. If the coupling constraint \( x_{i} + y_{i} \le 1, i \in \{n_{2}, \ldots ,n_{1}\} \) is eliminated and the value of the 3rd good is changed to 2, then the final selection is the 2nd good, the 3rd good twice, and the 5th good, and the objective function comes to 14.
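This case can be reproduced by brute force. Note one assumption of ours: the narrative above reports a first-stage capacity of 4 (although the listed \( C \) is 6), and applying a capacity of 4 to each stage is what reproduces the reported objective of 13; the function and its parameters below are illustrative, not from the chapter.

```python
from itertools import combinations

def two_stage_knapsack(values, sizes, stage1, stage2, cap1, cap2, overlap,
                       couple=True):
    """Brute-force two-stage knapsack sketch.

    `stage1`/`stage2` are 0-based indices of the goods available in each
    stage, `overlap` the goods available in both; with couple=True an
    overlapping good may be taken at most once across the two stages."""
    def subsets(items, cap):
        for r in range(len(items) + 1):
            for sub in combinations(items, r):
                if sum(sizes[i] for i in sub) <= cap:
                    yield set(sub)

    best, best_pick = 0, (set(), set())
    for s1 in subsets(stage1, cap1):
        for s2 in subsets(stage2, cap2):
            if couple and (s1 & s2 & set(overlap)):
                continue  # some overlapping good would be selected twice
            total = sum(values[i] for i in s1) + sum(values[i] for i in s2)
            if total > best:
                best, best_pick = total, (s1, s2)
    return best, best_pick

# Case study of Example 2.5 (n1 = 3, n2 = 2), assuming per-stage capacity 4
W = [2, 3, 1, 4, 7]
S = [2, 2, 1, 2, 3]
best, (s1, s2) = two_stage_knapsack(W, S, stage1=[0, 1, 2], stage2=[1, 2, 3, 4],
                                    cap1=4, cap2=4, overlap=[1, 2])
print(best)  # 13: goods 1, 2 in the first stage, goods 3, 5 in the second
```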
In summary, the coupling constraint in two-stage optimization is essential and can influence the final results. As shown in many practical cases, modifications of the coupling constraints can benefit the objective function [10, 11].
2.4.2 Bilevel Optimization
Bilevel optimization is a special type of two-stage optimization and has the general formulation (2.21) [12]. In this formulation, \( F(x,y) \) represents the upper-level objective function and \( f(x,z) \) represents the lower-level objective function. Similarly, \( x \) represents the upper-level decision vector and \( y \) represents the lower-level decision vector. \( G_{i}(x,y) \) and \( g(x,z) \) represent the inequality constraint functions at the upper and lower levels, respectively. We can see that \( y \) is a decision variable of \( F(x,y) \) and is also the optimal solution of the lower-level problem that minimizes \( f(x,z) \) over \( z \). The upper and lower levels are coupled to achieve the overall optimum. Here we also give Example 2.6.
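The body of (2.21) is not reproduced here; from the symbol definitions above, the standard bilevel form is

\[ \min_{x} \; F(x,y) \quad \text{s.t.} \quad G_{i}(x,y) \le 0, \quad y \in \arg\min_{z} \{ f(x,z) : g(x,z) \le 0 \} \quad (2.21) \]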
Example 2.6: Bilevel Knapsack Problem
Based on all the assumptions of Example 2.1, we assume that inside the outer knapsack with capacity \( C_{o} \) there is a small bag with capacity \( C_{s} \) which holds the most valuable goods, and the objective is to maximize the total value in the knapsack and, in particular, in the small bag. Then the optimization problem becomes (2.22).
In the above formulation, \( \sum_{i = 1}^{n_{f}} W_{i} \cdot x_{i} + \sum_{i = n_{f}}^{n} W_{i} \cdot x_{i} \) and \( \sum_{i = n_{f}}^{n} W_{i} \cdot x_{i} \) are the objective functions of the upper level and the lower level, respectively, and \( \sum_{i = 1}^{n_{f}} S_{i} \cdot x_{i} \le C_{o} - C_{s} \) and \( \sum_{i = n_{f}}^{n} S_{i} \cdot x_{i} \le C_{s} \) are their respective constraints.
Case Study for Example 2.6
Here we test a simple case with the following parameters: \( n = 5 \), \( n_{f} = 2 \), \( \{W_{i} \mid i = 1,2, \ldots ,5\} = [2,3,1,4,7] \), \( \{S_{i} \mid i = 1,2, \ldots ,5\} = [2,2,1,2,3] \), \( C_{s} = 3 \), and \( C_{o} = 6 \). The simulation results are shown in Fig. 2.7. From the results, the lower level first selects the 5th good, and then at the upper level the 2nd and 3rd goods are selected; the final objective function comes to 11.
2.5 Summary
This chapter has briefly introduced the optimization models frequently used in engineering and listed several important references for the readers. Simple test cases are also given to show the different types of optimization models, and the models above will be used in the remaining chapters of this book.
References
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press (2004)
Heyman, D., Sobel, J.: Stochastic Models in Operations Research: Stochastic Optimization. Courier Corporation (2004)
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press (2009)
Bhurjee, A.K., Panda, G.: Efficient solution of interval optimization problem. Math. Methods Oper. Res. 76(3), 273–288 (2012)
Vandenberghe, L., Boyd, S.: Semidefinite programming. SIAM Rev. 38(1), 49–95 (1996)
Boyd, S., Vandenberghe, L.: Semidefinite programming relaxations of nonconvex problems in control and combinatorial optimization. In: Communications, Computation, Control, and Signal Processing. Springer, Boston, MA, pp. 279–287 (1997)
Lobo, M., Vandenberghe, L., Boyd, S., et al.: Applications of second-order cone programming. Linear Algebra Appl. 284(1–3), 193–228 (1998)
Zamzam, S., Dall'Anese, E., Zhao, C., et al.: Optimal water–power flow problem: formulation and distributed optimal solution. IEEE Trans. Control Netw. Syst. 6(1), 37–47 (2018)
Lipp, T., Boyd, S.: Variations and extension of the convex–concave procedure. Optim. Eng. 17(2), 263–287 (2016)
Zeng, B., Zhao, L.: Solving two-stage robust optimization problems using a column-and-constraint generation method. Oper. Res. Lett. 41(5), 457–461 (2013)
Zhao, C., Wang, J., Watson, J., et al.: Multi-stage robust unit commitment considering wind and demand response uncertainties. IEEE Trans. Power Syst. 28(3), 2708–2717 (2013)
Dempe, S.: Foundations of Bilevel Programming. Springer Science & Business Media (2002)
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2021 The Author(s)
Cite this chapter
Fang, S., Wang, H. (2021). Basics for Optimization Problem. In: Optimization-Based Energy Management for Multi-energy Maritime Grids. Springer Series on Naval Architecture, Marine Engineering, Shipbuilding and Shipping, vol 11. Springer, Singapore. https://doi.org/10.1007/978-981-33-6734-0_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-33-6733-3
Online ISBN: 978-981-33-6734-0