Introduction

This paper considers the following general Fractional Programming Problem (FPP) mathematical model (Jaberipour and Khorram 2010):

$$\min / \max \; z(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{p} \frac{f_i(x)}{g_i(x)}$$
(1)
$$\text{subject to} \quad h_k(x) \le 0, \; k = 1, \ldots, K; \qquad m_j(x) = 0, \; j = 1, \ldots, J; \qquad x_i^{l} \le x_i \le x_i^{u}, \; i = 1, \ldots, n; \qquad g_i(x) \ne 0, \; i = 1, 2, \ldots, p.$$
(2)

where f, g, h, and m are linear, quadratic, or more general functions. Fractional programming of the form of Eq. (1) arises in practice whenever rates such as the ratios profit/revenue, profit/time, or (−waste of raw material)/(quantity of raw material used) are to be maximized; often these problems are linear, or at least concave–convex, fractional programs. Fractional programming is a nonlinear programming method that has received increasing exposure recently, and its importance in solving concrete problems is steadily increasing. Furthermore, nonlinear optimization models describe practical problems much better than linear optimization, with its many assumptions, does. FPPs are particularly useful in the solution of economic problems in which different activities use certain resources in different proportions, while the objective is to optimize a certain indicator, usually the most favorable return-on-allocation ratio subject to the constraints imposed on the availability of resources. Fractional programming also has a number of important practical applications in manufacturing, administration, transportation, data mining, etc.
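As a concrete illustration, the objective in Eq. (1) is simply a sum of ratios. The following minimal Python sketch (not code from the paper; the linear numerators and denominators are illustrative assumptions) evaluates such an objective at a point and checks the nonzero-denominator condition of Eq. (2):

```python
def fpp_objective(x, y):
    """Evaluate z = sum_i f_i / g_i for an illustrative two-ratio instance of Eq. (1)."""
    ratios = [
        (4 * x + 2 * y + 10, x + 2 * y + 5),    # f_1(x, y) / g_1(x, y)
        (x + y + 1, 2 * x - y + 3),             # f_2(x, y) / g_2(x, y)
    ]
    # Eq. (2) requires every denominator g_i to be nonzero.
    assert all(abs(g) > 1e-12 for _, g in ratios), "g_i(x) must be nonzero"
    return sum(f / g for f, g in ratios)

print(fpp_objective(1.0, 2.0))   # evaluate the model at a sample point
```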

The methods for solving FPPs can be broadly classified into exact (traditional) and metaheuristic approaches.

Traditional methods include the following: Wolf (1985) introduced the parametric approach; Charnes and Cooper (1973) solved linear FPPs by converting the FPP into an equivalent linear programming problem and solving it with already existing standard LP algorithms; Farag (2012), Hasan and Acharjee (2011), Hosseinalifam (2009), and Stancu-Minasian (1997) reviewed some of the methods that treat the FPP, such as the primal and dual simplex algorithms. The criss-cross method, which is based on pivoting, either solves the problem or indicates that it is infeasible or unbounded within a finite number of iterations. The interior point method, as well as the Dinkelbach algorithm, reduces the solution of the linear FPP to the solution of a sequence of LP problems. Other approaches include the Isbell–Marlow method, Martos' algorithm, the Cambini–Martein algorithm, Bitran and Novaes' method, Swarup's method, and the work of Harvey M. Wagner and John S. C. Yuan. Hasan and Acharjee developed a new method for solving the linear FPP based on the idea of solving a sequence of auxiliary problems whose solutions converge to the solution of the FPP.

Moreover, there are many recent approaches employing traditional mathematical methods for solving ratio-optimization FPPs. Dür et al. (2007) introduced an algorithm called Dynamic Multistart Improving Hit-and-Run (DMIHR) and applied it to the class of fractional optimization problems; DMIHR combines Improving Hit-and-Run (IHR), a well-established stochastic search algorithm, with restarts, and its development is based on a theoretical analysis of Multistart Pure Adaptive Search, which relies on the Lipschitz constant of the optimization problem. Shen et al. (2009) proposed an algorithm for solving sum-of-quadratic-ratios fractional programs via monotonic functions; the algorithm is based on reformulating the problem as a monotonic optimization problem, and the optimal solution it provides is guaranteed to be feasible and close to the actual optimal solution. Jiao et al. (2013) presented a global optimization algorithm for the sum of generalized polynomial ratios problem, which arises in various engineering design problems. By utilizing an exponential transformation and a new three-level linear relaxation method, a sequence of linear relaxation programming problems of the initial nonconvex programming problem is derived and embedded in a branch-and-bound algorithm. The algorithm was shown to attain finite convergence to the global minimum through successive refinement of a linear relaxation of the feasible region and/or of the objective function and the subsequent solution of a series of linear programming sub-problems.

A few studies in recent years have used metaheuristic approaches to solve FPPs. Sameeullah et al. (2008) presented a genetic algorithm-based method to solve linear FPPs: a set of solution points is generated using random numbers, the feasibility of each solution point is verified, and the fitness values of all feasible solution points are obtained; among the feasible solution points, the best one replaces the worst one, pairs of solution points are used for crossover to obtain a new set of solution points, and these steps are repeated for a certain number of generations to obtain the best solution of the given problem. Calvete et al. (2009) developed a genetic algorithm for the class of bi-level problems in which both level objective functions are linear fractional and the common constraint region is a bounded polyhedron. Jaberipour and Khorram (2010) proposed an algorithm for the sum-of-ratios problem based on the harmony search algorithm. Bisoi et al. (2011) developed neural networks for nonlinear FPPs; they proposed a new projection neural network model that is theoretically guaranteed to solve variational inequality problems, defined the multi-objective min–max nonlinear fractional programming problem and derived its optimality using Lagrangian duality, and showed that the equilibrium points of the proposed neural network model correspond to the Karush–Kuhn–Tucker points of the nonlinear FPP. Xiao (2010) presented a neural network method for solving a class of linear fractional optimization problems with linear equality constraints; the proposed model has two properties: first, the set of optima of the problems coincides with the set of equilibria of the neural network model, which means the proposed model is complete, and second, the model globally converges to an exact optimal solution from any starting point in the feasible region. Pal et al. (2013) used the Particle Swarm Optimization algorithm for solving FPPs. Hezam and Raouf (2013a, b, c) introduced solutions for the integer FPP and the complex-variable FPP based on Swarm Intelligence under uncertainty.

The purpose of this paper is to investigate the solution of the FPP using Swarm Intelligence. The remainder of this paper is organized as follows. “Methodology” introduces the Swarm Intelligence methodology. Illustrative examples and a discussion of the results are presented in “Illustrative examples with discussion and results”. “Industry applications” introduces industry applications. Finally, conclusions are presented in “Conclusions”.

Methodology

Swarm Intelligence (SI) is a research area inspired by observing the naturally intelligent behavior of biological agent swarms within their environments. SI algorithms have provided effective solutions to many real-world optimization problems that are NP-hard in nature. This study investigates the effectiveness of employing two relatively recent SI metaheuristic algorithms in providing solutions to FPPs. The algorithms investigated are Particle Swarm Optimization (PSO) and the Firefly Algorithm (FA). Brief descriptions of these algorithms are given in the subsections below.

1. Particle Swarm Optimization (PSO)

PSO (Yang 2011) is a population-based stochastic optimization technique developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling.

The characteristics of PSO can be represented as follows:

  • $x_i^k$: the current position of particle i at iteration k;

  • $v_i^k$: the current velocity of particle i at iteration k;

  • $y_i^k$: the personal best position of particle i at iteration k;

  • $\hat{y}_i^k$: the neighborhood best position of particle i at iteration k.

The velocity update step is specified for each dimension $j \in \{1, \ldots, N_d\}$; hence, $v_{i,j}$ represents the $j$th element of the velocity vector of the $i$th particle. Thus the velocity of particle i is updated using the following equation:

$$v_i(t+1) = w\,v_i(t) + c_1 r_1(t)\,\big(y_i(t) - x_i(t)\big) + c_2 r_2(t)\,\big(\hat{y}_i(t) - x_i(t)\big)$$
(3)

where $w$ is the weighting function (inertia weight), $c_1$ and $c_2$ are weighting coefficients, and $r_1(t)$, $r_2(t)$ are random numbers between 0 and 1. The current position (searching point in the solution space) is then modified by the following equation:

$$x_i(t+1) = x_i(t) + v_i(t+1)$$
(4)
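As a minimal sketch (assuming NumPy arrays for positions and velocities; the coefficient defaults are placeholders, not the settings used in the experiments), Eqs. (3) and (4) can be written as:

```python
import numpy as np

def pso_update(x, v, y, y_hat, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO step: velocity update of Eq. (3) followed by the position update of Eq. (4).

    x, v  : (n_particles, n_dims) arrays of current positions and velocities
    y     : personal best positions, y_hat : neighborhood (global) best position
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)   # r1(t) ~ U(0, 1), drawn per particle and per dimension
    r2 = rng.random(x.shape)   # r2(t) ~ U(0, 1)
    v_new = w * v + c1 * r1 * (y - x) + c2 * r2 * (y_hat - x)   # Eq. (3)
    x_new = x + v_new                                           # Eq. (4)
    return x_new, v_new
```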

Penalty functions

In the penalty function method, the constrained optimization problem is transformed into an unconstrained one by incorporating the constraints into the objective function, so that an unconstrained optimization method can be applied:

$$\text{Fitness} = f(x) + \text{Penalty}(\text{constraint violation error}).$$
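A minimal sketch of this penalized fitness is given below; the quadratic penalty form and the penalty weight are assumptions, since the paper does not specify them:

```python
def penalized_fitness(f, ineqs, eqs, x, rho=1e6):
    """Fitness = f(x) + penalty term (quadratic penalty assumed; weight rho is a placeholder).

    ineqs: callables h_k written so that feasibility means h_k(x) <= 0
    eqs:   callables m_j written so that feasibility means m_j(x) == 0
    """
    violation = sum(max(0.0, h(x)) ** 2 for h in ineqs)   # penalize h_k(x) > 0
    violation += sum(m(x) ** 2 for m in eqs)              # penalize m_j(x) != 0
    return f(x) + rho * violation   # for minimization; use f(x) - rho*violation when maximizing
```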

The detailed steps of the PSO algorithm are given below:

Step 1 Initialize parameters and population.

Step 2: Initialization Randomly set the positions and velocities of all particles, within pre-defined ranges and over D dimensions, in the feasible space (i.e., satisfying all the constraints).

Step 3: Velocity updating At each iteration, the velocities of all particles are updated according to Eq. (3). After updating, $v_i$ should be checked and maintained within a pre-specified range to avoid aggressive random walking.

Step 4: Position updating Assuming a unit time interval between successive iterations, the positions of all particles are updated according to Eq. (4). After updating, $x_i$ should be checked and limited to the allowed range.

Step 5: Memory updating Update $y_i$ and $\hat{y}_i$ as follows:

$$y_i(t+1) = \begin{cases} y_i(t) & \text{if } f(x_i(t+1)) \ge f(y_i(t)), \\ x_i(t+1) & \text{if } f(x_i(t+1)) < f(y_i(t)), \end{cases}$$

where $f(x)$ is the objective function to be minimized (for a maximization problem the inequalities are reversed); $\hat{y}_i$ is updated analogously over the particle's neighborhood.

Step 6: Termination checking Repeat Steps 3–5 until the defined termination conditions are met, such as reaching a pre-defined number of iterations or failing to make progress for a fixed number of iterations.
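Putting Steps 1–6 together, a compact global-best PSO loop for a minimization problem might look as follows; population size, iteration count, and coefficient values are placeholders, not the settings reported later for the experiments:

```python
import numpy as np

def pso_minimize(fitness, lb, ub, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO sketch for min fitness(x) subject to box bounds lb <= x <= ub."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size

    # Steps 1-2: initialize positions and velocities inside the bounds
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    y = x.copy()                                   # personal bests
    y_val = np.array([fitness(p) for p in x])
    g = y[np.argmin(y_val)].copy()                 # neighborhood (global) best

    for _ in range(n_iter):
        # Step 3: velocity update, Eq. (3), clamped to the box width
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (y - x) + c2 * r2 * (g - x)
        v = np.clip(v, -(ub - lb), ub - lb)
        # Step 4: position update, Eq. (4), kept inside the allowed range
        x = np.clip(x + v, lb, ub)
        # Step 5: memory update
        f_val = np.array([fitness(p) for p in x])
        improved = f_val < y_val
        y[improved], y_val[improved] = x[improved], f_val[improved]
        g = y[np.argmin(y_val)].copy()
    # Step 6: termination after a fixed number of iterations
    return g, y_val.min()
```

For instance, `pso_minimize(lambda p: (p[0] + p[1] + 1) / (2*p[0] - p[1] + 3), [0, 0], [1, 1])` targets the benchmark $f_3$ listed later, whose reported optimum is about 0.333 at (0, 0).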

2. Firefly Algorithm (FA)

The FA (Yang 2011) is based on the following idealized behavior of the flashing characteristics of fireflies.

All fireflies are unisex so that one firefly is attracted to other fireflies regardless of their sex.

Attractiveness is proportional to their brightness, thus for any two flashing fireflies, the less bright one will move towards the brighter one. The attractiveness is proportional to the brightness and they both decrease as their distance increases. If no one is brighter than a particular firefly, it moves randomly.

The brightness or the light intensity of a firefly is affected or determined by the landscape of the objective function to be optimized.

The detailed steps of the FA are given below:

Step 1 Define the objective function $f(x)$, $x = (x_1, x_2, \ldots, x_d)$, and generate an initial population of fireflies placed at random positions $x_i$ within the d-dimensional search space. Define the light absorption coefficient γ.

Step 2 Define the light intensity of each firefly, $L_i$, as the value of the cost function at $x_i$.

Step 3 For each firefly $x_i$, the light intensity $L_i$ is compared with the intensity $L_j$ of every other firefly $x_j$ in the population.

Step 4 If $L_i$ is less than $L_j$, then move firefly $x_i$ towards $x_j$ in d dimensions.

The value of the attractiveness between fireflies varies with the distance r between them:

$$x_i^{t+1} = x_i^{t} + \beta \exp\!\left(-\gamma r^{2}\right)\left(x_j^{t} - x_i^{t}\right) + \alpha\,\epsilon_i^{t}$$
(5)

where β is the attractiveness at $r = 0$; the second term is due to the attraction, while the third term is randomization, with the vector of random numbers $\epsilon_i$ drawn from a Gaussian distribution and $\alpha \in [0, 1]$. The distance between any two fireflies i and j at $x_i$ and $x_j$ can be taken as the Cartesian distance ($l_2$ norm). A code sketch of this move is given after the step list below.

Step 5 Calculate the new value of the cost function for each firefly $x_i$ and update the light intensity $L_i$.

Step 6 Rank the fireflies and determine the current best.

Step 7 Repeat Steps 2–6 until definite termination conditions are met, such as a pre-defined number of iterations or a failure to make progress for a fixed number of iterations.
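A minimal sketch of the move in Eq. (5) inside the Step 2–6 loop is given below for a minimization problem (a lower cost plays the role of a brighter firefly); the defaults for β, γ, α and the population size are placeholders, not the experimental settings reported later:

```python
import numpy as np

def firefly_minimize(cost, lb, ub, n_fireflies=25, n_iter=200,
                     beta0=1.0, gamma=1.0, alpha=0.25, seed=0):
    """Firefly Algorithm sketch for min cost(x) subject to box bounds lb <= x <= ub."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, (n_fireflies, dim))        # Step 1: random initial positions
    intensity = np.array([cost(p) for p in x])         # Step 2: light intensity L_i

    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:        # Steps 3-4: x_j is "brighter"
                    r = np.linalg.norm(x[i] - x[j])            # Cartesian (l2) distance
                    beta = beta0 * np.exp(-gamma * r ** 2)     # attractiveness
                    eps = rng.normal(size=dim)                 # Gaussian randomization
                    x[i] = np.clip(x[i] + beta * (x[j] - x[i]) + alpha * eps, lb, ub)  # Eq. (5)
                    intensity[i] = cost(x[i])          # Step 5: update L_i
    best = int(np.argmin(intensity))                   # Step 6: rank and keep the current best
    return x[best], intensity[best]
```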

Illustrative examples with discussion and results

Ten diverse examples were collected from the literature to demonstrate the efficiency and robustness of the proposed approach in solving FPPs. The obtained numerical results are compared with those reported in the corresponding references; some examples ($f_1$ and $f_3$) were also solved using an exact method. Table 1 shows the comparison results. The algorithms were implemented in MATLAB R2011. The FA simulation parameter settings are: population size, 50; α (randomness), 0.25; minimum value of β, 0.20; γ (absorption), 1.0; iterations, 500. The PSO settings are: population size, 50; inertia weight decreasing from 0.9 ($w_{\max}$) to 0.4 ($w_{\min}$) over the iterations; $c_1 = 0.12$ and $c_2 = 1.2$; iterations, 500.

Table 1 Comparison results of the SI with other methods

The functions corresponding to the different examples listed in Table 1 are as follows:

$$f_1: \quad \max z = \frac{4x + 2y + 10}{x + 2y + 5}$$

subject to $x + 3y \le 30$; $-x + 2y \le 5$; $x, y \ge 0$.

$$f_2: \quad \min z = \left(\frac{x+y+1}{x+y+2}\right)^{1.5} \times \left(\frac{x+y+3}{x+y+4}\right)^{2.1} \times \left(\frac{x+y+5}{x+y+6}\right)^{1.2} \times \left(\frac{x+y+7}{x+y+8}\right)^{1.1}$$

subject to $x - y = 0$;    $1 \le x \le 2$;    $x, y \ge 0$.

$$f_3: \quad \min z = \frac{x + y + 1}{2x - y + 3}$$

subject to $0 \le x \le 1$;    $0 \le y \le 1$.

$$f_4: \quad \max z = \frac{8x + 7y - 2.33\,(9x^2 + 4y^2)^{0.5}}{20x + 12y - 2.33\,(3x^2 + 2xy + 4y^2)^{0.5}}$$

subject to $2x + y \le 18$;    $x + 2y \le 16$;    $x, y \ge 0$.

$$f_5: \quad \max z = \frac{2x + y}{x + 2y}$$

subject to $2x + y \le 6$;    $3x + y \le 8$;    $-x + y \ge -1$;    $x, y \ge 1$.

$$f_6: \quad \max z = \frac{-x^2 + 3x - y^2 + 3y + 3.5}{x + 1} + \frac{y}{x^2 - 2x + y^2 - 8y + 20}$$

subject to $2x + y \le 6$;    $3x + y \le 8$;    $-x + y \ge -1$;    $1 \le x \le 2.25$;    $1 \le y \le 4$.

$$f_7: \quad \max z = \frac{-x^2 y^{0.5} + 2xy^{-1} - y^2 + 2.8x^{-1}y + 7.5}{xy^{1.5} + 1} + \frac{y + 0.1}{-x^2 y^{-1} - 3x^{-1} + 2xy^2 + 9y^{-1} + 12}$$

subject to $2x^{-1} + xy \le 4$;    $x + 3x^{-1}y \le 5$;    $x^2 - 3y^3 \le 2$;    $1 \le x \le 3$;    $1 \le y \le 3$.

$$f_8: \quad \max z = \frac{37x + 73y + 13}{13x + 13y + 13} + \frac{63x - 18y + 39}{13x + 26y + 13}$$

subject to $5x + 3y = 3$;    $1.5 \le x \le 3$;    $x, y \ge 0$.

$$f_9: \quad \min z = \frac{2x + y}{x + 10} + 2y + 10$$

subject to $-x^2 - y^2 + 3 \le 0$;    $-x^2 - y^2 + 8y - 3 \le 0$;    $2x + y \le 6$;    $3x + y \le 8$;    $x - y \le 1$;    $1 \le x \le 3$;    $1 \le y \le 4$.

$$f_{10}: \quad \max z = \left(\frac{13x + 13y + 13}{37x + 73y + 13}\right)^{-1.4} \times \left(\frac{64x - 18y + 39}{13x + 26y + 13}\right)^{1.2} - \left(\frac{x + 2y + 5v + 50}{x + 5y + 5v + 50}\right)^{0.5} \times \left(\frac{x + 2y + 4v + 50}{5y + 4v + 50}\right)^{1.1}$$

subject to $2x + y + 5v \le 10$;    $5x - 3y = 0$;    $1.5 \le x \le 3$;    $x, y, v \ge 0$.
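To show how one of these benchmarks can be wired to the penalized-fitness scheme from “Methodology”, the sketch below encodes $f_6$ (as reconstructed above) with a quadratic penalty; the penalty weight and the negation used to turn maximization into minimization are assumptions, not details given in the paper:

```python
def f6(x, y):
    """Benchmark f6: a sum of two ratios, to be maximized."""
    return (-x**2 + 3*x - y**2 + 3*y + 3.5) / (x + 1) \
           + y / (x**2 - 2*x + y**2 - 8*y + 20)

def f6_penalized(p, rho=1e6):
    """Negated f6 plus a quadratic penalty, so that a minimizer (PSO/FA) can be applied."""
    x, y = p
    h = [2*x + y - 6, 3*x + y - 8, x - y - 1]     # constraints rewritten in h_k(x, y) <= 0 form
    violation = sum(max(0.0, hk) ** 2 for hk in h)
    return -f6(x, y) + rho * violation            # box bounds 1<=x<=2.25, 1<=y<=4 handled by the solver

print(f6(1.0, 1.7438))   # close to the optimum of about 4.0608 reported later in this section
```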

The numerical results obtained using the PSO and FA techniques are compared with assorted exact methods and metaheuristic techniques, as shown in Table 1. Four exact methods were selected for solving the ten benchmark functions and carrying out the comparison: the C.C. (Charnes–Cooper) transformation, the Dinkelbach algorithm, goal setting and approximation, and global optimization. Neural networks and harmony search are the two other metaheuristic intelligent techniques included in the comparison. The same comparisons were carried out for the numerical solutions of all ten functions. PSO and FA proved their capability of obtaining the optimal solution for all the test functions. The results obtained by PSO and FA are almost identical to those obtained using the exact methods. PSO and FA also gave better results than the other intelligent techniques, such as neural networks and harmony search (e.g., for $f_3$ and $f_{10}$). Finally, PSO and FA managed to give solutions to problems that could not be solved with exact methods because of the difficulty of the mathematical calculations for complex nonlinear forms. Figures 1 and 2 are sample plots of one minimization and one maximization result, respectively. Figure 1a shows the optimized objective value of 0.333 for function $f_3$, where the blue dots in the objective space represent the swarm particles searching for the minimum. The same swarms, in the same colors, can be observed in the decision-variable spaces of Fig. 1b, c, converging to the optimal decision variables (0, 0) and (0, 0) using the FA and PSO algorithms, respectively. Figure 2a shows the optimized objective value of 4.0608 for function $f_6$, where the red dots in the objective space represent the swarm particles searching for the maximum. The same swarms, in the same colors, can be observed in the decision-variable spaces of Fig. 2b, c, converging to the optimal decision variables (1, 1.7438) and (1, 1.7377) using the FA and PSO algorithms, respectively.

Fig. 1 Swarm distributions searching for the optimal value of $f_3$

Fig. 2 Swarm distributions searching for the optimal value of $f_6$

Industry applications

A. Design of a gear train

A gear train problem is selected from Deb and Srinivasan (2006) and Shen et al. (2011); it is shown in Fig. 3 below. A compound gear train is to be designed to achieve a specific gear ratio between the driver and driven shafts. It is a pure integer fractional optimization problem used to validate the integer-handling mechanism. The gear ratio of a gear train is defined as the ratio of the angular velocity of the output shaft to that of the input shaft. It is desirable to produce a gear ratio as close as possible to $1/6.931$. For each gear, the number of teeth must be between 12 and 60. The design variables $T_a$, $T_b$, $T_d$, and $T_f$ are the numbers of teeth of the gears a, b, d, and f, respectively, which must be integers:

$$\bar{x} = (T_d, T_b, T_a, T_f)^{T}.$$

The optimization problem is expressed as:

Fig. 3 A gear train

$$\min z = \left(\frac{1}{6.931} - \frac{T_d\,T_b}{T_a\,T_f}\right)^{2} = \left(\frac{1}{6.931} - \frac{x_1 x_2}{x_3 x_4}\right)^{2}, \quad \text{subject to } 12 \le x_i \le 60, \; i = 1, 2, 3, 4.$$

The constraint ensures that the error between the obtained gear ratio and the desired gear ratio is not more than 50 % of the desired gear ratio.
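A minimal sketch of the gear-train objective with a simple rounding-based integer handling is given below; the rounding scheme is an assumption, since the paper does not spell out how integrality is enforced inside PSO/FA:

```python
import numpy as np

TARGET_RATIO = 1 / 6.931                      # desired gear ratio

def gear_train_error(x):
    """Squared error between the achieved and the desired gear ratio.

    x = (Td, Tb, Ta, Tf); teeth counts are forced to integers in [12, 60].
    """
    td, tb, ta, tf = np.clip(np.round(x), 12, 60)
    return (TARGET_RATIO - (td * tb) / (ta * tf)) ** 2

# Sanity check with a teeth combination known from the gear-train literature:
print(gear_train_error([19.0, 16.0, 43.0, 49.0]))   # about 2.7e-12
```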

Fig. 4 Memory usage indicator

Table 2 Result comparisons between FA and PSO on gear train problem

The detailed accuracy performance of the FA and PSO solutions is listed in Table 2. The comparison is held in terms of the best, error, mean, and standard deviation values, obtained over 20 independent runs. The table also shows the best optimization value, the convergence time, and the amount of addressed memory resources. Referring first to the obtained optimization value indicates a better achievement for FA. On the other hand, the convergence time and the amount of addressed memory resources indicate a better achievement for PSO. The PSO algorithm utilizes a memory amount of 483–484, while the FA algorithm utilizes a memory amount of 492–494, as shown in Fig. 4.

B. Proportional integral derivative (PID) controller

Proportional integral derivative (PID) controllers are widely used in automation equipment in industry; a generic closed-loop system is shown in Fig. 5 below. They are easy to design and implement, and they are applied in most industrial control systems: process control, motor drives, magnetic applications, etc.

Correct implementation of the PID depends on the specification of three parameters: proportional gain ($K_p$), integral time ($T_i$), and derivative time ($T_d$). These three parameters are often tuned manually by trial and error, which makes the time needed to accomplish the task a major problem. The fractional-order PID controller parameter vector is ($K_p$, $T_i$, $T_d$, λ, δ); the classical PID controller is the special case of the fractional-order PID controller obtained by simply setting λ = δ = 1.

Assume that the system is modeled by an nth-order process with time delay L:

$$G_p(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0}\, e^{-Ls}$$
(6)

Here, we assume n > m and the system (6) is stable. The fractional-order PID controller has the following transfer function:

$$G_C(s) = K_p + T_i s^{-\lambda} + T_d s^{\delta}$$

The optimization problem is summarized as follows:

$$\min z = z(K_p, T_i, T_d, \lambda, \delta)$$

$$\text{subject to} \quad L \le (K_p, T_i, T_d, \lambda, \delta) \le U,$$

where z, L, and U are given by the designer. Note that the constraint is introduced to guarantee the stability of the closed-loop system. The values of the five design parameters ($K_p$, $T_i$, $T_d$, λ, δ) are then directly determined by solving the above optimization problem.

Fig. 5 Generic closed-loop system

Simulation example

Consider the following transfer function, presented in Maiti et al. (2008):

$$G_p(s) = \frac{1}{0.8 s^{2.2} + 0.5 s^{0.9} + 1}.$$

The initial parameters are chosen randomly in the following ranges: $K_p \in [1, 1000]$; $T_i \in [1, 500]$; $T_d \in [1, 500]$; $\lambda \in [0, 2]$; $\delta \in [0, 2]$. We want to design a controller such that the closed-loop system has a maximum peak overshoot $M_p = 10\,\%$ and a rise time $t_{\text{rise}} = 0.3$ s. This translates to a damping ratio $\xi = 0.65$ and an undamped natural frequency $\omega_0 = 2.2\ \text{s}^{-1}$. We then find the positions of the dominant poles of the closed-loop system:

$$P_{1,2} = -\xi\omega_0 \pm j\,\omega_0\sqrt{1 - \xi^{2}}.$$

The dominant poles of the closed-loop controlled system should therefore lie at $-1.43 + j1.67$ and $-1.43 - j1.67$. For $p_1 = -1.43 + j1.67$, the characteristic equation is:

$$1 + \frac{K_p + T_i(-1.43 + j1.67)^{-\lambda} + T_d(-1.43 + j1.67)^{\delta}}{0.8(-1.43 + j1.67)^{2.2} + 0.5(-1.43 + j1.67)^{0.9} + 1} = 0.$$
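The dominant-pole locations quoted above, and the characteristic equation they lead to, can be checked numerically with a short sketch (only the pole computation and the characteristic function are shown; the search over $K_p, T_i, T_d, \lambda, \delta$ is left to the PSO/FA routines):

```python
xi, w0 = 0.65, 2.2                               # damping ratio, undamped natural frequency
p1 = complex(-xi * w0, w0 * (1 - xi**2) ** 0.5)  # dominant pole
print(p1)                                        # approximately (-1.43 + 1.67j), as stated above

def plant(s):
    """G_p(s) = 1 / (0.8 s^2.2 + 0.5 s^0.9 + 1), the example of Maiti et al. (2008)."""
    return 1.0 / (0.8 * s**2.2 + 0.5 * s**0.9 + 1.0)

def char_eq(s, Kp, Ti, Td, lam, delta):
    """Value of 1 + G_C(s) G_p(s); a good parameter set drives this to zero at s = p1."""
    Gc = Kp + Ti * s**(-lam) + Td * s**delta
    return 1.0 + Gc * plant(s)
```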

Fig. 6 Closed-loop unit step response

Table 3 Results for the integer order PID
Table 4 Results for the fractional order PID

Tables 3 and 4 list the optimized PID controller parameters calculated using PSO, FA, and the algorithm of Maiti et al. (2008). Table 3 gives only the integer-order PID controller parameters, obtained when the variables λ and δ are set to 1; it can be observed that the parameters optimized with the FA algorithm generate the best control step response, as illustrated in Fig. 6. It can also be concluded from the same figure that the PSO algorithm with tuned parameters produces a better step response than that of Maiti et al. (2008). Table 4 presents the optimized parameters of the fractional-order PID controller, where the same remarks can be made as for the integer-order case.

Conclusions

The paper presents a new approach to solving FPPs based on two Swarm Intelligence (SI) algorithms: PSO and FA. Ten benchmark problems were solved using the two SI algorithms as well as many previous approaches. The results obtained with the two SI algorithms were compared with those of the exact and metaheuristic approaches previously used for handling FPPs. The two algorithms proved their effectiveness, reliability, and competence in solving different FPPs. The two SI algorithms managed to successfully solve large-scale FPPs with an optimal solution at a finite point and an unbounded constraint set. The computational results showed that SI turned out to be superior to the other approaches in all the accomplished tests, yielding a higher and much faster-growing mean fitness at lower computational time. Better memory utilization was obtained using the PSO algorithm compared with the FA algorithm. Two industrial application problems were solved, demonstrating the superiority of the FA algorithm over the PSO algorithm in reaching a better optimized solution: a better optimized ratio with an essentially zero gear-ratio error was obtained in the gear train application, and a better control response was obtained in the PID controller application. In both applications, the best results were acquired using the two SI algorithms, with an advantage to the FA algorithm in optimization quality and an advantage to the PSO algorithm in computational time.