1 Introduction

Metaheuristic techniques are used to solve hard problems (generally NP-hard problems) instead of, or in conjunction with, exact algorithms. When the problem dimension becomes very large, exact algorithms may not be useful, since they are computationally too expensive. In these cases, approximate algorithms (which do not guarantee finding an optimal solution) are a very effective alternative. Metaheuristics are approximate algorithms that encompass and combine constructive methods, local search strategies, strategies for escaping local optima, and population-based search. They include, but are not restricted to: tabu search [1], simulated annealing [2], evolutionary computation [3], memetic algorithms [4], scatter search [5], iterative improvement (hill climbing and similar methods), ant colony optimization [6], particle swarm optimization (PSO) [7], greedy randomized adaptive search procedure (GRASP) [8], iterated local search [9, 10], and variable neighborhood search [11].

Metaheuristics are strategies for designing heuristic procedures. Therefore, types of metaheuristics are defined according to the type of procedure to which they refer. Some of the fundamental types are metaheuristics for relaxation procedures, constructive processes, neighborhood searches, and evolutionary procedures.

Some metaheuristics arise by combining different types of metaheuristics, such as GRASP [8], which combines a constructive phase with a search-improvement phase. Other metaheuristics are centered on the use of some particular computational resource or special formulation, such as neural networks, ant systems, or constraint programming, and do not fit clearly into any of the four previous types.

In general, one way or another, all metaheuristics can be conceived as strategies applied to search processes. In fact, in this work, a new metaheuristic, named Fisherman Search Procedure (FSP), is introduced. This search algorithm explores new solutions using a combination of guided search and local search. This metaheuristic was designed with the purpose of developing useful and practical solutions for a wide variety of global optimization problems.

The main advantages are the following: FSP is easy to implement; it provides an explicit description of the model-based idea; and its very conception allows it to be applied to a large number of classical global optimization problems, as well as to those that arise in real-world situations in different areas of applied science, engineering, and economics.

In order to introduce our metaheuristic, as well as the experimentation carried out to evaluate its performance, the rest of the paper is organized as follows. In Sect. 2, the FSP approach is formally introduced. The parameterization phase is explained in Sect. 3. Section 4 is devoted to describing the settings of the experimental study that allows us to evaluate the performance and the computational time of the newly proposed algorithm, as well as the results obtained, comparing them with several metaheuristics referenced in the literature. In Sect. 5, four FSP extensions are proposed for further discussion. Finally, in Sect. 6, we conclude our work and point out some directions for future research.

2 FSP approach

In this section, the new metaheuristic for global optimization problems is formally described. FSP is a global optimization method inspired by the cognitive behavior and the dexterity observed in a fisherman. The method explores new solutions using a combination of guided search and local search. The algorithm is shown in Fig. 1.

Fig. 1 The FSP algorithm

Initially, we set N capture points in the whole fishing area (search space). Basically, each capture point is composed of a position vector \(x_i\) (located in the search space) and a memory \(p_i\) of the best solution found by the fisherman in the neighborhood of the capture point. Let \(x_i \in X\), where \(X = \left\{ {x_1 ,x_2 ,\dots ,x_N } \right\} \) denotes the set of position vectors of the capture points.

The fisherman's global memory is defined as \(g_{\mathrm{best}}\) (i.e., the best among the capture points' memories). Following a fishing trajectory, the fisherman throws the fishing network L times at each capture point.

The fishing network is described by a set of position vectors \(y_{ij} \in Y\), \(Y= \left\{ {y_{i1} ,y_{i2},\dots ,y_{iM} } \right\} \), created and defined starting from a reference point, where \(y_{ij}\) is the \(j\)th vector at the \(i\)th capture point, \(i = 1,2,\dots ,N\); \(j = 1,2,\dots ,M\). The value M is the number of network position vectors. Each position vector represents a knot of a traditional fishing network.

The equation used to create the network position vectors is the following:

$$\begin{aligned} y_{ij} = x_i + A_j, \end{aligned}$$
(1)

where \(A_j\) is a random vector of the same dimension as the search space, composed of random numbers in the range \([-c,c]\), where \(c\) is a real number called the width coefficient.
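
For concreteness, Eq. (1) can be realized as a single vectorized step. The following is a minimal sketch (the function name and its NumPy realization are ours, not the paper's):

```python
import numpy as np

def throw_network(x_i, M, c, rng):
    """Create the M network position vectors around the reference point x_i
    following Eq. (1): y_ij = x_i + A_j, with A_j uniform in [-c, c]."""
    n = x_i.shape[0]                     # dimension of the search space
    A = rng.uniform(-c, c, size=(M, n))  # one random offset per knot
    return x_i + A                       # array of shape (M, n): the knots y_ij
```

Here, rng is a NumPy random generator, e.g., np.random.default_rng().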

The network position vectors are evaluated, and if one of them has a fitness value better than the \(p_i\) value of the capture point from which the launching was performed, the capture point is updated with this new position, and the fishing network is redefined taking this better solution as the new reference vector. When updating the \(p_i\) value of the \(i\)th capture point, if \(p_i\) is better than \(g_{\mathrm{best}}\), the latter is also updated.

The whole fishing procedure is repeated T times. After each iteration, the improvement rate at each capture point is analyzed. If the function value at \(p_i\) for the \(i\)th capture point improves with respect to the previous iteration, we recommend decreasing the value of the width coefficient, with the idea of reducing the risk of skipping over the global minimum when the solution is close to it. This decrease of the width coefficient should be made by multiplying the value of c by a factor of 0.95, to guarantee a gradual approach to the global minimum.

This strategy pursues the intuitive idea of narrowing the holes of the fishing network to guarantee the capture. On the other hand, if the function value at \(p_i\) for the \(i\)th capture point does not improve, the width coefficient should be increased to twice its initial value, with the idea of producing an abrupt change that accelerates the exploration of new fishing areas (search spaces). In this last case, if a better function value is found, we update the \(p_i\) value for the \(i\)th capture point and reset the width coefficient to its initial value.
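
Putting the pieces together, the following is a minimal Python sketch of the basic FSP loop for minimization. The parameters N, L, M, T, and the width coefficient follow the paper; the per-capture-point bookkeeping of the width coefficient is our reading of the adaptation rule described above:

```python
import numpy as np

def fsp(f, bounds, N=100, L=5, M=100, T=100, c0=0.05, seed=0):
    """Minimal FSP sketch (minimization). bounds = (lo, hi), two arrays
    delimiting the search space; parameter defaults are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    n = lo.size
    P = rng.uniform(lo, hi, size=(N, n))        # capture points / memories p_i
    fP = np.array([f(p) for p in P])            # best value found at each point
    c = np.full(N, c0)                          # width coefficient per point

    for _ in range(T):                          # whole procedure repeated T times
        for i in range(N):
            improved = False
            for _ in range(L):                  # L throws at each capture point
                Y = P[i] + rng.uniform(-c[i], c[i], size=(M, n))   # Eq. (1)
                fY = np.array([f(y) for y in Y])
                j = np.argmin(fY)
                if fY[j] < fP[i]:               # a better knot: move the point
                    P[i], fP[i] = Y[j], fY[j]   # new reference vector
                    improved = True
            if improved:                        # narrow the holes, or reset
                c[i] = c0 if c[i] > c0 else 0.95 * c[i]   # if it had been widened
            else:
                c[i] = 2.0 * c0                 # abrupt widening: explore anew

    b = np.argmin(fP)
    return P[b], fP[b]                          # g_best and its function value
```

In this sketch, each capture point's position and its memory coincide, since the capture point is moved whenever a better knot is found.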

3 Parameterization

In the parameterization study, every parameter of the metaheuristic is evaluated and analyzed. With this goal in mind, we studied the behavior of the algorithm parameters considering De Jong's five-function test suite, which is widely used in these cases.

The configurations of these functions exhibit a balance between complexity and diversity, a balance that is necessary to study the heuristic. The dimension (n), the search region (S), and the minimum (min) of each of the functions can be observed in Table 1, and their graphs in Fig. 2.

Table 1 Benchmark functions for the parameterization phase
Fig. 2 Test functions for the parameterization

Functions \(f_1\) and \(f_2\), known as the Sphere and Rosenbrock functions, respectively, are convex unimodal functions in two dimensions. Function \(f_3\) is known as the step function; it is discontinuous and convex, with a single minimum. Function \(f_4\) corresponds to the function with Gaussian noise, which is convex, unimodal, and noisy. Finally, function \(f_5\), known as Shekel's foxholes, is a continuous, non-convex, two-dimensional, non-quadratic function [12].
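
As a reference for the reader, the standard literature forms of the first four functions can be written as follows (our transcription of the common De Jong definitions; the paper's exact settings are those of Table 1):

```python
import numpy as np

# Common literature forms of De Jong's test functions f1-f4 (f5, Shekel's
# foxholes, is omitted for brevity; it involves a fixed 2 x 25 matrix).
f1 = lambda x: np.sum(x ** 2)                                   # Sphere
f2 = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2  # Rosenbrock
f3 = lambda x: np.sum(np.floor(x))                              # Step
f4 = lambda x: np.sum(np.arange(1, x.size + 1) * x ** 4) \
     + np.random.normal()                                       # Gaussian noise
```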

Our algorithm uses five independent parameters: the number of iterations T, the number of capture points N, the number of throws L, the quantity M of position vectors that represent the fishing network, and the width coefficient c. The number of iterations is subject to a stopping criterion based on the convergence of the algorithm: a given number of iterations without improving the result terminates the process. For the tests, this criterion has been set to 10 % of the given number of iterations.
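
For instance, the stopping rule used in the tests could be expressed as follows (a hypothetical helper; the paper states the rule, not this code):

```python
def should_stop(stall_iterations, T, fraction=0.10):
    """Terminate once 10 % of the given number of iterations T have
    passed without improving the result."""
    return stall_iterations >= max(1, int(fraction * T))
```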

3.1 Capture points

The number of capture points is an important factor in the stability of the algorithm. We carried out tests of the internal heuristics considering a range from 1 to 100 capture points; the results are shown in Fig. 3. For the De Jong five-function test suite analyzed, we observed that the average of the minima found stabilizes for values between 80 and 100, which is our recommended range of values.

Fig. 3 Results for De Jong's five-function test, considering the capture points

3.2 Number of iterations and throws

The numbers of throws and iterations control the extent of the search; a small number of iterations and throws means a quick search. It is better to use the smallest numbers for which the algorithm can stabilize and find an answer within the desired boundaries.

In Table 2, we show the average value that the heuristic achieves on these test functions over 30 runs of the algorithm, for a series of guided combinations of the parameters analyzed in this subsection.

Table 2 Average values found for different numbers of iterations and throws

For functions \(f_1\) and \(f_2\), we observe that increasing the number of iterations gradually increases the precision of the algorithm. In both cases, we can also see that with five throws, the algorithm exhibits a satisfactory, stable performance.

For function \(f_3\), the algorithm always reaches the function minimum. For function \(f_4\), the different parameter combinations neither improve nor worsen the algorithm's initial precision with respect to the function minimum.

For function \(f_5\), the algorithm is not affected too much by the values of T and L and reaches the minimum in all tests.

These promising results are obtained in part because the number of capture points and the width coefficient were set according to the best ranges achieved in the tests carried out in Sects. 3.1 and 3.3.

3.3 Fishing network and the width coefficient

The adjustment of the quantity M of position vectors of the fishing network is a fundamental factor for the efficiency of the algorithm. In defining an adequate value or range for M, we have to consider the following: a big fishing network (of many position vectors) guarantees a deep exploration, but considerably increases the computational burden, because more calls to the objective function have to be made. On the other hand, if the fishing network is small, the fast convergence of the algorithm does not guarantee a good solution to the problem.

In this sense, it is very important to know the type of function and problem we are facing, and to establish an effective relation with another essential parameter, the width coefficient c.

If c is too small, the search can be slow and expensive; but if c has a big value and M is also small, the algorithm can end up in an early and imprecise convergence because of the lack of exploration in zones where an optimum might be found.

On the other hand, FSP applies a strategy to update the width coefficient c, but in this section we focus on determining the best values of c and M for the De Jong function test, that is, without incorporating any improvement strategy.

In Table 3, we show the recommended values for each parameter of the algorithm and each test function.

Table 3 Recommended parameters for the test functions

Furthermore, if we analyze the search region (S) corresponding to each function and the values of c, we can see that with an increase in the search region, the values of c tend to increase too; this is not the case for M.

4 Experimentation and evaluation

In this section, we present the experimental results of the evaluation stage of our proposal in terms of successful runs (convergence to global minimum values) and time consumption. We carried out comparisons considering other metaheuristics and interesting proposals referenced in the literature. For the evaluation, we used a set of benchmark functions broadly used to test the behavior and convergence of heuristic methods. Finally, we show and analyze the performance achieved by the different methods.

4.1 Benchmark functions

In order to evaluate the novel metaheuristic, a test suite of benchmark functions previously introduced by Molga and Smutnicki [13] was used (see Table 4). The ranges of search spaces, dimensionalities, and global minimum function values (ideal values) are also included in Table 4. In each case, n is the dimension size of the function, \(f_{\mathrm{min}}\) is its ideal value, and \(S \subset R^n\) is the search space. The problem set contains a diverse set of problems, including unimodal as well as multimodal functions, and functions with correlated and uncorrelated variables.

Functions \(f_1\)–\(f_3\) are unimodal. Functions \(f_4\) and \(f_5\) are multimodal functions where the number of local minima increases exponentially with the problem dimension. Functions \(f_6\)–\(f_{11}\) are low-dimensional functions, which have only a few local minima.

Table 4 The employed benchmark functions

4.2 Selected metaheuristics for comparison

With the intention of comparing FSP with other heuristic procedures, we selected the GRASP, DE, and PSO methods. This selection was made with the goal of evaluating the behavior of the proposed method against similar algorithms (GRASP) and against completely different ones (DE and PSO), demonstrating the potential of FSP for solving optimization problems.

GRASP [8] is a multistart metaheuristic in which each iteration consists basically of two phases: construction and local search. One possible shortcoming of the standard GRASP framework is the independence of its iterations, i.e., the fact that it does not learn from the history of solutions found in previous iterations. This is so because the basic algorithm discards information about any encountered solution that does not improve the function value. Information gathered from good solutions can be used to implement memory-based procedures to influence the construction phase, by modifying the selection probabilities associated with each element of the restricted candidate list.

PSO is a population-based stochastic optimization technique developed by Eberhart and Kennedy [7] in 1995, inspired by the social behavior of bird flocking and fish schooling. Its main advantages are as follows: it is insensitive to the scaling of design variables, simple to implement, easily parallelized for concurrent processing, and derivative free; it has very few algorithm parameters; and it is a very efficient global search algorithm. In spite of this, it has the disadvantage of slow convergence in the refined search stage (weak local search ability).
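
For concreteness, one common form of the velocity update combining an inertia weight with Clerc's constriction factor is sketched below; we use the parameter values reported in Sect. 4.3, but the code itself is our illustration, not necessarily the exact PSO-w-cf variant of [15, 16]:

```python
import numpy as np

def pso_velocity_update(v, x, p_best, g_best, w,
                        chi=0.729844, c1=2.0, c2=2.0, rng=None):
    """One particle's velocity update with inertia weight w and
    constriction factor chi (a sketch; see the caveat above)."""
    rng = rng if rng is not None else np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = chi * (w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x))
    return v_new               # the position update is then x + v_new
```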

The DE algorithm was introduced by Storn and Price [14]. DE is an optimization method belonging to the category of evolutionary computation. It maintains a population of candidate solutions, to which mutation and recombination procedures are applied to generate new individuals, which are then selected according to the value of their performance function. The main characteristic of DE is the use of trial vectors, which compete with the individuals of the current population in order to survive. One of its biggest advantages is that it overcomes the standard drawbacks of genetic algorithms: premature convergence and lack of good local search ability. However, DE sometimes has a limited ability to move its population large distances across the search space and may face stagnation.
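
For illustration, the classic DE/rand/1/bin construction of a trial vector is sketched below, using the mutation and recombination parameters reported in Sect. 4.3 (the experiments there use Storn and Price's strategy s = 4, which may differ from this basic scheme):

```python
import numpy as np

def de_trial_vector(pop, i, F=0.75, cr=0.5, rng=None):
    """Build the trial vector that competes with population member i
    (DE/rand/1/bin, shown as a generic illustration)."""
    rng = rng if rng is not None else np.random.default_rng()
    NP, n = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])   # mutation
    cross = rng.random(n) < cr                   # binomial recombination mask
    cross[rng.integers(n)] = True                # keep at least one mutant gene
    return np.where(cross, mutant, pop[i])
```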

Although FSP does not always outperform the other methods, it addresses some of their shortcomings, e.g., the independence of iterations in GRASP, the weak local search capacity of PSO, and the limited ability of DE to escape stagnation.

4.3 Evaluation

The algorithms used for comparison are GRASP [8], PSO with inertia weight and constriction factor (PSO-w-cf) [15, 16], and the differential evolution (DE) algorithm [14]. In all experiments, a total of 30 runs are made. The experimental results are listed in Table 5, where Func. = functions, Algo. = algorithms, best stands for the best function value over 30 runs, mean indicates the mean best function value, and time stands for the average CPU time (in seconds) consumed within the fixed number of generations. Succ. runs stands for the number of successes over 30 runs. All results are reported with a precision level of 1e\(-\)02.

Table 5 Performance of the algorithms

The initial population is generated uniformly and randomly in the range specified in Table 4. The parameters of PSO-w-cf are as follows: learning rates \(c_1=c_2=2\), inertia weight linearly decreased from 0.9 to 0.4 as run time increases, constriction factor \(\chi = 0.729844\), and maximum velocity \(v_{\mathrm{max}}\) set at 20 % of the dynamic range of the variable on each dimension. The parameters of DE are mutation \(f=0.75\), recombination \(\mathrm{cr}=0.5\), number of objective vectors \(\mathrm{NVO}=6\), and strategy \(s=4\).

The parameters of FSP are as follows: the number of iterations and the number of capture points are fixed between 80 and 100, the number of throws between 3 and 5, and the number of points of the fishing network at 100; the width coefficient was adjusted by the method itself, starting at 0.05, according to the characteristics of the functions.

All algorithms are run on a PC with Intel(R) Core(TM) i5 CPU 2.53 GHz.

The results of 30 independent runs for benchmark functions \(f_1\)–\(f_{11}\) are summarized in Table 5. From Table 5, FSP is successful over all 30 runs for \(f_5\), \(f_7\), and \(f_9\). For \(f_4\), it is successful in only 20 % of the runs, but it outperforms all other algorithms. For functions \(f_1\), \(f_6\), and \(f_8\), FSP always outperforms the GRASP and DE methods in successful runs. For \(f_1\), FSP is successful in 23 % of the runs and consumes less time than GRASP and DE. For \(f_6\) and \(f_8\), FSP is successful in 46 and 40 % of the runs, respectively, and consumes less time than GRASP and the same as DE. For \(f_{10}\) and \(f_{11}\), FSP is successful in 46 and 40 % of the runs, respectively, and is better than the GRASP and PSO-w-cf methods at achieving the desired accuracy. For these functions, it also consumes less time than GRASP and the same as PSO-w-cf. For \(f_2\) and \(f_3\), FSP overcomes GRASP in computational time and successful runs.

The results with the benchmark functions allow us to conclude that FSP is suitable for solving optimization problems with unimodal and multimodal functions, with satisfactory (second best) convergence ability. Compared with the classic GRASP metaheuristic, FSP has better search ability and less time consumption, expressed in terms of successful runs and total runtime, respectively, for the benchmark functions. Moreover, FSP has a competitive performance for all benchmark functions compared with PSO-w-cf and DE.

5 Future research

In this section, considering the initial conceptions of this work, we propose four extensions of FSP that can be discussed, implemented, and evaluated in future works.

In extension #1, the fisherman updates the position of a specific capture point i only when one of the fishing network's points improves the function value at \(p_i\). Under this idea, if after a throw of the net the function value at \(p_i\) does not improve, no other throw is made at the corresponding capture point until the next iteration.

Each time the fishing network is thrown and the function value at \(p_i\) for the \(i\)th capture point improves, the number of fishing network points and the width coefficient are decreased by a specific factor (0.99). In the next iteration of the method, the values of these two parameters are reset to their default values.
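
A minimal sketch of this parameter schedule, under our interpretation of the description above (names are hypothetical):

```python
def after_improving_throw(M, c, factor=0.99):
    """Shrink both the network size M and the width coefficient c by 0.99."""
    return max(1, int(factor * M)), factor * c

def at_new_iteration(M0, c0):
    """Both parameters return to their default values."""
    return M0, c0
```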

In the beginning, extension #2 is similar to extension #1, because the fisherman does not update the position of a capture point i unless one of the fishing network points (thrown from i) improves the function value at \(p_i\). While the capture point position keeps improving, the fishing network is thrown continuously and the width coefficient decreases according to the extension #1 factor.

After each iteration, those capture points whose position did not improve are restarted at a new position. This restart can be random or guided toward unexplored zones.

In extension #3, all throws of the fishing network are made at each capture point. Unlike in the other extensions, the number of throws is not conditioned on position improvements. In this case, each time the fishing network is thrown at a specific capture point, the position of this point is always updated with the best point obtained so far by the fishing network.

This idea is based on the intuition that a bigger fish (optimal solution) can be found in zones where small fish live (not-so-good solutions), since they serve as its food.

In extension #4, we start by dividing the fishing zone (search space) into subregions and sending a fisherman to each of them. Each fisherman can independently apply his own extension. This cooperative of fishermen shares a common memory where the best position (\(g_{\mathrm{best}}\)) reached by any of the fishermen is stored.
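
A minimal sketch of this cooperative scheme, assuming the fsp() sketch from Sect. 2 is available at module level; for brevity, the shared \(g_{\mathrm{best}}\) is approximated by taking the best result over all subregions at the end, rather than by a fully synchronized common memory:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def cooperative_fsp(f, subregions, **fsp_params):
    """Run one fisherman (an independent FSP instance) per subregion of the
    search space and keep the best result found. Both f and fsp must be
    defined at module level so they can be pickled across processes."""
    worker = partial(fsp, f, **fsp_params)       # calls fsp(f, bounds, ...)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(worker, subregions))
    return min(results, key=lambda r: r[1])      # best (position, value) pair
```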

Even though parallelism is not yet systematically used to speed up or to improve the effectiveness of metaheuristics, parallel implementations are very robust and abound in the literature; see, e.g., Cung et al. [17] for a survey.

In general, each extension has its advantages and disadvantages. In extension #1, the fisherman tries to delimit the best position with abrupt steps, but he can get trapped in a local minimum. In spite of its similarity to extension #1, in extension #2 the fisherman uses the restart of the capture points as the main strategy to avoid premature convergence to local minima. On the other hand, extension #3 is conceived with the idea that a trajectory passing through not-so-promising zones could take us to the best capture zones, which would be inaccessible otherwise. In extension #4, we suggest parallel implementations to enhance the robustness of the method.

Finally, we conclude that the selection and application of the mentioned FSP extensions, and others, depend on the characteristics and situations in which future optimization problems arise. We are currently working on these extensions of the basic model.

6 Conclusions

In this paper, we have proposed a new metaheuristic called Fisherman Search Procedure. The search algorithm explores new solutions using a combination of guided and local search. Our metaheuristic was designed with the purpose of developing good solutions for a wide variety of global optimization problems.

The main advantages of FSP are the following: it provides an explicit description of the model-based idea; it is easy to implement; and its conception allows it to be applied to a large number of classical global optimization problems, as well as to those that arise in real-world situations in different areas of science, technology, and business.

Finally, we conclude that, compared with the classic GRASP metaheuristic, FSP has better search ability and less time consumption for the benchmark functions, expressed in terms of successful runs and total runtime, respectively. In 90 % of the cases, FSP ranks among the two best results in terms of successful runs. On the other hand, with regard to time consumption, FSP shows results similar to PSO and DE, achieving the best or second best results for 82 % of the test functions.

As future work, we plan to analyze the quality of the solution with other benchmark test functions and validate the four proposed extensions under different conditions and optimization problems.