The no-free-lunch theorem (Wolpert and Macready 1997) establishes an important limitation of global optimization algorithms: improving an algorithm's performance on one class of problems is likely to degrade its performance on others. This may explain why there are so many more popular global optimization algorithms than local optimization algorithms.

Consequently, when we propose a new or improved global optimization algorithm, it should be targeted at a particular application or set of applications rather than tested against a fixed set of benchmark problems. For a paper describing such an algorithm in Structural and Multidisciplinary Optimization, the target should be an application involving solid or fluid mechanics models that is not solved well by existing algorithms. The large number of available global optimization algorithms has made this difficult to demonstrate.

Problems whose objective function and constraints are differentiable are unlikely to satisfy this requirement, because gradient-based local optimizers are very efficient at solving such problems, even with a large number of design variables. Unless the number of local optima is very high, a multi-start execution of a local search technique is likely to find a solution more efficiently than a global search algorithm. Problems with discrete or combinatorial design variables are more likely to satisfy the requirement.
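The multi-start strategy mentioned above can be sketched as follows. The 1-D Rastrigin test function, the plain gradient-descent local optimizer, and all parameter values here are illustrative choices, not taken from the original text:

```python
import math
import random

def f(x):
    """1-D Rastrigin function (illustrative multimodal test problem):
    global minimum f(0) = 0, with local minima near every integer."""
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

def grad_f(x):
    """Analytic derivative of f."""
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

def gradient_descent(x0, lr=1e-3, steps=2000):
    """A deliberately simple gradient-based local optimizer; it converges
    to the local minimum of whichever basin the start point lies in."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def multistart(n_starts=100, low=-5.0, high=5.0, seed=0):
    """Run the local optimizer from random starting points; keep the best."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_starts):
        x = gradient_descent(rng.uniform(low, high))
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

x_star, f_star = multistart()
print(f"best design variable: {x_star:.4f}, objective: {f_star:.4f}")
```

With only a moderate number of basins of attraction, enough random starts land in the global basin that the cheap local searches together recover the global minimum; when the number of local optima grows very large, this probability argument breaks down, which is the point made above.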

Finally, many global search algorithms are stochastic in nature. To demonstrate their effectiveness, one must therefore run them multiple times so as to establish the statistical distribution of the solution as a function of the design parameters.
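A minimal sketch of this repeated-runs protocol, using pure random search (one of the simplest stochastic global methods) on the same assumed 1-D Rastrigin test function; all names and parameter values are illustrative:

```python
import math
import random
import statistics

def f(x):
    """1-D Rastrigin function (illustrative multimodal test problem)."""
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

def random_search(seed, n_samples=2000, low=-5.0, high=5.0):
    """One run of pure random search: best objective value found among
    uniformly sampled candidate designs."""
    rng = random.Random(seed)
    return min(f(rng.uniform(low, high)) for _ in range(n_samples))

# Repeat the stochastic search with independent seeds and summarize the
# distribution of returned objective values, rather than reporting a
# single (possibly lucky) run.
results = [random_search(seed) for seed in range(30)]
print(f"runs: {len(results)}, mean: {statistics.mean(results):.4f}, "
      f"stdev: {statistics.stdev(results):.4f}, worst: {max(results):.4f}")
```

Reporting the mean, spread, and worst case over independent runs (and repeating this for each setting of the design parameters) is what establishes the statistical distribution of the solution that the paragraph above calls for.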