The field of optimization has continuously evolved in forms and techniques over the years. This is natural in light of the ever-changing problems that have surfaced as technology has advanced. The complexity of the problems being tackled has increased tremendously, driven primarily by the scale, intensity, and dynamic nature of the problem domains. Complexity can also arise from multiple conflicting objectives that must be optimized simultaneously. While algorithms inspired by various metaphors of nature have emerged and shown great potential in many areas of problem solving, it is now evident that no single tool is best suited for every job. As such, one clear avenue of enhancement on the algorithmic front is to marry different techniques so as to draw on the strengths of existing optimization methods. The five papers in this issue build on this theme to advance its cause.

The first paper, by Moritz and Middendorf, describes the dynamic formation of groups of agents with differing reconfigurable capabilities in a resource-collection task. The main thrust of this work is dynamic self-organization. The problem involves finding a partition of the agents into groups such that a utilization function is maximized; the idea is to encourage the formation of groups in which the capabilities of the agents complement each other. The authors conclude that cooperative behavior based on the simple strategy of each agent determining its own workload results in adaptive behavior that benefits the whole group and, hence, the overall performance of the system. A toy illustration of such utility-driven group formation is sketched below.
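
As a minimal, hypothetical sketch (not the authors' algorithm), the snippet below greedily partitions agents into fixed-size groups under a toy utility that rewards complementary capabilities; the capability model, utility function, and greedy rule are illustrative assumptions only.

```python
# Illustrative sketch only: a toy greedy partition of agents into groups
# whose combined capabilities complement each other. The utility function,
# agent model, and greedy rule are hypothetical and are NOT the
# self-organizing mechanism studied by Moritz and Middendorf.
from itertools import combinations

CAPABILITIES = ("collect", "transport", "scout")

def group_utility(group):
    """Toy utility: a group is worth the number of distinct capabilities
    it covers, so complementary agents are rewarded."""
    covered = set()
    for agent in group:
        covered.update(agent["caps"])
    return len(covered)

def greedy_partition(agents, group_size=3):
    """Greedily form groups of fixed size that maximize the toy utility."""
    remaining = list(agents)
    groups = []
    while len(remaining) >= group_size:
        best = max(combinations(remaining, group_size), key=group_utility)
        groups.append(list(best))
        for agent in best:
            remaining.remove(agent)
    if remaining:                      # leftover agents form a final group
        groups.append(remaining)
    return groups

agents = [{"id": i, "caps": {CAPABILITIES[i % 3]}} for i in range(9)]
for g in greedy_partition(agents):
    print([a["id"] for a in g], "utility:", group_utility(g))
```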

Along the lines of multi-objective optimization, the second paper, by Prakash and Singh, deals with hard partitional clustering, an important task in data mining with far-reaching applications across many disciplines. The authors present a two-stage diversity mechanism within a multi-objective particle swarm optimization framework. A crossover operator enhances the exploratory capability of the swarm, and the final clustering is selected from the set of non-dominated solutions according to chosen quality metrics. The authors demonstrate the effectiveness of their approach on real datasets from the UCI repository against established methods. The results represent a step forward in clustering under multiple conflicting objectives; the sketch below illustrates the underlying notion of non-dominated clusterings.
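
As a hedged illustration of the multi-objective view of clustering, the following sketch filters candidate clusterings by Pareto dominance under two hypothetical objectives (within-cluster compactness and the number of clusters); these objectives and the random partitions are stand-ins, not the criteria or the PSO mechanics used by the authors.

```python
# Illustrative sketch only: Pareto-dominance filtering of candidate
# clusterings under two conflicting objectives. The objectives used here
# (within-cluster compactness vs. number of clusters) are hypothetical
# stand-ins for the criteria of Prakash and Singh.
import numpy as np

def compactness(data, labels):
    """Sum of squared distances of points to their cluster centroid (minimize)."""
    total = 0.0
    for k in np.unique(labels):
        members = data[labels == k]
        total += ((members - members.mean(axis=0)) ** 2).sum()
    return total

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(candidates):
    """Keep only candidates whose objective vectors are not dominated."""
    return [c for c in candidates
            if not any(dominates(o["objs"], c["objs"]) for o in candidates)]

rng = np.random.default_rng(0)
data = rng.normal(size=(60, 2))
candidates = []
for k in (2, 3, 4, 5):
    labels = rng.integers(0, k, size=len(data))   # random partition as a stand-in
    candidates.append({"k": k, "objs": (compactness(data, labels), k)})
for c in non_dominated(candidates):
    print("k =", c["k"], "objectives:", c["objs"])
```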

The next paper, by Freire et al., deals with enhancing multi-objective optimization. The field of multi-objective evolutionary algorithms (MOEAs) has made significant progress over the years, yet it is well known that the performance of such algorithms tends to deteriorate as the number of conflicting objectives increases. The authors study the behavior of three MOEAs (NSGA-II, SMPSO, and GDE3) when corner solutions, derived using a multi-objective PSO, are injected into the population at different stages of evolution. They show the effect of these corner solutions on the three approaches across five benchmark problems; the general idea of corner injection is sketched below.
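
The sketch below illustrates corner injection in a simplified form: the "corners" are approximated by a random search that optimizes each objective individually and then replaces the worst members of the population. The toy bi-objective problem and the single-objective search are assumptions for illustration; the paper itself derives corner solutions with a multi-objective PSO.

```python
# Illustrative sketch only: injecting "corner" solutions -- approximated here
# as the best solution found for each objective taken individually -- into an
# evolving population. The random search below is a hypothetical stand-in for
# the multi-objective PSO used by Freire et al.
import random

def objectives(x):
    """A toy bi-objective problem on [0, 1]: f1 = x^2, f2 = (x - 1)^2."""
    return (x * x, (x - 1.0) ** 2)

def approximate_corner(obj_index, samples=2000):
    """Approximate the corner for one objective by random search."""
    return min((random.random() for _ in range(samples)),
               key=lambda x: objectives(x)[obj_index])

def inject_corners(population, n_objectives=2):
    """Replace the worst individuals (by summed objectives) with corner solutions."""
    corners = [approximate_corner(i) for i in range(n_objectives)]
    ranked = sorted(population, key=lambda x: sum(objectives(x)))
    return ranked[: len(population) - len(corners)] + corners

random.seed(1)
population = [random.random() for _ in range(10)]
population = inject_corners(population)
print("population after injection:", [round(x, 3) for x in sorted(population)])
```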

Lalwani, Kumar, and Gupta present a two-level particle swarm optimization algorithm for the multiple sequence alignment problem for proteins in bioinformatics. The first level maximizes the number of matched columns, while the second level maximizes pairwise similarity. Based on simulation results on benchmark datasets, they report that the proposed algorithm achieves very good prediction accuracy on sequences with low average pairwise identity scores, significantly outperforming several state-of-the-art algorithms. The two alignment objectives are illustrated in the sketch below.
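
The following sketch computes the two objectives named above for a toy protein alignment; the identity-based sum-of-pairs scoring and the example sequences are simplifying assumptions, not the authors' exact fitness functions.

```python
# Illustrative sketch only: the two alignment objectives mentioned above,
# computed for a toy alignment. The scoring scheme (identity-based sum of
# pairs) is a simplified stand-in for the authors' fitness functions.
from itertools import combinations

def matched_columns(alignment):
    """Count columns in which all non-gap residues are identical."""
    count = 0
    for column in zip(*alignment):
        residues = [c for c in column if c != "-"]
        if len(residues) > 1 and len(set(residues)) == 1:
            count += 1
    return count

def sum_of_pairs(alignment, match=1, mismatch=0, gap=-1):
    """Pairwise similarity: score every pair of sequences column by column."""
    score = 0
    for s1, s2 in combinations(alignment, 2):
        for a, b in zip(s1, s2):
            if a == "-" or b == "-":
                score += gap
            elif a == b:
                score += match
            else:
                score += mismatch
    return score

alignment = ["MK-LVT", "MKALV-", "MK-LVS"]
print("matched columns:", matched_columns(alignment))
print("sum-of-pairs similarity:", sum_of_pairs(alignment))
```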

The final paper in this issue presents a quantum-inspired evolutionary algorithm (QIEA) for the 0/1 knapsack combinatorial optimization problem. Building on the notions of quantum bits, superposition of states, and quantum gates, the authors evolve a population of qubit individuals to solve a class of difficult knapsack instances. They propose improvements, termed QIEA-PSA, which initialize and repair the collapsed qubit individuals using heuristic information about the instance, reduce the problem size, and re-initialize the population of local best solutions at each new generation. Validation of the improved algorithm on large knapsack instances shows promising results. The basic QIEA machinery is sketched below.
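
A minimal sketch of the underlying QIEA ingredients is given below, assuming a small example instance: qubit individuals stored as probability amplitudes, observation (collapse) into binary solutions, and a greedy value-to-weight repair to satisfy the capacity constraint. The QIEA-PSA enhancements (problem-size reduction and re-initialization of local bests) are not reproduced here.

```python
# Illustrative sketch only: basic QIEA machinery for the 0/1 knapsack --
# qubit individuals held as probability amplitudes, "observation" (collapse)
# into a binary solution, and a greedy ratio-based repair to respect capacity.
# The QIEA-PSA enhancements described above are not reproduced here.
import math
import random

values   = [60, 100, 120, 80, 30]
weights  = [10, 20, 30, 25, 5]
capacity = 50

def new_qubit_individual(n):
    """Each bit starts in equal superposition: amplitude alpha = 1/sqrt(2)."""
    return [1.0 / math.sqrt(2.0)] * n

def observe(individual):
    """Collapse each qubit to 0/1 with probability |alpha|^2 of measuring 1."""
    return [1 if random.random() < alpha ** 2 else 0 for alpha in individual]

def repair(solution):
    """Drop items with the worst value/weight ratio until capacity is met."""
    solution = solution[:]
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i])
    while sum(w for i, w in enumerate(weights) if solution[i]) > capacity:
        for i in order:
            if solution[i]:
                solution[i] = 0
                break
    return solution

random.seed(42)
q = new_qubit_individual(len(values))
x = repair(observe(q))
print("solution:", x,
      "value:", sum(v for i, v in enumerate(values) if x[i]),
      "weight:", sum(w for i, w in enumerate(weights) if x[i]))
```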

While these papers demonstrate the potential of such a marriage of techniques to enhance problem-solving capability, more research is needed to produce more convincing results. As editors of this special issue, we thank the authors for their contributions. We are also very grateful to the referees for the valuable time they spent reviewing the manuscripts.