Nature-inspired optimization techniques have proven effective for solving a wide range of problems in parallel computing and, more generally, in computer science. In addition, the intrinsic parallelization and distribution capabilities of nature-inspired techniques can be exploited to provide powerful optimization solutions. However, many research challenges remain to be addressed, such as the design and implementation of efficient nature-inspired optimization algorithms for massively parallel and distributed architectures and their application to real-world problems.

This special issue is composed of extended versions of selected best papers presented at the 14th International Workshop on Nature Inspired Distributed Computing (NIDISC 2011). These articles provide a good theoretical and practical overview of nature-inspired optimization techniques, metaheuristics, and their application to parallel/distributed computing. In the following, we give a brief overview of the contributions included in this special issue.

In the first paper, “Multiple Biological Sequence Alignment in Heterogeneous Multicore Clusters with User-Selectable Task Allocation Policies” by E.A. Macedo et al., a parallel version of a heuristic iterative algorithm, DIALIGN-TX, is proposed to tackle the Multiple Sequence Alignment (MSA) problem, a well-known NP-hard bioinformatics problem. The authors empirically demonstrate the execution-time reduction obtained with the proposed parallel implementation, as well as the impact of the chosen task-allocation policy.

The second paper, “Multi-core Implementation of the Differential Ant-Stigmergy Algorithm” by P. Korošec et al., also parallelizes a state-of-the-art optimization algorithm, the Differential Ant-Stigmergy Algorithm (DASA), a variant of Ant Colony Optimization (ACO) for numerical (continuous) optimization. The authors focus on parallelizing DASA on homogeneous multicore architectures and propose two parallel variants: a shared-memory one (PDASA) and a distributed-memory one (DDASA). They experimentally show the execution-time gains on different black-box problems: PDASA is suitable for all types of problems, whereas DDASA pays off only on complex ones. However, PDASA's performance requires an expensive hardware architecture (a many-core platform).

The third paper, “Enhancing GPU Parallelism in Nature-inspired Algorithms” by J. Cecilia et al., also tackles the parallelization of two nature-inspired optimization algorithms, Ant Colony Optimization (ACO) and Membrane Systems (also called P systems), but on a different hardware architecture, namely Graphics Processing Units (GPUs). Indeed, ACO, inspired by ant foraging behavior, and P systems, which mimic the biochemical processes within cells, are both inherently massively parallel. The authors conduct an extensive study of the implementation and tuning of these algorithms on different GPU platforms. Their performance is evaluated on two well-known problem classes, Satisfiability (SAT) and the Traveling Salesman Problem (TSP). Experimental results demonstrate that, with an efficient implementation, speed-up factors of 4–5 orders of magnitude can be reached.

The fourth paper, “Combining Analytic Kernel Models for Energy-Efficient Data Modeling and Classification” by P.D. Yoo et al., tackles a key aspect of data center operation, namely energy consumption. This work focuses on modeling and classifying/predicting large-scale data with state-of-the-art accuracy while minimizing the computational cost. To this end, the authors propose a semiparametric framework that combines two different kernel-based analytic approaches: a global nonparametric kernel regression model, k-Nearest Neighbor (kNN), and a local parametric vector-field reconstruction (VF-RC). A nature-inspired optimization approach, binary Particle Swarm Optimization (bPSO), is additionally used to improve the VF-RC classification task. Experimental results on large-scale benchmark datasets and comparisons with state-of-the-art classification approaches show that the proposed framework reduces the computational complexity of the learning process while achieving similar or better test error.

The fifth paper, “Learning Cellular Automata Rules for Binary Classification Problem” by A. Piwonska et al., also focuses on classification. It proposes to use Genetic Algorithms (GAs) to discover two-dimensional Cellular Automata (CA) rules that perform binary classification. Experiments conducted on three classification problems demonstrate the performance and scalability of the discovered rules compared with the k-nearest neighbors algorithm (k-NN) and human-designed heuristic rules.

In the sixth paper, “Cellular Genetic Algorithms without Additional Parameters” by B. Dorronsoro et al., new nature-inspired algorithms are proposed, namely parameterless variants of the cellular genetic algorithm (cGA), a well-known decentralized metaheuristic. cGAs have already proven to perform well on many hard optimization problems, and various parallel versions have been developed for architectures such as GPUs and clusters. However, their performance highly depends on their parameterization, which includes typical GA parameters as well as cGA-specific ones, namely the population and neighborhood shapes. The authors avoid setting these additional parameters by means of self-adaptive cGAs, which combine population shape adaptation strategies based on the convergence speed with neighborhood shape adaptation strategies that rely on the fitness of the individuals. Their accuracy and efficiency are experimentally demonstrated against six other cGAs on a large set of continuous and combinatorial optimization benchmarks.

The seventh paper, “Multi-Environmental Cooperative Parallel Metaheuristics for Solving Dynamic Optimization Problems” by M. Khouadjia et al., proposes MEMSO, a Multi-Environmental Multi-Swarm Optimizer. MEMSO uses a parallel cooperative model in which independent metaheuristics run in parallel on different subproblems and exchange information about their search. Indeed, such multipopulation approaches typically perform well in tracking the moving optimum of dynamic problems. The superior performance of MEMSO compared with other metaheuristics is empirically assessed on the Dynamic Vehicle Routing Problem (DVRP), and a study of different integration policies is also provided. These experiments are conducted on the Grid’5000 testbed, an experimental grid of more than 5000 cores.

In the last paper, “Analysing the Development of Cooperation in MANETs” by M. Seredynski et al., the authors propose to use evolutionary game theory to search for the fittest packet-relaying strategies in Mobile Ad hoc NETworks (MANETs). The authors analyze incentives for cooperation in packet relaying, where nodes use their local trust information to estimate the degree of cooperation (DOC) of other nodes, which drives the decision to forward packets. The best strategies for different network sizes are found experimentally using the proposed nature-inspired approach. In addition, the authors study the robustness of the obtained strategies during the evolutionary process (i.e., their stability against the remaining strategies), the influence of the initial network configuration (the distribution of initial strategies), and the influence of selfish (never relay) and altruistic (always relay) strategies. They demonstrate that cooperation emerges in two network settings: small networks and networks with many selfish nodes.

The guest editor would like to express his sincere gratitude to the Editor-in-Chief of The Journal of Supercomputing, Professor Hamid R. Arabnia, for the opportunity to organize this special issue and for his continuous support. He would also like to deeply thank the Springer team and the Editorial Office members for their precious help. In addition, he is very grateful to the anonymous reviewers for their time and expertise. Finally, he would like to thank all the authors for their contributions and effort, without which this special issue would not have been possible.