Random search immune algorithm for community detection

Community detection is a prominent research topic in complex network analysis, and it constitutes an important research field in all those areas where complex networks represent a powerful interpretation tool for describing and understanding systems in neuroscience, biology, social science, economics, and many others. A challenging approach to uncovering the community structure of a complex network, and thereby revealing the internal organization of its nodes, is modularity optimization. In this research paper, we present an immune optimization algorithm (opt-IA) developed to detect community structures, with the main aim of maximizing the modularity of the discovered communities. In order to assess the performance of opt-IA, we compared it with 20 heuristics and metaheuristics overall, among which one Hyper-Heuristic method, using social and biological complex networks as data sets. Unlike these algorithms, opt-IA is entirely based on a fully random search process, which in turn is combined with purely stochastic operators. According to the obtained outcomes, opt-IA shows strictly better performance than almost all the heuristics and metaheuristics to which it was compared, whilst it turns out to be comparable with the Hyper-Heuristic method. Overall, it can be claimed that opt-IA, even if driven by a purely random process, proves to be reliable and efficient. Furthermore, to support the latter claim, a sensitivity analysis was conducted using the classic metrics NMI, ARI and NVI.


Introduction
In the modern interdisciplinary sciences, complex networks are a powerful interpretation tool useful for the analysis and representation of a wide number of real-world systems, and are widely involved in many areas, such as neuroscience, biology, social sciences, economics, and physics. With this graph-based model it is possible to represent connections and interactions of the underlying entities, where vertices are the elementary parts of the real systems, whilst edges represent their mutual interactions (Newman 2003; Boccaletti et al. 2006). Complex networks may contain specific groups of highly interconnected vertices organized in compartments or structures, where each of them has a role and/or a function that satisfies a specific property of cohesion. In terms of graph theory, compartments are represented by partitions of the set of nodes with high internal link density, called communities or modules, which are loosely associated with the other groups (Girvan and Newman 2002; Fortunato 2010).
Finding compartments in a graph-theoretic context is a fundamental issue in the study of network systems, in which the compartments often exhibit significantly different functions and, therefore, a global analysis of the network would be inappropriate and impractical. A detailed analysis of individual communities, instead, may shed some light on the organization of systems and lead to more significant insights into the roles of individuals. This approach also allows the visualization and analysis of large and complex networks through a new higher-level structure, in which each identified community can be compressed into a node of the latter. It is important to emphasize that classical algorithms for graph clustering are not suitable for revealing the properties of community structures, as they are mainly based on optimal subdivisions of the graph that guarantee a minimum flow cut. Finding the properties of a community structure, on the other hand, requires a complex analysis of the linking patterns and relationships. Community Detection (CD) reveals meaningful structure in networks by identifying groups of nodes within which connections are denser than between them, providing very important information for detecting structural and functional relations between objects, such as functional modules in protein-protein interaction networks, groups in online and contact-based social networks, etc. (Fortunato 2010). In the last decade, a growing number of new methods and computational approaches have been developed, and several different metrics for community structure evaluation have been introduced, which are used for community detection in graphs (Newman 2012; Coscia et al. 2011).
The most used and popular method for community detection is to maximize the Modularity of a complex network (Newman and Girvan 2004), which measures the strength of the structure of the detected communities: high modularity denotes dense connections between the vertices inside a community, but sparse ones among the nodes belonging to different modules. Basically, it measures the difference between the actual fraction of edges within a community and the expected fraction in a random graph with the same number of nodes and the same degree sequence. In this research work, an immune algorithm, called opt-IA (Stefano et al. 2016; Pavone et al. 2012), is presented, which has been designed for detecting communities in complex networks. The proposed opt-IA algorithm is a population-based metaheuristic that takes inspiration from the dynamics and principles of the immune system, and has been successfully applied to several optimization problems (Vitale et al. 2018; Stracquadanio et al. 2015). Three immunological operators are the driving force of opt-IA, namely (i) cloning, (ii) hypermutation, and (iii) stochastic aging, which help the algorithm in performing a careful exploration of the search space, avoiding getting trapped in local optima, and properly exploiting all information learned during the evolution. It is important to stress that all operators work in a purely random way, without any deterministic guide to refine and improve the solutions, nor taking advantage of features of the network. Nevertheless, from the obtained outcomes, the combination of these random operators allows opt-IA to be efficient and reliable. Many computational experiments have been performed in order to evaluate the efficiency and reliability of the proposed algorithm in community detection, and different types of complex networks have been taken into account, from social to biological ones, including many synthetic networks (around 80) in order to evaluate the algorithm in different search scenarios.
Overall, opt-IA was tested on a data set of 103 instances, from small networks (|V| = 28) to larger ones (|V| = 3000), and a comparative analysis with 20 different optimization algorithms was also performed in order to evaluate opt-IA's robustness. Furthermore, an analysis of the computational time of opt-IA has been performed using Time-To-Target plots (Aiex et al. 2002; Feo et al. 1994), which are a standard graphical methodology for characterizing the running time of stochastic algorithms by comparing the empirical and theoretical distributions. Although opt-IA needs more iterations than the compared algorithms, always within acceptable times, it shows optimal performance relying basically only on random search, without using any deterministic approach. In addition to the TTT-plots computational time analysis, a study on the asymptotic computational complexity of opt-IA has been conducted as well, from which it is possible to claim that its upper bound running time is O(n^3).
Finally, from the obtained outcomes it is possible to assert that opt-IA almost always finds the best modularity value, strictly outperforming most of the compared algorithms. Indeed, analysing the comparisons through a ranking, ordered from the best result to the worst one, it is possible to assert that opt-IA is always in one of the first two positions, and often in the first one. Also, on the biological networks, opt-IA finds considerably better modularity than the compared algorithms, especially with respect to the HDSA algorithm (Civicioglu 2012), which is a Hyper-Heuristic, and therefore based by definition on the use of different heuristics. On the other hand, it is also important to note that, due to the randomness underlying the immunological operators that guide the search process, and therefore to the lack of any deterministic improvement/refinement approach, opt-IA needs a greater number of iterations to find acceptable solutions on the larger biological networks: the more the network size grows, the more generations the algorithm needs. To confirm the robustness and reliability of opt-IA, a functional sensitivity analysis was also conducted on several synthetic networks generated by the LFR algorithm (Lancichinetti et al. 2008; Lancichinetti and Fortunato 2009), using the well-known community structure similarity metrics: NMI, Normalized Mutual Information (Danon et al. 2005), the one mostly used in community detection; ARI, Adjusted Rand Index (Hubert and Arabie 1985); and NVI, Normalized Variation of Information (Meilă 2007).
The rest of the paper is structured as follows: the community detection problem and the modularity measure are presented and formalized in Sect. 2. The description of the proposed algorithm and the modularity maximization approach is given in Sect. 3. The description of the experiments conducted on social and biological networks is given in Sect. 4. In this section, the experimental protocol, parameter tuning, convergence and learning analysis, as well as the computational complexity (TTT-plots and asymptotic analysis), are also inspected and described. The effectiveness of the Precompetition operator is also tested and inspected in this section. A comparative analysis of the results of opt-IA against those obtained by previously published algorithms is reported in Sect. 5. Further, in this section a functional sensitivity analysis is also presented using NMI, ARI and NVI as community structure similarity metrics. Finally, the conclusions are presented in Sect. 6.

Community detection and modularity maximization
A community, at the complex network level, represents a group of nodes sharing common and similar properties, whose subgraph contains edges linking vertices that are close to each other: the edges inside the community must be greater in number than those connecting community vertices with the rest of the network (Fortunato 2010; Newman 2012). The aim of CD in graphs is to identify the modules and their hierarchical organization by using only the information encoded in the graph topology. In particular, it refers to the division of the nodes of a network into groups such that connections are dense within groups but sparser between them. In other words, a cluster corresponds to a set of nodes with more edges inside the set than towards the rest of the graph. Although not all networks support such divisions, the existence of good divisions is often taken as evidence of underlying structure or possible interactive behaviours, making CD a useful tool to understand how complex networks are structured and work. The CD problem gained the attention of scientific communities because it brings valuable explanations to complex network analysis. For example, in biology, applying graph clustering methods to the relations among genes or proteins modelled by networks (protein-protein interaction networks) makes it possible to group proteins sharing the same specific patterns and mechanisms operating within the cell (Chen and Yuan 2006), or, through the analysis of the network produced by neuron interactions, to understand the functional architecture of the brain (Deco and Corbetta 2011). In the same way it is possible to identify, in information networks, clusters of web pages that share some common topics, or, in a given social network, individuals with common interests or friendships. A plethora of diverse algorithms and techniques have been proposed for the detection of communities in real-world networks.
They differ from one another in the criteria they implement for solving the CD problem, as well as in how they define the criteria for identifying communities. These approaches have been applied successfully in different application domains and in many real-world areas (such as biological, chemical, ecological, economic, political, social, etc.).
The proposed opt-IA was developed for the resolution of the CD problem through modularity maximization, the most popular and widely accepted method for community detection. We begin by describing Modularity as a measure of the quality of a partitioning of a graph into communities.

Modularity
Modularity, proposed by Newman and Girvan (2004), is a benefit function that measures the quality of a particular partitioning of a graph into communities. Originally defined for undirected graphs, it has subsequently been extended to directed and weighted graphs (Mucha et al. 2009; Bickel and Chen 2009). The modularity of a partition is a scalar value (with maximum value 1) used to evaluate the density of links within communities with respect to links between communities (Girvan and Newman 2002). A larger positive value of modularity indicates a better community structure. Modularity maximization is one of the most popular and most widely used methods for community partitioning. It detects communities by searching over the possible partitions of a graph for one over which modularity is maximized. For a given subgraph, the modularity function is defined as the difference between the actual density of edges inside the subgraph and the expected density of such edges if the graph were random, conditioned on its degree distribution (Newman and Girvan 2004). This expected edge density depends on the chosen null model, a random copy of the original graph that maintains its structural properties but not its community structure. The idea behind modularity is that a network with inherent community structure usually deviates from random graphs, or rather that a random graph does not have a community structure. Therefore, the edge density of a subgraph should be greater than the expected density of a subgraph whose nodes are randomly connected.
Given a graph G = (V, E) with |E| = m, and given a partition of G with N_C clusters, the benefit function of modularity can be written as:

Q = Σ_{c=1}^{N_C} [ l_c / m − (d_c / (2m))^2 ]    (1)

where, for each cluster (i.e. subgraph) c:
• l_c is the total number of edges inside c;
• d_c is the sum of the degrees of its vertices;
• l_c / m represents the fraction of edges inside the cluster;
• (d_c / (2m))^2 is the fraction of expected edges if the graph were random (null model).
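As a concrete illustration, Eq. 1 can be computed directly from an edge list in a single pass. The sketch below is our own minimal implementation (function name and data layout are assumptions, not taken from the paper):

```python
def modularity(edges, labels):
    """Eq. 1: Q = sum_c [ l_c/m - (d_c/(2m))^2 ].

    edges  -- list of undirected edges (u, v) over vertices 0..n-1
    labels -- labels[i] is the community of vertex i
    """
    m = len(edges)
    l = {}  # l_c: number of edges with both endpoints in community c
    d = {}  # d_c: sum of the degrees of the vertices in community c
    for u, v in edges:
        cu, cv = labels[u], labels[v]
        d[cu] = d.get(cu, 0) + 1
        d[cv] = d.get(cv, 0) + 1
        if cu == cv:
            l[cu] = l.get(cu, 0) + 1
    return sum(l.get(c, 0) / m - (d[c] / (2 * m)) ** 2 for c in d)
```

For instance, two triangles joined by a single bridge edge, split into their two natural communities, give Q = 6/7 − 1/2 ≈ 0.357, a clearly positive value as expected for a modular graph.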
Although an important resolution limit of the modularity measure has been pointed out by Fortunato and Barthelemy (2007), modularity seems to be a useful measure of community structure. In fact, algorithms that search for graph partitions with optimal modularity have already been proposed, and they generally claim to be able to successfully find communities in very large and complex networks.
Modularity, defined in a heuristic way, considers a good division to be one that places most of the edges of a network within groups and only a few of them between groups: the relationships of the members among themselves within a community must be maximized, and the relationships of these members with members of other communities must be minimized.
High values of modularity indicate better community structure. We desire a quality function Q which, given a network and a candidate division of that network into groups, assigns a score to each partition of the graph, in order to rank the partitions and evaluate when one partition is better than another (the partition corresponding to the maximum value should be the best one, or at least a very good one). The maximization of modularity is therefore the objective pursued.
Obviously, a brute-force search to optimize Q is infeasible, above all on large and complex graph structures, due to the enormous number of ways in which a graph can be partitioned. Moreover, it has been proven that the problem of determining communities via modularity optimization is NP-complete (Brandes et al. 2007), so it is highly unlikely that the optimization task can be performed and an optimal solution found in time polynomial in the dimension of the graph. Several algorithms for community detection in complex networks have been developed and have yielded satisfactory results in some cases, but not in all situations (the performance of many available methods on large complex networks is far below expectations). Therefore, we need to move to approximate optimization methods, which can find fairly good approximations of the maximum modularity in a reasonable time, especially when an exhaustive brute-force search for the optimal solution is unfeasible in a very large solution space.
As described in detail in the section below, in our study and in the development of the proposed algorithm, the function given in Eq. 1 was used as the fitness function of the modularity maximization approach for the resolution of the CD problem, detecting communities within real-world networks.
3 opt-IA: an immune algorithm for community detection

Immune-inspired computation nowadays represents a large and established family of successful algorithms that take inspiration from the mechanisms and dynamics of the immune system, with which it protects living organisms. What makes the immune system a source of inspiration from an algorithmic perspective is its ability to detect, recognize, and distinguish entities belonging to the organism from foreign ones, together with its ability to learn new information and remember those foreign entities already recognized. Three principal theories are at the basis of immune-inspired algorithms: (1) clonal selection (Pavone et al. 2012; Scollo et al. 2021); (2) negative selection (Fouladvand et al. 2017; Poggiolini and Engelbrecht 2013); and (3) immune networks (Smith and Timmis 2008). Among these, the class based on the clonal selection principle (called Clonal Selection Algorithms, CSA) (Cutello et al. 2010) has proven to be quite efficient, mostly in search and optimization applications. The proposed immune algorithm, opt-IA, belongs to this last class of algorithms, and is based on three main immune operators: (i) static cloning, whose aim is to generate a new population based on the highest fitness values; (ii) hypermutation, which explores the neighbourhood of each point of the search space; and (iii) stochastic aging, which removes solutions from the current population via a stochastic law, thus helping opt-IA in escaping from local optima. In addition to these, some diversification strategies have also been designed, whose aim is to keep a high and proper diversity within the population, and to perform an appropriate exploration of the search space. The opt-IA algorithm is based on two main concepts, following the biological metaphor: the antigen (Ag), which represents the problem to be solved, and the antibody (Ab), or B cell, which is instead a solution to the problem.
At each timestep t, opt-IA maintains a population P^(t) of d B cells, where each B cell Ab represents a subdivision of the vertices of the graph G = (V, E) into communities. In detail, if n is the cardinality of the set of vertices V, a B cell x = (x_1, ..., x_n) is a sequence of n integers, between 1 and n, where x_i = j indicates that vertex i belongs to community j. A description of opt-IA is summarized in the pseudocode shown in Algorithm 1. The proposed algorithm takes as input: the network from which to detect the communities (G); the population size (d); the number of copies to be generated for each B cell (dup); the mutation rate (M); the probability that an element will be removed from the population by the aging operator (P_die); and the maximum number of generations allowed (T_max). It returns as output the detected communities and their number, as well as the Best, Mean, Worst and standard deviation (StD) values used for the comparisons.
As a first step, i.e. at timestep t = 0, opt-IA randomly generates d solutions using the uniform distribution, creating the initial population P^(t=0) (line 2 of Algorithm 1): every vertex is assigned to a community randomly chosen in the range [1, n], with n = |V|. In this way, many communities with few assigned vertices will be generated; it will be the task of the hypermutation operator (see Sect. 3.2) to compact the communities.
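The integer-vector representation and the uniform random initialization described above can be sketched as follows (a minimal illustration; the function name is our own):

```python
import random

def init_population(n, d, rng=random):
    """Create d random B cells for a graph with n vertices: each vertex
    is assigned a community label drawn uniformly from [1, n]."""
    return [[rng.randint(1, n) for _ in range(n)] for _ in range(d)]
```

With d = 8 and a 34-vertex graph such as the karate club network, this yields 8 candidate partitions, each typically containing many small, fragmented communities, exactly the situation the hypermutation operators are then expected to compact.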
Once the population is initialized, the next step is to evaluate the fitness of each B cell x ∈ P^(t) via the function Compute_Fitness(P^(t)) (line 3 of Algorithm 1). In this research work, for each B cell x, this function simply computes the value given by Eq. 1.
After the initialization of the population, and the computation of the fitness of each generated solution, the artificial evolution process begins, where the key operators take place. As in any evolutionary algorithm, opt-IA ends its evolution process once a termination criterion is reached, which in our experiments has been fixed as a maximum number of generations allowed (T_max).

The cloning operator
The first immune operator to be performed is the cloning operator (line 5 of Algorithm 1), whose main goal is to produce a new population with higher affinities (i.e. fitness values) and, together with hypermutation, to perform a careful local search. Just as in nature all those cells able to better recognize foreign entities generate more copies of themselves, the cloning operator duplicates the solutions that seem to be promising: it simply copies/clones each element of the population dup times, creating a new intermediate population P^(clo) of size d × dup. A static version of the cloning operator was developed, unlike what really happens in biology, because the fitness-proportional version shows the disadvantage of easily and quickly guiding the algorithm towards local optima. Furthermore, to avoid premature convergence, we also made dup independent of the fitness value of the B cell. In a nutshell, had we chosen to increase the number of clones for high-fitness elements, we would quickly have obtained a very homogeneous population, causing in turn a poor exploration of the search space.
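The static cloning step is straightforward; a one-line sketch (our own naming) makes the d × dup sizing explicit:

```python
def static_cloning(population, dup):
    """Static cloning: every B cell is copied dup times, regardless of
    its fitness, yielding an intermediate population of size d * dup.
    Copies are independent lists, so later mutations do not alias."""
    return [list(x) for x in population for _ in range(dup)]
```

Note that each clone is a fresh copy: the hypermutation operator will modify the clones in place, and the original parents in P^(t) must survive unchanged for the final (μ + λ)-selection.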

The hypermutation operator
The hypermutation operator (line 6 of Algorithm 1) acts on each element of the population P^(clo), performing M mutations with the main aim of exploring the search space and the neighbourhood of all solutions found so far. Similarly to the parameter dup, the mutation rate M is also a user-defined parameter and is not related to the fitness of the solution, in order to avoid possible premature convergence. Importantly, unlike classical evolutionary algorithms, no mutation probability was considered. Furthermore, the introduction of blind mutations produces individuals with higher affinity (i.e. higher fitness values), which will then be selected to form improved mature progenies. In this research work, different types of mutation operators have been developed, which can act on a single vertex of the solution, like a local operator, or on a group of nodes, like a global operator: (i) the Equiprobability, (ii) the Destroy, and (iii) the Fuse operator.

Equiprobability operator
This mutation operator is applied locally and tries to find a better neighbour not yet explored. It simply selects at random a vertex i, and a community c_j among those existing at that moment which is, of course, different from the one to which i belongs (i.e. c_i ≠ c_j); then, vertex i is moved into community c_j.
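A minimal sketch of this single-vertex move (our own implementation of the description above):

```python
import random

def equiprobability(x, rng=random):
    """Move one randomly chosen vertex into a different, currently
    existing community: a local move to an unexplored neighbour."""
    x = list(x)
    i = rng.randrange(len(x))
    other = [c for c in set(x) if c != x[i]]  # existing labels != c_i
    if other:  # at least two communities must exist for a valid move
        x[i] = rng.choice(other)
    return x
```

Since the target community is drawn only from the labels already present, this operator never creates a new community; it can only reassign a vertex, and it changes at most one position of the B cell.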

Destroy operator
This operator works from a more global perspective than the first one, as it acts directly on the communities rather than on single nodes. It is carried out as follows: two different communities are randomly selected, c_out and c_in, which are, respectively, the community from which the vertices will be moved and the one that will receive such vertices. In particular, c_out is selected among the currently existing communities, whilst c_in is randomly assigned a value in the range [1, n], with n = |V|. Note that this means that c_in could have a value that does not correspond to any existing community. After that, a probability is randomly chosen between 1% and 50%, and with this probability each vertex in c_out is moved into c_in. As already said, if c_in is among the existing communities, then the moved vertices will enlarge it; otherwise, a new community is created and added to the others.
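Our reading of this operator can be sketched as follows (an assumption-laden illustration: the paper does not fix these names, and edge cases such as c_in accidentally equalling c_out are left to the random draw):

```python
import random

def destroy(x, rng=random):
    """Partially empty a random existing community c_out into c_in,
    where c_in is a label in [1, n] that may denote a brand-new
    community. Each vertex of c_out moves with probability p,
    drawn uniformly in [1%, 50%]."""
    n = len(x)
    c_out = rng.choice(sorted(set(x)))  # an existing community
    c_in = rng.randint(1, n)            # possibly non-existing label
    p = rng.uniform(0.01, 0.50)
    return [c_in if c == c_out and rng.random() < p else c for c in x]
```

Because c_in may be a fresh label, this is the one mutation that can increase the number of communities, counterbalancing the compacting pressure of the Fuse operator.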

Fuse operator
This last operator has the main aim of trying to reduce and compact the communities. Thus, it randomly chooses two of the existing communities and merges them into a single one (Fig. 1). Among the three operators, the first two have proven useful in improving the modularity value and in escaping from local optima; the Fuse operator, instead, although it does not offer good results individually (see Fig. 1), has also been taken into account since its goal is to aggregate communities. To take advantage of the behaviour of each operator, the Equiprobability and Destroy operators are applied with the same probability, whilst the last one with a low probability, for the above reasons (around 49.5%, 49.5%, and 1%, respectively).
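The merge step, together with a weighted dispatcher for the M mutations, can be sketched as follows (our own names; the operator list and weights are passed in, so the 49.5%/49.5%/1% split quoted above is just one possible configuration):

```python
import random

def fuse(x, rng=random):
    """Merge two randomly chosen existing communities into one,
    reducing the community count by exactly one."""
    labels = sorted(set(x))
    if len(labels) < 2:
        return list(x)  # nothing to merge
    keep, merge = rng.sample(labels, 2)
    return [keep if c == merge else c for c in x]

def hypermutate(x, m, operators, weights, rng=random):
    """Apply m mutations, each drawn from `operators` with the given
    weights, e.g. [equiprobability, destroy, fuse] with
    weights [0.495, 0.495, 0.01]."""
    for _ in range(m):
        op = rng.choices(operators, weights=weights)[0]
        x = op(x, rng)
    return x
```

Keeping the Fuse weight low matches the observation above: applied too often it collapses the partition, but an occasional merge is what lets the fragmented initial communities coalesce.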
Finally, once the hypermutation operator has modified all the cloned solutions, the fitness value of each mutated B cell is computed through the function Compute_Fitness(P^(hyp)).

Aging, precompetition, and selection operators
The aging operator has the main goal of helping the algorithm jump out of local optima and of keeping a high diversity within the population. In this research work, a stochastic aging operator (line 8 of Algorithm 1) has been developed, which guides opt-IA to reduce premature convergence as much as possible. In detail, the elements of the population are removed at each iteration with probability P_die, which is a user-defined parameter. Diversification of the solutions in a population is a crucial feature to avoid getting trapped in local optima. However, it can also become a limitation to carrying out a careful and accurate exploration of their neighbourhoods, which also plays a key role in the success of the algorithm. Thus, in order to have the right balance between these two key features, stochastic aging is applied only to the P^(t) population. In this way, diversity will be introduced into the population of the best current solutions, which will then compete with their offspring in generating the population of the next generation, whilst the B cells in P^(hyp) will have the task of properly exploring the neighbourhoods.
After the aging operator, to strengthen the heterogeneity of the population P^(t), a precompetition step (line 9 of Algorithm 1) has been developed, which simply selects at random two different B cells from P^(t): if the two solutions, although different, have the same number of communities, then the one with the lower fitness is removed from P^(t) with a 50% probability. This strategy thus allows a more heterogeneous population to be maintained during the evolutionary cycle, keeping solutions with different numbers of communities in order to better explore the search space.
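Both steps admit very compact sketches (our own reading; in particular, how often precompetition is invoked per generation is not specified here, so the function below performs a single trial):

```python
import random

def stochastic_aging(population, p_die, rng=random):
    """Remove each B cell independently with probability p_die."""
    return [x for x in population if rng.random() >= p_die]

def precompetition(population, fitness, rng=random):
    """One trial: pick two distinct B cells; if they are different
    solutions encoding the same number of communities, drop the
    lower-fitness one with probability 0.5."""
    if len(population) < 2:
        return population
    i, j = rng.sample(range(len(population)), 2)
    a, b = population[i], population[j]
    if a != b and len(set(a)) == len(set(b)) and rng.random() < 0.5:
        loser = i if fitness(a) < fitness(b) else j
        return population[:loser] + population[loser + 1:]
    return population
```

Note that aging thins P^(t) uniformly at random (the exploration pressure), whereas precompetition is targeted: it only ever removes a solution whose community count is duplicated, which is exactly what preserves partitions of different granularity.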
The last step of each iteration is the creation of the population for the next timestep t + 1. In opt-IA, the (μ + λ)-selection operator (line 10 of Algorithm 1) has been developed, which selects the best d B cells from the two populations P^(pre) and P^(hyp), without fitness repetitions. This selection operator, with μ ≤ d and λ = (d × dup), identifies the best μ = d elements from the offspring set (P^(hyp)) and the old parent B cells (P^(pre)), therefore ensuring monotonicity in the evolution dynamics. If two selected elements have the same fitness, then only one of them is chosen at random.
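A sketch of this selection (our own simplification: ties are collapsed to a single survivor, and if fewer than d distinct fitness values exist the result is simply shorter, which is where μ ≤ d comes from):

```python
def mu_plus_lambda_selection(parents, offspring, fitness, d):
    """(mu + lambda)-selection: keep the best d B cells from the merged
    parent and offspring pools, skipping repeated fitness values so the
    surviving population stays heterogeneous."""
    selected, seen = [], set()
    for x in sorted(parents + offspring, key=fitness, reverse=True):
        f = fitness(x)
        if f not in seen:
            selected.append(x)
            seen.add(f)
            if len(selected) == d:
                break
    return selected
```

Because the parents compete alongside their mutated clones, the best fitness in the population can never decrease from one generation to the next, which is the monotonicity property mentioned above.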

Benchmarks and behaviour analysis
In this section, we study the features of opt-IA and analyse its behaviour in order to prove its efficiency and reliability. We start by describing the data sets, i.e. the complex networks used for our studies and comparisons, along with the experimental protocol adopted for all the performed tests. Likewise, we show the experimental tuning of the population size, mutation rate and aging probability. Then, once the best setting of opt-IA's parameters is determined, we show its dynamics and learning abilities. Finally, we present the analysis of the time complexity of opt-IA via the well-known Time-To-Target plots, which are a classical methodology for the running time analysis of any stochastic algorithm. Experimental results and comparisons are shown in Sect. 5.

Data sets and experimental protocol
The opt-IA algorithm, presented in Sect. 3, has been tested and studied on eight different real-world networks, which include three biological and five social networks. These networks are related to two different areas and are obtained from real-world systems. The features and size of each network are detailed in Table 1. In more detail, we considered three social networks: a small network, called Zachary's Karate Club, and two larger ones, called Books about US politics and American college football (Krebs 2008). The Zachary's Karate Club network represents the friendships between the members of a university karate club in the US over a period of 2 years (Zachary 1977). It has come to be a standard test network for clustering algorithms. Each node represents a member of the club, and each edge represents the relationship between the two corresponding members. The American college football network was presented in Girvan and Newman (2002) and represents the football match schedule of the 2000 season. Vertices in the graph represent the teams, whilst edges represent regular season games between the two corresponding teams. In the American Political Books network, the vertices represent books on American politics purchased from amazon.com, whilst edges connect pairs of books that are frequently co-purchased. The books in this network, compiled by V. Krebs, were classified by Newman (2012) into liberal or conservative categories, with the exception of a small number of books without a clear ideological bias.
We also took into consideration two other networks, which identify social-ecological networks, called, respectively, Bottlenose dolphins (Lusseau et al. 2003) and Grevy's zebras (Sundaresan et al. 2007). Bottlenose dolphins are aquatic mammals in the genus Tursiops. The network is built from a community of bottlenose dolphins living in New Zealand and observed between 1994 and 2001; edges denote frequent associations. In the Grevy's zebra network, edges represent the interactions observed between two Equus grevyi individuals (the nodes of the network) during the study.
Finally, six biological networks were also considered, namely: the E. coli transcriptional regulatory network (Shen-Orr et al. 2002), the C. elegans metabolic reaction network (Duch and Arenas 2005), the Cattle protein-protein interaction network (Cattle 2015), the Helicobacter pylori protein-protein interaction network (Xenarios et al. 2000; Rain et al. 2001), the E. coli metabolic reaction network (Schellenberger et al. 2010), and the S. cerevisiae protein-protein interaction networks (1) and (2) (Yu et al. 2008; Bu et al. 2003). In particular, in the E. coli gene expression regulation network, which is a commonly used benchmark, the vertices represent operons, i.e. functioning units of DNA containing a cluster of genes, and edges are directed from a gene that encodes a transcription factor to a gene that it directly regulates (Shen-Orr et al. 2002).
Overall, all these networks, covering social and biological data from real-world systems, are well-known and commonly used datasets for evaluating the efficacy and efficiency of algorithms designed for the community detection problem.
All the performed experiments were carried out on 30 independent runs, whilst the considered stopping criterion was a fixed maximum number of generations (Tmax = 4 × 10³). Since opt-IA is basically a blind search algorithm, without knowledge about the domain and without the inclusion of any deterministic refinement approach, it obviously must evolve for more generations than any hybrid/memetic approach or a hyper-heuristic. Nevertheless, the computational time of opt-IA for reaching the overall best solution is acceptable, as shown in Sect. 4.4.
Finally, for each experiment, and then for each network, the following outcomes were computed and shown: best modularity found over all runs; mean value of the best solutions found per run; worst modularity value found overall; standard deviation (StD); and number of communities discovered (NC).
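As an illustration of how these reported statistics relate to the raw experimental data, the following minimal Python sketch (the helper `summarize_runs` is ours, not part of opt-IA) aggregates the best modularity value of each independent run into the Best, Mean, Worst and StD figures used throughout the tables:

```python
import statistics

def summarize_runs(best_per_run):
    """Aggregate the best modularity found in each independent run
    into the statistics reported in the experimental tables."""
    return {
        "Best": max(best_per_run),              # best modularity over all runs
        "Mean": statistics.mean(best_per_run),  # mean of per-run best values
        "Worst": min(best_per_run),             # worst modularity overall
        "StD": statistics.stdev(best_per_run),  # standard deviation
    }

# hypothetical best-modularity values from 5 of the 30 runs
print(summarize_runs([0.4188, 0.4185, 0.4190, 0.4185, 0.4188]))
```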

Parameters tuning
As described in Sect. 3 (Algorithm 1), the crucial parameters that affect the performance of opt-IA are, respectively: (i) the population size (d); (ii) the duplication factor (dup); (iii) the mutation rate (M); and (iv) the probability of removing a B cell at each iteration (Pdie). Therefore, we need to find the best values for these parameters.
In a preliminary work (Spampinato et al. 2019), a small and sparse network (the almost_lattice network with 64 nodes) was used to find the best setting of the parameter values. This network was chosen since it shows a particularly complex landscape. From this preliminary study, the best parameter combination obtained was d = 8 and dup = 4, whilst the mutation rate M varied from 1 to 3. The efficacy of such a setting was also validated by comparing opt-IA with the well-known greedy optimization method Louvain (Blondel et al. 2008), one of the most popular algorithms for community detection. The comparison, which for convenience is reported in Table 3 (Sect. 5), shows that the proposed algorithm outperforms Louvain on all tested networks.
Following these very good results, we ran opt-IA on all the networks shown in Table 1 using the same parameter configuration. Although the algorithm was able to find the optimal solutions for some of the social networks, on the larger ones, such as the biological networks, we obtained instead very poor results. In the light of this, we performed a new study on parameter tuning, this time taking the Cattle PPI biological network as testbed, which is a large and sparse graph, and consequently a sufficiently hard one. It is important to highlight, also, that all experiments conducted for the parameter tuning were performed on 30 independent runs, so as to have more robust and reliable outcomes. Based on the knowledge acquired on opt-IA in previous research works (Pavone et al. 2012; Cutello et al. 2019, 2020), the population size was set to d = 100, since it is mainly related to the dimension and complexity of the tackled problem. For all the other parameters, instead, the tuning was determined by evaluating the fitness trend at their different values. The first step of this study was conducted on the mutation rate parameter with five different values (M = {1, 2, 3, 4, 5}), using the following configuration: population size d = 100, duplication parameter dup = 4, probability of the random aging operator Pdie = 0.02, and Tmax = 4000.
The outcomes of these experiments are reported in Fig. 2, which shows the distribution of fitness values over 30 independent runs as M varies on the Cattle PPI network. By observing the figure, it is possible to assert that increasing the number of mutations considerably decreases the performance of opt-IA with respect to both the best modularity found and the mean. The best performances are obtained using small mutation rate values, that is M = {1, 2}. It is important to note that although for M = 1 opt-IA reaches a better median, the performances for M = 1 and M = 2 are equivalent when compared on the best value found. For this reason, both these values of M have been taken into account for the next experiments. The good results obtained when performing a lower number of mutations are primarily due to the effect of the aging and precompetition operators, which produce good heterogeneity, and consequently require the perturbation operator to carry out the search in the neighbourhood of the current solutions.
The second step of the tuning concerned the duplication parameter dup and the removal probability Pdie; the results of these experiments are reported in Figs. 3 and 4.
From both figures, it is clear that better performances are obtained when increasing the value of dup. In particular, higher modularity values are obtained for dup = {8, 9, 10} for both M values. In other words, having more copies of each solution helps opt-IA in carrying out a more careful and accurate search in its neighbourhood. Focusing the analysis further on these last three values only, it is possible to assert that for dup = 9 and dup = 10, opt-IA finds the best modularity more often than for dup = 8.
Inspecting now the plots from the perspective of the parameter Pdie, and considering these last two values of dup, we can see that the best performances of opt-IA were obtained for Pdie = 0.02. With this value, indeed, the algorithm was able to find a better mean and a lower standard deviation. Each candidate solution has a 0.02 probability of being removed from the population, and such a low probability value is enough to produce good heterogeneity in the population (when, of course, combined with the precompetition operator). From these last experiments, it also emerges that for M = 1 the performances of opt-IA are considerably better than for M = 2, since the algorithm reaches the best solution more often, with a better mean and standard deviation, proving in turn greater robustness and soundness.
In conclusion, from the overall experimental analysis, the best parameter combination obtained is the following: d = 100, dup = {9, 10}, M = 1 and Pdie = 0.02. Note that, although dup = 10 showed slightly better performances on the considered network, we took both values into consideration, because their effect is also related to the network density (see results in Sect. 5).

Convergence behaviour
A proper convergence behaviour, together with a good learning ability, is the key factor for any successful stochastic search algorithm. Thus, in this section we report a deep analysis of the dynamic behaviour of opt-IA. We used the networks American College Football, Cattle PPI and C. elegans MRN because, being different in type, size, density (see Table 1), and mainly complexity, they allow a more robust analysis. As described above (Sect. 4.1), all these experiments were averaged over 30 independent runs. Figure 5 shows the convergence curves of opt-IA: the best fitness, the average fitness of the population, and the average fitness of the hypermutated population. In all three plots, opt-IA shows a very good convergence towards the optimal solution. Indeed, the three curves grow slowly and improve step by step until they reach the best solution. It is important to note that initially the three curves are very close, and then begin to differentiate as they approach the optimal solution (see inset plots). This is due to the diversification of the solutions, whose impact is crucial mainly when the improvements are limited, and consequently when opt-IA needs to get out of local optima.
Simply put, all three curves keep an appropriate distance from each other, confirming the existence of a good degree of diversity among the solutions, which is useful for avoiding and/or escaping local optima. Furthermore, it is also important to highlight that the best-fitness curve does not increase monotonically: for some generations it decreases slightly, and this corresponds to the discovery of better fitness values in the immediately following generations.
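The three convergence curves can be obtained by logging, at every generation, the quantities below. This is a minimal sketch (the helper `record_generation` is ours), assuming the fitness values of the current and hypermutated populations are available as lists:

```python
def record_generation(history, pop_fitness, hyp_fitness):
    """Log one point of the three convergence curves: best fitness,
    average fitness of the population, and average fitness of the
    hypermutated population."""
    history.append({
        "best": max(pop_fitness),
        "avg_pop": sum(pop_fitness) / len(pop_fitness),
        "avg_hyp": sum(hyp_fitness) / len(hyp_fitness),
    })
    return history
```

Plotting the three logged series against the generation index reproduces curves of the kind discussed above.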
Since opt-IA is driven by a random process, alongside the analysis of the convergence behaviour it becomes important also to understand its learning ability, that is, the amount of information it is able to discover during the search process. The information learned during the evolutionary process clearly affects the overall performance of the algorithm. The information gain, also known as the Kullback-Leibler divergence, is a well-known metric for analysing the amount of information gained by an algorithm whilst searching for the solution, i.e. during the learning phase (Kullback 1959). Basically, it measures the reduction of entropy with respect to an initial distribution function (timestep t = 0); that is, generalizing, the entropy reduction from a prior state to a given next state. A good randomness measure is also given by Shannon's entropy (Shannon 1948), which is among the most used in Information Theory because it is able to measure the uncertainty of a random process: it defines the entropy of a random variable in terms of its probability distribution. The Kullback-Leibler divergence (Kullback 1959), in turn, measures the "distance" between two probability distributions, i.e. how different they are.
In order to compute the information gain in opt-IA, we denote by B_m^(t) the number of solutions (B cells) having fitness value m at time t, and we define the distribution function of the solutions as the ratio between B_m^(t) and the total number of solutions d:

f_m^(t) = B_m^(t) / d.

The information gain K(t, t0) and the entropy E(t) are then defined, respectively, as:

K(t, t0) = Σ_m f_m^(t) log( f_m^(t) / f_m^(t0) ),
E(t) = − Σ_m f_m^(t) log f_m^(t).

K(t, t0) therefore indicates the amount of information discovered by opt-IA during the convergence process with respect to the initial population P^(t=0). The maximum information-gain principle (Jaynes 2003) tells us that once the search process begins, the information gain curve monotonically increases until it reaches a peak, which corresponds to the maximum quantity of information discovered; just after this peak, the curve transitions to a (roughly) steady state or, if it fluctuates, it does not reach values close to the maximum peak. Overall, the maximum information-gain principle makes the information gain a suitable tool both to perform an appropriate parameter tuning and, at the same time, to understand the convergence behaviour of the algorithm on-line and at run-time.
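These definitions can be sketched in a few lines of Python. The implementation below is our own illustration, assuming fitness values have been discretized into classes m (modularity is real-valued, so in practice binning would be needed):

```python
import math
from collections import Counter

def fitness_distribution(fitness_values, d):
    """f_m^(t): fraction of the d B cells with fitness class m at time t."""
    return {m: b / d for m, b in Counter(fitness_values).items()}

def information_gain(f_t, f_t0):
    """K(t, t0) = sum_m f_m^(t) * log(f_m^(t) / f_m^(t0)).
    Classes absent at t0 are skipped in this sketch (the ratio is undefined)."""
    return sum(p * math.log(p / f_t0[m])
               for m, p in f_t.items() if m in f_t0 and p > 0)

def entropy(f_t):
    """E(t) = -sum_m f_m^(t) * log f_m^(t)."""
    return -sum(p * math.log(p) for p in f_t.values() if p > 0)
```

For instance, a population whose fitness collapses from two equally likely classes onto a single class gains log 2 nats of information, whilst its entropy drops from log 2 to 0.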
As for the convergence analysis, the same three networks were also used to evaluate the learning ability of opt-IA. Figure 6 shows the information gain curves obtained on the American College Football network in plot (a), the Cattle PPI network in plot (b), and the C. elegans MRN in plot (c). Inspecting all three plots, we can clearly see how opt-IA quickly gains enough information during the first iterations, reaching regions of the search space with a good average, which proves the efficiency of the mutation operators designed to explore the search space. After that, due to the fully random search process and without any problem-specific guided search, the learning process alternates between gaining and losing information, until it reaches the highest peak, which corresponds to having reached the overall best solution. The inset plots show the relative standard deviations. In plot (a) of Fig. 6, on the American College Football network, the algorithm shows a different learning behaviour compared to the other two considered networks, because this network is somewhat simpler in size and density. Indeed, once the highest peak is reached (in the generation range [20, 30]), opt-IA begins to lose information until around the 200th iteration, when the curve begins to increase again, and therefore the algorithm starts to gain information once more. In the inset plot, we show the relative standard deviation (σ) of opt-IA, which measures the amount of dispersion (uncertainty) inside the population. It is interesting to note, indeed, that the iteration at which the algorithm reaches the maximum information gain corresponds exactly to the lowest point of the standard deviation: correctly, maximum information gain corresponds to minimum uncertainty. Similarly, at the point of lowest information gain, reached before the 200th generation, the standard deviation takes its highest value. The information gain curves displayed in plots (b) and (c) show, instead, a steadier behaviour once the highest information value is reached.
Interestingly, we can see in both plots that after 1000 generations opt-IA begins to discover new information, and in plot (b) it even reaches the highest information gain value. This is consistent with the standard deviation curves, which reach their lowest values just after 1000 generations.
In conclusion, both analyses (convergence and learning) prove the efficiency and robustness of opt-IA in community detection.

Computational time complexity
The running time of opt-IA for reaching the best solution is another crucial measure to take into account for proving the efficiency of the proposed immune algorithm. We used Time-To-Target plots (Aiex et al. 2002; Feo et al. 1994) (TTT-plots), a standard graphical methodology for data analysis and for characterizing the running time of stochastic algorithms on a specific optimization problem. They measure the CPU times needed to find the target value of the tackled problem instance. The basic idea behind TTT-plots is to compare the empirical and theoretical distributions, i.e. they display the probability that an algorithm will find a solution at least as good as a target within a given running time.
A Perl program was proposed by Resende et al. in Aiex et al. (2007) for automatically generating TTT-plots; it produces two different plots: (i) a theoretical Quantile-Quantile plot (QQ-plot) with superimposed variability information, and (ii) superimposed empirical and theoretical distributions. In order to perform such an analysis, the opt-IA algorithm is run n times on a given instance, using the achievement of a target value (i.e. reaching the global optimum) as stopping criterion. Obviously, in each run a different seed is used for the random number generator, so as to have independent runs. Note that the larger the number n, the closer the empirical distribution will be to the theoretical one. Therefore, following the suggestions given in Aiex et al. (2007), we set n = 200, because this value has been proven to give very good approximations of the theoretical distributions. This analysis was conducted on five networks of different size and complexity: Grevy's Zebras, Zachary's Karate Club, Bottlenose Dolphins, Books about US Politics and C. elegans MRN. For a proper analysis, it is important not to consider easy instances, since on these the exponential distribution would degenerate to a step function, due to the very small CPU times in almost all runs, as asserted in Aiex et al. (2007). Furthermore, these networks were considered also because the new stopping criterion requires a 100% success rate on the tackled networks/instances.
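The core of the methodology can be sketched as follows. This is an illustrative reimplementation (function names are ours), assuming the usual empirical plotting positions p_i = (i − 1/2)/n and a shifted exponential as the theoretical model, as in Aiex et al. (2007):

```python
import math

def ttt_points(run_times):
    """Empirical distribution for a TTT-plot: the i-th sorted time t_i is
    associated with probability p_i = (i - 1/2) / n."""
    times = sorted(run_times)
    n = len(times)
    return [(t, (i + 0.5) / n) for i, t in enumerate(times)]

def shifted_exp_cdf(t, mu, lam):
    """Theoretical two-parameter (shifted) exponential distribution
    fitted to the empirical data: P(T <= t) = 1 - exp(-(t - mu)/lam)."""
    return 1.0 - math.exp(-(t - mu) / lam) if t >= mu else 0.0
```

Plotting the pairs returned by `ttt_points` together with the fitted `shifted_exp_cdf` curve reproduces the superimposed empirical/theoretical plot; the QQ-plot instead compares empirical quantiles against the quantiles of the fitted model.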
In Figs. 7, 8, and 9, the TTT-plots produced on the cited networks are shown. In each figure, the left plots show the empirical versus theoretical distribution, whilst the right plots show the QQ-plots with variability information.
It is important to point out that the TTT-plot experiments on the Books about US Politics network were performed considering the best solution found (t = 0.5272) as target value for the stopping criterion, and opt-IA was able to find it in all 200 runs, although in Table 6 (Sect. 5) the best and mean values are not the same. This confirms that with a larger number of iterations the developed search process is able to discover even better solutions until it reaches the optimal ones, of course at a higher computational time cost. Moreover, from the relative TTT-plots (bottom plots in Fig. 8), the empirical curve follows the same behaviour as the theoretical one, consequently proving the efficacy of opt-IA on this network. For the C. elegans MRN network, one of the larger networks in the dataset, two different target values were instead considered as stopping criteria, since opt-IA found better modularity than the compared algorithms (see Table 6) in terms of best, mean and worst values. The first experiment was conducted considering 0.4185 as target value (upper plots, Fig. 9), which corresponds to the best modularity found among all compared algorithms, specifically by HDSA. Moreover, because the worst modularity computed by opt-IA is still better than the one found by HDSA, a second TTT-plot experiment was performed setting the stopping target to 0.4221 (bottom plots in Fig. 9), i.e. the worst solution of opt-IA on this network (see Table 6).
Overall, by inspecting all plots in the three figures (Figs. 7, 8, and 9), it emerges that the empirical curve perfectly fits the theoretical one in all four social networks, whilst opt-IA improves on the theoretical trend in the biological one. In the QQ-plots, instead, the opt-IA results are in most cases equal to or better than the theoretical ones, and the empirical curve is much faster than the theoretical one. Focusing the analysis only on the plots of Fig. 9, that is the two different targets considered for the biological network, it appears clear that opt-IA easily achieves the same maximum modularity as HDSA, whilst (obviously) it needs more time to reach larger modularity values. However, the empirical curve fits the theoretical one perfectly. Even in the QQ-plot (Fig. 9b) the empirical curve almost always fits the estimated one, also showing a better behaviour than that shown in Fig. 9d.

The asymptotic computational analysis
To compute the upper bound of the computational cost of opt-IA, it is important to recall that, opt-IA being a population-based algorithm, any computational analysis must be done not only with respect to the size of the input problem and the implementation features, but also with respect to the choice of the key parameters, such as the population size (d), the maximum number of iterations Tmax, the hypermutation operators, etc. Below, all these issues are properly discussed.
Inspecting the pseudocode of opt-IA (Algorithm 1), it is possible to assert that:

• any solution is represented by an array of length n, where n is the number of vertices of the input graph;
• the procedure InitializePopulation(d) randomly creates a population of d solutions; its total cost is therefore O(d × n). Note that the parameter d is an experimentally set constant (it is user defined), and therefore independent of the size of the input. Besides, in all presented experiments it was set to 100; it is thus possible to assert that the cost of the procedure is actually O(n);
• the procedure ComputeFitness() simply evaluates the quality, i.e. the fitness, of all 100 solutions of the population using Eq. 1. It follows that the cost of the procedure is O(n²);
• the aim of the procedure Cloning() is to create dup copies of each element of the population. Looking at the experimental setting, dup is also a fixed parameter, and hence independent of the input size as well. This allows us to say that the cost of Cloning() is O(n);
• the Hypermutation() procedure mutates each element of the population with constant probability M, and is based on three different implementations. As each of these is based on the random selection of either a vertex or a cluster, several vertices may have to be reallocated in the two most expensive implementations, that is destroy and fuse. It follows that the procedure has an upper bound of O(n);
• in the Precompetition() procedure, two B cells are randomly selected and the number of communities they encode is checked; if it is the same, the B cell with lower fitness is deleted with probability 1/2. Considering that the fitness was already computed, the overall cost is then O(1);
• the procedure StochasticAging() inspects all elements of the population and removes each one with probability Pdie. Since Pdie is a user-defined parameter, and hence constant, we can assert that the overall cost of the procedure is O(n);
• finally, Selection() is the last procedure performed, and it chooses the best d solutions among P_a^(pre) and P^(hyp). Recalling that the number of elements (d) in both populations is constant and independent of the size of the input, it follows that the cost of the procedure is O(1).
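For reference, the fitness evaluation performed by ComputeFitness() can be sketched as the standard Newman modularity (we assume Eq. 1 is this usual formulation; the function below is our own illustrative version for an undirected, unweighted edge list):

```python
def modularity(edges, community):
    """Q = sum over communities c of [ l_c/m - (d_c / 2m)^2 ], where
    m is the total number of edges, l_c the number of intra-community
    edges, and d_c the total degree of community c."""
    m = len(edges)
    intra, degree = {}, {}
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
    return sum(intra.get(c, 0) / m - (degree[c] / (2 * m)) ** 2
               for c in degree)

# two disjoint triangles, each in its own community: Q = 0.5
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
print(modularity(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}))
```

A single pass over the edge list suffices, so evaluating one solution costs O(m), i.e. at most O(n²) on dense graphs, which is consistent with the bound stated above.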
Summing up, from the computational analysis done for each procedure, it is possible to assert that the cost of a single generation of opt-IA is O(n²), dominated by the fitness evaluation. Let us now consider the number of iterations. It is worth highlighting that, unlike the other parameters, the number of generations is related to the size of the input: the bigger the input, the larger the number of generations we expect to need. But how does it grow with respect to n? Taking into account that the number of iterations is, however, an input parameter, and hence a constant even on large networks, and looking at the results of the presented experiments, it is possible to assert that in the worst case its growth is linear with respect to n. It follows that, wanting to be very cautious, opt-IA needs at most c × n generations to obtain good results. Indeed, inspecting the convergence behaviours in Fig. 5, for instance plot (b), it is possible to see that after around 1200 iterations (∼ 4.48 × |V|) the algorithm reaches the best solution; the same can be said for the other plots in the figure. In light of this, adding this bound to the overall computational analysis, it is possible to claim that the upper bound of the opt-IA running time is O(n³).

Precompetition operator effectiveness
The precompetition operator, in addition to the aging operator, plays a key role in the performance of opt-IA, since it allows the algorithm to jump away from local optima by introducing heterogeneity into the population. This, of course, is a crucial characteristic especially when addressing hard and complex problems. Although the usefulness and efficiency of the aging operator are well known (Jansen and Zarges 2011a, b), little can instead be asserted about the efficacy of the precompetition operator, and about how it affects the performance of opt-IA.
In light of this, this section presents an analysis of the overall effectiveness of the precompetition operator, shown in Table 2. For this analysis, four biological networks were considered (Cattle PPI, E. coli TRN, C. elegans MRN, and Helicobacter pylori PPI) and used for inspecting the convergence behaviour of opt-IA with the operator enabled or disabled. Looking at the outcomes reported in the table, the usefulness and efficacy of the precompetition operator are clearly evident: it allows opt-IA to reach better modularity values not only with respect to the maximum value found, but also with respect to the mean of the best values found over all independent runs, with the consequence of obtaining lower standard deviation values. It is important to highlight that, except for the Cattle PPI network, where the best modularity is the same for both versions, the precompetition operator allows opt-IA to produce considerably higher modularity values, proving the successful effect of this operator.
The precompetition operator, in combination with the stochastic aging, compensates for the indirect elitism introduced by the selection operator; therefore, it helps to maintain the right balance of diversity in the population.
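A minimal sketch of the operator, under our interpretation of the description given in Sect. 4.5 (two randomly chosen B cells are compared on the number of communities they encode), could look as follows; the function and its signature are illustrative, not the authors' code:

```python
import random

def precompetition(population, fitness, rng=None):
    """Pick two distinct B cells at random; if they encode the same
    number of communities, remove the lower-fitness one with
    probability 1/2 (sketch of the operator described in the text)."""
    rng = rng or random.Random()
    if len(population) < 2:
        return population
    i, j = rng.sample(range(len(population)), 2)
    # each solution is a community-label array, so the number of
    # distinct labels is the number of communities it encodes
    if len(set(population[i])) == len(set(population[j])) and rng.random() < 0.5:
        loser = i if fitness[i] < fitness[j] else j
        return population[:loser] + population[loser + 1:]
    return population
```

By penalizing pairs of solutions that encode the same number of communities, the operator biases the population towards structurally different partitions, which is exactly the diversity effect discussed above.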

Results
We now discuss the overall experimental results and compare them with the results obtained by state-of-the-art algorithms. It is important to stress first that, in a preliminary work (Spampinato et al. 2019), opt-IA was compared to the Louvain algorithm on a set of different, simple, and small networks, which for simplicity are reported in Table 3. By inspecting this table, it becomes clear how opt-IA, based on a pure random search, outperforms one of the best deterministic approaches for community detection, namely the Louvain algorithm. However, given the low complexity of these tested networks, a deeper and more detailed analysis must be conducted in order to evaluate the real performance of opt-IA. Therefore, for these new experiments, all networks in the dataset of Table 1 were considered, and the experimental protocol described in Sect. 4.1 was used. The main goal of these experiments, as well as of all comparisons made, is to prove the competitiveness and reliability of opt-IA in terms of the quality of the solutions found, i.e. in maximizing the modularity function (Eq. 1).
To this end, the proposed opt-IA algorithm was compared to several different heuristics and metaheuristics (20 overall), each of them designed and developed as a modularity optimization approach (Atay et al. 2017; Li et al. 2020; Doush et al. 2020). Specifically, the algorithms considered for the comparisons, in addition to Louvain, are: BA (Bat Algorithm), a metaheuristic method based on the echolocation behaviour of bats (Yang 2010); GSA (Gravitational Search Algorithm), a metaheuristic optimization algorithm based on the law of gravity and mass interactions (Rashedi et al. 2009); BB-BC (Big Bang-Big Crunch), an algorithm inspired by the theories of the evolution of the universe, in which, during the main phase, energy dissipation produces disorder and randomness, whilst in a second stage the randomly distributed particles are drawn into an order, i.e. the values in the vectors of the function to be optimized are determined (Erol and Eksin 2006); BADE (improved Bat algorithm based on Differential Evolution), an improved version based on the combination (hybridization) of the Bat Algorithm and the Differential Evolution (DE) algorithm (Storn 1995; Storn and Price 1997), where the latter is used in the population regeneration process and both algorithms are used together for the selection of adjacent nodes; SSGA (Scatter Search algorithm based on Genetic Algorithm), a Scatter Search (SS) approach (Glover 1977; Martí et al. 2006) applied to the best chromosomes provided by a Genetic Algorithm (GA) (Holland 1975; Goldberg 1989), subjecting the population to the crossover and mutation processes around the best solutions; HDSA (Hyper-heuristic Differential Search Algorithm), a hyper-heuristic based on the migration of artificial superorganisms (Civicioglu 2012), where each member of the population migrates between the maximum or minimum solution of the problem using the Differential Search Algorithm (DSA) in the process of regeneration of individuals; MA-Net (Naeni et al. 2015), a memetic algorithm based on the combination of a genetic algorithm with a local search; GACD (Shi et al. 2009), a genetic algorithm that takes advantage of the efficiency of the locus-based adjacency encoding scheme to represent a community partition; CC-GA (Clustering Coefficient-based Genetic Algorithm) (Said et al. 2018), a genetic algorithm that uses the clustering coefficient (CC), a social network analysis measure, to generate a better initial population; MSIG (Multi-Start Iterated Greedy algorithm) (Sánchez-Oro and Duarte 2018), which uses a new greedy procedure for generating the initial solutions and reconstructing the solutions, but has the disadvantage of being computationally expensive; IDPSO-RO (Improved Discrete Particle Swarm Optimization with Redefined Operator) (Cao et al. 2015), based on particle swarm optimization, in which the update formulas of velocity and position are redefined according to the locus-based adjacency representation; IG (Iterated Greedy algorithm) (Li et al. 2020), based on an iterative process that combines a destruction phase and a reconstruction phase: a complete candidate solution is partially destructed, and afterwards a new complete candidate solution is reconstructed via a greedy constructive heuristic; MOBA (Multi-Objective Bat Algorithm) (Doush et al. 2020), a multi-objective bat algorithm adapted to model and solve the community detection problem; and, finally, the following heuristics and metaheuristics: EFF (Enhancement FireFly Algorithm), RB (Rosvall and Bergstrom Algorithm), the Blondel Algorithm, RN (Ronhovde and Nussinov Algorithm) (Ronhovde and Nussinov 2009), CNM (Newman and Girvan 2004) and the MOGA-Net algorithm (Pizzuti 2008), each of which was taken from Doush et al. (2020).
In accordance with what was previously described, the parameter settings of opt-IA in all performed experiments are: d = 100 as population size; Pdie = 0.02 as the probability of the random aging operator; mutation rate M set to 1; and Tmax = 4000 as the maximum number of generations used as stopping criterion. The duplication parameter dup, in accordance with the parameter tuning reported in Sect. 4.2, has been set to 4 (dup = 4) for all instances with |V| < 100 (small social networks), whereas for the larger ones (|V| ≥ 100) the experiments were performed with dup = 9 and dup = 10. Every experiment has been performed on 30 independent runs. The obtained outcomes are summarized in Tables 4 and 5: Table 4 reports the results of the proposed algorithm on small social networks, whilst Table 5 shows the results on larger social networks and on biological networks. In this last table, the best results obtained for dup = 9 or dup = 10 are highlighted in boldface.
This different setting of the duplication parameter is obviously due to the simplicity of the first networks (dup = 4) compared to the last ones (dup = 9 or dup = 10), which consequently require a more targeted search and a less wide exploration of the solution space. Larger networks with a density ≥ 1% (see Table 1) do not require great variability in the population, and for this reason dup = 9 seems to be the most appropriate value. Indeed, although in the social networks there is little difference in the results between the two dup values (dup = 9 vs dup = 10), in C. elegans MRN, where the density is 1.98%, a significant improvement is instead obtained in terms of best modularity found (Best) and average of the values (Mean). On the other hand, for all networks with a low density (< 1%), the parameter dup = 10 ensures good average values (Mean) on the Cattle PPI instance and the best modularity (Best) for the E. coli TRN network. In these cases, a small increase in the dup parameter, i.e. having a larger number of duplicates, allows producing higher variability, and consequently enables the algorithm to work well on very sparse networks.
Tables 6, 7, 8 and 9 report the comparisons of opt-IA with the other heuristics and metaheuristics. The results shown are averaged over 30 independent runs for all algorithms. Note that, unlike the other algorithms, which use a fixed maximum number of generations, MA-Net stops running only when 30 generations are performed without any improvement. In each table, the best modularity values (Best), average values (Mean), worst modularity (Worst), standard deviation (StD), and number of communities discovered (NC) are shown, respectively.
By analysing Table 6, it is clear how the proposed opt-IA considerably outperforms all compared algorithms, except for the HDSA hyper-heuristic. Regarding the latter, however, it is possible to note how both algorithms (opt-IA and HDSA) show identical performances on the first two networks in the table, reaching the same values of Best and Mean, whilst on the last two HDSA outperforms opt-IA only with respect to the average values (both reach the same Best values). On the Bottlenose Dolphins network, instead, opt-IA strictly outperforms HDSA, reaching a better mean value and a standard deviation equal to zero. It is important to emphasize that HDSA is a hyper-heuristic and thus, by design, exploits different types of appropriate heuristics: in each generation it considers the one that returns the best result. It follows, obviously, that this method is potentially more robust from a Mean value perspective. However, the difference between the values obtained by the two algorithms, both as Best and as Mean, is almost irrelevant, demonstrating that opt-IA and HDSA can be considered comparable overall. In Table 7, opt-IA is compared with a second group of more recent metaheuristic methods. In this comparison as well, the proposed algorithm outperforms the compared algorithms on all networks. Indeed, if the comparison is inspected from a ranking perspective with respect to the Best values, opt-IA is always at the top, whilst if it is analysed with respect to the Mean values, it is easy to assert that it is always among the first two positions, and very often in the first one. It is worth emphasizing once again that, whilst these compared algorithms include deterministic and sophisticated strategies, opt-IA is fully random both in the generation of the initial population and in the search process within the search space.
Having nonetheless shown better performances, opt-IA confirms the robustness and efficiency of all the designed random operators.
In Table 8, opt-IA is compared with the last group of heuristics and metaheuristics, which also includes a multi-objective approach. Here as well, opt-IA outperforms all compared algorithms except for EFF, with which it instead alternates in the first position. Indeed, opt-IA outperforms EFF, ranking first, on the Bottlenose Dolphins and American College Football networks, whilst it is outperformed by EFF on the Zachary's Karate Club and Books about US Politics networks, resulting in second position. In this comparison too, therefore, the proposed algorithm always ranks among the first two positions, confirming its reliability and efficiency.
In Table 9, opt-IA is compared with the first group of algorithms on biological networks. Unfortunately, no results for the other considered algorithms were found on these networks. Inspecting this table, it is possible to see that opt-IA strictly outperforms all algorithms, including HDSA, on the C. elegans MRN and Helicobacter pylori PPI networks with respect to all evaluation metrics (Best, Mean, Worst and StD), while also detecting a smaller number of communities. On the other two networks, however, opt-IA and HDSA are comparable on Cattle PPI with respect to the best value reached, but opt-IA is outperformed by HDSA with respect to the mean values; on the E. coli TRN instance, HDSA outperforms opt-IA in all assessment values. It is important to highlight that opt-IA performs better than HDSA on the larger networks. Finally, focusing only on the comparison between opt-IA and Louvain, the former considerably outperforms the latter, except on the Helicobacter pylori PPI network. Overall, then, analysing all the outcomes and comparisons performed, it is possible to assert that the proposed opt-IA outperforms all the compared metaheuristics, and shows comparable performances with respect to the hyper-heuristic HDSA. It is important to highlight once again that whilst HDSA, being a hyper-heuristic, takes advantage of several efficient heuristics and each time chooses the best solution among all those they find, opt-IA is entirely blind to the features of the problem and relies only on random search, without any deterministic guide. Therefore, taking into account these differences and features, and, primarily, having found results comparable with those of HDSA, it is possible to confirm the efficiency and reliability of the proposed random search algorithm opt-IA.
In order to study opt-IA on large networks, i.e. those with more than 1000 nodes, a further set of networks was considered and tested, and the results are reported in Table 10. Of course, opt-IA being fully based on random search, a larger number of iterations was needed for these experiments (T_max = 10^3). During these experiments, we observed that, as the network size increases, the combination of the developed operators guided the algorithm towards useless search, neglecting to properly and deeply explore specific neighbourhoods; such behaviour did not appear on the previously tested networks. In light of this, to indirectly guide the search to explore promising regions of the search space more intensively, a simple modification was made to opt-IA: the selection operator is allowed to also choose elements having the same fitness. This modified version, reported in Table 10, is labelled opt-IA_2, whilst the previous one is called opt-IA_1. Both versions are compared with the well-known Louvain algorithm. By comparing the two versions, it appears clear how such a simple change allows opt-IA to improve the modularity values overall. At any rate, the results obtained by the best version of opt-IA still fall somewhat short of those obtained by Louvain. This is explained by opt-IA being fully based on random search, without even a simple deterministic approach. It is very likely that, by further increasing the number of generations, the gap with Louvain's results would narrow substantially. As expected, this is the main limitation of our proposed random search algorithm.
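The modification distinguishing the two variants can be sketched as follows; the dictionary-based individual representation and the parameters are illustrative assumptions, not the paper's actual implementation:

```python
def select_survivors(population, pop_size, allow_ties=True):
    """Rank individuals by fitness (modularity) and keep the best pop_size.

    With allow_ties=False (opt-IA_1-like behaviour in this sketch), at most
    one element per fitness value survives; with allow_ties=True
    (opt-IA_2-like), elements having the same fitness may also be selected,
    which indirectly pushes the search to explore promising regions of the
    search space more intensively.
    """
    ranked = sorted(population, key=lambda ind: ind["fitness"], reverse=True)
    if allow_ties:
        return ranked[:pop_size]
    survivors, seen = [], set()
    for ind in ranked:
        if ind["fitness"] not in seen:
            survivors.append(ind)
            seen.add(ind["fitness"])
        if len(survivors) == pop_size:
            break
    return survivors
```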
To produce as large a comparison as possible, three further real networks and 13 other optimization methodologies were considered, in order to evaluate the efficiency of the two variants of opt-IA from the perspective of the quality of the modularity found; the results are reported in Table 11. The compared methods include, among others: a PageRank-based algorithm (PPC); a genetic algorithm based on a novel coding scheme (NGACD); a multi-objective genetic algorithm (MOGA/N); two multi-objective evolutionary algorithms (MOEA/D and RMOEA); and two embedding-based algorithms (GEMSEC and DANMF). It is worth underlining that no details are provided in the cited paper about the experimental protocol adopted, with particular reference to the fixed number of generations. Inspecting the results shown in Table 11, it emerges that the two algorithms PMCL and ePMCL outperform all compared algorithms, with a considerable gap in terms of the modularity found. These better performances are due to the combination of an evolutionary method and a Markov chain-based dynamic process, both well known to be efficient search methodologies. In particular, in ePMCL, along with an evolutionary approach to optimize the iterative updating, pruning and transition process of the Markov Clustering (MCL) algorithm, a genetic algorithm is also used to find the best parameter combination. At any rate, opt-IA always stays in the top three on the Les Miserables and Word Adjacencies networks, finding higher modularity values than Louvain and Combo, for instance. As expected, however, the performances of opt-IA decrease on the last network, which is larger than the first two; this is due, as highlighted many times, to the randomness of the algorithm, which would require a larger number of generations. By increasing the number of generations, the gap with the other algorithms would very likely narrow considerably.
Furthermore, these experiments confirm that the variant of opt-IA allowing the selection of elements with the same fitness (opt-IA_2) works better on large networks. It is worth stressing once again that, although opt-IA is fully guided by random rules and operators, without any deterministic or solution-improvement approach, all the presented results have shown and proved its efficiency, its robustness, and its competitiveness with respect to many optimization algorithms that are much more sophisticated and refined in terms of search strategy. Finally, Fig. 10 displays the communities detected by opt-IA on the American College Football (plot a), Books about US Politics (plot b) and C. elegans MRN (plot c) networks, respectively.

Functional sensitivity analysis
Although modularity is the most commonly used evaluation metric, it says very little about how similar the detected communities are to the original/target ones. Furthermore, an important limitation of modularity optimization is that it can fail to identify smaller communities, due to the degree of interconnectivity of the communities (Fortunato and Barthelemy 2007). To this end, we conducted a second experimental step, using synthetic networks generated by the LFR algorithm proposed in Lancichinetti et al. (2008); Lancichinetti and Fortunato (2009). The aims of this second experiment are to analyse the convergence behaviour of opt-IA in scenarios of different complexity, thanks to the diverse network features that can be generated, and, most importantly, to inspect how similar the communities uncovered by opt-IA are to the target ones. Obviously, since all networks are artificially generated, their community structures are known. It is important to stress that this benchmark faithfully reproduces the key features of real graph communities, thereby confirming its validity.
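For reference, the modularity that opt-IA maximizes can be computed directly from an edge list via Newman's formula Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j), equivalently Q = Σ_c [l_c/m − (d_c/2m)²]. A minimal, dependency-free sketch (the data structures are illustrative, not the paper's implementation):

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman's modularity Q for an undirected graph given as an edge list.

    communities maps each node to its community label. Uses the
    community-wise form Q = sum_c (l_c/m - (d_c/(2m))^2), where l_c is the
    number of intra-community edges and d_c the total degree of community c.
    """
    m = len(edges)
    degree = defaultdict(int)
    intra = 0  # edges falling inside some community
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if communities[u] == communities[v]:
            intra += 1
    comm_degree = defaultdict(int)
    for node, c in communities.items():
        comm_degree[c] += degree[node]
    return intra / m - sum((d / (2 * m)) ** 2 for d in comm_degree.values())
```

For two triangles joined by a single bridge edge, splitting the graph into its two triangles gives Q = 6/7 − 1/2 ≈ 0.357, matching the intuition that the partition captures the dense groups well.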
The networks generated for this experiment were created with 300, 500, 1000, 2000 and 3000 nodes, respectively, each of them with average degree 15 and 20, and maximum degree equal to 50. Furthermore, for each instance we set: τ_1 = 2 as the exponent of the degree distribution; τ_2 = 1 as the exponent of the community size distribution; min_c = 10 and max_c = 50, respectively, as the minimum and maximum community size. All experiments were conducted by varying the mixing parameter μ_t, which identifies the relationship between a node's external and internal degree with respect to its community: the greater the value of μ_t, the greater the number of edges a node shares with nodes outside its community. In order to analyse the performances of opt-IA in several scenarios, the mixing parameter was varied in the range {0.1, 0.2, ..., 0.8}.
Once the synthetic networks were generated, each with different features, a functional sensitivity analysis was conducted using well-known community structure similarity metrics: (1) Normalized Mutual Information (NMI), which allows for assessing how similar the detected communities are to the real ones; (2) Adjusted Rand Index (ARI) (Hubert and Arabie 1985), which focuses on pairwise agreement, that is, for each possible pair of elements it evaluates how similarly the two partitions treat them; and, finally, (3) Normalized Variation of Information (NVI) (Meilă 2007), expressed using the Shannon entropy, which measures the amount of information lost and gained in changing from one clustering to another: the sum of the information needed to describe C given C′, and the information needed to describe C′ given C. Note that NMI is the metric most used in community detection tasks. It is also important to point out that the closer the NMI and ARI values are to 1 (the closer to 0 for NVI, instead), the more similar the uncovered communities are to the target ones.
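Of the three metrics, NMI follows directly from its entropy-based definition, NMI(A, B) = 2·I(A; B) / (H(A) + H(B)). A minimal Python version (an illustration, not the implementation used in the experiments) takes the two partitions as equal-length lists of labels:

```python
from collections import Counter
from math import log

def nmi(part_a, part_b):
    """Normalized Mutual Information between two partitions, given as
    equal-length lists of community labels (one label per node).
    Returns 1 for identical partitions (up to relabelling), 0 for
    statistically independent ones."""
    n = len(part_a)
    pa, pb = Counter(part_a), Counter(part_b)
    joint = Counter(zip(part_a, part_b))
    h_a = -sum((c / n) * log(c / n) for c in pa.values())
    h_b = -sum((c / n) * log(c / n) for c in pb.values())
    mi = sum((c / n) * log((c * n) / (pa[a] * pb[b]))
             for (a, b), c in joint.items())
    if h_a + h_b == 0:
        return 1.0  # both partitions consist of a single community
    return 2 * mi / (h_a + h_b)
```

Note that labels only need to agree structurally: `nmi([0, 0, 1, 1], [1, 1, 0, 0])` is 1, since the two partitions group the nodes identically.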
Figure 11 shows the plots of the NMI (top), ARI (middle) and NVI (bottom) indexes for the LFR benchmarks with 300, 500 and 1000 vertices. By analysing each plot, it is possible to note that the NMI and ARI curves remain at high values (> 0.70) for μ_t ≤ 0.6 and μ_t ≤ 0.5, respectively, whilst the NVI curve remains at low values for μ_t ≤ 0.5. This proves that opt-IA is able to uncover communities roughly close to the original ones. The NMI and ARI curves instead begin to decrease, and the NVI curve to increase, as the graph becomes denser (μ_t > 0.6); in this case opt-IA detects poorly defined community structures.
Figure 12, instead, displays the functional sensitivity analysis conducted on the synthetic networks with 2000 (left column), 3000 (middle column) and 5000 (right column) vertices. Inspecting these plots, the NMI curves still remain high for μ_t ≤ 0.5, whilst they decrease as the mixing parameter increases, that is, on denser networks. It is important to note that the behaviour of the NMI curves on the middle and right plots, that is on 3000 and 5000 vertices, where the NMI values are on average high (≥ 0.58), highlights the limit of opt-IA due to its randomness and, consequently, points out the need for longer runs when solving larger networks. The same statements can also be made for the ARI and NVI plots for 3000 and 5000 vertices. On the other hand, these high NMI values obtained by opt-IA prove the ability of the algorithm to detect communities as similar as possible to the target ones. The ARI values (middle plot, left column) remain acceptable for all μ_t ≤ 0.4, whilst they decrease at higher values of μ_t. As repeatedly said, this is caused by the fully random search at the basis of the algorithm, which requires a longer time to converge towards good solutions. The same analysis can also be made for the NVI curves (bottom plot, left column). This is confirmed by the convergence behaviours shown in Sect. 4.3 (Fig. 5), where in each case the relative convergence is represented by a monotonically increasing curve with respect to the number of generations.
Fig. 12 The curves of the Normalized Mutual Information (top), Adjusted Rand Index (middle) and Normalized Variation of Information (bottom) indexes, performed on 2000, 3000 and 5000 node synthetic networks

Conclusion
In this paper, we addressed community detection, one of the most influential problems in many research areas.
The proposed algorithm, called opt-IA, is inspired by the clonal selection principle and is consequently based on three main immune operators, namely cloning, hypermutation and stochastic aging, whose combination allows the algorithm to properly balance the exploration and exploitation of the search space. The presented algorithm is entirely blind to the features of the problem, being mainly based on a pure random search for solutions combined with stochastic operators. In this way, the algorithm can easily escape local optima and perform an extensive exploration, thanks to the high diversity in the population produced by the several stochastic strategies developed. The reliability and efficiency of opt-IA in community detection have been tested on several social and biological networks, each of different complexity and size. Inspecting the results of all the performed experiments, the efficiency and reliability of opt-IA clearly emerge, as well as its robustness, as proven by the analysis of the convergence quality and learning capability. Including a random search strategy in opt-IA, along with several stochastic operators, allows the algorithm to carry out a careful and, at the same time, vast exploration of the search space. An analysis of the computational time has also been conducted using Time-To-Target plots (TTT-plots), which confirm that, although opt-IA needs more iterations than other algorithms (due to its pure randomness), it reaches the best solutions in acceptable times. Indeed, an asymptotic complexity analysis has also been presented, from which it is possible to claim that the upper bound for its running time is O(n^3).
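One generation of the clonal-selection scheme described above can be sketched as follows; the representation, the rates (`dup`, `theta`) and the helpers `fitness`/`mutate` are hypothetical illustrations, not the paper's actual parameterization:

```python
import random

def evolve(population, fitness, mutate, dup=2, theta=0.25, seed=None):
    """One generation in the spirit of opt-IA: cloning, hypermutation and
    stochastic aging, followed by selection of the best individuals."""
    rng = random.Random(seed)
    # Cloning: each individual produces dup copies.
    clones = [dict(ind) for ind in population for _ in range(dup)]
    # Hypermutation: perturb each clone purely at random (blind search),
    # then re-evaluate it and reset its age.
    for clone in clones:
        clone["solution"] = mutate(clone["solution"], rng)
        clone["fitness"] = fitness(clone["solution"])
        clone["age"] = 0
    # Stochastic aging: each element is discarded with probability theta,
    # keeping population diversity high regardless of fitness.
    merged = [ind for ind in population + clones if rng.random() >= theta]
    # Selection: keep the best |population| elements (ties are kept,
    # as in the opt-IA_2 variant discussed earlier).
    merged.sort(key=lambda ind: ind["fitness"], reverse=True)
    return merged[: len(population)]
```

Iterating this step, with random restarts of aged-out individuals, gives the overall random-search loop whose convergence behaviour is analysed in the experiments.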
In order to assess opt-IA with respect to the state of the art in community detection, the algorithm was compared against about twenty different heuristics and metaheuristics, including one hyper-heuristic methodology. From these comparisons, it appears very clear that the proposed algorithm strictly outperforms most of the compared algorithms, except for the hyper-heuristic, with which the performances can be considered comparable overall. In particular, the main performance difference between the hyper-heuristic and opt-IA lies in the average of the best solutions found over 30 independent runs. This is reasonably foreseeable, however, since the main feature of hyper-heuristic methods is the combination of several heuristics, efficient on the problem to be tackled, so as to exploit the strength of one to overcome the weaknesses of the others, whilst opt-IA is entirely based on random search combined with purely stochastic operators.
In conclusion, all the outcomes and analyses conducted prove the reliability of the proposed random search, making opt-IA comparable with sophisticated algorithms, especially on networks that are not too dense, such as, for instance, biological networks. Obviously, the limit of random search, and therefore of opt-IA, is the need for a large number of generations to converge to acceptable solutions when tackling large networks (e.g. |V| ≥ 3000). However, since the solution search process is entirely guided by randomness and stochastic operators, without any deterministic approach or any information on the features of the network (opt-IA is a fully blind algorithm), the algorithm can, on the other hand, be easily adapted and applied to dynamic network scenarios and situations of high uncertainty.