Memetic Computing, Volume 5, Issue 2, pp 131–139

Cell state change dynamics in cellular automata


  • David Iclănzan
    • Department of Electrical Engineering, Sapientia Hungarian University of Transylvania
  • Anca Gog
    • Department of Computer Science, Babeş-Bolyai University
  • Camelia Chira
    • Department of Computer Science, Babeş-Bolyai University
Regular Research Paper

DOI: 10.1007/s12293-012-0093-z

Cite this article as:
Iclănzan, D., Gog, A. & Chira, C. Memetic Comp. (2013) 5: 131. doi:10.1007/s12293-012-0093-z


Cellular automata are discrete dynamical systems having the ability to generate highly complex behaviour starting from a simple initial configuration and set of update rules. The discovery of rules exhibiting a high degree of global self-organization is of major importance in the study and understanding of complex systems. This task is not easily achieved since coordinated global information processing must rise from the interactions of simple components with local information and communication. In this paper, a fast supporting heuristic of linear complexity is proposed to encourage the development of rules characterized by increased dynamics with regard to cell state changes. This heuristic is integrated in an evolutionary approach to the density classification task. Computational experiments emphasize the ability of the proposed approach to facilitate an efficient exploration of the search space leading to the discovery of complex rules situated beyond the simple block-expanding rules.


Keywords: Cellular automata · Density classification · Evolutionary algorithms

1 Introduction

Cellular automata (CAs) represent important tools in the study of complex interactions, emergent behaviour and computational complexity of a system. CAs can be viewed as decentralized structures of simple and locally interacting elements that evolve following a set of rules [22, 23]. These local interactions often give rise to a global coordinated behaviour. Examples of systems where such an emergence occurs include economic systems, insect colonies, the immune system and the brain. CAs provide an idealized environment for studying how simulated evolution can develop systems characterized by “emergent computation” where a global, coordinated behaviour results from the local interaction of simple components [15]. However, programming CAs is not an easy task, particularly when the desired computation requires global coordination. This is due to the challenges associated with finding rules able to generate a desired global behaviour based on elements with local information and communication.

One of the most widely studied CA problems is the density classification task (DCT) [14]. This is a prototypical distributed computational task aiming to find the density most present in the initial cellular state. Computational methods engaged for DCT include evolutionary algorithms [6, 15, 16, 18, 19], genetic programming [1], coevolutionary learning [8, 10] and gene expression programming [4]. These methods have been able to generate CA rules with better performance than any known human-written rule for DCT. The particle strategies discovered by genetic algorithms and reported in [2, 15, 16, 20] are able to facilitate global synchronization opening the prospect of using evolutionary models to automatically evolve computation for more complex systems. The potential of evolutionary models to efficiently approach the problem of detecting CA rules facilitating a certain global behaviour is indeed confirmed by various current research results [3, 6, 13, 16, 18]. To the best of our knowledge, the best reported DCT rule was evolved by a two-tier evolutionary algorithm with a performance of \(0.89\) [18]. The success of the method can be attributed to the smart prefiltering of the search space by employing various efficient heuristics.

In this paper, we focus on detecting the formation and propagation of “signals” at runtime. This behaviour can potentially lead to particle based computing and subsequently to the discovery of high performance rules. An automatic signal detection method is proposed to find promising emergent space-time patterns that can guide an evolutionary search process toward more complex, better performing rules. Numerical experiments and results presented for the DCT emphasize the capability of the proposed approach to push the search beyond the simple block-expanding rules and efficiently detect high-performing complex CA rules for DCT. The method and experiments presented in the current paper extend and further analyse the ideas and results reported in [7].

The paper is organised as follows: DCT is defined and related work is reviewed, some important remarks on detecting complex space-time signatures are analysed, the proposed approach to enhance the computational mechanics of CAs is presented, and computational experiments are discussed. Conclusions and some directions for future work are drawn at the end of the paper.

2 The density classification task

One-dimensional CAs capable of performing computational tasks have been extensively studied in the literature [8, 13, 16–18]. A lattice of \(N\) two-state cells is used for representing the CA. The state of each cell changes according to a function that takes into account the current states in the neighbourhood. The neighbourhood of a cell is given by the cell itself and its \(r\) neighbours on both sides of the cell, where \(r\) represents the radius of the CA. The initial configuration (IC) of cell states (0s and 1s) for the lattice evolves in discrete time steps, updating cells simultaneously according to the CA rule.

DCT aims to find a binary one-dimensional CA able to classify the density of 1s (denoted by \(\rho _0\)) in the IC. If \(\rho _0 > 0.5\) (1 is dominant in the IC) then the CA must reach a fixed-point configuration of 1s; otherwise it must reach a fixed-point configuration of 0s within a certain number of time steps. It has been shown that there is no rule that can correctly classify all possible ICs [12]. The length of a CA is usually an odd number to avoid the undecidable case when \(\rho _0 = 0.5\) (so that a majority is always defined). Most studies consider the case \(N=149\) and \(r=3\), the latter leading to a neighbourhood size of 7 cells [1, 13, 15, 18]. The performance of a rule measures the classification accuracy of a CA based on the fraction of correct classifications over \(10^4\) ICs selected from an unbiased distribution (\(\rho _0\) is centered around \(0.5\)). The computationally less expensive fitness evaluation takes into account 100 ICs with uniformly distributed densities. We will call this fitness function F100 in the current paper. All the performances reported in this section refer to this DCT version, also engaged in the current paper.
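The fitness evaluation described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: `run_ca` is a hypothetical callable that evolves an IC to its final lattice, and `make_ic` is a hypothetical density-controlled IC generator.

```python
import random

def make_ic(n, density):
    # Build an IC with approximately the given density of 1s, randomly placed.
    ones = round(density * n)
    ic = [1] * ones + [0] * (n - ones)
    random.shuffle(ic)
    return ic

def f100(run_ca, n=149, num_ics=100):
    # Fraction of ICs (densities drawn uniformly) whose final fixed-point
    # configuration matches the majority state of the IC.
    correct = 0
    for _ in range(num_ics):
        ic = make_ic(n, random.random())
        majority = 1 if sum(ic) * 2 > n else 0   # n is odd, so no ties
        final = run_ca(ic)
        if all(cell == majority for cell in final):
            correct += 1
    return correct / num_ics
```

A hypothetical perfect classifier would score 1.0 under this measure, while default strategies (always converging to the same state) score around 0.5.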

In order to illustrate a CA evolution, let us consider a simple update rule for radius \(r = 1\), which gives a neighbourhood of 3. The resulting cell state at the next time step for each neighbourhood out of the \(2^3=8\) possible configurations is defined as follows: \(000 \rightarrow 1, 001 \rightarrow 0, 010 \rightarrow 0, 011 \rightarrow 1, 100 \rightarrow 1, 101 \rightarrow 1, 110 \rightarrow 0\) and \(111 \rightarrow 0\). By applying this update rule to a cellular automaton with \(N = 11\) at a time step \(t\), we obtain the lattice depicted in Fig. 1 at time step \(t+1\). It should be noted that the lattice has periodic boundary conditions, meaning that the right hand side neighbour of the last cell is actually the first cell of the lattice.
Fig. 1

DCT example: lattice at time steps t and t + 1, following a simple update rule
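The update step described above can be sketched in Python; the initial lattice below is hypothetical, since Fig. 1 is not reproduced here.

```python
# Rule table keyed by (left, self, right) neighbourhood, as given in the text:
# 000->1, 001->0, 010->0, 011->1, 100->1, 101->1, 110->0, 111->0
RULE = {
    (0, 0, 0): 1, (0, 0, 1): 0, (0, 1, 0): 0, (0, 1, 1): 1,
    (1, 0, 0): 1, (1, 0, 1): 1, (1, 1, 0): 0, (1, 1, 1): 0,
}

def step(lattice):
    """Apply the rule simultaneously to every cell, with periodic boundaries
    (the right-hand neighbour of the last cell is the first cell)."""
    n = len(lattice)
    return [RULE[(lattice[(i - 1) % n], lattice[i], lattice[(i + 1) % n])]
            for i in range(n)]

# Hypothetical N = 11 lattice at time step t:
t0 = [0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1]
t1 = step(t0)  # lattice at time step t + 1
```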

DCT is a challenging problem extensively studied due to its simple description and potential to generate a variety of complex behaviours. Gacs, Kurdyumov and Levin [5] proposed the well-known GKL rule which approximately computes whether \(\rho _0 > 0.5\). The GKL rule has a good performance (around \(0.81\)) when tested on random 149-bit lattices but gives significant classification errors when the density of 1s in the IC is close to \(0.5\) [15]. Nevertheless, the GKL rule represents the starting point for studying DCT and inspired Packard [19] to make the first attempts to use genetic algorithms for finding CA rules.

Koza [11] used Genetic Programming (GP) to evolve a state-transition rule that enables a cellular automaton to produce certain desired emergent behavior. Continuing Koza’s approach, Andre et al. [1] obtained a rule for DCT with a performance of \(0.823\). This rule was found by applying a standard GP model with automatically defined functions. The algorithm also uses the island model for parallelization: the initial subpopulations are created locally at each processing node and generations are run asynchronously. Gene Expression Programming (GEP) is a computational paradigm (sharing many similarities with GP) proposed in [4] and applied to DCT. Ferreira [4] showed that GEP outperforms GP for several benchmark problems including DCT. The main difference between the two approaches is that GP uses expression trees as individuals, while GEP uses linear chromosomes that are then translated into expression trees. Compared to GP, Ferreira indeed obtains a slightly better performance of \(0.8255\).

The potential of genetic algorithms for computational emergence in the DCT has been extensively investigated [2, 15, 16, 20]. Individuals are encoded as bit strings listing the rule table output bits in lexicographic order of neighbourhood patterns. The evolutionary search process is guided by the F100 function. The three main strategies discovered are default, block-expanding (a refinement of the default strategy) and particle. The performance of default strategies is around \(0.5\). Block-expanding strategies explore the presence of a large block of 0s/1s (correlated with overall high/low density) by expanding it to cover the whole string. The performance value for these strategies is typically between \(0.55\) and \(0.72\) [20]. A more sophisticated rule where global coordination is more visible refers to the particle strategy with a performance value of \(0.72\) or higher [2, 15]. Typically, the evolutionary search process discovers many default strategies followed by block-expanding strategies and few particle rules on some runs [15, 20]. The best particle rule found by Mitchell et al. [15] has a performance of \(0.769\). A similar result in terms of rule performance is reported in [6] using a Circular Evolutionary Algorithm (CEA) with asynchronous search. CEA is able to produce high-quality strategies in a significantly larger number of runs compared to other evolutionary approaches to DCT.
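The bit-string encoding used by these genetic algorithms can be illustrated as follows: for \(r = 3\) the neighbourhood has 7 cells, so a rule is a string of \(2^7 = 128\) output bits, indexed by the neighbourhood pattern read as a binary number. This is a sketch under the conventions stated above, with illustrative helper names.

```python
import random

R = 3                   # CA radius
NHOOD = 2 * R + 1       # 7-cell neighbourhood
RULE_LEN = 2 ** NHOOD   # 128 output bits per rule

def random_rule():
    # An individual: one output bit per neighbourhood pattern, listed in
    # lexicographic order (0000000 first, 1111111 last).
    return [random.randint(0, 1) for _ in range(RULE_LEN)]

def step(lattice, rule):
    """One synchronous update of a periodic lattice under `rule`."""
    n = len(lattice)
    nxt = []
    for i in range(n):
        idx = 0
        for d in range(-R, R + 1):           # read the 7-cell window left to right
            idx = (idx << 1) | lattice[(i + d) % n]
        nxt.append(rule[idx])
    return nxt
```

Under this indexing, the all-0s neighbourhood selects bit 0 of the rule string and the all-1s neighbourhood selects bit 127.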

Coevolutionary learning [8, 10] has been shown to produce strong results for DCT (the best rule has a performance of \(0.86\)). The idea is that a population of learners coevolves with a population of problems to facilitate continuous progress. The fitness definition implements a competitive relationship between rules and ICs: a rule is better if it correctly classifies more ICs, while an IC is better if it leads more rules to failure. However, coevolutionary models can lead to unsuccessful search, usually associated with the occurrence of the Red Queen effect. This means that individuals tend to specialize with respect to the current training problems as they are evaluated in a changing environment.

Oliveira et al. [3] present a multiobjective evolutionary approach to DCT based on the nondominated sorting genetic algorithm (NSGA). In order to overcome the enormous rule space, the authors integrate a parameter-based heuristic into the search process. This way, the heuristic value itself becomes one of the search process objectives. The other objective is finding a rule that correctly classifies the IC. The algorithm is implemented in a distributed cooperative computational environment and is able to discover new rules with a slightly better performance of \(0.8616\).

Wolz and Oliveira [21] propose a two-tier evolutionary framework able to detect rules with a performance of \(0.889\), clearly superior to the previous best, which was discovered using coevolutionary algorithms [8]. The approach integrates a basic evolutionary algorithm in a parallel multi-population search algorithm. To the best of our knowledge, the DCT rules presented in [21] are the ones with the best performance discovered to date [18]. An important constraint included in the latter two approaches [3, 21] refers to the number of state transitions of a rule following black-white, left-right (BWLR) transformations. Experiments have shown that this BWLR symmetry should be maximal to achieve high-performance DCT rules [18]. Oliveira et al. [18] have shown that the bits related to the complementary-plus-reflection transformation (BWLR symmetry) are totally symmetrical in the best rules published in [3].

3 Detecting complex space-time signatures

Existing research concerning CA computation emphasizes that well performing CA rules exhibit complex spatio-temporal behaviour, where coherent configurations emerge and interact. The question that naturally arises is how to characterize and group rules according to the emergent space-time patterns and their attractor basin topology. In this context, basin of attraction refers to the set of CA configurations that evolve to the same particular attractor for the vast majority of the rules of interest.

As indicated in the previous section, methods able to produce rules with high performance use a more sophisticated search mechanism, often relying, besides F100, on four or five supporting heuristics [3, 21]. In this work, we focus on developing fast supporting heuristics with linear computational complexity that can guide an evolutionary search process towards rules exhibiting particle-like structures. The goal is to address the limitations of the F100 function by encouraging the search to depart from the ordered regime of operation (see Fig. 2).
Fig. 2

Visualization of the interplay between two forces able to guide together the search towards rules exhibiting both good fitness and rich cell-state dynamics

The two forces depicted in Fig. 2 encourage the exploration of different regions of the rule search space. The first force (Force 1) corresponds to F100 and provides a coarse approximation for the performance of a rule. The classical F100 fitness drives evolutionary search away from chaotic modes of operation but at the same time it leads, with very high probability, only to simple ordered regimes of operation. Indeed, the first generations of an evolutionary algorithm search are usually dominated by default strategies, which are gradually replaced by block-expanding strategies, and finally (only in about 2 % of runs) particle strategies [15, 20]. One reason for such a low particle strategy discovery rate might be that F100 is not a powerful enough predictor for performance. This fitness function is a coarse approximation of the ability of a rule to perform a certain task. The second force (Force 2) introduced in Fig. 2 promotes dynamics by rewarding rules that exhibit, on average, increased cell state change rates. This corresponds to the novel fitness component proposed in this paper and is intended to encourage the search to move away from ordered regimes of operation in order to obtain emergent rules with a higher probability. The interplay between the two forces (one guiding the search towards chaotic regimes and the other leading to ordered regimes) has the potential to lead to the discovery of particle rules situated between the ordered and chaotic modes of operation.

For the DCT, complex rules exhibit transient phases during which a spatial and temporal transfer of information about the density in local regions takes place. Figure 3 visually depicts the difference between the space-time diagram of: (a) a block-expanding rule dominated by large areas of homogeneous blocks of 1s and 0s, and (b) a particle based rule, where the signal areas exhibit fractal like patterns, maintaining a local symmetry and balance between the density of 1s and 0s. The x axis of the space-time diagram (in Fig. 3 as well as for the rest of the diagrams given in the current paper) corresponds to the cells in the unidimensional CA array (therefore, we have N=149 cells on each line) while the y axis corresponds to time points starting from the IC at the first time point up to a maximum of \(2N\) time points, or as many as required for the CA to converge to either the all 1s or the all 0s state. Black cells correspond to cells with value 1, while white cells represent cells with value 0.
Fig. 3

Space-time diagram of: a a simple block-expanding rule (performance 0.64), and b best known rule for DCT (performance 0.89, rule reported in [18])

Particles are the main mechanisms for carrying information over long space-time distances. An expanding black block indicates that the local density is definitively leaning towards a majority of 1s, a white block is a vote for a majority of 0s, while a propagating checkerboard like pattern indicates that a decision cannot be made locally but must be postponed to a higher level. In this way, particles can act as signals. Higher level decisions can be made by performing logical operations on the information of particles that collide and interact. Thus, it is important to develop methods that can produce CA exhibiting particle-based computing. In contrast to previous approaches [3] that prune the search space a priori using various heuristics, in this work we focus on techniques that are able to efficiently detect and favor the formation of signals and particle interactions at runtime.

4 Proposed signal detection method for CA rule discovery

A new signal-based heuristic to detect particle emergent CA rules is proposed and engaged in an evolutionary search algorithm for DCT. The search is guided by a fitness function that rewards rules exhibiting increased dynamics, in addition to the known F100 measure of the performance of discovered rules in classifying 100 ICs.

The proposed heuristic is developed from the following observations:
  • Particle rules exhibit increased dynamics with regard to the rate of change of a cell state. In order to propagate in time, signals must maintain locally a roughly equal density of 1s and 0s. Therefore, the cells participating in signal transmission will often change their state between 0 and 1. In contrast, the cells in a CA following a block-expanding strategy quickly converge to a mostly constant state and rarely change their state during evolution.

  • Along with a density \(\rho _0\) close to \(0.5\), signals are also characterized by the repetition of patterns of 1s and 0s that are bilaterally symmetric (mirror-like symmetry). This symmetry enables the propagation of the signal by recursively transforming patterns of 1s into patterns of 0s and vice versa.

As a basic measure of cell dynamics, we introduce a normalized measure of how frequently a change occurs at the level of a cell. This measure is defined as the number of times a cell changes its state during the CA evolution, relative to the number of time steps \(T\) of the evolution (i.e. the steps needed to converge or the maximum time steps allowed in case of no convergence). Let \(S\) denote the state vector of a cell during the CA evolution. The following equation gives the proposed measure to account for the cell state dynamics at the level of \(S\):
$$\begin{aligned} celldyn(S) = \sum _{i=s+1}^T (S(i) \oplus S(i-1))/(T-s), \end{aligned}$$
where \(\oplus \) denotes the exclusive OR operator and \(s\) denotes the time step from which we would like to quantify the dynamics. If the dynamics for the entire evolution is of interest, then \(s\) should be set to \(0\). For a constant cell state during evolution, this measure of dynamics will be zero. For an ever alternating cell (a cell that changes its value in every time step), this value will be 1.
The dynamics of the entire rule are measured as the average cell state change dynamics of individual cells as given in the following equation:
$$\begin{aligned} C(ST) = \frac{1}{n}\sum _{j=1}^{n} celldyn (ST(:,j)), \end{aligned}$$
where “:” denotes the colon operator, thus ST(:, \(j\)) denotes the \(j\)th column of ST—the evolution in time of the \(j\)th cell.

The input of Eq. 2 is a space-time diagram denoted by \(ST\), which is obtained by running a CA on an IC until convergence or a maximum time step \(T_{max}\). In practice, for the DCT, \(ST\) is a binary matrix with \(n\) columns (the number of cells in the CA) and \(T\) rows (the actual time steps). When quantifying the changing dynamics of a CA, it is recommended to start the computation after a prefixed number of time steps, in order to eliminate the first, almost always noisy states from the space-time diagram. This offset is expressed by the parameter \(s\), which has the non-optimized, arbitrarily chosen value \(s=20\) in the experiments conducted in this paper. If there are fewer time steps \(T\) than the prefixed threshold \(s\), the value of \(C(ST)\) is not computed and is assigned the default value of 0.
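The two measures can be sketched as follows: `celldyn` implements the per-cell measure of Eq. 1 (including the offset \(s\) and the default value 0 for evolutions shorter than \(s\)), and `C` averages it over the columns of the space-time diagram as in Eq. 2. Function names and the list-of-rows representation of \(ST\) are illustrative.

```python
def celldyn(S, s=0):
    """Fraction of time steps after offset s at which the cell flips state.
    S is the state vector of one cell: S[0] is the IC state, S[T] the last."""
    T = len(S) - 1                     # number of time steps in the evolution
    if T <= s:                         # too few time steps: default to 0
        return 0.0
    flips = sum(S[i] ^ S[i - 1] for i in range(s + 1, T + 1))
    return flips / (T - s)

def C(ST, s=0):
    """Average cell state change dynamics over the n columns (cells) of the
    space-time diagram ST, given as a list of rows (one row per time step)."""
    n = len(ST[0])
    columns = [[row[j] for row in ST] for j in range(n)]
    return sum(celldyn(col, s) for col in columns) / n
```

As stated above, a constant cell yields 0 and an ever-alternating cell yields 1, so block-expanding diagrams score low while signal-rich diagrams score high.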

Using the metric given in Eq. 2, one can discriminate between block-expanding CA and CA with high state changing dynamics by computing an average for different space-time diagrams, obtained from random initial states with density close to 0.5. To differentiate between chaotic and complex rules, more sophisticated methods are needed, which can approximate the amount of information in the state alternating regions of the space-time diagram. While in the case of chaotic rules state changes are expected to be random, in the case of complex rules these signal regions follow a mirror-like symmetric pattern. Dividing the total area of mirror-like symmetric regions by the size of the space-time diagram can provide a good approximation of the amount of information (patterns) exhibited by the CA. Figure 4 depicts the result of a mirror-like symmetric pattern detection algorithm applied to the space-time diagram presented in Fig. 3b. Lighter colors correspond to bigger detected areas. Unfortunately, a mirror-like symmetric pattern matching search is computationally expensive and cannot be used in its current form to guide the evolutionary search for well performing CA. This is due to the huge computational cost of this pattern recognition task in 2D (where the size of the pattern is unknown), which would have to be engaged several times during each generation of the evolutionary algorithm. However, augmenting the raw performance measure of a CA with a reward value which depends on the cell changing dynamics measured with Eq. 2 can yield improved results because: (i) a high fitness value (measured over 100 ICs, with uniformly distributed densities) implies that the rule is not chaotic; and (ii) a high cell changing dynamics suggests that the rule is not block-expanding. Thus, a CA that maximizes both measures has an increased likelihood to belong to the complex class of CAs, exhibiting particles and emergent computing.
Fig. 4

The mirror-like symmetric pattern for the space-time diagram (see Fig. 3b) of the best known DCT (performance 0.89 [18] )

In order to obtain particle exhibiting CAs with higher frequency, the fitness function should encourage both good classification capability and increased dynamics with regard to the average cell state change rate. The first objective is rewarded by the classical F100 measure and the second one by the proposed \(C(ST)\) measure (Eq. 2). Consequently, the augmented fitness is envisioned as a weighted sum of these two components.

The presence of an increased cell state change rate is critical in the later phases of the evolutionary search in order to push it away from simple block-expanding rules. When rewarding the dynamics of a rule, we also wish to avoid the formation of local optima with a large basin of attraction for CAs having a mediocre original fitness value (close to 0.5) but exhibiting a high cell state change rate. For these cases, the driving force of the reward should not suppress the classification capability signal provided by F100. Taking into account these observations, we fine tune the interplay between the two objectives in the following way: (i) a reward for cell state change dynamics is accounted only for CAs with a classification capability (according to F100) greater than a parameter \(\alpha \) (this parameter approximates the transition threshold from default strategies to block-expanding ones) and (ii) the reward increases linearly with the original fitness value in order to avoid the cases where a high bonus for dynamics would suppress the signal given by F100.

Let \(F100(IC)\) be the percentage of correct classifications over the 100 random ICs (expressed as a real number between 0 and 1). Let \(C_{avg}(ST)\) be the averaged quantified value for cell state change dynamics for these ICs. The proposed modified fitness denoted by \(F100_m(IC)\) is given in Eq. 3 below:
$$\begin{aligned}&F100_m(IC)=F100(IC)\nonumber \\&\quad + max \left[ 0, \sigma * \frac{F100(IC)+\alpha -1}{\alpha }* C_{avg}(ST)\right], \end{aligned}$$
where \(\sigma \) is a positive constant giving weight to the bonus and \(\alpha \) is a real number between 0 and 1. Based on some preliminary experimental results, the parameter values are set to \(\sigma = 3\) and \(\alpha = 0.55\) for the current study.

If the increased dynamics arises from random behaviour and not useful signals, the classification performance of a rule will still be weak according to the proposed \(F100_m\). Thus, the bonus rewards only good enough individuals in proportion with their quality.
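The modified fitness of Eq. 3 can be sketched as follows, with the paper's settings \(\sigma = 3\) and \(\alpha = 0.55\). The function name is illustrative, and the values of F100 and \(C_{avg}(ST)\) are assumed to have been measured beforehand over the 100 ICs.

```python
SIGMA = 3.0    # weight of the dynamics bonus
ALPHA = 0.55   # approximate default-to-block-expanding transition threshold

def f100_m(f100, c_avg, sigma=SIGMA, alpha=ALPHA):
    """Eq. 3: augment F100 with a dynamics bonus that is zero for rules with
    F100 <= 1 - alpha and grows linearly with F100 above that threshold."""
    bonus = sigma * (f100 + alpha - 1.0) / alpha * c_avg
    return f100 + max(0.0, bonus)
```

With these settings a default-strategy rule (F100 around 0.5 or below the 1 − α = 0.45 threshold) receives no bonus regardless of its dynamics, while a well-classifying rule with rich dynamics is rewarded in proportion to both measures.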

5 Computational experiments and results

Computational experiments are performed for DCT in CAs with number of cells \(N=149\) and radius \(r=3\). The objective of these experiments is twofold: (i) to evaluate the proposed \(C(ST)\) measure for known DCT rules, and (ii) to detect DCT rules using an evolutionary algorithm based on the proposed modified \(F100_m\) fitness function.

The first set of experiments aims to assess the performance of the proposed cell state change dynamics measure for existing well-known DCT rules. For this purpose, 50 block-expanding rules (randomly extracted from runs of the genetic algorithm reported in [15]) with performances between 0.56 and 0.62, as well as known highly fit particle rules reported in the literature (see Table 1), are studied. The space-time diagrams from 1000 ICs (generated based on an unbiased distribution) are used in connection with these rules to measure the value of \(C(ST)\) and an average is computed. The block-expanding rules exhibit low cell state change dynamics, with a maximal average value of only \(0.18\). For particle rules, this value varies between 0.4196 and 0.6706 as presented in the last column of Table 1. These results empirically confirm the introduced measure as a helpful discriminant between poor block-expanding and high-performing rules for the DCT.
Table 1

Various high performance rules for the DCT (in hexadecimal coding)

Rule (Hex)

Gacs et al. [5]

Andre et al. [1]

Juille and Pollack [8]

Juille and Pollack [9]

Wolz and Oliveira [18, 21]

In the second set of experiments, the proposed \(F100_m\) fitness is engaged in an evolutionary algorithm with the aim of detecting CA rules for the DCT. For this purpose, we modified the genetic algorithm reported in [15] (described in Sect. 2) by using the \(F100_m\) fitness. The population size is 1,000 individuals and the number of runs considered is 20. Table 2 presents the rules evolved from these experiments, their performance and cell state change dynamics. Across all runs, results show that the dominant block-expanding rules are replaced with relatively high fitness rules exhibiting enlarged transient portions in their space-time diagrams. However, in many cases, particle-like computational mechanisms are exhibited only when converging towards one class. Thus, for example, when converging towards all 1s, the rule employs some particle computing, but when converging towards all 0s, block-expanding strategies are used. Highly fit rules employ particles in all cases.
Table 2

The evolved rules in hexadecimal coding, their performance and measured cell state change dynamics


Rule (Hex)

The average rule performance obtained is 0.6981 with a standard deviation of 0.0698 (see Table 2). As a comparison, the average performance of block-expanding rules for DCT of size 149 is 0.6552, a value empirically obtained by Crutchfield and Mitchell [24]. The highest measured performance for block-expanding rules was 0.685. The best rules evolved by the proposed \(F100_m\) function have a performance of \(0.8081\) (rule number 13 from Table 2) and \(0.8041\) (rule number 20 from Table 2). Figure 5 depicts the space-time diagram for the latter rule, emphasizing a complex particle-like behaviour. Typically, the runs from our experiments produced a higher level of computational sophistication, above that of the block-expanding rules (see Fig. 6 for some relevant examples). For instance, the fourth evolved rule is dominated almost completely by a checkerboard-like pattern and has very high cell state change dynamics at 0.9736, while its performance is lower at 0.664 (see Fig. 6). The performance of detected rules is clearly improved compared to the results of evolutionary search guided by \(F100\) reported in [15] (where the best rule found has a performance of \(0.769\)). It should be emphasized that this improvement in the performance of the evolutionary search was obtained by simply modifying the \(F100\) fitness with a bonus based on cell state change dynamics. A significant further improvement is expected to be obtained by combining this approach with other heuristics in various search schemes (such as those presented in [18] where a rule performance of \(0.89\) is reported).
Fig. 5

Space-time diagrams of rule 00010317031F3D5D1D11FF1F1157FF5F with performance 0.8041 and cell state change dynamics 0.6901
Fig. 6

Space-time diagrams of rules 050507C104113A150083FD0BB7510FD7 (first two plots, rule performance 0.6604, \(C(ST)=0.9736\)) and 000331530113DF551D3DFD1F111FFF1F (last two plots, rule performance 0.7882 and \(C(ST)=0.7009\))

The current formulation of the fitness function clearly encourages cell state change dynamics: the average of this measure is 0.7721 with a standard deviation of 0.1407. This value is much higher than those typically exhibited by block-expanding rules, even above the average range of dynamics measured on high performance particle rules reported in the literature. Also, it should be noted that a very high value of quantified dynamics does not always correlate positively with high performance. This is explained by high dynamics associated with the propagation of checkerboard like patterns without convergence for some of the evolved rules. Nevertheless, all rules detected by the evolutionary search guided by \(F100_m\) show large transient portions in their space-time diagrams and overall the found rules exhibit an increased computational sophistication for the DCT. This clearly improves the overall performance of evolutionary algorithms in terms of detecting particle strategies for DCT. In some cases, the performance of the rules is not qualitatively improved. This happens when particle-like computational mechanisms are exhibited only when converging towards one class or propagated only in one direction. Indeed, cell state change dynamics is not a universal predictor for particle like computation, but along with other heuristic measures it has a high potential to bias the search towards the most promising regions.

6 Conclusions and future work

Encouraging the development of rules with increased cell state change dynamics has the potential to guide the evolutionary search away from trivial block-expanding rules. Without relying on any prefiltering, interesting complex rules were detected in all our experimental runs. This is in contrast with the low discovery percentage of complex rules by a basic evolutionary search. In order to detect particle based rules, a delicate balance between classification accuracy measures and cell state change dynamics must be maintained. Therefore, a multiobjective optimization framework can be suitable for the simultaneous maximization of both desiderata and would open the opportunity for easy inclusion of other supporting heuristics.

Future research focuses on developing computationally efficient pattern recognition methods and rewarding mechanisms, which coupled in a multiobjective optimization framework will hopefully facilitate the formation of transient regions exhibiting bilateral symmetry. Another direction worth exploring refers to a memetic search in the context of an evolutionary algorithm guided by the proposed modified fitness combined with a local search using the \(F100\) fitness function.


This research is supported by Grant PN II TE 320, “Emergence, auto-organization and evolution: New computational models in the study of complex systems”, funded by CNCS Romania. David Iclănzan acknowledges the financial support of the Sectoral Operational Program for Human Resources Development 2007–2013, co-financed by the European Social Fund, within the project POSDRU 89/1.5/S/60189 with the title “Postdoctoral Programs for Sustainable Development in a Knowledge Based Society” and the support of Sapientia Institute for Research Programs (KPI).

Copyright information

© Springer-Verlag Berlin Heidelberg 2012