Cellular automata rules solving the wireless sensor network coverage problem

The problem of an optimal coverage of a wireless sensor network area is considered. To solve this problem, a Cellular Automata (CA) approach is proposed. More specifically, the objective is to find CA rules which are able to cover the 2D space with a minimum number of so-called "Sensor Tiles". A sensor tile consists of a von Neumann neighborhood of range 2, centered at a sensor "point" and surrounded by 12 sensing "pixels". Two probabilistic CA rules were designed that can perform this task. Results of an experimental study show that the first rule very quickly evolves stable sub-optimal coverings, starting from a random configuration. The second rule finds optimal coverings; however, it needs much more time for their evolution. The results are supported by a theoretical study of von Neumann neighborhoods, borrowing either from heuristics or from the spectral theory of circulant graphs.


Introduction
Wireless sensor networks (WSNs) are one of the main components of currently developing Internet of Things technologies. They are used in many sectors of human activity, such as industry, agriculture, transport, environment protection or logistics, to monitor critical, very often remote and difficult to access areas, to discover events, or to collect data used to take decisions. A typical WSN consists of a number of sensors located in the monitored area, each equipped with a single-use battery. It is expected that some Quality of Service criterion, typically a predefined level of coverage of the monitored area, should be fulfilled. The coverage problem amounts to answering the question: when should the battery of a given sensor be turned on to make it active and able to monitor (cover) its part of the area, bearing in mind that this spends battery energy, while perhaps other neighboring sensors are turned on and partially cover the same part of the monitored area? This problem is closely related to another issue, WSN lifetime maximization: keeping the coverage at the requested level should be achieved by a minimal number of active sensors, in order to minimize the total energy consumption and this way prolong the lifetime of the WSN. With low battery consumption, the lifetime of a WSN can be maximized by switching between optimal configurations of active sensors. These issues are known to be NP-hard, and a number of centralized optimization algorithms, assuming full knowledge about the problem and offline execution, as well as distributed and localized algorithms, have been proposed to find a solution (Thai et al. 2008; Ab et al. 2009). In this paper we suggest a novel approach to solve the coverage problem by applying cellular automata (CA).
This approach belongs to the class of distributed algorithms and is based on concurrently using only local information, which makes it possible to apply it in real time, in online mode.
Our goal is to find a covering of the 2D space by so-called sensor tiles using CA. Our problem is one of the diverse covering problems (Snyder 2011) and is related to the NP-complete vertex cover problem introduced by Hakimi (1965). A vertex cover is a set of nodes in a graph such that every edge of the graph has at least one end point in the set. A minimum cover is a vertex cover with the smallest number of nodes for a given graph. Hakimi proposed a solution method based on Boolean functions; later, integer linear programming (Gomes et al. 2006), branch-and-bound, genetic algorithms, and local search (Richter et al. 2007) were used, among others. Other related problems are the Location Set Covering Problem (Church and ReVelle 1976) and the Central Facilities Location Problem (Mehrez 1987). These problems aim to find locations for P facilities that can be reached within a weighted distance from demand points, minimizing P, minimizing the average distance, or maximizing the coverage. Covering problems have many applications, in economy, urban planning, engineering, etc.
We assume that sensors are regularly located in the area to be covered, available at any discrete location of a superimposed grid. The question is how to turn them skillfully ON (active) or OFF (passive) to yield a sensor network with a minimum number of sensors, which we will call a min point pattern. As shown in Fig. 1a, each active sensor (here also called a point) senses a certain area within a circular range when its battery is ON. Several sensors shall cover the whole space. A sensor with its range will be approximated by a discrete area (tile) (Fig. 1b).
Here, the idea is to treat the coverage problem as a pattern formation problem, for which Parallel Substitution Algorithms (Achasova et al. 1994) also served as a source of inspiration. For the problem of forming a Domino Pattern we already obtained good results by using a probabilistic CA rule and overlapping tiles; in one study the number of dominoes was maximized, whereas in another it was minimized. In Hoffmann (2022); Hoffmann and Seredyński (2020) the current problem of finding an optimal coverage in a WSN was already treated. We want here to follow this general approach of evolving patterns by applying a probabilistic cellular automata rule. Compared to Hoffmann and Seredyński (2020), additional material and a more effective First Rule are presented.
The following section provides the state-of-the-art on coverage issues in wireless sensor networks. In Sect. 3 the sensor tiling problem is described and optimal solutions are presented. In Sect. 4 the theoretical context in which this study takes place is described. In Sect. 5 two probabilistic CA rules are designed and their performance is evaluated in Sect. 6 before the conclusion where future explorations are proposed.

Related work
The coverage problem in WSNs and the related lifetime maximization problem are subjects of intensive studies, the results of which can be found in the current literature (for an extensive overview see, e.g., Yetgin et al. (2017)). A number of centralized and distributed algorithms have been proposed recently. The covering problem is computationally expensive, and exact solutions can be found only for relatively small instances of the problem using classical optimization algorithms, such as linear programming (Cardei et al. 2005) or integer programming (Cheng et al. 2005). For realistic instance sizes one has to rely on some heuristic (see, e.g., Saadi et al. (2020)) or metaheuristic which can deliver approximate, near-optimal solutions.
Currently, one of the most popular approaches to solving the coverage problem is based on applying Nature-inspired metaheuristics, which belong to the class of centralized algorithms and require full information about the state of the system. Different versions of evolutionary algorithms were applied, in particular memetic algorithms (Liao and Ting 2018) and genetic algorithms (Charr et al. 2019; Manju et al. 2018). Another popular technique is particle swarm optimization. It was applied to solve the coverage problem considered either as a single-objective problem (Jia et al. 2012; Jiao et al. 2019; Jawad et al. 2020) or a multi-objective problem (He et al. 2019). Two other optimization techniques, harmony search (Alia and Al-Ajouri 2017) and ant colony optimization (Rathee et al. 2021), were also recently applied. A recent paper (Zhong et al. 2020) applies a novel hyper-heuristic approach, where, with the use of the evolutionary technique of genetic programming, a high-level heuristic to solve the problem is created on the basis of a set of low-level heuristics.

Fig. 1 (caption): (a) Sensors cover a certain area. (b) The circular range of a sensor is approximated by a discrete shape (in red).
During recent years, we can observe an increasing interest in designing distributed algorithms to solve the coverage / lifetime maximization problems in WSNs with use of learning automata (LA) or CA. These algorithms have a number of advantages in comparison with centralized algorithms. They consider a given problem as a distributed system consisting of a number of autonomous agents and focus on a description of local dependencies between agents and their actions to solve a problem collectively. Using this approach results in a more accurate description of a problem and may lead to better results. What is also important, these algorithms do not require full information about the state of the system, can react to current changes of the system state, and therefore can be applied in real-time mode.
LA are a class of reinforcement learning algorithms and can be directly used to solve learning and optimization problems. They were applied to solve different variants of the coverage / lifetime maximization problems in WSNs, in particular by Mostafaei and Meybodi (2013); Razi et al. (2017) or Gąsior et al. (2018). Classical CA are not learning machines, and applying them to solve optimization problems requires some effort. The study presented in Tretyakova et al. (2016) is promising and shows that there exists a strong relationship between coverage levels and WSN lifetime on the one hand, and specific CA rules describing the behavior of CA-based agents controlling the activation of node batteries on the other. One possible way to provide CA with learning and optimization capabilities is to convert them into Second Order CA, as proposed in Seredyński et al. (2021), and consider them as reinforcement learning machines operating in the Spatial Prisoner's Dilemma environment, or to combine the CA model with a spatial-temporal evolutionary process and learning automata theory, as proposed in Lin et al. (2018). Another way is to find appropriate CA rules to solve the considered problem. Our recent work shows the potential of such an approach, and in this paper we follow it. In this context it is also worth noting the work of Plénet et al. (2021), where a CA-based approach to the observability problem was defined in the context of an autonomous network of mobile sensors considered as a control theory system, oriented towards the ability to reconstruct the initial system state.
Optimal covering with sensor tiles

The problem and its CA modeling

Given an array of N = (n × n) cells, also called the field. We assume that each cell contains a sensor which is either active or passive. The objective is to find a CA rule that can form a Sensor Coverage Pattern with a minimum number of active sensors that cover the whole area. An active sensor can cover (sense) a certain number of cells in its neighborhood. We can relate an active sensor with its sensed cells to a sensor tile, as shown in Fig. 2a. A sensor tile is a discrete approximation of the real area sensed by an active sensor, as depicted in Fig. 1b. Note that sensor tiles are not automata cells; they are only used as a means to find a cell rule and to define the covering of the space. We call the elements of a tile "pixels" in order not to confuse them with the cells of the space. A sensor tile consists of one center pixel (the kernel, with pixel value 1, in blue) and 12 surrounding pixels (the hull, with value 0, in yellow). In short, a sensor tile is a certain von Neumann neighborhood of range 2.
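To make the tile definition concrete, the 13-pixel sensor tile can be built as the set of offsets at Manhattan distance at most 2. This is a minimal sketch (the dictionary layout is our own illustration, not the authors' implementation):

```python
# Sketch: build the "sensor tile" as a von Neumann neighborhood of range 2
# (Manhattan distance <= 2). Pixel values follow the text: 1 for the kernel
# (the sensor point), 0 for the 12 hull pixels.

def sensor_tile(r=2):
    """Return {(dx, dy): value} for the von Neumann r-neighborhood."""
    tile = {}
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            if abs(dx) + abs(dy) <= r:
                tile[(dx, dy)] = 1 if (dx, dy) == (0, 0) else 0
    return tile

tile = sensor_tile()
kernel = [p for p, v in tile.items() if v == 1]
hull = [p for p, v in tile.items() if v == 0]
print(len(tile), len(kernel), len(hull))  # 13 1 12
```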
Hull pixels of different tiles are allowed to overlap, but not with sensor points. The sensor points are said to have a mutual repulsive action.
We call the number of overlapping pixels at a certain site (x, y) the "overlap" or "cover level" v(x, y). Patterns with overlapping tiles are shown in Fig. 2b, c. The cover level is depicted here by numbers and colors. In the figures presented later, only numbers or colors will be used. Note that the cover level of an active sensor (in blue) is constant, v = 1.
The cell state is modeled as q = s for the First Rule (see Sect. 5.1) and as q = (s, h) for the Second Rule (Sect. 5.2). The state s ∈ {0, 1} models an inactive/active sensor, and all sensor states build the pattern (a sensor configuration). The hit number h ∈ {0, 1, 2, 3, 4, [−1]} stores the number of template hits (explained later in Sect. 5.2); the last symbol in brackets denotes the repulsive action of kernels. We assume cyclic border conditions in order to simplify the problem. Constant zero-boundaries of width 2 could also be used in order to keep the sensor points within certain borders. In the case of a fixed border, an appropriate number of hits has to be assumed for the border cells (e.g., h = 1), because the later described Second Rule reads hits from them.
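The cover level bookkeeping can be sketched as follows. The point placement used in the check, kernels at label (x + 5y) mod 13 == 0, is our assumption for the regular 13 × 13 min pattern, consistent with the generating set (1, 2r + 1) discussed in Sect. 4:

```python
# Sketch: cover level v(x, y) on an n x n field with cyclic borders, given a
# set of active sensor points. Each point contributes its 13-pixel range-2
# tile; overlapping tiles add up.

OFFSETS = [(dx, dy) for dx in range(-2, 3) for dy in range(-2, 3)
           if abs(dx) + abs(dy) <= 2]

def cover_levels(points, n):
    v = [[0] * n for _ in range(n)]
    for (x, y) in points:
        for dx, dy in OFFSETS:
            v[(x + dx) % n][(y + dy) % n] += 1
    return v

# Regular 13 x 13 min pattern (cf. Fig. 4b): kernels where (x + 5y) % 13 == 0.
n = 13
points = [(x, y) for x in range(n) for y in range(n) if (x + 5 * y) % n == 0]
v = cover_levels(points, n)
print(len(points), min(map(min, v)), max(map(max, v)))  # 13 1 1
```

The output confirms the "absolute" min pattern of Sect. 3: 13 points and cover level 1 everywhere, with no overlap at all.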

Optimal solutions
We call a coverage valid if the sensor tiles cover the whole space without gaps (uncovered cells). There are valid coverages/patterns with different numbers of active sensors, between a minimal and a maximal number (as can be seen later in Fig. 3). We call a valid coverage with a minimal number of active sensors a min sensor pattern (for short, min pattern), and a coverage with a maximal number a max sensor pattern (for short, max pattern). In this paper we are interested in min patterns, but max patterns will also be considered. Note that there exist many equivalent sensor patterns taking into account the symmetries: translation, rotation, and reflection. When we speak of a pattern, we mean any representative of the class of equivalent patterns.
Using the CA rules described later, valid sensor patterns covering the whole space were found. They are listed in Table 1 for different field sizes. This table presents the number L of sensor tiles (equal to the number of points p), the maximum overlap v_max in the set of solutions, and the density R(N) = p/N of sensors (point density). E.g., for N = (7 × 7) there are 5-tile patterns with (a) |v_max = 2| ≥ 1 (several sites have overlap 2), (b) |v_max = 3| = 1 (only one site has overlap 3), (c) |v_max = 4| = 4 (four sites have overlap 4). The minimal point density for this example is R_min(49) = 5/49 ≈ 0.102, and the maximal density is R_max(49) = 8/49 ≈ 0.163. Recall that we search for min point patterns with a minimal point density.
Some min and max sensor patterns are shown in Fig. 3. The following min and max patterns were found (Table 1):

• (3 × 3): There is only one solution.
• (4 × 4): There are two solutions, each with two points. The maximal overlap level is 3 (appearing twice) for the upper one (|v_max = 3| = 2, written v_max = 3(2)), and 4 for the lower one (v_max = 4(2)). There is no special min pattern.
• (5 × 5): A min pattern with 3 and a max pattern with 5 points exist, but no pattern with 4 points. Note that there exists one cell with cover level 3 in the min pattern, and there is no min pattern with v_max = 2 as we can find for n = 6-10.
• n = 6, 7, 8 and 9, 10, 11: There exist min-max patterns with 4-6, 5-8, 7-10 and 8-13, 8-20, 12-22 points.
• (10 × 10): It was difficult to discover the min 8-point pattern shown in Fig. 3 (top right). One can observe 4 cells there with cover level 2. They define a square of size (5 × 5) and are placed in the middle of the field; therefore this field size was used as a test scenario for the rules described in the following.
• (11 × 11): There exists a regular min pattern (Fig. 4a) with 11 points that is difficult to evolve by the later described CA rules.
Fig. 3 (caption): On the right, three solutions with 7 tiles with different v_max are shown. Active sensors are shown in black, inactive in white.

• (13 × 13): There exists a regular min pattern (Fig. 4b) with 13 points and cover level 1 everywhere. It can therefore be called an "absolute" min pattern, because there exists no overlap at all. Obviously, multiples of size 13 yield such a pattern, too. The point density reaches the absolute minimum of R_abs_min(13) = 1/13. For comparison, the absolute maximal point density is R_abs_max(5, 10, ...) = 1/5 for 5 × 5 max patterns with 5 points, and multiples thereof, e.g. for the 10 × 10 max pattern.

Tiling the plane with von Neumann neighborhoods of range r
In 2d cellular automata, the von Neumann neighborhood of range r at point (x0, y0) is the set of cell centers defined by

N_r(x0, y0) = {(x, y) : |x − x0| + |y − y0| ≤ r}   (r ∈ N)    (1)

at a Manhattan distance d((x, y), (x0, y0)) ≤ r. The cardinality of this neighborhood, hereafter referred to as the r-neighborhood, is the centered square number n_r = r^2 + (r + 1)^2, illustrated in the inset of Fig. 5 for the first usual values of r > 0 (the 0-neighborhood is the trivial point {(x0, y0)}). Von Neumann's 2d neighborhoods define a family of polyominoes, that is, "prototiles" which perfectly tessellate the plane. The labeling scheme is defined from a generating set (s1, s2)(r) and must satisfy that s1, s2 and n_r be pairwise coprime (Yebra et al. 1985). A usual choice is to set (s1, s2(r)) = (1, 2r + 1): the first (horizontal) direction is generated by the infinite sequence (0, 1, 2, ..., n_r − 1) of elements of Z/n_rZ, with 0 at the center of the prototile, whence the generation of the second (vertical) sequence is deduced. Thus, a cell c is adjacent to cells c ± 1 (mod n_r) along the first direction of generators, and to cells c ± (2r + 1) (mod n_r) along the second direction. This adjacency relation is formally depicted by the circulant graphs C_{n_r}^{s1,s2(r)} (namely C_5^{1,3} and C_13^{1,5}) of Fig. 6. They are Cayley graphs and therefore have the property of being vertex-transitive (Boesch and Tindell 1984; Monakhova et al. 2020).
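The labeling scheme can be sketched directly: with generators (s1, s2) = (1, 2r + 1), a natural choice (our assumption) is to give cell (x, y) the label (x + (2r + 1)·y) mod n_r, which reproduces the stated adjacency and the vertex-transitivity property:

```python
# Sketch of the labeling scheme: cell (x, y) gets label (x + (2r+1)*y) mod n_r,
# where n_r = r**2 + (r+1)**2 is the centered square number.

def n_r(r):
    return r * r + (r + 1) * (r + 1)

def label(x, y, r):
    return (x + (2 * r + 1) * y) % n_r(r)

r = 2                      # sensor tiles are range-2 neighborhoods
m = n_r(r)                 # 13
# horizontal neighbors differ by +-1, vertical neighbors by +-(2r+1):
assert label(4, 7, r) == (label(3, 7, r) + 1) % m
assert label(4, 8, r) == (label(4, 7, r) + 2 * r + 1) % m
# any m x m window contains exactly the m distinct labels (vertex-transitivity):
window = {label(x, y, r) for x in range(20, 20 + m) for y in range(31, 31 + m)}
print(sorted(window) == list(range(m)))  # True
```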
The adjacency matrices M_{n_r} associated with the (von Neumann) r-circulant C_{n_r}^{s1,s2(r)} have handsome spectral properties (Davis 1970). They are bisymmetric, and the simple knowledge of the coefficients c_1, c_{2r+1} in the first row (c_0, c_1, c_2, ..., c_{n_r−1}) suffices to determine the set of eigenvalues. Thereby c_1 = c_{2r+1} = 1 and

λ_{n_r,k} = 2 cos(2πk/n_r) + 2 cos(2π(2r + 1)k/n_r)    (2)

where λ_{n_r,k} denotes the k-th eigenvalue of matrix M_{n_r}. For a given r > 0 there exists a unique maximal eigenvalue λ_{n_r,0} = 4, which denotes a 4-regular graph, and there exist (n_r − 1)/4 equivalence classes with 4 equal eigenvalues each, such that

λ_{n_r,0} + Σ_{k=1}^{n_r−1} λ_{n_r,k} = tr(M_{n_r}) = 0

where tr(M_{n_r}) denotes the trace of M_{n_r}. Thereby from (2), and after simple trigonometric transformations, we get λ_{5,4} = λ_{5,3} = λ_{5,2} = λ_{5,1} for M_5, and λ_{13,12} = λ_{13,8} = λ_{13,5} = λ_{13,1}; λ_{13,11} = λ_{13,10} = λ_{13,3} = λ_{13,2}; λ_{13,9} = λ_{13,7} = λ_{13,6} = λ_{13,4} for M_13. In particular, the spectrum highlights the rotational symmetry of the prototiles in Fig. 5, and by comparing with Yebra et al. (1985) one can verify that the set of eigenvalues does not depend on the chosen labeling scheme. The vertex-transitivity implies that any window of size n_r × n_r embedded into this cellular space will contain exactly n_r distinct cells c ∈ Z/n_rZ, whatever the position of the window. Moving the window by a vector (a, b) turns into the automorphism τ_ab mapping Z/n_rZ into itself, as τ_ab(c) = c + a + (2r + 1)b (mod n_r) for any c ∈ Z/n_rZ. This therefore defines an n_r × n_r wraparound toroidal topology that provides the exact number n_r of required sensor points. For any multiple n = p · n_r this property remains true, that is, with p^2 · n_r sensor points.

Fig. 4 (caption): (a) 11 × 11 min pattern with 11 points. (b) 13 × 13 min pattern with 13 points.
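The spectral claims for M_13 are easy to check numerically with the standard circulant eigenvalue formula (2); this sketch verifies the maximal eigenvalue, the zero trace, and one of the three classes of four equal eigenvalues:

```python
import math

# Sketch: spectrum of a circulant graph C_n^{s1,s2}; the k-th eigenvalue of
# its circulant adjacency matrix is
#   lambda_k = 2*cos(2*pi*s1*k/n) + 2*cos(2*pi*s2*k/n).

def circulant_spectrum(n, s1, s2):
    return [2 * math.cos(2 * math.pi * s1 * k / n)
            + 2 * math.cos(2 * math.pi * s2 * k / n) for k in range(n)]

lam = circulant_spectrum(13, 1, 5)     # C_13^{1,5}, the sensor-tile circulant
print(round(lam[0], 6))                # 4.0 (unique maximal eigenvalue, 4-regular)
print(abs(sum(lam)) < 1e-9)            # True (eigenvalues sum to tr(M_13) = 0)
# one of the (13 - 1)/4 = 3 equivalence classes of four equal eigenvalues:
print(abs(lam[12] - lam[8]) < 1e-9 and abs(lam[8] - lam[5]) < 1e-9
      and abs(lam[5] - lam[1]) < 1e-9)  # True
```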

Min-coverage problem
For other sizes n × n further analysis should be carried out. The perfect tiling then gives way to patterns with overlapping prototiles. Let V_n = (v_ij), 1 ≤ i, j ≤ n, be the coverage matrix where element v_ij = v(x, y) denotes the cover level at site (x, y). Let r_n be the number of "0" labels in the n × n field (the number of "sensor points") for a given configuration. Then r_n also denotes the number of prototiles, and we consider the rational n^2/n_r surrounded by the two consecutive integers r_n − 1 and r_n such that

r_n − 1 < n^2/n_r ≤ r_n    (3)

and

Σ_{1≤i,j≤n} v_ij = n_r · r_n

where the sum over matrix V_n denotes the global cover index with r_n points. It follows that r_n defines a lower bound for the required number of points in the n × n field. However, in most cases this rough lower bound is not a tight bound, and the values of r_n displayed in Table 2 are often lower than those provided by the simulation. This theoretical inadequacy results from the fact that the toroidal constraint is not taken into account. A heuristic framework is presented, applied for any n (3 ≤ n < 13) and illustrated in Appendix A1. The four following primitives are stated thereafter. The result is the formation of valid min patterns for almost all values of n. The heuristic evolves more or less easily depending on whether or not there exists some "affinity" between n and n_2. The 4-fold rotational symmetry initiated at the beginning is conserved throughout evolution for n = 3, 7, 10 and is eventually achieved for n = 6. It can be checked that the relation n_2 · r*_n ≤ 4 n^2 holds for those cases.

Fig. 6 (caption): Circulant graphs with n_r vertices labeled (0, 1, 2, ..., n_r − 1) and the generating set {s1 = 1, s2(r) = 2r + 1}. Any vertex c is connected to c ± s1 and c ± s2(r) (mod n_r). Connectivity in C_5^{1,3} and C_13^{1,5} is the image of adjacency in the above tiling.
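The rough lower bound from (3) is a one-liner; the values below can be compared with the simulated minima of Sect. 3 (e.g. the bound gives 4 for n = 7 while the simulation needs 5), illustrating why it is usually not tight:

```python
import math

# Sketch: the rough lower bound (3) on the number of sensor points,
# r_n = ceil(n^2 / n_r), with n_r = 13 pixels per range-2 tile.

def lower_bound(n, n_r=13):
    return math.ceil(n * n / n_r)

print([lower_bound(n) for n in range(3, 13)])
# [1, 2, 2, 3, 4, 5, 7, 8, 10, 12]
```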
For n = 4 the case is somewhat trivial: at least 2 points are expected from (3); now, an n × n torus has diameter n for n even, and there exist r*_4 = 2 antipodal points at distance 4. The symmetry is broken for n = 5, 8, 9, but an optimal pattern is easy to achieve.
For n = 12 the symmetry holds until the penultimate step, that is, until a single point is added. Indeed, we can observe that n_2 · (r*_n − 1) ≤ 4 n^2 holds for this case.

Initialize (n, r_n(0))
  // Define an n × n window with 4-fold rotational symmetry:
  // For n odd, the center is the 0-kernel of a centered prototile, surrounded by a maximum of prototiles whose kernels are strictly inner to the n × n window
  // For n even (n > 4), the center is the [2, 3, 10, 11] 2 × 2 square, surrounded by a maximum of prototiles whose kernels are strictly inner to the n × n window; for n = 4 the pattern is empty
  r*_n = r_n(0)  // initial number of points

Set_PBC (n, {(i, j)}[, pbcl])
  // From the inner (i, j) kernel, fix the periodic boundary conditions (PBC) generating the four N-S and E-W outer images (i, j ± n) and (i ± n, j), as well as the four NW-SE-NE-SW outer images (i ± n, j ± n)
  // {(i, j)}: list of kernels
  // pbcl: number of points lost by the PBC sequence; the empty parameter is the default

Surprisingly, this heuristic evolves badly for n = 11. Although it could lead to a pattern with r'*_11 = 13 points without too much difficulty (it can be checked that the relation n_2 · r'*_n ≤ 4 n^2 holds), this pattern is far from optimal. The question arises whether this complexity could explain the excessive time required by the simulations for this particular value, as highlighted in the sequel, in Table 5.
Observing for n = 11 that n^2 = n(n_2 − 2) could suggest a possible tessellation of the n × n field by some n-prototile. Again, the adjacency matrices M_11 are bisymmetric, and the simple knowledge of the two first coefficients is needed. An exhaustive examination shows that there exist exactly Σ_{k=1}^{4} k = 10 circulants C_n^{s_l,s_m}, with 1 ≤ l < (n − 1)/2 and l < m ≤ (n − 1)/2 (n = 11), that split into two isospectral classes

C1 = {C_11^{1,2}, C_11^{1,5}, C_11^{2,4}, C_11^{3,4}, C_11^{3,5}},  C2 = {C_11^{1,3}, C_11^{1,4}, C_11^{2,3}, C_11^{2,5}, C_11^{4,5}}

and their circulant matrices are such that

λ_{n,k} = 2 cos(2π s_l k/n) + 2 cos(2π s_m k/n)

where λ_{n,k} denotes the k-th eigenvalue of matrix M_n^{l,m}. Dividing the circulants into the two classes C1, C2 results from simple trigonometric transformations. Each class C1 (resp. C2) has its own minimum (negative) eigenvalue λ_1 (resp. λ_2), and the criterion λ_2 < λ_1 yields the min valid patterns in C2. The whole set of circulants C_11 with their respective patterns is displayed in Appendix A2 (Figs. 24, 25).
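The split of the 10 circulants C_11^{l,m} into two isospectral classes can be recovered by grouping their (rounded) spectra; this sketch uses the circulant eigenvalue formula above:

```python
import math

# Sketch: group the 10 circulants C_11^{l,m} (1 <= l < m <= 5) by spectrum,
# recovering the two isospectral classes C1 and C2.

def spectrum(n, l, m):
    return tuple(sorted(round(2 * math.cos(2 * math.pi * l * k / n)
                              + 2 * math.cos(2 * math.pi * m * k / n), 6)
                        for k in range(n)))

classes = {}
for l in range(1, 5):
    for m in range(l + 1, 6):
        classes.setdefault(spectrum(11, l, m), []).append((l, m))

print(len(classes))                    # 2 isospectral classes
for members in classes.values():
    print(sorted(members))
```

Grouping by the sorted spectrum yields exactly the classes C1 = {(1,2), (1,5), (2,4), (3,4), (3,5)} and C2 = {(1,3), (1,4), (2,3), (2,5), (4,5)} stated in the text.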
Resulting either from the heuristic or from spectral theory, a set of min valid patterns (3 ≤ n < 13) is displayed in Fig. 7. Each cell is "decorated" with its von Neumann label from Fig. 5. The number of labels at site (x, y) is the cover level v_ij = v(x, y), with one color per eigenvalue in M_{n_2}. The patterns are consistent with the results of the simulations in Figs. 3-4. They perfectly match for n = 3, n = 6, n = 10 and now n = 11. The number r*_n of points appears in the last row of Table 2.

Equivalence between Min and Max coverage problems
We close this theoretical framework by pointing out a relation between our problems of minimization and maximization. Basically, our min problem could be associated with a physical-distancing configuration, while our max problem could be associated with a tightly coupled configuration; the objective function thus becomes the search for the optimal number r_n of points in both cases. For simplicity we consider the infinite plane and assert thereby that the pattern must be biperiodic. Obviously, for the max problem, trying a 4-fold coverage of the whole hull will prove to be impossible due to the repulsive action of the kernel. Therefore, the maximal suitable solution is a 3-fold coverage of the whole hull, as suggested in Fig. 3 and illustrated on the decorated pattern of Fig. 8. It follows that solving the min-problem in the 1-neighborhood amounts to solving the max-problem in the 2-neighborhood. By extension, we conjecture that there would exist a close relationship between solving the min-problem in an r-neighborhood and solving the max-problem in the (r + 1)-neighborhood, shortly: min_r ≅ max_{r+1}, with of course an additional condition for a finite field.

Table 2 (caption): Exact minimal number r*_n of "points" expected from the simulation model. A first estimation of the lower bound r_n is derived from (3). The quantity r_n(0) denotes the initial value in the heuristic. The sum of cover levels in the penultimate row equals the global cover index n_r · r*_n. All r*_n result from the heuristic except for n = 11, obtained from spectral theory.
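The equivalence min_1 ≅ max_2 can be checked numerically on the 5 × 5 torus. The point placement (x + 3y) mod 5 == 0 is our assumption for one representative of the perfect range-1 tiling (generating set (1, 2r + 1) with r = 1); the same 5 points should then solve the max-problem for range-2 tiles, with kernels at cover level 1 and the whole hull 3-fold covered:

```python
# Sketch checking min_1 = max_2 on the 5 x 5 torus.

def cover(points, n, r):
    v = [[0] * n for _ in range(n)]
    for (x, y) in points:
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if abs(dx) + abs(dy) <= r:
                    v[(x + dx) % n][(y + dy) % n] += 1
    return v

n = 5
pts = [(x, y) for x in range(n) for y in range(n) if (x + 3 * y) % n == 0]
v1 = cover(pts, n, 1)   # min-problem, range 1: cover level 1 everywhere
v2 = cover(pts, n, 2)   # max-problem, range 2: kernels 1, hull sites 3
print(all(v1[x][y] == 1 for x in range(n) for y in range(n)))          # True
print(all(v2[x][y] == (1 if (x, y) in pts else 3)
          for x in range(n) for y in range(n)))                        # True
```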
We refer back to the inequalities in (3) and assume that equality holds for n_r = n_1. On the other hand, let

Σ_{1≤i,j≤n} v_ij = n^2 for the min-problem, and Σ_{1≤i,j≤n} v_ij = 3 n^2 − 2 r_n for the max-problem,

that means (i) for the min-problem, all n × n sites take on value "1" regardless of their state, either of kernel or of hull; (ii) for the max-problem, all r_n kernel points take on value "1" while the remaining (n^2 − r_n) hull sites take on value "3". It follows that

n_2 · n^2 = n_1 (3 n^2 − 2 r_n)  ⇒  n^2 (3 n_1 − n_2) = 2 r_n · n_1  ⇒  r_n · n_1 = n^2

whence the equivalence min_1 ≅ max_2.

We have to keep in mind that we search for a CA rule that converges always, or with a high probability, to optimal or near-optimal patterns. From our previous work we have learned that it is very difficult or even impossible to design such a rule with Option 1, because we may have to avoid or dissolve conflicts, deadlocks, livelocks, and emerging oscillating, moving or clustering structures, as we know, e.g., from the Game of Life, in order to drive the pattern continuously to an optimum (not to get stuck in suboptimal solution areas).
The remaining options (2-4) are related because the computation of a new configuration is stochastic. It seems that they can be transformed into each other to a certain extent. Here we want to use Option 4 because we have gained good results in solving another problem in this way. Moreover, we need neither a clock for synchronization nor buffering of the configuration, which is closer to the modeling of natural processes. In contrast to that formerly solved problem, we address here another difficult problem where the number of tiles is minimized, not maximized.

The first rule
The idea is to modify the current configuration systematically such that valid patterns appear, and eventually a min pattern. To do this, the CA configuration is searched for tile parts (specific local patterns); if an almost correct tile part is found, it is corrected, otherwise some random noise is injected.
The tile parts are called templates A_i. They are systematically derived from the sensor tile (Fig. 9a). For each of the 13 tile pixels (so-called derivation pixels, marked in red) a template is defined by shifting the tile in such a way that the derivation pixel appears in the center. Note that many of these templates are similar under various symmetries: A_3, A_4, A_5 are rotations of A_2; A_7, A_8, A_9 are rotations of A_6; and A_11, A_12, A_13 are rotations of A_10.
We represent a template A_i as an array of (k × k) pixels, where k = 2m − 1 and (m × m) is the size of the tile, enlarged to a square box embedding it. Our tile is of size (5 × 5) including empty pixels, and the templates are larger because of the shifting, maximally of size (9 × 9). The pixels within a template are identified by relative coordinates (Δx, Δy). The center pixel at (Δx, Δy) = (0, 0) is called the "reference pixel". Each template pixel carries a value val(A_i, Δx, Δy) ∈ {0, 1, #}. The value of the reference pixel is called the "reference value", refval(A_i) = val(A_i, 0, 0) ∈ {0, 1}, which is equal to the value of the derivation pixel. The symbol # represents "Don't Care", meaning that a pixel with such a value is not used for matching (or, in another interpretation, does not exist (empty pixel)). Pixels with a value 0 or 1 are valid pixels; their values are equal to the values derived from the original tile. Some templates can be embedded into arrays smaller than (k × k) when they have Don't Care symbols at their borders.
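The template derivation above can be sketched compactly. Instead of (9 × 9) arrays we use sparse dictionaries here (our own representation choice): any offset absent from a template's dictionary plays the role of a Don't Care pixel:

```python
# Sketch: derive the 13 templates A_i from the sensor tile by shifting it so
# that each derivation pixel lands at the template center (0, 0). Offsets not
# present in a template dictionary act as '#' (Don't Care).

TILE = {(dx, dy): 1 if (dx, dy) == (0, 0) else 0
        for dx in range(-2, 3) for dy in range(-2, 3)
        if abs(dx) + abs(dy) <= 2}

def derive_templates(tile):
    templates = []
    for (px, py) in tile:                  # each derivation pixel in turn
        shifted = {(x - px, y - py): v for (x, y), v in tile.items()}
        templates.append(shifted)
    return templates

templates = derive_templates(TILE)
print(len(templates))                              # 13
refvals = [t[(0, 0)] for t in templates]           # reference values
print(refvals.count(1), refvals.count(0))          # 1 12
```

As stated in the text, exactly one template (the one derived from the kernel) has reference value 1; the other 12 have reference value 0.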
We also need to define the term "neighborhood template", used later in the matching procedure. The neighborhood template A*_i is the template A_i in which the reference value is set to #, in order to exclude the reference pixel from the matching process. The cell processing scheme is:

• At time-step t a new configuration is formed by updating the N cells in a random order. For each time-step a new random permutation is used. The new configuration is complete after N cell updates (each cell is updated once during this period) and defines the next configuration at time-step t + 1.
• The rule is applied asynchronously. The new cell state s' = f(s, B*) is computed and immediately updated without buffering. B* denotes the states of the neighbors within a local window, where the center cell s(x, y) is excluded from matching.
The First Rule is the following: the neighborhood templates A*_i are tested against the corresponding CA cell neighbors B*(x, y) in the current (5 × 5)-window at position (x, y). Thereby the marked reference position (Δx, Δy) = (0, 0) of a neighborhood template is aligned with the center of the window. Note that we use for testing a window of size (5 × 5), which is smaller than the full size (9 × 9) of the neighborhood templates. Therefore, some valid pixels outside the (5 × 5)-window are not tested (e.g., the bottom 4 yellow pixels of A*_10 in Fig. 9b). The implementation with these incomplete neighborhood templates worked very well, but further investigation is necessary to prove to what extent they can be incomplete.
If all values of a neighborhood template A*_i match B*(x, y), then we register a hit that is stored only temporarily. There can be several hits, equal to the number of matching templates. The number of hits approximates the cover level and is equal to it when the pattern becomes stable. If we have at least one hit (Rule part (b)), the sensor state of the current cell s(x, y) is set to the reference value refval(A_i); we thereby create or validate a correct tile part in the current (5 × 5)-window. Otherwise we could not find or adjust a correct tile and the local pattern is noisy. Then we inject additional noise (Rule parts (c), (d)) at the current cell position (x, y), in the expectation of forming a valid tile.
If the current state is 0 we change it to 1 with probability p0, and if it is 1 we change it to 0 with probability 1 − p0. The idea behind this is to inject "asymmetric" noise in order to force the evolution towards more white or more black cells. Using a low probability p0 means that white cells mainly stay white, whereas black cells (sensor points) are mainly forced to become white and disappear. Note that the cell's state may remain unchanged as default (Rule part (a)) if none of the conditions in (b), (c), (d) triggers an update. Min patterns of field size (13 × 13), or multiples thereof, have the lowest point density, 1/13 ≈ 0.077. So we may choose this density for p0 in order to (hopefully) drive the evolution to min patterns.
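One asynchronous time-step of the First Rule can be sketched as follows. The helper `match_hits(field, x, y)`, assumed to return the number of matching neighborhood templates together with their common reference value, is a hypothetical stand-in for the full template-matching machinery described above:

```python
import random

# Sketch of one asynchronous time-step of the First Rule. `match_hits` is a
# hypothetical placeholder: it should return (hits, refval) for the 5x5
# window at (x, y), i.e. the number of matching templates A*_i and their
# common reference value.

P0 = 0.01   # asymmetric noise probability chosen in the experiments

def step(field, n, match_hits):
    order = list(range(n * n))
    random.shuffle(order)                 # new random permutation each step
    for idx in order:                     # update each cell once, in place
        x, y = divmod(idx, n)
        hits, refval = match_hits(field, x, y)
        if hits >= 1:
            field[x][y] = refval          # part (b): create/validate tile part
        elif field[x][y] == 0:
            if random.random() < P0:      # part (c): 0 -> 1 with prob p0
                field[x][y] = 1
        else:
            if random.random() < 1 - P0:  # part (d): 1 -> 0 with prob 1-p0
                field[x][y] = 0
        # part (a): otherwise the state remains unchanged
    return field
```

Because updates are applied immediately and in place, later cells in the permutation already see the new states of earlier ones, matching the buffer-free asynchronous scheme of the text.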
There can be no conflicts, because the reference value is the same (uniquely derived from the tile) if there are several hits. (Examples: if A*_1 matches, there is one hit only, and the reference value is 1. If A*_10, A*_11, A*_12, A*_13 match, we get 4 hits with reference value 0.) As no conflicts can arise, the sequence of testing the templates does not matter, and one could skip further tests after a first hit.
It is important to note that this rule obeys the criterion of stability: a valid pattern without gaps (uncovered cells) is stable because we have matching hits at every site. Otherwise, some asymmetric random noise is injected in order to drive the evolution toward the desired pattern.
All patterns cover the space as required. Most often the patterns contained 14 points. The average number of points (14.96 → 13.92) decreases with the probability p_0. The reason is that the probability 1 − p_0 for injecting zeroes is then higher (favoring low cover levels) than the probability p_0 for injecting ones (favoring points). No min pattern with 8 points, and not even a pattern with 9 points, was found during 10 000 runs. A few near-min patterns with 10 points evolved for p_0 = 0.09–0.01. Max patterns were found only for the high probabilities p_0 = 0.5 and 0.2. We can conclude that min sensor patterns are very rare in the whole set of valid patterns covering the space. We have chosen the probability p_0 = 0.01 for the following work because the average number of points is lower than for p_0 = 1/13, the value that we first expected to give the best results.
We can conclude that we have found a CA rule that can evolve valid sensor point patterns, but unfortunately the number of active sensors is not necessarily minimal. What is the reason? The rule evolves patterns that fulfill one of two conditions for each cell at (x, y): either s(x, y) = 1 (the cell is a sensor point), or s(x, y) = 0 ∧ ∃ (Δx, Δy) : s(x + Δx, y + Δy) = 1 (the cell is covered by at least one sensor point). These two conditions are only necessary conditions for valid coverages, but not sufficient to define min patterns. It seems to be quite difficult to find local logical conditions that ensure a global minimum of points, except for special cases like n = 13, where there is only one optimal solution with cover level v = 1 everywhere (Fig. 4b). For that special case the second condition can be stated more strictly (''... exists exactly one ...''): s(x, y) = 0 ∧ ∃! (Δx, Δy) : s(x + Δx, y + Δy) = 1. Now we need to improve our CA rule so that it evolves min patterns with a high probability.

The second, improved rule
The purpose of this enhancement is to improve the rule in such a way that the number of points reaches a minimum. Whereas the first rule works with the state q = s only, now the state is extended by the number of hits h, so the full state q = (s, h) is used. Now all neighborhood templates are tested and all hits are stored for every site (x, y). The number of hits h(x, y) is:

• 0 : no neighborhood template matches or there is a gap.
• 1 : one neighborhood template matches where the reference value is zero (yellow colored).
• 2–4 : h neighborhood templates match with reference value zero, meaning that 2–4 tiles (yellow hull pixels) are overlapping.
• [−1] : the neighborhood template A*_1 matches where the reference value is 1 (blue). Recall that blue pixels are not allowed to overlap. The symbol in brackets denotes the repulsive action of kernels.
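The hit counting described above can be sketched as follows. The representation is an assumption: the concrete template set A*_1..A*_13 is not reproduced here, so the sketch takes the templates as (template, refval) pairs together with a matching predicate.

```python
def hit_number(template_set, grid, x, y, n, matches):
    """Compute h(x, y) as described above (a sketch, not the paper's code).

    `template_set` is a list of (template, refval) pairs; `matches` is a
    predicate testing one template at (x, y). Each match with reference
    value 0 contributes +1 (overlapping yellow hulls give h = 2..4); a match
    of the kernel template with reference value 1 is recorded as -1.
    """
    h = 0
    for template, refval in template_set:
        if matches(template, grid, x, y, n):
            if refval == 1:
                return -1  # repulsive kernel hit: no other hit can coexist
            h += 1
    return h
```

Since the reference values of simultaneous hits are unique, the early return on the kernel template cannot discard other hits.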
The hit number h(x, y) holds the actual value after matching with all the neighborhood templates. Because of the random sequential updating scheme, the h-values in the (x, y)-neighborhood may not be up to date and can carry old values from the former configuration at time-step t − 1. Nevertheless, the h-values correspond mainly to the cover levels v, especially when the pattern becomes more stable. This inaccuracy introduces some additional small noise, which can even speed up the evolution. And when the pattern becomes stable, the hit number equals the cover level: ∀(x, y) : h(x, y) = v(x, y).

Table 3: 10 000 runs were performed on (10 × 10) fields with the First Rule for different probabilities p_0. The pattern frequency [1/1000] (the number of evolved patterns with a certain number of points) is given per number of points, together with the average number of points p_average and the average number of time-steps t_average. The patterns evolve very quickly and remain stable. In order to evolve patterns with a few points, the probability p_0 should be kept low.
The idea is to minimize the overlap between tiles by destroying cell states with a high overlap level (h > 1) through noise, allowing a reordering with a lower number of points. In order to find a rule, we need to study the min point patterns with respect to their overlap values and local situations. From Table 1 and Fig. 3 we can see that min patterns contain some cells with a max overlap v_max = 2, 3. (There is a special case with n = 13, or multiples of 13, where there exists a pattern with v_max = 1 that will not be taken into consideration here.) First the new state s'(x, y) is computed according to the First Rule, and additionally the number of all hits h(x, y) is computed and stored. Then the new state is modified to s''(x, y):

s''(x, y) =
  s'(x, y)                               (default)
  random ∈ {0, 1} with probability p_4   if h(x, y) = 4
  random ∈ {0, 1} with probability p_3   if C_1 or C_2 or C_3

where C_1 = (hits3x3(x, y) > 14), C_2 = (hits3x3(x, y) > 13) ∧ (Active3x3(x, y) > 0), and C_3 = (hits3x3(x, y) = 12) ∧ (Active3x3(x, y) = 0) ∧ (h(x, y) = 3).
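The modification step can be sketched as below. The helper sums follow the textual descriptions of hits3x3 and Active3x3; summing over the full (3 × 3) window for Active3x3 and the toroidal indexing are assumptions.

```python
import random

def second_rule_modify(s1, h, x, y, n, p4=0.1, p3=0.9):
    """Modify the First-Rule result s'(x, y) into s''(x, y) (a sketch).

    `s1` is the grid of states s' after the First Rule step, `h` the grid
    of hit numbers; both are n x n with cyclic boundaries (an assumption).
    """
    offs = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    # hits3x3: hits of inactive cells; center and active sensor cells discarded
    hits3x3 = sum(h[(y + dy) % n][(x + dx) % n] for dx, dy in offs
                  if (dx, dy) != (0, 0) and s1[(y + dy) % n][(x + dx) % n] == 0)
    # Active3x3: number of active cells in the (3 x 3) window
    active3x3 = sum(s1[(y + dy) % n][(x + dx) % n] for dx, dy in offs)

    c1 = hits3x3 > 14
    c2 = hits3x3 > 13 and active3x3 > 0
    c3 = hits3x3 == 12 and active3x3 == 0 and h[y][x] == 3

    if h[y][x] == 4 and random.random() < p4:
        return random.randint(0, 1)
    if (c1 or c2 or c3) and random.random() < p3:
        return random.randint(0, 1)
    return s1[y][x]  # default: keep s'(x, y)
```

The two noise branches correspond to the two non-default cases of the definition of s''(x, y) above.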
The conditions C_1..C_3 add additional noise in order to drive the evolution toward the optimum when the local hit density is above a certain level. It was quite difficult to find these conditions through many trial-and-error simulations, taking into account the local patterns in (3 × 3)-windows of valid optimal and near-optimal solutions. It would be interesting to find better conditions through further research. The ultimate goal is to find a rule that always drives to a stable optimal solution, without excluding any solution from the set of all possible solutions.
The function hits3x3(x, y) computes the sum of the hits of inactive cells in a local (3 × 3)-window with its center at (x, y), where active sensor cells and the center are discarded. The function Active3x3(x, y) computes the sum of active cells in a (3 × 3)-window. Now, for this improved rule, it is not clear whether the stability criterion is still fulfilled, because of the additional noise. In fact, it turned out that the reached min patterns are often stable, although some non-min patterns can be stable, too. Extensive simulations showed that noise injection under these additional conditions drives non-min patterns toward min patterns. Unfortunately, at the moment we cannot show that the evolution always ends up with a stable min pattern, because we can prove neither (a) that all reached valid non-min patterns are transient (meaning that further noise will still be injected), nor (b) that all reached min patterns are stable (meaning that noise injection is always stopped).
A deeper analysis is a subject of further research. It remains an open question whether a local CA rule can be found that always drives the evolution to a min point pattern, and preferably to any of all possible min patterns, without excluding solutions with a certain max cover level or certain local sensor arrangements.
During a simulation, the number of complete tiles / points L increases, decreases and fluctuates, and in the end the evolution often drives toward a valid stable pattern, which often is a min pattern. Many experiments showed that optimal min patterns can successfully be found with the Second Rule if (a) the maximal number of time-steps T_Limit is chosen large enough and/or (b) several runs with random initial states are performed.
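The run protocol just described (random initial state, random sequential updating, a time-step budget T_Limit, stop on stability) can be sketched as a skeleton; the function names and the stability test by "no cell changed in a full time-step" are assumptions, with the actual rule left as a pluggable callback.

```python
import random

def run(rule_step, n, t_limit, seed=None):
    """Skeleton of one simulation run (structure and names are assumptions).

    Starts from a random configuration s in {0, 1} and performs up to
    t_limit time-steps of random sequential updating: in each time-step all
    N = n * n cells are updated once, in random order. `rule_step` updates
    one cell in place and returns True if it changed the grid; a time-step
    without any change means the pattern has become stable.
    """
    rng = random.Random(seed)
    grid = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
    cells = [(x, y) for x in range(n) for y in range(n)]
    for t in range(t_limit):
        rng.shuffle(cells)  # random sequential updating
        # list comprehension (not a generator) so every cell is updated
        changed = any([rule_step(grid, x, y, n) for (x, y) in cells])
        if not changed:
            return grid, t      # stable pattern reached
    return grid, t_limit        # computing budget exhausted
```

Several such runs with different seeds correspond to option (b) above.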
6 Simulation and performance evaluation

6.1 Performance for field size (10 × 10)

The improved rule was tested 10 000 times on (10 × 10) fields with random initial states (s ∈ {0, 1}), for T_Limit = 1 000 time-steps, with p_4 = 0.1, p_3 = 0.9, and p_0 = 0.01 (yielding the best results). For each run, several parameters were recorded, such as the time-stamp for reaching the greatest or smallest number of points in valid patterns. The pattern frequency (the number of evolved patterns with a certain number of points) is given in Table 4. The average number of points p_average and the average number of time-steps t_average needed are also presented. In order to evolve patterns with a few points, the probability p_0 should be kept low. Now we were able to evolve min patterns with 8 points for a high T_Limit. Most of the evolved patterns are close to the optimum and lie between 9 and 12 points for a small computational budget of T_Limit = 800. For a high budget of T_Limit = 102 400, the average number of points is only p_average = 9.58, with t_average = 11 237. Compared to the First Rule, the Second Rule (together with the First Rule) performs significantly better, and the probability of reaching an optimal min pattern is high. Fig. 11 shows the evolution of a stable min pattern with 8 sensors. During the evolution, other valid transient patterns with a different number of complete tiles L appear. The transient patterns between the shown time-steps are not valid; they usually show some tiles but are partially noisy.

Performance for other field sizes
The Improved Rule was also tested on other field sizes, with different numbers of runs and time limits (Table 5). For sizes up to (8 × 8), all runs yielded optimal min patterns. For fields larger than (8 × 8), min patterns were found among others.
In order to assess the time complexity, we define the computing effort per cell needed to evolve d% × R min patterns during R runs within time T_d%(N) as E(T_d%, N) = T_d%(N)/N, where the maximal needed time T_d%(N) was extracted from the simulation data. If E(N) = const., then the needed time would be in O(N) to reach d% min patterns on average over R runs. We have chosen d = 3 because this was the lowest rate of found min patterns, reached for n = 10 and n = 11 (Table 5). In our experiments, this effort increases exponentially with N, as shown in Fig. 12. Therefore, it is costly to compute optimal solutions for large N. But as the CA model is inherently parallel with respect to N, the computation time can be reduced significantly on a parallel computer. For large N the algorithm is still applicable, though we need to terminate it, due to a restricted computing budget, once a near-optimal solution has been found. In order to reduce the computational effort in principle, one could try to find a more sophisticated rule or to follow a divide-and-conquer approach.

Conclusion
In this paper, the problem of an optimal coverage of a wireless sensor network area was considered and solved by means of probabilistic Cellular Automata (CA). Two CA rules were designed that can find non-optimal and optimal min sensor patterns. The first rule evolves very fast stable valid patterns, with a number of points lying between the minimum and the maximum. The design principle behind it is methodical and based on a set of templates derived from all pixels of the sensor tile. The second rule was designed especially to find min patterns, and it can do so, although the time to evolve an optimal min pattern can exceed the available processing capabilities. Moreover, regarding the r-von Neumann neighborhoods that serve as templates, it has been shown that there is a close relationship between min and max problems, depending only on their objective function. In addition, it has already been shown elsewhere that the core of the CA transition rule changes only slightly, whether it is a min problem or a max problem. Regarding the required minimal number of sensor points, the results of the simulation (in Table 4) have been supported by a theoretical study (in Table 1) on von Neumann neighborhoods, borrowing either from heuristics (for almost all values of n) or from the spectral theory of circulant graphs (for n = 11). For this particular size, the question arises whether this complexity could explain the excessive time required by the simulations (in Table 4) or the atypical effort highlighted in Fig. 12.
The ''artificial'' intelligence of this model, based on the power of its template-based system, therefore has its counterpart, namely its limited processing power for handling large-scale computing fields, a very time-consuming process. As already mentioned in other words, this weakness makes it a good candidate for implementation in parallel processing environments. In further work, the possible sensor locations could be restricted, the charge of the batteries could be taken into account, or this approach could be related to the vertex cover problem in order to compare time complexities.
Finally, as introduced through the first figure of this paper, in order to approximate the circular sensing area better, a hexagonal lattice could be more favorable. Two families would then be under study: either a still circulant topology based on Eisenstein-Jacobi networks (Huber 1994; Martínez et al. 2008) or a fractal topology based on the Sierpiński arrowhead figure (Sierpiński 1916; Désérable 1999).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.