# Robust and adaptive diffusion-based classification in distributed networks


## Abstract

Distributed adaptive signal processing and communication networking are rapidly advancing research areas which enable new and powerful signal processing tasks, e.g., distributed speech enhancement in adverse environments. An emerging new paradigm is that of multiple devices cooperating in multiple tasks (MDMT). This is different from the classical wireless sensor network (WSN) setup, in which multiple devices perform one single joint task. A crucial first step in order to achieve a benefit, e.g., a better node-specific audio signal enhancement, is the common unique labeling of all relevant sources that are observed by the network. This challenging research question can be addressed by designing adaptive data clustering and classification rules based on a set of noisy unlabeled sensor observations. In this paper, two robust and adaptive distributed hybrid classification algorithms are introduced. They consist of a local clustering phase that uses a small part of the data with a subsequent, fully distributed on-line classification phase. The classification is performed by means of distance-based similarity measures. In order to deal with the presence of outliers, the distances are estimated robustly. An extensive simulation-based performance analysis is provided for the proposed algorithms. The distributed hybrid classification approaches are compared to a benchmark algorithm, with error rates evaluated as a function of different WSN parameters. Communication cost and computation time are compared for all algorithms under test. Since both proposed approaches use robust estimators, they are, to a certain degree, insensitive to outliers. Furthermore, they are designed such that they are applicable to on-line classification problems.

## Keywords

Adaptive distributed classification, Clustering, Labeling, Robust, Outlier, Multi-device multi-task (MDMT)

## 1 Introduction

Recent advances in distributed adaptive signal processing and communication networking are currently enabling novel paradigms for signal and parameter estimation. Based on the principles of adaptive filtering theory [1], a network of devices with node-specific interests adaptively optimizes its behavior, e.g., to jointly solve a decentralized least mean squares problem [2, 3, 4, 5, 6]. Under this new paradigm, multiple devices cooperate in multiple tasks (MDMT). This is different from the classical wireless sensor network setup, in which multiple devices perform one single joint task [2].

The MDMT paradigm can be beneficial, e.g., for speech enhancement in adverse environments [7]. Consider, for example, distributed audio signal enhancement in a public area, such as an airport, a train-station, etc. By cooperating with each other, various devices (e.g., smart-phones, hearing aids, tablets) benefit in enhancing their node-specific audio source of interest, given a received mixture of interfering sound sources [2, 8], e.g., by suppressing noise and interfering sound sources that are not of interest to the user.

Note that in such scenarios, the devices must operate under stringent power and communication constraints and the transmission of observations to a fusion center (FC) is, in many cases, infeasible or undesired. A crucial first step in order to achieve a benefit, e.g., a better node-specific audio signal enhancement, is the common *unique labeling* of all relevant speech sources that are observed by the network [8]. Also in other MDMT signal-enhancement tasks, such as image enhancement, it is of practical importance to answer the question: who observes what? [9].

This challenging research question can be tackled by designing adaptive data clustering and classification rules where each sensor collects a set of unlabeled observations that are drawn from a known number of classes. In particular, object or speaker labeling can be solved by in-network adaptive classification algorithms where a minimum amount of information is exchanged among single-hop neighbors. Various methods have been proposed that deal with distributed data clustering and classification, e.g., [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. In the last few years, several distributed adaptive strategies, such as incremental, consensus, and diffusion least mean squares algorithms have been developed [25]. In [17], a distributed K-Means (DKM) algorithm that uses the consensus strategy was proposed.

**Contributions:** Two robust in-network distributed classification algorithms, i.e., the RDiff K-Med and the CE RDiff K-Med, are proposed. It is shown that the performance of the first algorithm can be approached by the second algorithm with a considerably lower between-sensor communication cost. Unlike the DKM, which serves as a benchmark, the proposed algorithms are adaptive instead of working with a batch of data. They are thus applicable to real-time classification problems. Furthermore, they are robust against outliers in the feature vectors and can handle non-stationary features. An extensive simulation-based performance analysis is provided that investigates the error rates as a function of different WSN parameters and also considers communication cost and computation time.

**Organization:** Section 2 provides the problem formulation, Section 3 provides a brief introduction to the topic of robust estimation of class centroid and covariance. Section 4 is dedicated to the proposal and description of two robust diffusion-based classification algorithms, while Section 5 provides an extensive Monte-Carlo simulation study. Section 6 concludes the paper and provides future research directions.

## 2 Problem formulation and data model

Consider a network of *J* nodes distributed over some geographic region (see Fig. 2). Two nodes are connected if they are able to communicate directly with each other. The set of nodes connected to node \(j \in \{1,\ldots,J\}=:\mathcal {J}\) is called the neighborhood of node *j* and is denoted by \(\mathcal {B}_{j} \subseteq \mathcal {J}\). The communication links between the nodes are symmetric, and a node is always connected to itself. The number of nodes connected to node *j* is called the degree of node *j* and is denoted by \(| \mathcal {B}_{j} |\).

Each observation belongs to one of the classes \(k \in \{1,\ldots,K\}\), with *k* denoting the label of the given class. The total number of classes *K* is assumed to be known, or estimated a priori. Each class is described by a number of application-dependent descriptive statistics (features). The feature estimation process is an application-specific research area of its own (see, e.g., [8, 9]) and is not considered in this article, where we seek generic adaptive robust clustering and classification methods. In the following, it is assumed that the feature extraction has already been performed, so that the uncertainty of the feature estimation within each class can be modeled by a probability distribution, e.g., the Gaussian. Further, we account for estimation errors in the feature extraction process that we consider as outliers, thus arriving at the following observation model for feature vectors at time instant *n*, *n*=1,…,*N*:

$$ \mathbf{d}_{jkn} = \mathbf{w}_{k} + \mathbf{e}_{jkn} + \mathbf{o}_{jn}, \qquad(1) $$

where **w**_{ k } denotes the class centroid, **e**_{ jkn } represents the class-specific uncertainty term with covariance matrix **Σ**_{ jk }, and **o**_{ jn } denotes the outlier term, which models disturbances of an unspecified density.

**e**_{ jkn } is assumed to be zero mean and temporally and spatially independent, i.e.,

$$ \mathrm{E}\!\left[\mathbf{e}_{jkn}\mathbf{e}_{lkm}^{T}\right] = \boldsymbol{\Sigma}_{jk}\,\delta_{jl}\,\delta_{nm}, \qquad(2) $$

with *j,l*=1,…,*J*, *n,m*=1,…,*N* and *δ* denoting the Kronecker delta function. For reasons of clarity, we drop the index *k* in the observation vectors and refer to them as **d**_{ jn }.

The aim of this paper is thus to enable every node *j* to assign each observation to a cluster *k* based on an estimated feature **d** _{ jn }. The classification/labeling should be real-time capable so that a new observation can be assigned on-line without the necessity of all recorded observations being available. Furthermore, outliers in Eq. (1) should not have a huge effect on the labeling performance. This will be achieved by using robust techniques to estimate the class centroids and covariances, as well as robust distance measures, as described in the next section.

## 3 Robust estimation of class centroid and covariance

The presence of even a small number of outliers in a data set can have a high impact on classical estimators such as the sample mean vector and the sample covariance matrix. Though these estimators are optimal under the Gaussian noise assumption, they are extremely sensitive to uncharacteristic observations in the data [26]. To address this, robust estimators have been developed which are, to a certain degree, resistant towards outliers in the data.

In the following, a short overview of the concept of M-estimation for the multivariate case is presented, as required by our methods. For a more detailed treatment of the fundamental concepts, see, e.g., [26, 28].

The hybrid classification approach developed in this paper involves estimating the mean and covariance for vector-valued data **d** _{ jn }=(*d* _{1j n },*d* _{2j n },…,*d* _{ qjn })^{ T } \(\in \mathbb{R}^{q}\), where *q* is the dimension of the feature space.

In the univariate case, it is possible to define the robust estimates of location and dispersion separately. In the multivariate case, in order to obtain equivariant estimates, it is advantageous to estimate location and dispersion simultaneously [28].

Let the feature vectors within a class follow an elliptically contoured density of the form

$$ f_{\mathbf{D}}(\mathbf{d};\mathbf{w},\boldsymbol{\Sigma}) = |\boldsymbol{\Sigma}|^{-1/2}\, h_{X}\!\left(g_{D}(\mathbf{d};\mathbf{w},\boldsymbol{\Sigma})\right), \qquad(3) $$

where ∣**Σ**∣ denotes the determinant of **Σ**, *h* _{ X }(*x*)=*c* exp(−*x*/2) with *c*=(2*π*)^{−q/2} corresponds to the Gaussian case, and

$$ g_{D}(\mathbf{d};\mathbf{w},\boldsymbol{\Sigma}) = (\mathbf{d}-\mathbf{w})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{d}-\mathbf{w}). $$

Let **d**_{ j1 },…,**d**_{ jN } be an i.i.d. sample from a density of the form (3). M-estimates of the cluster centroids and covariance matrices are defined as solutions of the general system of equations

$$ \frac{1}{N}\sum_{n=1}^{N} \phi_{1}\!\left(g_{D}(\mathbf{d}_{jn};\hat{\mathbf{w}},\hat{\boldsymbol{\Sigma}})\right)\left(\mathbf{d}_{jn}-\hat{\mathbf{w}}\right) = \mathbf{0}, \qquad(4) $$

$$ \frac{1}{N}\sum_{n=1}^{N} \phi_{2}\!\left(g_{D}(\mathbf{d}_{jn};\hat{\mathbf{w}},\hat{\boldsymbol{\Sigma}})\right)\left(\mathbf{d}_{jn}-\hat{\mathbf{w}}\right)\left(\mathbf{d}_{jn}-\hat{\mathbf{w}}\right)^{T} = \hat{\boldsymbol{\Sigma}}, \qquad(5) $$

where the functions *ϕ* _{1} and *ϕ* _{2} may be chosen differently. Uniqueness of solutions of (4) and (5) requires that *g* _{ D } *ϕ* _{2}(*g* _{ D }) is a nondecreasing function of *g* _{ D } [28].

A popular choice for the weighting is based on *Huber's functions* [29] with

$$ \rho(x) = \begin{cases} x^{2}/2, & |x| \leq c_{\text{hub}} \\ c_{\text{hub}}|x| - c_{\text{hub}}^{2}/2, & |x| > c_{\text{hub}}, \end{cases} \qquad(6) $$

with *c* _{hub} denoting Huber's tuning constant. The function *ρ* from Eq. (6) shows quadratic behavior in the central region while increasing linearly to infinity. Outliers are therefore assigned less weight than data close to the model. Note that all maximum likelihood estimators are also M-estimators.
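A minimal sketch of iteratively reweighted M-estimation of a centroid and covariance, using a Huber-type weight on the squared Mahalanobis distance. The tuning constant, stopping rule, and median starting point are illustrative choices, not the exact scheme of [28], and the covariance is not rescaled for consistency.

```python
import numpy as np

def huber_weight(g, c):
    """Huber-type weight applied to the squared Mahalanobis distance g."""
    return np.where(g <= c, 1.0, c / g)

def m_estimate(D, c=2.0, n_iter=50, tol=1e-8):
    """Iteratively reweighted estimate of centroid and covariance (sketch).

    `c` is a hypothetical tuning constant; a robust start is taken as
    the coordinate-wise median.
    """
    N, q = D.shape
    w_hat = np.median(D, axis=0)
    S_hat = np.cov(D, rowvar=False)
    for _ in range(n_iter):
        diff = D - w_hat
        # squared Mahalanobis distance of every sample
        g = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(S_hat), diff)
        wts = huber_weight(g, c)
        w_new = (wts[:, None] * D).sum(0) / wts.sum()
        S_new = (wts[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(0) / N
        converged = np.linalg.norm(w_new - w_hat) < tol
        w_hat, S_hat = w_new, S_new
        if converged:
            break
    return w_hat, S_hat
```

Outlying samples receive weight `c/g` instead of 1, so a gross outlier contributes almost nothing to the centroid update.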

## 4 Proposed methods

In this section, two new robust in-network distributed classification algorithms are presented that extend our previously published algorithm [21] by a robust distance measure that improves the classification/labeling performance, especially if the covariances of the clusters differ significantly.

Since we have no training data available for the classification process, the general idea of the methods is to split the classification/labeling procedure into two main steps: in a *local clustering phase* each node calculates a preliminary estimate of the cluster characteristics (i.e., centroids and covariances) of each cluster using a small number of feature vectors. These preliminary estimates serve as an initialization for the subsequent *global classification phase*. Here, based on these estimates, a new feature is classified using a robust distance measure. The aim is to improve the local classification result by a combination of local processing and communication between the agents.

An advantage of this procedure is that this hybrid approach turns into a mere classification algorithm when the cluster characteristics are known beforehand. In this case, the local clustering phase is not needed.

The methods are based on the diffusion LMS strategy that was introduced in [30]. In this way, the classification is adaptive and can handle streaming data coming from a distributed sensor network. Since the communication cost between the nodes should be kept as low as possible, the second approach is designed with reduced in-network communication. A robust design makes sure that the proposed algorithms are, to a certain degree, resistant towards outliers in the feature vectors. In the following, the two proposed approaches are described in detail.

### 4.1 Robust distance-based K-medians clustering/classification over adaptive diffusion networks (RDiff K-Med)

In the local clustering phase, each node *j* collects a number of *N* _{ t } observations and performs K-medians clustering on these observations. In this way, each node locally partitions its first *N* _{ t } observations **D**_{ jn }={**d**_{ jn }, *n*=1,…,*N* _{ t }} into *K* sets \(\mathcal {C}_{k}\) so that the *ℓ* _{1}-distance within each cluster is minimized:

$$ \underset{\mathcal{C}_{1},\ldots,\mathcal{C}_{K}}{\arg\min}\; \sum_{k=1}^{K} \sum_{\mathbf{d}_{jn} \in \mathcal{C}_{k}} \left\| \mathbf{d}_{jn} - \hat{\boldsymbol{\psi}}^{0}_{jk} \right\|_{1}. \qquad(8) $$

Each center is the component-wise median of the points of each cluster. The features assigned to each class \(\mathcal {C}_{k}\) are stored in an initial feature matrix \(\boldsymbol {S}^{0}_{jk}\). Based on all elements in \(\boldsymbol {S}^{0}_{jk}\), local intermediate estimates of the cluster centroid \(\boldsymbol {\psi }^{0}_{jk}\) and covariance matrix \(\boldsymbol {\Sigma }_{jk}^{0}\) are determined. In the following, the calculation steps are presented in detail.

\(\hat {\boldsymbol {\psi }}^{0}_{jk}\) is thus obtained by computing the median separately for each spatial direction of all elements in \(\boldsymbol {S}^{0}_{jk}\).
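The local clustering step, i.e., ℓ1 assignment with component-wise-median centers, can be sketched as follows; the random initialization from data points and the fixed iteration count are assumptions for this sketch.

```python
import numpy as np

def k_medians(D, K, n_iter=20, seed=0):
    """Plain K-medians sketch: assign each point to the center with the
    smallest l1 distance, then update each center as the component-wise
    median of its cluster."""
    rng = np.random.default_rng(seed)
    centers = D[rng.choice(len(D), K, replace=False)]
    for _ in range(n_iter):
        # l1 distance of every point to every center: (N, K)
        dist = np.abs(D[:, None, :] - centers[None, :, :]).sum(-1)
        labels = dist.argmin(1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = np.median(D[labels == k], axis=0)
    return centers, labels
```

In line with the text, each node would run this on its first *N*_t observations, and (as in Section 5.2) keep the best of several restarts.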

Next, we proceed by computing a robust local initial estimate of the cluster covariances. In this paper, we compare three estimators: the sample covariance, Huber's M-estimator, and a computationally simple robust covariance estimator based on the median absolute deviation (MAD).

Huber’s M-estimator, as defined in Eq. (6), is computed via an iteratively reweighted least-squares algorithm, as detailed in [28] with the previously computed \(\hat {\boldsymbol {\psi }}^{0}_{jk}\) as location estimate.

For each feature vector **d**_{ jn } in \(\boldsymbol {S}^{0}_{jk}\), the difference vector to the location estimate \(\hat {\boldsymbol {\psi }}^{0}_{jk}\) is computed, and the MAD of these difference vectors in each spatial direction yields the covariance estimate of Eq. (13), with \(\hat {\boldsymbol {\sigma }}_{jks}^{0}\) denoting the standard deviation estimate in each spatial direction of the feature space. Note that the covariance matrix calculated in Eq. (13) is a diagonal matrix. This computationally simple robust estimator is only applicable when the entries of the feature vectors are assumed to be independent of each other. The estimates of the sample covariance matrix and the M-estimator do not require this assumption and are, in general, not diagonal matrices.
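A sketch of the MAD-based diagonal covariance estimate. The factor 1.4826 is the standard choice that makes the MAD consistent for the standard deviation under Gaussian data; the paper's exact scaling may differ.

```python
import numpy as np

def mad_diag_cov(S, center):
    """Diagonal covariance estimate from the MAD in each direction.

    S: (N, q) matrix of feature vectors of one cluster at one node.
    center: robust location estimate (e.g., coordinate-wise median).
    """
    diff = S - center                                   # difference vectors
    sigma = 1.4826 * np.median(np.abs(diff), axis=0)    # per-direction robust std
    return np.diag(sigma ** 2)
```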

Since the order in which the cluster centroids are stored by K-Medians is random, it may differ between two nodes. Thus, it has to be assured that the data which is exchanged by the nodes refers to the same classes. This is achieved by a unique initial ordering of the class centroids and covariance matrices among all nodes in the network: starting with the class centroids and covariance matrices stored for the first class of a preset reference node, all other nodes calculate the Euclidean distance of the respective entries corresponding to all stored classes and those of the first class of the reference node. The data with the smallest Euclidean distance to the reference entries are re-stored at the position corresponding to the first class. This procedure is repeated for all classes stored by the nodes in the network.
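The ordering step above can be sketched as a greedy nearest-centroid matching against the reference node; the function name and greedy strategy are illustrative assumptions.

```python
import numpy as np

def align_to_reference(ref_centers, own_centers, own_covs):
    """Reorder a node's locally stored classes so that class k matches
    class k of the reference node, by repeatedly picking the own
    centroid with the smallest Euclidean distance to each reference
    centroid (a sketch of the unique initial ordering)."""
    K = len(ref_centers)
    order, free = [], list(range(K))
    for k in range(K):
        d = [np.linalg.norm(own_centers[i] - ref_centers[k]) for i in free]
        order.append(free.pop(int(np.argmin(d))))
    return own_centers[order], own_covs[order]
```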

Having obtained a consistent data structure, each node *j* exchanges its own feature vectors \(\boldsymbol {S}^{0}_{jk}\) for each class \(\mathcal {C}_{k}\) with its neighbors \(i \in \mathcal {B}_{j}\). All nodes store their own as well as the features received from their neighbors in an initial matrix \(\boldsymbol {V}^{0}_{jk}\). In the following clustering/classification procedure, \(\boldsymbol {S}^{0}_{jk}\) and \(\boldsymbol {V}^{0}_{jk}\) are extended to **S**_{ jkn } and **V**_{ jkn } in every time step *n* by adding columns containing the new feature vectors received at time step *n*.

This completes the initialization phase, which is followed by the exchange phase, where each new observation **d** _{ jn }, *n*=*N* _{ t }+1,…,*N*, is classified according to the following diffusion-procedure:

**1. Exchange Step:** If there are new, unshared feature vectors, each node *j* adds them to **V**_{ jkn } and broadcasts them to its neighbors \(i \in \mathcal {B}_{j}\).

**2. Adaptation Step:** Each node *j* determines preliminary local estimates \(\hat {\boldsymbol {\psi }}_{jkn}\) and \(\hat {\boldsymbol {\Sigma }}_{jkn}^{\ast } \) at time *n* based on the feature vectors stored in **V**_{ jkn }, analogously to (9)–(13) with **V**_{ jkn } replacing **S**_{ jkn }. In order to be capable of dealing with non-stationary, time-varying signals, a window length *l* _{ w } is introduced which limits the size of **V**_{ jkn } by only retaining the latest *l* _{ w } elements that were added to **V**_{ jkn }.

**3. Exchange Step:** Each node exchanges its intermediate estimates \(\hat {\boldsymbol {\psi }}_{jkn}\) and \(\hat {\boldsymbol {\Sigma }}_{jkn}^{\ast } \) with its neighbors.

**4. Combination Step:** Each node *j* adapts its estimates according to

$$ \hat{\boldsymbol{w}}_{jkn} = \alpha\, \hat{\boldsymbol{\psi}}_{jkn} + (1-\alpha) \sum_{b \in \mathcal{B}_{j}/\{j\}} a_{bkn}\, \hat{\boldsymbol{\psi}}_{bkn}, \qquad(14) $$

$$ \hat{\boldsymbol{\Sigma}}_{jkn} = \alpha\, \hat{\boldsymbol{\Sigma}}^{\ast}_{jkn} + (1-\alpha) \sum_{b \in \mathcal{B}_{j}/\{j\}} a_{bkn}\, \hat{\boldsymbol{\Sigma}}^{\ast}_{bkn}, \qquad(15) $$

with *α* denoting an adaptation factor which determines the weight that is given to the own estimate and the neighborhood estimates, respectively, and *a* _{ bkn } being a weighting factor with subsequent normalization such that \(\sum _{b \in \mathcal {B}_{j}/\{j\}} a_{bkn} =1\).

**5. Classification Step:** In the next step, feature vector **d**_{ jn } is classified by evaluating its distance to each of the estimated class centroids \(\hat {\boldsymbol {w}}_{jk}\). The considered distance measures are the Euclidean distance and the Mahalanobis distance given by

$$ d_{\mathrm{M}}\!\left(\mathbf{d}_{jn}, \hat{\boldsymbol{w}}_{jkn}\right) = \sqrt{\left(\mathbf{d}_{jn}-\hat{\boldsymbol{w}}_{jkn}\right)^{T} \hat{\boldsymbol{\Sigma}}_{jkn}^{-1} \left(\mathbf{d}_{jn}-\hat{\boldsymbol{w}}_{jkn}\right)}. $$

**d**_{ jn } is assigned to the class \(\mathcal {C}_{k}\) for which the respective distance is minimized.

The processing chain then starts again at *Step 1*, where the previously classified feature vectors are broadcast to the neighborhood.
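The combination and classification steps can be sketched as follows; the convex-combination form of the update, the uniform treatment of the neighbor weights, and the function names are assumptions for illustration.

```python
import numpy as np

def combine(psi_own, Sigma_own, psi_nbrs, Sigma_nbrs, a, alpha=0.5):
    """Combination step: convex combination of the own intermediate
    estimate and the a-weighted neighborhood estimates (Eqs. (14)-(15),
    in the form assumed here).

    psi_nbrs: (B, q) neighbor centroid estimates; Sigma_nbrs: (B, q, q);
    a: (B,) normalized neighbor weights summing to one.
    """
    w_hat = alpha * psi_own + (1 - alpha) * np.tensordot(a, psi_nbrs, axes=1)
    S_hat = alpha * Sigma_own + (1 - alpha) * np.tensordot(a, Sigma_nbrs, axes=1)
    return w_hat, S_hat

def classify(d, centers, covs, metric="mahalanobis"):
    """Classification step: assign d to the class with the closest
    centroid; squared distances suffice for the argmin."""
    dists = []
    for w_k, S_k in zip(centers, covs):
        diff = d - w_k
        if metric == "mahalanobis":
            dists.append(float(diff @ np.linalg.inv(S_k) @ diff))
        else:  # squared Euclidean distance
            dists.append(float(diff @ diff))
    return int(np.argmin(dists))
```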

### 4.2 Communicationally Efficient Robust Distance-Based K-Medians Clustering/Classification over Adaptive Diffusion Networks (CE RDiff K-Med)

Since the *RDiff K-Med* may be demanding in terms of communication between sensors, which is a major contributor to the energy consumption of the devices [31], an algorithm is proposed which yields similar performance with reduced in-network communication: the “Communicationally Efficient Robust Distance-Based K-Medians Clustering/Classification over Adaptive Diffusion Networks” (CE RDiff K-Med).

The general procedure is similar to the RDiff K-Med except that there is no exchange of feature vectors between the nodes. The steps of the *CE RDiff K-Med* are the following:

**1. Adaptation Step:** Based on the feature vectors **d**_{ jn } stored in **S**_{ jkn }, each node calculates its intermediate estimates \(\hat {\boldsymbol {\psi }}_{jkn}\) and \(\hat {\boldsymbol {\Sigma }}_{jkn}^{\ast }\) according to (9)–(13).

**2. Exchange Step:** Instead of broadcasting the entire feature vectors, the nodes share only their estimates of the cluster centers \(\hat {\boldsymbol {\psi }}_{jkn}\) and the respective covariance matrices \(\hat {\boldsymbol {\Sigma }}_{jkn}^{\ast } \) with their neighbors.

**3. Combine Step:** Each sensor *j* combines its neighbors' estimates analogously to (14) and (15) in order to obtain improved estimates \(\hat {\boldsymbol {w}}_{jkn}\) and \(\hat {\boldsymbol {\Sigma }}_{jkn}\).

**4. Classification Step:** Based on the estimates determined in the previous step, the distance measure of the feature vector to the estimates of the class centroids is evaluated, and **d**_{ jn } is classified analogously to the RDiff K-Med. Subsequently, **d**_{ jn } is added to **S**_{ jkn }.
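For intuition on why the CE variant is cheaper, here is a rough, hypothetical count of the scalars a node broadcasts per time step; message formats and bookkeeping overhead are not modeled, so these are order-of-magnitude figures only.

```python
def scalars_per_step(q, K, n_new_features):
    """Rough count of scalars broadcast per node and time step.

    Assumes a symmetric covariance costs q*(q+1)/2 scalars and that both
    methods share K centroid/covariance pairs; RDiff K-Med additionally
    broadcasts the new feature vectors (n_new_features of dimension q).
    """
    cov = q * (q + 1) // 2
    estimates = K * (q + cov)
    rdiff = n_new_features * q + estimates
    ce_rdiff = estimates
    return rdiff, ce_rdiff
```

For example, with `q=3`, `K=3` and five new feature vectors, the feature exchange adds 15 scalars on top of the 27 scalars of estimates that both methods share.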

## 5 Numerical experiments

This section numerically evaluates the performance of the proposed algorithms in terms of the error rate under a broad range of conditions, i.e., different distributions of the outliers, different percentages of outliers in the feature vectors, different dimensions of the input data, and different numbers of clusters, as well as in terms of the adaptation speed in the case of non-stationary data. Furthermore, the communication cost for different neighborhood sizes and the computation time as a function of the data dimension are considered. When reasonable, we compare our proposed methods to the DKM [17].

### 5.1 Benchmark: distributed K-means (DKM)

The benchmark is the *Distributed K-Means* (DKM) algorithm by Forero et al.; for details, see [17]. The basic idea of the DKM is to cluster the observations into a given number of groups such that the sum of squared errors is minimized, that is,

$$ \underset{\{\mathbf{w}_{k}\},\{\mu_{jnk}\}}{\min}\; \sum_{j=1}^{J} \sum_{n=1}^{N} \sum_{k=1}^{K} \mu_{jnk}^{p}\, \left\| \mathbf{d}_{jn} - \mathbf{w}_{k} \right\|^{2}, \qquad(19) $$

where **w** _{ k } is the cluster center for class *k*, *μ* _{ jnk }∈ [0,1] is the membership coefficient of **d** _{ jn } to class *k*, and *p*∈ [1,+*∞*] is a tuning parameter. The DKM iteratively solves the surrogate augmented Lagrangian of a distributed clustering problem based on (19) while exchanging the resulting parameters among neighboring nodes.
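The fuzzy sum-of-squared-errors cost in (19) can be evaluated for a given set of centers and memberships as follows; this is a sketch of the objective only, not of the DKM's distributed augmented-Lagrangian solver.

```python
import numpy as np

def dkm_objective(D, W, mu, p=2.0):
    """Evaluate the fuzzy sum-of-squared-errors objective of Eq. (19)
    for one node's data.

    D: (N, q) samples, W: (K, q) cluster centers,
    mu: (N, K) membership coefficients in [0, 1], p: tuning parameter.
    """
    # squared Euclidean distance of every sample to every center: (N, K)
    d2 = ((D[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    return float((mu ** p * d2).sum())
```

With hard memberships (each row of `mu` a one-hot vector) and `p=2`, this reduces to the usual K-means cost.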

Although the DKM achieves very good performance in many scenarios, a major drawback is that the clustering is performed based on all available data and that it may need a high number of iterations until it converges to its final solution. This property makes the DKM difficult to use in real-time applications where an observation needs to be classified based on streaming data, such as for example in speaker labeling for MDMT speech enhancement [2] or object labeling in MDMT video enhancement for camera networks [9]. In addition to that, the performance of the DKM is limited in scenarios where feature vectors contain outliers.

### 5.2 Simulation setup

The simulations are based on a scenario with *J*=10 nodes which are randomly distributed in space. Each node is connected to the four neighboring nodes which have the smallest Euclidean distance. Unless mentioned otherwise, classification is performed on *K*=3 classes with centers **w** _{1}=(1,1,1)^{ T }, **w** _{2}=(1,4,3)^{ T }, **w** _{3}=(3,1,1)^{ T }. Each sample **d** _{ jn } from class *k* is drawn at random from the density \(\mathcal {N}(\mathbf {d}_{jn};\mathbf {w}_{k},\mathbf {\Sigma }_{k})\) with covariance matrices **Σ**_{1}=(1,0.01,0.01)^{ T } **I** _{3}, **Σ**_{2}=(0.16,4,0.16)^{ T } **I** _{3} and **Σ**_{3}=(0.25,0.01,4)^{ T } **I** _{3}. Each node has *N* _{ J }=80 samples available, 20 for the initialization and 60 for real-time classification. K-Medians is run three times, and the result which minimizes (8) is used for the classification. The parameters for the benchmark algorithm DKM are set to *p*=*ν*=2, where *p*=2 enables soft clustering and *ν*=2 is the tuning parameter which yields the best results in the performance tests in [17]. The DKM result is obtained with all *N* _{ J }=80 samples per node available. Since the performance of the DKM depends on the number of iterations, we provide simulation results for multiple choices of the number of iterations.

To generate outliers, a certain percentage of samples is replaced by new samples drawn from a contaminating distribution (Gaussian or chi-square). The error rate is calculated based on the classified samples, excluding any outliers. The displayed results are averages over 100 Monte-Carlo runs.

### 5.3 Simulation results

To investigate the influence of the data dimension, **w**_{3}=(3,1,1)^{ T } is changed to **w**_{3}=(3,1,1,3,1,1)^{ T } and **Σ**_{3}=(0.25,0.01,4)^{ T } **I** _{3} becomes **Σ**_{3}=(0.25,0.01,4,0.25,0.01,4)^{ T } **I** _{6} in order to obtain data of dimension *q*=6, and so on. For increasing data dimension, the error rates of all considered algorithms decrease continuously.

Next, the influence of the percentage of outliers in the data is investigated, where 0 % corresponds to the outlier-free case. Here, the outliers are drawn at random from a Gaussian distribution with the density \(\mathcal {N}((10, 10, 10)^{T},\mathbf {I}_{3})\). The simulation is run with the different estimators of covariance introduced in Section 3; the location is estimated using the median. The robust distance measures result in smaller error rates than the Euclidean distance.

A further simulation considers outliers generated from a chi-square distribution with a different number of degrees of freedom *v* for each class: to a certain percentage of the feature vectors, a vector is added which is drawn at random from a chi-square distribution, where for each class different values for *v* are chosen. This is done in order to create a non-symmetric outlier distribution instead of a constant shift of the mean of the outlier distribution for all classes. In this manner, for the first class \(\mathcal {C}_{1}\), a randomly drawn vector of dimension *q* with *v* _{1}=3 is added to a certain number of data vectors, a vector with *v* _{2}=5 is subtracted from corresponding feature vectors of class \(\mathcal {C}_{2}\), and for \(\mathcal {C}_{3}\) a different random number is drawn for each direction in space, generated with *v* _{3,1}=4, *v* _{3,2}=1 and *v* _{3,3}=7 for the *x*, *y* and *z* directions, respectively, whereby the component generated with *v* _{3,2}=1 is subtracted from the *y*-component. For this simulation, a scenario with more distinct clusters is chosen, with centroids **w**_{1}=(1,1,1)^{ T }, **w**_{2}=(0,5,3)^{ T }, **w**_{3}=(3,3,7)^{ T }. The result is given in Fig. 7.

The DKM with *i*=10 and *i*=20 iterations, as well as the RDiff K-Med and CE RDiff K-Med with robust estimation methods, show a slightly decreasing error rate with a growing number of feature vectors.

To evaluate the performance for a larger number of clusters, a scenario with *K*=8 classes is considered, with centroids **w**_{1}=(1,0,3)^{ T }, **w**_{2}=(1,4,3)^{ T }, **w**_{3}=(1,0,6)^{ T }, **w**_{4}=(−1,3,3)^{ T }, **w**_{5}=(4,4,4)^{ T }, **w**_{6}=(6,3,7)^{ T }, **w**_{7}=(4.5,7,6)^{ T } and **w**_{8}=(2,4,7)^{ T }, with corresponding covariance matrices **Σ**_{1}=(0.1,0.1,1)^{ T } **I**_{3}, **Σ**_{2}=(0.1,0.4,1)^{ T } **I**_{3}, **Σ**_{3}=(2,0.1,0.5)^{ T } **I**_{3}, **Σ**_{4}=(0.4,1.6,0.4)^{ T } **I**_{3}, **Σ**_{5}=(0.2,1.2,0.1)^{ T } **I**_{3}, **Σ**_{6}=(0.25,0.3,1.5)^{ T } **I**_{3}, **Σ**_{7}=(0.8,0.5,0.2)^{ T } **I**_{3} and **Σ**_{8}=(0.5,0.5,0.3)^{ T } **I**_{3}. The outliers are drawn randomly from a Gaussian distribution with the density \(\mathcal {N}((10, 10, 10)^{T},\mathbf {I}_{3})\). The results are provided in Fig. 10.

The adaptation behavior is investigated for different window lengths *l* _{ w } and different values of *α* (see Eqs. (14) and (15)) by calculating the error, which is given by the norm of the difference between the true value and the estimate of the cluster centroid. Unlike the CE RDiff K-Med, the RDiff K-Med stores not only its own feature vectors but also the feature vectors from its neighborhood, so it has \((\mid \mathcal {B}_{j}\mid +1)\) data vectors per time step available instead of only one. In order to make the window sizes for both algorithms comparable, *l* _{ w } is chosen such that it contains the feature vectors of *l* _{ w } time steps. As a consequence, the compared window length of the RDiff K-Med corresponds to \((\mid \mathcal {B}_{j}\mid +1)\) times the window length of the CE RDiff K-Med. The result is shown in Fig. 11. As depicted in the upper plot, a large window size results in a slower adaptation speed. The RDiff K-Med adapts faster to the true cluster centroid than the CE RDiff K-Med, since its estimation is based on more available samples. However, the CE RDiff K-Med yields a smaller error compared to the RDiff K-Med once both have adapted to the true value. The choice of the factor *α* (see lower plot) has no significant impact on the RDiff K-Med. For the CE RDiff K-Med, a smaller value of *α* (and therefore a higher weighting of the estimates of the neighboring nodes) leads to a higher adaptation speed. Since it has only a small number of feature vectors available, this method depends on the data exchange with its neighbors.

### 5.4 Communication cost and computation time

## 6 Conclusions

Two generic robust diffusion-based distributed hybrid classification algorithms were proposed, which can be adapted to various object/source labeling applications in a decentralized MDMT network. A performance comparison to the DKM was provided, and the proposed methods showed promising results. Even in direct comparison with the DKM, which operates in batch mode and thus permanently has access to all available samples, the proposed online methods provide error rates comparable to the DKM using 50 iterations and more. Unlike the DKM, both the RDiff K-Med and CE RDiff K-Med are potentially real-time capable.

The choice of the distance metric has a considerable impact on the performance of the proposed classification algorithms. Using the Mahalanobis distance yields significantly smaller error rates compared to the Euclidean distance while resulting in higher communication costs and computation time.

Future work will include the application of these algorithms to real-world speech source labeling, object labeling in camera networks, as well as labeling of semantic information based on occupancy grid maps for autonomous mapping and navigation with multiple rescue robots [32].

## Notes

### Acknowledgements

This work of P. Binder was supported by the LOEWE initiative (Hessen, Germany) within the NICER project and by the German Research Foundation (DFG). The work of M. Muma was supported by the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission (HANDiCAMS), under FET-Open grant number: 323944.

## References

- 1. E Hänsler, *Statistische Signale: Grundlagen und Anwendungen* (Springer, Berlin, 2013).
- 2. A Bertrand, M Moonen, Distributed signal estimation in sensor networks where nodes have different interests. Signal Process. **92**(7), 1679–1690 (2012).
- 3. N Bogdanovic, J Plata-Chaves, K Berberidis, Distributed incremental-based LMS for node-specific adaptive parameter estimation. IEEE Trans. Signal Process. **62**(20), 5382–5397 (2014).
- 4. J Plata-Chaves, A Bertrand, M Moonen, in *Proc. 2015 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP)*. Distributed signal estimation in a wireless sensor network with partially overlapping node-specific interests or source observability (2015), pp. 5808–5812.
- 5. J Chen, C Richard, AH Sayed, Diffusion LMS over multitask networks. IEEE Trans. Signal Process. **63**(11), 2733–2748 (2015).
- 6. J Chen, C Richard, AH Sayed, Multitask diffusion adaptation over networks. IEEE Trans. Signal Process. **62**(16), 4129–4144 (2014).
- 7. E Hänsler, G Schmidt, *Speech and Audio Processing in Adverse Environments* (Springer, Berlin, 2008).
- 8. S Chouvardas, M Muma, K Hamaidi, S Theodoridis, AM Zoubir, in *Proc. 40th IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP)*. Distributed robust labeling of audio sources in heterogeneous wireless sensor networks (2015), pp. 5783–5787.
- 9. FK Teklehaymanot, M Muma, B Béjar-Haro, P Binder, AM Zoubir, M Vetterli, in *Proc. 12th IEEE AFRICON (accepted)*. Robust diffusion-based unsupervised object labelling in distributed camera networks (2015).
- 10. A D'Costa, A Sayeed, in *IEEE Military Communications Conference (MILCOM)*, vol. 1. Data versus decision fusion for distributed classification in sensor networks (2003), pp. 585–590.
- 11. D Li, KD Wong, YH Hu, AM Sayeed, Detection, classification and tracking of targets in distributed sensor networks. Technical report, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, USA.
- 12. F Fagnani, S Fosson, C Ravazzi, A distributed classification/estimation algorithm for sensor networks. SIAM J. Control Optim. **52**(1), 189–218 (2014).
- 13. M Hai, S Zhang, L Zhu, Y Wang, in *Proc. 2012 Int. Conf. Industrial Control and Electronics Engineering (ICICEE)*. A survey of distributed clustering algorithms (2012), pp. 1142–1145.
- 14. E Kokiopoulou, P Frossard, Distributed classification of multiple observation sets by consensus. IEEE Trans. Signal Process. **59**(1), 104–114 (2011).
- 15. B Malhotra, I Nikolaidis, J Harms, Distributed classification of acoustic targets in wireless audio-sensor networks. Comput. Netw. **52**(13), 2582–2593 (2008).
- 16. RD Nowak, Distributed EM algorithms for density estimation and clustering in sensor networks. IEEE Trans. Signal Process. **51**(8), 2245–2253 (2003).
- 17. P Forero, A Cano, GB Giannakis, Distributed clustering using wireless sensor networks. IEEE J. Sel. Topics Signal Process. **5**(4), 707–724 (2011).
- 18. S-Y Tu, AH Sayed, Distributed decision-making over adaptive networks. IEEE Trans. Signal Process. **62**(5), 1054–1069 (2014).
- 19. D Wang, J Li, Y Zhou, in *IEEE/SP 15th Workshop on Statistical Signal Processing (SSP)*. Support vector machine for distributed classification: a dynamic consensus approach (2009), pp. 753–756.
- 20. X Zhao, AH Sayed, Distributed clustering and learning over networks. IEEE Trans. Signal Process. **63**(13), 3285–3300 (2015).
- 21. P Binder, M Muma, AM Zoubir, in *Proc. 40th IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP)*. Robust and computationally efficient diffusion-based classification in distributed networks (Brisbane, Australia, 2015), pp. 3432–3436.
- 22. X Zhao, AH Sayed, in *Proc. International Workshop on Cognitive Information Processing (CIP)*. Clustering via diffusion adaptation over networks (2012), pp. 1–6.
- 23. AH Sayed, Adaptation, learning, and optimization over networks. Found. Trends Mach. Learn. **7**(4-5), 311–801 (2014).
- 24. S Khawatmi, AM Zoubir, AH Sayed, in *Proc. 23rd European Signal Processing Conf. (EUSIPCO)*. Decentralized clustering over adaptive networks (Nice, France, 2015), pp. 2745–2749.
- 25. AH Sayed, Adaptive networks. Proc. IEEE.
**102**(4), 460–497 (2014).CrossRefGoogle Scholar - 26.AM Zoubir, V Koivunen, Y Chakhchoukh, M Muma, Robust estimation in signal processing: a tutorial-style treatment of fundamental concepts. Signal Process. Mag. IEEE.
**29**(4), 61–80 (2012).CrossRefGoogle Scholar - 27.PA Forero, V Kekatos, GB Giannakis, Robust clustering using outlier-sparsity regularization. IEEE Trans. Signal Process.
**60**(8), 4163–4177 (2012).MathSciNetCrossRefGoogle Scholar - 28.R Maronna, D Martin, V Yohai,
*Robust Statistics*(John Wiley & Sons, Chichester, 2006).CrossRefzbMATHGoogle Scholar - 29.PJ Huber, et al., Robust estimation of a location parameter. Ann. Math. Stat.
**35**(1), 73–101 (1964).MathSciNetCrossRefzbMATHGoogle Scholar - 30.FS Cattivelli, AH Sayed, Diffusion LMS strategies for distributed estimation. IEEE Trans. Signal Process.
**58**(3), 1035–1048 (2010).MathSciNetCrossRefGoogle Scholar - 31.D Estrin, L Girod, G Pottie, M Srivastava, in
*IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP)*, 4. Instrumenting the world with wireless sensor networks, (2001), pp. 2033–2036.Google Scholar - 32.S Kohlbrecher, J Meyer, T Graber, K Petersen, U Klingauf, O von Stryk, in
*RoboCup 2013: Robot World Cup XVII*. Hector open source modules for autonomous mapping and navigation with rescue robots (SpringerBerlin, 2014), pp. 624–631.Google Scholar

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.