1 Introduction

The widespread deployment of telecommunication systems has made radio-frequency (RF) signals pervasive in almost every environment of daily life. Knowledge of the location of mobile devices is required or beneficial in many applications [1], and numerous localization techniques have been proposed over the years [14]. Techniques based on the received signal strength (RSS) are the preferred option when low cost, simplicity, and technology obliviousness are the main requirements. In some standards, e.g., IEEE 802.15.4, an RSS indicator (RSSI) is encoded directly into the protocol stack [5]. In addition, RSS is readily available from any radio interface through a simple energy detector and can be modeled by the well-known path loss model [6] regardless of the particular communication scheme. On this basis, RSS can be exploited to implement “opportunistic” localization for different wireless technologies, e.g., Wi-Fi [7], FM radio [8], or cellular networks [9]. In the context of wireless sensor networks (WSNs), nodes with known positions (anchors) can be used to localize the nodes with unknown positions (agents). Generally speaking, localization algorithms can be classified according to three important criteria.

Centralized vs. distributed. Centralized algorithms, e.g., [10–14], require a data fusion center that carries out the computation after collecting information from the nodes, while distributed algorithms, such as [15, 16], rely on self-localization, the computation being spread throughout the network. Centralized algorithms are likely to provide more accurate estimates, but they suffer from scalability problems, especially for large-scale WSNs, while distributed algorithms have the advantage of being scalable and more robust to node failures [17].

Cooperative vs. non-cooperative. In a non-cooperative algorithm, e.g., [18], each agent receives information only from anchors. For all agents to obtain sufficient information to perform localization, non-cooperative algorithms necessitate either long-range (and high-power) anchor transmissions or a high density of anchors [17]. In cooperative algorithms, such as [19, 20], inter-agent communication removes the need for all agents to be in range of one (or more) anchors [17].

Bayesian vs. non-Bayesian. In non-Bayesian algorithms, e.g., expectation-maximization (EM) [21], and its variant, expectation-conditional maximization (ECM) [22], the unknown positions are treated as deterministic, while in Bayesian algorithms, e.g., nonparametric belief propagation (NBP) [23] and sum-product algorithm over wireless networks (SPAWN) [24, 25] and its variant Sigma-Point SPAWN [26], the unknown positions are assumed to be random variables with a known prior distribution.

Many existing works on localization using RSS measures, such as [27] and [28], are based on the assumption that the classical path loss propagation model is perfectly known, mostly via a calibration process. However, this assumption is impractical for two reasons. Firstly, conducting the calibration process requires intensive human assistance, which may not be affordable, or may even be impossible in some inaccessible areas. Secondly, the channel characteristics vary due to multipath (fading), and non-negligible changes also occur due to mid- to long-term variations in the environment, leading to non-stationary channel parameters [29]. This implies that the calibration must be performed continuously [29, 30], because otherwise the resulting mismatch between design assumptions and actual operating conditions leads to severe performance degradation. These facts highlight the need for algorithms that adaptively estimate the environment and the locations. A further difficulty is the existence of non-line-of-sight (NLoS) propagation in practical localization environments. Among the various works handling the NLoS effect, the majority treat the NLoS measures as outliers and try to neglect or mitigate their effect, including the maximum likelihood (ML)-based approach [31, 32], the weighted least-squares (WLS) estimator [32, 33], the constrained localization techniques [34, 35], robust estimators [36, 37], and the method based on virtual stations [38]. In contrast to these works, several approaches, including [21, 22, 39], have proposed specific probabilistic models for the NLoS measures, thereby exploiting the NLoS measures for localization. In light of these considerations, our aim is to develop an RSS-based, cooperative localization framework that works in mixed LoS/NLoS environments, requires no knowledge of the propagation model parameters, and can be realized in a distributed manner.

The following distinction is made: only algorithms that directly use RSS measures as inputs are considered RSS-based in the strict sense, while algorithms such as NBP [23], SPAWN [24], distributed-ECM (D-ECM) [22], and their variants are here called pseudo-RSS-based, because they use range estimates as inputs. Among these two options, RSS-based location estimators are preferred for the following reasons. Firstly, inferring range estimates from RSS measures usually requires knowledge of the (estimated) propagation model parameters, and the assumption of a priori known parameters violates the calibration-free requirement. Secondly, even with perfectly known model parameters, there exists no efficient estimator of the ranges from the RSS measures, as proven in [40]. Thirdly, dropping the idealistic assumption of known channel parameters and using their estimates introduces a large, irremovable bias, as demonstrated in [41]. Based on these considerations, pseudo-RSS-based approaches do not meet the requirements of this work. Furthermore, Bayesian approaches, including NBP [23] and SPAWN [24, 25], do not consider mixed LoS/NLoS environments. As one representative RSS-based cooperative localization algorithm, Tomic et al.’s semidefinite-programming (SDP) estimator in [20] requires no knowledge of the propagation model, but it does not apply to, and cannot be readily extended to, a mixed LoS/NLoS environment. To the best of our knowledge, the existing work on RSS-based calibration-free localization in a mixed LoS/NLoS environment is rather limited. Yin et al. have proposed an EM-based estimator in [21], but only for the single-agent case. In this paper, we consider a multi-agent setting and aim to develop a location estimator that is RSS-based, cooperative, and calibration-free and that works in a mixed LoS/NLoS environment. To capture the mixed LoS/NLoS propagation conditions, we adopt the mode-dependent propagation model of [21]. The key difference between this work and [21] lies in whether the localization is cooperative or not: [21] is concerned with conventional single-agent localization, while this work studies cooperative localization in the multi-agent case. Furthermore, we develop a distributed algorithm, where model parameters and positions are updated locally by treating the estimated agents as anchors, inspired by the works in [42–44]. A succinct characterization of the proposed work and its related works is given in Table 1.

Table 1 Succinct characterization of the related works and the proposed work

Original contributions: We address the problem of RSS-based cooperative localization in a mixed LoS/NLoS propagation environment, requiring no calibration. To characterize such a mixed LoS/NLoS environment, we assume a mode-dependent propagation model with unknown parameters. We derive and analyze a robust, calibration-free, RSS-based distributed cooperative algorithm, based on the ML framework, which is capable of coping with mixed LoS/NLoS conditions. Simulation results, carried out in comparison with a centralized ML algorithm that serves as a benchmark, show that the proposed approach has good overall performance. Moreover, it adaptively estimates the channel parameters and has acceptable communication overhead and computational costs, thus satisfying the major requirements of a practically viable localization algorithm. The convergence analysis of the proposed algorithm is conducted by restating the problem as a graph coloring problem. In particular, we formulate a graph compatibility test and show that, for compatible network structures, convergence is guaranteed. Unlike existing algorithms, the proposed method naturally takes into account the (possible) asymmetry of links between nodes.

The paper is organized as follows. Section 2 formulates the problem and details the algorithms. Section 3 discusses convergence. Section 4 presents the simulation results, while Section 5 concludes the paper. Finally, Appendices A and B contain some analytical derivations which would otherwise burden the reading of the paper.

2 Methods/experimental

2.1 Problem formulation

Consider a directed graph with Na anchor nodes and Nu agent nodes, for a total of N=Na+Nu nodes. In a two-dimensional (2D) scenario, we denote the position of node i by \(\boldsymbol {x}_{i} = [\!x_{i} \ y_{i}]^{\top } \in \mathbb {R}^{2 \times 1}\), where ⊤ denotes transpose. Between two distinct nodes i and j, the binary variable oj→i indicates whether a measure, in direction j→i, is observed (oj→i=1) or not (oj→i=0). Since a node does not self-measure, oi→i=0. This allows us to define the observation matrix \(\boldsymbol {\mathcal {O}} \in \mathbb {B}^{N \times N}\) with elements \(o_{i,j} \triangleq o_{i \rightarrow j}\) as above. The aforementioned directed graph has connection matrix \(\boldsymbol {\mathcal {O}}\). It is important to remark that, for a directed graph, \(\boldsymbol {\mathcal {O}}\) is not necessarily symmetric; physically, this models possible channel anisotropies, missed detections and, more generally, link failures. Let mj→i be a binary variable denoting whether the link j→i is LoS (mj→i=1) or NLoS (mj→i=0). For physical reasons, mj→i=mi→j. We define the LoS/NLoS matrix \(\mathbf {L} \in \mathbb {B}^{N \times N}\) of elements \(l_{i,j} \triangleq m_{i \rightarrow j}\), and we observe that, since mj→i=mi→j, the matrix is symmetric, i.e., L=L⊤. We stress that this symmetry is preserved regardless of \(\boldsymbol {\mathcal {O}}\), as it derives from physical reasons only. Let Γ(i) be the (open) neighborhood of node i, i.e., the set of all nodes from which node i receives observables (RSS measures); formally, \(\Gamma (i) \triangleq \{ j \neq i : o_{j \rightarrow i}=1 \}\). We define Γa(i) as the anchor-neighborhood of node i, i.e., the subset of Γ(i) containing only the anchor neighbors of node i, and Γu(i) as the agent-neighborhood of node i, i.e., the subset of Γ(i) containing only the agent neighbors of node i. Note that Γ(i)=Γa(i)∪Γu(i).
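To make the notation concrete, the following minimal Python sketch builds the observation matrix \(\boldsymbol{\mathcal{O}}\) and the neighborhood sets Γ(i), Γa(i), and Γu(i) from a list of observed directed links; the node indexing and the link list are illustrative assumptions, not taken from the paper.

```python
# Sketch: observation matrix and neighborhood sets from observed directed links.
import numpy as np

N_a, N_u = 3, 2                # anchors, agents
N = N_a + N_u                  # nodes 0..2 are anchors, 3..4 are agents (assumed)
links = [(0, 3), (1, 3), (2, 3), (3, 4), (0, 4), (1, 4)]   # (j, i): measure j -> i observed

O = np.zeros((N, N), dtype=bool)   # row index = transmitter, i.e., O[j, i] = o_{j->i}
for j, i in links:
    O[j, i] = True                 # note: O need not be symmetric (directed graph)

def Gamma(i):      # open neighborhood: all j != i with o_{j->i} = 1
    return {j for j in range(N) if j != i and O[j, i]}

def Gamma_a(i):    # anchor neighbors only
    return {j for j in Gamma(i) if j < N_a}

def Gamma_u(i):    # agent neighbors only
    return {j for j in Gamma(i) if j >= N_a}

print(Gamma(4), Gamma_a(4), Gamma_u(4))   # {0, 1, 3} {0, 1} {3}
```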

2.2 Data model

In the sequel, we assume that all nodes are stationary and that the observation time window is sufficiently short for the correlation in the shadowing terms to be negligible. In practice, such a model simplification allows for a more analytical treatment of the localization problem and has also been used, for example, in [11, 12]. Following the path loss model and the data models present in the literature [17, 21], and denoting by K the number of samples collected on each link over a predetermined time window, we model the received power at time index k for anchor-agent links as

$$ r^{(m)}_{a \rightarrow i}(k) = \begin{cases} p_{0_{\text{LOS}}} - 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{a} - \boldsymbol{x}_{i} \| + w_{a \rightarrow i}(k), & \text{if} \ m_{a \rightarrow i}=1; \\ p_{0_{\text{NLOS}}} - 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{a} - \boldsymbol{x}_{i} \| + v_{a \rightarrow i}(k), & \text{if} \ m_{a \rightarrow i}=0, \end{cases} $$
(1)

while, for the agent-agent link,

$$ r^{(m)}_{u \rightarrow i}(k) = \begin{cases} p_{0_{\text{LOS}}} - 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{u} - \boldsymbol{x}_{i} \| + w_{u \rightarrow i}(k), & \text{if} \ m_{u \rightarrow i}=1; \\ p_{0_{\text{NLOS}}} - 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{u} - \boldsymbol{x}_{i} \| + v_{u \rightarrow i}(k), & \text{if} \ m_{u \rightarrow i}=0, \end{cases} $$
(2)

where:

  • i and u, with u∈Γu(i), are the indexes of the unknown nodes;

  • a∈Γa(i) is an index for anchors;

  • k=1,…,K is the discrete time index, with K samples for each link;

  • \(p_{0_{\text {LOS/NLOS}}}\) is the reference power (in dBm) for the LoS or NLoS case;

  • αLOS/NLOS is the path loss exponent for the LoS or NLoS case;

  • xa is the known position of anchor a;

  • xu is the unknown position of agent u (similarly for xi);

  • Γa(i), Γu(i) are the anchor- and agent-neighborhoods of node i, respectively;

  • The noise terms wai(k),vai(k),wui(k), and vui(k) are modeled as serially independent and identically distributed (i.i.d.), zero-mean, Gaussian random variables, independent from each other (see below), with variances:

    \(\text {Var} [\! w_{a \rightarrow i}(k) ] = \text {Var} [\! w_{u \rightarrow i}(k) ] = \sigma ^{2}_{\text {LOS}}\),

    \(\text {Var}[\!v_{a \rightarrow i}(k)] = \text {Var}[\!v_{u \rightarrow i}(k)] = \sigma ^{2}_{\text {NLOS}}\),

    and \(\sigma ^{2}_{\text {NLOS}} > \sigma ^{2}_{\text {LOS}} > 0\).

More precisely, letting δi,j be Kronecker’s delta, the independence assumption is formalized by the following equations

$$ \begin{aligned} &\mathbb{E}\left[w_{j_{1} \rightarrow i_{1}}(k_{1}) w_{j_{2} \rightarrow i_{2}}(k_{2}) \right] = \sigma^{2}_{\text{LOS}} \delta_{k_{1}, k_{2}} \delta_{i_{1}, i_{2}} \delta_{j_{1}, j_{2}} \\ &\mathbb{E}\left[v_{j_{1} \rightarrow i_{1}}(k_{1}) v_{j_{2} \rightarrow i_{2}}(k_{2}) \right] \ = \sigma^{2}_{\text{NLOS}} \delta_{k_{1}, k_{2}} \delta_{i_{1}, i_{2}} \delta_{j_{1}, j_{2}} \\ &\mathbb{E}\left[w_{j_{1} \rightarrow i_{1}}(k_{1}) v_{j_{2} \rightarrow i_{2}}(k_{2}) \right] = 0 \end{aligned} $$
(3)

for any k1,k2,i1,i2,j1∈Γ(i1),j2∈Γ(i2). The previous equations imply that two different links are always independent, regardless of the considered time instant. In this paper, we call this property link independence. If only one link is considered, i.e., j2=j1 and i2=i1, then independence is preserved by choosing different time instants, implying that the sequence \(\left \{ w_{j \rightarrow i} \right \}_{k} \triangleq \left \{ w_{j \rightarrow i}(1), w_{j \rightarrow i}(2), \dots \right \}\) is white. The same reasoning applies to the (similarly defined) sequence {vj→i}k. As a matter of notation, we denote the unknown positions (indexing the agents before the anchors) by \(\boldsymbol {x} \triangleq \left [\boldsymbol {x}^{\top }_{1} \ \cdots \ \boldsymbol {x}^{\top }_{N_{u}}\right ]^{\top } \in \mathbb {R}^{2 N_{u} \times 1}\) and we define η as the collection of all channel parameters, i.e., \(\boldsymbol {\eta } \triangleq \left [\boldsymbol {\eta }^{\top }_{\text {LOS}} \ \boldsymbol {\eta }^{\top }_{\text {NLOS}}\right ]^{\top }\), with \(\boldsymbol {\eta }_{\text {LOS}} \triangleq \left [p_{0_{\text {LOS}}} \ \alpha _{\text {LOS}} \ \sigma ^{2}_{\text {LOS}}\right ]^{\top } \in \mathbb {R}^{3 \times 1}\), \(\boldsymbol {\eta }_{\text {NLOS}} \triangleq \left [p_{0_{\text {NLOS}}} \ \alpha _{\text {NLOS}} \ \sigma ^{2}_{\text {NLOS}}\right ]^{\top } \in \mathbb {R}^{3 \times 1}\).

It is important to stress that, in a more realistic scenario, channel parameters may vary from link to link and also across time. However, such a generalization would produce an under-determined system of equations, thus giving up uniqueness of the solution and, more generally, analytical tractability of the problem. For the purposes of this paper, the observation model above is sufficiently general to solve the localization task while retaining analytical tractability.
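For illustration, the following Python sketch draws K samples on a single link according to the mode-dependent model (1)-(2). The noise variances match the simulation settings of Section 4 (σLOS=6, σNLOS=12); all other numeric values are our own assumptions.

```python
# Sketch: drawing K RSS samples on one link per the mode-dependent path loss model.
import numpy as np

rng = np.random.default_rng(0)

def rss_samples(x_j, x_i, los, K,
                p0_los=-30.0, a_los=2.5, s2_los=36.0,       # eta_LOS (assumed values)
                p0_nlos=-35.0, a_nlos=4.0, s2_nlos=144.0):  # eta_NLOS (assumed values)
    d = np.linalg.norm(np.asarray(x_j) - np.asarray(x_i))
    if los:    # m_{j->i} = 1
        mean, var = p0_los - 10.0 * a_los * np.log10(d), s2_los
    else:      # m_{j->i} = 0
        mean, var = p0_nlos - 10.0 * a_nlos * np.log10(d), s2_nlos
    return mean + np.sqrt(var) * rng.standard_normal(K)     # i.i.d. Gaussian noise

r = rss_samples([0.0, 0.0], [40.0, 30.0], los=True, K=40)
r_bar = r.mean()   # time-averaged measure, cf. eq. (4) below
```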

2.3 Time-averaged RSS measures

Motivated by a result given in Appendix A, we consider the time-averaged RSS measures, defined as

$$ \bar{r}_{j \rightarrow i} \triangleq \frac{1}{K} \sum\limits_{k=1}^{K} r^{(m)}_{j \rightarrow i}(k), \quad j \in \Gamma(i) $$
(4)

as our new observables. While it would have been preferable, from a theoretical standpoint, to work with the original data, several considerations lead to the preference for time-averaged data, most notably: (1) comparison with other algorithms present in the literature, whose data models assume only one sample per link, i.e., K=1, which is simply a special case in this paper; (2) reduced computational complexity in the subsequent algorithms; (3) if the RSS measures on a given link need to be communicated between two nodes, the communication cost is notably reduced, since only one scalar, instead of K samples, needs to be communicated; (4) formal simplicity of the subsequent equations.

Moreover, from Appendix A it follows that, assuming known L, the ML estimators of the unknown positions based upon the original data and upon the time-averaged data are identical. To see this, it suffices to choose \(\boldsymbol {\theta } = (\boldsymbol {x}, p_{0_{\text {LOS}}}, p_{0_{\text {NLOS}}}, \alpha _{\text {LOS}}, \alpha _{\text { NLOS}})\) and

$$ s_{j \rightarrow i}(\boldsymbol{\theta}) = \begin{cases} p_{0_{\text{LOS}}} - 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{j} - \boldsymbol{x}_{i} \|, & \text{if} \ m_{j \rightarrow i}=1; \\ p_{0_{\text{NLOS}}} - 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{j} - \boldsymbol{x}_{i} \|, & \text{if} \ m_{j \rightarrow i}=0 \end{cases} $$
(5)

for j∈Γ(i) and splitting the additive noise term as required. For a fixed link, only one of the two cases (LoS or NLoS) holds; thus, applying (34) of Appendix A yields

$$ \begin{aligned} \arg {\underset{\boldsymbol{\theta}}{\max}}\ p\left(r^{(m)}_{j \rightarrow i}(1),\dots, r^{(m)}_{j \rightarrow i}(K) ; \boldsymbol{\theta}, \sigma_{j \rightarrow i}^{2}\right) \\ = \arg {\underset{\boldsymbol{\theta}}{\max}}\ p\left(\bar{r}_{j \rightarrow i}; \boldsymbol{\theta}, \sigma^{2}_{j \rightarrow i}\right) \end{aligned} $$
(6)

where \(\sigma _{j \rightarrow i}^{2}\) is either \(\sigma ^{2}_{\text {LOS}}\) or \(\sigma ^{2}_{\text {NLOS}}\) and the general result follows from link independence.

We define \(\mathcal {R}_{i}\) as the set of all RSS measures that node i receives from anchor neighbors, i.e., \(\mathcal {R}_{i} \triangleq \left \{ r^{(m)}_{a \rightarrow i}(1), \dots, r^{(m)}_{a \rightarrow i}(K) : a \in \Gamma _{a}(i) \right \}\), \(\mathcal {Z}_{i}\) as the set of all RSS measures that node i receives from agent neighbors, i.e., \(\mathcal {Z}_{i} \triangleq \left \{ r^{(m)}_{j \rightarrow i}(1), \dots, r^{(m)}_{j \rightarrow i}(K) : j \in \Gamma _{u}(i) \right \}\), and \(\mathcal {Y}_{i}\) as the set of all RSS measures locally available to node i, i.e., \(\mathcal {Y}_{i} \triangleq \mathcal {R}_{i} \cup \mathcal {Z}_{i}\). Analogously, for time-averaged measures, we define \(\bar {\mathcal {R}}_{i} \triangleq \left \{\bar {r}_{a \rightarrow i} : a \in \Gamma _{a}(i)\right \}\), \(\bar {\mathcal {Z}}_{i} \triangleq \left \{\bar {r}_{j \rightarrow i} : j \in \Gamma _{u}(i)\right \}\), and \(\bar {\mathcal {Y}}_{i} = \bar {\mathcal {R}}_{i} \cup \bar {\mathcal {Z}}_{i}\). Finally, we define

$$ \Upsilon \triangleq \bigcup\limits_{i=1}^{N_{u}} \mathcal{Y}_{i} $$
(7)

which represents the information available to the whole network.

2.4 Single-agent robust maximum likelihood (ML)

We first consider the single-agent case, which we will later use as a building block in the multi-agent case. The key idea is that instead of separately treating the LoS and NLoS cases, e.g., by hypothesis testing, we resort to a two-component Gaussian mixture model for the time-averaged RSS measures. More precisely, we assume that the probability density function (pdf), p(·), of the time-averaged RSS measures, for anchor-agent links, is given by

$$ {}{\begin{aligned} & p(\bar{r}_{a \rightarrow i}) \\ &\quad = \frac{\lambda_{i}}{\sqrt{2 \pi \frac{\sigma^{2}_{\text{LOS}}}{K}}} e^{- \frac{K}{2 \sigma^{2}_{\text{LOS}}} \left(\bar{r}_{a \rightarrow i} - p_{0_{\text{LOS}}} + 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{a} - \boldsymbol{x}_{i} \|\right)^{2}} \\ &\quad + \frac{1-\lambda_{i}}{\sqrt{2 \pi \frac{\sigma^{2}_{\text{NLOS}}}{K}}} e^{- \frac{K}{2 \sigma^{2}_{\text{NLOS}}} \left(\bar{r}_{a \rightarrow i} - p_{0_{\text{NLOS}}} + 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{a} - \boldsymbol{x}_{i} \|\right)^{2}} \\ \end{aligned}} $$
(8)

and, for agent-agent links,

$$ {}{\begin{aligned} &p(\bar{r}_{u \rightarrow i}) \\ &\quad = \frac{\zeta_{i}}{\sqrt{2 \pi \frac{\sigma^{2}_{\text{LOS}}}{K}}} e^{- \frac{K}{2 \sigma^{2}_{\text{LOS}}} \left(\bar{r}_{u \rightarrow i} - p_{0_{\text{LOS}}} + 10 \alpha_{\text{LOS}} \log_{10} \| \boldsymbol{x}_{u} - \boldsymbol{x}_{i} \|\right)^{2}} \\ &\quad + \frac{1- \zeta_{i}}{\sqrt{2 \pi \frac{\sigma^{2}_{\text{NLOS}}}{K}}} e^{- \frac{K}{2 \sigma^{2}_{\text{NLOS}}} \left(\bar{r}_{u \rightarrow i} - p_{0_{\text{NLOS}}} + 10 \alpha_{\text{NLOS}} \log_{10} \| \boldsymbol{x}_{u} - \boldsymbol{x}_{i} \|\right)^{2} }\\ \end{aligned}} $$
(9)

where:

  • λi∈(0,1) is the mixing coefficient for anchor-agent links of node i;

  • ζi∈(0,1) is the mixing coefficient for agent-agent links of node i.

Intuitively, λi can be interpreted as the fraction of anchor-agent links of node i that are in LoS, and ζi as the fraction of agent-agent links of node i that are in LoS. As in [21], the Markov chain induced by our model is regular and time-homogeneous. From this, it follows that the Markov chain converges to a two-component Gaussian mixture, giving a theoretical justification to the proposed approach.
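As a sketch, the mixture density (8) for anchor-agent links can be evaluated as follows (the density (9) for agent-agent links is analogous, with ζi in place of λi); the function signature and parameter packing are our own conventions.

```python
# Sketch: two-component Gaussian mixture density (8) of a time-averaged measure.
import numpy as np

def mixture_pdf(r_bar, d, lam, K, p0_los, a_los, s2_los, p0_nlos, a_nlos, s2_nlos):
    # Each component is Gaussian with mean from the path loss model and variance var/K,
    # since r_bar averages K i.i.d. samples.
    def gauss(mu, var):
        return np.exp(-K * (r_bar - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var / K)
    mu_los = p0_los - 10.0 * a_los * np.log10(d)
    mu_nlos = p0_nlos - 10.0 * a_nlos * np.log10(d)
    return lam * gauss(mu_los, s2_los) + (1.0 - lam) * gauss(mu_nlos, s2_nlos)
```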

Assume that there is a single agent, say node i, with a minimum of three anchors in its neighborhood (|Γa(i)|≥3), in a mixed LoS/NLoS scenario. Our goal is to obtain the maximum likelihood estimator (MLE) of the position of node i. Let \(\boldsymbol {\bar {r}}_{i} = \left [\bar {r}_{1 \rightarrow i} \ \cdots \ \bar {r}_{| \Gamma _{a}(i)| \rightarrow i}\right ]^{\top } \in \mathbb {R}^{| \Gamma _{a}(i)| \times 1}\) be the collection of all the time-averaged RSS measures available to node i. Using the previous assumptions and the independence between links, the joint likelihood function \(p(\boldsymbol {\bar {r}}_{i} ; \boldsymbol {\theta })\) is given by

$$ p(\boldsymbol{\bar{r}}_{i} ; \boldsymbol{\theta}) = \prod_{a \in \Gamma_{a}(i)} p(\bar{r}_{a \rightarrow i}; \boldsymbol{\theta}) $$
(10)

where θ=(xi,λi,η). Thus, denoting by \(L(\boldsymbol {\theta }; \boldsymbol {\bar {r}}_{i})\) the log-likelihood, we have

$$ L(\boldsymbol{\theta}; \boldsymbol{\bar{r}}_{i}) = \sum_{a \in \Gamma_{a}(i)} \ln p(\bar{r}_{a \rightarrow i}) $$
(11)

The MLE of θ is given by

$$ \hat{\boldsymbol{\theta}}_{ML} = \arg {\underset{\boldsymbol{\theta}}{\max}}\ L(\boldsymbol{\theta} ; \boldsymbol{\bar{r}}_{i}) $$
(12)

where the maximization is subject to several constraints: λi∈(0,1), αLOS > 0, αNLOS>0, \(\sigma ^{2}_{\text {LOS}} > 0\), and \(\sigma ^{2}_{\text {NLOS}} > 0\). In general, the previous maximization admits no closed-form solution, so we must resort to numerical procedures.
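A minimal numerical sketch of (12) follows, reusing the mixture_pdf function sketched above and synthetic all-LoS toy data; the anchor layout, true position, initial guess, and noise values are illustrative assumptions, and the constraints are enforced as box bounds.

```python
# Sketch: single-agent robust MLE (12) via numerical optimization.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
anchors = np.array([[10., 10.], [90., 10.], [50., 90.], [10., 70.]])   # assumed layout
x_true, K = np.array([40., 30.]), 40
d_true = np.linalg.norm(anchors - x_true, axis=1)
r_bars = -30. - 25. * np.log10(d_true) + 6. / np.sqrt(K) * rng.standard_normal(4)

def neg_log_lik(theta, anchors, r_bars, K):
    # theta = [x, y, lam, p0_los, a_los, s2_los, p0_nlos, a_nlos, s2_nlos]
    x_i, lam, eta = theta[:2], theta[2], theta[3:]
    d = np.linalg.norm(anchors - x_i, axis=1)
    p = mixture_pdf(r_bars, d, lam, K, *eta)   # mixture density (8), sketched above
    return -np.sum(np.log(p + 1e-300))         # guard against log(0)

# Constraints of (12) as box bounds: lam in (0,1); path loss exponents and variances > 0.
bounds = ([(None, None)] * 2 + [(1e-3, 1 - 1e-3)]
          + [(None, None), (1e-3, None), (1e-3, None)] * 2)
theta0 = np.array([50., 50., 0.5, -30., 2.5, 30., -35., 4., 100.])    # assumed init
res = minimize(neg_log_lik, theta0, args=(anchors, r_bars, K),
               method="L-BFGS-B", bounds=bounds)
x_hat = res.x[:2]   # estimated position of node i
```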

2.5 Multi-agent robust ML-based scheme

In principle, our goal would be to obtain an ML estimate of all the Nu unknown positions, denoted by x. Let \(\boldsymbol {\lambda } \triangleq \left [\lambda _{1} \ \cdots \ \lambda _{N_{u}}\right ]^{\top }\), \(\boldsymbol {\zeta } \triangleq \left [\zeta _{1} \ \cdots \ \zeta _{N_{u}}\right ]^{\top }\) be the collections of the mixing coefficients. Defining θ=(x,λ,ζ,η), the joint ML estimator

$$ \hat{\boldsymbol{\theta}}_{ML} = \arg {\underset{\boldsymbol{\theta}}{\max}}\ p(\Upsilon; \boldsymbol{\theta}) $$
(13)

is, in general, computationally infeasible and inherently centralized. In order to obtain a practical algorithm, we now resort to a sub-optimal but computationally feasible and distributed approach. The intuition is as follows. Assume, for a moment, that a specific node i knows η, λ, ζ, and all the true positions of its neighbors (which we denote by \(\mathcal {X}_{i}\)). Then, the joint ML estimation problem reduces considerably; in fact,

$$ \hat{\boldsymbol{x}}_{i_{ML}} = \arg {\underset{\boldsymbol{x}_{i}}{\max}}\ p(\Upsilon; \boldsymbol{x}_{i}) $$
(14)

We now make the sub-optimal approximation of avoiding non-local information in order to obtain a distributed algorithm, thus resorting to

$$ \hat{\boldsymbol{x}}_{i} = \arg {\underset{\boldsymbol{x}_{i}}{\max}}\ p\left(\mathcal{\bar{Y}}_{i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \lambda_{i}, \zeta_{i}, \boldsymbol{\eta}\right) $$
(15)

where we made explicit the functional dependence on all the other parameters (which, for now, are assumed known). Due to the i.i.d. hypothesis, the “local” likelihood function has the form

$$\begin{array}{*{20}l} p\left(\mathcal{\bar{Y}}_{i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \lambda_{i}, \zeta_{i}, \boldsymbol{\eta}\right) \!\!=&\!\! \prod_{a \in \Gamma_{a}(i)} \!\!p(\bar{r}_{a \rightarrow i} ; \boldsymbol{x}_{i}, \lambda_{i}, \boldsymbol{\eta})\\ &\times \!\!\!\! \prod_{j \in \Gamma_{u}(i)} p(\bar{r}_{j \rightarrow i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \zeta_{i}, \boldsymbol{\eta}) \end{array} $$
(16)

where the marginal likelihoods are Gaussian-mixtures and we underline the (formal and conceptual) separation between anchor-agent links and agent-agent links. By taking the natural logarithm, we have

$$\begin{array}{*{20}l} \ln p\left(\mathcal{\bar{Y}}_{i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \lambda_{i}, \zeta_{i}, \boldsymbol{\eta}\right) =& \sum_{a \in \Gamma_{a}(i)} \ln p(\bar{r}_{a \rightarrow i} ; \boldsymbol{x}_{i}, \lambda_{i}, \boldsymbol{\eta}) \\ &+ \sum_{j \in \Gamma_{u}(i)} \ln p (\bar{r}_{j \rightarrow i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \zeta_{i}, \boldsymbol{\eta}) \end{array} $$
(17)

The maximization problem in (15) then reads

$$ {}{\begin{aligned} \hat{\boldsymbol{x}}_{i} = \arg {\underset{\boldsymbol{x}_{i}}{\max}} & \left\{ \sum_{a \in \Gamma_{a}(i)} \ln p(\bar{r}_{a \rightarrow i} ; \boldsymbol{x}_{i}, \lambda_{i}, \boldsymbol{\eta}) \right.\\ &\quad \left. + \sum_{j \in \Gamma_{u}(i)} \ln p (\bar{r}_{j \rightarrow i} ; \boldsymbol{x}_{i}, \mathcal{X}_{i}, \zeta_{i}, \boldsymbol{\eta}) \right\}. \end{aligned}} $$
(18)

We can now relax the initial assumptions: instead of assuming known neighbor positions \(\mathcal {X}_{i}\), we substitute their estimates, \(\hat {\mathcal {X}}_{i}\). Moreover, since the robust ML-based self-calibration can be performed without knowing the channel parameters η, we also maximize over them. Lastly, we maximize with respect to the mixing coefficients λi,ζi. Thus, our final approach is

$$ {\begin{aligned} \left(\hat{\boldsymbol{x}}_{i}, \hat{\lambda}_{i}, \hat{\zeta}_{i}, \hat{\boldsymbol{\eta}}\right) = \arg {\underset{\boldsymbol{x}_{i}, \lambda_{i}, \zeta_{i}, \boldsymbol{\eta}}{\max}} & \left\{ \sum_{a \in \Gamma_{a}(i)} \ln p(\bar{r}_{a \rightarrow i} ; \boldsymbol{x}_{i}, \lambda_{i}, \boldsymbol{\eta}) \right. \\ & \quad + \left. \sum_{j \in \hat{\Gamma}_{u}(i)}\! \ln p \left(\bar{r}_{j \rightarrow i} ; \boldsymbol{x}_{i}, \hat{\mathcal{X}}_{i}, \zeta_{i}, \boldsymbol{\eta}\right) \right\} \end{aligned}} $$
(19)

where \(\hat {\Gamma }_{u}(i)\) is the set of all agent neighbors of node i for which estimated positions exist. We can iteratively construct (and update) the set \(\hat {\Gamma }_{u}(i)\) so as to obtain a fully distributed algorithm, which we refer to as robust distributed maximum likelihood (RDML) and summarize in Algorithm 1.

A few remarks are now in order. First, this algorithm imposes some restrictions on the arbitrariness of the network topology, since the information spreads starting from the agents which were able to self-localize during initialization; in practice, this requires the network to be sufficiently connected. Second, convergence of the algorithm is actually a matter of compatibility: if the network is sufficiently connected (compatible), convergence is guaranteed. Given a directed graph, compatibility can be tested a priori, and necessary and sufficient conditions can be found (see Section 3). Third, unlike many algorithms present in the literature, symmetrical links are not necessary, nor do we resort to symmetrization (as in NBP): this algorithm naturally takes into account the (possibly) asymmetrical links of directed graphs.
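The information flow behind Algorithm 1 can be sketched as follows: agents that can see at least three already-localized references solve their local problem (19) and are then treated as references by their neighbors. In this minimal sketch, the local maximization of (19) is replaced by a placeholder (the centroid of the references), and the data structures are assumptions for illustration; in the actual algorithm, the local step is the numerical MLE sketched earlier.

```python
# Sketch: distributed information spreading of Algorithm 1 (our paraphrase).
import numpy as np

def rdml_spread(anchor_pos, agent_ids, gamma_a, gamma_u):
    # anchor_pos: dict anchor -> known position (np.array)
    # gamma_a, gamma_u: dicts mapping each agent to its anchor-/agent-neighbor sets
    est = {}                                       # agent -> estimated position
    changed = True
    while changed:                                 # sweep until no new agent localizes
        changed = False
        for i in agent_ids:
            if i in est:
                continue
            refs = [anchor_pos[a] for a in gamma_a[i]] + \
                   [est[j] for j in gamma_u[i] if j in est]
            if len(refs) >= 3:                     # enough references: localize node i
                est[i] = np.mean(refs, axis=0)     # placeholder for the local MLE (19)
                changed = True
    return est
```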

2.6 Distributed maximum likelihood (DML)

As a natural competitor of the proposed RDML algorithm, we derive here the distributed maximum likelihood (DML) algorithm, which assumes that all links are of the same type. As its name suggests, this is the non-robust version of the RDML derived above. As usual, we start with the single-agent case as a building block for the multi-agent case. Using the assumption that all links are of the same type and the i.i.d. hypothesis, the joint pdf of the time-averaged RSS measures received by agent i is given by

$$\begin{array}{*{20}l} p\!\left(\boldsymbol{\bar{r}}_{i} ; \boldsymbol{x}_{i}, p_{0}, \alpha, \sigma^{2}\right) &\,=\, \frac{1}{\left(2 \pi \sigma^{2}\right)^{|\Gamma_{a}(i)|/2}}\\ &\quad\!\!\times\! e^{-\frac{1}{2 \sigma^{2}} \sum_{a \in \Gamma_{a}(i)} (\bar{r}_{a \rightarrow i} - p_{0} + 10 \alpha \log_{10} \!\| \boldsymbol{x_{i}} - \boldsymbol{x}_{a} \|)^{2} } \end{array} $$
(20)

We can now proceed by estimating, with the ML criterion, first p0 as a function of the remaining parameters, then α as a function of xi, and finally xi. We have

$$ \begin{aligned} &\hat{p}_{0}(\alpha, \boldsymbol{x}_{i}) = \\ &\arg {\underset{p_{0}}{\min}}\ \sum_{a \in \Gamma_{a}(i)} \left(\bar{r}_{a \rightarrow i} - p_{0} + 10 \alpha \log_{10} \| \boldsymbol{x_{i}} - \boldsymbol{x}_{a} \| \right)^{2}. \\ \end{aligned} $$
(21)

Defining \(s_{a,i} \triangleq 10 \log _{10} \| \boldsymbol {x_{i}} - \boldsymbol {x}_{a} \|\) as the log-distance, \(\boldsymbol {s}_{i} \triangleq \left [s_{1,i} \ s_{2,i} \ \cdots \ s_{|\Gamma _{a}(i)|,i}\right ]^{\top } \in \mathbb {R}^{|\Gamma _{a}(i)| \times 1}\) the column-vector collecting them and \(\boldsymbol {1}_{n} =\ [\!1 \cdots 1]^{\top } \in \mathbb {R}^{n \times 1}\) an all-ones vector of dimension n, the previous equation can be written as

$$ \hat{p}_{0}(\alpha, \boldsymbol{x}_{i}) = \arg {\underset{p_{0}}{\min}} \| \boldsymbol{\bar{r}}_{i} + \alpha \boldsymbol{s}_{i} - p_{0} \boldsymbol{1}_{|\Gamma_{a}(i)|} \|^{2} $$
(22)

which is a least-squares (LS) problem and its solution is

$$ \hat{p}_{0}(\alpha, \boldsymbol{x}_{i}) = \frac{1}{| \Gamma_{a}(i) |} \sum_{a \in \Gamma_{a}(i)} (\bar{r}_{a \rightarrow i} + 10 \alpha \log_{10} \| \boldsymbol{x_{i}} - \boldsymbol{x}_{a} \|) $$
(23)

By using this expression, the problem of estimating α as a function of xi is

$$ \hat{\alpha}(\boldsymbol{x}_{i}) = \arg {\underset{\alpha}{\min}}\ \left\| \boldsymbol{P}^{{}^{\boldsymbol{\perp}}}_{\boldsymbol{1}_{|\Gamma_{a}(i)|}}(\boldsymbol{\bar{r}}_{i} + \alpha \boldsymbol{s}_{i}) \right\|^{2} $$
(24)

where, given a full-rank matrix \(\boldsymbol {A} \in \mathbb {R}^{m \times n}\) with m≥n, \(\boldsymbol {P}^{{}^{\perp }}_{\boldsymbol {A}}\) is the orthogonal projection matrix onto the orthogonal complement of the space spanned by the columns of A. It can be computed via \(\boldsymbol {P}^{{}^{\perp }}_{\boldsymbol {A}} = \boldsymbol {I}_{m} - \boldsymbol {P_{A}}\), where PA=A(A⊤A)−1A⊤ is an orthogonal projection matrix and Im is the identity matrix of order m. The solution to problem (24) is given by

$$ \hat{\alpha}(\boldsymbol{x}_{i}) = - \left(\boldsymbol{\tilde{s}}_{i}^{\top} \boldsymbol{\tilde{s}}_{i} \right)^{-1} \boldsymbol{\tilde{s}}_{i}^{\top} \boldsymbol{\tilde{r}}_{i} $$
(25)

where \(\boldsymbol {\tilde {r}}_{i} = \boldsymbol {P}^{{}^{\boldsymbol {\perp }}}_{\boldsymbol {1}_{|\Gamma _{a}(i)|}} \boldsymbol {\bar {r}}_{i}\) and \(\boldsymbol {\tilde {s}}_{i} = \boldsymbol {P}^{\boldsymbol {\perp }}_{\boldsymbol {1}_{|\Gamma _{a}(i)|}} \boldsymbol {s}_{i}\). By using the previous expression, we can finally write

$$ \hat{\boldsymbol{x}_{i}} = \arg {\underset{\boldsymbol{x}_{i}}{\min}}\ \left\| \boldsymbol{P}^{{}^{\boldsymbol{\perp}}}_{\boldsymbol{\tilde{s}_{i}}} \boldsymbol{\tilde{r}}_{i} \right\|^{2} $$
(26)

which, in general, does not admit a closed-form solution, but can be solved numerically. After obtaining \(\hat {\boldsymbol {x}}_{i}\), node i can estimate p0 and α using (23) and (25).
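A compact sketch of the DML single-agent chain (23)-(26): for each candidate xi, p0 and α are eliminated in closed form and the residual cost is evaluated; the position is then found by a 2D grid search, as done in Section 4. The inputs anchors and r_bar (anchor positions and time-averaged measures) are assumed given, e.g., as in the earlier sketches.

```python
# Sketch: DML single-agent estimation, eqs. (23)-(26).
import numpy as np

def dml_cost(x_i, anchors, r_bar):
    s = 10.0 * np.log10(np.linalg.norm(anchors - x_i, axis=1))  # log-distances s_i
    n = len(s)
    P = np.eye(n) - np.ones((n, n)) / n       # projector orthogonal to the all-ones vector
    r_t, s_t = P @ r_bar, P @ s               # centered data, cf. (24)
    alpha = -(s_t @ r_t) / (s_t @ s_t)        # eq. (25)
    resid = r_t + alpha * s_t                 # equals P_perp_{s_t} r_t
    return resid @ resid, alpha               # cost of (26) and alpha-hat(x_i)

xs = np.linspace(0.5, 99.5, 100)              # grid avoiding exact anchor positions
cost, x_hat, y_hat = min((dml_cost(np.array([x, y]), anchors, r_bar)[0], x, y)
                         for x in xs for y in xs)
_, alpha_hat = dml_cost(np.array([x_hat, y_hat]), anchors, r_bar)
s_hat = 10.0 * np.log10(np.linalg.norm(anchors - np.array([x_hat, y_hat]), axis=1))
p0_hat = np.mean(r_bar + alpha_hat * s_hat)   # eq. (23)
```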

The multi-agent case follows reasoning almost identical to that of the RDML. Approximating the true (centralized) MLE by avoiding non-local information, and assuming that an initial estimate of p0 and α is already available, it is possible to arrive at

$$ {\begin{aligned} \hat{\boldsymbol{x}}_{i} = \arg {\underset{\boldsymbol{x}_{i}}{\min}} & \left\{ \sum_{a \in \Gamma_{a}(i)}\left(\bar{r}_{a \rightarrow i} - p_{0} + 10 \alpha \log_{10} \| \boldsymbol{x_{i}} - \boldsymbol{x}_{a} \| \right)^{2} \right.\\ & \quad \left. + \sum_{j \in \hat{\Gamma}_{u}(i)} \left(\bar{r}_{j \rightarrow i} - p_{0} + 10 \alpha \log_{10} \| \boldsymbol{x_{i}} - \boldsymbol{x}_{j} \| \right)^{2} \right\} \end{aligned}} $$
(27)

where (again) an initialization phase is required and the set of estimated agent-neighbors \(\hat {\Gamma }_{u}(i)\) is iteratively updated. The key difference with respect to RDML is that, due to the assumption that all links are of the same type, the estimates of p0 and α are broadcast and a common consensus is reached by averaging. This increases the communication overhead but lowers the computational complexity, thus operating a trade-off. The DML algorithm is summarized in Algorithm 2.

Remarks similar to those for the RDML can be made for the DML. Again, the network topology cannot be completely arbitrary, as the information must spread throughout the network starting from the agents that self-localized, implying that the graph must be sufficiently connected. The necessary and sufficient conditions answering the compatibility question are the same as for RDML. Second, the (strong) hypothesis behind the DML derivation (i.e., all links of the same type) allows for a more analytical derivation, up to position estimation, which is a nonlinear least-squares problem. However, this hypothesis is also its weakness since, as will be shown later, it is not a good choice for mixed LoS/NLoS scenarios.

2.7 Centralized MLE with known nuisance parameters (C-MLE)

The centralized MLE of x with known nuisance parameters, i.e., assuming known L and η, is chosen here as a benchmark for both RDML and DML. In the following, this algorithm will be denoted by C-MLE. Its derivation is simple (see Appendix B) and results in

$$ {\begin{aligned} & \hat{\boldsymbol{x}}_{ML} \\ &= \arg {\underset{\boldsymbol{x}}{\min}}\ \sum\limits_{i=1}^{N_{u}} \sum\limits_{j \in \Gamma(i)} \frac{1}{\sigma^{2}_{j \rightarrow i}} \left(\bar{r}_{j \rightarrow i} - p_{0_{j\rightarrow i}} \,+\, 10 \alpha_{j \rightarrow i} \log_{10} \| \boldsymbol{x}_{j} - \boldsymbol{x}_{i} \| \right)^{2} \end{aligned}} $$
(28)

where \(\left (p_{0_{j \rightarrow i}}, \alpha _{j \rightarrow i}, \sigma ^{2}_{j \rightarrow i}\right)\) are either the LoS or the NLoS parameters, depending on the considered link. It is important to observe that, if all links are of the same type, the dependence on \(\sigma ^{2}_{j \rightarrow i}\) in (28) disappears. From standard ML theory [45], C-MLE is asymptotically (K→+∞) optimal. The optimization problem (28) is computationally challenging, as it requires a minimization in a 2Nu-dimensional space, but it remains feasible for small values of Nu.
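A sketch of the cost in (28), with known L and η folded into per-link parameters, follows; the link-list format and helper names are our own assumptions.

```python
# Sketch: evaluating the centralized C-MLE cost (28) with known nuisance parameters.
import numpy as np

def cmle_cost(x, links, anchor_pos):
    # x stacks the Nu agent positions: x = [x_1, y_1, ..., x_Nu, y_Nu].
    # links: list of (i, node_j, r_bar, p0, alpha, sigma2) for each observed link j -> i,
    # where node_j is ('a', idx) for an anchor or ('u', idx) for an agent, and
    # (p0, alpha, sigma2) are the LoS or NLoS parameters of that link (known L and eta).
    def pos(node):
        kind, idx = node
        return anchor_pos[idx] if kind == 'a' else x[2 * idx:2 * idx + 2]
    cost = 0.0
    for i, node_j, r_bar, p0, alpha, s2 in links:
        d = np.linalg.norm(pos(node_j) - x[2 * i:2 * i + 2])
        cost += (r_bar - p0 + 10.0 * alpha * np.log10(d)) ** 2 / s2
    return cost   # minimize over the 2*Nu-dimensional vector x with a numerical solver
```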

3 Convergence analysis

The convergence test of our proposed algorithm (and also of DML) can be restated as a graph coloring problem: if the whole graph can be colored, then it is compatible and convergence is guaranteed. As is common in the literature on graph theory, let G=(V,E) be a directed graph, with V denoting the set of nodes and E the set of directed edges. The set of nodes is such that V=Va∪Vu, where Va is the (non-empty) set of anchor nodes and Vu is the (non-empty) set of agent nodes.

Definition 1

(RDML-initializable) A directed graph G is said to be RDML-initializable if and only if there exists at least one agent node, say x, such that |Γa(x)|≥3.

As can be easily checked, RDML-initializability is a necessary condition: if a graph is not RDML-initializable, then it is incompatible with RDML. To give a necessary and sufficient condition, we introduce the notion of “color.” A node can be either black or white; all anchors are black and all agents start white, but may become black if a certain condition is satisfied. The RDML can then be rewritten as a graph coloring problem. To do so, we define the set \(\hat {\Gamma }_{u}(i) \triangleq \{ j \in \Gamma _{u}(i) : \text {agent} \ j \ \text {is black} \}\), i.e., the subset of agent neighbors of node i which contains only black agent nodes. In general, \(\hat {\Gamma }_{u}(i) \subseteq \Gamma _{u}(i)\). We also define the set \(B_{u} \triangleq \{ i \in V_{u}: \text {agent } {i} \text { is black} \}\), i.e., the set of black agents. In general, Bu⊆Vu. Given a graph G, we can perform a preliminary test by running the following RDML-coloring algorithm (a better test will be derived later):

  • Initialization (k=0):

    1. All anchors are colored black and all agents white;

    2. Every agent i with |Γa(i)|≥3 is colored black.

  • Iterative coloring (start with k=1):

    1. Every agent i with \(| \Gamma _{a}(i)| + |\hat {\Gamma }^{(k-1)}_{u}(i)| \geq 3\), where \(\hat {\Gamma }^{(k)}_{u}(i)\) is the set \(\hat {\Gamma }_{u}(i)\) at step k, is colored black;

    2. Every agent j updates its own \(\hat {\Gamma }^{(k)}_{u}(j)\) with the newly colored nodes;

    3. The set \(B^{(k)}_{u}\) is updated, where \(B^{(k)}_{u}\) is the set Bu at step k;

    4. Set k←k+1 and repeat the previous steps until Vu contains only black nodes.

Suppose that the previous algorithm colors the entire graph black in a finite number of steps, say n. Then, n is called the RDML-lifetime.

Definition 2

(RDML-lifetime) A directed graph G is said to have RDML-lifetime equal to n if and only if the RDML-coloring algorithm colors black the set Vu in exactly n steps. If no such integer exists, by convention, n=+.

This allows us to formally define compatibility:

Definition 3

(RDML-compatibility) A directed graph G is said to be RDML-compatible if and only if:

  1. 1

    G is RDML-initializable;

  2. 2

    the RDML-lifetime of G is finite.

Otherwise, G is said to be RDML-incompatible.

In practice, there are only two ways in which a graph can be RDML-incompatible: either G cannot be initialized, or the RDML-lifetime of G is infinite. Testing the first condition is trivial; the interesting result is that testing the second condition is made simple by the following theorem.

Theorem 1

An RDML-initializable graph G is RDML-incompatible if and only if there exists an integer h such that

$$ B^{(h)}_{u} = B^{(h-1)}_{u} \; \wedge \; \left| B^{(h)}_{u} \right| < | V_{u} | $$
(29)

that is, if there is a step h in which no more agents can be colored black and at least one agent is still left white.

Proof

(⇐) First, observe that, by construction, \(B^{(k-1)}_{u} \subseteq B^{(k)}_{u}\), as black nodes can only be added. Let \(C_{G} : \mathbb {N} \rightarrow \mathbb {N}\) be the following function

$$ C_{G}(k) = \left| B^{(k)}_{u} \right|. $$
(30)

Since \(B^{(k-1)}_{u} \subseteq B^{(k)}_{u}\), CG is non-decreasing. An RDML-initializable graph is RDML-compatible if and only if it has finite RDML-lifetime, i.e., there must exist n such that CG(n)=|Vu|. But condition (29) implies that there exists h such that CG(h)=CG(h−1)<|Vu|. At step h+1 and all successive steps, CG cannot increase since the set \(B^{(k)}_{u}\) cannot change. To show this, the key observation is that, as the graph G at step h−1 did not satisfy the conditions for \(B^{(h-1)}_{u}\) to grow (by hypothesis), the set was equal to itself at step h, i.e., \(B^{(h-1)}_{u} = B^{(h)}_{u}\). But since no color change happened in Vu at step h, the graph G still does not satisfy the conditions for \(B^{(k)}_{u}\) to grow for kh. Thus, CG(k) becomes a constant function for kh and can never reach the value |Vu|.

Since \(B^{(k-1)}_{u} \subseteq B^{(k)}_{u}\), CG is non-decreasing. An RDML-initializable graph is RDML-compatible if and only if it has finite RDML-lifetime, i.e., there must exist n such that CG(n)=|Vu|. But condition (29) implies that there exists h such that CG(h)=CG(h−1)<|Vu|. At step h+1 and all successive steps, CG cannot increase, since the set \(B^{(k)}_{u}\) can no longer change. To see this, the key observation is that, since the graph G at step h−1 did not satisfy the conditions for \(B^{(h-1)}_{u}\) to grow (by hypothesis), the set remained the same at step h, i.e., \(B^{(h-1)}_{u} = B^{(h)}_{u}\). But since no color change happened in Vu at step h, the conditions for \(B^{(k)}_{u}\) to grow are still not satisfied for k≥h. Thus, CG(k) is constant for k≥h and can never reach the value |Vu|.

(⇒) Since G is RDML-incompatible by hypothesis, at least one agent must remain white, so \(\left |B_{u}^{(h)}\right | < |V_{u}|\) holds for any h. Since CG(k) is non-decreasing, integer-valued, and upper-bounded by |Vu|, it must eventually become constant, which implies that, for some h, \(\left |B^{(h)}_{u}\right | = \left |B^{(h-1)}_{u}\right |\). Since \(B^{(k-1)}_{u} \subseteq B^{(k)}_{u}\) by construction, this implies \(B^{(h)}_{u} = B^{(h-1)}_{u}\) for some h. □

Definition 4

(RDML-depth) Let G be a directed graph. Then,

$$ h_{G} \triangleq \inf_{h} \left\{ h \in \mathbb{N} : B^{(h)}_{u} = B^{(h+1)}_{u} \right\} $$
(31)

is called the RDML-depth of G.

A complete graph has hG=0, as all agents are colored black during the initialization phase of the RDML-coloring algorithm.

Corollary 1

Let G be a directed graph. Then, hG≤n, where n is the RDML-lifetime of G.

Proof

If G is not RDML-initializable, hG=0, as \(B^{(0)}_{u} = \emptyset \). If G is RDML-initializable, there are two cases: either n is finite or not. In the latter case, n=+∞ and hG is finite by the previous theorem. If n is finite, hG=n, since \(\left |B^{(n)}_{u}\right | = |V_{u}|\) by definition of the RDML-lifetime. □

The previous corollary proves that hG is always finite, regardless of G. This allows us to write the graph compatibility test, shown in Algorithm 3. Thanks to the previous results, this algorithm always converges and can be used to test a priori if a graph is RDML-compatible or not.
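The compatibility test can be implemented directly from Definitions 1-3 and Theorem 1. A minimal sketch follows (the graph representation via neighbor-set dictionaries is our assumption):

```python
# Sketch: RDML-coloring compatibility test (returns compatibility and lifetime).
def rdml_compatible(agents, gamma_a, gamma_u):
    black = {i for i in agents if len(gamma_a[i]) >= 3}   # initialization (k = 0)
    k = 0
    while len(black) < len(agents):
        grown = {i for i in agents if i not in black
                 and len(gamma_a[i]) + len(gamma_u[i] & black) >= 3}
        if not grown:       # B_u^(h) = B_u^(h-1) and |B_u| < |V_u|: incompatible (Thm. 1)
            return False, float("inf")
        black |= grown
        k += 1
    return True, k          # RDML-lifetime

# Toy graph: anchors {0,1,2}; agent 3 sees all anchors, agent 4 sees two anchors + agent 3.
agents = [3, 4]
gamma_a = {3: {0, 1, 2}, 4: {0, 1}}
gamma_u = {3: set(), 4: {3}}
print(rdml_compatible(agents, gamma_a, gamma_u))   # (True, 1)
```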

Remark Algorithm 3 can be intuitively explained via a physical metaphor, where, in a metal grid (representing the graph), “heat” (information) spreads out starting from some initial “hot spots” (nodes that are colored black in the first iteration). This spreading continues, reaching more and more locations on the grid, until further spreading of “heat” no longer changes the “heat map.” If, at this point, there are cold spots (nodes that have not been colored black), the graph is RDML-incompatible. By contrast, if the heat spreads throughout the entire grid, the graph is RDML-compatible.

Figures 1 and 2 show two examples of RDML-initializable graphs. Figure 1 illustrates the case of a small (Na=4, Nu=4) but highly connected network. In contrast, Fig. 2 displays a larger (Na=15, Nu=20) but weakly connected network (graph depth = 7). In particular, only 90 out of 680 possible directed links are present, and each anchor has three to four links (out of 20). For both cases, graph compatibility can be shown using Algorithm 3.

Fig. 1 Example of a compatible graph with Na=4, Nu=4

Fig. 2 Example of a compatible graph with Na=15, Nu=20

4 Results and discussion

In this section, we use standard Monte Carlo (MC) simulations to evaluate the performance of the proposed algorithm and its competitors. As a performance metric, we show the empirical cumulative distribution function (ECDF) of the localization error, defined as \(e_{i} \triangleq \left \| \hat {\boldsymbol {x}}_{i} - \boldsymbol {x}_{i} \right \|\) for agent i, i.e., an estimate of the probability \(\mathcal {P}\left \{ \left \| \hat {\boldsymbol {x}}_{i} - \boldsymbol {x}_{i} \right \| \leq \gamma \right \}\). The ECDFs are obtained by stacking the localization errors of all agents in a single vector, in order to give a global picture of the algorithm performance. The simulated scenario is as follows. In a square coverage area of 100×100 m2, Na=11 stationary anchors are deployed, as depicted in Fig. 3. The channel parameters are generated as follows: \(p_{0_{\text {LOS}}} \sim \mathcal {U}[-30, 0]\), \(\alpha _{\text {LOS}} \sim \mathcal {U}[2,4]\), and σLOS=6, while, for the NLoS case, \(p_{0_{\text {NLOS}}} \sim \mathcal {N}(0, 25)\), \(\alpha _{\text {NLOS}} \sim \mathcal {U}[3,6]\), and σNLOS=12. Similar settings for the reference power and the path loss exponent can be found in [46] and [47], respectively. At each MC trial, Nu=10 agents are randomly (uniformly) placed in the coverage area. Unless stated otherwise, K=40 samples per link are used. Finally, each simulation consists of 100 MC trials.

Fig. 3 Anchor positions used in all simulations

The optimization problems (27) and (19) have been solved as follows. For DML, a 2D grid search has been used, while, for RDML and C-MLE, the optimization has been performed with the MATLAB solver fmincon.

4.1 Gaussian noise

Here, we validate our novel approach by considering fully connected networks in three different scenarios. In Fig. 4, all links are assumed to be LoS. As can be seen from the ECDFs, the robust approach performs similarly to the DML algorithm, which was designed under the assumption that all links are of the same type. In Fig. 5, all links are assumed to be NLoS, and again we see a similar result. The key difference appears in Fig. 6, where a mixed LoS/NLoS scenario is considered and the fraction of NLoS links is random (numerically generated as \(\mathcal {U}[0,1]\) and different for each MC trial). Here, the robust approach outperforms DML, validating our strategy. Moreover, its performance is remarkably close to that of the centralized algorithm C-MLE. As a result, the robust approach is the preferred option in all scenarios.

Fig. 4 All LoS links with white Gaussian noise. Fully connected networks, Na=11, Nu=10

Fig. 5 All NLoS links with white Gaussian noise. Fully connected networks, Na=11, Nu=10

Fig. 6 Mixed LoS/NLoS scenario with white Gaussian noise. Fully connected networks, randomized NLoS fraction, Na=11, Nu=10

4.2 Non-Gaussian noise for NLoS

In order to evaluate robustness, we consider a model mismatch on the noise distribution by choosing, for the NLoS links, a Student’s t distribution with ν=5 degrees of freedom (as usual, serially i.i.d. and independent from the LoS noise sequence). For brevity, we consider only the case of a fully connected network in a mixed LoS/NLoS scenario. As shown in Fig. 7, our proposed approach is able to cope with non-Gaussianity without significant performance loss, thanks to the Gaussian mixture model.

Fig. 7 Mixed LoS/NLoS scenario with Student’s t distributed noise with ν=5 under NLoS. Fully connected networks, randomized NLoS fraction, Na=11, Nu=10

4.3 Cooperative gain

The proposed algorithm exhibits the so-called “cooperative gain,” i.e., a performance advantage (according to some performance metric, typically localization accuracy) with respect to a non-cooperative approach, in which each node localizes itself only via self-localization (12). In our case, the cooperative gain is twofold: first, it improves localization accuracy; second, it enables localization of otherwise non-localizable agents. To show the first point, we consider a mixed LoS/NLoS environment (for simplicity, with white Gaussian noise); a network is generated at each Monte Carlo trial, assuming a communication radius R=70 m with an ideal probability-of-detection model [17]. Moreover, the network is generated so as to allow all agents to self-localize and is, as a consequence, RDML-compatible. In Fig. 8, the ECDFs of the localization error are shown. To show the second point, we consider the toy network depicted in Fig. 9. In this example, agent X is not able to self-localize, since no anchors are in its neighborhood, Γa(X)=∅. Thus, in a non-cooperative approach, the position of agent X cannot be uniquely determined, while, in a cooperative approach, it can actually be obtained by exchanging information, e.g., the estimated positions of its neighbors.

Fig. 8 Cooperative (RDML) and non-cooperative approaches in a mixed LoS/NLoS scenario, in a network with communication radius R=70 m, Na=11, Nu=10

Fig. 9 Network of a toy example, with Na=6, Nu=4

4.4 Variable K and NLoS fraction

We next analyze the RDML algorithm by varying the number of samples per link, K, and the fraction of NLoS links. In both cases, we consider (for brevity) fully connected networks with white Gaussian noise; for variable K, the NLoS fraction is randomized, while, for a variable NLoS fraction, K is fixed. In Fig. 10, we observe that the performance improves as K increases, as expected. In Fig. 11, where each point represents 100 MC trials and the median error is chosen as the performance metric, RDML shows good performance and is not significantly affected by the actual NLoS fraction, which is evidence of its robustness, while DML is clearly inferior and suffers from model mismatch.

Fig. 10 Error ECDFs of RDML for various values of K in fully connected networks with white Gaussian noise and randomized NLoS fraction, Na=11, Nu=10

Fig. 11 Median error of RDML and DML for a variable fraction of NLoS links. Fully connected networks, K=40, Na=11, Nu=10

4.5 Communication and computational costs

The communication overhead is evaluated by computing the number of elementary messages sent throughout the network, where an elementary message is simply defined as a single scalar. Sending a d-dimensional message, e.g., (x,y) coordinates, is equivalent to sending d elementary messages. In Fig. 12, the communication overhead of RDML and DML is plotted against the number of agents, assuming a complete graph: for Nu agents, 2Nu(Nu−1) scalars, representing estimated 2D positions, are needed for RDML, while 4Nu(Nu−1) are needed for DML, as estimates of (p0,α) must also be exchanged. Viewing the (2D) position as the fundamental information of a message (d=2), in a complete graph, RDML achieves the theoretical minimum cost for all nodes to have complete information, while DML uses auxiliary information. For a general graph, the communication overhead depends on the specific graph topology but is never greater than the aforementioned value. Thus, for a general graph, the communication cost is upper-bounded by a quadratic function of Nu.
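As a worked check of the counts above (with an assumed Nu=10 agents):

```python
# Complete-graph message counts: RDML vs. DML (Nu = 10 is an assumed example).
Nu = 10
print(2 * Nu * (Nu - 1), 4 * Nu * (Nu - 1))   # RDML: 180 scalars, DML: 360 scalars
```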

Fig. 12 Communication costs for a fully connected network

Regarding computational complexity, both RDML and DML scale linearly with Nu, and both benefit from parallelization, as the main optimization task can be executed independently by each involved node. As already mentioned, DML trades communication overhead for computational complexity; in fact, the DML optimization problem (27) is easier to solve than the RDML optimization problem (19). Both problems are non-convex and may have local minima/maxima, so care must be taken in the optimization procedure.

5 Conclusions

We have developed a novel, robust ML-based scheme for RSS-based distributed cooperative localization in mixed LoS/NLoS scenarios. While not optimal, the proposed scheme has good overall accuracy, adapts to environmental changes, and is robust to NLoS propagation, including non-Gaussian noise; its communication overhead is upper-bounded by a quadratic function of the number of agents, and its computational complexity scales linearly with the number of agents, also benefiting from parallelization. The main original contributions are that, unlike many algorithms present in the literature, the proposed approach (a) does not require calibration and (b) does not require symmetrical links (nor does it resort to symmetrization), thus naturally accounting for the topology of directed graphs with asymmetrical links arising from missed detections and channel anisotropies. To the best of the authors’ knowledge, this is the first distributed cooperative RSS-based algorithm for directed graphs. The main disadvantage is that some restrictions are imposed on the network topology, but these restrictions disappear for sufficiently connected networks. We have also derived a compatibility test based on graph coloring, which determines whether a given network is compatible; if it is, convergence of the algorithm is guaranteed. Future work may include the consideration of possible approximations, in order to extend this approach to more general networks, and of alternative models, overcoming the limitations of the standard path loss model.

6 Appendix A: On using the time-averaged sample mean

Let \(\Theta \subseteq \mathbb {R}^{d}\) be the (non-empty) parameter space, θ∈Θ an unknown deterministic parameter vector, and \(s : \Theta \rightarrow \mathbb {R}\) a function. Consider the following model

$$ y(k) = s(\boldsymbol{\theta}) + w(k), \quad w(k) \sim \mathcal{N}\left(0, \sigma^{2}\right) $$
(32)

where k=1,…,K and the noise sequence w(k) is zero-mean, i.i.d. Gaussian, with deterministic unknown variance σ2>0. Let

$$ \bar{y} = \frac{1}{K} \sum_{k=1}^{K} y(k) $$
(33)

be the sample mean and let p(y(1),…,y(K);θ,σ2) denote the joint likelihood function. Then,

$$ \arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ p\left(y(1),\dots,y(K) ; \boldsymbol{\theta}, \sigma^{2}\right) = \arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ p\left(\bar{y} ; \boldsymbol{\theta}, \sigma^{2}\right) $$
(34)

Proof

Since the observations are independent,

$$ {}{\begin{aligned} p\left(y(1),\dots,y(K);\boldsymbol{\theta},\sigma^{2}\right) &= \prod\limits_{k=1}^{K} p \left(y(k);\boldsymbol{\theta}, \sigma^{2}\right) \\ & = \frac{1}{(2 \pi \sigma^{2})^{K/2}} e^{-\frac{1}{2 \sigma^{2}} {\sum}_{k=1}^{K} \left(y(k) - s(\boldsymbol{\theta})\right)^{2} } \end{aligned}} $$
(35)

We can now focus on the term in the exponential. By adding and subtracting \(\bar {y}\) from the quadratic term and expanding it, we get

$$ {\begin{aligned} Q &= \sum\limits_{k=1}^{K} (y(k) - s(\boldsymbol{\theta}))^{2} = \sum\limits_{k=1}^{K} ((y(k) -\bar{y}) - (s(\boldsymbol{\theta}) - \bar{y}))^{2}\\ {}&= \sum\limits_{k=1}^{K} (y(k) - \bar{y})^{2} + \sum\limits_{k=1}^{K} (s(\boldsymbol{\theta}) - \bar{y})^{2} - 2 \sum\limits_{k=1}^{K} (y(k) - \bar{y})(s(\boldsymbol{\theta}) - \bar{y}) \end{aligned}} $$
(36)

Observing now that

$$ \sum\limits_{k=1}^{K} \left(y(k) - \bar{y} \right)(s(\boldsymbol{\theta}) - \bar{y}) = (s(\boldsymbol{\theta}) - \bar{y}) \sum\limits_{k=1}^{K} \left(y(k) - \bar{y}\right) = 0 $$
(37)

we are left with

$$ Q = \sum\limits_{k=1}^{K} (y(k) - \bar{y})^{2} + \sum\limits_{k=1}^{K} (s(\boldsymbol{\theta}) - \bar{y})^{2} $$
(38)

Thus, the joint pdf can be factorized as follows

$$ {}{\begin{aligned} &p\left(y(1),\dots,y(K) ; \boldsymbol{\theta}, \sigma^{2}\right) &\\ &\qquad = \underbrace{\frac{1}{\left(2 \pi \sigma^{2}\right)^{K/2}} e^{-\frac{1}{2 \sigma^{2}} {\sum}_{k=1}^{K} (y(k) - \bar{y})^{2} }}_{= h\left(y(1),\dots,y(K) ; \sigma^{2}\right)} \underbrace{e^{-\frac{1}{2 \sigma^{2}} {\sum}_{k=1}^{K} (s(\boldsymbol{\theta}) - \bar{y})^{2}}}_{= g\left(\bar{y} ; \boldsymbol{\theta}, \sigma^{2}\right) } \end{aligned}} $$
(39)

As a by-product, by invoking the Neyman-Fisher factorization theorem [45] and assuming known σ2, \(\bar {y}\) is a sufficient statistic for θ. Resuming our proof, we can now observe that

$$ \begin{aligned} &\arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ p\left(y(1),\dots,y(K) ; \boldsymbol{\theta}, \sigma^{2}\right) = \arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ g\left(\bar{y} ; \boldsymbol{\theta}, \sigma^{2}\right) \\ & = \arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ \left\{ - \sum\limits_{k=1}^{K} (s(\boldsymbol{\theta}) - \bar{y})^{2} \right\} = \arg {\underset{\boldsymbol{\theta} \in \Theta}{\min}}\ (\bar{y} - s(\boldsymbol{\theta}))^{2} \end{aligned} $$
(40)

since no term in the summation depends on k. Thus,

$$ \arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ p\left(y(1),\dots,y(K) ; \boldsymbol{\theta}, \sigma^{2}\right) = \arg {\underset{\boldsymbol{\theta} \in \Theta}{\min}}\ \left(\bar{y} - s(\boldsymbol{\theta})\right)^{2} $$
(41)

On the other hand,

$$ \begin{aligned} & \arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ p\left(\bar{y} ; \boldsymbol{\theta}, \sigma^{2} \right) \\ & = \arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ \frac{1}{\sqrt{2 \pi \frac{\sigma^{2}}{K}}} \exp \left\{ -\frac{K}{2 \sigma^{2}} (\bar{y} - s(\boldsymbol{\theta}))^{2} \right\} \\ &= \arg {\underset{\boldsymbol{\theta} \in \Theta}{\max}}\ \left\{ -\frac{K}{2 \sigma^{2}} (\bar{y} - s(\boldsymbol{\theta}))^{2} \right\} \\ &= \arg {\underset{\boldsymbol{\theta} \in \Theta}{\min}}\ (\bar{y} - s(\boldsymbol{\theta}))^{2} \end{aligned} $$
(42)

which completes the proof. □
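The equivalence (34) is easy to verify numerically for a toy scalar model s(θ)=θ: both criteria reduce to minimizing (ȳ−s(θ))², so the maximizers coincide. All numeric values below are illustrative assumptions.

```python
# Sketch: numerical check of (34) on a toy scalar model s(theta) = theta.
import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma, K = 3.0, 2.0, 50
y = theta_true + sigma * rng.standard_normal(K)   # model (32) with s(theta) = theta
y_bar = y.mean()                                  # sample mean (33)

thetas = np.linspace(0.0, 6.0, 601)
full = -np.array([np.sum((y - t) ** 2) for t in thetas]) / (2 * sigma**2)  # joint log-lik
avg = -K * (y_bar - thetas) ** 2 / (2 * sigma**2)                          # log-lik of y_bar
assert full.argmax() == avg.argmax()              # same maximizer, as claimed in (34)
```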

7 Appendix B: C-MLE derivation

The MLE of x is given by

$$ \hat{\boldsymbol{x}}_{ML} = \arg {\underset{\boldsymbol{x}}{\max}}\ p(\Upsilon ; \boldsymbol{x}) $$
(43)

where p(Υ;x) is the joint likelihood function. Since all other parameters are assumed known, the set of all time-averaged measures \(\bar {\Upsilon } = \cup _{i=1}^{N_{u}} \bar {\mathcal {Y}_{i}}\) is a sufficient statistic for x (see Appendix A), from which it follows that

$$ \arg {\underset{\boldsymbol{x}}{\max}}\ p(\Upsilon ; \boldsymbol{x}) = \arg {\underset{\boldsymbol{x}}{\max}}\ p\left(\bar{\Upsilon} ; \boldsymbol{x}\right) $$
(44)

By link independence,

$$ p(\bar{\Upsilon} ; \boldsymbol{x}) = \prod\limits_{i=1}^{N_{u}} \prod\limits_{j \in \Gamma(i)} p(\bar{r}_{j \rightarrow i} ; \boldsymbol{x}) $$
(45)

where \(p(\bar {r}_{j \rightarrow i} ; \boldsymbol {x})\) is the marginal likelihood function. Thus, we have

$$ \begin{aligned} \hat{\boldsymbol{x}}_{ML} &= \arg {\underset{\boldsymbol{x}}{\max}}\ \prod\limits_{i=1}^{N_{u}} \prod\limits_{j \in \Gamma(i)} \frac{1}{\left(2 \pi \sigma^{2}_{j \rightarrow i}\right)} \\ &\quad \times e^{-\frac{1}{2 \sigma^{2}_{j \rightarrow i}} (\bar{r}_{j \rightarrow i} -p_{0_{j\rightarrow i}} +10 \alpha_{j \rightarrow i} \log_{10} \| \boldsymbol{x}_{j} - \boldsymbol{x}_{i} \|)^{2} } \end{aligned} $$
(46)

Taking the natural logarithm and neglecting constants,

$$ {\begin{aligned} &\hat{\boldsymbol{x}}_{ML} \\ &= \arg {\underset{\boldsymbol{x}}{\min}}\ \sum\limits_{i=1}^{N_{u}} \sum\limits_{j \in \Gamma(i)} \frac{1}{\sigma^{2}_{j \rightarrow i}} (\bar{r}_{j \rightarrow i} -p_{0_{j\rightarrow i}} + 10 \alpha_{j \rightarrow i} \log_{10} \| \boldsymbol{x}_{j} - \boldsymbol{x}_{i} \|)^{2} \end{aligned}} $$
(47)

which completes the derivation.