Compared with the NLOS error and the measurement noise, the synchronization error affects the localization accuracy significantly [21, 22]. In this section, four types of LLS-based algorithms, a quadratic programming (QP) algorithm and data fusion (DF)-based algorithms are presented to deal with the synchronization error.
LLS algorithm with a new variable
Starting from the linear equations in (2), if we treat the synchronization error as an additional unknown and ignore the measurement noise, a new set of linear equations is easily obtained. Substituting the measured parameters then yields
$$Z = H \cdot X^{\prime}$$
(3)
where \(X^{\prime} = [X^{T} ,\varepsilon ]^{T}\), \(X = [x,y]^{T}\), T denotes the transpose,
$$\begin{aligned} & Z = \left[ {\begin{array}{*{20}l} {y_{1} \left( {\cos (\alpha_{1} ) + \cos (\beta_{1} )} \right) - x_{1} \left( {\sin (\alpha_{1} ) + \sin (\beta_{1} )} \right) - r_{1} \sin (\alpha_{1} - \beta_{1} )} \hfill \\ {y_{1} \left( {\cos (\alpha_{2} ) + \cos (\beta_{2} )} \right) - x_{1} \left( {\sin (\alpha_{2} ) + \sin (\beta_{2} )} \right) - r_{2} \sin (\alpha_{2} - \beta_{2} )} \hfill \\ \vdots \hfill \\ {y_{1} \left( {\cos (\alpha_{L} ) + \cos (\beta_{L} )} \right) - x_{1} \left( {\sin (\alpha_{L} ) + \sin (\beta_{L} )} \right) - r_{L} \sin (\alpha_{L} - \beta_{L} )} \hfill \\ \end{array} } \right] \\ & H = \left[ {\begin{array}{*{20}l} { - \sin (\alpha_{1} ) - \sin (\beta_{1} )} \hfill & {\cos (\alpha_{1} ) + \cos (\beta_{1} )} \hfill & { - \sin (\alpha_{1} - \beta_{1} )} \hfill \\ { - \sin (\alpha_{2} ) - \sin (\beta_{2} )} \hfill & {\cos (\alpha_{2} ) + \cos (\beta_{2} )} \hfill & { - \sin (\alpha_{2} - \beta_{2} )} \hfill \\ \vdots \hfill & \vdots \hfill & \vdots \hfill \\ { - \sin (\alpha_{L} ) - \sin (\beta_{L} )} \hfill & {\cos (\alpha_{L} ) + \cos (\beta_{L} )} \hfill & { - \sin (\alpha_{L} - \beta_{L} )} \hfill \\ \end{array} } \right] \\ \end{aligned}$$
Since the unknowns in (3) enter linearly and independently, the least squares (LS) solution, with the resulting algorithm denoted as LLS, yields the estimated location of the MS together with the synchronization error
$$\hat{X}^{\prime} = (H^{T} H)^{ - 1} H^{T} Z$$
(4)
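As a numerical illustration, the construction of (3) and the LS solution (4) can be sketched as follows. The BS position, MS position, synchronization error, scatterer layout and angle conventions below are illustrative assumptions chosen so that the noise-free model (3) holds exactly; they are not values from the paper.

```python
import numpy as np

# Assumed geometry: BS at the origin, true MS at (60, 40), a 30 m
# synchronization error, and one scatterer per one-bounce path.
x1, y1 = 0.0, 0.0
x_true, y_true, eps_true = 60.0, 40.0, 30.0
scat = np.array([[20.0, 80.0], [90.0, 10.0], [-30.0, 50.0],
                 [50.0, -40.0], [100.0, 100.0]])

# Assumed angle convention: alpha points from scatterer to BS,
# beta points from scatterer to MS.
alpha = np.arctan2(y1 - scat[:, 1], x1 - scat[:, 0])
beta = np.arctan2(y_true - scat[:, 1], x_true - scat[:, 0])
# Range = BS-scatterer + scatterer-MS distance, biased by the sync error.
r = (np.hypot(scat[:, 0] - x1, scat[:, 1] - y1)
     + np.hypot(scat[:, 0] - x_true, scat[:, 1] - y_true) + eps_true)

# Build H and Z exactly as in (3).
H = np.column_stack([-np.sin(alpha) - np.sin(beta),
                     np.cos(alpha) + np.cos(beta),
                     -np.sin(alpha - beta)])
Z = (y1 * (np.cos(alpha) + np.cos(beta))
     - x1 * (np.sin(alpha) + np.sin(beta))
     - r * np.sin(alpha - beta))

# LS solution (4); with noise-free data it recovers [x, y, eps] exactly.
X_hat = np.linalg.lstsq(H, Z, rcond=None)[0]
print(X_hat)   # ~ [60. 40. 30.]
```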
LLS algorithms with synchronization error elimination
Instead of introducing a new variable, we can construct linear equations by eliminating the synchronization error. Three methods, inspired by the literature [30, 31], are introduced to eliminate the synchronization error and construct the corresponding linear equations; the LS algorithm is then applied to estimate the MS location. The first method, called LLS-1, selects the first equation in (2) as the reference and subtracts it from the remaining equations to eliminate the synchronization error. Ignoring the measurement noise and after some mathematical manipulation, the linear equations become
$$Z_{1} = H_{1} \cdot X$$
(5)
where
$$\begin{aligned} & Z_{1} = \left[ {\begin{array}{*{20}c} {c_{2} - c_{1} - r_{2} + r_{1} } \\ {c_{3} - c_{1} - r_{3} + r_{1} } \\ \vdots \\ {c_{L} - c_{1} - r_{L} + r_{1} } \\ \end{array} } \right],\;\;H_{1} = \left[ {\begin{array}{*{20}c} {a_{2} - a_{1} } & {b_{2} - b_{1} } \\ {a_{3} - a_{1} } & {b_{3} - b_{1} } \\ \vdots & \vdots \\ {a_{L} - a_{1} } & {b_{L} - b_{1} } \\ \end{array} } \right] \\ & a_{i} = \frac{{ - \sin (\alpha_{i} ) - \sin (\beta_{i} )}}{{\sin (\alpha_{i} - \beta_{i} )}},\;\;b_{i} = \frac{{\cos (\alpha_{i} ) + \cos (\beta_{i} )}}{{\sin (\alpha_{i} - \beta_{i} )}}, \\ & c_{i} = \frac{{y_{1} \left( {\cos (\alpha_{i} ) + \cos (\beta_{i} )} \right) - x_{1} \left( {\sin (\alpha_{i} ) + \sin (\beta_{i} )} \right)}}{{\sin (\alpha_{i} - \beta_{i} )}}. \\ \end{aligned}$$
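A minimal sketch of LLS-1 on synthetic, noise-free data follows; the geometry and angle conventions are illustrative assumptions, not values from the paper. The per-path coefficients \(a_i, b_i, c_i\) are formed, the first equation serves as the reference, and the LS solution of (5) recovers the MS location without estimating \(\varepsilon\).

```python
import numpy as np

# Assumed geometry: BS at the origin, true MS at (60, 40), 30 m sync error.
x1, y1 = 0.0, 0.0
x_true, y_true, eps_true = 60.0, 40.0, 30.0
scat = np.array([[20.0, 80.0], [90.0, 10.0], [-30.0, 50.0],
                 [50.0, -40.0], [100.0, 100.0]])

alpha = np.arctan2(y1 - scat[:, 1], x1 - scat[:, 0])          # scatterer -> BS
beta = np.arctan2(y_true - scat[:, 1], x_true - scat[:, 0])   # scatterer -> MS
r = (np.hypot(scat[:, 0] - x1, scat[:, 1] - y1)
     + np.hypot(scat[:, 0] - x_true, scat[:, 1] - y_true) + eps_true)

# Per-path coefficients a_i, b_i, c_i defined below (5).
s = np.sin(alpha - beta)
a = (-np.sin(alpha) - np.sin(beta)) / s
b = (np.cos(alpha) + np.cos(beta)) / s
c = (y1 * (np.cos(alpha) + np.cos(beta))
     - x1 * (np.sin(alpha) + np.sin(beta))) / s

# Subtract the reference (first) equation from the rest: eps cancels.
Z1 = (c[1:] - c[0]) - (r[1:] - r[0])
H1 = np.column_stack([a[1:] - a[0], b[1:] - b[0]])
X_hat = np.linalg.lstsq(H1, Z1, rcond=None)[0]
print(X_hat)   # ~ [60. 40.]
```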
The second method is called LLS-2. Here, \(L(L - 1)/2\) linear equations are obtained by subtracting every pair of equations chosen from (2). The following equations are then employed to estimate the MS location:
$$Z_{2} = H_{2} \cdot X$$
(6)
where
$$Z_{2} = \left[ {\begin{array}{*{20}c} {c_{2} - c_{1} - r_{2} + r_{1} } \\ {c_{3} - c_{1} - r_{3} + r_{1} } \\ \vdots \\ {c_{L} - c_{L - 1} - r_{L} + r_{L - 1} } \\ \end{array} } \right],\;\;H_{2} = \left[ {\begin{array}{*{20}c} {a_{2} - a_{1} } & {b_{2} - b_{1} } \\ {a_{3} - a_{1} } & {b_{3} - b_{1} } \\ \vdots & \vdots \\ {a_{L} - a_{L - 1} } & {b_{L} - b_{L - 1} } \\ \end{array} } \right].$$
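LLS-2 differs from LLS-1 only in which differences are taken: every unordered pair of paths contributes one equation. A sketch on an assumed, noise-free geometry (positions and angle conventions are illustrative, not from the paper):

```python
import itertools
import numpy as np

# Assumed geometry: BS at the origin, true MS at (60, 40), 30 m sync error.
x1, y1 = 0.0, 0.0
x_true, y_true, eps_true = 60.0, 40.0, 30.0
scat = np.array([[20.0, 80.0], [90.0, 10.0], [-30.0, 50.0],
                 [50.0, -40.0], [100.0, 100.0]])

alpha = np.arctan2(y1 - scat[:, 1], x1 - scat[:, 0])          # scatterer -> BS
beta = np.arctan2(y_true - scat[:, 1], x_true - scat[:, 0])   # scatterer -> MS
r = (np.hypot(scat[:, 0] - x1, scat[:, 1] - y1)
     + np.hypot(scat[:, 0] - x_true, scat[:, 1] - y_true) + eps_true)

s = np.sin(alpha - beta)
a = (-np.sin(alpha) - np.sin(beta)) / s
b = (np.cos(alpha) + np.cos(beta)) / s
c = (y1 * (np.cos(alpha) + np.cos(beta))
     - x1 * (np.sin(alpha) + np.sin(beta))) / s

# One difference equation per unordered pair of paths: eps cancels in each.
pairs = list(itertools.combinations(range(len(r)), 2))
Z2 = np.array([(c[j] - c[i]) - (r[j] - r[i]) for i, j in pairs])
H2 = np.array([[a[j] - a[i], b[j] - b[i]] for i, j in pairs])
X_hat = np.linalg.lstsq(H2, Z2, rcond=None)[0]
print(len(pairs), X_hat)   # 10 pairs for L = 5
```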
The third method is called LLS-3. Instead of differencing equations directly as in LLS-1 and LLS-2, the average of all equations is computed first and then subtracted from each equation, resulting in \(L\) new linear equations. The equations of the LLS-3 method can be expressed as
$$Z_{3} = H_{3} \cdot X$$
(7)
where
$$Z_{3} = \left[ {\begin{array}{*{20}c} {c_{1} - f - r_{1} + g} \\ {c_{2} - f - r_{2} + g} \\ \vdots \\ {c_{L} - f - r_{L} + g} \\ \end{array} } \right],\;\;H_{3} = \left[ {\begin{array}{*{20}c} {a_{1} - d} & {b_{1} - e} \\ {a_{2} - d} & {b_{2} - e} \\ \vdots & \vdots \\ {a_{L} - d} & {b_{L} - e} \\ \end{array} } \right]$$
$$d = \frac{1}{L}\sum\limits_{i = 1}^{L} {a_{i} } ,\;\;e = \frac{1}{L}\sum\limits_{i = 1}^{L} {b_{i} } ,\;\;f = \frac{1}{L}\sum\limits_{i = 1}^{L} {c_{i} } ,\;\;g = \frac{1}{L}\sum\limits_{i = 1}^{L} {r_{i} } .$$
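LLS-3 centres each equation by the average equation (7) instead of differencing pairs. A sketch on an assumed, noise-free geometry (positions and angle conventions are illustrative, not from the paper):

```python
import numpy as np

# Assumed geometry: BS at the origin, true MS at (60, 40), 30 m sync error.
x1, y1 = 0.0, 0.0
x_true, y_true, eps_true = 60.0, 40.0, 30.0
scat = np.array([[20.0, 80.0], [90.0, 10.0], [-30.0, 50.0],
                 [50.0, -40.0], [100.0, 100.0]])

alpha = np.arctan2(y1 - scat[:, 1], x1 - scat[:, 0])          # scatterer -> BS
beta = np.arctan2(y_true - scat[:, 1], x_true - scat[:, 0])   # scatterer -> MS
r = (np.hypot(scat[:, 0] - x1, scat[:, 1] - y1)
     + np.hypot(scat[:, 0] - x_true, scat[:, 1] - y_true) + eps_true)

s = np.sin(alpha - beta)
a = (-np.sin(alpha) - np.sin(beta)) / s
b = (np.cos(alpha) + np.cos(beta)) / s
c = (y1 * (np.cos(alpha) + np.cos(beta))
     - x1 * (np.sin(alpha) + np.sin(beta))) / s

# Subtract the average equation from every equation: eps cancels.
d, e, f, g = a.mean(), b.mean(), c.mean(), r.mean()
Z3 = (c - f) - (r - g)
H3 = np.column_stack([a - d, b - e])
X_hat = np.linalg.lstsq(H3, Z3, rcond=None)[0]
print(X_hat)   # ~ [60. 40.]
```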
QP algorithm
In the presence of measurement noise on the range and angle measurements, Eq. (3) will not hold in general. We therefore resort to an optimization method and define the residual error as the objective function
$$\mathop {\min }\limits_{{X^{\prime } }} \;||Z - H \cdot X^{\prime } ||^{2}$$
(8)
where \({||} \cdot {||}\) denotes the Euclidean norm.
As mentioned in [32], the NLOS error is always positive and is assumed to be much larger than the range measurement noise. Therefore, we can relax the nonlinear constraints on \(X^{\prime}\) into the following linear constraints:
$$\begin{gathered} x \le r_{i} - \varepsilon + x_{1} , - x \le r_{i} - \varepsilon - x_{1} \hfill \\ y \le r_{i} - \varepsilon + y_{1} , - y \le r_{i} - \varepsilon - y_{1} \hfill \\ \end{gathered}$$
(9)
Rewriting (9) in matrix form, we have
$$A_{i} X^{\prime} \le B_{i} ,i = 1, \cdots ,L$$
(10)
where
$$A_{i} = \left[ {\begin{array}{*{20}c} 1 & 0 & 1 \\ { - 1} & 0 & 1 \\ 0 & 1 & 1 \\ 0 & { - 1} & 1 \\ \end{array} } \right],\;\;B_{i} = \left[ {\begin{array}{*{20}c} {r_{i} + x_{1} } \\ {r_{i} - x_{1} } \\ {r_{i} + y_{1} } \\ {r_{i} - y_{1} } \\ \end{array} } \right]$$
Based on the above objective function and linear inequalities, a QP optimization problem can be formulated as
$$\begin{aligned} & \mathop {\min }\limits_{{X^{\prime } }} \;||Z - H \cdot X^{\prime } ||^{2} \\ & {\text{s.t}}.\;A_{i} X^{\prime } \le B_{i} ,i = 1, \ldots ,L \\ \end{aligned}$$
(11)
The QP problem of (11) can be solved using the interior-point method [31].
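A compact sketch of solving (11) with a basic log-barrier interior-point method in plain NumPy follows; a production implementation would use a dedicated QP solver. The geometry is an assumed, noise-free setup (not from the paper), so the constraints (10) are inactive at the optimum and the QP solution essentially coincides with the unconstrained LS solution.

```python
import numpy as np

# Assumed geometry: BS at the origin, true MS at (60, 40), 30 m sync error.
x1, y1 = 0.0, 0.0
x_true, y_true, eps_true = 60.0, 40.0, 30.0
scat = np.array([[20.0, 80.0], [90.0, 10.0], [-30.0, 50.0],
                 [50.0, -40.0], [100.0, 100.0]])
L = len(scat)

alpha = np.arctan2(y1 - scat[:, 1], x1 - scat[:, 0])          # scatterer -> BS
beta = np.arctan2(y_true - scat[:, 1], x_true - scat[:, 0])   # scatterer -> MS
r = (np.hypot(scat[:, 0] - x1, scat[:, 1] - y1)
     + np.hypot(scat[:, 0] - x_true, scat[:, 1] - y_true) + eps_true)

# H, Z of (3) and the stacked inequality constraints (10).
H = np.column_stack([-np.sin(alpha) - np.sin(beta),
                     np.cos(alpha) + np.cos(beta),
                     -np.sin(alpha - beta)])
Z = (y1 * (np.cos(alpha) + np.cos(beta))
     - x1 * (np.sin(alpha) + np.sin(beta)) - r * np.sin(alpha - beta))
A_i = np.array([[1.0, 0.0, 1.0], [-1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0], [0.0, -1.0, 1.0]])
A = np.vstack([A_i] * L)
B = np.concatenate([[ri + x1, ri - x1, ri + y1, ri - y1] for ri in r])


def qp_log_barrier(H, Z, A, B):
    """min ||Z - H x||^2 s.t. A x <= B via a basic log-barrier method."""
    x = np.zeros(H.shape[1])                  # strictly feasible since B > 0
    for t in 10.0 ** np.arange(0, 9, 2):      # barrier parameter schedule
        for _ in range(50):                   # damped Newton iterations
            slack = B - A @ x
            grad = 2 * t * H.T @ (H @ x - Z) + A.T @ (1.0 / slack)
            hess = 2 * t * H.T @ H + (A / slack[:, None] ** 2).T @ A
            dx = np.linalg.solve(hess, -grad)
            dec = -grad @ dx                  # squared Newton decrement
            if dec < 1e-12:
                break
            f = lambda v: (t * np.sum((H @ v - Z) ** 2)
                           - np.sum(np.log(B - A @ v)))
            step = 1.0
            while np.any(A @ (x + step * dx) >= B):   # stay strictly feasible
                step *= 0.5
            while step > 1e-12 and f(x + step * dx) > f(x) - 0.25 * step * dec:
                step *= 0.5                   # Armijo backtracking
            x = x + step * dx
    return x


X_hat = qp_log_barrier(H, Z, A, B)
print(X_hat)   # close to [60, 40, 30]
```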
Data fusion algorithm
From Eqs. (3) and (5)–(7), we know that as few as three OB scattering paths suffice to estimate the MS location. If the number of multipath components is larger than three, the measurements can be divided into different combinations, each of which yields an intermediate estimate of the MS location. Finally, a data fusion (DF) algorithm is utilized to fuse all the intermediate estimates. The key problem is how to measure the quality of each combination's estimate. The simplest DF algorithm assigns the same weight to all intermediate estimates, i.e., it averages the estimated locations of the MS. This algorithm is denoted as DF-LLS if LLS is utilized to obtain the intermediate estimate of each combination, and as DF-LLS-1 if LLS-1 is utilized. In addition, the residual weighting algorithm [20] uses the normalized residual as an indicator to fuse the intermediate estimates of the combinations.
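The equal-weight averaging variant of DF-LLS can be sketched as follows: every combination of three paths is solved via (3) and (4), and the intermediate estimates are averaged. The geometry and noise-free data are illustrative assumptions, not values from the paper.

```python
import itertools
import numpy as np

# Assumed geometry: BS at the origin, true MS at (60, 40), 30 m sync error.
x1, y1 = 0.0, 0.0
x_true, y_true, eps_true = 60.0, 40.0, 30.0
scat = np.array([[20.0, 80.0], [90.0, 10.0], [-30.0, 50.0],
                 [50.0, -40.0], [100.0, 100.0]])

alpha = np.arctan2(y1 - scat[:, 1], x1 - scat[:, 0])          # scatterer -> BS
beta = np.arctan2(y_true - scat[:, 1], x_true - scat[:, 0])   # scatterer -> MS
r = (np.hypot(scat[:, 0] - x1, scat[:, 1] - y1)
     + np.hypot(scat[:, 0] - x_true, scat[:, 1] - y_true) + eps_true)

H = np.column_stack([-np.sin(alpha) - np.sin(beta),
                     np.cos(alpha) + np.cos(beta),
                     -np.sin(alpha - beta)])
Z = (y1 * (np.cos(alpha) + np.cos(beta))
     - x1 * (np.sin(alpha) + np.sin(beta)) - r * np.sin(alpha - beta))

# One intermediate estimate per combination of three paths, fused by averaging.
estimates = []
for idx in itertools.combinations(range(len(r)), 3):
    idx = list(idx)
    estimates.append(np.linalg.lstsq(H[idx], Z[idx], rcond=None)[0])
X_fused = np.mean(estimates, axis=0)
print(len(estimates), X_fused)   # 10 combinations for L = 5
```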
Cramér-Rao lower bound (CRLB)
The CRLB is the performance bound in terms of the minimum achievable variance of any unbiased estimator. The CRLB of our localization model can be obtained by the same method as in [30]. Given the unknown parameters \(\theta = [X^{T} ,\varepsilon ,x^{\prime}_{1} , \ldots ,x^{\prime}_{L} ,y^{\prime}_{1} , \ldots ,y^{\prime}_{L} ]^{T}\), the joint probability density function of the range and angle measurements is as follows:
$$p(r_{1} ,\alpha_{1} ,\beta_{1} , \ldots ,r_{L} ,\alpha_{L} ,\beta_{L} |\theta ) = \prod\limits_{i = 1}^{L} {\frac{1}{{\sqrt {(2\pi )^{3} \sigma_{n}^{2} \sigma_{\alpha }^{2} \sigma_{\beta }^{2} } }}e^{{ - \frac{{(r_{i} - r_{i}^{0} - \varepsilon )^{2} }}{{2\sigma_{n}^{2} }} - \frac{{(\alpha_{i} - \alpha_{i}^{0} )^{2} }}{{2\sigma_{\alpha }^{2} }} - \frac{{(\beta_{i} - \beta_{i}^{0} )^{2} }}{{2\sigma_{\beta }^{2} }}}} }$$
(12)
The Fisher information matrix (FIM) with \(2L+3\) unknown parameters can be defined as
$$I(\theta ) = \left[ {\begin{array}{*{20}c} {I_{xx} } & {I_{xy} } & \cdots & {I_{{xy^{\prime}_{L} }} } \\ {I_{xy} } & {I_{yy} } & \cdots & {I_{{yy^{\prime}_{L} }} } \\ \vdots & \vdots & \ddots & \vdots \\ {I_{{xy^{\prime}_{L} }} } & {I_{{yy^{\prime}_{L} }} } & \cdots & {I_{{y^{\prime}_{L} y^{\prime}_{L} }} } \\ \end{array} } \right]_{(2L + 3) \times (2L + 3)}$$
(13)
where \([I(\theta )]_{ij} = - E[\frac{{\partial^{2} \ln p(r_{1} ,\alpha_{1} ,\beta_{1} , \ldots ,r_{L} ,\alpha_{L} ,\beta_{L} |\theta )}}{{\partial \theta_{i} \partial \theta_{j} }}]\); the explicit expressions of these entries can be found in Appendix A.
The CRLB of the \(i\)th variable in \(\theta\) is the \(\left( {i,i} \right)\) entry of \(I^{ - 1} (\theta )\), \(i = 1, \ldots ,2L + 3\). If the root mean square error (RMSE) is used as the performance criterion, the CRLB on the estimated MS location in terms of RMSE is given as
$${\text{CRLB}} = \sqrt {[I^{ - 1} (\theta )]_{11} + [I^{ - 1} (\theta )]_{22} }$$
(14)
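The bound (14) can also be evaluated numerically without the closed-form derivatives of Appendix A: for a Gaussian likelihood such as (12) with mean \(\mu(\theta)\) collecting \(r_i^0 + \varepsilon, \alpha_i^0, \beta_i^0\) and fixed variances, the FIM is \(I(\theta) = J^T \Sigma^{-1} J\) with \(J = \partial \mu / \partial \theta\), and \(J\) can be approximated by finite differences. The geometry, noise levels and angle conventions below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed setup: BS at the origin, MS at (60, 40), eps = 30 m, L = 5
# scatterers; sigma_n in metres, sigma_a / sigma_b in radians (assumed).
x1, y1 = 0.0, 0.0
scat = np.array([[20.0, 80.0], [90.0, 10.0], [-30.0, 50.0],
                 [50.0, -40.0], [100.0, 100.0]])
L = len(scat)
theta = np.concatenate([[60.0, 40.0, 30.0], scat[:, 0], scat[:, 1]])
sigma_n, sigma_a, sigma_b = 1.0, 0.01, 0.01


def mean_meas(theta):
    """Noise-free measurement means [r_i^0 + eps, alpha_i^0, beta_i^0]."""
    x, y, eps = theta[:3]
    sx, sy = theta[3:3 + L], theta[3 + L:]
    r0 = np.hypot(sx - x1, sy - y1) + np.hypot(sx - x, sy - y)
    alpha0 = np.arctan2(y1 - sy, x1 - sx)   # scatterer -> BS (assumed)
    beta0 = np.arctan2(y - sy, x - sx)      # scatterer -> MS (assumed)
    return np.concatenate([r0 + eps, alpha0, beta0])


# Jacobian of the measurement mean w.r.t. theta by central differences.
h = 1e-5
J = np.empty((3 * L, len(theta)))
for k in range(len(theta)):
    dt = np.zeros(len(theta))
    dt[k] = h
    J[:, k] = (mean_meas(theta + dt) - mean_meas(theta - dt)) / (2 * h)

# FIM for a Gaussian likelihood with fixed diagonal covariance.
var = np.concatenate([np.full(L, sigma_n ** 2),
                      np.full(L, sigma_a ** 2),
                      np.full(L, sigma_b ** 2)])
I = (J / var[:, None]).T @ J               # (2L+3) x (2L+3)
I_inv = np.linalg.inv(I)
crlb = np.sqrt(I_inv[0, 0] + I_inv[1, 1])  # position RMSE bound, metres
print(crlb)
```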