# Multiple-relay selection in amplify-and-forward cooperative wireless networks with multiple source nodes

DOI: 10.1186/1687-1499-2012-256

- Cite this article as:
- Wu, J., Zhang, Y.D., Amin, M.G. et al. J Wireless Com Network (2012) 2012: 256. doi:10.1186/1687-1499-2012-256


## Abstract

In this article, we propose multiple-relay selection schemes for multiple source nodes in amplify-and-forward wireless relay networks based on the sum capacity maximization criterion. Both optimal and sub-optimal relay selection criteria are discussed, where the sub-optimal approaches offer the advantage of reduced computational complexity. Using semi-definite programming convex optimization, we present computationally efficient algorithms for multiple-source multiple-relay selection (MSMRS) with both a fixed number and a varied number of relays. Finally, numerical results are provided to compare the different relay selection criteria. It is demonstrated that optimal varied number MSMRS outperforms optimal fixed number MSMRS under the same power constraints.

## Introduction

Multihop relaying has emerged as a promising approach to achieve high-rate coverage in wireless communications [1, 2]. Several amplify-and-forward (AF) and decode-and-forward (DF) relaying techniques have been introduced, such as those in [2, 3]. Following these pioneering works, a number of cooperative diversity schemes have been proposed, including, for example, distributed space-time coding [3, 4, 5], adaptive power control for relay networks or relay beamforming [6, 7, 8, 9], and relay selection [10, 11, 12, 13, 14, 15, 16, 17, 18, 19].

- 1. A majority of relay selection rules are restrictive in the sense that they either always use all the available relays or always use just a single relay, such as in [10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]. In [21], four simple relay selection criteria are described: two criteria are based on the selection of a single relay according to mean channel gains, while the other two select all available relays. Selecting all available relays is the simplest approach with multiple relays, but it may not be allowed when the sum power limit is less than the sum of the power values of all available relays. A single relay node can be selected based on average channel state information (CSI), e.g., distance or path loss [20, 22, 30], or on the instantaneous fading states of the various links, such as in [23].

- 2. Multiple-relay selection for a single source has attracted attention as well [31, 32, 33]. Jing and Jafarkhani proposed sub-optimal two-step optimization approaches for single-source multiple-relay selection in [31, 33]: in the first step, phase rotation is performed at each relay, so that only power allocation needs to be considered, since the signal-to-noise ratio (SNR) then consists of a summation of purely real terms. In the second step, several sub-optimal methods were introduced [31, 33]:
  - (a) by introducing the idea of relay ordering, several schemes with linear complexity were proposed;
  - (b) based on recursion, a scheme with quadratic complexity was proposed.
Although both single- and multiple-relay selection approaches for single-source networks have been investigated, relay selection approaches for multiple source nodes are rarely addressed in the literature. Only three existing publications [34, 35, 36] have discussed multiple-source relay selection (MSRS) approaches. Beres and Adve proposed MSRS for DF relay networks [34]. Xu et al. presented MSRS approaches in which only a single source is considered the desired user over each selected relay per transmission, while the other sources or users are treated as interferers [35]. Guo et al. analyzed MSRS for opportunistic relaying, in which only a single source transmits over each selected relay per transmission [36]. Further, there have been several recent research works on two-way relay selection [37, 38, 39, 40]. The main contributions of this article are as follows:

- 1.
Based on the sum capacity criteria, we derive and propose several multiple-relay selection techniques in AF relay networks with multiple source nodes.

- 2.
Using semi-definite programming optimization, we propose computationally efficient algorithms for multiple-source multiple-relay selection (MSMRS) in the presence of both fixed number and varied number of relays.

The following notations are used: ${(\cdot)}^{\mathcal{T}}$ denotes matrix transpose, ${(\cdot)}^{\ast}$ conjugate, ${(\cdot)}^{\mathcal{H}}$ matrix conjugate transpose, ⊙ the Hadamard product operator, ${\left[\mathbf{A}\right]}_{a,b}$ the (*a*,*b*)th entry (element) of matrix **A**, $\mathrm{tr}\left(\cdot\right)$ the matrix trace operation, $\mathrm{Re}\left(\cdot\right)$ the real part of the object (matrix or variable), $\mathrm{Im}\left(\cdot\right)$ the imaginary part of the object (matrix or variable), ${\mathrm{E}}_{\alpha}\left(\cdot\right)$ expectation over the random variable or random variable set *α*, $\text{diag}\left(\mathbf{a}\right)$ a square matrix with all-zero entries except the main diagonal, which is filled with the entries of the vector **a**, *ϕ* the empty set, and **X**≽0 denotes that **X** is a positive semi-definite matrix.

## System model and problem formulation

Consider a wireless relay network with *M* source nodes (transmitters), *K* relay nodes, and one destination node (receiver). Each node is equipped with a single antenna. Assume no direct channel path between the source nodes and the destination node. The source nodes and the relay nodes are assumed to share the same transmission channel.

In the *t*th time channel use, the two-phase AF protocol is performed as follows:

- 1. In the first phase, the *m*th source node (transmitter) sends source information symbol ${x}_{m}^{\left(t\right)}$ using power ${P}_{m}^{\left(S\right)}$ to the relay nodes, where *m*=1,…,*M* and $\mathrm{E}\left({\left|{x}_{m}^{\left(t\right)}\right|}^{2}\right)=1$. The information symbols ${x}_{m}^{\left(t\right)}$, *m*=1,…,*M*, are selected randomly from *M* independent codebooks. It is assumed that the *M* source nodes simultaneously send uncorrelated signal streams ${x}_{m}^{\left(t\right)}$, *m*=1,…,*M*, and the corresponding channel symbols are received at relay *k* at the same time.
- 2. In the second phase, *L* relays with indices $\left\{{k}_{1},\dots ,{k}_{L}\right\}$ are selected according to some criteria, which will be elaborated later. Here, *L*, 1≤*L*≤*K*, is an integer, referred to as the "*relay selection order*" in this article. Then the *k*_{i}th relay, *i*=1,…,*L*, scales its received signal power to unity and, using power ${P}_{{k}_{i}}^{\left(R\right)}$, amplifies and forwards it to the receiver.

Note that, in this two-phase AF protocol, multiple source nodes share the same channels. The transmission and reception among the source nodes, the relay nodes and the destination node are assumed to be perfectly synchronized.

In the *t* th time channel use, the channel from the *m* th source node (transmitter) to the *k* th relay is denoted as ${h}_{m}^{(k,t)}$ and the channel from the *k* th relay to the receiver is denoted as ${g}_{k}^{\left(t\right)}$. The channels are modeled as frequency non-selective Rayleigh fading, and are assumed to independently vary over different time channel uses. Denote ${v}_{k}^{\left(t\right)}$ as the noise component at the *k* th relay, *k*=1,…,*K*, and denote *w*^{(t)} as the noise component at the destination node, where ${v}_{k}^{\left(t\right)}$ and *w*^{(t)} are assumed to be independently and identically distributed (i.i.d.) complex Gaussian random variables with zero mean and unit variance.

In the *t*th time channel use, the received signal at the *k*th relay is

$$ {r}_{k}^{\left(t\right)}=\sum _{m=1}^{M}\sqrt{{P}_{m}^{\left(S\right)}}\,{h}_{m}^{(k,t)}{x}_{m}^{\left(t\right)}+{v}_{k}^{\left(t\right)}, $$

and the power normalization factor that scales the received signal power to unity at the *k*th relay is given by

$$ {\beta}_{k}^{\left(t\right)}={\left(\sum _{m=1}^{M}{P}_{m}^{\left(S\right)}{\left|{h}_{m}^{(k,t)}\right|}^{2}+1\right)}^{-1/2}. $$

In the *t*th time channel use, the received signal at the destination is then obtained as

$$ {y}^{\left(t\right)}=\sum _{k=1}^{K}{\alpha}_{k}^{\left(t\right)}\sqrt{{P}_{k}^{\left(R\right)}}\,{g}_{k}^{\left(t\right)}{\beta}_{k}^{\left(t\right)}{r}_{k}^{\left(t\right)}+{w}^{\left(t\right)}, $$

where ${\alpha}_{k}^{\left(t\right)}$ is the relay selection factor, whose value is equal to 0 or 1, depending upon different relay selection algorithms.

The sum capacity per time channel use is ${C}^{\left(t\right)}=\frac{1}{2}{\log}_{2}\left(1+{\rho}^{\left(t\right)}\right)$, where *ρ*^{(t)} is the overall system effective SNR, obtained in our case as

$$ {\rho}^{\left(t\right)}=\frac{\sum _{m=1}^{M}{P}_{m}^{\left(S\right)}{\left|\sum _{k=1}^{K}{\alpha}_{k}^{\left(t\right)}\sqrt{{P}_{k}^{\left(R\right)}}\,{g}_{k}^{\left(t\right)}{\beta}_{k}^{\left(t\right)}{h}_{m}^{(k,t)}\right|}^{2}}{\sum _{k=1}^{K}{\left({\alpha}_{k}^{\left(t\right)}\right)}^{2}{P}_{k}^{\left(R\right)}{\left|{g}_{k}^{\left(t\right)}\right|}^{2}{\left({\beta}_{k}^{\left(t\right)}\right)}^{2}+1}. $$
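The two-phase model above can be sketched numerically as follows. This is an illustrative simulation only; the variable names (`r`, `beta`, `y`, etc.) are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 2, 8                      # number of sources, number of relays
Ps = np.ones(M)                  # source powers P_m^(S)
Pr = np.ones(K)                  # relay powers P_k^(R)

# Rayleigh fading: complex Gaussian gains, zero mean, unit variance
h = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
g = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)

x = rng.choice([-1, 1], size=M).astype(complex)                  # unit-power symbols
v = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)  # relay noise
w = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)              # destination noise

# Phase 1: all sources transmit simultaneously; relay k receives r[k]
r = h @ (np.sqrt(Ps) * x) + v

# Each relay scales its received signal power to unity
beta = 1.0 / np.sqrt((np.abs(h) ** 2) @ Ps + 1.0)

# Phase 2: selected relays (alpha_k in {0,1}) amplify-and-forward
alpha = np.zeros(K)
alpha[:4] = 1                    # example selection: first 4 relays
y = np.sum(alpha * np.sqrt(Pr) * g * beta * r) + w
```

Because the normalization denominator is at least 1, every `beta[k]` is at most 1, i.e., relays never amplify beyond their power budget `Pr[k]`.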

Relay selection can be expressed using set partitioning. Define the relay index set $\Omega =\left\{1,\dots ,K\right\}$. There exist *L* distinct relay indices $\left\{{k}_{1},\dots ,{k}_{L}\right\}$, where $1\le {k}_{1},\dots ,{k}_{L}\le K$, such that ${\alpha}_{k}^{\left(t\right)}=0$ for $k\notin \left\{{k}_{1},\dots ,{k}_{L}\right\}$, $1\le k\le K$, and ${\alpha}_{{k}_{1}}^{\left(t\right)}=\cdots ={\alpha}_{{k}_{L}}^{\left(t\right)}=1$. The optimization problem can now be formulated as $arg\,\underset{\Omega}{max}\left\{{C}^{\left(t\right)}\right\}$. Since ${\log}_{a}\left(\cdot\right)$, *a*>1, is a monotonically increasing function, the problem is equivalent to $arg\,\underset{\Omega}{max}\left\{{\rho}^{\left(t\right)}\right\}$.
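The subset search implied by this formulation can be sketched by brute force. The SNR callable below is a hypothetical stand-in, since the actual ρ depends on the full channel model; only the enumeration structure is the point here.

```python
from itertools import combinations

import numpy as np

def best_subset(K, L, snr_of):
    """Exhaustive search over all C(K, L) relay subsets of size L.

    snr_of: callable mapping a 0/1 selection vector alpha to the
    effective SNR rho (a stand-in for the paper's expression).
    """
    best, best_rho = None, -np.inf
    for subset in combinations(range(K), L):
        alpha = np.zeros(K)
        alpha[list(subset)] = 1
        rho = snr_of(alpha)
        if rho > best_rho:
            best, best_rho = subset, rho
    return best, best_rho

# toy stand-in SNR: sum of per-relay gains (for illustration only)
gains = np.array([0.3, 1.2, 0.1, 2.0, 0.5, 0.9, 1.5, 0.2])
sel, rho = best_subset(8, 4, lambda a: float(a @ gains))  # picks the 4 largest gains
```

The search visits $\binom{K}{L}$ subsets, which is exactly the exponential cost that motivates the semi-definite programming relaxations developed below.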

Relay selection can be implemented at the destination node (receiver). In this case, the receiver is assumed to know all instantaneous channel state information for the source-relay and relay-destination paths, which may be obtained through channel estimation. After a relay selection algorithm is performed at the destination node, ${\alpha}_{{k}_{1}}^{\left(t\right)}=\cdots ={\alpha}_{{k}_{L}}^{\left(t\right)}=1$ are obtained, and the destination node then feeds back one-bit relay selection information to each relay node. The superscript ^{(t)} used in this section will be omitted in the rest of the article to simplify notation whenever no ambiguity arises.

## Fixed number multiple-source multiple-relay selection

In this section, the relay selection order is fixed to *L*, where *L*>1. This class of approaches is referred to as fixed number multiple-source multiple-relay selection (FN-MSMRS), and the corresponding set partition of relay indices for relay selection can be defined as the selected subset $\left\{{k}_{1},\dots ,{k}_{L}\right\}\subseteq \Omega $ and its complement.

### Optimal FN-MSMRS

The optimal FN-MSMRS (OFN-MSMRS) performs an exhaustive search over all $\binom{K}{L}$ candidate relay subsets of size *L* and selects the subset that maximizes the effective SNR *ρ*.

### Fixed number MSMRS based on semi-definite programming optimization

The effective SNR can be expressed in the quadratic fractional form

$$ \rho =\frac{{\mathbf{p}}^{\mathcal{T}}{\mathbf{P}}^{1/2}\mathrm{Re}\left\{{\mathbf{A}}_{s}\right\}{\mathbf{P}}^{1/2}\mathbf{p}}{{\mathbf{p}}^{\mathcal{T}}{\mathbf{P}}^{1/2}{\mathbf{A}}_{n}{\mathbf{P}}^{1/2}\mathbf{p}+1}, $$

where **p** is a real vector with $\left\{0,1\right\}$ entries, ${\mathbf{A}}_{s}$ is a Hermitian matrix, and ${\mathbf{A}}_{n}$ is a real-valued matrix. It can be readily checked from (6) that

$$ \mathbf{p}=\frac{1}{2}\left(\mathbf{c}+{\mathbf{1}}_{K}\right),\qquad \mathbf{c}\in {\left\{-1,1\right\}}^{K}, $$

where ${\mathbf{1}}_{K}$ is an all-one column vector of length *K*. For an arbitrary matrix **M** of size *K*×*K*, the following relationship always holds,

$$ {\mathbf{p}}^{\mathcal{T}}\mathbf{M}\mathbf{p}=\frac{1}{4}{\left(\mathbf{c}+{\mathbf{1}}_{K}\right)}^{\mathcal{T}}\mathbf{M}\left(\mathbf{c}+{\mathbf{1}}_{K}\right)={\underline{\mathbf{c}}}^{\mathcal{T}}f\left(\mathbf{M}\right)\underline{\mathbf{c}}, $$

where $\underline{\mathbf{c}}={\left[1,{\mathbf{c}}^{\mathcal{T}}\right]}^{\mathcal{T}}$ and the $\left(K+1\right)\times \left(K+1\right)$ matrix $f\left(\mathbf{M}\right)$ is obtained from **M** by a function *f* defined as

$$ f\left(\mathbf{M}\right)=\frac{1}{4}\left[\begin{array}{cc}{\mathbf{1}}_{K}^{\mathcal{T}}\mathbf{M}{\mathbf{1}}_{K}& {\mathbf{1}}_{K}^{\mathcal{T}}\mathbf{M}\\ \mathbf{M}{\mathbf{1}}_{K}& \mathbf{M}\end{array}\right]. $$

With this transformation, the effective SNR can be written as

$$ \rho =\frac{{\underline{\mathbf{c}}}^{\mathcal{T}}\mathbf{S}\,\underline{\mathbf{c}}}{{\underline{\mathbf{c}}}^{\mathcal{T}}\mathbf{N}\,\underline{\mathbf{c}}+1}, $$

where $\mathbf{S}=f\left({\mathbf{P}}^{1/2}\mathrm{Re}\left\{{\mathbf{A}}_{s}\right\}{\mathbf{P}}^{1/2}\right)$ and $\mathbf{N}=f\left({\mathbf{P}}^{1/2}{\mathbf{A}}_{n}{\mathbf{P}}^{1/2}\right)$.
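The quadratic-form identity behind this vector transformation (with $\mathbf{p}=\frac{1}{2}\left(\underline{\mathbf{c}}+1\right)$, as also used in step 8 of Algorithm 2) can be checked numerically. The homogenization map `f` below is the standard semi-definite-relaxation construction; we assume it matches the paper's definition.

```python
import numpy as np

def f(M):
    """Homogenization map: (K x K) M -> (K+1 x K+1) matrix such that
    p^T M p = c_bar^T f(M) c_bar, with p = (c + 1)/2, c in {-1, 1}^K,
    and c_bar = [1, c^T]^T.  Standard SDP-relaxation construction (assumed)."""
    K = M.shape[0]
    one = np.ones((K, 1))
    top = np.block([[one.T @ M @ one, one.T @ M]])  # 1 x (K+1) first row
    bot = np.block([[M @ one, M]])                  # K x (K+1) remaining rows
    return 0.25 * np.vstack([top, bot])

# numerical check of the identity on a random symmetric matrix
rng = np.random.default_rng(1)
K = 5
M = rng.normal(size=(K, K))
M = (M + M.T) / 2
c = rng.choice([-1.0, 1.0], size=K)
p = (c + 1) / 2
c_bar = np.concatenate(([1.0], c))
lhs = p @ M @ p              # quadratic form in the 0/1 vector p
rhs = c_bar @ f(M) @ c_bar   # same value in the lifted +/-1 variables
```

Expanding $\frac{1}{4}(\mathbf{c}+\mathbf{1}_K)^{\mathcal{T}}\mathbf{M}(\mathbf{c}+\mathbf{1}_K)$ reproduces the four blocks of `f(M)`, which is why the two quantities agree for every choice of **c**.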

where **G**_{k}, *k*=1,…,*K*, are all-zero matrices except ${\left[{\mathbf{G}}_{k}\right]}_{k,k}=1$, *k*=1,…,*K*. The fixed relay selection order is quantified in (18c). Note that it is necessary to include individual relay selection factor constraints, such as (18d), which are related to individual relay power constraints; otherwise, the individual relay selection factors could become arbitrary in the optimization process. Using only the vector **p**, it is hard to quantify individual relay selection factor constraints. However, based on the vector transformation in (13) and (15), individual relay selection factor constraints can be advantageously written in the form shown in (18d).

By introducing an auxiliary variable *u*, where $\frac{\mathrm{tr}\left(\mathbf{S}\mathbf{B}\right)}{\mathrm{tr}\left(\mathbf{N}\mathbf{B}\right)+1}\ge u$, the above optimization problem can be written as

The optimization problem (20) is still non-convex. However, using the bisection Algorithm 1 shown in the Appendix, with the aid of convex programming tools such as CVX [46, 47], which we used in the simulations, problem (20) can be solved iteratively: within each loop of the bisection Algorithm 1, *u* acts as a constant, the problem becomes quasi-convex, and it can be efficiently solved by standard interior point algorithms based on semi-definite programming (SDP) [48]. Denote the optimal estimate of **B** obtained through the proposed bisection Algorithm 1 as $\hat{\mathbf{B}}$.

where *λ*>0 is a scaling variable chosen to make sure that the transformed denominator equals one.

The problem (26) can now be solved using semi-definite programming without requiring a bisection algorithm. Note that the above method can be considered an extension of the Charnes–Cooper algorithm [49] from linear fractional programming to quadratic fractional programming.
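The Charnes–Cooper idea is to absorb the fractional denominator into a scaling variable. A minimal numerical sketch of the scaling identity, using toy vectors rather than the paper's matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
a = rng.normal(size=n)           # numerator coefficients (toy)
b = np.abs(rng.normal(size=n))   # denominator coefficients, kept positive
x = np.abs(rng.normal(size=n))   # an arbitrary feasible point

# fractional objective value at x
frac = (a @ x) / (b @ x + 1.0)

# Charnes-Cooper scaling: lam = 1 / (b^T x + 1), y = lam * x
lam = 1.0 / (b @ x + 1.0)
y = lam * x

# the scaled problem's linear objective reproduces the fractional value,
# and the normalization constraint b^T y + lam = 1 holds exactly
linear = a @ y
constraint = b @ y + lam
```

Because the mapping (x) → (y, λ) is invertible via x = y/λ, maximizing the linear objective under the normalization constraint is equivalent to maximizing the original fraction; the matrix version in (26) follows the same pattern with trace inner products.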

Note that the above solutions are obtained by removing the rank-1 constraint (19e), which may lead to an increased problem dimension. Thus, it is required to convert the semi-definite relaxation solution to a Boolean solution. In [45, 50, 51], a randomization method has been introduced to achieve this conversion. Note that in those works, the randomization approach is implemented without additional constraints. Here, we extend the randomization approach to support extra constraints, such as (20c). Based on the randomization procedure proposed in the Appendix, the decision of $\underline{\mathbf{c}}$, denoted $\hat{\underline{\mathbf{c}}}$, can be obtained, where $\hat{\underline{\mathbf{c}}}={\left[{\left[\hat{\mathbf{c}}\right]}_{1,2},\dots ,{\left[\hat{\mathbf{c}}\right]}_{1,K+1}\right]}^{\mathcal{T}}$ and ${\left[\hat{\mathbf{c}}\right]}_{1,k}$ is the *k*th entry of $\hat{\mathbf{c}}$. It should be noted that Steps 9) and 10) of Algorithm 2 are introduced to satisfy constraint (20c). In [45, 50, 51], only $\mathbf{c}=\text{sign}\left({\mathbf{V}}^{\mathcal{T}}\mathbf{u}\right)$ is used in the randomization process. However, it has been further proved that $\pm \text{sign}\left(\mathbf{c}\right)=\text{sign}\left({\mathbf{V}}^{\mathcal{T}}\mathbf{u}\right)$ holds with probability 1 in Property 2 of [45]^{a}. Thus it is meaningful to perform both "+" and "−" sign operations in the randomization process, as we have proposed in Steps 9) and 10) of Algorithm 2.

The four resulting SDP-based FN-MSMRS schemes are denoted as follows:

- 1. solving Problem (20) and using randomization procedure Algorithm 2: SDPFN-MSMRS B1;
- 2. solving Problem (20) and using randomization procedure Algorithm 2 without step 10): SDPFN-MSMRS A1;
- 3. solving Problem (26) and using randomization procedure Algorithm 2: SDPFN-MSMRS B2;
- 4. solving Problem (26) and using randomization procedure Algorithm 2 without step 10): SDPFN-MSMRS A2.

Note that both the solutions of SDPFN-MSMRS B1 and SDPFN-MSMRS A1 require bisection algorithms to solve SDP problems iteratively, while both the solutions of SDPFN-MSMRS B2 and SDPFN-MSMRS A2 do not.

### Best worse FN-MSMRS and random FN-MSMRS

For each relay, define the metric *a*_{k}, *k*=1,…,*K*, which measures the quality of the worse of the two hops of relay *k*. Then, permute *a*_{k} in descending order such that *a*_{σ(1)}≥⋯≥*a*_{σ(K)}, where $\sigma \left(\cdot\right)$ denotes the permutation function. This yields the selected relay set $\left\{\sigma \left(1\right),\dots ,\sigma \left(L\right)\right\}$, and such a selection criterion is termed best worse FN-MSMRS (BWFN-MSMRS).
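This ranking rule can be sketched directly. We assume the metric is the minimum of the two hop gains (the exact definition of *a*_{k} is not shown in this excerpt); the arrays below are illustrative.

```python
import numpy as np

def bwfn_select(h_gain, g_gain, L):
    """Best-worse selection: rank relays by the quality of their worse hop
    (assumed metric a_k = min(source->relay, relay->dest)) and keep top L."""
    a = np.minimum(h_gain, g_gain)   # a_k per relay
    order = np.argsort(a)[::-1]      # descending permutation sigma
    return np.sort(order[:L])        # indices of the L best relays

# illustrative per-hop channel gains for K = 8 relays
h_gain = np.array([0.9, 0.2, 1.5, 0.7, 2.0, 0.4, 1.1, 0.3])
g_gain = np.array([0.5, 1.8, 0.6, 0.9, 0.1, 1.2, 1.0, 0.8])
sel = bwfn_select(h_gain, g_gain, L=4)
```

Note how relay 4, despite its strong first hop (2.0), is rejected because its second hop (0.1) bottlenecks the two-hop link; this is exactly the intuition behind ranking by the worse hop.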

For comparison purposes, we also define random fixed number MSMRS (RANDFN-MSMRS), which randomly selects *L* relays, as a baseline benchmark FN-MSMRS scheme.

## Varied number multiple-source multiple-relay selection (MSMRS)

For comparison purposes, a baseline benchmark VN-MSMRS scheme using predetermined relay selection, PVN-MSMRS, is also defined. In this scheme, a feasible relay selection satisfying the given relay power constraints is chosen such that no further relay can be added without violating the given sum power constraint.

### Optimal VN-MSMRS (OVN-MSMRS)

In VN-MSMRS, the relay selection order *L* is no longer a fixed number but a variable to be chosen from the set $\left\{1,\dots ,K\right\}$. The proposed optimal selection criterion, OVN-MSMRS, becomes the joint maximization of the effective SNR over both the relay subset and the selection order *L*, subject to the sum power constraint (28).

### Varied number MSMRS based on semi-definite programming optimization

The optimization problem (29) can be solved using a bisection procedure similar to the proposed Algorithm 1 depicted in the Appendix. The difference is that Step 4) of the bisection procedure for VN-MSMRS is changed to "solve the SDP optimization problem (29)." To obtain the estimate of $\underline{\mathbf{c}}$, $\hat{\underline{\mathbf{c}}}$, the randomization procedure Algorithm 3 for VN-MSMRS is proposed in the Appendix.

The above proposed MSMRS based on semi-definite programming is defined as SDPVN-MSMRS: the SDPVN-MSMRS using randomization procedure Algorithm 3 is termed SDPVN-MSMRS B, while the SDPVN-MSMRS using randomization procedure Algorithm 3 without steps 9) and 10) is called SDPVN-MSMRS A.

## Numerical results

In this section, we present the performance in terms of the sum capacity per time channel use for the relay selection approaches under consideration. In all figures, the horizontal axis indicates the unit power *P*, and ${P}_{k}^{\left(R\right)},k=1,\dots ,K$, and ${P}_{m}^{\left(S\right)},m=1,\dots ,M$, are scaled values of *P*. In this section, the number of sources is set to *M*=2. We further assume that the channels ${h}_{m}^{(k,t)}$ and ${g}_{k}^{\left(t\right)}$, *m*=1,…,*M* and *k*=1,…,*K*, are Rayleigh fading channel gains (modeled as complex Gaussian with zero mean and unit variance), and they change independently over different time channel uses.

### FN-MSMRS results

In Figures 1 and 2, we assume *K*=8, *L*=4, ${P}_{k}^{\left(R\right)}=\mathrm{PM},k=1,\dots ,K$, and ${P}_{m}^{\left(S\right)}=P,m=1,\dots ,M$. The settings of the randomization procedure in SDPFN-MSMRS A1, SDPFN-MSMRS A2, SDPFN-MSMRS B1, and SDPFN-MSMRS B2 are *N*_{c}=2 and *N*_{l}=14.

From Figures 1 and 2, we observe that, to achieve the same average sum capacity per time channel use,

- 1. SDPFN-MSMRS A1, SDPFN-MSMRS A2, SDPFN-MSMRS B1, and SDPFN-MSMRS B2 use less unit power *P* than BWFN-MSMRS by 1.6 and 1.3 dB, respectively;
- 2. BWFN-MSMRS uses less unit power *P* than RANDFN-MSMRS by only 2.2 dB;
- 3. with the advantage of lower complexity, SDPFN-MSMRS A1, SDPFN-MSMRS A2, SDPFN-MSMRS B1, and SDPFN-MSMRS B2 require more unit power *P* than OFN-MSMRS by 2.2 and 2.5 dB, respectively.

It is observed that both SDPFN-MSMRS B1 and SDPFN-MSMRS B2 achieve notably higher average sum capacity than both SDPFN-MSMRS A1 and SDPFN-MSMRS A2 for the same unit power *P*. This also verifies the importance of step 10) of Algorithm 2. With performance very close to that of SDPFN-MSMRS A1 and SDPFN-MSMRS B1, respectively, SDPFN-MSMRS A2 and SDPFN-MSMRS B2 are quite computationally efficient because they avoid the need for an additional bisection algorithm.

### VN-MSMRS results

In this section, the settings of the randomization procedure in SDPVN-MSMRS B and SDPVN-MSMRS A are *N*_{c}=2 and *N*_{l}=14.

In Figures 3 and 4, we assume *K*=8, *M*=2, *P*^{(Sum)}=4*PM*, ${P}_{k}^{\left(R\right)}=\mathrm{PM},k=1,\dots ,K$, and ${P}_{m}^{\left(S\right)}=P,m=1,\dots ,M$. From Figure 3, we observe that, to achieve the same average sum capacity per time channel use,

- 1. SDPVN-MSMRS B and SDPVN-MSMRS A use less unit power *P* than PVN-MSMRS by 4.2 and 3.9 dB, respectively;
- 2. with the advantage of lower complexity, SDPVN-MSMRS B and SDPVN-MSMRS A require more unit power *P* than OVN-MSMRS by 1.4 and 1.7 dB, respectively.

Unlike in Figures 3 and 4, the relay powers in Figures 5 and 6 are not uniformly distributed, and we assume *K*=8, *M*=2, *P*^{(Sum)}=3.62*PM*, $\left\{{P}_{k}^{\left(R\right)}=\mathrm{PM},k=1,2\right\}$, $\left\{{P}_{k}^{\left(R\right)}=0.65\mathrm{PM},k=3,4\right\}$, $\left\{{P}_{k}^{\left(R\right)}=0.4\mathrm{PM},k=5,\dots ,8\right\}$, and $\left\{{P}_{m}^{\left(S\right)}=P,m=1,\dots ,M\right\}$. In Figure 5, conclusions similar to those drawn from Figure 3 hold, albeit with different gains. For example, in Figure 5, SDPVN-MSMRS B uses 3.55 dB less unit power *P* than PVN-MSMRS. The above results verify the importance of steps 9) and 10) of Algorithm 3.

### Comparison between OFN-MSMRS and OVN-MSMRS

In Figures 7 and 8, we compare OFN-MSMRS with OVN-MSMRS under the same power constraints, and we assume *K*=8, *M*=2, *P*^{(Sum)}=4*PM*, $\left\{{P}_{k}^{\left(R\right)}=\mathrm{PM},k=1,\dots ,K\right\}$, and $\left\{{P}_{m}^{\left(S\right)}=P,m=1,\dots ,M\right\}$. Note that, for OFN-MSMRS, *P*^{(Sum)}=4*PM* is equivalent to setting *L*=4. It is evident that OVN-MSMRS outperforms OFN-MSMRS under the same power constraints. This implies that the best selection solution for some channel realizations may not always use the full sum power budget.
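This relationship can be illustrated with a toy enumeration: the varied number search ranges over every subset size allowed by the sum power budget, which includes all size-*L* subsets, so its optimum is never worse. The stand-in SNR function below is purely illustrative, chosen so that adding relays also adds noise.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(3)
K, L = 8, 4
Pr = np.ones(K)          # equal relay powers
P_sum = 4.0              # sum power budget: worth exactly L relays
gains = rng.random(K)

def rho(subset):
    """Toy stand-in for the effective SNR: more relays add gain and noise."""
    a = gains[list(subset)]
    return a.sum() / (1.0 + 0.5 * len(subset))

# fixed number: exactly L relays
fn_best = max(rho(s) for s in combinations(range(K), L))

# varied number: any size meeting the sum power constraint
vn_best = max(
    rho(s)
    for n in range(1, K + 1)
    for s in combinations(range(K), n)
    if Pr[list(s)].sum() <= P_sum
)
```

Since every size-*L* subset also satisfies the power budget here, `vn_best >= fn_best` holds by construction, mirroring the Figure 7/8 comparison.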

Note that the complexity of optimal MSMRS increases significantly as *K* becomes larger. In the simulations, we choose a small value, *K*=8, to reduce simulation time. For such small *K*, the complexity advantage of the proposed approaches may not be significant. However, as *K* increases, the complexity advantage of the approaches proposed in Sections 'Fixed number multiple-source multiple-relay selection' and 'Varied number multiple-source multiple-relay selection' becomes more pronounced.

## Conclusion

Based on the sum capacity maximization criterion, we have proposed a number of multiple-relay selection approaches for multiple simultaneously transmitting source nodes with fixed-power relays in an amplify-and-forward cooperative relay network. We have proposed computationally efficient algorithms based on semi-definite programming for MSMRS with both a fixed number and a varied number of relays. We have demonstrated that optimal varied number MSMRS outperforms optimal fixed number MSMRS under the same sum power constraints. Although we have discussed convex relaxation approaches in this article, a future research direction is to investigate non-convex-relaxation approaches with better performance, such as those in [52, 53, 54].

## Endnote

^{a} In [45], the authors express the sign operation using the notation "*σ*" instead of "sign".

## Appendix

### Algorithms

#### Algorithm 1 Bisection procedure

- 1. Initialize the upper and lower limits of *u*, *u*^{(U)} and *u*^{(L)};
- 2. If $\left|{u}^{\left(U\right)}-{u}^{\left(L\right)}\right|<\epsilon $, go to step 7); otherwise go to step 3);
- 3. $u:=\frac{1}{2}\left({u}^{\left(U\right)}+{u}^{\left(L\right)}\right)$;
- 4. Perform the SDP optimization procedure for problem (20);
- 5. If the optimization problem (20) is infeasible or unbounded, { *u*^{(U)}:=*u*; } else { *u*^{(L)}:=*u*; $\hat{\mathbf{B}}=\mathbf{B}$; }
- 6. Go to step 2);
- 7. The optimization procedure ends.
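The skeleton of this bisection can be sketched generically. The `feasible(u)` callback below stands in for solving SDP (20) at level *u*; the toy feasibility test is ours, used only to exercise the loop.

```python
def bisect_max(feasible, u_lo, u_hi, eps=1e-9):
    """Bisection for the largest u such that feasible(u) holds
    (skeleton of Algorithm 1; feasible() stands in for solving SDP (20))."""
    while abs(u_hi - u_lo) >= eps:
        u = 0.5 * (u_hi + u_lo)
        if feasible(u):
            u_lo = u        # problem feasible at level u: raise the lower limit
        else:
            u_hi = u        # infeasible/unbounded: lower the upper limit
    return u_lo

# toy quasi-convex level problem: level u is achievable iff u^2 <= 2
u_star = bisect_max(lambda u: u * u <= 2.0, 0.0, 10.0)
```

Each iteration halves the interval, so the loop needs about $\log_{2}\left(\left({u}^{(U)}-{u}^{(L)}\right)/\epsilon \right)$ SDP solves, which is the source of the extra cost that the Charnes–Cooper-style reformulation (26) avoids.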

#### Algorithm 2 Randomization procedure for FN-MSMRS

- 1. Compute **V** such that $\hat{\mathbf{B}}={\mathbf{V}}^{\mathcal{T}}\mathbf{V}$, where $\mathbf{V}=\left[{\mathbf{v}}_{1},\dots ,{\mathbf{v}}_{K}\right]$;
- 2. Set *a*_{c}=0, *a*_{s}=0, and ${\rho}^{(max)}=0$;
- 3. If *a*_{s}≠0, go to step 13); otherwise go to step 4);
- 4. If *a*_{c}≥*N*_{c}, { choose $\hat{\underline{\mathbf{c}}}$ using BWFN-MSMRS such that *L* entries of $\hat{\underline{\mathbf{c}}}$ equal 1 and the rest equal −1, then go to step 13); }
- 5. Set *a*_{l}=0;
- 6. Choose a random vector **u** from the uniform distribution on the unit sphere;
- 7. Compute $\mathbf{c}=\text{sign}\left({\mathbf{V}}^{\mathcal{T}}\mathbf{u}\right)$, and thus obtain $\underline{\mathbf{c}}$ as in (15);
- 8. Compute $\mathbf{p}=\frac{1}{2}\left(\underline{\mathbf{c}}+1\right)$;
- 9. If $\left(\sum _{k=1}^{K}{\left[\mathbf{p}\right]}_{1,k}\right)==L$, { compute *ρ* based on (18b); if $\rho >{\rho}^{(max)}$, set ${\rho}^{(max)}=\rho $ and $\hat{\underline{\mathbf{c}}}=\underline{\mathbf{c}}$; *a*_{s}=1; }
- 10. If $\left(K-\sum _{k=1}^{K}{\left[\mathbf{p}\right]}_{1,k}\right)==L$, { compute **c**=−**c**, ${\left[\mathbf{c}\right]}_{1,1}=1$, and thus obtain $\underline{\mathbf{c}}$; compute *ρ* based on (18b); if $\rho >{\rho}^{(max)}$, set ${\rho}^{(max)}=\rho $ and $\hat{\underline{\mathbf{c}}}=\underline{\mathbf{c}}$; *a*_{s}=1; }
- 11. *a*_{l}=*a*_{l}+1;
- 12. If *a*_{l}≥*N*_{l}, { *a*_{c}=*a*_{c}+1; go to step 3); }
- 13. The randomization procedure ends.
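The core of Algorithm 2 (factor $\hat{\mathbf{B}}={\mathbf{V}}^{\mathcal{T}}\mathbf{V}$, draw **u** on the unit sphere, keep sign patterns with exactly *L* ones, and also try the flipped pattern of steps 9 and 10) can be sketched as follows. Here `rho_of` is a stand-in for evaluating (18b), and the eigendecomposition-based factorization is an implementation choice of ours.

```python
import numpy as np

def randomize_fn(B_hat, L, rho_of, N_l=14, rng=None):
    """Gaussian-sphere randomization for FN-MSMRS (sketch of Algorithm 2).

    B_hat: (K+1)x(K+1) PSD relaxation solution.
    rho_of: callable mapping a 0/1 selection vector p to the objective
    (a stand-in for evaluating (18b))."""
    rng = rng or np.random.default_rng()
    # factor B_hat = V^T V via eigendecomposition (clip tiny negatives)
    w, Q = np.linalg.eigh(B_hat)
    V = np.sqrt(np.clip(w, 0, None))[:, None] * Q.T
    best_p, best_rho = None, -np.inf
    for _ in range(N_l):
        u = rng.normal(size=V.shape[0])
        u /= np.linalg.norm(u)            # uniform direction on unit sphere
        c = np.sign(V.T @ u)
        c[c == 0] = 1.0                   # guard against exact zeros
        for cand in (c, -c):              # steps 9) and 10): try both signs
            cb = cand / cand[0]           # force leading entry to +1
            p = 0.5 * (cb[1:] + 1.0)      # map {-1, 1} -> {0, 1}
            if p.sum() == L:              # keep the fixed selection order
                r = rho_of(p)
                if r > best_rho:
                    best_p, best_rho = p, r
    return best_p, best_rho

# toy check: a rank-1 B_hat built from a known c_bar recovers that pattern
c_bar = np.array([1.0, 1.0, -1.0, 1.0, -1.0])   # K = 4, L = 2 selection
p_hat, rho_hat = randomize_fn(np.outer(c_bar, c_bar), L=2,
                              rho_of=lambda p: float(p.sum()),
                              rng=np.random.default_rng(0))
```

When $\hat{\mathbf{B}}$ is exactly rank 1, every random direction reproduces the same sign pattern (up to a global flip), so the rounding is lossless; the fallback to BWFN-MSMRS in step 4) of Algorithm 2 covers the case where no draw meets the order constraint.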

#### Algorithm 3 Randomization procedure for VN-MSMRS

- 1. Compute **V** such that $\hat{\mathbf{B}}={\mathbf{V}}^{\mathcal{T}}\mathbf{V}$, where $\mathbf{V}=\left[{\mathbf{v}}_{1},\dots ,{\mathbf{v}}_{K}\right]$ and **v**_{k} is the *k*th column vector of **V**;
- 2. Set *a*_{c}=0, *a*_{s}=0, and ${\rho}^{(max)}=0$;
- 3. If *a*_{s}≠0, go to step 13); otherwise go to step 4);
- 4. If *a*_{c}≥*N*_{c}, { choose $\hat{\underline{\mathbf{c}}}$ using optimal multiple-source single-relay selection such that the sum power constraint (28) is satisfied, then go to step 13); }
- 5. Set *a*_{l}=0;
- 6. Choose a random vector **u** from the uniform distribution on the unit sphere;
- 7. Compute $\mathbf{c}=\text{sign}\left({\mathbf{V}}^{\mathcal{T}}\mathbf{u}\right)$, and thus obtain $\underline{\mathbf{c}}$;
- 8. If the sum power constraint (28) is satisfied, { compute *ρ* based on (18b); if $\rho >{\rho}^{(max)}$, set ${\rho}^{(max)}=\rho $ and $\hat{\underline{\mathbf{c}}}=\underline{\mathbf{c}}$; *a*_{s}=1; }
- 9. Compute **c**=−**c**, ${\left[\mathbf{c}\right]}_{1,1}=1$, and thus obtain $\underline{\mathbf{c}}$;
- 10. If the sum power constraint (28) is satisfied, { using **c**, compute *ρ* based on (18b); if $\rho >{\rho}^{(max)}$, set ${\rho}^{(max)}=\rho $ and $\hat{\underline{\mathbf{c}}}=\underline{\mathbf{c}}$; *a*_{s}=1; }
- 11. *a*_{l}=*a*_{l}+1;
- 12. If *a*_{l}≥*N*_{l}, { *a*_{c}=*a*_{c}+1; go to step 3); }
- 13. The randomization procedure ends.

## Acknowledgements

The study was performed when J. Wu was with the Center for Advanced Communications, Villanova University, Villanova, PA 19085, USA.


## Copyright information

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.