Capacity-achieving probability measure for a reduced number of signaling points
Abstract
Putting bounding constraints on the input of a channel leads in many cases to a discrete capacity-achieving distribution with a finite support. Given a finite number of signaling points, we determine reduced subsets and the corresponding optimal probability measures to simplify the receiver design. The objective for the subset selection is to keep the channel quality high by maximizing mutual information and cutoff rate. Two approaches are introduced to obtain a capacity-achieving probability measure for the reduced subset. The first one is based on a preceded signaling point selection while the second one chooses the signaling points and corresponding probabilities simultaneously. Numerical results for both approaches show that using only a small number of signaling points achieves a very high mutual information compared to channels utilizing the full set of signaling points.
Keywords
Bounded-input constraint · Capacity-achieving measure · Blahut-Arimoto algorithm · Reduced signaling set · Discrete signaling

1 Introduction
As shown by Shannon, the scalar additive Gaussian noise channel subject to average power constraints achieves capacity if the input distribution is Gaussian as well. This result transfers to complex circularly symmetric Gaussian vector channels, as Telatar showed in [1]. His general model particularly applies to multiple-input multiple-output (MIMO) transmission systems. However, due to the unlimited support of the normal distribution, this input is not realizable in practice. Thus, peak power constraints of different types have been imposed to avoid unbounded power requirements for the transmitter. It is a very interesting implication that the capacity-achieving input distribution then becomes discrete with finite support. Many challenging questions arise in this context. A good overview of previous research on this topic is given in [2]. The discreteness of the capacity-achieving distribution was shown in [3] for the real and in [4] for the complex Gaussian channel. By considering conditionally Gaussian vector channels subject to bounded-input constraints by some bounded set \({{\mathcal S}\subset{\mathbb{R}}^n}\), reference [5] generalizes a number of previous papers on the subject. Under certain conditions on \({{\mathcal S}}\), the capacity-achieving distribution is discrete, which includes the previously mentioned channels as special cases. Non-coherent additive white Gaussian noise channels are investigated in [6], and it is shown that the optimum distribution is discrete. The same conclusion was drawn for general fading channels in [7] and for Rician fading channels in [8]. Related topics are discussed in the following two references. In [9] a characterization of the optimum number of mass points is given. Reference [10] investigates the optimum constellation of M equiprobable complex signals for an additive Gaussian channel under average power constraints such that the error probability is minimized.
Summarizing the above, for practical purposes it is sufficient to investigate signaling constellations with a maximum number M of mass points. In this context, the following general problem of utmost interest arises. Given a closed and bounded subset \({{\mathcal S}\subset{\mathbb{R}}^n}\) of possible signaling points, determine a discrete input distribution consisting of a maximum number M of support points \({x_1,\ldots,x_M\in{\mathcal S}}\) and probabilities P(X = x _{ i }) = p _{ i }, 1 ≤ i ≤ M, which maximizes the mutual information between channel input and output, and is thus capacity-achieving in the set of discrete distributions over \({{\mathcal S}}\) with at most M support points. Note that the optimum solution may exhibit p _{ i } = 0 for some \(i\in\{1,\ldots,M\}\), such that the number of effectively used points may be less than M. For the special case of conditionally Gaussian channels and an unrestricted number of signaling points, a partial answer is given in [2]. However, in general this seems to be a hard problem.
In this paper, we confine ourselves to a large, finite constellation set and ask the question of how to select both a small subset of prescribed cardinality and the associated probability measure such that the mutual information is highest, continuing some of the topics in [11]. As analytical results are extremely difficult to achieve, it is of interest to find algorithms that provide good input distributions and probabilities. The main purpose of using only a small number of signaling points is to simplify the receiver design and corresponding decoding algorithms.
The classical Blahut-Arimoto algorithm computes in an elegant way the capacity of a discrete memoryless channel [12, 13]. In the context of our distribution selection, we extend the algorithm to the considered case of discrete input and continuous output. We prove convergence of the extended algorithm by means of an optimality criterion. As the standard Blahut-Arimoto algorithm is computationally quite complex, several enhancements have been proposed. The most important ones, the natural-gradient-based algorithm and the accelerated Blahut-Arimoto algorithm, are mentioned in [14]. They converge significantly faster to the capacity-achieving distribution and can be extended to our system model. References [15, 16, 17] suggest another interesting enhancement, the iterated Blahut-Arimoto algorithm. It was proposed for discrete memoryless channels, and we extend it to a discrete-input continuous-output channel.
The material in this correspondence is organized as follows. First, we introduce the precise system model and the problem description in Sect. 2. In Sect. 3 we solve the subset selection by utilizing semidefinite programming and two different relaxation techniques. The probability measure optimization is considered in Sect. 4. Two different approaches are analyzed. The first one is a successive subset probability measure selection, while the second one updates both chosen signaling points and probability measure simultaneously. Numerical results are presented in Sect. 5. The paper concludes with a short summary and outlook in Sect. 6.
2 System model and prerequisites
As outlined in the introduction, a challenging task is to choose a fixed size subset from the whole finite support and the corresponding distribution such that the channel is utilized optimally in terms of maximizing the mutual information between the channel input and the channel output. Being more precise, the task is to find a subset \({{\mathcal X}'\subset{\mathcal X}}\) with cardinality K for some fixed number K and the associated probability measure \(\user2{p}=(p_1,\ldots,p_M)\) where p _{ i } = 0 for \({i\in {\mathcal X}\backslash {\mathcal X}'.}\)
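For concreteness, the objective to be maximized can be written as \(I(\user2{p}) = \sum_i p_i D(f_i\|\sum_j p_j f_j)\). The following sketch evaluates this mutual information numerically; it is illustrative only and assumes the conditional densities f _{ i } have been tabulated on a discretized output grid (the function name and grid representation are our own choices, not part of the paper).

```python
import numpy as np

def mutual_information(p, f, dy):
    """I(p) = sum_i p_i * int f_i(y) log( f_i(y) / sum_j p_j f_j(y) ) dy.

    p  : (M,) probability vector over the signaling points
    f  : (M, G) conditional densities f_i evaluated on an output grid of G cells
    dy : cell volume of the grid, so integrals become sums times dy
    """
    f0 = p @ f                       # mixture density sum_j p_j f_j on the grid
    eps = 1e-300                     # guard against log(0) where densities vanish
    ratio = np.log(np.maximum(f, eps)) - np.log(np.maximum(f0, eps))
    # KL divergence D(f_i || f0) for every i, then average with weights p
    kl = (f * ratio).sum(axis=1) * dy
    return float(p @ kl)             # mutual information in nats
```

With two non-overlapping densities and equal probabilities, this evaluates to log 2, as expected for a noiseless binary choice.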
As was shown in [18], a necessary and sufficient condition for a distribution to be capacity-achieving is given by the following proposition. This proposition will be used to show the optimality of our converging algorithm in Sect. 4.
Proposition 1
3 Selecting the subset
In what follows, we consider two different criteria to find the best subset \({{\mathcal X}'}\) of given size K.
3.1 Subset selection using semidefinite programming
Subset selection algorithm
1: Initialization: 
n = M + 2, 
\(\user2{A}={\int}\left[\begin{array}{l} \sqrt{f_1(\user2{y})}\\ \vdots\\ \sqrt{f_M(\user2{y})}\\ \end{array}\right]\cdot \left[ \begin{array}{l} \sqrt{f_1(\user2{y})}\\ \vdots\\ \sqrt{f_M(\user2{y})} \\ \end{array}\right]^T \hbox{d}\user2{y},\) 
\(\user2{B}=\left[ \begin{array}{ll} \user2{A}&{\bf 0}_{M \times 2}\\ {\bf 0}_{2 \times M}&{\bf 0}_{2 \times 2} \\ \end{array}\right],\) 
define an empty vector \(\user2{r}\); 
2: Solve the semidefinite problem: 
\(\hat{\user2{S}}= \hbox{argmin}_{\user2{S}\geq{\bf 0}}\hbox{trace}(\user2{B}\user2{S}) \quad \hbox{s.t.}\) 
S _{ ii } = S _{ in }, ∀ i, S _{ nn } = 1, ∑ _{i=1} ^{ M } S _{ ni } = K, ∑ _{i=1} ^{ M } w _{ i } S _{ ni } + S _{n,M+1} = W. 
3: Adjustment: 
n = M + 1, 
delete the (M + 1)th row and (M + 1)th column of \(\hat{\user2{S}}\), 
\( \user2{B}=\left[ \begin{array}{ll} \user2{A} & {\bf 0}_{M \times 1}\\ {\bf 0}_{1 \times M} & 0 \\ \end{array}\right].\) 
4: Cholesky factorization: 
\(\hat{\user2{S}}=\hat{\user2{V}}^T\hat{\user2{V}}.\) 
5: Randomization: 
6: for \(i = 1,\ldots,N_{rand}\) do 
7: randomly generate a vector \(\user2{u}^{(i)}\) uniformly distributed on an n-dimensional unit sphere; 
8: compute \(\tilde{\user2{s}}^{(i)}=\hat{\user2{V}}^T \user2{u}^{(i)}, \quad \forall i;\) 
9: \(\tilde{s}_n^{(i)} \leftarrow \hbox {sign}(\tilde{s}_n^{(i)});\) 
10: \(\tilde{\user2{s}}^{(i)} \leftarrow \tilde{s}_n^{(i)}\tilde{\user2{s}}^{(i)};\) 
11: quantize the K highest entries of \(\left[\tilde{s}_1^{(i)},\ldots,\tilde{s}_M^{(i)}\right]\) to 1 and the others to 0; 
12: if \(\sum_{j:\tilde{s}_j^{(i)}=1}{w_j}\leq KW\) then \(t=\tilde{\user2{s}}^{(i)T} \user2{B} \tilde{\user2{s}}^{(i)}\), 
13: else continue 
14: end if 
15: if t is not yet in vector \(\user2{r } \) then 
r _{ i } = t, 
calculate channel capacity C _{ i } based on \(\tilde{\user2{s}}^{(i)}\). 
16: end if 
17: end for 
18: Choose \(\tilde{\user2{s}}=\hbox{argmax}_{\tilde{\user2{s}}^{(i)}}{C_i}.\) 
19: Take \(\user2{b}=\left[\tilde{s}_1,\ldots,\tilde{s}_M\right]^T\) as approximate solution. 
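The randomization stage of the algorithm (steps 4 through 11) can be sketched as follows. This is an illustrative NumPy fragment that assumes a PSD solution matrix from step 2 is already available (e.g., from an external SDP solver); for brevity it omits the weight constraint of step 12 and ranks candidates by the quadratic objective rather than recomputing the capacity as in step 15, so it is a sketch of the rounding idea, not the full algorithm.

```python
import numpy as np

def round_sdp_solution(S_hat, B, K, n_rand=100, rng=None):
    """Randomized rounding of a relaxed SDP solution.
    S_hat : (n x n) PSD matrix from the solver, B the cost matrix,
    K the target subset size. Returns a 0/1 selection vector over the
    first n-1 coordinates and its quadratic objective value."""
    rng = np.random.default_rng(rng)
    n = S_hat.shape[0]
    # Cholesky-type factor S_hat = V^T V; jitter in case S_hat is singular
    V = np.linalg.cholesky(S_hat + 1e-10 * np.eye(n)).T
    best_b, best_t = None, np.inf
    for _ in range(n_rand):
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)            # uniform on the unit sphere
        s = V.T @ u
        s *= np.sign(s[-1])               # fix the overall sign via entry n
        b = np.zeros(n - 1)
        b[np.argsort(s[:-1])[-K:]] = 1.0  # quantize the K largest entries to 1
        t = b @ B[:-1, :-1] @ b           # objective of the rounded candidate
        if t < best_t:
            best_b, best_t = b, t
    return best_b, best_t
```

The candidate with the smallest quadratic objective is returned; in the paper's algorithm the final choice is instead made by the capacity C _{ i } of each candidate subset.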
3.2 Heuristic improvement based on weights of signaling points

- the largest K − K _{1} entries,
- the second largest K _{1} entries,
- the third largest K _{2} entries, and
- the smallest M − K − K _{2} entries
Overview of algorithm names and abbreviations
Name | Abbreviation | Explanation
Subset selection algorithm with heuristic improvement | SSA | Chooses a signaling set of size K with respect to a uniform distribution
SSA plus iterated Blahut-Arimoto algorithm | SSA_IBA | Chooses a subset first and then improves the distribution
Truncation of the iterated Blahut-Arimoto algorithm | TIBA | Simultaneous subset and distribution selection
4 Selecting both signaling constellation and input distribution
In the previous section, we investigated the task of finding an optimum signaling constellation in a bounded set by considering a uniform distribution on the selected subset. Our numerical evaluations in Sect. 5 show that the corresponding mutual information is quite close to the capacity of the channel using the full signaling constellation. However, the uniform distribution we used for the obtained set is, of course, suboptimal, and we would like to approach the question of optimizing both the set of signaling points and the corresponding input probability measure. We investigate this task by two different procedures, which are compared in Sect. 5. First, the subset is chosen according to Sect. 3 and the input distribution is improved using an idea based on the Blahut-Arimoto algorithm. Second, we improve the selected subsets and probabilities simultaneously.
4.1 Successive subset and distribution selection
The approach described in Sect. 3 gives a subset of signaling points which is capacity-achieving in the set of uniform distributions with at most K support points. An open question is whether the chosen subset can be utilized in a better way by using a non-uniform distribution. This question is answered in the following.
The idea is based on the classical Blahut-Arimoto algorithm, which computes in an elegant way the capacity of a discrete memoryless channel [12, 13]. We extend the algorithm to the case considered here of discrete input and continuous output. Extending Theorem 1 in [12] to our system model, we obtain the following proposition.
Proposition 2
 1. The channel capacity is given by
$$ C=\max_{\user2{p}}I(\user2{p},\user2{f})= \max_{\user2{p}}\max_{{{\mathcal{P}}}}I_0(\user2{p},{{\mathcal{P}}},\,\user2{f}). $$
 2. For fixed \(\user2{p}\), \(I_0(\user2{p},{\mathcal{P}},\user2{f})\) is maximized by
$$ P(\user2{x}_i\mid\user2{y})=\frac{p_i f_i} {\sum_{j=1}^M p_j f_j}. \quad (5) $$
 3. For fixed \({\mathcal{P}}\), \(I_0(\user2{p},{\mathcal{P}},\user2{f})\) is maximized by
$$ p_i=\frac{\exp\left(\int f_i \log P(\user2{x}_i\mid\user2{y}) \,{\rm d} \user2{y}\right)}{\sum_{j=1}^M \exp\left(\int f_j \log P(\user2{x}_j\mid\user2{y})\, {\rm d} \user2{y}\right)}. \quad (6) $$
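A minimal sketch of the alternating updates (5) and (6) of Proposition 2, under the assumption that the output integrals are approximated on a finite grid (the grid representation and function name are our own, not the paper's):

```python
import numpy as np

def blahut_arimoto(f, dy, n_iter=200):
    """Alternating updates (5) and (6) for a discrete-input,
    continuous-output channel. f is an (M, G) array of conditional
    densities f_i(y) on an output grid, dy the cell volume.
    Returns the optimized input distribution p and the rate in nats."""
    M, G = f.shape
    p = np.full(M, 1.0 / M)              # start from the uniform distribution
    eps = 1e-300
    for _ in range(n_iter):
        f0 = p @ f                       # output density under the current p
        # update (5): posterior P(x_i | y) = p_i f_i / sum_j p_j f_j
        post = p[:, None] * f / np.maximum(f0, eps)
        # update (6): p_i proportional to exp( int f_i log P(x_i|y) dy )
        expo = (f * np.log(np.maximum(post, eps))).sum(axis=1) * dy
        p = np.exp(expo - expo.max())    # subtract the max for stability
        p /= p.sum()
    f0 = p @ f
    rate = float(p @ ((f * (np.log(np.maximum(f, eps))
                            - np.log(np.maximum(f0, eps)))).sum(axis=1) * dy))
    return p, rate
```

Note that the exponent in update (6) equals log p _{ i } + D(f _{ i }‖f _{0}), so the iteration reweights each point by the exponentiated divergence of its density from the current output density, exactly as in the classical discrete case.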
As the standard Blahut-Arimoto algorithm is computationally quite complex, several enhancements have been proposed. The most important ones, the natural-gradient-based algorithm and the accelerated Blahut-Arimoto algorithm, are mentioned in [14]. They converge significantly faster to the capacity-achieving distribution and can be extended to our system model.
The performance of the above algorithms can be improved by applying a heuristic method related to the one introduced in Sect. 3.2 and further performing Blahut-Arimoto for every newly combined subset. This improves the performance, since a subset that is better for a uniform distribution is not necessarily better in the non-uniform case. In addition, the parameters K _{1} and K _{2} should be chosen rather small to allow for additional Blahut-Arimoto applications in each step.
References [15, 16, 17] suggest another interesting enhancement, the iterated Blahut-Arimoto algorithm. Though it was proposed for discrete memoryless channels, it can be extended to a discrete-input continuous-output channel. We abbreviate the successive subset selection obtained by our subset selection algorithm followed by the iterated Blahut-Arimoto algorithm by SSA_IBA, see also Table 1. This algorithm is motivated by the fact that capacity-achieving distributions usually include only a small number of inputs that are assigned nonzero probabilities, especially when the input alphabet size is very large. Thus, much effort can be saved by eliminating inputs which will end up with probability zero. Thereby the algorithm operates on a subset of the whole input alphabet. The algorithm starts from an input alphabet that consists of only two symbols. The alphabet grows by one symbol at every iteration until it includes all symbols with nonzero probabilities. At every iteration, a Blahut-Arimoto algorithm is used to compute a capacity relative to a partial input alphabet. This approach is discussed in more detail in the following subsection.
The ordinary Blahut-Arimoto algorithm is slow when the input alphabet is large. The IBA utilizes the Blahut-Arimoto algorithm only for small sets of signaling points and thus converges faster.
4.2 Simultaneous subset and distribution selection
 1. Determine \(\left\{i,j\right\} \in \mathcal{X}^2\) such that
$$ C_{\left\{i,j\right\}}=D(f_i\|f_0)=D(f_j\|f_0) $$
is maximized over all choices of i and j, where
$$ f_0=p_i f_i+p_j f_j $$
in this case. Define \(\mathcal{X}' = \left\{i,j\right\}\) and \({C' = C_{\mathcal{X}'}}\).
 2. If \(\mathcal{X}' = \mathcal{X}\), then C = C′ and the algorithm terminates. Otherwise, for all \(m\in \mathcal{X} \backslash \mathcal{X}'\), compute \(D(f_m\|f_0)\). If the values computed are all smaller than or equal to C′, then C = C′ and the algorithm can be terminated at this point.
 3. Add the symbol m that maximized \(D(f_m\|f_0)\) in Step 2 to the set \(\mathcal{X}'\). Recompute \({C' = C_{\mathcal{X}'}}\) using the Blahut-Arimoto algorithm and update f _{0} with the new \(\mathcal{X}'\). Return to Step 2.
Truncation of Iterated BlahutArimoto Algorithm (TIBA)
1: Initialization: 
\(\mathcal{X}'=\left\{i,j\right\}=\hbox{argmax}_{\left\{i,j\right\} \in \mathcal{X}^2} C_{\left\{i,j\right\}},\quad C'=C_{\left\{i,j\right\}},\quad f_0=p_i f_i+p_j f_j\). 
2: while true do 
3: if \(size(\mathcal{X}')==M\) then 
4: break 
5: else if \(size(\mathcal{X}')<K+K_1\) then 
6: for \( m=1,\ldots,M \) do 
7: if \(m \notin \mathcal{X}'\) then 
8: \(D(f_m\|f_0)=\int f_m \log{\frac{f_m} {f_0}} {\rm d} \user2{y}\) 
9: end if 
10: end for 
11: \(\left[D_{max},d_{max}\right]=\max_{m=1,\ldots,M}{D(f_m\|f_0)}\) 
12: if \(D_{max} \leq C'\) then 
13: break 
14: else 
15: \({\mathcal{X}'= \left\{\mathcal{X}', d_{max}\right\}}\) 
16: \([\user2{p}',C']=BA(\mathcal{X}')\) 
17: Run the Sub Algorithm for TIBA. 
18: \(f_0=\sum_{m \in \mathcal{X}'} p_m f_m\). 
19: end if 
20: else 
21: break 
22: end if 
23: end while 
24: Take \(\left[\mathcal{X}',\user2{p}'\right]\) as approximate solution and C = C′. 
The function BA mentioned in the algorithms is a placeholder for the iterated or non-iterated Blahut-Arimoto algorithm, combined with either the accelerated Blahut-Arimoto algorithm or the natural-gradient-based algorithm. The actual choice depends on the size of K.
Sub Algorithm for TIBA
1: \(k=size(\mathcal{X}')\) 
2: if \(p_{d_{max}}\geq\max\left\{10^{-3},\frac{1}{k^2}\right\}\) then 
3: \(\mathcal{X}'= \left\{i \mid i\in\mathcal{X}',\, p_i \geq\max\left\{10^{-3},\frac{1}{k^2}\right\}\right\}\) 
4: \([\user2{p}',C']=BA(\mathcal{X}')\) 
5: end if 
6: if \(size(\mathcal{X}')==K+K_2\) then 
7: \(\mathcal{X}'= \left\{i \mid i\in\mathcal{X}',\, p_i \hbox { is among the top } K \hbox { of } \user2{p}'\right\}\) 
8: \([\user2{p}',C']=BA(\mathcal{X}')\) 
9: end if 
Initialization of Iterated BlahutArimoto Algorithm
1: \(C'=0,\quad\epsilon=10^{-4}\) 
2: for \(i=1,\ldots,M-1\) do 
3: for \(j=i+1,\ldots,M\) do 
4: l = 0, u = 1 
5: \(\lambda=\frac{1}{2}(l+u), \quad f_0=\lambda f_i +(1-\lambda) f_j\) 
6: if \(D(f_i\|f_0) > D(f_j\|f_0)\) then 
7: l = λ 
8: else 
9: u = λ 
10: end if 
11: if \(|D(f_i\|f_0)-D(f_j\|f_0)|\leq\epsilon\) then 
12: if \(\min(D(f_i\|f_0),D(f_j\|f_0)) > C'\) then 
13: \(C' = \min(D(f_i\|f_0),D(f_j\|f_0))\) 
14: \(\mathcal{X}'=\left\{i,j\right\}\) 
15: end if 
16: Continue 
17: else 
18: Go to Step 5 
19: end if 
20: end for 
21: end for 
22: \(\mathcal{X}'\) is the best subset of size 2 and C′ the corresponding capacity. 
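The bisection inside the double loop (steps 4 through 11) can be isolated as the following sketch, again under the assumption that the densities are tabulated on a discretized output grid; the function name is our own.

```python
import numpy as np

def best_pair_weight(fi, fj, dy, eps=1e-4, max_iter=60):
    """Bisection on lambda such that D(f_i || f0) = D(f_j || f0)
    with f0 = lambda*f_i + (1-lambda)*f_j. Returns lambda and the
    two-point rate C_{i,j} = min of the two (equalized) divergences."""
    tiny = 1e-300
    def kl(fa, f0):
        return float((fa * (np.log(np.maximum(fa, tiny))
                            - np.log(np.maximum(f0, tiny)))).sum() * dy)
    l, u = 0.0, 1.0
    for _ in range(max_iter):
        lam = 0.5 * (l + u)
        f0 = lam * fi + (1.0 - lam) * fj
        di, dj = kl(fi, f0), kl(fj, f0)
        if abs(di - dj) <= eps:
            break
        if di > dj:     # too little weight on f_i: raise the lower bound
            l = lam
        else:
            u = lam
    return lam, min(di, dj)
```

For a symmetric pair of densities the bisection terminates immediately at lambda = 1/2 with a two-point rate of log 2.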
5 Simulation results
For our simulations, we use the following scenario. We aim to choose K = 16 signaling points from an M-QAM constellation with M = 64. The 64-QAM points in the square [−3,3]^{2} are chosen as the initial situation. We consider 2-dimensional Gaussian noise with covariance matrix \(\left(\begin{array}{ll}1 & \rho \\ \rho & 1\end{array}\right) \) with varying parameter ρ.
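As a hedged illustration of this setup, the constellation and the conditional noise densities might be generated as follows; the function names and the grid representation of the output space are our own choices, not part of the paper.

```python
import numpy as np

def qam_points(m_side=8, lim=3.0):
    """The M = m_side^2 QAM constellation on the square [-lim, lim]^2
    used in the simulations (64-QAM for m_side = 8)."""
    axis = np.linspace(-lim, lim, m_side)
    X, Y = np.meshgrid(axis, axis)
    return np.column_stack([X.ravel(), Y.ravel()])   # (M, 2) points

def noise_densities(points, grid, rho):
    """Conditional densities f_i(y) = N(y; x_i, Sigma) evaluated on an
    output grid, with Sigma = [[1, rho], [rho, 1]] as in Sect. 5."""
    Sigma = np.array([[1.0, rho], [rho, 1.0]])
    Sinv = np.linalg.inv(Sigma)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(Sigma)))
    d = grid[None, :, :] - points[:, None, :]        # (M, G, 2) differences
    quad = np.einsum('mgi,ij,mgj->mg', d, Sinv, d)   # Mahalanobis terms
    return norm * np.exp(-0.5 * quad)                # (M, G) densities
```

The resulting (M, G) density array is exactly the tabulated form assumed by the grid-based sketches in Sects. 2 and 4.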
5.1 Reducing the number of signaling points
As outlined above, using a uniform distribution over the selected subset provides a lower bound on the performance of the capacity-achieving non-uniform distribution.
5.1.1 The subset selection algorithm with heuristic improvement: SSA
5.2 Selecting both signaling points and probability measure
So far, we only considered a uniform distribution over the signaling points. The channel can be exploited even better by choosing a more appropriate probability distribution. In Sect. 4, we discussed two algorithms for finding a better probability distribution.
5.2.1 The successive subset and distribution selection: SSA_IBA
5.2.2 Simultaneous subset and distribution selection: TIBA
To show the performance of TIBA, we consider the following two plots. Figure 10 shows the performance of TIBA compared to the channel capacity. As can be seen, TIBA performs quite well. The arising question is whether TIBA finds the best possible subset and probability distribution, or whether exhaustive search can yield a result which is even closer to the full-set capacity. As exhaustive search is computationally too complex, we incorporate three simple aspects into the following pseudo exhaustive search, thereby reducing the runtime enormously. As only 28 signaling points are assigned a nonzero probability in the optimum probability measure, see Fig. 7, we consider only those and restrict ourselves to choosing 16 out of 28 rather than 16 out of 64. Moreover, we include the four vertices (3, 3)′, (−3, 3)′, (−3, −3)′, (3, −3)′ in our choice, as they are assigned the highest probabilities in the optimum distribution. Thus, the search further reduces to 12 out of 24 with the four vertices fixed. We also included symmetry aspects in the pseudo exhaustive search, which exclude many repeated combinations. The extended Blahut-Arimoto algorithm converges very fast in the first several iterations. This implies that we can discard a combination if the capacity achieved after several iterations is still too low compared to other combinations. Thereby we do not waste any time in finding the exact capacities of uninteresting combinations. Taking all these considerations into account, we obtain our pseudo exhaustive search, which, for the given scenario, yields the results shown in Fig. 11. Running TIBA gives exactly the same points and probabilities, i.e., our algorithm yields the best possible result.
5.2.3 Comparison of SSA, SSA_IBA and TIBA
6 Conclusions
In this paper, we investigated the challenging problem of finding a reduced, optimal set of signaling points and a corresponding capacity-achieving probability measure. By first assuming a uniform distribution on the selected signaling points, we obtained a lower bound for the original problem. We considered both a mutual information and a cutoff rate maximizing subset selection and solved these problems by forming a semidefinite programming problem and applying two different relaxation techniques. A heuristic improvement based on weights of signaling points improves the performance of this lower bound, which is close to the channel capacity of the full set of signaling points. Two different ways to tackle the full problem of selecting a subset and a distribution were introduced. Building on a Blahut-Arimoto algorithm and a subsequent heuristic improvement, the uniform distribution obtained in the first step is replaced by a more sophisticated one. Using a simultaneous update of both the chosen signaling points and the probability measure, we obtained the so-called TIBA algorithm, which performs best among the proposed methods and gives remarkable results compared to exhaustive search.
Our numerical results show that using only a subset of small size can indeed achieve a very high mutual information even compared to the large full input set. This approach helps to highly simplify the receiver design while maintaining a high transmission rate over the channel.
An interesting open problem for future research is the analysis of the choice of K compared to M, e.g., the ratio K/M and analytical results proving our numerical conclusions.
Acknowledgments
This work was supported by the UMIC Research Centre in the framework of the German government excellence initiative.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
References
1. Telatar, I. E. (1999). Capacity of multi-antenna Gaussian channels. European Transactions on Telecommunications, 10(6), 585–595.
2. Chan, T. H., Hranilovic, S., & Kschischang, F. R. (2005). Capacity-achieving probability measure for conditionally Gaussian channels with bounded inputs. IEEE Transactions on Information Theory, 51(6), 2073–2088.
3. Smith, J. G. (1971). The information capacity of amplitude- and variance-constrained scalar Gaussian channels. Information and Control, 18(6), 203–219.
4. Shamai, S., & Bar-David, I. (1995). The capacity of average and peak-power-limited quadrature Gaussian channels. IEEE Transactions on Information Theory, 41(4), 1060–1071.
5. Chan, T. H., & Kschischang, F. R. (2004). On the discreteness of the capacity-achieving probability measure of conditional Gaussian channels. In Proceedings ISIT, Chicago, June 2004, p. 347.
6. Katz, M., & Shamai, S. (2004). On the capacity-achieving distribution of the discrete-time noncoherent and partially coherent AWGN channels. IEEE Transactions on Information Theory, 50(10), 2257–2270.
7. Abou-Faycal, I. C., Trott, M. D., & Shamai, S. (2001). The capacity of discrete-time memoryless Rayleigh fading channels. IEEE Transactions on Information Theory, 47(4), 1290–1301.
8. Gursoy, M. C., Poor, H. V., & Verdu, S. (2005). The noncoherent Rician fading channel–part I: structure of the capacity-achieving input. IEEE Transactions on Wireless Communications, 4(5), 2193–2206.
9. Sharma, S., & Shamai, S. (2008). Characterizing the discrete capacity-achieving distribution with peak power constraint at the transition points. In IEEE International Symposium on Information Theory and Its Applications (ISITA), Dec. 2008, pp. 1–6.
10. Foschini, G. J., Gitlin, R. D., & Weinstein, S. B. (1974). Optimization of two-dimensional signal constellations in the presence of Gaussian noise. IEEE Transactions on Communications, 22(1), 28–38.
11. Schmeink, A., Mathar, R., & Zhang, H. (2010). Reducing the number of signaling points keeping capacity and cutoff rate high. In The Seventh International Symposium on Wireless Communication Systems (ISWCS), 2010, submitted.
12. Blahut, R. (1972). Computation of channel capacity and rate-distortion functions. IEEE Transactions on Information Theory, 18(4), 460–473.
13. Arimoto, S. (1972). An algorithm for computing the capacity of arbitrary discrete memoryless channels. IEEE Transactions on Information Theory, 18(1), 14–20.
14. Matz, G., & Duhamel, P. (2004). Information geometric formulation and interpretation of accelerated Blahut-Arimoto-type algorithms. In IEEE Information Theory Workshop, Oct. 2004, pp. 66–70.
15. Gallager, R. G. (1968). Information theory and reliable communication. New York: Wiley.
16. Sayir, J. (2000). Iterating the Arimoto-Blahut algorithm for faster convergence. In Proceedings of the IEEE International Symposium on Information Theory, 2000, p. 235.
17. Sayir, J. (1999). On coding by probability transformation. PhD dissertation Nr. 13099, ETH Zürich; Hartung-Gorre Verlag, Konstanz, Germany.
18. Mathar, R., Schmeink, A., & Zivkovic, M. (2008). Optimum discrete signaling over channels with arbitrary noise distribution. In 2nd International Conference on Signal Processing and Communication Systems (ICSPCS), Gold Coast, Australia, Dec. 2008.
19. Proakis, J. G. (2001). Digital communications (4th ed.). McGraw-Hill.
20. Mezghani, A., Ivrlac, M., & Nossek, J. (2008). Achieving near-capacity on large discrete memoryless channels with uniformly distributed selected input. In IEEE International Symposium on Information Theory and Its Applications (ISITA), Dec. 2008, pp. 1–6.
21. Sturm, J. F. (1999). Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones.
22. Wiesel, A., Eldar, Y., & Shitz, S. (2005). Semidefinite relaxation for detection of 16-QAM signaling in MIMO channels. IEEE Signal Processing Letters, 12(9), 653–656.
23. Ma, W.-K., Ching, P.-C., & Ding, Z. (2004). Semidefinite relaxation based multiuser detection for M-ary PSK multiuser systems. IEEE Transactions on Signal Processing, 52(10), 2862–2872.