New generalized variable stepsizes of the CQ algorithm for solving the split feasibility problem
Abstract
Variable stepsize methods are effective for various modified CQ algorithms for solving the split feasibility problem (SFP). The purpose of this paper is first to introduce two new, simpler variable stepsizes for the CQ algorithm. Then two new generalized variable stepsizes, which cover the former ones, are proposed in real Hilbert spaces, and two more general KM (Krasnosel'skii-Mann) CQ algorithms are presented. Several weak and strong convergence properties are established. Moreover, numerical experiments are reported to illustrate the performance of the proposed stepsizes and algorithms.
Keywords
split feasibility problem; CQ algorithm; variable stepsize; generalized variable stepsize
MSC
46E20; 46H35; 47H14; 47L30
1 Introduction
Since the CQ algorithm for solving the split feasibility problem (SFP) was proposed [1], much attention has been paid to improving the variable stepsize of the CQ algorithm in order to obtain better convergence speed.
Note that when (1.2) and (1.3) are applied to practical problems covered by the SFP, such as signal processing and image reconstruction, a fixed stepsize related to the norm of A can hinder convergence of the algorithms. Therefore, in order to avoid computing the matrix inverse and the largest eigenvalue of the matrix \(A^{T}A\), while still obtaining a sufficient decrease of the objective function at each iteration, various algorithms with a variable or self-adaptive stepsize have been devised. Since Qu [6] presented a searching method adopting an Armijo-like search, many similar methods have been proposed; see, e.g., [7, 8, 9, 10, 11, 12]. Although these methods achieve a self-adaptive stepsize at each iteration, most of their formats are rather complex, which makes them difficult to apply to some practical problems and incurs considerable time complexity, especially in the large-scale and sparse settings.
Recently, Yao et al. [17] applied (1.5) to an improved CQ algorithm with a generalized Halpern iteration. In [18], we modified the relevant parameters under satisfactory conditions. In this paper, we combine the iterations in [17, 18] with the KM-CQ iterations in [19, 20]. We propose two more general KM-CQ algorithms with the generalized variable stepsize (1.9) or (1.10); they can be used to approach the minimum-norm solution of the SFP, which solves special variational inequalities.
The rest of this paper is organized as follows. Section 2 reviews some propositions and known lemmas. Section 3 gives two modified CQ algorithms with simpler variable stepsizes and shows their weak convergence. Section 4 presents two general KM-CQ algorithms with the generalized variable stepsizes and proves their strong convergence. Section 5 reports numerical experiments on typical problems of signal processing and image restoration that demonstrate the improved performance of the proposed stepsizes and algorithms. Finally, Section 6 gives some conclusions and directions for further research.
2 Preliminaries

‘→’ stands for strong convergence;

‘⇀’ stands for weak convergence;

I stands for the identity mapping on H.
Recall that a mapping \(T:C \to H\) is nonexpansive iff \(\Vert Tx - Ty \Vert \le \Vert x - y \Vert\) for all \(x,y \in C\).
Recall also that the nearest-point projection from H onto C, denoted by \(P_{C}\), assigns to each \(x \in H\) the unique point \(P_{C} x \in C\) with the property \(\Vert x - P_{C} x \Vert \le \Vert x - y \Vert\), \(\forall y \in C\). We collect the basic properties of \(P_{C}\) as follows.
Proposition 2.1
 (\(\mathrm{p}_{1}\))

\(\langle x - P_{C} x, y - P_{C} x \rangle \le 0\) for all \(x \in H\), \(y \in C\);
 (\(\mathrm{p}_{2}\))

\(\Vert P_{C} x - P_{C} y \Vert ^{2} \le \langle P_{C} x - P_{C} y, x - y \rangle\) for every \(x,y \in H\);
 (\(\mathrm{p}_{3}\))

\(\Vert P_{C} x - P_{C} y \Vert ^{2} \le \Vert x - y \Vert ^{2} - \Vert (I - P_{C})x - (I - P_{C})y \Vert ^{2}\) for all \(x,y \in H\);
 (\(\mathrm{p}_{4}\))

\(\langle (I - P_{C})x - (I - P_{C})y, x - y \rangle \ge \Vert (I - P_{C})x - (I - P_{C})y \Vert ^{2}\) for all \(x,y \in H\);
 (\(\mathrm{p}_{5}\))

\(\Vert P_{C} x - z \Vert ^{2} \le \Vert x - z \Vert ^{2} - \Vert P_{C} x - x \Vert ^{2}\) for all \(x \in H\), \(z \in C\).
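As a quick sanity check, properties (\(\mathrm{p}_{1}\))-(\(\mathrm{p}_{5}\)) can be verified numerically for a concrete projection. The sketch below uses the Euclidean unit ball in \(\mathrm{R}^{3}\) as an illustrative choice of C (any closed convex set would do); the helper `proj_ball` is ours, not from the paper.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Nearest-point projection onto the Euclidean ball of given radius."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

rng = np.random.default_rng(0)
x = rng.standard_normal(3) * 2
y = rng.standard_normal(3) * 2
z = proj_ball(rng.standard_normal(3))      # an arbitrary point of C
Px, Py = proj_ball(x), proj_ball(y)

# (p1): <x - P_C x, z - P_C x> <= 0 for every z in C
assert np.dot(x - Px, z - Px) <= 1e-12
# (p2): ||P_C x - P_C y||^2 <= <P_C x - P_C y, x - y>  (firm nonexpansiveness)
assert np.dot(Px - Py, Px - Py) <= np.dot(Px - Py, x - y) + 1e-12
# (p5): ||P_C x - z||^2 <= ||x - z||^2 - ||P_C x - x||^2
assert np.dot(Px - z, Px - z) <= np.dot(x - z, x - z) - np.dot(Px - x, Px - x) + 1e-12
```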
In a Hilbert space H, the following facts are well known.
Proposition 2.2
 (i)
\(\Vert {x\pm y} \Vert ^{2} = \Vert x \Vert ^{2}\pm2 \langle {x,y} \rangle+ \Vert y \Vert ^{2}\);
 (ii)
\(\Vert tx + (1 - t)y \Vert ^{2} = t \Vert x \Vert ^{2} + (1 - t) \Vert y \Vert ^{2} - t(1 - t) \Vert x - y \Vert ^{2}\).
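Identity (ii) is the standard Hilbert-space convex-combination identity; it is easy to confirm numerically for random vectors:

```python
import numpy as np

# Check ||t x + (1-t) y||^2 = t ||x||^2 + (1-t) ||y||^2 - t (1-t) ||x - y||^2
rng = np.random.default_rng(1)
x, y, t = rng.standard_normal(5), rng.standard_normal(5), 0.3
lhs = np.linalg.norm(t * x + (1 - t) * y) ** 2
rhs = t * np.dot(x, x) + (1 - t) * np.dot(y, y) - t * (1 - t) * np.dot(x - y, x - y)
assert abs(lhs - rhs) < 1e-10
```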
For the SFP, we assume that the following conditions are satisfied in a Hilbert space [5]:
(i) The solution set of the SFP is nonempty.
It is easily seen that \(C \subseteq C_{n} \) and \(Q \subseteq Q_{n} \) for all n.
Proposition 2.3
see [22]
 (i)
\(\sum_{n = 1}^{\infty}{t_{n} = \infty} \);
 (ii)
\(\overline{\lim} _{n} b_{n} \le0\) or \(\sum_{n = 1}^{\infty}{ \vert {t_{n} b_{n} } \vert } < \infty\).
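Proposition 2.3 is the classical lemma of Xu [22]; its statement is abbreviated above, but the recursion it concerns (our reading of [22], stated here as an assumption) is \(a_{n+1} \le (1 - t_{n})a_{n} + t_{n} b_{n}\) with \(t_{n} \in (0,1)\), under which conditions (i) and (ii) force \(a_{n} \to 0\). A numerical illustration:

```python
# Illustration of Xu's lemma [22]: with t_n = b_n = 1/(n+1), condition (i)
# holds (sum t_n diverges) and limsup b_n <= 0, so a_n should tend to 0.
# The recursion a_{n+1} = (1 - t_n) a_n + t_n b_n is our assumed form of [22].
a = 1.0
for n in range(1, 200001):
    t_n = 1.0 / (n + 1)
    b_n = 1.0 / (n + 1)
    a = (1 - t_n) * a + t_n * b_n
assert 0.0 < a < 1e-3   # a_n decays roughly like (1 + log n)/n
```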
Proposition 2.4
For the \(\ell_{1}\)-ball \(B_{R} = \{ x \in \ell_{2} : \Vert x \Vert _{1} \le R \}\) with \(R := \Vert x^{\ast} \Vert _{1}\), where \(x^{\ast} \in \ell_{2}\) is the solution of problem (2.4), we replace the thresholding with the projection \(\mathrm{P}_{B_{R}}\); with a slight abuse of notation, we denote \(\mathrm{P}_{B_{R}}\) by \(\mathrm{P}_{R}\). We now introduce two properties of \(\ell_{2}\)-projections onto \(\ell_{1}\)-balls [25].
Lemma 2.1
For any \(a \in\ell_{2} \) and for \(\mu> 0\), \(\Vert {\mathrm{S}_{\mu}(a)} \Vert _{1} \) is a piecewise linear, continuous, decreasing function of μ; moreover, if \(a \in\ell_{1}\), then \(\Vert {\mathrm{S}_{0} (a)} \Vert _{1} = \Vert a \Vert _{1} \) and \(\Vert {\mathrm{S}_{\mu}(a)} \Vert _{1} = 0\) for \(\mu\ge\max_{i} \vert {a_{i} } \vert \).
Lemma 2.2
If \(\Vert a \Vert _{1} > R\), then the \(\ell_{2}\)-projection of a onto the \(\ell_{1}\)-ball with radius R is given by \(\mathrm{P}_{R}(a) = \mathrm{S}_{\mu}(a)\), where μ (depending on a and R) is chosen such that \(\Vert \mathrm{S}_{\mu}(a) \Vert _{1} = R\). If \(\Vert a \Vert _{1} \le R\), then \(\mathrm{P}_{R}(a) = \mathrm{S}_{0}(a) = a\).
Next, we discuss a method to compute μ.
Proposition 2.5
[25]
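In finite dimensions, the μ of Lemma 2.2 can be computed exactly: since \(\Vert \mathrm{S}_{\mu}(a) \Vert _{1}\) is piecewise linear in μ (Lemma 2.1), sorting \(\vert a_{i} \vert\) locates the linear piece on which the value R is attained. The sketch below is the standard sort-based construction and is our reading of the method in [25]:

```python
import numpy as np

def soft_threshold(a, mu):
    """Componentwise soft-thresholding S_mu(a)."""
    return np.sign(a) * np.maximum(np.abs(a) - mu, 0.0)

def project_l1_ball(a, R):
    """P_R(a) = S_mu(a) with mu chosen so that ||S_mu(a)||_1 = R (Lemma 2.2).

    Sort-based: the largest k with u_k > (u_1 + ... + u_k - R)/k, where u is
    |a| sorted decreasingly, identifies the linear piece containing R."""
    a = np.asarray(a, dtype=float)
    if np.abs(a).sum() <= R:               # Lemma 2.2, second case
        return a.copy()
    u = np.sort(np.abs(a))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u > (css - R) / np.arange(1, u.size + 1))[0][-1]
    mu = (css[k] - R) / (k + 1)
    return soft_threshold(a, mu)

a = np.array([3.0, -1.0, 0.5, 2.0])
p = project_l1_ball(a, 3.0)                # here mu = 1, so p = [2, 0, 0, 1]
assert abs(np.abs(p).sum() - 3.0) < 1e-12  # lands on the l1-sphere of radius R
```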
3 CQ algorithms with two simpler variable stepsizes
In this section, two simpler variable stepsizes are proposed. Their advantage over (1.5) and (1.6) is that neither prior information about the norm of the matrix A nor any other conditions on Q and A are required.
3.1 A simpler variable stepsize for CQ algorithm
We propose a new and simpler variable stepsize method for solving the split feasibility problem. The algorithm is presented as follows.
Algorithm 3.1
Remark 3.1
From [1, 3] we can easily obtain an upper bound λ of the eigenvalue interval of the symmetric matrix \(A^{T}A\); thus, for any \(x_{n}\), \(n \ge 0\), we obtain \(\tau_{n} \in (0, 2/\lambda) \subset (0, 2/L)\), where L is the largest eigenvalue of \(A^{T}A\).
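Remark 3.1 only requires *some* upper bound λ on the largest eigenvalue L of \(A^{T}A\), not L itself. One cheap choice (an illustration of ours, not necessarily the bound used in [1, 3]) is \(\Vert A \Vert _{1} \Vert A \Vert _{\infty} \ge \Vert A \Vert _{2}^{2} = L\), which needs no eigendecomposition:

```python
import numpy as np

# Cheap upper bound on the largest eigenvalue L of A^T A:
# L = ||A||_2^2 <= ||A||_1 * ||A||_inf  (max column sum times max row sum).
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 30))
lam = np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf)   # upper bound lambda
L = np.linalg.norm(A, 2) ** 2                            # exact L, for checking
assert L <= lam + 1e-9

tau = 1.0 / lam             # any tau in (0, 2/lambda) is admissible
assert 0 < tau < 2.0 / L    # hence tau lies in (0, 2/L) as in Remark 3.1
```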
Now we prove the convergence property of Algorithm 3.1.
Theorem 3.1
If \(\Gamma \ne \emptyset\) and \(\underline{\lim}_{n} \tau_{n}(2 - \lambda\tau_{n}) \ge \sigma > 0\), the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges weakly to a solution of SFP (1.1).
Proof
Next we show \(\bar{x} \in\Gamma\).
Firstly, we show \(\bar{x} \in C\). We prove it from two cases.
If there exist infinitely many \(n_{i}\) such that \(\nu_{n_{i}} = 0\) or \(\xi_{n_{i}} = 0\), then (3.7) and (3.8) lead to \(\tilde{x} = \bar{x}\), a contradiction. Therefore, \(\nu_{n_{i}} > 0\) and \(\xi_{n_{i}} \ne 0\) for all sufficiently large \(n_{i}\). We divide the discussion into two cases.
Since \(\{ \Vert x_{n} - x^{\ast} \Vert ^{2}\}\) is decreasing and both x̄ and x̃ are accumulation points of \(\{x_{n}\}\), we have \(\Vert \tilde{x} - x^{\ast} \Vert ^{2} = \Vert \bar{x} - x^{\ast} \Vert ^{2}\). This is a contradiction, so this case cannot occur.
(2) If \(\inf\{ \Vert \xi_{n_{i}} \Vert \} = 0\), then, by the lower semicontinuity of \(\partial c(x)\) and since \(c(x)\) is convex, x̄ is a minimizer of \(c(x)\) over \(H_{1}\). Since \(c(x^{\ast}) \le 0\), it follows that \(c(\bar{x}) \le c(x^{\ast}) \le 0\). So \(\bar{x} \in C\).
Assume \(\nu_{n_{i}} > 0\) and \(\xi_{n_{i}} \ne 0\) for all sufficiently large \(n_{i}\). If \(\inf\{ \Vert \xi_{n_{i}} \Vert \} = 0\), then, as above, \(\bar{x} \in C\). If \(\inf\{ \Vert \xi_{n_{i}} \Vert \} > 0\), then, similarly to Case 1(1), we obtain \(\bar{x} = P_{C(\bar{x})}(\bar{x})\), which implies \(c(\bar{x}) + \langle \bar{\xi}, \bar{x} - \bar{x} \rangle \le 0\). So \(\bar{x} \in C\).
In summary, we can conclude \(\bar{x} \in C\).
Therefore x̄ is a solution of the SFP. Thus we may replace \(x^{*}\) in (3.4) with x̄ and conclude that \(\{ \Vert x_{n} - \bar{x} \Vert \}\) is convergent. Since there exists a subsequence \(\{ \Vert x_{n_{i}} - \bar{x} \Vert \}\) converging to 0, we get \(x_{n} \to \bar{x}\) as \(n \to \infty\). □
3.2 The other simpler variable stepsize for CQ algorithm
In this part, we introduce another simple choice of the stepsize \(\tau_{n}\), which is also a variable stepsize for the CQ algorithm. Combining it with the relaxed CQ algorithm [4], we obtain the following algorithm.
Algorithm 3.2
Obviously, (3.14) is also consistent with Remark 3.1. Thus, similar to the proof of Theorem 3.1, we can deduce that Algorithm 3.2 converges weakly to a solution of SFP (1.1).
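Both algorithms of this section are built on the classical CQ iteration \(x_{n+1} = P_{C}(x_{n} - \tau_{n} A^{T}(Ax_{n} - P_{Q}(Ax_{n})))\), differing only in the choice of \(\tau_{n}\). A minimal sketch of this iteration (the toy sets C, Q and the constant stepsize below are illustrative, not the stepsizes (3.5)/(3.14) themselves):

```python
import numpy as np

def cq_iteration(A, proj_C, proj_Q, x0, tau, n_iter=200):
    """Classical CQ iteration x_{k+1} = P_C(x_k - tau_k A^T (A x_k - P_Q(A x_k))).

    tau may be a constant or a callable k -> tau_k, so a variable stepsize
    rule can be plugged in."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        tk = tau(k) if callable(tau) else tau
        Ax = A @ x
        x = proj_C(x - tk * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy SFP: C = Q = [0,1]^2 and A = I, so every point of C is a solution.
A = np.eye(2)
box = lambda z: np.clip(z, 0.0, 1.0)
x = cq_iteration(A, box, box, [5.0, -3.0], tau=0.5)
```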
4 Two general KM-CQ algorithms with generalized variable stepsizes
In this section, we integrate the variable stepsizes (1.5)-(1.8) and obtain a variable stepsize that covers them. We then apply it to improve the algorithms presented in [17] and [18] and construct two algorithms for approximating a solution of (1.1).
Let \(\psi:C \to H_{1}\) be a δ-contraction with \(\delta \in (0,1)\), and let \(r:H_{1} \to H_{1}\backslash\Theta\) and \(q:H_{1} \to H_{2}\backslash\Theta\) be nonzero operators, where Θ denotes the zero point.
4.1 A generalized variable stepsize for a general KM-CQ algorithm
The next recursion not only possesses a more general adaptive descent step, but can also be implemented easily by the relaxed method.
Algorithm 4.1
Theorem 4.1
 (\(C_{1}\))

\(\lim_{n \to\infty} \alpha_{n} = 0\) and \(\sum_{n = 1}^{\infty}\alpha_{n} = \infty\);
 (\(C_{2}\))

\(0 < \underline{\lim}_{n} \beta_{n}\).
Proof
Since \(P_{\Gamma}:H_{1} \to \Gamma \subset C\) is nonexpansive and \(\psi:C \to H_{1}\) is a δ-contraction, \(P_{\Gamma}\psi:C \to C\) is a δ-contraction with \(\delta \in (0,1)\). By the Banach contraction mapping principle, there exists a unique \(x^{\ast} \in C\) such that \(x^{\ast} = P_{\Gamma}\psi x^{\ast}\). By virtue of (\(\mathrm{p}_{1}\)), we see that (4.2) holds.
Assume that x̂ is an accumulation point of \(\{ {x_{n} } \}\) and \(x_{n_{i} } \to\hat{x}\), where \(\{x_{n_{i} } \}_{i = 1}^{\infty}\) is a subsequence of \(\{ {x_{n} } \}\). Next we will prove that x̂ is a solution of SFP.
On the one hand, we show \(\hat{x} \in C\).
On the other hand, we need to show \(A\hat{x} \in Q\).
Therefore, x̂ is a solution of SFP.
Thus we may replace \(x^{*}\) in (4.7) with x̂ and conclude that \(\{ \Vert x_{n} - \hat{x} \Vert \}\) is convergent. Since there exists a subsequence \(\{ \Vert x_{n_{i}} - \hat{x} \Vert \}\) converging to 0, we get \(x_{n} \to \hat{x}\) as \(n \to \infty\). □
4.2 The other extended algorithm
Let \(h:C \to H_{1}\) be a κ-contraction, and let \(B:H_{1} \to H_{1}\) be a self-adjoint strongly positive bounded linear operator with coefficient \(\lambda \in (0,1)\), i.e., \(\langle Bx,x \rangle \ge \lambda \Vert x \Vert ^{2}\) for all \(x \in H_{1}\). Take a constant σ such that \(0 < \sigma\kappa < \lambda\).
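Self-adjoint strongly positive operators with coefficient λ are easy to exhibit in finite dimensions: \(B = S^{T}S + \lambda I\) for any matrix S works, since \(\langle Bx,x \rangle = \Vert Sx \Vert ^{2} + \lambda \Vert x \Vert ^{2} \ge \lambda \Vert x \Vert ^{2}\). A quick check (the concrete S below is illustrative):

```python
import numpy as np

# B = S^T S + lam * I is self-adjoint and strongly positive with coefficient lam.
rng = np.random.default_rng(5)
lam = 0.5
S = rng.standard_normal((4, 4))
B = S.T @ S + lam * np.eye(4)

assert np.allclose(B, B.T)                     # self-adjoint
for _ in range(100):
    x = rng.standard_normal(4)
    assert x @ B @ x >= lam * (x @ x) - 1e-9   # <Bx, x> >= lam ||x||^2
```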
Algorithm 4.2
Theorem 4.2
 (\(C_{1}\))

\(\lim_{n \to\infty} \alpha_{n} = 0\) and \(\sum_{n = 1}^{\infty}{\alpha_{n} = \infty} \);
 (\(C_{2}\))

\(0 < \underline{\lim} _{n} {\beta_{n}} \).
5 Numerical experiments and results
Problem (5.2) is a particular case of SFP (1.1) with \(C = \{ x \in \mathrm{R}^{N}: \Vert x \Vert _{1} \le t \}\) and \(Q = \{y\}\), i.e., find x with \(\Vert x \Vert _{1} \le t\) such that \(Ax = y\). Therefore, the CQ algorithm can be applied to solve (5.2). By Propositions 2.4 and 2.5 the projection onto C can be easily computed [15], while Lemmas 2.1 and 2.2 cover the special situation of Proposition 2.4.
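In this setting the CQ iteration becomes particularly simple, because \(P_{Q}\) is the constant map \(z \mapsto y\). The following self-contained sketch runs the iteration on a small random instance (the dimensions, sparsity, and constant stepsize are illustrative, not the experimental settings of this section):

```python
import numpy as np

def project_l1(a, R):
    """Projection onto the l1-ball of radius R via soft-thresholding
    (Lemma 2.2, standard sort-based computation of mu)."""
    a = np.asarray(a, dtype=float)
    if np.abs(a).sum() <= R:
        return a.copy()
    u = np.sort(np.abs(a))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u > (css - R) / np.arange(1, u.size + 1))[0][-1]
    mu = (css[k] - R) / (k + 1)
    return np.sign(a) * np.maximum(np.abs(a) - mu, 0.0)

rng = np.random.default_rng(3)
M, N = 30, 60
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[:3] = [1.0, -2.0, 1.5]            # sparse signal to recover
y = A @ x_true
t = np.abs(x_true).sum()                 # radius of C in (5.2)

# CQ iteration for C = {||x||_1 <= t}, Q = {y}: P_Q(z) = y for all z.
tau = 1.0 / np.linalg.norm(A, 2) ** 2    # constant stepsize in (0, 2/L)
x = np.zeros(N)
for _ in range(5000):
    x = project_l1(x - tau * (A.T @ (A @ x - y)), t)
residual = np.linalg.norm(A @ x - y)     # should be small: x is near-feasible
```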
Next, following the experiments in [15, 26], we consider two particular problems covered by (5.1), namely compressed sensing and image deconvolution. The experiments compare the performance of the stepsizes of the CQ algorithm proposed in this paper with the stepsizes in [15] and [16].
5.1 Compressed sensing
5.2 Image deconvolution
In this subsection, we apply the CQ algorithms of this paper to image deconvolution. The observation model can again be described by (5.1): we wish to estimate an original image x from an observation y, where the matrix A represents the observation operator and ε is a sample of a zero-mean white Gaussian field with variance \(\sigma^{2}\). For the 2-D image deconvolution problem, A is a block-circulant matrix with circulant blocks [27]. We stress that the goal of these experiments is not to assess the restoration accuracy of the algorithms per se, but to apply the algorithms of this paper to this particular SFP and then compare the iterative speed and restoration accuracy of the proposed stepsizes against those of the CQ algorithms.
Following [26, 27], we take the well-known Cameraman image. In the experiments, we employ Haar wavelets, and the blur point spread functions are a uniform blur of size \(9\times9\) and \(h_{ij} = (1 + i^{2} + j^{2})^{-1}\), for \(i,j = -4,\ldots,4\) and for \(i,j = -7,\ldots,7\). The noise variance is \(\sigma^{2} = 0.308\), 2, and 8, respectively. We have \(N = M = 256^{2}\); the block-circulant matrix A is then constructed from the blur point spread functions and may be very ill-conditioned. We set all threshold values \(\mu = 0.25\), and t is the sum of all pixel values in the original image. Moreover, we use \(y=\mathit{IFFT}(\mathit{FFT}(A).*\mathit{FFT}(x))+\varepsilon\) to obtain the observation, where FFT is the fast Fourier transform and IFFT is the inverse fast Fourier transform. The other settings of the above stepsizes and algorithms are the same as in 4.1. We set the initial image \(x_{0} = 0\) and use the stopping rule \(\Vert x_{n + 1} - x_{n} \Vert / \Vert x_{n} \Vert < 10^{-3}\).
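The observation formula \(y=\mathit{IFFT}(\mathit{FFT}(A).*\mathit{FFT}(x))+\varepsilon\) is circular convolution of the image with the blur kernel, computed in the Fourier domain, since the block-circulant operator is diagonalized by the 2-D DFT. A minimal 2-D sketch (tiny random image standing in for Cameraman; kernel and noise level as in the text, image size illustrative):

```python
import numpy as np

# Circular convolution via FFT: y = IFFT(FFT(h) .* FFT(x)) + eps.
rng = np.random.default_rng(4)
n = 64
x = rng.random((n, n))                  # stand-in for the test image

h = np.zeros((n, n))
h[:9, :9] = 1.0 / 81.0                  # 9x9 uniform blur kernel
h = np.roll(h, (-4, -4), axis=(0, 1))   # center the kernel at the origin

y = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(x)))
y_noisy = y + rng.normal(0.0, np.sqrt(0.308), size=y.shape)

# Pointwise multiplication in Fourier domain = circular convolution in space:
i, j = 10, 17
direct = sum(h[a, b] * x[(i - a) % n, (j - b) % n]
             for a in range(n) for b in range(n))
assert abs(direct - y[i, j]) < 1e-9
```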
Results for the restorations of different stepsizes and algorithms
| Blur kernel | \(\sigma^{2}\) | Algorithms | SNR (dB) | n | CPU time (s) |
|---|---|---|---|---|---|
| 9 × 9 uniform | 0.308 | CQ with (1.5) | 16.1802 | 52 | 3.3883 |
| | | CQ with (1.6) | 16.1722 | 48 | 3.1636 |
| | | | 14.5266 | 19 | 1.3847 |
| | | | 14.5265 | 19 | 1.3199 |
| | | | 14.2464 | 34 | 2.4280 |
| \(h_{ij}=(1+i^{2}+j^{2})^{-1}\) for \(i,j = -4,\ldots,4\) | 2 | CQ with (1.6) | 22.6184 | 19 | 1.3007 |
| | | CQ with (1.5) | 22.4329 | 17 | 1.1718 |
| | | | 19.8401 | 14 | 1.0746 |
| | | | 19.8401 | 14 | 1.0266 |
| | | | 19.4912 | 33 | 2.3306 |
| \(h_{ij}=(1+i^{2}+j^{2})^{-1}\) for \(i,j = -7,\ldots,7\) | 8 | CQ with (1.6) | 12.7305 | 39 | 2.6739 |
| | | CQ with (1.5) | 14.6363 | 42 | 2.8604 |
| | | | 19.3561 | 18 | 1.3390 |
| | | | 19.3560 | 18 | 1.3057 |
| | | | 18.6140 | 33 | 2.3628 |
6 Conclusions and discussion
In this paper, we have proposed two simpler variable stepsizes for the CQ algorithm. Compared with other related variable stepsizes, they do not require computing the largest eigenvalue of \(A^{T}A\) and can be calculated easily. Furthermore, we have presented a more general KM-CQ algorithm with generalized variable stepsizes, and deduced another general format as a special case. Both general algorithms with the generalized variable stepsizes can solve the SFP and some special variational inequality problems more effectively. The corresponding weak and strong convergence properties have been established. In the experiments, using the compressed sensing and image deconvolution models, we compared the proposed stepsizes with the former ones; the results obtained with the proposed stepsizes and algorithms appear to be significantly better.
Note that the values of the parameter \(\rho_{n}\) were fixed in the above experiments. A different choice of \(\rho_{n}\) can also affect the convergence speed of the algorithms. Therefore, our future work is to develop a method for choosing a self-adaptive sequence \(\{\rho_{n}\}\).
Acknowledgements
The authors would like to thank the associate editor and the referees for their comments and suggestions. The research was supported by Professor Haiyun Zhou.
References
1. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
2. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
3. Byrne, CL: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
4. Yang, QZ: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20, 1261-1266 (2004)
5. Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 1-17 (2010)
6. Qu, B, Xiu, N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655-1665 (2005)
7. Qu, B, Xiu, N: A new halfspace-relaxation projection method for the split feasibility problem. Linear Algebra Appl. 428, 1218-1229 (2008)
8. Wang, W, Gao, Y: A modified algorithm for solving the split feasibility problem. Int. Math. Forum 4, 1389-1396 (2009)
9. Wang, Z, Yang, Q: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 217, 5347-5359 (2011)
10. Li, M: Improved relaxed CQ methods for solving the split feasibility problem. Adv. Model. Optim. 13, 305-317 (2011)
11. Abdellah, B, Muhammad, AN: On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 54, 627-639 (2012)
12. Dong, Q, Yao, Y, He, S: Weak convergence theorems of the modified relaxed projection algorithms for the split feasibility problem in Hilbert spaces. Optim. Lett. 8, 1031-1046 (2014). doi:10.1007/s11590-013-0619-4
13. Yang, Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 302, 166-179 (2005)
14. Wang, F, Xu, H, Su, M: Choices of variable steps of the CQ algorithm for the split feasibility problem. Fixed Point Theory 12, 489-496 (2011)
15. López, G, Martín-Márquez, V, Wang, F: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 1-18 (2012)
16. Zhou, HY, Wang, PY: Adaptively relaxed algorithms for solving the split feasibility problem with a new stepsize. J. Inequal. Appl. 2014, 448 (2014)
17. Yao, Y, Postolache, M, Liou, Y: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 201 (2013)
18. Zhou, HY, Wang, PY: Some remarks on the paper "Strong convergence of a self-adaptive method for the split feasibility problem". Numer. Algorithms 70, 333-339 (2015)
19. Dang, Y, Gao, Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 27, 015007 (2011)
20. Yu, X, Shahzad, N, Yao, Y: Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 6, 1447-1462 (2012)
21. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
22. Xu, H: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)
23. Daubechies, I, Defrise, M, De Mol, C: An iterative thresholding algorithm for linear inverse problems. Commun. Pure Appl. Math. 57, 1413-1457 (2004)
24. Chambolle, A, DeVore, R, Lee, N: Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7, 319-335 (1998)
25. Daubechies, I, Fornasier, M, Loris, I: Accelerated projected gradient method for linear inverse problems with sparsity constraints. J. Fourier Anal. Appl. 14, 764-792 (2008)
26. Figueiredo, M, Nowak, R, Wright, S: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1, 586-598 (2007)
27. Figueiredo, M, Nowak, R: A bound optimization approach to wavelet-based image deconvolution. In: IEEE International Conference on Image Processing (ICIP 2005), pp. 1-4 (2005)
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.