# The Proximal Alternating Minimization Algorithm for Two-Block Separable Convex Optimization Problems with Linear Constraints


## Abstract

The Alternating Minimization Algorithm was proposed by Paul Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of this method does not usually correspond to the calculation of a proximal operator through a closed formula affects the implementability of the algorithm. In this paper, we allow in each block of the objective a further smooth convex function and propose a proximal version of the algorithm, which is achieved by equipping the algorithm with proximal terms induced by variable metrics. For suitable choices of the latter, the solving of the two subproblems in the iterative scheme can be reduced to the computation of proximal operators. We investigate the convergence of the proposed algorithm in a real Hilbert space setting and illustrate its numerical performance on two applications in image processing and machine learning.

## Keywords

Proximal AMA · Lagrangian · Saddle points · Subdifferential · Convex optimization · Fenchel duality

## Mathematics Subject Classification

47H05 · 65K05 · 90C25

## 1 Introduction

Tseng introduced in [1] the so-called Alternating Minimization Algorithm (AMA) to solve optimization problems with two-block separable linear constraints and two nonsmooth convex objective functions, one of them assumed to be strongly convex. In each iteration, the numerical scheme consists of two minimization subproblems, each involving one of the two objective functions, and of an update of the dual sequence, which asymptotically approaches a Lagrange multiplier of the dual problem.

The strong convexity of one of the objective functions makes it possible to reduce the corresponding minimization subproblem to the calculation of the proximal operator of a proper, convex and lower semicontinuous function. For the second minimization subproblem this is in general not the case; thus, with the exception of some very particular cases, one has to use a subroutine in order to compute the corresponding iterate. This may negatively influence the convergence behaviour of the algorithm and affect its computational tractability. One possibility to avoid this is to suitably modify this subproblem so as to transform it into a proximal step, without, of course, losing the convergence properties of the algorithm. The papers [2] and [3] provide convincing evidence for the efficiency and versatility of proximal point algorithms for solving nonsmooth convex optimization problems; we also refer to [4] for a block coordinate variable metric forward–backward method.

In this paper, we address in a real Hilbert space setting a more involved two-block separable optimization problem, which is obtained by adding in each block of the objective a further smooth convex function. To solve this problem, we propose a so-called Proximal Alternating Minimization Algorithm (Proximal AMA), which is obtained by introducing into each of the minimization subproblems additional proximal terms defined by means of positively semidefinite operators. The two smooth convex functions in the objective are evaluated via gradient steps. For appropriate choices of these operators, we show that the minimization subproblems turn into proximal steps and the algorithm becomes an iterative scheme formulated in the spirit of the full splitting paradigm. We show that the generated sequence converges weakly to a saddle point of the Lagrangian associated with the optimization problem under investigation. The numerical performance of Proximal AMA is illustrated, in particular in comparison with AMA, for two applications in image processing and machine learning.

A similarity of AMA to the classical Alternating Direction Method of Multipliers (ADMM) algorithm, introduced by Gabay and Mercier [5], is obvious. In [6, 7, 8] (see also [9, 10]), proximal versions of the ADMM algorithm have been proposed and proved to provide a unifying framework for primal-dual algorithms for convex optimization. Parts of the convergence analysis for the Proximal AMA are carried out in a similar spirit to the convergence proofs in these papers.

## 2 Preliminaries

### Algorithm 2.1

*(AMA)* Choose \(p^0 \in \mathbb {R}^r\) and a sequence of strictly positive stepsizes \((c_k)_{k\ge 0}\). For all \(k \ge 0\), set:

$$\begin{aligned} x^{k+1}&\in {{\,\mathrm{argmin}\,}}_{x} \left\{ f(x)-\langle p^k, Ax\rangle \right\} ,\\ z^{k+1}&\in {{\,\mathrm{argmin}\,}}_{z} \left\{ g(z)-\langle p^k, Bz\rangle +\frac{c_k}{2}\Vert b-Ax^{k+1}-Bz\Vert ^2\right\} ,\\ p^{k+1}&= p^k+c_k(b-Ax^{k+1}-Bz^{k+1}). \end{aligned}$$
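As a quick illustration (not part of the original paper), each AMA iteration minimizes over \(x\) against the current dual iterate, then over \(z\) with the \(c_k\)-augmented term, then performs a dual ascent update. The following toy sketch, with invented data and both subproblems solvable in closed form, shows the three steps:

```python
import numpy as np

# Toy instance (invented data): minimize 0.5*||x - a||^2 + 0.5*||z||^2
# subject to x - z = 0, i.e. A = Id, B = -Id, b = 0.
# f is 1-strongly convex, so any constant stepsize c in (0, 2) is admissible.
a = np.array([2.0, -4.0])
c = 1.0
p = np.zeros_like(a)   # dual iterate
z = np.zeros_like(a)

for _ in range(100):
    # x-step: argmin_x f(x) - <p, Ax>  gives  x = a + A^T p = a + p
    x = a + p
    # z-step: argmin_z g(z) - <p, Bz> + (c/2)*||b - A x - B z||^2;
    # with g(z) = 0.5*||z||^2 this gives z = (c*x - p) / (1 + c)
    z = (c * x - p) / (1.0 + c)
    # dual update: p <- p + c*(b - A x - B z) = p + c*(z - x)
    p = p + c * (z - x)

# The unique solution of the toy problem is x = z = a/2.
```

With these data the iteration reaches the fixed point \(x = z = a/2\), \(p = -a/2\) after two steps.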

### Theorem 2.1

It is the aim of this paper to propose a proximal variant of this algorithm, called Proximal AMA, which overcomes its drawbacks, and to investigate its convergence properties.

In the remainder of this section, we will introduce some notations, definitions and basic properties that will be used in the sequel (see [11]). Let \(\mathcal {H}\) and \(\mathcal {G}\) be real Hilbert spaces with corresponding inner products \(\langle \cdot , \cdot \rangle \) and associated norms \(\Vert \cdot \Vert =\sqrt{\langle \cdot , \cdot \rangle }\). In both spaces, we denote by \(\rightharpoonup \) the weak convergence and by \(\rightarrow \) the strong convergence.

We say that a function \(f:\mathcal {H} \rightarrow \overline{\mathbb {R}}\) is proper if \({{\,\mathrm{dom}\,}}f:=\{x\in \mathcal {H}: f(x)<+\infty \}\ne \emptyset \) and \(f(x) > -\infty \) for all \(x \in \mathcal {H}\). We set \(\Gamma (\mathcal {H}):=\{f:\mathcal {H} \rightarrow \overline{\mathbb {R}}: f \text { is proper, convex and lower semicontinuous} \}\).

The (convex) subdifferential of *f* at \(x \in \mathcal {H}\) is defined as \(\partial f(x)=\{u\in \mathcal {H}: f(y)\ge f(x)+\langle u,y-x\rangle \ \forall y\in \mathcal {H}\}\), if \(f(x) \in \mathbb {R}\), and as \(\partial f(x) = \emptyset \), otherwise.

The infimal convolution of two proper functions \(f,g:\mathcal{H}\rightarrow \overline{\mathbb {R}}\) is the function \(f\Box g:\mathcal{H}\rightarrow \overline{\mathbb {R}}\), defined by \((f\Box g)(x)=\inf _{y\in \mathcal{H}}\{f(y)+g(x-y)\}\).

The proximal operator of parameter \(\gamma >0\) of *f* at \(x \in \mathcal {H}\) is defined as

$$\begin{aligned} \text {Prox}_{\gamma f}(x) = {{\,\mathrm{argmin}\,}}_{y\in \mathcal {H}}\left\{ f(y)+\frac{1}{2\gamma }\Vert x-y\Vert ^2\right\} . \end{aligned}$$
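For example, for \(f = \Vert \cdot \Vert _1\) the proximal operator has the well-known closed form of componentwise soft-thresholding; a minimal numpy sketch (not from the paper):

```python
import numpy as np

def prox_l1(x, gamma):
    """Prox of gamma*||.||_1 at x: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

# prox_l1(np.array([3.0, -0.5, 1.0]), 1.0)  ->  array([2., 0., 0.])
```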

The relative interior \({{\,\mathrm{ri}\,}}C\) of a convex set *C* is the interior of *C* relative to its affine hull.

Let \(A:\mathcal {H}\rightarrow \mathcal {G}\) be a linear continuous operator. The operator \(A^*:\mathcal {G}\rightarrow \mathcal {H}\), fulfilling \(\langle A^*y,x \rangle = \langle y, Ax \rangle \) for all \(x \in \mathcal {H}\) and \(y \in \mathcal {G}\), denotes the adjoint operator of *A*, while \(\Vert A\Vert :=\sup \{\Vert Ax\Vert : \Vert x\Vert \le 1 \}\) denotes the norm of *A*.
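In the matrix case, \(\Vert A\Vert \) equals the largest singular value of *A* and can be estimated by power iteration on \(A^*A\); a small sketch (the iteration count is an arbitrary choice, not from the paper):

```python
import numpy as np

def operator_norm(A, iters=200):
    """Estimate ||A|| = sqrt(lambda_max(A^T A)) by power iteration."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)          # one step of power iteration on A^T A
        v /= np.linalg.norm(v)     # renormalize to avoid overflow
    return np.sqrt(v @ (A.T @ (A @ v)))

# For A = diag(3, 1) the estimate approaches ||A|| = 3.
```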

## 3 The Proximal Alternating Minimization Algorithm

The two-block separable optimization problem we are going to investigate in this paper has the following formulation.

### Problem 3.1

Let \(\mathcal {H}, \mathcal {G}\) and \(\mathcal {K}\) be real Hilbert spaces, \(f \in \Gamma (\mathcal {H})\) be \(\gamma \)-strongly convex with \(\gamma >0\), \(g \in \Gamma (\mathcal {G})\), \(h_1:\mathcal {H}\rightarrow \mathbb {R}\) and \(h_2:\mathcal {G}\rightarrow \mathbb {R}\) be convex and Fréchet differentiable with \(L_1\)- and \(L_2\)-Lipschitz continuous gradients, respectively, \(A:\mathcal {H}\rightarrow \mathcal {K}\) and \(B:\mathcal {G}\rightarrow \mathcal {K}\) be linear continuous operators and \(b \in \mathcal {K}\). Solve

$$\begin{aligned} \min _{x \in \mathcal {H}, z \in \mathcal {G}} \{f(x)+h_1(x)+g(z)+h_2(z)\} \quad \text {subject to} \quad Ax+Bz=b. \qquad (5) \end{aligned}$$

We allow the Lipschitz constants of the gradients of the functions \(h_1\) and \(h_2\) to be zero. In this case, the respective functions are affine.

The Lagrangian associated with problem (5) is the function \(L:\mathcal {H}\times \mathcal {G}\times \mathcal {K}\rightarrow \overline{\mathbb {R}}\),

$$\begin{aligned} L(x,z,p)=f(x)+h_1(x)+g(z)+h_2(z)+\langle p, b-Ax-Bz\rangle . \end{aligned}$$

We say that \((x^*,z^*,p^*)\in \mathcal {H}\times \mathcal {G}\times \mathcal {K}\) is a saddle point of the Lagrangian *L*, if

$$\begin{aligned} L(x^*,z^*,p)\le L(x^*,z^*,p^*)\le L(x,z,p^*) \quad \forall (x,z,p)\in \mathcal {H}\times \mathcal {G}\times \mathcal {K}. \end{aligned}$$

One can show that \((x^*,z^*,p^*)\) is a saddle point of the Lagrangian *L* if and only if \((x^*,z^*)\) is an optimal solution of (5) and \(p^*\) is an optimal solution of its Fenchel dual problem (6).

The existence of saddle points of the Lagrangian *L* is guaranteed when (5) has an optimal solution and, for instance, the Attouch–Brézis-type condition (7) is fulfilled.

More precisely, if (5) has an optimal solution \((x^*,z^*)\) and (7) is fulfilled, then the dual problem (6) has an optimal solution \(p^*\) such that \((x^*,z^*,p^*)\) is a saddle point of the Lagrangian *L*. Conversely, if \((x^*,z^*,p^*)\) is a saddle point of the Lagrangian *L*, thus satisfies the system of optimality conditions

$$\begin{aligned} A^*p^*\in \partial f(x^*)+\nabla h_1(x^*),\quad B^*p^*\in \partial g(z^*)+\nabla h_2(z^*),\quad Ax^*+Bz^*=b, \qquad (8) \end{aligned}$$

then \((x^*,z^*)\) is an optimal solution of (5) and \(p^*\) is an optimal solution of (6).

### Remark 3.1

If \((x_1^*,z_1^*,p_1^*)\) and \((x_2^*,z_2^*,p_2^*)\) are two saddle points of the Lagrangian *L*, then \(x_1^*=x_2^*\). This follows easily from (8), by using the strong monotonicity of \(\partial f\) and the monotonicity of \(\partial g\).

In the following, we formulate the Proximal Alternating Minimization Algorithm to solve (5). To this end, we modify Tseng’s AMA by evaluating in each of the two subproblems the functions \(h_1\) and \(h_2\) via gradient steps, respectively, and by introducing proximal terms defined through two sequences of positively semidefinite operators \((M_1^k)_{k \ge 0}\) and \((M_2^k)_{k \ge 0}\).

### Algorithm 3.1

*(Proximal AMA)* Let \((M_1^k)_{k \ge 0} \subseteq \mathcal {S}_+(\mathcal {H})\) and \((M_2^k)_{k \ge 0} \subseteq \mathcal {S}_+(\mathcal {G})\). Choose \((x^0,z^0,p^0) \in \mathcal {H}\times \mathcal {G}\times \mathcal {K}\) and a sequence of stepsizes \((c_k)_{k\ge 0} \subseteq (0,+\infty )\). For all \(k\ge 0\), set:

$$\begin{aligned} x^{k+1}&\in {{\,\mathrm{argmin}\,}}_{x\in \mathcal {H}} \left\{ f(x)-\langle p^k, Ax\rangle +\langle x-x^k,\nabla h_1(x^k)\rangle +\frac{1}{2}\Vert x-x^k\Vert ^2_{M_1^k}\right\} , \qquad (9)\\ z^{k+1}&\in {{\,\mathrm{argmin}\,}}_{z\in \mathcal {G}} \left\{ g(z)-\langle p^k, Bz\rangle +\frac{c_k}{2}\Vert b-Ax^{k+1}-Bz\Vert ^2+\langle z-z^k,\nabla h_2(z^k)\rangle +\frac{1}{2}\Vert z-z^k\Vert ^2_{M_2^k}\right\} , \qquad (10)\\ p^{k+1}&= p^k+c_k(b-Ax^{k+1}-Bz^{k+1}), \qquad (11) \end{aligned}$$

where \(\Vert y\Vert ^2_{M} := \langle My,y\rangle \) denotes the seminorm induced by a positively semidefinite operator *M*.
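To make the role of the variable metric concrete, here is a toy sketch (invented data, \(h_1 = h_2 = 0\), \(M_1^k = 0\)) in which choosing \(M_2^k = \frac{1}{\sigma }\,\text {Id} - c\,B^*B\) turns the z-subproblem into a single proximal step of *g*. One can check by the optimality condition of the z-subproblem that this choice yields \(z^{k+1} = \text {Prox}_{\sigma g}\big (z^k + \sigma B^*(p^k + c(b - Ax^{k+1} - Bz^k))\big )\); the instance below uses \(f(x)=\tfrac{1}{2}\Vert x-a\Vert ^2\), \(g=\lambda \Vert \cdot \Vert _1\), \(A=B=\text {Id}\), \(b=0\):

```python
import numpy as np

def prox_l1(y, t):
    # componentwise soft-thresholding, the prox of t*||.||_1
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Invented toy data: min 0.5*||x - a||^2 + lam*||z||_1  s.t.  x + z = 0
a, lam = np.array([2.0, -0.3, 1.0]), 0.5
c = 1.0        # stepsize; f is 1-strongly convex and ||A|| = 1
sigma = 0.9    # sigma*c*||B||^2 < 1, so M2 = (1/sigma - c)*Id is positively definite
x = np.zeros_like(a)
z = np.zeros_like(a)
p = np.zeros_like(a)

for _ in range(200):
    # x-step (M1 = 0): argmin_x f(x) - <p, x>, closed form
    x = a + p
    # z-step with M2 = (1/sigma)*Id - c*B^T B reduces to one prox step of g
    z = prox_l1(z + sigma * (p + c * (-x - z)), sigma * lam)
    # dual update (11)
    p = p + c * (-x - z)

# Eliminating z = -x, x solves min 0.5*||x - a||^2 + lam*||x||_1,
# i.e. x is the soft-thresholding of a with threshold lam.
```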

### Remark 3.2

The sequence \((z^k)_{k \ge 0}\) is uniquely determined if there exists \(\alpha _k > 0\) such that \(c_kB^*B+M_2^k \in \mathcal {P}_{\alpha _k}(\mathcal {G})\) for all \(k \ge 0\). This actually ensures that the objective function in subproblem (10) is strongly convex.

### Remark 3.3

The convergence of the Proximal AMA method is addressed in the next theorem.

### Theorem 3.1

Let the set of saddle points of the Lagrangian *L* be nonempty. We assume that \(M_1^k-\frac{L_1}{2}{{\,\mathrm{Id}\,}}\in \mathcal {S}_+(\mathcal {H})\), \(M_1^k \succcurlyeq M_1^{k+1}\), \( M_2^k-\frac{L_2}{2}{{\,\mathrm{Id}\,}}\in \mathcal {S}_+(\mathcal {G})\), \(M_2^k \succcurlyeq M_2^{k+1}\) for all \(k \ge 0\), and that \((c_k)_{k \ge 0}\) is a monotonically decreasing sequence satisfying (12). If one of the following assumptions is fulfilled:

- (i)
there exists \(\alpha >0\) such that \(M_2^k-\frac{L_2}{2}{{\,\mathrm{Id}\,}}\in \mathcal {P}_{\alpha }(\mathcal {G})\) for all \(k \ge 0\);

- (ii)
there exists \(\beta >0\) such that \(B^*B\in \mathcal {P}_{\beta }(\mathcal {G})\);

then the sequence \((x^k, z^k, p^k)_{k \ge 0}\) generated by Algorithm 3.1 converges weakly to a saddle point of the Lagrangian *L*.

### Proof

Let \((x^*,z^*,p^*)\) be a saddle point of the Lagrangian *L*, which we fix for the rest of the proof. This means that it fulfils the system of optimality conditions

*L*. Let \(({\bar{z}},{\bar{p}}) \in \mathcal {G}\times \mathcal {K}\) be such that the subsequence \((x^{k_j}, z^{k_j}, p^{k_j})_{j\ge 0}\) converges weakly to \((x^*,{\bar{z}},{\bar{p}})\) as \(j\rightarrow +\infty \). From (16), we have

Since the graph of \(\partial f\) is sequentially closed in the strong-weak topology (see [11, Proposition 20.33]), it follows

(*u*, *v*) in the graph of \(\partial (g+h_2)^*\) and for all \(j\ge 0\)

*j* converge to \(+\infty \) and obtain

In other words, \((x^*,{\bar{z}},{\bar{p}})\) is a saddle point of the Lagrangian *L*.

In the following, we show that the sequence \((x^k,z^k,p^k)_{k \ge 0}\) converges weakly. To this end, we consider two sequential cluster points \((x^*,z_1,p_1)\) and \((x^*,z_2,p_2)\). Consequently, there exists \((k_s)_{s \ge 0}\), \(k_s \rightarrow + \infty \) as \(s \rightarrow + \infty \), such that the subsequence \((x^{k_s}, z^{k_s}, p^{k_s})_{s \ge 0}\) converges weakly to \((x^*,z_1,p_1)\) as \(s \rightarrow + \infty \). Furthermore, there exists \((k_t)_{t \ge 0}\), \(k_t \rightarrow + \infty \) as \(t \rightarrow + \infty \), such that the subsequence \((x^{k_t}, z^{k_t}, p^{k_t})_{t \ge 0}\) converges weakly to \((x^*,z_2,p_2)\) as \(t \rightarrow + \infty \). As seen before, \((x^*,z_1,p_1)\) and \((x^*,z_2,p_2)\) are both saddle points of the Lagrangian *L*.

Since \((x^*,z_1,p_1)\) and \((x^*,z_2,p_2)\) are saddle points of the Lagrangian *L*, we obtain

Being monotonically decreasing sequences of positively semidefinite operators, \((M_1^k)_{k \ge 0}\) and \((M_2^k)_{k \ge 0}\) converge pointwise to a positively semidefinite operator *M* in the strong topology as \(k \rightarrow +\infty \) (see [17, Lemma 2.3]). Furthermore, let \(c:=\lim _{k \rightarrow +\infty } c_k >0\). Taking the limits in (28) along the subsequences \((k_s)_{s \ge 0}\) and \((k_t)_{t \ge 0}\) yields

Consequently, \((x^k, z^k, p^k)_{k \ge 0}\) converges weakly to a saddle point of the Lagrangian *L*.

Assume now that condition (ii) holds, namely that there exists \(\beta > 0\) such that \(B^*B \in \mathcal {P}_\beta (\mathcal {G})\). Then \(\beta \Vert z-z'\Vert ^2 \le \Vert Bz-Bz'\Vert ^2\) for all \(z, z' \in \mathcal {G}\), which means that, if \((x_1^*,z_1^*,p_1^*)\) and \((x_2^*,z_2^*,p_2^*)\) are two saddle points of the Lagrangian *L*, then \(x_1^*=x_2^*\) and \(z_1^*=z_2^*\).

For the saddle point \((x^*,z^*,p^*)\) of the Lagrangian *L* fixed at the beginning of the proof and the generated sequence \((x^k, z^k, p^k)_{k \ge 0}\), we obtain from (23) that

If \(h_1=0\) and \(h_2=0\), and \(M_1^k=0\) and \(M_2^k = 0\) for all \(k \ge 0\), then the Proximal AMA method becomes the AMA method as proposed by Tseng [1]. According to Theorem 3.1 (for \(L_1=L_2=0\)), the generated sequence converges weakly to a saddle point of the Lagrangian, if there exists \(\beta >0\) such that \(B^*B\in \mathcal {P}_{\beta }(\mathcal {G})\). In finite-dimensional spaces, this condition reduces to the assumption that *B* is injective.

## 4 Numerical Experiments

In this section, we compare the numerical performances of AMA and Proximal AMA on two applications in image processing and machine learning. The numerical experiments were performed on a computer with an Intel Core i5-3470 CPU and 8 GB DDR3 RAM.

### 4.1 Image Denoising and Deblurring

*i*-th row and the *j*-th column, for \(1 \le i \le M, 1 \le j \le N\).

We solved the Fenchel dual problem of (31) by AMA and Proximal AMA and determined in this way an optimal solution of the primal problem, too. The reason for this strategy was that the Fenchel dual problem of (31) is a convex optimization problem with two-block separable linear constraints and objective function.

Since the functions *f* and *g* have full domains, strong duality for (31)–(32) holds.

\(g^*\) is the indicator function of the set \([-\lambda ,\lambda ]^n\times [-\lambda ,\lambda ]^n\); thus, \(\text {Prox}_{\sigma _kg^*}\) is the projection operator \(\mathcal {P}_{[-\lambda ,\lambda ]^n\times [-\lambda ,\lambda ]^n}\) onto this set. The iterative scheme reads for all \(k \ge 0\):

\(g^*\) is the indicator function of the set \(S:=\left\{ (v,w)\in \mathbb {R}^n\times \mathbb {R}^n:\max _{1\le i\le n}\sqrt{v_i^2+w_i^2}\le \lambda \right\} \); thus, \(\text {Prox}_{\sigma _kg^*}\) is the projection operator \(P_S : \mathbb {R}^n \times \mathbb {R}^n \rightarrow S\) onto *S*, defined as
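The projection onto *S* acts pairwise on \((v_i, w_i)\): pairs with norm at most \(\lambda \) stay unchanged, while longer pairs are rescaled to norm \(\lambda \). A numpy sketch of this standard formula (not copied from the paper):

```python
import numpy as np

def proj_S(v, w, lam):
    """Pairwise projection: (v_i, w_i) is rescaled to norm lam if it is longer."""
    norms = np.sqrt(v**2 + w**2)
    scale = lam / np.maximum(norms, lam)   # equals 1 where sqrt(v_i^2 + w_i^2) <= lam
    return v * scale, w * scale

# Example: the pair (3, 4) has norm 5 and is rescaled to (0.6, 0.8);
# the pair (0.1, 0.1) has norm below lam = 1 and is left unchanged.
```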

We used in our experiments a Gaussian blur of size \(9 \times 9\) and standard deviation 4, which led to an operator *A* with \(\Vert A\Vert ^2=1\) and \(A^*=A\). Furthermore, we added Gaussian white noise with standard deviation \(10^{-3}\). For both algorithms we used a constant sequence of stepsizes \(c_k=2 -10^{-7}\) for all \(k \ge 0\); one can notice that \((c_k)_{k \ge 0}\) fulfils (12). For Proximal AMA, we considered \(\sigma _k=\frac{1}{8.00001 \cdot c_k}\) for all \(k \ge 0\), which ensured that each matrix \(M_2^k=\frac{1}{\sigma _k}\text {I}-c_kL^* L\) is positively definite; this is the case whenever \(\sigma _kc_k\Vert L\Vert ^2<1\). In other words, assumption (i) in Theorem 3.1 was verified.
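The positive definiteness claim can be checked numerically: whenever \(\sigma c\Vert L\Vert ^2 < 1\), the smallest eigenvalue of \(\frac{1}{\sigma }\text {I} - cL^*L\) equals \(\frac{1}{\sigma } - c\Vert L\Vert ^2 > 0\). A sketch with a random matrix standing in for the operator *L* (the shapes and seed are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((40, 20))        # stand-in for the linear operator L
c = 2.0 - 1e-7                           # stepsize as in the experiments
norm_sq = np.linalg.norm(L, 2) ** 2      # ||L||^2, largest singular value squared
sigma = 1.0 / (1.00001 * c * norm_sq)    # ensures sigma * c * ||L||^2 < 1

M2 = np.eye(20) / sigma - c * (L.T @ L)
min_eig = np.linalg.eigvalsh(M2).min()   # smallest eigenvalue of M2
# min_eig = 1/sigma - c*||L||^2 > 0, so M2 is positively definite
```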

In Figs. 1, 2, 3 and 4, we show how Proximal AMA and AMA perform when reconstructing the blurred and noisy coloured MATLAB test image “office_4” of \(600 \times 903\) pixels (see Fig. 5) for different choices of the regularization parameter \(\lambda \), considering both the anisotropic and the isotropic total variation as regularization functionals. In all considered instances, Proximal AMA outperformed AMA from the point of view of both the convergence behaviour of the sequence of function values and of the sequence of ISNR (improvement in signal-to-noise ratio) values. An explanation could be that the number of iterations Proximal AMA performs in a given amount of time is more than double the number of outer iterations performed by AMA.

### 4.2 Kernel-Based Machine Learning

In this subsection, we will describe the numerical experiments we carried out in the context of classifying images via support vector machines.

Since the kernel matrix *K* is positively definite, the function *f* is \(\lambda _{\min }(K)\)-strongly convex, where \(\lambda _{\min }(K)\) denotes the minimal eigenvalue of *K*, and differentiable, with \(\nabla f(x)= Kx\) for all \(x \in \mathbb {R}^n\). For an element of the form \(p = (p_1,\ldots ,p_n) \in \mathbb {R}^n\), it holds

*C* and \(\sigma \)) and for the RMSE (root-mean-square error) of the sequence of primal iterates.
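The kernel properties used above (positive definiteness of *K*, \(\nabla f(x)=Kx\)) are easy to verify numerically for a Gaussian kernel matrix. The sample points and kernel width below are invented, and *f* is taken to be the quadratic \(f(x)=\tfrac{1}{2}\langle Kx,x\rangle \), the form the gradient formula suggests:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.standard_normal((6, 2))     # invented sample points
sig = 0.8                             # invented kernel width

# Gaussian kernel matrix K_ij = exp(-||p_i - p_j||^2 / (2*sig^2))
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * sig**2))

lam_min = np.linalg.eigvalsh(K).min() # minimal eigenvalue; positive for distinct points

def f(x):
    # quadratic assumed from the gradient formula: f(x) = 0.5 * <Kx, x>
    return 0.5 * x @ K @ x

# central finite differences reproduce the gradient K x
x = rng.standard_normal(6)
eps = 1e-6
num_grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(6)])
```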

Performance evaluation of Proximal AMA (with \(\tau _k=10\) for all \(k \ge 0\)) and AMA for the classification problem with \(C=1\) and \(\sigma =0.2\)

Algorithm | Time (iterations) to misclassification rate 0.7027% | Time (iterations) to RMSE \(\le 10^{-3}\) |
---|---|---|

Proximal AMA | 8.18 s (145) | 23.44 s (416) |

AMA | 8.65 s (153) | 26.64 s (474) |

Performance evaluation of Proximal AMA (with \(\tau _k=102\) for all \(k \ge 0\)) and AMA for the classification problem with \(C=1\) and \(\sigma =0.25\)

Algorithm | Time (iterations) to misclassification rate 0.7027% | Time (iterations) to RMSE \(\le 10^{-3}\) |
---|---|---|

Proximal AMA | 141.78 s (2448) | 629.52 s (10,940) |

AMA | 147.99 s (2574) | 652.61 s (11,368) |

## 5 Perspectives and Open Problems

- (1)
carry out investigations related to the convergence rates for both the iterates and objective function values of Proximal AMA; as emphasized in [10] for the Proximal ADMM algorithm, the use of variable metrics can have a determinant role in this context, as they may lead to dynamic stepsizes which are favourable to an improved convergence behaviour of the algorithm (see also [15, 21]);

- (2)
consider a slight modification of Algorithm 3.1, by replacing (11) with
$$\begin{aligned} p^{k+1} = p^k+\theta c_k(b-Ax^{k+1}-Bz^{k+1}), \end{aligned}$$
where \(0<\theta < \frac{\sqrt{5}+1}{2}\), and investigate the convergence properties of the resulting scheme; it has been noticed in [22] that the numerical performance of the classical ADMM algorithm for convex optimization problems in the presence of a relaxation parameter with \(1<\theta <\frac{\sqrt{5}+1}{2}\) surpasses the one obtained for \(\theta =1\);

- (3)
embed the investigations made in this paper in the more general framework of monotone inclusion problems, as it was recently done in [10] starting from the Proximal ADMM algorithm.

## 6 Conclusions

The Proximal AMA method has the advantage over the classical AMA method that, as long as the sequence of variable metrics is chosen appropriately, it performs proximal steps when calculating new iterates and thus avoids the use of minimization subroutines in every iteration. In addition, it properly handles smooth convex functions which might appear in the objective. The sequence of generated iterates converges to a primal–dual solution in the same setting as for the classical AMA method. The fact that one only has to perform proximal steps instead of solving minimization subproblems may lead to better numerical performance, as shown in the experiments on image processing and support vector machines classification.

## Notes

### Acknowledgements

Open access funding provided by Austrian Science Fund (FWF). The work of SB and GW has been partially supported by DFG (Deutsche Forschungsgemeinschaft), project WA922/9-1. The work of RIB has been partially supported by FWF (Austrian Science Fund), project I 2419-N32. The work of ERC has been supported by FWF, project P 29809-N32. The authors are thankful to two anonymous referees for helpful comments and remarks which improved the presentation of the manuscript.

## References

- 1. Tseng, P.: Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J. Control Optim. 29(1), 119–138 (1991)
- 2. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
- 3. Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Bauschke, H.H., Burachik, R., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer Optimization and Its Applications, vol. 49, pp. 185–212. Springer, New York (2011)
- 4. Chouzenoux, E., Pesquet, J.-C., Repetti, A.: A block coordinate variable metric forward–backward algorithm. J. Global Optim. 66(3), 457–485 (2016)
- 5. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Comput. Math. Appl. 2, 17–40 (1976)
- 6. Attouch, H., Soueycatt, M.: Augmented Lagrangian and proximal alternating direction methods of multipliers in Hilbert spaces. Applications to games, PDE’s and control. Pac. J. Optim. 5(1), 17–37 (2009)
- 7. Fazel, M., Pong, T.K., Sun, D., Tseng, P.: Hankel matrix rank minimization with applications in system identification and realization. SIAM J. Matrix Anal. Appl. 34, 946–977 (2013)
- 8. Shefi, R., Teboulle, M.: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization. SIAM J. Optim. 24, 269–297 (2014)
- 9. Banert, S., Boţ, R.I., Csetnek, E.R.: Fixing and extending some recent results on the ADMM algorithm. Preprint arXiv:1612.05057 (2017)
- 10. Boţ, R.I., Csetnek, E.R.: ADMM for monotone operators: convergence analysis and rates. Adv. Comput. Math. https://doi.org/10.1007/s10444-018-9619-3 (to appear)
- 11. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York (2011)
- 12. Boţ, R.I.: Conjugate Duality in Convex Optimization. Lecture Notes in Economics and Mathematical Systems, vol. 637. Springer, Berlin (2010)
- 13. Bredies, K., Sun, H.: A proximal point analysis of the preconditioned alternating direction method of multipliers. J. Optim. Theory Appl. 173(3), 878–907 (2017)
- 14. Bredies, K., Sun, H.: Preconditioned Douglas–Rachford splitting methods for convex-concave saddle-point problems. SIAM J. Numer. Anal. 53(1), 421–444 (2015)
- 15. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
- 16. Esser, E., Zhang, X., Chan, T.F.: A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 3(4), 1015–1046 (2010)
- 17. Combettes, P.L., Vũ, B.C.: Variable metric quasi-Fejér monotonicity. Nonlinear Anal. 78, 17–31 (2013)
- 18. Boţ, R.I., Hendrich, C.: Convergence analysis for a primal-dual monotone + skew splitting algorithm with applications to total variation minimization. J. Math. Imaging Vis. 49(3), 551–568 (2014)
- 19. Hendrich, C.: Proximal Splitting Methods in Non-smooth Convex Optimization. Ph.D. Thesis, Chemnitz University of Technology, Chemnitz, Germany (2014)
- 20. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total-variation-based noise removal algorithms. Physica D Nonlinear Phenom. 60(1–4), 259–268 (1992)
- 21. Boţ, R.I., Csetnek, E.R., Heinrich, A., Hendrich, C.: On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems. Math. Program. 150(2), 251–279 (2015)
- 22. Fortin, M., Glowinski, R.: On decomposition-coordination methods using an augmented Lagrangian. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Applications to the Solution of Boundary-Value Problems. North-Holland, Amsterdam (1983)

## Copyright information

**OpenAccess**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.