
New Preconditioners Applied to Linear Programming and the Compressive Sensing Problems

Abstract

In this paper, we present new preconditioners based on the incomplete Cholesky factorization and on the splitting preconditioner. In the first approach, we consider interior point methods, which are very efficient for solving linear programming problems. The numerical tests for this problem show satisfactory results with respect to time and number of iterations. In the second approach, we apply a new preconditioner to compressive sensing (CS) problems, an efficient technique for acquiring and reconstructing signals. One approach for solving this problem is the primal-dual Newton conjugate gradients method. We present a new preconditioner whose construction exploits the fact that, close to a solution, the variables can be split into two groups and the matrices satisfy certain properties, as demonstrated in a method known from the literature (Fountoulakis 2015). Once the preconditioner exploiting these features has been computed, we apply an incomplete Cholesky factorization to it and use the resulting factor as the true preconditioner. The new preconditioner is therefore the combination of two preconditioners. The results obtained are satisfactory with respect to time and the quality of the reconstructed image.

Introduction

Linear programming problems arise in a variety of contexts, such as fleet and route logistics planning and operational strategies in mining, steel, agriculture, and other industries. Among the approaches in the literature for solving linear programming problems, in the first part of the paper we use the predictor-corrector interior point method.

A linear optimization problem consists of minimizing or maximizing a linear objective function, subject to a finite set of linear constraints. The constraints can be either equalities or inequalities. The standard form of a linear optimization problem is called a primal problem and is given by:

$$ \begin{array}{c} \text{ minimize} \ c^{T}x \\ \text{ subject to} \ \quad Ax=b \\ \qquad \qquad \quad \qquad x\geq0 \end{array}, $$
(1)

where \(A \in \mathbb{R}^{m \times n}\) is the constraint matrix, \(x \in \mathbb{R}^{n}\) is a column vector whose components are called primal variables, and \(b \in \mathbb{R}^{m}\) and \(c \in \mathbb{R}^{n}\) are column vectors, with c being the costs associated with the elements of x.

We know that the matrix of the linear system can be ill-conditioned, so preconditioning is mandatory when iterative methods are used. We present a new preconditioner for linear programming problems, which we call the splitting factor, and present numerical results.

An efficient technique for acquiring and reconstructing signals is compressive sensing (CS), also known as compressive sampling [5]. Compressive sensing is applied in photography [13], magnetic resonance imaging [16], and tomography [9], among other areas. A new method applied to compressive sensing is presented in this paper.

We present our preconditioner applied to the compressive sensing problem. Following the same reasoning presented for linear programming problems, we apply the incomplete Cholesky factorization to the preconditioner developed in [10], since, like the splitting preconditioner, it achieves better performance close to a solution. Numerical experiments with this new preconditioner are also presented.

The results obtained, both for linear programming problems and for compressive sensing problems, are satisfactory compared with preconditioners already known in the literature.

Preconditioners for Linear Systems

In interior point methods, we obtain a linear system corresponding to the Newton method applied to the optimality conditions of the problem. The system can be solved by direct methods, such as the LU and Cholesky factorizations, or by iterative methods, such as the conjugate gradient method (CGM). The Cholesky factorization [11] is the method most often used for solving the linear system. Computing the factors can be very expensive, since in problems with sparse matrices the factorization can suffer from fill-in, that is, the appearance of extra nonzero entries in the Cholesky factor. Equipped with appropriate preconditioners, iterative methods become a good alternative. As the matrix of the system is positive definite, we apply the conjugate gradient method, which performs well when the matrices are well conditioned; when this is not the case, preconditioning becomes necessary.
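
To illustrate the effect of preconditioning, the following minimal Matlab sketch compares plain conjugate gradients with conjugate gradients preconditioned by an incomplete Cholesky factor. The test matrix and all variable names are ours, purely illustrative, and not taken from the paper.

    % Minimal sketch: plain vs. ichol-preconditioned CG on a sparse SPD system.
    % The test matrix is illustrative only, not taken from the paper.
    S = delsq(numgrid('S', 32));            % sparse SPD finite-difference Laplacian
    b = ones(size(S, 1), 1);

    % Unpreconditioned conjugate gradients
    [~, ~, ~, it0] = pcg(S, b, 1e-8, 500);

    % Incomplete Cholesky preconditioner, S ~ L*L'
    L = ichol(S, struct('type', 'ict', 'droptol', 1e-3));
    [~, ~, ~, it1] = pcg(S, b, 1e-8, 500, L, L');

    fprintf('CG iterations: %d plain, %d preconditioned\n', it0, it1);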

Incomplete Factor of the Splitting Preconditioner

The splitting preconditioner \(MM^{T}\) [18] arose from interior point methods that use iterative approaches, in particular, the conjugate gradient method.

Consider the matrix \(AD^{-1}A^{T}\), with \(A=[\mathcal{B} \quad \mathcal{N}]P\), where P is a permutation matrix such that \(\mathcal{B}\) is nonsingular. We can rewrite it as follows:

$$ AD^{-1}A^{T}=\mathcal{B}D_{\mathcal{B}}^{-1}\mathcal{B}^{T} + \mathcal{N}D_{\mathcal{N}}^{-1}\mathcal{N}^{T}. $$
(2)

The construction of the preconditioner \(MM^{T}\) is based on the behavior of the matrix D in the final iterations of the interior point method.

Consider \(D^{k}\), the matrix D corresponding to the k-th iteration; the elements of \(D^{k}\) are given by:

$$ d_{ii}^{k}=\frac{{z_{i}^{k}}}{{x_{i}^{k}}}, \qquad 1\leq i \leq n, $$
(3)

note that all elements \(d_{ii}^{k}\) are positive.

As the method converges to the solution, we can split the primal and dual variables, \({x^{k}_{i}}\) and \({z_{i}^{k}}\), into two subsets \(\mathcal{B}\) and \(\mathcal{N}\), such that \(\mathcal{B}\) corresponds to the subset that approaches \(x^{*}>0\) and \(z^{*}=0\), and \(\mathcal{N}\) to the subset that approaches \(x^{*}=0\) and \(z^{*}>0\), where \((x^{*},z^{*})\) is a solution of the linear programming problem. Thus, the variables are divided into the subsets \(\mathcal{B}\) and \(\mathcal{N}\),

  \(\mathcal{B}\) \(\mathcal{N}\)
\({x_{i}^{k}}\) \(\rightarrow x^{*}>0 \) \(\rightarrow x^{*}=0\)
\({z_{i}^{k}}\) \(\rightarrow z^{*}=0 \) \(\rightarrow z^{*}>0\)

as the method approaches the solution. We assume that the problem is non-degenerate.

Considering \(M=\mathcal{B}D_{\mathcal{B}}^{-\frac{1}{2}}\), that is, pre-multiplying the sum obtained in Eq. 2 by \(D_{\mathcal{B}}^{\frac{1}{2}}\mathcal{B}^{-1}\) and post-multiplying it by its transpose, we obtain the preconditioned matrix:

$$ D_{\mathcal{B}}^{\frac{1}{2}}\mathcal{B}^{-1}(AD^{-1}A^{T})\mathcal{B}^{-T}D_{\mathcal{B}}^{\frac{1}{2}}=I+D_{\mathcal{B}}^{\frac{1}{2}}\mathcal{B}^{-1}\mathcal{N}D_{\mathcal{N}}^{-1}\mathcal{N}^{T}\mathcal{B}^{-T}D_{\mathcal{B}}^{\frac{1}{2}} $$
$$ =I+WW^{T}, \quad \text{with} \quad W=D_{\mathcal{B}}^{\frac{1}{2}}\mathcal{B}^{-1}\mathcal{N}D_{\mathcal{N}}^{-\frac{1}{2}}. $$
(4)

Note that the preconditioner is more efficient close to the solution, because the preconditioned matrix (4) will be closer to the identity matrix.

The splitting preconditioner is given by \(\mathcal{B}D_{\mathcal{B}}^{-1}\mathcal{B}^{T}\). Applying the incomplete Cholesky factorization to \(\mathcal{B}D_{\mathcal{B}}^{-1}\mathcal{B}^{T}\), we determine the splitting factor \(\widehat{L}\), which is the preconditioner presented in this paper. Thus, we obtain the preconditioned matrix:

$$ \widehat{L}^{-1}AD^{-1}A^{T}\widehat{L}^{-T}=\widehat{L}^{-1}\mathcal{B}D_{\mathcal{B}}^{-1}\mathcal{B}^{T}\widehat{L}^{-T}+\widehat{L}^{-1}\mathcal{N}D_{\mathcal{N}}^{-1}\mathcal{N}^{T}\widehat{L}^{-T}. $$
(5)

As \(\widehat{L}\widehat{L}^{T}\) is an approximation of \(\mathcal{B}D_{\mathcal{B}}^{-1}\mathcal{B}^{T}\), the incomplete factor works as a good preconditioner. We also highlight that for small problems, the computational cost of determining the incomplete factor is low.

In summary, in this paper we present preconditioners obtained by computing a factorization of the splitting preconditioner. These preconditioners are given by the factor obtained from the incomplete Cholesky factorization [3] of the matrix defined by the splitting preconditioner [18]. Thus, to solve the normal equations from interior point methods, with coefficient matrix \(AD^{-1}A^{T}\), where \(MM^{T}\) represents the splitting preconditioner, we apply the incomplete Cholesky factorization to \(MM^{T}\), obtaining the factor \(\widehat{L}\): this factor is the new preconditioner, and the preconditioned matrix of the linear system is \(\widehat{L}^{-1}AD^{-1}A^{T}\widehat{L}^{-T}\).
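
The following minimal Matlab sketch outlines this construction. The index set of the columns of \(\mathcal{B}\), here idxB, is assumed to have already been chosen by the column-selection procedure of [18]; A is assumed sparse, x and z are the current primal and dual iterates, r is the right-hand side of the normal equations, and all variable names are illustrative.

    % Sketch of the splitting factor: incomplete Cholesky factorization of
    % the splitting preconditioner B * D_B^{-1} * B'. The index set idxB of
    % m linearly independent columns of A is assumed given (cf. [18]).
    d  = z ./ x;                                     % diagonal of D: d_ii = z_i / x_i
    m  = numel(idxB);
    B  = A(:, idxB);                                 % columns of A in the subset B
    SB = B * spdiags(1 ./ d(idxB), 0, m, m) * B';
    SB = (SB + SB') / 2;                             % symmetrize against round-off

    % Splitting factor: incomplete Cholesky factor of B * D_B^{-1} * B'
    Lhat = ichol(SB, struct('type', 'ict', 'droptol', 1e-3));

    % Preconditioned CG on the normal equations A*D^{-1}*A' * dy = r,
    % with A*D^{-1}*A' applied as an operator
    ADAt = @(v) A * ((A' * v) ./ d);
    dy   = pcg(ADAt, r, 1e-6, 200, Lhat, Lhat');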

For large-scale problems, the incomplete factorization yields a less accurate approximation \(\widehat{L}\widehat{L}^{T}\) of the splitting preconditioner, which makes the incomplete factor a poor preconditioner [14].

We present two preconditioners:

First preconditioner Applied to the solution of linear systems arising from interior point methods for linear programming. In this approach, we implemented the predictor-corrector interior point method in Matlab. The new preconditioner is used in all iterations: to obtain it, we apply the incomplete Cholesky factorization to the matrix \(\mathcal{B}D_{\mathcal{B}}^{-1}\mathcal{B}^{T}\) of the splitting preconditioner, where \(A=[\mathcal{B} \quad \mathcal{N}]P\) and P is a permutation matrix such that \(\mathcal{B}\) is nonsingular [18]. The incomplete factor \(\widehat{L}\) obtained is called the splitting factor. The preconditioned matrix is \(\widehat{L}^{-1}AD^{-1}A^{T}\widehat{L}^{-T}\).

Second preconditioner The second preconditioner is specific to compressive sensing and is implemented in Matlab. We show that the preconditioner known from the literature for this problem resembles the splitting preconditioner, in the sense that it performs better in the final iterations. An incomplete Cholesky factorization is applied to this preconditioner, yielding a pseudo-splitting factor.

The following table presents the features of the two proposed preconditioners.

Approach First Second
Language Matlab Matlab
Type LPP Compressive sensing
Factorization ICF ICF
Matrix \(\mathcal{B}D_{\mathcal{B}}^{-1}\mathcal{B}^{T}\) \(\widetilde{N}\)

In the table, the second row corresponds to the type of problem we are solving: LPP indicates a linear programming problem, and compressive sensing is the problem discussed in the second part of this paper. In the third row, ICF indicates that the incomplete Cholesky factorization is applied to the matrix of the splitting preconditioner. In the last row, we show the matrices defined by the splitting preconditioner; \(\widetilde{N}\) is defined in the next section.

Compressive Sensing

The theory of CS asserts that certain signals and images can be recovered from far fewer samples or measurements than traditional methods require [5]. This is possible due to the sparsity of the signal of interest and to the sensing matrix satisfying the restricted isometry property (RIP) [4]. We know that:

– Sparsity:

It exploits the fact that many natural signals have sparse representations in appropriate bases.

– Restricted isometry property (RIP):

It characterizes the matrices that efficiently capture the information in sparse signals.

For more details, see [7].
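
For reference, a matrix A satisfies the RIP of order s if there is a constant \(\delta_{s} \in (0,1)\) such that

$$ (1-\delta_{s})\|x\|_{2}^{2} \leq \|Ax\|_{2}^{2} \leq (1+\delta_{s})\|x\|_{2}^{2} $$

holds for every s-sparse vector x [4].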

The goal is to create a new second-order method applied to compressive sensing. A method called primal-dual Newton conjugate gradients (pdNCG) is presented in [10]. We will modify this method seeking more efficient results.

Proposed Method

We present the proposed preconditioner for compressive sensing problems, based on the splitting factor preconditioner discussed above, which we call the pseudo-splitting factor.

Approach

The method that we present, denoted pdNCGs, is based on the application of the incomplete Cholesky factorization to the matrix of the preconditioner suggested by Fountoulakis [10]. The method relies on the idea that, just like the splitting preconditioner, the preconditioner presented in [6] also performs better in the final iterations of the method, that is, close to the solution. Thus, given the preconditioner \(\widetilde{N}:=\tau \operatorname{sym}(\widetilde{B})+\rho I_{n}\), where τ and ρ are positive scalars and the matrix \(\widetilde{B}\) is developed in [6], we apply the incomplete Cholesky factorization to the matrix \(\widetilde{N}\), and the factor obtained is the preconditioner that we use. Thus, if \(\widehat{L}\) is the factor obtained by the incomplete Cholesky factorization of the matrix \(\widetilde{N}\), the preconditioned matrix has the form \(\widehat{L}^{-1}\widetilde{N}\widehat{L}^{-T}\).
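
A minimal Matlab sketch of this construction follows; Btilde, tau, and rho are assumed to be available from the pdNCG machinery of [6], and all variable names are illustrative.

    % Sketch of the pseudo-splitting factor. Btilde, tau and rho are assumed
    % available from the pdNCG method of [6]; names are illustrative.
    symB   = (Btilde + Btilde') / 2;                   % sym(Btilde)
    Ntilde = tau * symB + rho * speye(size(symB, 1));  % preconditioner matrix

    % Pseudo-splitting factor: incomplete Cholesky factor of Ntilde
    Lhat = ichol(sparse(Ntilde), struct('type', 'ict', 'droptol', 1e-3));

    % Lhat and Lhat' are then passed to the CG solver inside pdNCG, e.g.
    % pcg(H, rhs, tol, maxit, Lhat, Lhat'), for the Newton system matrix H.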

Computational Tests

In this section, we present the tests performed for linear programming problems and compressive sensing. In both cases, the implementation was performed in Matlab.

Computational Experiments for Linear Programming Problems

We present the results obtained using the splitting factor as the preconditioner in all iterations of the interior point method, compared with the results obtained by the method with the splitting preconditioner, also used in all iterations.

In this test, we implemented a predictor-corrector interior point method for bounded variables. The matrix of the splitting preconditioner (\(\mathcal{B}D^{-1}_{\mathcal{B}}\mathcal{B}^{T}\)) is determined, and then the incomplete Cholesky factor of this matrix, which corresponds to the splitting factor, is used as the preconditioner. To determine the incomplete Cholesky factor, the Matlab built-in function ichol is used, with a drop tolerance equal to \(10^{-3}\). We also use a diagonal shift with an appropriate value of α.
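
Concretely, the call has the following form (a sketch; the loop that increases α until the factorization succeeds is a common safeguard against ichol breakdowns, not necessarily the exact procedure of our implementation):

    % ichol with drop tolerance 1e-3 and a diagonal shift (diagcomp).
    % SB = B * D_B^{-1} * B' is the matrix of the splitting preconditioner.
    opts  = struct('type', 'ict', 'droptol', 1e-3, 'diagcomp', 0);
    alpha = 0;
    while true
        try
            Lhat = ichol(SB, opts);         % may fail on a nonpositive pivot
            break;
        catch
            alpha = max(2 * alpha, 1e-3);   % increase the diagonal shift
            opts.diagcomp = alpha;
        end
    end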

The experiments are performed with our own implementation in Matlab R2017a on Linux, on a computer with 4 GB of memory and an Intel Core i3-2367M 1.40-GHz processor.

In this test, 31 problems were considered, from different collections. They can be found at:

Table 1 presents the data of the problems used. In all problems, the primal variables are nonnegative.

Table 1 General problem data

The results obtained are presented in Table 2, where the time for the solution is given in seconds, It corresponds to the number of iterations of the interior point method, and FO indicates whether the solution reached is optimal.

Table 2 Splitting factor and splitting preconditioners

In the column referring to time, the values in red in Table 2 correspond to the problems in which the method with the splitting factor preconditioner was faster. In the It column, which gives the number of iterations required for convergence, the values in red refer to the problems with fewer iterations than with the splitting preconditioner. When the two preconditioners required the same number of iterations, the value is shown in bold.

The method with the new preconditioner was able to solve all 31 problems, while the splitting preconditioner solved 29 of the 31, since it did not achieve convergence for the degen2 and scorpion problems. We indicate in blue the time and number of iterations obtained by the method with the splitting factor preconditioner for these two problems.

With respect to time, the new approach obtained a better time in 18 problems, and for the problems ken_07, nug08, and pds_02 the differences are significant. For the qap8 problem, the new preconditioner did not achieve a favorable time. We highlight that the largest matrix of the system is related to the pds_02 problem, for which the new preconditioner was approximately 11 times faster. In [14], it is noted that for the problems pds_06 (with dimension 9145×28,472), pds_10 (15,637×48,780), pds_20 (32,276×106,180), and pds_30 (47,957×156,042), the results obtained, despite better processing times, were not significantly better.

The number of iterations was reduced in only a few problems, 6 in total. However, the bold values show that the same number of iterations was obtained for most problems, 19 in total.

Numerical Experiments for Compressive Sensing

The numerical experiments were performed in Matlab R2017a on Linux, on a computer with 4 GB of memory and an Intel Core i3-2367M 1.4-GHz processor. The pdNCG code presented in [6] is used.

To build the pdNCGs code, the matrix of the preconditioner (\(\widetilde{N}\)) is determined, and then the incomplete Cholesky factor of this matrix, which corresponds to the pseudo-splitting factor, is used as the preconditioner. To determine the incomplete Cholesky factor, the Matlab built-in function ichol is used, with a drop tolerance equal to \(10^{-3}\). We also use a diagonal shift with an appropriate value of α.

The images (Figs. 1, 2, 3, 4, 5, and 6) used for the tests are the following: House and Peppers (Fig. 3), Lena and Fingerprint (Fig. 4), Boat and Barbara (Fig. 5), and Shepp-Logan (Fig. 6), which are standard images from the image processing community; and also Dice, Ball, and Cup (Fig. 1) and Letter and Logo (Fig. 2), sampled using a single-pixel camera [8]. In total, there are 12 images: House and Peppers have 256×256 pixels; Lena, Fingerprint, Boat, and Barbara have 512×512 pixels; the size of the Shepp-Logan image varies according to the experiment; and the Dice, Ball, Cup, Letter, and Logo images have 64×64 pixels.

Fig. 1 Dice, Ball, and Cup, extracted from http://dsp.rice.edu/cscamera

Fig. 2 Letter and Logo, extracted from http://dsp.rice.edu/cscamera

Fig. 3 House and Peppers, with 256×256 pixels

Fig. 4 Lena and Fingerprint, with 512×512 pixels

Fig. 5 Boat and Barbara, with 512×512 pixels

Fig. 6 Shepp-Logan, with variable size

We verify the efficiency of the proposed method, pdNCGs, compared with four state-of-the-art first-order methods: TFOCS (Templates for First-Order Conic Solvers) [1], which solves two types of problems, one unconstrained (TFOCS_unc) and the other constrained (TFOCS_con); TVAL3 (Total Variation minimization by Augmented Lagrangian and Alternating direction Algorithms) [15]; TwIST (Two-step Iterative Shrinkage/Thresholding) [2]; and the pdNCG method [6], which is the method we modified in search of more efficiency.

We measure the quality of the reconstructed image using the peak-signal-to-noise ratio (PSNR) function [12], given by:

$$ \mathrm{PSNR}=10\log_{10}\left( \frac{\mathit{peakval}^{2}}{\mathit{MSE}}\right), $$

where MSE (mean squared error) is the mean squared error between the solution and the original noiseless image, and peakval is the maximum possible pixel value of the image.
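
In Matlab, this measure can be computed directly from the definition (a sketch; xrec and xorig denote the reconstructed and the original noiseless image, and peakval the maximum possible pixel value, e.g., 255 for 8-bit images):

    % PSNR from its definition: xrec and xorig are the reconstructed and
    % original (noiseless) images; peakval is the maximum pixel value.
    mse     = mean((double(xrec(:)) - double(xorig(:))).^2);  % mean squared error
    psnr_db = 10 * log10(peakval^2 / mse);                    % PSNR in dB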

The methods pdNCGs, pdNCG, TVAL3, and TwIST terminate when the PSNR of the solution is equal to or greater than the PSNR of the solution obtained by TFOCS_unc, which ensures that the methods terminate when a solution of at least the same quality as that of TFOCS_unc is obtained.

Single-Pixel Camera

In the first test, the TFOCS_con, pdNCG, and pdNCGs methods are compared on image reconstruction problems where the data is sampled using a single-pixel camera. The value of the objective at the optimal solution is unknown, as is the noise level, so the reconstructions can only be compared visually. For all experiments, the matrix is a partial Walsh basis, with \(n=64^{2}\) and \(m \approx 0.4n\); that is, 40% of the measurements are selected uniformly at random.

The reconstructions of the images are shown in Figs. 7, 8, 9, 10, and 11. The following tables show the times, in seconds, of each method to obtain the solution. The times when pdNCGs obtained the best result are highlighted in bold.

  TFOCS_con pdNCG pdNCGs
Time 42.92 10.72 6.66
Fig. 7 Dice figure reconstructed by TFOCS_con, pdNCG, and pdNCGs, respectively

  TFOCS_con pdNCG pdNCGs
Time 43.46 28.29 68.47
Fig. 8 Ball figure reconstructed by TFOCS_con, pdNCG, and pdNCGs, respectively

  TFOCS_con pdNCG pdNCGs
Time 61.49 25.3 65.76
Fig. 9 Cup figure reconstructed by TFOCS_con, pdNCG, and pdNCGs, respectively

  TFOCS_con pdNCG pdNCGs
Time 38.59 51.32 47.59
Fig. 10 Letter figure reconstructed by TFOCS_con, pdNCG, and pdNCGs, respectively

  TFOCS_con pdNCG pdNCGs
Time 86.68 62.24 32.91
Fig. 11 Logo figure reconstructed by TFOCS_con, pdNCG, and pdNCGs, respectively

In general, the methods reach the same image quality. The pdNCGs and pdNCG methods each achieve the best time in two figures, while TFOCS_con achieves the best time in one case. The authors of [6] commented that, with certain adjustments to pdNCG, a better time could be obtained on all the images, but that a default (standard) implementation was used to make the comparison fair. Overall, pdNCGs and pdNCG achieved the best performance, with pdNCG still slightly ahead: for the Ball and Cup images it was about twice as fast as pdNCGs, whereas pdNCGs obtained the best times for the Dice and Logo images. The difference in the times obtained for Dice is small, and for Logo the time of pdNCG is about twice that of pdNCGs.

Performance Relative to Noise Level

In this second test, the performance of the approaches is analyzed as the noise level in the problems increases (that is, as the PSNR decreases). PSNR is measured in dB (decibels). In this experiment, the images used are House, Peppers, Lena, Fingerprint, Boat, and Barbara (Figs. 3 and 4).

Table 3 shows the results obtained as the PSNR of the original image decays from 90 to 15 dB. The table shows the PSNR obtained by the methods when the original image is corrupted by noise, as well as the time required by the methods, given in seconds. The CS matrix for all experiments is a partial discrete cosine transform (DCT), with \(m \approx \frac{n}{4}\).
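
As a sketch, a partial DCT sampling operator of this kind can be built in Matlab by keeping m randomly chosen coefficients of the orthonormal DCT (dct and idct from the Signal Processing Toolbox are assumed; the operator actually used in [6] may differ in details):

    % Sketch of partial DCT sampling of a vectorized image x of length n.
    % dct/idct (Signal Processing Toolbox) are assumed; details such as the
    % coefficient selection may differ from the operator used in [6].
    n    = numel(x);              % signal length (number of pixels)
    m    = round(n / 4);          % number of measurements, m ~ n/4
    rows = sort(randperm(n, m));  % random subset of DCT coefficients to keep

    c = dct(x(:));                % full orthonormal DCT of the signal
    y = c(rows);                  % sampled measurements, y = A*x

    % Adjoint A'*y: zero-fill the unsampled coefficients, then inverse DCT
    c2       = zeros(n, 1);
    c2(rows) = y;
    xadj     = idct(c2);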

Table 3 Performance of TFOCS, TVAL3, pdNCG, and pdNCGs regarding the increased noise level

We denote in bold the values of time and PSNR for which the new pdNCGs method obtains a better result than the other methods. The asterisk corresponds to the problems in which a good reconstruction of the image is not obtained; this is determined by comparing the PSNR obtained by the method with the PSNR obtained by the TFOCS_unc method.

As already mentioned in [6], the pdNCG method performs well on problems with high noise levels, and the same holds for the new method.

Comparing the new pdNCGs method with its original version pdNCG, we found that it achieves a very significant time reduction, in some cases sixfold.

Performance Regarding the Size of the Problem

In this test, we analyze the performance of the implementations as the problem size n increases. In this experiment, the Shepp-Logan image (Fig. 6) is used. The CS matrix is a partial DCT matrix [19], with \(m \approx \frac{n}{4}\). The sampled signals have PSNR equal to 10 dB. Table 4 shows the results obtained regarding time and PSNR, with sizes ranging from 64×64 to 1024×1024 pixels.

Table 4 Performance of TwIST, TFOCS, TVAL3, pdNCG, and pdNCGs for increasing problem size

In bold, we highlight when the pdNCGs method obtains the best PSNR; we notice that this happens when the image has 64×64 pixels. Problems where the method did not converge are indicated by an asterisk.

Notice that TVAL3 did not converge to a solution with a PSNR value greater than or equal to that of the TFOCS_unc method for three image sizes (256×256, 512×512, and 1024×1024). The TwIST method did not achieve the desired PSNR for image reconstruction in any case. In general, the pdNCG method performs better as the problem size increases. Comparing pdNCGs with pdNCG, we notice that the new method is time-competitive only for the image with the largest number of pixels, although the time differences between the two methods are not large. On the other hand, for the first two images (64×64 and 128×128), pdNCGs obtains a better quality in the reconstruction of the image.

Performance Regarding the Smoothing Parameter

In this test, we analyze the dependence of the methods on the smoothing parameter μ. The performance of pdNCGs, pdNCG, and pdNCG without preconditioner is reported in Table 5 as PCGs, PCG, and CG, respectively. The images used for the tests are House, Peppers, Lena, Fingerprint, Boat, and Barbara (Figs. 3 and 4). The CS matrix is a partial DCT matrix, with \(m \approx \frac{n}{4}\) and n equal to the number of pixels in each image. For all experiments, the sampled signals have PSNR equal to 15 dB. The results obtained by each method with respect to time and PSNR are presented in Table 5.

Table 5 Performance of methods for decreasing values of the smoothing parameter μ

In bold, we highlight the problems where pdNCGs achieves the best time or the best image quality. We notice that for each parameter μ, pdNCGs obtained at least 4 of the 6 best possible reconstructions. For the Fingerprint and Barbara images, pdNCGs requires a large amount of time for the solution when the parameter μ is \(10^{-10}\) or \(10^{-13}\), compared with the time obtained by pdNCG.

The preconditioner is only activated when the parameter μ is less than or equal to \(10^{-4}\) [6]; this can be seen in the table, where the time for convergence of the preconditioned methods increases sharply when the parameter changes from \(\mu=10^{-2}\) to \(\mu=10^{-4}\). When the preconditioner is active, we notice that the time is stable under variation of μ, which confirms the efficiency of the preconditioner.

Performance Regarding the Number of Measurements

In this experiment, we compare four methods as the number of measurements m decreases. The figures used for the test are House, Peppers, Lena, Fingerprint, Boat, and Barbara (Figs. 3 and 4).

The CS matrix is a partial DCT matrix. For all experiments, the sampled signals have PSNR equal to 15 dB (Table 6).

Table 6 Performance regarding the number of measurements m

In bold, we highlight the problems for which pdNCGs obtains the best times compared with the other methods. Notice that for 75% of samples, which means \(m \approx \frac{3n}{4}\), where n is the number of pixels of the image to be reconstructed, pdNCGs obtains the best time in all comparisons. Often, pdNCG takes twice the time of pdNCGs. The TVAL3 method did not achieve a good reconstruction of the image for most problems.

Conclusions

In this work, we present a new preconditioner for linear programming problems, as well as a new preconditioner for compressive sensing problems. In the first experiment, performed in Matlab, 31 linear programming problems are considered. The new preconditioner solves all of them, while the method using the splitting preconditioner solves 29 problems. In terms of time, the new approach reaches a better time in 18 problems, and for three of them in particular the differences are very large. It reduced the number of iterations of the interior point method for six problems and reached the same number of iterations for the majority of the test problems. In the second part of the paper, the new preconditioner, called the pseudo-splitting factor, is incorporated into a method known in the literature, and we apply it to compressive sensing problems. In general, in all tests, it obtained satisfactory results regarding the time and quality of the reconstructed image. The results are compared with state-of-the-art first-order methods and with the method that we improved. Based on the evidence reported in this paper, we conclude that the method with the new preconditioner is more efficient for the applications tested.

References

1. Becker SR, Candès EJ, Grant MC (2011) Templates for convex cone problems with applications to sparse signal recovery. Math Program Comput 3(3):165–218
2. Bioucas-Dias JM, Figueiredo MA (2007) A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans Image Process 16(12):2992–3004
3. Campos FF (1995) Analysis of conjugate gradients-type methods for solving linear equations. Ph.D. thesis, University of Oxford
4. Candès EJ, Tao T (2005) Decoding by linear programming. IEEE Trans Inform Theory 51(12):4203–4215
5. Candès EJ, Wakin MB (2008) An introduction to compressive sampling. IEEE Signal Processing Magazine 25(2):21–30
6. Dassios I, Fountoulakis K, Gondzio J (2015) A preconditioner for a primal-dual Newton conjugate gradients method for compressed sensing problems. SIAM J Sci Comput 37(6):A2783–A2812
7. Duarte MF, Baraniuk RG (2013) Spectral compressive sensing. Appl Comput Harmon Anal 35(1):111–129
8. Duarte MF, Davenport MA, Takbar D, Laska JN, Sun T, Kelly KF, Baraniuk RG (2008) Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine 25(2):83–91
9. Firooz MH, Roy S (2010) Network tomography via compressed sensing. In: Global telecommunications conference (GLOBECOM 2010). IEEE, Miami, pp 1–5
10. Fountoulakis K (2015) Higher-order methods for large-scale optimization. Ph.D. thesis, University of Edinburgh
11. Golub GH, Van Loan CF (2012) Matrix computations, 3rd edn. JHU Press, Baltimore
12. Gonzalez RC, Woods RE (2002) Digital image processing. Prentice Hall, Upper Saddle River, NJ
13. Huang G, Jiang H, Matthews K, Wilford P (2013) Lensless imaging by compressive sensing. In: 20th IEEE international conference on image processing (ICIP). IEEE, Melbourne, pp 2101–2105
14. Kikuchi PA (2017) Novos pré-condicionadores aplicados a problemas de programação linear e ao problema compressive sensing [New preconditioners applied to linear programming problems and to the compressive sensing problem]. Ph.D. thesis, University of Campinas. In Portuguese
15. Li C, Yin W, Jiang H, Zhang Y (2013) An efficient augmented Lagrangian method with applications to total variation minimization. Comput Optim Appl 56(3):507–530
16. Lustig M, Donoho D, Pauly JM (2007) Sparse MRI: the application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58(6):1182–1195
17. Ni T, Zhai J (2016) A matrix-free smoothing algorithm for large-scale support vector machines. Inform Sci 358:29–43
18. Oliveira AR, Sorensen DC (2005) A new class of preconditioners for large-scale linear systems from interior point methods for linear programming. Linear Algebra Appl 394:1–24
19. Rao KR, Yip P (1990) Discrete cosine transform: algorithms, advantages, applications. Academic Press, Boston, MA


Author information

Correspondence to Paula A. Kikuchi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


Cite this article

Kikuchi, P.A., Oliveira, A.R.L. New Preconditioners Applied to Linear Programming and the Compressive Sensing Problems. SN Oper. Res. Forum 1, 36 (2020). https://doi.org/10.1007/s43069-020-00029-w


Keywords

  • Preconditioner
  • Linear programming
  • Signal processing
  • Compressive sensing