
Element-wise uniqueness, prior knowledge, and data-dependent resolution

Original Paper · Signal, Image and Video Processing

Abstract

Techniques for finding regularized solutions to underdetermined linear systems can be viewed as imposing prior knowledge on the unknown vector. The success of modern techniques, which can impose priors such as sparsity and non-negativity, is the result of advances in optimization algorithms for problems that lack closed-form solutions. Techniques for characterizing and analyzing the system to determine when information is recoverable, however, still typically rely on closed-form methods such as the singular value decomposition or a filter cutoff estimate. In this letter we propose optimization approaches to broaden the scope of system characterization. We start by deriving conditions under which each unknown element of a system admits a unique solution, subject to a broad class of prior knowledge. With this approach we can pose a convex optimization problem to find “how unique” each element of the solution is, which may be viewed as a generalization of resolution that incorporates prior knowledge. We find that the result varies with the unknown vector itself, i.e., it is data-dependent, as when the sparsity of the solution improves the chance it can be uniquely reconstructed. The approach can be used to analyze systems on a case-by-case basis, estimate the amount of important information present in the data, and quantitatively understand the degree to which the regularized solution may be trusted.

References

  1. Backus, G., Gilbert, F.: The resolving power of gross earth data. Geophys. J. R. Astron. Soc. 16(2), 169–205 (1968)

  2. Becker, S.R., Candès, E.J., Grant, M.C.: Templates for convex cone problems with applications to sparse signal recovery. Math. Program. Comput. 3(3), 165–218 (2011)

  3. Bertero, M., De Mol, C., Pike, E.R.: Linear inverse problems with discrete data. I. General formulation and singular system analysis. Inverse Probl. 1(4), 301 (1985)

  4. Boyd, S.P., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  5. Bruckstein, A.M., Elad, M., Zibulevsky, M.: On the uniqueness of non-negative sparse and redundant representations. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2008), pp. 5145–5148 (2008)

  6. Candès, E.J.: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 346(9–10), 589–592 (2008)

  7. Candès, E.J., Fernandez-Granda, C.: Towards a mathematical theory of super-resolution. Commun. Pure Appl. Math. 67, 906–956 (2014)

  8. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43(1), 129–159 (2001)

  9. Dantzig, G.: Linear Programming and Extensions. Princeton University Press, Princeton (1998)

  10. Dillon, K., Fainman, Y.: Bounding pixels in computational imaging. Appl. Opt. 52(10), D55–D63 (2013)

  11. Donoho, D.L.: Neighborly polytopes and sparse solutions of underdetermined linear equations. Technical Report, Stanford University (2005)

  12. Donoho, D.L., Elad, M.: Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc. Natl. Acad. Sci. 100(5), 2197–2202 (2003)

  13. Donoho, D.L., Tanner, J.: Neighborliness of randomly projected simplices in high dimensions. Proc. Natl. Acad. Sci. USA 102(27), 9452–9457 (2005)

  14. Donoho, D.L., Tanner, J.: Sparse nonnegative solution of underdetermined linear equations by linear programming. Proc. Natl. Acad. Sci. USA 102(27), 9446–9451 (2005)

  15. Donoho, D.L., Tanner, J.: Counting the faces of randomly-projected hypercubes and orthants, with applications. Discrete Comput. Geom. 43(3), 522–541 (2010)

  16. Gill, P.E., Murray, W., Wright, M.H.: Numerical Linear Algebra and Optimization. Addison-Wesley, Boston (1991)

  17. Grant, M., Boyd, S.: Graph implementations for nonsmooth convex programs. In: Blondel, V., Boyd, S., Kimura, H. (eds.) Recent Advances in Learning and Control. Lecture Notes in Control and Information Sciences, vol. 371, pp. 95–110. Springer, Berlin/Heidelberg (2008)

  18. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 2.0 beta. http://cvxr.com/cvx (2013)

  19. Petra, S., Schröder, A., Schnörr, C.: 3D tomography from few projections in experimental fluid dynamics. In: Nitsche, W., Dobriloff, C. (eds.) Imaging Measurement Methods for Flow Analysis, Notes on Numerical Fluid Mechanics and Multidisciplinary Design, vol. 106, pp. 63–72. Springer, Berlin/Heidelberg (2009)

  20. Press, W.H.: Numerical Recipes: The Art of Scientific Computing, 3rd edn. Cambridge University Press, Cambridge (2007)

  21. Stark, P.B.: Generalizing resolution. Inverse Probl. 24(3), 034014 (2008)

  22. Tillmann, A., Pfetsch, M.: The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory 60(2), 1248–1259 (2014)

  23. Willett, R.M., Marcia, R.F., Nichols, J.M.: Compressed sensing for practical optical imaging systems: a tutorial. Opt. Eng. 50(7), 072601 (2011)

Author information

Correspondence to Keith Dillon.

Appendix

In this appendix we describe how a selection of variations on prior knowledge can be formulated as linear inequality constraints. Again, the classical case with no prior knowledge is based on the solution set \(F_{EC}\), obtained with \(\mathbf D=\mathbf 0\) and \(\mathbf d=\mathbf 0\):

$$\begin{aligned} F_\mathrm{{EC}} = \{\mathbf x \in \mathbb {R}^n | \mathbf A \mathbf x = \mathbf b \}. \end{aligned}$$
(10)

Application of our bounds testing problem with the feasible set \(\mathbf x \in F_{EC}\) forms an equality-constrained linear program [16], for which optimality conditions give the row space condition \(\mathbf A^T \mathbf y = \mathbf e_k\).
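For concreteness, the bound test can be run with an off-the-shelf LP solver. The following is a minimal sketch using SciPy's linprog, assuming the feasible set takes the general mixed form \(\{\mathbf x \,|\, \mathbf A \mathbf x = \mathbf b, \; \mathbf D \mathbf x \ge \mathbf d\}\) of Eq. (2) and that both programs are feasible and bounded; the function name element_bounds and its interface are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def element_bounds(A, b, k, D=None, d=None):
    """Bound-test element k of {x | A x = b, D x >= d} by solving two LPs,
    minimizing and maximizing x_k over the feasible set; the element admits
    a unique value exactly when the two bounds coincide."""
    n = A.shape[1]
    c = np.zeros(n)
    c[k] = 1.0  # objective e_k^T x
    # linprog expects A_ub x <= b_ub, so rewrite D x >= d as -D x <= -d.
    A_ub = -D if D is not None else None
    b_ub = -d if d is not None else None
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A, b_eq=b,
                 bounds=(None, None), method="highs")
    hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A, b_eq=b,
                 bounds=(None, None), method="highs")
    return lo.fun, -hi.fun
```

Note that bounds=(None, None) is needed because linprog otherwise defaults to non-negative variables.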

Non-negativity results in the solution set

$$\begin{aligned} F_{NN} = \{\mathbf x \in \mathbb {R}^n | \mathbf A \mathbf x = \mathbf b , \mathbf x \ge \mathbf 0 \}. \end{aligned}$$
(11)

This can be implemented in our system with the simple definitions \(\mathbf D = \mathbf I\) and \(\mathbf d = \mathbf 0\), i.e., the identity matrix and a vector of zeros; a toy example is sketched below.
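Continuing the earlier sketch, a hypothetical toy system illustrates the non-negative case:

```python
import numpy as np

# Toy underdetermined system: one equation, two unknowns, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
n = A.shape[1]

# Non-negativity as D = I, d = 0, per Eq. (11).
lower, upper = element_bounds(A, b, k=0, D=np.eye(n), d=np.zeros(n))
print(lower, upper)  # 0.0 1.0: element 0 is not uniquely determined
```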

\(\ell _1\) regularization can be formulated as a case of non-negativity, which can be used to determine whether we have a unique optimal solution to the Basis Pursuit problem [8],

$$\begin{aligned} \alpha = \underset{\mathbf x}{\min } \; \Vert \mathbf x \Vert _1 \quad \text {s.t.} \quad \mathbf A \mathbf x = \mathbf b. \end{aligned}$$
(12)

This can be tested by analyzing the uniqueness of the solutions in the following set,

$$\begin{aligned} F_{BP}&= \{\mathbf x \in \mathbb {R}^n | \mathbf A \mathbf x = \mathbf b , \, \Vert \mathbf x \Vert _1 = \alpha \} \nonumber \\&= \{\mathbf x \in \mathbb {R}^n | \mathbf A \mathbf x = \mathbf b , \, \Vert \mathbf x \Vert _1 \le \alpha \}. \end{aligned}$$
(13)

This is equivalent to the following non-negative system,

$$\begin{aligned} F_{NN} = \{{\hat{\mathbf {x}}} \in \mathbb {R}^{2n} | {\hat{\mathbf {A}}} {\hat{\mathbf {x}}} = {\hat{\mathbf {b}}} , {\hat{\mathbf {x}}} \ge \mathbf 0 \}. \end{aligned}$$
(14)

with the definitions

$$\begin{aligned} {\hat{\mathbf {A}}} = \begin{pmatrix} \mathbf A & -\mathbf A \\ \mathbf 1^T & \mathbf 1^T \end{pmatrix}, \;\; {\hat{\mathbf {b}}} = \begin{pmatrix} \mathbf b \\ \alpha \end{pmatrix}. \end{aligned}$$
(15)

This can be seen by defining \(\mathbf x = {\hat{\mathbf {x}}}_{(1)} - {\hat{\mathbf {x}}}_{(2)}\), where \({\hat{\mathbf {x}}}^T =\begin{pmatrix}{\hat{\mathbf {x}}}_{(1)}^T, {\hat{\mathbf {x}}}_{(2)}^T\end{pmatrix}\) and \({\hat{\mathbf {x}}}_{(1)} \ge \mathbf 0\), \({\hat{\mathbf {x}}}_{(2)}\ge \mathbf 0\). We relate bounds found using the feasible set of Eq. (14) to the bounds for the set of Eq. (13) by noting that at the minimum, where we get \(\alpha \) as the optimal value for Eq. (12), \({\hat{\mathbf {x}}}_{(1)}\) and \({\hat{\mathbf {x}}}_{(2)}\) are complementary, i.e., their supports do not overlap. If they were not, we could cancel the overlap to reduce the minimum of \(\Vert \mathbf x \Vert _1 = \mathbf 1^T ({\hat{\mathbf {x}}}_{(1)} + {\hat{\mathbf {x}}}_{(2)})\) further.
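A sketch of this two-stage test, under the same assumptions as the earlier snippet (names illustrative; in practice \(\alpha \) should be inflated by a small tolerance to guard against solver round-off):

```python
import numpy as np
from scipy.optimize import linprog

def bp_element_bounds(A, b, k):
    """Bound-test element k over the basis pursuit solution set of Eq. (13),
    via the non-negative lifting of Eqs. (14)-(15)."""
    m, n = A.shape
    A_split = np.hstack([A, -A])          # x = x_hat_(1) - x_hat_(2)
    # Stage 1: alpha = min ||x||_1 subject to A x = b, i.e., Eq. (12).
    res = linprog(np.ones(2 * n), A_eq=A_split, b_eq=b,
                  bounds=(0, None), method="highs")
    alpha = res.fun
    # Stage 2: append the l1-norm row of Eq. (15) and bound-test x_k.
    A_hat = np.vstack([A_split, np.ones(2 * n)])
    b_hat = np.concatenate([b, [alpha]])
    c = np.zeros(2 * n)
    c[k], c[n + k] = 1.0, -1.0            # e_k^T x in the lifted variables
    lo = linprog(c, A_eq=A_hat, b_eq=b_hat, bounds=(0, None), method="highs")
    hi = linprog(-c, A_eq=A_hat, b_eq=b_hat, bounds=(0, None), method="highs")
    return lo.fun, -hi.fun
```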

Box constraints define the following set,

$$\begin{aligned} F_{BOX} = \{\mathbf x \in \mathbb {R}^n \; | \; \mathbf A \mathbf x = \mathbf b , \; \mathbf d_{min} \le \mathbf x \le \mathbf d_{max} \}. \end{aligned}$$
(16)

Here \(\mathbf d_{min}\) and \(\mathbf d_{max}\) are vectors defining the box. We can formulate this as Eq. (2) with the definitions

$$\begin{aligned} \mathbf D = \begin{pmatrix} +\mathbf I \\ -\mathbf I \end{pmatrix}, \;\; \mathbf d = \begin{pmatrix} +\mathbf d_{min} \\ -\mathbf d_{max} \end{pmatrix}. \end{aligned}$$
(17)
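Constructing the stacked constraints of Eq. (17) is mechanical; a minimal sketch (helper name ours):

```python
import numpy as np

def box_to_Dd(d_min, d_max):
    """Stack box constraints d_min <= x <= d_max as a single system D x >= d."""
    d_min = np.asarray(d_min, dtype=float)
    d_max = np.asarray(d_max, dtype=float)
    n = d_min.size
    D = np.vstack([np.eye(n), -np.eye(n)])   # +I x >= d_min, -I x >= -d_max
    d = np.concatenate([d_min, -d_max])
    return D, d
```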

We can view box constraints as a more general version of regularization with the infinity norm, which corresponds to the special case \(\mathbf d_{min} = -d \mathbf 1\) and \(\mathbf d_{max} = d \mathbf 1\), e.g.,

$$\begin{aligned} F_{BOX} = \{\mathbf x \in \mathbb {R}^n | \mathbf A \mathbf x = \mathbf b , \, \Vert \mathbf x \Vert _\infty \le d \}. \end{aligned}$$
(18)

Denoising can be viewed as a dual to regularization: rather than requiring the solution to be regular, we require the error in the linear system to be regular, as in the following,

$$\begin{aligned} F_{DN} = \{\mathbf x \in \mathbb {R}^n \; | \; \Vert \mathbf A \mathbf x - \mathbf b \Vert \le \sigma \}. \end{aligned}$$
(19)

We can also form a denoised version of the non-negativity case using the infinity norm as follows,

$$\begin{aligned} F_{NND} = \{\mathbf x \in \mathbb {R}^n \; | \; \Vert \mathbf A \mathbf x - \mathbf b \Vert _\infty \le \sigma , \; \mathbf x \ge \mathbf 0 \}. \end{aligned}$$
(20)

This can be formulated as mixed constraints with no equality-constraint term (i.e., the “\(\mathbf A\)” and “\(\mathbf b\)” of the original linear system are set to zero), and with

$$\begin{aligned} \mathbf D =\begin{pmatrix} -\mathbf A \\ \mathbf A \\ \mathbf I \end{pmatrix}, \;\; \mathbf d = \begin{pmatrix} -\mathbf b - \sigma \mathbf 1 \\ \mathbf b - \sigma \mathbf 1 \\ \mathbf 0 \end{pmatrix}. \end{aligned}$$
(21)
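A corresponding sketch of this construction (helper name ours); the result can be passed directly to an LP solver with the equality block omitted, e.g., linprog(c, A_ub=-D, b_ub=-d, bounds=(None, None)):

```python
import numpy as np

def nnd_to_Dd(A, b, sigma):
    """Stack Eq. (21): ||A x - b||_inf <= sigma and x >= 0 as D x >= d."""
    n = A.shape[1]
    D = np.vstack([-A, A, np.eye(n)])
    d = np.concatenate([-b - sigma, b - sigma, np.zeros(n)])
    return D, d
```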

Cite this article

Dillon, K., Fainman, Y. Element-wise uniqueness, prior knowledge, and data-dependent resolution. SIViP 11, 41–48 (2017). https://doi.org/10.1007/s11760-016-0889-2
