
On DC Based Methods for Phase Retrieval

  • Conference paper
  • First Online:
Approximation Theory XVI (AT 2019)

Part of the book series: Springer Proceedings in Mathematics & Statistics ((PROMS,volume 336))


Abstract

In this paper, we develop a new computational approach, based on minimizing the difference of two convex functions (DC), to solve a broad class of phase retrieval problems. The approach splits the standard nonlinear least squares objective associated with the phase retrieval problem into the difference of two convex functions and then solves a sequence of convex minimization subproblems. Each subproblem is solved by the Nesterov accelerated gradient descent algorithm or the Barzilai-Borwein (BB) algorithm. In addition, we apply the alternating projection method to improve the initial guess from [20], bringing it much closer to the true solution. In the setting of sparse phase retrieval, a standard ℓ1 norm term is added to promote sparsity, and the subproblem is solved approximately by a proximal gradient method with the shrinkage-thresholding technique. Furthermore, a modified Attouch-Peypouquet technique is used to accelerate the iterative computation, which leads to algorithms more effective than the Wirtinger flow (WF) algorithm, the Gauss-Newton (GN) algorithm, etc. Indeed, the DC based algorithms are able to recover the solution with high probability when the number of measurements m ≈ 2n in the real case and m ≈ 3n in the complex case, where n is the dimension of the true solution. When m ≈ n, the ℓ1-DC based algorithm is able to recover sparse signals with high probability. Our main results show that the DC based methods converge to a critical point linearly. Moreover, our convergence analysis is deterministic, whereas the analyses of the Wirtinger flow (WF) algorithm and its variants, the Gauss-Newton (GN) algorithm, and the trust region algorithm are probabilistic. Finally, the paper discusses the existence and the number of distinct solutions of the phase retrieval problem.
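As a hedged illustration of the DC idea for the real case (our own minimal sketch, not the authors' implementation): the least squares objective F(x) = Σℓ((aℓ⊤x)² − bℓ)² splits as F1(x) − F2(x) with convex F1(x) = Σℓ(aℓ⊤x)⁴ + Σℓbℓ² and convex F2(x) = 2Σℓbℓ(aℓ⊤x)² (the function F2 analyzed in the appendix), and each outer step minimizes the convex surrogate F1(x) − ⟨∇F2(x_k), x⟩. The sketch below uses plain gradient descent for the inner convex subproblem instead of the paper's Nesterov/BB solvers, and all parameter choices (step size, iteration counts, warm start) are illustrative assumptions.

```python
import numpy as np

def dc_phase_retrieval(A, b, x0, outer=50, inner=40, lr=1e-2):
    """Minimal DC sketch for real phase retrieval (illustrative only).

    F(x) = mean(((A x)^2 - b)^2) = F1(x) - F2(x), with
      F1(x) = mean((A x)^4 + b^2)    (convex),
      F2(x) = 2 * mean(b * (A x)^2)  (convex since b >= 0).
    Each outer step minimizes the convex surrogate
    F1(x) - <grad F2(x_k), x> by plain gradient descent.
    """
    m = A.shape[0]
    x = x0.copy()
    for _ in range(outer):
        g2 = 4.0 / m * A.T @ (b * (A @ x))       # grad F2 at the current iterate
        y = x.copy()
        for _ in range(inner):
            g1 = 4.0 / m * A.T @ ((A @ y) ** 3)  # grad F1
            y -= lr * (g1 - g2)                  # descend the surrogate
        x = y
    return x

def objective(A, b, x):
    return np.mean(((A @ x) ** 2 - b) ** 2)

rng = np.random.default_rng(0)
n, m = 5, 40
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
b = (A @ x_true) ** 2                          # phaseless measurements
x0 = x_true + 0.1 * rng.standard_normal(n)     # illustrative warm start near the truth
x_hat = dc_phase_retrieval(A, b, x0)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
```

Note the sign ambiguity of the real problem: x and −x produce the same measurements, so the recovery error is measured up to a global sign.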


References

  1. Attouch, H., Bolte, J.: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program. 116, 5–16 (2009)

  2. Attouch, H., Peypouquet, J.: The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k². SIAM J. Optim. 26, 1824–1834 (2016)

  3. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)

  4. Balan, R., Casazza, P., Edidin, D.: On signal reconstruction without phase. Appl. Comput. Harmon. Anal. 20(3), 345–356 (2006)

  5. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8(1), 141–148 (1988)

  6. Beck, A., Eldar, Y.C.: Sparsity constrained nonlinear optimization: optimality conditions and algorithms. SIAM J. Optim. 23(3), 1480–1509 (2013)

  7. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)

  8. Bostan, E., Soltanolkotabi, M., Ren, D., Waller, L.: Accelerated Wirtinger flow for multiplexed Fourier ptychographic microscopy. In: 2018 25th IEEE International Conference on Image Processing, pp. 3823–3827 (2018)

  9. Cai, T.T., Li, X., Ma, Z.: Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. Ann. Stat. 44(5), 2221–2251 (2016)

  10. Candes, E.J., Eldar, Y.C., Strohmer, T., Voroninski, V.: Phase retrieval via matrix completion. SIAM Rev. 57(2), 225–251 (2015)

  11. Candes, E.J., Li, X., Soltanolkotabi, M.: Phase retrieval from coded diffraction patterns. Appl. Comput. Harmon. Anal. 39(2), 277–299 (2015)

  12. Candes, E.J., Li, X., Soltanolkotabi, M.: Phase retrieval via Wirtinger flow: theory and algorithms. IEEE Trans. Inf. Theory 61(4), 1985–2007 (2015)

  13. Chen, Y., Candes, E.J.: Solving random quadratic systems of equations is nearly as easy as solving linear systems. Commun. Pure Appl. Math. 70(5), 822–883 (2017)

  14. Conca, A., Edidin, D., Hering, M., Vinzant, C.: An algebraic characterization of injectivity in phase retrieval. Appl. Comput. Harmon. Anal. 38(2), 346–356 (2015)

  15. Dai, Y.H., Liao, L.Z.: R-linear convergence of the Barzilai and Borwein gradient method. IMA J. Numer. Anal. 22, 1–10 (2002)

  16. Dainty, C., Fienup, J.R.: Phase retrieval and image reconstruction for astronomy. In: Image Recovery: Theory and Application, pp. 231–275. Academic, New York (1987)

  17. Deutsch, F.R.: Best Approximation in Inner Product Spaces. Springer Science and Business Media, New York (2012)

  18. Fienup, J.R.: Phase retrieval algorithms: a comparison. Appl. Opt. 21(15), 2758–2769 (1982)

  19. Fulton, W.: Intersection Theory, vol. 2. Springer Science and Business Media, New York (2013)

  20. Gao, B., Xu, Z.: Phaseless recovery using the Gauss–Newton method. IEEE Trans. Signal Process. 65(22), 5885–5896 (2017)

  21. Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: The 30th International Conference on Machine Learning, pp. 37–45 (2013)

  22. Harris, J.: Algebraic Geometry: A First Course, vol. 133. Springer Science and Business Media, New York (2013)

  23. Hartshorne, R.: Algebraic Geometry, vol. 52. Springer Science and Business Media, New York (2013)

  24. Huang, M., Xu, Z.: Phase retrieval from the norms of affine transformations (2018). arXiv:1805.07899

  25. Jaganathan, K., Eldar, Y.C., Hassibi, B.: Phase retrieval: an overview of recent developments (2015). arXiv:1510.07713v1

  26. Kurdyka, K.: On gradients of functions definable in o-minimal structures. Ann. Inst. Fourier 48(3), 769–784 (1998)

  27. Łojasiewicz, S.: Une propriété topologique des sous-ensembles analytiques réels. Les équations aux dérivées partielles 117, 87–89 (1963)

  28. Miao, J., Ishikawa, T., Johnson, B., Anderson, E.H., Lai, B., Hodgson, K.O.: High resolution 3D X-ray diffraction microscopy. Phys. Rev. Lett. 89(8), 088303 (2002)

  29. Millane, R.P.: Phase retrieval in crystallography and optics. J. Opt. Soc. Am. A 7(3), 394–411 (1990)

  30. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course, vol. 87. Springer Science and Business Media, New York (2013)

  31. Robert, W.H.: Phase problem in crystallography. J. Opt. Soc. Am. A 10(5), 1046–1055 (1993)

  32. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323(6088), 533–536 (1986)

  33. Shafarevich, I.R.: Basic Algebraic Geometry 1. Springer, Berlin (2013)

  34. Shechtman, Y., Beck, A., Eldar, Y.C.: GESPAR: efficient phase retrieval of sparse signals. IEEE Trans. Signal Process. 62(4), 928–938 (2014)

  35. Shechtman, Y., Eldar, Y.C., Cohen, O., Chapman, H.N., Miao, J., Segev, M.: Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Process. Mag. 32(3), 87–109 (2015)

  36. Sun, J., Qu, Q., Wright, J.: A geometric analysis of phase retrieval. Found. Comput. Math. 18(5), 1131–1198 (2018)

  37. Tao, P.D.: The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Ann. Oper. Res. 133, 23–46 (2005)

  38. Vinzant, C.: A small frame and a certificate of its injectivity. In: 2015 International Conference on Sampling Theory and Applications, pp. 197–200 (2015)

  39. Wang, Y., Xu, Z.: Phase retrieval for sparse signals. Appl. Comput. Harmon. Anal. 37, 531–544 (2014)

  40. Wang, Y., Xu, Z.: Generalized phase retrieval: measurement number, matrix recovery and beyond. Appl. Comput. Harmon. Anal. 47(2), 423–446 (2019)

  41. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Comput. Optim. Appl. 69(2), 297–324 (2018)

  42. Wu, C., Li, C., Long, Q.: A DC programming approach for sensor network localization with uncertainties in anchor positions. J. Ind. Manag. Optim. 10(3), 817–826 (2014)

  43. Xu, Y., Yin, W.: A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imaging Sci. 6(3), 1758–1789 (2013)

  44. Zariski, O.: On the purity of the branch locus of algebraic functions. Proc. Natl. Acad. Sci. 44(8), 791–796 (1958)

  45. Zhang, H., Liang, Y.: Reshaped Wirtinger flow for solving quadratic systems of equations. In: Advances in Neural Information Processing Systems, pp. 2622–2630 (2016)

  46. Zheng, Y., Zheng, B.: A new modified Barzilai–Borwein gradient method for the quadratic minimization problem. J. Optim. Theory Appl. 172(1), 179–186 (2017)


Acknowledgements

The authors Ming-Jun Lai and Abraham Varghese are partly supported by the National Science Foundation under grant DMS-1521537.

Zhiqiang Xu was supported by NSFC grants (11422113, 91630203, 11331012) and by the National Basic Research Program of China (973 Program 2015CB856000).


Correspondence to Ming-Jun Lai.


Appendix

In this section we give a deterministic description of the minimizing function F as well as the strong convexity of F_2. We will show that at any global minimizer, the Hessian matrix of F is positive definite in the real case and positive semidefinite in the complex case. These results are used when we apply the KL inequality. For convenience, let \(A_\ell = {\mathbf {a}}_\ell \bar {\mathbf {a}}_\ell ^\top \) be the Hermitian matrix of rank one for ℓ = 1, ⋯ , m.

Definition 3

We say a_j, j = 1, ⋯ , m are a_0-generic if there exists a positive constant a_0 ∈ (0, 1) such that

$$\displaystyle \begin{aligned}\|({\mathbf{a}}_{j_1}^*\mathbf{y}, \ldots, {\mathbf{a}}_{j_n}^*\mathbf{y})\|\ge a_0 \|\mathbf{y}\|, \quad \forall \mathbf{y}\in \mathbb{C}^n \end{aligned}$$

holds for any 1 ≤ j_1 < j_2 < ⋯ < j_n ≤ m.

Theorem 8

Let m ≥ n. Assume a_j, j = 1, ⋯ , m are a_0-generic for some constant a_0. If there exist n nonzero elements among the measurements b_j, j = 1, ⋯ , m, then for the phase retrieval problem with f(x) = |x|², the function F_2 is strongly convex.

Proof

Recall that \(F_2= 2\sum _{\ell=1}^m b_\ell f({\mathbf {a}}_\ell^\top \mathbf {x})\). Then the Hessian matrix of F_2 is

$$\displaystyle \begin{aligned}H_{F_2}(\mathbf{x})= 2\sum_{\ell=1}^m b_\ell f''({\mathbf{a}}_\ell^\top \mathbf{x}) {\mathbf{a}}_\ell{\mathbf{a}}_\ell^\top .\end{aligned}$$

Note that f″(x) = 2. Thus we have

$$\displaystyle \begin{aligned} H_{F_2}(\mathbf{x}) = 4 \sum_{\ell=1}^m b_\ell A_\ell. \end{aligned}$$

Let \(b_0=\min \{b_{\ell}: b_\ell \neq 0\}\) and let j_1 < ⋯ < j_n index n of the nonzero measurements. Then

$$\displaystyle \begin{aligned}{\mathbf{y}}^\top H_{F_2}(\mathbf{x})\mathbf{y} \ge 4 b_0 \|({\mathbf{a}}_{j_1}^*\mathbf{y}, \ldots, {\mathbf{a}}_{j_n}^*\mathbf{y})\|{}^2\ge 4b_0 a_0^2 \|\mathbf{y}\|{}^2.\end{aligned}$$

Thus, F 2 is strongly convex. □
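As a quick numerical sanity check (our own illustration, not part of the proof), the Hessian of F_2(x) = 2Σℓ bℓ(aℓ⊤x)² is the constant matrix 4Σℓ bℓ aℓaℓ⊤, and it is positive definite for generic measurement vectors once m ≥ n and the bℓ are positive. The dimensions and random data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 12
A = rng.standard_normal((m, n))   # rows are the measurement vectors a_l
b = rng.random(m) + 0.1           # strictly positive measurements

# Hessian of F2(x) = 2 * sum_l b_l (a_l^T x)^2 is constant: 4 * sum_l b_l a_l a_l^T
H = 4.0 * A.T @ (b[:, None] * A)

def F2(x):
    return 2.0 * np.sum(b * (A @ x) ** 2)

# Central-difference Hessian of F2 (exact up to rounding, since F2 is quadratic).
x = rng.standard_normal(n)
h = 1e-5
H_fd = np.empty((n, n))
for i in range(n):
    for j in range(n):
        ei = np.zeros(n); ei[i] = h
        ej = np.zeros(n); ej[j] = h
        H_fd[i, j] = (F2(x + ei + ej) - F2(x + ei - ej)
                      - F2(x - ei + ej) + F2(x - ei - ej)) / (4 * h * h)

lam_min = np.linalg.eigvalsh(H)[0]   # smallest eigenvalue of the Hessian
```

A positive smallest eigenvalue confirms strong convexity of F_2 for this instance.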

Theorem 9

Let H_F(x) be the Hessian matrix of the function F(x) and let x* be a global minimizer of (2). Suppose that a_j, j = 1, ⋯ , m are a_0-generic. Then H_F(x*) is positive definite, and hence H_F is positive definite in a neighborhood of x*.

Proof

Recall that \(A_\ell ={\mathbf {a}}_\ell \bar {\mathbf {a}}_\ell ^\top \) for ℓ = 1, ⋯ , m. It is easy to see

$$\displaystyle \begin{aligned}\nabla F(\mathbf{x})= 2\sum_{\ell=1}^m ({\mathbf{x}}^\top A_\ell\mathbf{x}- b_\ell) A_\ell \mathbf{x}\end{aligned}$$

and the entries h_ij of the Hessian H_F(x) are

$$\displaystyle \begin{aligned}h_{ij} = \frac{\partial^2 F(\mathbf{x})}{\partial x_i \partial x_j} = 2\sum_{\ell=1}^m ({\mathbf{x}}^\top A_\ell\mathbf{x}- b_\ell) a_{ij}(\ell) + 4 \sum_{\ell=1}^m \Big(\sum_{p=1}^n a_{i,p}(\ell) x_p\Big) \Big(\sum_{q=1}^n a_{j,q}(\ell)x_q\Big), \end{aligned}$$

where \(A_\ell = [a_{ij}(\ell )]_{i,j=1}^n\). Since (x*)⊤A_ℓ x* = b_ℓ for all ℓ = 1, ⋯ , m, the first summation term of h_ij above is zero at x*. Let M(y) = y⊤H_F(x*)y be a quadratic function of y. Then we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} M(\mathbf{y}) &\displaystyle =&\displaystyle 4 \sum_{\ell=1}^m {\mathbf{y}}^\top A_\ell {\mathbf{x}}^* ({\mathbf{x}}^*)^\top A_\ell \mathbf{y} = 4\sum_{\ell=1}^m |{\mathbf{y}}^\top A_\ell {\mathbf{x}}^*|{}^2\\ &\displaystyle =&\displaystyle 4 \sum_{\ell=1}^m |{\mathbf{y}}^\top {\mathbf{a}}_\ell|{}^2 |\bar{\mathbf{a}}_\ell^\top {\mathbf{x}}^*|{}^2 \ge 4a_0 \|{\mathbf{x}}^*\|{}^2 \|\mathbf{y}\|{}^2, \end{array} \end{aligned} $$

where the inequality follows from the fact that a_j, j = 1, ⋯ , m are a_0-generic. It implies that H_F(x*) is positive definite. □
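The identity underlying the proof can be confirmed numerically: at a global minimizer the residual term of h_ij vanishes and, with the normalization F(x) = ½Σℓ(x⊤A_ℓx − b_ℓ)² matching the gradient ∇F(x) = 2Σℓ(x⊤A_ℓx − b_ℓ)A_ℓx above, the Hessian reduces to H_F(x*) = 4Σℓ(aℓ⊤x*)² aℓaℓ⊤ in the real case. The check below is our own hedged illustration with arbitrary dimensions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 16
A = rng.standard_normal((m, n))
x_star = rng.standard_normal(n)
b = (A @ x_star) ** 2                 # exact measurements: x_star is a global minimizer

def F(x):
    # F(x) = 1/2 sum_l ((a_l^T x)^2 - b_l)^2, so grad F = 2 sum_l (...) A_l x
    return 0.5 * np.sum(((A @ x) ** 2 - b) ** 2)

# Closed form at the minimizer: H_F(x*) = 4 * sum_l (a_l^T x*)^2 a_l a_l^T
w = (A @ x_star) ** 2
H = 4.0 * A.T @ (w[:, None] * A)

# Central-difference Hessian of F at x_star for comparison.
h = 1e-4
H_fd = np.empty((n, n))
for i in range(n):
    for j in range(n):
        ei = np.zeros(n); ei[i] = h
        ej = np.zeros(n); ej[j] = h
        H_fd[i, j] = (F(x_star + ei + ej) - F(x_star + ei - ej)
                      - F(x_star - ei + ej) + F(x_star - ei - ej)) / (4 * h * h)

lam_min = np.linalg.eigvalsh(H)[0]
```

For generic Gaussian rows with m ≥ n the weights (aℓ⊤x*)² are all positive, so the smallest eigenvalue is positive, as Theorem 9 asserts.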

Next, we show that the Hessian is positive semidefinite at the global minimizer in the complex case. To this end, we first introduce some notation. Write a_ℓ = a_ℓ + i c_ℓ for ℓ = 1, ⋯ , m. For z = x + iy, the global minimizer z* satisfies \(|{\mathbf {a}}_\ell ^\top {\mathbf {z}}^*|^2 = b_\ell \). Writing \(f_\ell (\mathbf {x}, \mathbf {y}) = |{\mathbf {a}}_\ell ^\top \mathbf {z}|{ }^2 -b_\ell =(a_\ell ^\top \mathbf {x} - c_\ell ^\top \mathbf {y})^2 + (c_\ell ^\top \mathbf {x}+ a_\ell ^\top \mathbf {y})^2 -b_\ell \), we consider

$$\displaystyle \begin{aligned} f(\mathbf{x}, \mathbf{y}) = \frac{1}{m} \sum_{\ell=1}^m f_\ell^2. \end{aligned}$$

The gradient of f can be easily computed as follows: ∇f = [∇x f, ∇y f] with

$$\displaystyle \begin{aligned} \nabla_{\mathbf{x}} f(\mathbf{x}, \mathbf{y}) = \frac{4}{m}\sum_{\ell=1}^m f_\ell(\mathbf{x}, \mathbf{y}) [(a_\ell^\top \mathbf{x}- c_\ell^\top \mathbf{y}) a_\ell + (c_\ell^\top \mathbf{x} + a_\ell^\top \mathbf{y}) c_\ell] \end{aligned}$$

and

$$\displaystyle \begin{aligned} \nabla_{\mathbf{y}} f(\mathbf{x}, \mathbf{y}) = \frac{4}{m}\sum_{\ell=1}^m f_\ell(\mathbf{x}, \mathbf{y}) [(a_\ell^\top \mathbf{x}- c_\ell^\top \mathbf{y}) (-c_\ell) + (c_\ell^\top \mathbf{x} + a_\ell^\top \mathbf{y}) a_\ell]. \end{aligned}$$

Furthermore, the Hessian of f is given by

$$\displaystyle \begin{aligned} H_f (\mathbf{x}, \mathbf{y}) = \begin{bmatrix} \nabla_{\mathbf{x}} \nabla_{\mathbf{x}} f(\mathbf{x}, \mathbf{y}) & \nabla_{\mathbf{x}} \nabla_{\mathbf{y}} f(\mathbf{x}, \mathbf{y}) \\ \nabla_{\mathbf{y}} \nabla_{\mathbf{x}} f(\mathbf{x}, \mathbf{y}) & \nabla_{\mathbf{y}}\nabla_{\mathbf{y}} f(\mathbf{x}, \mathbf{y}) \end{bmatrix}, \end{aligned}$$

where the terms ∇x∇x f(x, y), ⋯ , ∇y∇y f(x, y) are given below:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \nabla_{\mathbf{x}}\nabla_{\mathbf{x}} f(\mathbf{x}, \mathbf{y}) = \frac{4}{m}\sum_{\ell=1}^m f_\ell(\mathbf{x}, \mathbf{y}) [a_\ell a_\ell^\top + c_\ell c_\ell^\top] \\ &\displaystyle &\displaystyle + \frac{8}{m}\sum_{\ell=1}^m [(a_\ell^\top \mathbf{x}- c_\ell^\top \mathbf{y}) a_\ell + (c_\ell^\top \mathbf{x} + a_\ell^\top \mathbf{y}) c_\ell] [(a_\ell^\top \mathbf{x}- c_\ell^\top \mathbf{y}) a_\ell^\top + (c_\ell^\top \mathbf{x} + a_\ell^\top \mathbf{y}) c_\ell^\top] \end{array} \end{aligned} $$

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \nabla_{\mathbf{x}}\nabla_{\mathbf{y}} f(\mathbf{x}, \mathbf{y}) = \frac{4}{m}\sum_{\ell=1}^m f_\ell(\mathbf{x}, \mathbf{y}) [a_\ell (-c_\ell)^\top + c_\ell a_\ell^\top] \\ &\displaystyle &\displaystyle + \frac{8}{m}\sum_{\ell=1}^m [(a_\ell^\top \mathbf{x}- c_\ell^\top \mathbf{y}) a_\ell + (c_\ell^\top \mathbf{x} + a_\ell^\top \mathbf{y}) c_\ell] [(a_\ell^\top \mathbf{x}- c_\ell^\top \mathbf{y}) (-c_\ell)^\top + (c_\ell^\top \mathbf{x} + a_\ell^\top \mathbf{y}) a_\ell^\top]. \end{array} \end{aligned} $$

The terms ∇y∇x f(x, y) and ∇y∇y f(x, y) can be obtained similarly.

Theorem 10

For the phase retrieval problem in the complex case, the Hessian matrix H_f(x*, y*) at any global minimizer z* := x* + iy* satisfies H_f(x*, y*) ≥ 0, i.e., it is positive semidefinite. Furthermore, the quadratic form of H_f(x*, y*) vanishes along the direction [−(y*)⊤, (x*)⊤]⊤.

Proof

At the global minimizer z* = x* + iy*, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \nabla_{\mathbf{x}}\nabla_{\mathbf{x}} f({\mathbf{x}}^*, {\mathbf{y}}^*) = \frac{8}{m}\sum_{\ell=1}^m &\displaystyle &\displaystyle [(a_\ell^\top {\mathbf{x}}^*- c_\ell^\top {\mathbf{y}}^*) a_\ell + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) c_\ell] \times \\ &\displaystyle &\displaystyle [(a_\ell^\top {\mathbf{x}}^*- c_\ell^\top {\mathbf{y}}^*) a_\ell^\top + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) c_\ell^\top], \end{array} \end{aligned} $$
$$\displaystyle \begin{aligned} \begin{array}{rcl} \nabla_{\mathbf{x}}\nabla_{\mathbf{y}} f({\mathbf{x}}^*, {\mathbf{y}}^*) = \frac{8}{m}\sum_{\ell=1}^m &\displaystyle &\displaystyle [(a_\ell^\top {\mathbf{x}}^* - c_\ell^\top {\mathbf{y}}^*) a_\ell+ (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) c_\ell] \times \\ &\displaystyle &\displaystyle [(a_\ell^\top {\mathbf{x}}^* - c_\ell^\top {\mathbf{y}}^*) (-c_\ell)^\top + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) a_\ell^\top] \end{array} \end{aligned} $$

and similarly for the other two terms. It is easy to check that for any w = u + iv with \(\mathbf {u}, \mathbf {v}\in \mathbb {R}^n\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle [{\mathbf{u}}^\top \, {\mathbf{v}}^\top] H_f({\mathbf{x}}^*, {\mathbf{y}}^*) \left[\mathbf{u} \,; \mathbf{v}\right]\\ &\displaystyle =&\displaystyle \frac{8}{m}\sum_{\ell=1}^m [(a_\ell^\top {\mathbf{x}}^* - c_\ell^\top {\mathbf{y}}^*) a_\ell^\top \mathbf{u} + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) c_\ell^\top \mathbf{u} ]^2 \\ &\displaystyle &\displaystyle + \frac{8}{m}\sum_{\ell=1}^m [(a_\ell^\top {\mathbf{x}}^* - c_\ell^\top {\mathbf{y}}^*) (-c_\ell)^\top\mathbf{v} + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) a_\ell^\top\mathbf{v}]^2\\ &\displaystyle &\displaystyle + \frac{8}{m}\sum_{\ell=1}^m 2[(a_\ell^\top {\mathbf{x}}^* - c_\ell^\top {\mathbf{y}}^*) a_\ell^\top \mathbf{u} + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) c_\ell^\top \mathbf{u} ] \times \\ &\displaystyle &\displaystyle [(a_\ell^\top {\mathbf{x}}^* - c_\ell^\top {\mathbf{y}}^*) (-c_\ell)^\top\mathbf{v} + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) a_\ell^\top\mathbf{v}]\\ &\displaystyle =&\displaystyle \frac{8}{m}\sum_{\ell=1}^m [(a_\ell^\top {\mathbf{x}}^* - c_\ell^\top {\mathbf{y}}^*) a_\ell^\top \mathbf{u} + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) c_\ell^\top \mathbf{u} +(a_\ell^\top {\mathbf{x}}^* - c_\ell^\top {\mathbf{y}}^*) (-c_\ell)^\top\mathbf{v} \\ &\displaystyle &\displaystyle + (c_\ell^\top {\mathbf{x}}^* + a_\ell^\top {\mathbf{y}}^*) a_\ell^\top\mathbf{v}]^2\\ &\displaystyle \ge &\displaystyle 0. \end{array} \end{aligned} $$

It means that H_f(x*, y*) ≥ 0. Furthermore, if we choose u = −y* and v = x*, then it is easy to show that

$$\displaystyle \begin{aligned}{}[-({\mathbf{y}}^*)^\top \, ({\mathbf{x}}^*)^\top] H_f({\mathbf{x}}^*, {\mathbf{y}}^*) \left[-{\mathbf{y}}^* \,; {\mathbf{x}}^*\right]=0, \end{aligned}$$

which shows that the quadratic form of the Hessian H_f vanishes along this direction. □
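The structure in the proof admits a direct numerical check. Writing αℓ + iβℓ = aℓ⊤z* with aℓ = aℓ + icℓ and z* = x* + iy*, the quadratic form above says that at the minimizer the Hessian equals (8/m)Σℓ gℓgℓ⊤ with gℓ = [αℓaℓ + βℓcℓ; −αℓcℓ + βℓaℓ] ∈ ℝ^{2n}, which is positive semidefinite by construction and annihilates the phase direction [−y*; x*] (the tangent of the rotation e^{it}z*, which leaves all |aℓ⊤z|² unchanged). The following sketch, with illustrative random data of our own choosing, confirms both facts:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 10
a = rng.standard_normal((m, n))    # real parts of the measurement vectors
c = rng.standard_normal((m, n))    # imaginary parts
x_star = rng.standard_normal(n)
y_star = rng.standard_normal(n)

# alpha_l + i beta_l = a_l^T z* with a_l = a_l + i c_l and z* = x* + i y*
alpha = a @ x_star - c @ y_star
beta = c @ x_star + a @ y_star

# Hessian at the minimizer: (8/m) sum_l g_l g_l^T with
# g_l = [alpha_l a_l + beta_l c_l ; -alpha_l c_l + beta_l a_l] in R^{2n}
G = np.hstack([alpha[:, None] * a + beta[:, None] * c,
               -alpha[:, None] * c + beta[:, None] * a])   # rows are g_l
H = 8.0 / m * G.T @ G

lam = np.linalg.eigvalsh(H)                 # sorted eigenvalues
v = np.concatenate([-y_star, x_star])       # phase direction [-y*; x*]
qform = v @ H @ v                           # should vanish: g_l^T v = beta*alpha - alpha*beta = 0
```

Since each gℓ⊤v = βℓαℓ − αℓβℓ = 0 exactly, the quadratic form is zero up to rounding, and the smallest eigenvalue is zero: the Hessian is singular along the global phase ambiguity.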


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Huang, M., Lai, MJ., Varghese, A., Xu, Z. (2021). On DC Based Methods for Phase Retrieval. In: Fasshauer, G.E., Neamtu, M., Schumaker, L.L. (eds) Approximation Theory XVI. AT 2019. Springer Proceedings in Mathematics & Statistics, vol 336. Springer, Cham. https://doi.org/10.1007/978-3-030-57464-2_6

