Learning nonlinear input–output maps with dissipative quantum systems

  • Jiayin Chen
  • Hendra I. Nurdin

Abstract

In this paper, we develop a theory of learning nonlinear input–output maps with fading memory by dissipative quantum systems, as a quantum counterpart of the theory of approximating such maps using classical dynamical systems. The theory identifies the properties required for a class of dissipative quantum systems to be universal, in that any input–output map with fading memory can be approximated arbitrarily closely by an element of this class. We then introduce an example class of dissipative quantum systems that is provably universal. Numerical experiments illustrate that with a small number of qubits, this class can achieve comparable performance to classical learning schemes with a large number of tunable parameters. Further numerical analysis suggests that the exponentially increasing Hilbert space presents a potential resource for dissipative quantum systems to surpass classical learning schemes for input–output maps.
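The abstract builds on the reservoir-computing paradigm, in which a fixed dynamical system with fading memory is driven by the input and only a linear readout is trained. As a point of reference for the classical learning schemes mentioned above, the sketch below implements a minimal classical echo state network (not the paper's quantum scheme); the reservoir size, spectral radius, and the example target map are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 50, 500

# Random recurrent weights, rescaled so the spectral radius is below 1;
# this contractivity gives the reservoir the fading-memory (echo state) property.
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=n_res)

# Illustrative nonlinear target map with fading memory: y[t] = u[t] * u[t-1].
u = rng.uniform(-1, 1, size=T)
y = np.array([u[t] * u[t - 1] if t > 0 else 0.0 for t in range(T)])

# Drive the fixed (untrained) reservoir with the input sequence.
x = np.zeros((T, n_res))
for t in range(1, T):
    x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])

# Only the linear readout is trained, via least squares
# (the first 100 steps are discarded as a washout period).
w_out, *_ = np.linalg.lstsq(x[100:], y[100:], rcond=None)
mse = np.mean((x[100:] @ w_out - y[100:]) ** 2)
print(f"readout MSE: {mse:.4f}")
```

In the quantum setting studied in the paper, the role of the fixed reservoir is played by a dissipative quantum system, whose exponentially large Hilbert space supplies the feature map, while the trained part remains a simple linear readout.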

Keywords

Machine learning with quantum systems · Dissipative quantum systems · Universality property · Reservoir computing · Nonlinear input–output maps · Fading memory maps · Nonlinear time series · Stone–Weierstrass theorem

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Electrical Engineering and Telecommunications, The University of New South Wales (UNSW), Sydney, Australia