
Linear Structure of Training Samples in Quantum Neural Network Applications

  • Conference paper
  • Published in: Service-Oriented Computing – ICSOC 2023 Workshops (ICSOC 2023)

Abstract

Quantum Neural Networks (QNNs) use sets of training samples supplied as quantum states to approximate unitary operators. Recent results show that the average quality, measured as the error of the approximation, depends on the number of available training samples and on their degree of entanglement. Furthermore, the linear structure of the training samples plays a vital role in determining the average quality of the trained QNNs. However, these results evaluate the quality of QNNs independently of the classical pre- and post-processing steps that are required in real-world applications. It is not fully understood how the linear structure of the training samples affects the quality of QNNs once these classical steps are taken into account. Therefore, in this work, we experimentally evaluate QNNs that approximate an operator predicting the outputs of a function from automotive engineering. We find that the linear structure of the training samples also influences the quality of QNNs in this real-world use case.
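The approximation error mentioned above can be illustrated with a toy example: a common training cost in unitary-learning setups is the mean infidelity between the target unitary's and the hypothesis's action on the training states. The following NumPy sketch is illustrative only; the function names `random_unitary` and `training_cost` are ours and do not come from the paper's implementation, which uses PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d, rng):
    """Sample a Haar-random d x d unitary via QR decomposition."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    # Normalize column phases so the distribution is Haar-uniform
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases

def training_cost(V, U, states):
    """Mean infidelity between V|psi> and U|psi> over the training set."""
    fidelities = [abs(np.vdot(U @ psi, V @ psi)) ** 2 for psi in states]
    return 1.0 - np.mean(fidelities)

d = 4                       # a two-qubit example
U = random_unitary(d, rng)  # the (unknown) target operator

# Three normalized random training states
states = []
for _ in range(3):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    states.append(psi / np.linalg.norm(psi))

print(training_cost(U, U, states))          # perfect hypothesis: cost is numerically zero
print(training_cost(np.eye(d), U, states))  # identity guess: positive cost
```

A trained QNN minimizes such a cost over circuit parameters; the paper's point is that the achievable error depends not only on how many states are in `states` but also on their linear (in)dependence and entanglement.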

The authors would like to thank Thomas Wolf for providing the car-model data and for support with the use case and Rahul Banerjee for useful discussions. This work was partially funded by the BMWK projects PlanQK (01MK20005N), EniQmA (01MQ22007B), and SeQuenC (01MQ22009B).


Notes

  1. https://github.com/UST-QuAntiL/linear_struct_QNN.



Author information

Correspondence to Alexander Mandl.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Mandl, A., Barzen, J., Bechtold, M., Keckeisen, M., Leymann, F., Vaudrevange, P.K.S. (2024). Linear Structure of Training Samples in Quantum Neural Network Applications. In: Monti, F., et al. Service-Oriented Computing – ICSOC 2023 Workshops. ICSOC 2023. Lecture Notes in Computer Science, vol 14518. Springer, Singapore. https://doi.org/10.1007/978-981-97-0989-2_12


  • DOI: https://doi.org/10.1007/978-981-97-0989-2_12


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0988-5

  • Online ISBN: 978-981-97-0989-2

  • eBook Packages: Computer Science (R0)
