
Tensor Algebra on an Optoelectronic Microchip

Conference paper. In: Intelligent Computing (SAI 2023).

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 711)

Abstract

Tensor algebra lies at the core of computational science and machine learning. Because it is so heavily used, entire libraries are dedicated to improving its performance. Conventional tensor algebra performance boosts focus on algorithmic optimizations, which in turn lead to incremental improvements. In this paper, we describe a method to accelerate tensor algebra in a different way: by outsourcing operations to an optical microchip. We outline a numerical programming language developed to perform tensor algebra computations that is designed to leverage our optical hardware’s full potential. We introduce the language’s current grammar and go over the compiler design. We then show a new way to store sparse rank-n tensors in RAM that outperforms conventional array storage (used by C++, Java, etc.). This method is more memory-efficient than the Compressed Sparse Fiber (CSF) format and is specifically tuned for our optical hardware. Finally, we show how the scalar-tensor product, rank-n Kronecker product, tensor dot product, Khatri-Rao product, face-splitting product, and vector cross product can be compiled into operations native to our optical microchip through various tensor decompositions.
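For readers unfamiliar with the operations named in the abstract, the following sketch (ours, not drawn from the paper; using NumPy rather than the authors' optical hardware or their Apollo language) illustrates the reference semantics of each product on small dense examples:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])  # 2x2
B = np.array([[0, 1], [1, 0]])  # 2x2

# Scalar-tensor product: elementwise scaling by a scalar.
scaled = 3 * A

# Kronecker product (np.kron generalizes to rank-n arrays).
K = np.kron(A, B)  # 4x4

# Tensor dot product: contraction over chosen axes
# (axes=1 reduces to the ordinary matrix product here).
D = np.tensordot(A, B, axes=1)

# Khatri-Rao product: column-wise Kronecker product.
KR = np.column_stack(
    [np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])]
)  # 4x2

# Face-splitting product: row-wise Kronecker product.
FS = np.vstack(
    [np.kron(A[i, :], B[i, :]) for i in range(A.shape[0])]
)  # 2x4

# Vector cross product (3D case).
c = np.cross(np.array([1, 0, 0]), np.array([0, 1, 0]))  # -> [0, 0, 1]
```

The paper's contribution is compiling these operations to optical-native primitives via tensor decompositions; the NumPy calls above only pin down what each operation computes.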


Notes

  1. We refer to Definition 3 in the general case to provide a complete definition, but only discuss implementation in the vector case.

  2. The RAM referred to throughout this section is a simplified virtual abstraction. Hence, we freely interact with it using numbers in the decimal system. The actual RAM is referred to when discussing compilation to target architectures, which will be done in a future paper.

  3. Exact RAM indices are not included.

  4. Apollo does not yet support user-defined subroutines, so a local segment is not required.

  5. \(\mathcal {X}\) is assumed to be a tensor of rank \(n>0\), since the parser would map the scalar case to scalar multiplication.

  6. Higher-rank cross products can be defined using the Levi-Civita symbol \(\epsilon_{ijk}\), which we omit due to relatively few applications.
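The Levi-Civita construction mentioned in note 6 defines the cross product as the contraction \((a \times b)_i = \sum_{j,k} \epsilon_{ijk}\, a_j b_k\). A minimal sketch of that contraction (our illustration, not the paper's implementation) follows:

```python
import numpy as np
from itertools import permutations

def levi_civita(n=3):
    """Rank-n Levi-Civita tensor: +1 on even permutations of (0..n-1),
    -1 on odd permutations, 0 everywhere else."""
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        # Sign of the permutation via its inversion count.
        inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                         if perm[a] > perm[b])
        eps[perm] = (-1) ** inversions
    return eps

def cross(a, b):
    """(a x b)_i = eps_ijk a_j b_k, realized as two tensordot contractions."""
    eps = levi_civita(3)
    m = np.tensordot(eps, a, axes=([1], [0]))  # m[i, k] = eps_ijk a_j
    return np.tensordot(m, b, axes=([1], [0]))  # sum_k m[i, k] b_k
```

Because the definition is a pure tensor contraction, it fits the tensordot machinery the paper compiles to optical primitives, which is presumably why the authors note it generalizes beyond three dimensions.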



Acknowledgments

We thank Dhruv Anurag for Apollo-related discussion and testing. We thank Jagadeepram Maddipatla for creating test cases. We thank Dr. Jonathan Osborne for mathematical discussion and advice. We thank Mr. Emil Jurj for supporting this project. We thank Shihao Cao for support and useful discussion about the project’s future. Finally, we thank our families for extended support and patience.

Author information


Correspondence to Sathvik Redrouthu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Redrouthu, S., Athavale, R. (2023). Tensor Algebra on an Optoelectronic Microchip. In: Arai, K. (eds) Intelligent Computing. SAI 2023. Lecture Notes in Networks and Systems, vol 711. Springer, Cham. https://doi.org/10.1007/978-3-031-37717-4_3
