
Universal: Reliable, Reproducible, and Energy-Efficient Numerics

  • Conference paper
  • In: Next Generation Arithmetic (CoNGA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13253)

Abstract

Universal provides a collection of arithmetic types, tools, and techniques for performant, reliable, reproducible, and energy-efficient algorithm design and optimization. The library contains a full spectrum of custom arithmetic data types, ranging from memory-efficient fixed-size, arbitrary-precision integers, fixed-point, regular and tapered floating-point, logarithmic, faithful, and interval arithmetic to adaptive-precision integer, decimal, rational, and floating-point arithmetic. All arithmetic types share a common control interface for setting and querying bits, which simplifies numerical verification algorithms. The library can be used to create mixed-precision algorithms that minimize the energy consumption of essential algorithms in embedded intelligence and high-performance computing. Universal also contains command-line tools to visualize and interrogate the encoding and decoding of numeric values in all the available types. Finally, Universal provides error-free transformations for floating-point arithmetic, enabling reproducible computation and linear algebra through user-defined rounding techniques.

Universal is developed by open-source contributors and is supported and maintained by Stillwater Supercomputing, Inc.



Author information

Corresponding author: E. Theodore L. Omtzigt.

Appendix A: Squeezing Algorithms

[Figures q–s: algorithm listings for the squeezing procedures; not reproduced here.]


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Omtzigt, E.T.L., Quinlan, J. (2022). Universal: Reliable, Reproducible, and Energy-Efficient Numerics. In: Gustafson, J., Dimitrov, V. (eds) Next Generation Arithmetic. CoNGA 2022. Lecture Notes in Computer Science, vol 13253. Springer, Cham. https://doi.org/10.1007/978-3-031-09779-9_7


  • DOI: https://doi.org/10.1007/978-3-031-09779-9_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-09778-2

  • Online ISBN: 978-3-031-09779-9

  • eBook Packages: Computer Science (R0)
