
Mathematical Abstraction in a Simple Programming Tool for Parallel Embedded Systems

  • Fritz Mayer-Lindenberg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11657)

Abstract

We explain the application of a mathematical abstraction that leads to a simple tool for a variety of parallel embedded systems. The intended targets are networks of processors used in numeric applications such as digital signal processing and robotics; these can include mixes of simple processors configured on an FPGA (field-programmable gate array) operating on various number codes. To cope with such hardware, and to implement numeric computations with some ease, a new language, π-Nets, was designed and supported by a compiler. Compilation builds on a netlist identifying the processors available for the particular application, and it integrates facilities to simulate entire many-threaded applications in order to analyze precision and verify the specified timing. The main focus of the paper, however, is the language design, which builds firmly on mathematical considerations. The abstraction chosen to deal with the various number codes is to program on the basis of real numbers, in terms of predefined operations on tuples; a separate step is then needed to execute on a particular processor. To deal with errors, the number set is enlarged to also contain ‘invalid’ data. Further simplification comes from generously overloading scalar operations to tuples, e.g. tuples used as complex signal vectors. Operating on the reals also suits high-precision embedded computing and performing computations on one or several PCs. To these features, π-Nets adds simple, non-standard structures to handle parallelism and real-time control. Finally, there is a simple way to specify the target networks in enough detail to allow for compilation, and even to model configurable, FPGA-based components in an original way. The paper concludes with a short presentation of an advanced target and a funny example program.
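The core abstraction described above can be illustrated in ordinary code. The following is a minimal Python sketch, not π-Nets syntax: the names `RTuple` and `INVALID` are hypothetical, chosen only to mirror the abstract's ideas of programming on real-valued tuples, overloading scalar operations elementwise, and enlarging the number set with an ‘invalid’ value that propagates through computations.

```python
import math

# Hypothetical illustration (not pi-Nets): 'invalid' data modeled as NaN,
# which propagates through arithmetic just as the abstract describes.
INVALID = float("nan")

class RTuple:
    """A fixed-length tuple of reals with scalar operations overloaded elementwise."""
    def __init__(self, *xs):
        self.xs = tuple(float(x) for x in xs)

    def _zip(self, other, op):
        # A plain scalar is broadcast to every component of the tuple.
        ys = other.xs if isinstance(other, RTuple) else (other,) * len(self.xs)
        return RTuple(*(op(a, b) for a, b in zip(self.xs, ys)))

    def __add__(self, other):
        return self._zip(other, lambda a, b: a + b)

    def __mul__(self, other):
        return self._zip(other, lambda a, b: a * b)

    def valid(self):
        # A tuple is valid only if no component is 'invalid'.
        return all(not math.isnan(x) for x in self.xs)

# A complex signal sample modeled as a 2-tuple (re, im):
z = RTuple(1.0, 2.0)
w = z * 0.5 + RTuple(0.25, INVALID)  # the invalid component propagates
print(w.xs[0])     # 0.75
print(w.valid())   # False: the second component is invalid
```

The point of the sketch is the separation the abstract calls for: the program is written against exact reals and tuple operations, while mapping each operation onto a concrete number code of some processor would be a distinct, later step.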

Keywords

Real computing · Tuple data · Substitutions · Processor networks · Compilation


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Technical University of Hamburg, Hamburg, Germany
