Part of the book series: Springer Theses


Abstract

In this chapter we will address three questions: (1) What is reservoir computing? (2) What does it have to do with optics and electronics? (3) What are FPGAs?


Notes

  1. This is obviously a debatable point. But it did work for me: my true revelation on reservoir computing, how and why it works, happened when I saw what is out there, so I am going to stick to this plan.

  2. Note that the example in Fig. 1.2 does contain several loops.

  3. Although I cannot guarantee the completeness of this list, I did my best to cite all experimental setups known at the time of writing.

  4. Photonics is quite a tricky term. I have yet to find an established and precise definition and, in my experience, scientists interpret the concept differently. In the present work, for simplicity, I make no distinction between these three terms.

  5. Technically, it is not zero: the SLD is emitting light, hence a DC voltage \(V_\text{DC} \sim I_0/2\) is present. But we can ignore it, since it is filtered out by the amplifier (a small numerical illustration follows these notes).

  6. Note that the delay T is the total propagation time from the MZ optical output to its electrical input, that is, around the full loop. In other words, fibre patch cords and electrical cables also add to the delay, but their contribution is relatively small (see the back-of-the-envelope sketch after these notes).

  7. It seems that the engineers ran out of inspiration when they named their devices! Do not worry if you get lost in all these acronyms, though: we will not use them past this section.

  8. An Artix evaluation board can be purchased for as low as $100.

  9. Implementation times of my designs never exceeded an hour, though.

  10. From a few seconds up to a minute, in my experience.
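
To make note 5 concrete, here is a minimal numerical sketch of why the DC photovoltage can be ignored: a DC offset of \(I_0/2\) riding on the photodiode output is removed by the AC-coupled amplifier input, modelled here as a first-order high-pass filter. All the numbers (sampling rate, modulation frequency, filter cut-off) are illustrative assumptions, not values from the experiment.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Illustrative, assumed values; none come from the actual setup.
fs = 1e6                                       # sampling rate, Hz
t = np.arange(0, 10e-3, 1 / fs)                # 10 ms of signal
I0 = 1.0                                       # normalised SLD intensity
v_dc = I0 / 2                                  # DC offset at the photodiode
v_signal = 0.1 * np.sin(2 * np.pi * 5e3 * t)   # useful 5 kHz component

v_raw = v_dc + v_signal                        # what the photodiode delivers

# Model the AC-coupled amplifier input as a first-order high-pass
# filter with a 100 Hz cut-off (also an assumption).
b, a = butter(1, 100, btype="highpass", fs=fs)
v_out = lfilter(b, a, v_raw)

# After the initial transient the mean is ~0: the DC term is gone,
# while the 5 kHz signal passes essentially unattenuated.
print(f"mean before filter: {v_raw.mean():.3f}")              # ~0.5
print(f"mean after filter:  {v_out[len(t)//2:].mean():.3e}")  # ~0
```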
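And for note 6, a back-of-the-envelope calculation showing why patch cords and cables contribute little to the total loop delay T: each segment delays the signal by \(nL/c\), so metre-scale links add only nanoseconds next to a kilometre-scale fibre spool. The lengths and group indices below are assumptions for illustration, not measured values from the setup.

```python
# Propagation delay of each loop segment is n * L / c. The lengths and
# group indices below are assumptions chosen for illustration only.
C = 299_792_458.0  # speed of light in vacuum, m/s

def segment_delay(length_m: float, n_group: float) -> float:
    """Return the propagation delay (in seconds) of one loop segment."""
    return n_group * length_m / C

segments = {
    "fibre spool (assumed 1.6 km, n = 1.468)": segment_delay(1600.0, 1.468),
    "fibre patch cord (assumed 1 m)":          segment_delay(1.0, 1.468),
    "coaxial cable (assumed 1 m, n = 1.5)":    segment_delay(1.0, 1.5),
}

total = sum(segments.values())
for name, d in segments.items():
    print(f"{name}: {d * 1e6:9.4f} us  ({100 * d / total:5.2f} % of T)")
# The spool dominates: the metre-scale links contribute ~0.1 % of T.
```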



Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Antonik, P. (2018). Introduction. In: Application of FPGA to Real-Time Machine Learning. Springer Theses. Springer, Cham. https://doi.org/10.1007/978-3-319-91053-6_1
