Development of Physical Super-Turing Analog Hardware
In the 1930s, mathematician Alan Turing proposed a mathematical model of computation, now called a Turing Machine, to describe how a person follows a repetitive procedure to arrive at a final result. This computational model has been the foundation of all modern digital computers since World War II. Turing also speculated that this model had limits and that more powerful computing machines should exist. In 1993, Siegelmann and colleagues introduced a Super-Turing computational model that may be an answer to Turing's call. Unlike the general class of hypercomputers, introduced in 1999 to include the Super-Turing model and others, Super-Turing computational models face no inherent obstacle to physical or biological realization. This report describes research to design, develop, and physically realize two prototypes of analog recurrent neural networks capable of solving problems in the Super-Turing complexity hierarchy, comparable to the class BPP/log*. We present plans to test and characterize these prototypes on problems that demonstrate their anticipated Super-Turing capabilities in modeling chaotic systems.
Keywords: Neural Networks · Analog Computing · Super-Turing Computation · Hypercomputing
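The analog recurrent neural network (ARNN) underlying Siegelmann's Super-Turing results can be illustrated with a minimal state-update sketch. The function names, dimensions, and random weights below are illustrative assumptions, not the authors' hardware design; note that the Super-Turing power of the model depends on real-valued weights, so any finite-precision simulation like this one is only an approximation of the formal model.

```python
import numpy as np

def saturated_linear(x):
    # Siegelmann-Sontag ARNNs use the saturated-linear activation,
    # which clips each neuron's value to the interval [0, 1].
    return np.clip(x, 0.0, 1.0)

def arnn_step(state, inputs, W, U, b):
    # One synchronous update: x(t+1) = sigma(W x(t) + U u(t) + b)
    return saturated_linear(W @ state + U @ inputs + b)

# Hypothetical small network: 4 neurons, 2 input lines.
rng = np.random.default_rng(0)
n, m = 4, 2
W = rng.normal(size=(n, n))   # recurrent weights (real-valued)
U = rng.normal(size=(n, m))   # input weights
b = rng.normal(size=n)        # biases
x = np.zeros(n)               # initial state
for t in range(10):
    x = arnn_step(x, np.zeros(m), W, U, b)
print(x.shape)  # (4,)
```

The recurrence, rather than the activation, carries the computation: iterating the update lets the network's real-valued state encode unbounded information over time, which is where the model's power beyond the Turing limit arises.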
- 1. Turing, A.: On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, 230–265 (1936)
- 2. Turing, A.M.: Intelligent Machinery, report for the National Physical Laboratory. In: Meltzer, B., Michie, D. (eds.) Machine Intelligence 5 (1969)
- 3. Siegelmann, H.T.: Computation Beyond the Turing Limit. Science 268(5210), 545–548 (1995)
- 4. Siegelmann, H.T.: Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhäuser, Boston (December 1998)
- 6. Audhkhasi, K., Osoba, O., Kosko, B.: Noise Benefits in Backpropagation and Deep Bidirectional Pre-training. In: Proceedings of the International Joint Conference on Neural Networks, Dallas, Texas, USA, pp. 2254–2261 (2013)
- 7. Goodman, J.W., Dias, A.R., Woody, L.M., Erickson, J.: Parallel Optical Incoherent Matrix-Vector Multiplier. Technical Report L-723-1, Department of Electrical Engineering, Stanford University (February 15, 1979)
- 8. Bade, S.L., Hutchings, B.L.: FPGA-Based Stochastic Neural Networks – Implementation. In: IEEE FPGAs for Custom Computing Machines Workshop, Napa, CA, pp. 189–198 (1994)
- 9. Younger, A.S., Redd, E.: Computing by Means of Physics-Based Optical Neural Networks. In: Developments in Computational Modeling 2010, EPTCS 26, pp. 159–167 (2010). arXiv:1006.1434v1
- 11. Younger, A.S.: Learning in Fixed-Weight Recurrent Neural Networks. Ph.D. Dissertation, University of Utah (1996)