Deep IA-BI and Five Actions in Circling

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11935)

Abstract

Deep bidirectional intelligence (BI) via a YIng YAng (IA) system, or shortly Deep IA-BI, is featured by circling of an A-mapping and an I-mapping (shortly, AI circling) that sequentially performs each of five actions. A basic foundation of IA-BI is bidirectional learning, which makes the cascading of the A-mapping and the I-mapping (shortly, A-I cascading) approximate an identity mapping, with a nature of layered, topology-preserving, and modularised development. One exemplar is Lmser, which improves the autoencoder by incremental bidirectional layered development of cognition, featured by two dual natures: duality in paired neurons (DPN) and duality in connection weights (DCW). Two typical IA-BI scenarios are further addressed. One considers bidirectional cognition and image thinking, together with a proposal that combines the theories of Hubel-Wiesel and of Chen. The other considers bidirectional integration of cognition, knowledge accumulation, and abstract thinking for improving implementations of searching, optimising, and reasoning. In particular, an IA-DSM scheme is proposed for solving combinatorial tasks featured by a doubly stochastic matrix (DSM), such as the travelling salesman problem, and a subtree-driven reasoning scheme is proposed for improving production-rule based reasoning. In addition, some remarks are made on the relations of Deep IA-BI to the Hubel-Wiesel theory, the Sperry theory, and the A5 problem-solving paradigm.
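As a rough illustration of the A-I cascading idea, the following minimal NumPy sketch builds an Lmser-style autoencoder in which the top-down I-mapping reuses the transposed bottom-up weights (the DCW duality) and each top-down layer is paired with its bottom-up counterpart (the DPN duality). The layer sizes, the sigmoid nonlinearity, and all names here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code) of Lmser-style A-I cascading:
# the A-mapping encodes, the I-mapping decodes with the SAME weights
# transposed (DCW), and training would push the cascade towards an
# identity mapping. Dimensions are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One shared weight matrix per layer (DCW): W bottom-up, W.T top-down,
# roughly halving the parameters of a plain autoencoder.
W1 = rng.normal(0, 0.1, (64, 256))   # input dim 256 -> hidden 64
W2 = rng.normal(0, 0.1, (16, 64))    # hidden 64 -> code 16

def a_mapping(x):
    """Bottom-up A-mapping (encoding): x -> (h1, y)."""
    h1 = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h1)
    return h1, y

def i_mapping(y):
    """Top-down I-mapping (decoding) with the same weights transposed."""
    h1_hat = sigmoid(W2.T @ y)
    x_hat = W1.T @ h1_hat        # linear output layer for reconstruction
    return h1_hat, x_hat

x = rng.normal(size=256)
h1, y = a_mapping(x)
h1_hat, x_hat = i_mapping(y)
# Training (omitted) would minimise both discrepancies below; the
# layer-wise one reflects the DPN pairing of the bottom-up and
# top-down neurons at the same layer.
print("||x - x_hat||:", np.linalg.norm(x - x_hat))
print("||h1 - h1_hat|| (DPN pairing):", np.linalg.norm(h1 - h1_hat))
```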

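The IA-DSM scheme itself is specified in the paper; as background for the DSM constraint it exploits, the sketch below uses the classical Sinkhorn-Knopp iteration to turn a positive score matrix into an approximately doubly stochastic one, as would be needed to encode soft city-to-position assignments for the travelling salesman problem. This is a standard device, not necessarily the update used inside IA-DSM, and the function and variable names are assumptions for illustration only.

```python
# Sketch of the doubly stochastic matrix (DSM) constraint underlying
# DSM-featured combinatorial tasks. In a TSP encoding, entry (i, j) is
# the soft assignment of city i to tour position j; a tour corresponds
# to a permutation matrix, an extreme point of the DSM polytope.
import numpy as np

def sinkhorn(scores, n_iters=200):
    """Alternately normalise rows and columns so each sums to 1."""
    p = np.array(scores, dtype=float)  # copy so the input is untouched
    for _ in range(n_iters):
        p /= p.sum(axis=1, keepdims=True)  # rows sum to 1
        p /= p.sum(axis=0, keepdims=True)  # columns sum to 1
    return p

rng = np.random.default_rng(0)
scores = np.exp(rng.normal(size=(5, 5)))   # positive scores, 5 cities
p = sinkhorn(scores)
print(np.allclose(p.sum(axis=1), 1), np.allclose(p.sum(axis=0), 1))
# Rounding p to the nearest permutation matrix yields a candidate tour.
```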
Keywords

Bidirectional · Cognition · Image thinking · Abstract thinking · Inferring · Reasoning · Topology · Optimising · Production rule

References

1. Ballard, D.H.: Modular learning in neural networks. In: AAAI, pp. 279–284 (1987)
2. Bell, A.J., Sejnowski, T.J.: The independent components of natural scenes are edge filters. Vision Res. 37(23), 3327–3338 (1997)
3. Bourlard, H., Kamp, Y.: Auto-association by multilayer perceptrons and singular value decomposition. Biol. Cybern. 59(4–5), 291–294 (1988)
4. Chen, L.: Topological structure in visual perception. Science 218(4573), 699–700 (1982)
5. Chen, L.: The topological approach to perceptual organization. Vis. Cogn. 12(4), 553–637 (2005)
6. Cooper, L.N., Liberman, F., Oja, E.: A theory for the acquisition and loss of neuron specificity in visual cortex. Biol. Cybern. 33(1), 9–28 (1979)
7. Cottrell, G., Munro, P., Zipser, D.: Image compression by backpropagation: an example of extensional programming. In: Sharkey, N.E. (ed.) Models of Cognition: A Review of Cognition Science, Norwood, pp. 208–240 (1989)
8. Dang, C., Xu, L.: A barrier function method for the nonconvex quadratic programming problem with box constraints. J. Global Optim. 18(2), 165–188 (2000)
9. Dang, C., Xu, L.: A globally convergent Lagrange and barrier function iterative algorithm for the traveling salesman problem. Neural Netw. 14(2), 217–230 (2001)
10. Dang, C., Xu, L.: A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem. Neural Comput. 14(2), 303–324 (2002)
11. Dayan, P., Hinton, G.E., Neal, R.M., Zemel, R.S.: The Helmholtz machine. Neural Comput. 7(5), 889–904 (1995)
12. Elman, J.L., Zipser, D.: Learning the hidden structure of speech. J. Acoust. Soc. Am. 83(4), 1615–1626 (1988)
13. Fukushima, K.: Cognitron: a self-organizing multilayered neural network. Biol. Cybern. 20(3–4), 121–136 (1975)
14. Fukushima, K.: Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 36(4), 193–202 (1980)
15. Fukushima, K., Miyake, S., Ito, T.: Neocognitron: a neural network model for a mechanism of visual pattern recognition. IEEE Trans. Syst. Man Cybern. SMC-13(5), 826–834 (1983)
16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
17. Hinton, G.E., Dayan, P., Frey, B.J., Neal, R.M.: The wake-sleep algorithm for unsupervised neural networks. Science 268(5214), 1158–1161 (1995)
18. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006)
19. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
20. Hinton, G.E., Sejnowski, T.J.: Learning and relearning in Boltzmann machines. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pp. 282–317 (1986)
21. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79(8), 2554–2558 (1982)
22. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
23. Huang, W., Tu, S., Xu, L.: Revisit Lmser and its further development based on convolutional layers. CoRR abs/1904.06307 (2019)
24. Hubel, D.H., Wiesel, T.N.: Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160(1), 106–154 (1962)
25. Hubel, D.H., Wiesel, T.N.: Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 195(1), 215–243 (1968)
26. LeCun, Y., et al.: Handwritten digit recognition with a back-propagation network. In: Advances in Neural Information Processing Systems, pp. 396–404 (1990)
27. LeCun, Y., Kavukcuoglu, K., Farabet, C.: Convolutional networks and applications in vision. In: Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pp. 253–256. IEEE (2010)
28. Li, P., Tu, S., Xu, L.: GAN flexible Lmser for super-resolution. In: ACM International Conference on Multimedia, Nice, France, 21–25 October 2019. ACM (2019)
29. Linsker, R.: Self-organization in a perceptual network. Computer 21(3), 105–117 (1988)
30. Martin, K.A.: A brief history of the feature detector. Cereb. Cortex 4(1), 1–7 (1994)
31. Pan, Y.: The synthesis reasoning. Pattern Recogn. Artif. Intell. 9, 201–208 (1996)
32. Pearl, J.: Fusion, propagation, and structuring in belief networks. Artif. Intell. 29(3), 241–288 (1986)
33. Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo (1988)
34. Qian, X.: On thinking sciences. Chin. J. Nat. 8, 566 (1983)
35. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
36. Rubner, J., Schulten, K.: Development of feature detectors by self-organization. Biol. Cybern. 62(3), 193–199 (1990)
37. Sanger, T.D.: Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Netw. 2(6), 459–473 (1989)
38. Silver, D., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484–489 (2016)
39. Silver, D., et al.: Mastering the game of Go without human knowledge. Nature 550(7676), 354 (2017)
40. Xu, L.: Least MSE reconstruction for self-organization: (i) multi-layer neural nets and (ii) further theoretical and experimental studies on one layer nets. In: Proceedings of the International Joint Conference on Neural Networks, Singapore, pp. 2363–2373 (1991)
41. Xu, L.: Combinatorial optimization neural nets based on a hybrid of Lagrange and transformation approaches. In: Proceedings of the World Congress on Neural Networks, pp. 399–404 (1994)
42. Xu, L.: Bayesian-Kullback coupled Ying-Yang machines: unified learnings and new results on vector quantization. In: Proceedings of the International Conference on Neural Information Processing (ICONIP 1995), pp. 977–988 (1995)
43. Xu, L.: On the hybrid LT combinatorial optimization: new U-shape barrier, sigmoid activation, least leaking energy and maximum entropy. In: Proceedings of ICONIP 1995, pp. 309–312 (1995)
44. Xu, L., Oja, E., Kultanen, P.: A new curve detection method: randomized Hough transform (RHT). Pattern Recogn. Lett. 11, 331–338 (1990)
45. Xu, L.: Investigation on signal reconstruction, search technique, and pattern recognition. Ph.D. dissertation, Tsinghua University, December 1986
46. Xu, L.: Least mean square error reconstruction principle for self-organizing neural-nets. Neural Netw. 6(5), 627–648 (1993)
47. Xu, L.: A unified learning scheme: Bayesian-Kullback Ying-Yang machine. In: Advances in Neural Information Processing Systems, pp. 444–450 (1996)
48. Xu, L.: BYY prod-sum factor systems and harmony learning. Invited talk. In: Proceedings of the International Conference on Neural Information Processing (ICONIP 2000), vol. 1, pp. 548–558 (2000)
49. Xu, L.: Data smoothing regularization, multi-sets-learning, and problem solving strategies. Neural Netw. 16(5–6), 817–825 (2003)
50. Xu, L.: A unified perspective and new results on RHT computing, mixture based learning, and multi-learner based problem solving. Pattern Recogn. 40(8), 2129–2153 (2007)
51. Xu, L.: Bayesian Ying-Yang system, best harmony learning, and five action circling. Front. Electr. Electron. Eng. China 5(3), 281–328 (2010)
52. Xu, L.: Codimensional matrix pairing perspective of BYY harmony learning: hierarchy of bilinear systems, joint decomposition of data-covariance, and applications of network biology. Front. Electr. Electron. Eng. China 6, 86–119 (2011)
53. Xu, L.: On essential topics of BYY harmony learning: current status, challenging issues, and gene analysis applications. Front. Electr. Electron. Eng. 7(1), 147–196 (2012)
54. Xu, L.: Further advances on Bayesian Ying Yang harmony learning. Appl. Inform. 2(5) (2015)
55. Xu, L.: The third wave of artificial intelligence. KeXue (Sci. Chin.) 69(3), 1–5 (2017). (in Chinese)
56. Xu, L.: Deep bidirectional intelligence: AlphaZero, deep IA search, deep IA infer, and TPC causal learning. Appl. Inform. 5(5), 38 (2018)
57. Xu, L.: An overview and perspectives on bidirectional intelligence: Lmser duality, double IA harmony, and causal computation. IEEE/CAA J. Autom. Sin. 6(4), 865–893 (2019)
58. Xu, L., Oja, E.: Randomized Hough transform: basic mechanisms, algorithms, and computational complexities. CVGIP Image Underst. 57(2), 131–154 (1993)
59. Xu, L., Yan, P., Chang, T.: Algorithm CNNEIM-A and its mean complexity. In: Proceedings of the 2nd International Conference on Computers and Applications, Beijing, 24–26 June 1987, pp. 494–499. IEEE Press (1987)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

1. Centre for Cognitive Machines and Computational Health (CMaCH), SEIEE, Shanghai Jiao Tong University, Shanghai, China
2. Neural Computation Research Centre, Brain and Intelligence Sci-Tech Institute, ZhangJiang National Lab, Shanghai, China