An Overview of Hybrid Neural Systems

  • Stefan Wermter
  • Ron Sun
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1778)

Abstract

This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems that are based mainly on artificial neural networks but also allow a symbolic interpretation of, or interaction with, symbolic components. In this overview, we describe recent results in hybrid neural systems, give a brief overview of the main methods used, outline the work presented here, and provide additional references. We also highlight some important general issues and trends.


Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Stefan Wermter 1
  • Ron Sun 2
  1. Centre for Informatics, SCET, University of Sunderland, Sunderland, UK
  2. CECS Department, University of Missouri, Columbia, USA
