
Neural-network-based blackboard demon subsystems

Abstract

We propose two neural network architectures, based on feedforward and probabilistic neural network models, to simulate the blackboard demon subsystem, which is responsible for triggering knowledge sources in accordance with the complex monitoring conditions on the hypotheses posted on the blackboard. Both architectures include a preprocessor module that transforms symbolic hypotheses into numerical representations. They learn and record the triggering relationships between hypotheses and knowledge sources in the network links, and they perform the blackboard monitoring function better than traditional symbolic demon subsystems. In comparison, the probabilistic-neural-network-like architecture performs better when a centralized knowledge representation can be used and when only one-to-one or one-to-many mappings between hypotheses and triggering patterns are involved. The feedforward architecture may be useful when a distributed knowledge representation is possible and when the time requirements for training the architecture to learn the complex mappings are not too strict.
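As a rough illustration of the probabilistic-neural-network-like architecture, the following Python sketch shows how a Specht-style PNN could map numerically encoded hypotheses to the knowledge sources they trigger. The hypothesis encoding, the PNNDemon class, the knowledge-source names, and the smoothing parameter sigma are all illustrative assumptions, not the paper's actual design.

    # Minimal sketch of a Specht-style probabilistic neural network (PNN)
    # acting as a blackboard demon: encoded hypotheses are classified to
    # the knowledge source they should trigger. All names, encodings, and
    # parameters here are illustrative assumptions.
    import numpy as np

    class PNNDemon:
        def __init__(self, sigma=0.3):
            self.sigma = sigma      # Gaussian kernel width (smoothing parameter)
            self.patterns = []      # stored training hypothesis vectors
            self.labels = []        # knowledge source associated with each pattern

        def train(self, hypothesis_vec, knowledge_source):
            # PNN "training" is one-pass: each pattern is simply stored
            # along with the knowledge source it should trigger.
            self.patterns.append(np.asarray(hypothesis_vec, dtype=float))
            self.labels.append(knowledge_source)

        def trigger(self, hypothesis_vec):
            # Summation layer: accumulate Gaussian kernel activations per
            # knowledge source, then return the source with the largest
            # total response (the output-layer winner).
            x = np.asarray(hypothesis_vec, dtype=float)
            scores = {}
            for p, ks in zip(self.patterns, self.labels):
                act = np.exp(-np.sum((x - p) ** 2) / (2 * self.sigma ** 2))
                scores[ks] = scores.get(ks, 0.0) + act
            return max(scores, key=scores.get)

    # Usage with hypothetical binary-encoded hypotheses, as a preprocessor
    # module might produce them:
    demon = PNNDemon()
    demon.train([1, 0, 0, 1], "KS_segment_phrase")
    demon.train([0, 1, 1, 0], "KS_rate_hypothesis")
    print(demon.trigger([1, 0, 0, 0]))   # closest stored pattern wins

The one-pass storage step is what makes a PNN-style demon attractive when the iterative training times of the feedforward alternative are a concern, which matches the trade-off described in the abstract.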




Cite this article

Ho, C.S., Hsu, C.C. Neural-network-based blackboard demon subsystems. Appl Intell 3, 143–158 (1993). https://doi.org/10.1007/BF00871894
