Towards Integration of Memory Based Learning and Neural Networks

  • Chung-Kwan Shin
  • Sang Chan Park


We propose a hybrid prediction system that combines a neural network (NN) with memory based learning (MBR). NN and MBR are frequently applied to data mining with various objectives: both can be applied directly to classification and regression without additional transformation mechanisms, and both are good at learning the dynamic behavior of a system over a period of time. In our hybrid system, a feature weight set calculated from the trained NN plays the core role in connecting the two learning strategies, and an explanation for each prediction can be given by retrieving and presenting the most similar examples from the case base. Experimental results show that the hybrid system has high potential for solving data mining problems.
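The idea in the abstract can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the paper's implementation: the network is a toy one-hidden-layer MLP trained by plain gradient descent, and each input's feature weight is taken as the summed absolute strength of its paths through the hidden layer (one common NN-derived weighting; the paper's exact formula may differ). The weights then scale a nearest-neighbor distance, and the retrieved cases double as the explanation for the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy case base: feature 0 determines the class, feature 1 is pure noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Tiny one-hidden-layer network, trained with full-batch gradient descent ---
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(1000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    g = (p - y)[:, None] / len(X)          # gradient of BCE loss w.r.t. output logit
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)           # backprop through tanh hidden layer
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Feature weights from the trained NN: summed absolute input->hidden->output
# path strength per input feature, normalized to sum to 1 (an assumed scheme).
w = np.abs(W1) @ np.abs(W2).ravel()
w = w / w.sum()

def predict_with_neighbors(q, k=5):
    """Feature-weighted kNN prediction plus the retrieved cases as explanation."""
    d = np.sqrt((((X - q) ** 2) * w).sum(axis=1))
    idx = np.argsort(d)[:k]
    return y[idx].mean(), idx
```

Because feature 1 carries no signal, its learned weight should be small, so the weighted distance effectively ignores the irrelevant feature — the connection between NN training and MBR retrieval that the abstract describes.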


Keywords: Neural Network, Soft Computing, Input Feature, Feature Weighting, Irrelevant Feature





Copyright information

© Springer-Verlag London Limited 2001

