Hierarchical Incremental Class Learning with Reduced Pattern Training


Abstract

Hierarchical Incremental Class Learning (HICL) is a recent task decomposition method for pattern classification. HICL has proven to be a good classifier, but closer examination reveals room for improvement. This paper proposes a theoretical model for evaluating the performance of HICL and presents an approach that improves its classification accuracy by applying the concept of Reduced Pattern Training (RPT). The theoretical analysis shows that HICL can achieve better classification accuracy than Output Parallelism [Guan and Li: IEEE Transactions on Neural Networks, 13 (2002), 542–550]. The RPT procedure is described and compared with the original training procedure: RPT systematically reduces the size of the training data set according to the order in which the sub-networks are built. Results on four benchmark classification problems show much promise for the improved model.
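To make the RPT idea above concrete, here is a minimal Python sketch, not the authors' implementation: it trains one sub-network per class in a fixed order and, after each sub-network is built, removes that class's patterns from the training set. The names `reduced_pattern_training` and `build_subnetwork` are hypothetical, and the hierarchical wiring between HICL sub-networks (each module feeding its outputs to the next) is only noted in a comment.

```python
import numpy as np

def reduced_pattern_training(X, y, class_order, build_subnetwork):
    """Sketch of Reduced Pattern Training (RPT) for HICL.

    Sub-networks are built one class at a time, in `class_order`. After a
    sub-network has learned its class, that class's patterns are dropped,
    so each later sub-network trains on a systematically smaller set.
    `build_subnetwork` is a hypothetical callback that trains and returns
    one sub-network from patterns and one-vs-rest targets.
    """
    sub_networks = []
    keep = np.ones(len(y), dtype=bool)           # patterns still in the training set
    for k in class_order:
        targets = (y[keep] == k).astype(float)   # one-vs-rest targets for class k
        sub_networks.append(build_subnetwork(X[keep], targets))
        keep &= (y != k)                         # RPT step: remove class k's patterns
        # (In HICL each new sub-network would also receive the outputs of the
        # previously trained sub-networks as extra inputs; omitted here.)
    return sub_networks

# Toy usage with a stand-in "sub-network" (the centroid of the target class):
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 4))
y = np.repeat([0, 1, 2], 30)
nets = reduced_pattern_training(
    X, y, class_order=[2, 0, 1],
    build_subnetwork=lambda Xs, t: Xs[t == 1].mean(axis=0),
)
print(len(nets), "sub-networks built on progressively smaller training sets")
```

Because each later module trains on fewer patterns, training cost falls as the hierarchy grows; in the paper's scheme the ordering of the sub-networks therefore matters, since it determines how much each module benefits from the reduction.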


References

  1. Auda G., Kamel M., Raafat H. (1996): Modular neural network architectures for classification. IEEE International Conference on Neural Networks, 2, 1279–1284

  2. Jacobs R.A., Jordan M.I., Nowlan S.J., Hinton G.E. (1991): Adaptive mixtures of local experts. Neural Computation, 3, 79–87

  3. Auda G., Kamel M., Raafat H. (1994): A new neural network structure with cooperative modules. World Congress on Computational Intelligence, 3, 1301–1306

  4. Jacobs R., Jordan M., Barto A. (1991): Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. Cognitive Science, 15, 219–250

  5. Murre J. (1992): Learning and Categorization in Modular Neural Networks. Harvester Wheatsheaf

  6. Romaniuk S.G., Hall L.O. (1993): Divide and conquer neural networks. Neural Networks, 6, 1105–1116

  7. Sharkey A.J.C. (1997): Modularity, combining and artificial neural nets. Connection Science, 9, 3–10

  8. Feldman J. (1989): Neural representation of conceptual knowledge. In: Nadel L. et al. (eds), Neural Connections, Mental Computation. MIT Press, Cambridge, Massachusetts, USA

  9. Anand R., Mehrotra K., Mohan C.K., Ranka S. (1995): Efficient classification for multiclass problems using modular neural networks. IEEE Transactions on Neural Networks, 6, 117–124

  10. Lu B.L., Kita H., Nishikawa Y. (1994): A multisieving neural network architecture that decomposes learning tasks automatically. Proceedings of the IEEE International Conference on Neural Networks, Orlando, FL, pp. 1319–1324

  11. Lu B.L., Ito M. (1999): Task decomposition and module combination based on class relations: A modular neural network for pattern classification. IEEE Transactions on Neural Networks, 10, 1244–1256

  12. Guan S.-U., Li P. (2002): A hierarchical incremental learning approach to task decomposition. Journal of Intelligent Systems, 12, 194–205

  13. Guan S.-U., Zhu F. (2004): Class decomposition for GA-based classifier agents – a Pitt approach. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34, 381–392

  14. Guan S.-U., Neo T.N., Bao C. (2004): Task decomposition using pattern distributor. Journal of Intelligent Systems, 13, 123–150

  15. Guan S.-U., Li S.C., Tan S.K. (2004): Neural network task decomposition based on output partitioning. Journal of the Institution of Engineers Singapore, 44, 78–89

  16. Guan S.-U., Li P. (2004): Incremental learning in terms of output attributes. Journal of Intelligent Systems, 13, 95–122

  17. Guan S.-U., Zhu F. (2005): A class decomposition approach for GA-based classifier agents. Engineering Applications of Artificial Intelligence, 18, 271–278

  18. Guan S.-U., Li S.C. (2000): An approach to parallel growing and training of neural networks. Proceedings of the 2000 IEEE International Symposium on Intelligent Signal Processing and Communication Systems, Honolulu, Hawaii, USA

  19. Guan S.-U., Li S. (2002): Parallel growing and training of neural networks using output parallelism. IEEE Transactions on Neural Networks, 13, 542–550

  20. Squires C.S., Shavlik J.W. (1991): Experimental analysis of aspects of the cascade-correlation learning architecture. Machine Learning Research Group Working Paper 91-1, Computer Science Department, University of Wisconsin-Madison

  21. Auda G., Kamel M., Raafat H. (1996): Modular neural network architectures for classification. IEEE International Conference on Neural Networks, 2, 1279–1284

  22. Lehtokangas M. (1999): Modeling with constructive backpropagation. Neural Networks, 12, 707–716

  23. Riedmiller M., Braun H. (1993): A direct adaptive method for faster backpropagation learning: The RPROP algorithm. Proceedings of the IEEE International Conference on Neural Networks, 586–591

  24. Prechelt L. (1994): PROBEN1: A set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Department of Informatics, University of Karlsruhe, Germany

Author information

Corresponding author

Correspondence to Sheng-Uei Guan.

About this article

Cite this article

Guan, S.-U., Bao, C. & Sun, R.T. Hierarchical Incremental Class Learning with Reduced Pattern Training. Neural Processing Letters 24, 163–177 (2006). https://doi.org/10.1007/s11063-006-9019-4

