Journal of Intelligent Information Systems, Volume 25, Issue 3, pp 275–291
A Fixed-Distribution PAC Learning Theory for Neural FIR Models
Kayvan Najarian, Computer Science Department, University of North Carolina at Charlotte
Abstract
The PAC learning theory provides a framework for assessing the learning properties of static models. This theory has been extended to cover the learning of modeling tasks with m-dependent data, provided that the data are distributed according to a uniform distribution. The extended theory can be applied to the learning of nonlinear FIR models, with the restriction that the data be uniformly distributed.
In this paper, the PAC learning scheme is extended to deal with any FIR model, regardless of the distribution of the data. This fixed-distribution, m-dependent extension of the PAC learning theory is then applied to the learning of FIR three-layer feedforward sigmoid neural networks.
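To make the setting concrete, the following is a minimal illustrative sketch (not code from the paper) of a nonlinear FIR model realized by a three-layer feedforward sigmoid network. The memory length m, hidden-layer width, and random weights are all assumptions for illustration: the point is that the output at time t depends only on the last m inputs, so samples more than m steps apart are independent, which is the m-dependence the extended PAC theory addresses, and the driving input here is drawn from a non-uniform (Gaussian) distribution, as the fixed-distribution extension permits.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the paper's model):
# a nonlinear FIR model of memory length m realized by a
# three-layer feedforward sigmoid network.

rng = np.random.default_rng(0)
m = 4          # FIR memory length (assumed)
hidden = 8     # hidden-layer width (assumed)

# Random weights standing in for a trained network.
W1 = rng.normal(size=(hidden, m))
b1 = rng.normal(size=hidden)
W2 = rng.normal(size=hidden)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fir_network(u_window):
    """Map the last m inputs to one output: y = W2 . sigmoid(W1 u + b1)."""
    return float(W2 @ sigmoid(W1 @ np.asarray(u_window) + b1))

# Slide the network over an input sequence drawn from an arbitrary
# (here Gaussian, i.e. non-uniform) distribution; outputs y[t] and y[s]
# with |t - s| > m depend on disjoint input windows.
u = rng.normal(size=100)
y = [fir_network(u[t - m:t]) for t in range(m, len(u))]
print(len(y))  # one output per full m-length window
```

Because the output depends on a bounded window of inputs, the sequence of (window, output) samples fed to a learner is m-dependent rather than i.i.d., which is precisely why the standard PAC sample-complexity bounds need the extension the paper develops.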
 Title
 A Fixed-Distribution PAC Learning Theory for Neural FIR Models
 Journal
 Journal of Intelligent Information Systems, Volume 25, Issue 3, pp 275–291
 Cover Date
 2005-11
 DOI
 10.1007/s10844-005-0194-y
 Print ISSN
 0925-9902
 Online ISSN
 1573-7675
 Publisher
 Kluwer Academic Publishers
 Keywords

 PAC learning
 nonlinear FIR model
 multi-layer feedforward neural networks
 m-dependency
 Authors

 Kayvan Najarian (1)
 Author Affiliations

 1. Computer Science Department, University of North Carolina at Charlotte, 9201 University City Blvd, Charlotte, NC 28223, USA