Journal of Intelligent Information Systems, Volume 25, Issue 3, pp. 275-291

A Fixed-Distribution PAC Learning Theory for Neural FIR Models

  • Kayvan Najarian, Computer Science Department, University of North Carolina at Charlotte



The PAC learning theory creates a framework to assess the learning properties of static models. This theory has been extended to cover learning of modeling tasks with m-dependent data, provided the data follow a uniform distribution. The extended theory can be applied to the learning of nonlinear FIR models, with the restriction that the data be uniformly distributed.
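To make the m-dependence assumption concrete, the following sketch (with hypothetical coefficients and sizes, not taken from the paper) generates data from a nonlinear FIR model: when the input sequence is i.i.d., outputs more than m steps apart share no inputs, which is exactly the m-dependence the extended theory assumes.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 3                                  # FIR memory length (hypothetical choice)
u = rng.uniform(-1.0, 1.0, size=200)   # i.i.d. input sequence

def f(window):
    # Hypothetical nonlinear FIR map of the last m inputs.
    return np.tanh(window @ np.array([0.8, -0.5, 0.3]))

# Each output depends only on the last m inputs, so y[t] and y[t + k]
# are built from disjoint inputs whenever k > m - 1: the output
# sequence is m-dependent.
y = np.array([f(u[t - m + 1 : t + 1]) for t in range(m - 1, len(u))])
print(y.shape)
```

Note that nothing here requires the inputs to be uniformly distributed; the paper's contribution is precisely to remove that restriction.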

In this paper, the PAC learning scheme is extended to deal with any FIR model, regardless of the distribution of the data. This fixed-distribution, m-dependent extension of the PAC learning theory is then applied to the learning of FIR three-layer feedforward sigmoid neural networks.
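The network class in question can be sketched as follows (layer sizes and weights are illustrative assumptions, not the paper's): a three-layer feedforward sigmoid network acts as an FIR model because its input is the sliding window of the last m samples, so the model's memory is bounded by m.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
m, hidden = 3, 5                       # hypothetical input/hidden sizes
W1 = rng.standard_normal((hidden, m))  # input -> hidden weights
b1 = np.zeros(hidden)
W2 = rng.standard_normal(hidden)       # hidden -> output weights
b2 = 0.0

def predict(window):
    """Map the last m inputs to one output: a single FIR step."""
    return sigmoid(W2 @ sigmoid(W1 @ window + b1) + b2)

u = rng.uniform(-1.0, 1.0, size=50)
y_hat = np.array([predict(u[t - m + 1 : t + 1]) for t in range(m - 1, len(u))])
print(y_hat.shape)
```

Because the window length m is fixed, the network's outputs inherit the m-dependence of the data-generating FIR process, which is what lets the fixed-distribution PAC bounds apply to this hypothesis class.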

Keywords: PAC learning, nonlinear FIR model, multi-layer feedforward neural networks, m-dependency