Abstract
We present a systematic method for incorporating prior knowledge (hints) into the learning-from-examples paradigm. The hints are represented in a canonical form that is compatible with descent techniques for learning. We focus in particular on the monotonicity hint, which states that the function to be learned is monotonic in some or all of the input variables. The application of monotonicity hints is demonstrated on two real-world problems: a credit card application task and a problem in medical diagnosis. We report experimental results showing that monotonicity hints lead to a statistically significant improvement in performance on both problems. Monotonicity is also analyzed from a theoretical perspective. We consider the class M of monotonically increasing binary output functions. Necessary and sufficient conditions for monotonic separability of a dichotomy are proven. The capacity of M is shown to depend heavily on the input distribution.
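The canonical-form idea described above can be illustrated with a small sketch. This is not the authors' implementation; it assumes a toy logistic model and a hypothetical `hint_error` penalty that duplicates each example, nudges a designated monotonic input upward, and penalizes any decrease in the output. Minimizing the data error plus this hint error by descent is one plausible reading of how a monotonicity hint enters training.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, X):
    # toy model: logistic unit (stand-in for a learned function)
    return sigmoid(X @ w)

def data_error(w, X, y):
    # ordinary learning-from-examples error on the labeled data
    return np.mean((predict(w, X) - y) ** 2)

def hint_error(w, X, j, delta=0.1):
    # hypothetical monotonicity-hint penalty: nudge feature j upward
    # and penalize (squared) any resulting decrease in the output,
    # so the hint is satisfied exactly when the penalty is zero
    Xp = X.copy()
    Xp[:, j] += delta
    drop = predict(w, X) - predict(w, Xp)  # > 0 means output decreased
    return np.mean(np.maximum(drop, 0.0) ** 2)

def grad(f, w, eps=1e-5):
    # finite-difference gradient; adequate for a small sketch
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

# synthetic data: the label increases (statistically) with feature 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(2)
lam = 1.0  # weight on the hint term (a free design choice)
objective = lambda w: data_error(w, X, y) + lam * hint_error(w, X, 0)
for _ in range(300):
    w -= 0.5 * grad(objective, w)
```

After training, the learned weight on the monotonic input is positive, so the hint penalty vanishes: the fitted function is monotonically increasing in that variable, consistent with the hint.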
Copyright information
© 1997 Springer Science+Business Media New York
Sill, J., Abu-Mostafa, Y. (1997). Monotonicity: Theory and Implementation. In: Docampo, D., Figueiras-Vidal, A.R., Pérez-González, F. (eds) Intelligent Methods in Signal Processing and Communications. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-1-4612-2018-3_6
Publisher Name: Birkhäuser, Boston, MA
Print ISBN: 978-1-4612-7383-7
Online ISBN: 978-1-4612-2018-3