Abstract
Nearest neighbor classification assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions due to the curse of dimensionality, and severe bias can result when the nearest neighbor rule is applied under these conditions. We propose locally adaptive nearest neighbor classification methods that aim to minimize this bias. We use locally linear support vector machines as well as quasiconformal transformed kernels to estimate an effective metric that produces neighborhoods elongated along less discriminant feature dimensions and constricted along the most discriminant ones. As a result, the class conditional probabilities can be expected to be approximately constant within the modified neighborhoods, so that better classification performance can be achieved. The efficacy of our method is validated and compared against competing techniques on a variety of data sets.
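The idea of an effective metric can be illustrated with a minimal sketch. The chapter estimates local feature relevance via locally linear SVMs and quasiconformal kernels; the code below instead uses a simple Fisher-style between/within variance ratio as a stand-in relevance estimator (an assumption for illustration, not the authors' method), then runs k-NN under the resulting locally weighted metric. Discriminant features receive large weights (constricting the neighborhood along them), irrelevant ones small weights (elongating it).

```python
import numpy as np

def local_feature_relevance(X, y, query, n_local=50):
    """Per-feature relevance near `query` (illustrative heuristic, not the
    chapter's SVM/kernel-based estimator): ratio of between-class to
    within-class variance of each feature over a plain-Euclidean
    neighborhood of the query."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:n_local]          # local neighborhood
    Xl, yl = X[idx], y[idx]
    classes = np.unique(yl)
    w = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        means = [Xl[yl == c, j].mean() for c in classes]
        between = np.var(means)
        within = np.mean([Xl[yl == c, j].var() for c in classes]) + 1e-12
        w[j] = between / within
    w += 1e-12                              # guard against an all-zero vector
    return w / w.sum()

def adaptive_knn_predict(X, y, query, k=5, n_local=50):
    """k-NN under the locally weighted metric: neighborhoods are
    constricted along discriminant features and elongated along the rest."""
    w = local_feature_relevance(X, y, query, n_local)
    d = np.sqrt(((X - query) ** 2 * w).sum(axis=1))
    nn = np.argsort(d)[:k]
    vals, counts = np.unique(y[nn], return_counts=True)
    return vals[np.argmax(counts)]
```

On data where one feature separates the classes and another is high-variance noise, the weighted metric effectively ignores the noise dimension, which is the behavior the abstract describes for the modified neighborhoods.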
Cite this chapter
Peng, J., Heisterkamp, D., Dai, H. Adaptive Discriminant and Quasiconformal Kernel Nearest Neighbor Classification. In: Wang, L. (eds) Support Vector Machines: Theory and Applications. Studies in Fuzziness and Soft Computing, vol 177. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10984697_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-24388-5
Online ISBN: 978-3-540-32384-6