Pattern Analysis and Applications

Volume 14, Issue 2, pp 165–174

Classification through incremental max–min separability

  • Adil M. Bagirov
  • Julien Ugon
  • Dean Webb
  • B. Karasözen
Theoretical Advances


Abstract

Piecewise linear functions can be used to approximate nonlinear decision boundaries between pattern classes. Piecewise linear boundaries are known to yield efficient real-time classifiers, although training them is time-consuming: finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm that trains hyperplanes globally using an incremental approach. Such an approach allows one to find a near-global minimizer of the classification error function and to compute only as many hyperplanes as are needed to separate the sets. We apply this algorithm to supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and achieves consistently good test-set accuracy on most data sets compared with mainstream classifiers.
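The classifier described here rests on max–min separability: the piecewise linear boundary is defined by groups of hyperplanes, and a point's side of the boundary is determined by taking the minimum over each group and the maximum over the groups. The following minimal sketch shows only the decision function for the two-class case; the names (`maxmin_decision`, `hyperplanes`) and the toy hyperplane values are illustrative assumptions, and the incremental global training procedure that is the paper's contribution is not reproduced.

```python
import numpy as np

def maxmin_decision(x, hyperplanes):
    """Max-min decision value of point x over groups of hyperplanes.

    `hyperplanes` is a list of groups; each group is a list of (w, b)
    pairs defining a hyperplane w.x - b.  In the two-class setting,
    a positive value assigns x to one class, a negative value to the
    other.  (Illustrative structure, not the authors' code.)
    """
    group_minima = [
        min(np.dot(w, x) - b for (w, b) in group)
        for group in hyperplanes
    ]
    return max(group_minima)

# Toy example: one group of two hyperplanes forming a wedge-shaped
# region {x1 > 0.5 and x2 > 0.5} for the positive class.
planes = [[(np.array([1.0, 0.0]), 0.5),
           (np.array([0.0, 1.0]), 0.5)]]
print(maxmin_decision(np.array([1.0, 1.0]), planes))   # positive: inside
print(maxmin_decision(np.array([0.0, 0.0]), planes))   # negative: outside
```

Because evaluation is just a handful of dot products, minima, and a maximum, such boundaries are cheap to apply at classification time; the expensive part, as the abstract notes, is choosing the hyperplanes during training.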


Keywords: Classification · Data mining · Data analysis · Supervised learning · Piecewise linear classifier



Acknowledgements

Dr. Adil Bagirov is the recipient of an Australian Research Council Australian Research Fellowship (Project number: DP 0666061). Dr. Adil Bagirov and Prof. Bülent Karasözen are grateful to TUBITAK (Turkish Scientific and Technical Research Council) and the Australian Mathematical Sciences Institute, whose support of mutual visits initiated this work. We would also like to thank two anonymous referees for their useful suggestions, which improved the quality of the paper.



Copyright information

© Springer-Verlag London Limited 2010

Authors and Affiliations

  • Adil M. Bagirov (1)
  • Julien Ugon (1)
  • Dean Webb (1)
  • B. Karasözen (2)

  1. CIAO, School of Information Technology and Mathematical Sciences, University of Ballarat, Victoria, Australia
  2. Department of Mathematics and Institute of Applied Mathematics, Middle East Technical University, Ankara, Turkey
