
Alternating Relaxed Twin Bounded Support Vector Clustering


Maximum margin clustering (MMC) and its improved variants are built on the spirit of the support vector machine, which inevitably leads to prohibitive computational complexity when these learning models encounter an enormous number of patterns. To improve clustering efficiency, we propose alternating twin bounded support vector clustering, which decomposes the original large problem in MMC and its variants into two smaller-sized ones; solving an expensive semi-definite program is avoided by alternating the optimization between cluster-specific model parameters and instance-specific label assignments. The structural risk minimization principle is also implemented to obtain good generalization. Additionally, to avoid premature convergence, a relaxed version of our algorithm is proposed in which the hinge loss of the original twin bounded support vector machine is replaced with the Laplacian loss. Both versions extend readily to the nonlinear setting via the kernel trick. To investigate the efficacy of our clustering algorithm, experiments are conducted on a number of synthetic and real-world datasets. The results demonstrate that the proposed method outperforms existing clustering approaches in both clustering accuracy and time efficiency, and that it scales to large datasets.
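The alternating scheme described above can be illustrated with a minimal two-cluster sketch: alternate between fitting a linear separator to the current labels (the "model step") and re-assigning labels from the separator's decision values (the "label step"). This is only an illustration of the alternating-optimization idea, not the paper's algorithm: a ridge-regularized least-squares fit stands in for the twin bounded SVM subproblems, and a median split stands in for the class-balance constraint that keeps MMC away from the trivial one-cluster solution.

```python
import numpy as np

def alternating_margin_clustering(X, n_iter=20, reg=1.0, seed=0):
    """Toy alternating scheme for two-cluster margin-based clustering.

    Assumption: a closed-form ridge fit replaces the two smaller
    SVM-type subproblems; labels are re-assigned by splitting the
    decision values at their median to keep the clusters balanced.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xa = np.hstack([X, np.ones((n, 1))])        # append a bias column
    y = rng.choice([-1.0, 1.0], size=n)         # random initial labels
    for _ in range(n_iter):
        # Model step: ridge-regularized least squares toward the labels.
        w = np.linalg.solve(Xa.T @ Xa + reg * np.eye(d + 1), Xa.T @ y)
        f = Xa @ w
        # Label step: split at the median decision value (balance).
        prev = y
        y = np.where(f > np.median(f), 1.0, -1.0)
        if np.array_equal(y, prev):             # labels stable: converged
            break
    return y

# Usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 0.5, (50, 2)),
               rng.normal(3, 0.5, (50, 2))])
labels = alternating_margin_clustering(X)
```

The sketch shows why the alternation avoids any semi-definite programming: each model step is a small convex (here closed-form) problem, and each label step is a simple assignment, so the cost per iteration stays low even as the number of patterns grows.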






This work was supported by the National High Technology Research and Development Program of China (863 Program) (2011AA010706), the National Natural Science Foundation of China (61133016, 61772117), Ministry of Education-China Mobile Communications Corporation Research Funds (MCM20121041), the General Equipment Department Foundation (61403120102), and the Sichuan Hi-Tech industrialization program (2017GZ0308).

Author information



Corresponding author

Correspondence to Jiayan Fang.


Cite this article

Fang, J., Liu, Q. & Qin, Z. Alternating Relaxed Twin Bounded Support Vector Clustering. Wireless Pers Commun 102, 1129–1147 (2018).



  • Maximum margin clustering
  • Twin support vector machine
  • Alternating optimization
  • Hinge loss
  • Laplacian loss