We are delighted to present this special issue of the Machine Learning Journal with selected papers from the Sixth Asian Conference on Machine Learning (ACML 2014), held in Nha Trang City, Vietnam, from 26 to 28 November 2014. ACML aims to provide a leading international forum for researchers in machine learning and related fields to share their new ideas and achievements. While located in Asia, the conference enjoys wide visibility in the international community. ACML was the first machine learning conference to run two submission cycles with a strict double-blind review process, and this tradition continues. ACML 2014 received 80 submissions from 20 countries across Asia, Australasia, Europe and North America. Each paper was assigned two meta-reviewers and at least four reviewers. In the end, 25 papers were accepted into the main program, for an acceptance rate of 31.25% (Phung and Li 2014).

The authors of high-quality papers were invited to submit significantly extended versions to this special issue. The selection was made by the guest-editing team, consisting of the Program Chairs, General Chairs and the Steering Committee Chair, on the basis of the scientific quality and potential impact of the papers, as indicated by the conference reviews, and the quality of the presentations and posters. The extended papers then underwent a further round of peer review according to the journal's criteria. In the end, six papers were selected for this special issue.

The paper Online Passive–Aggressive Active Learning by Jing Lu, Peilin Zhao and Steven Hoi presents a new family of algorithms for online active learning, called Passive–Aggressive Active (PAA) learning algorithms. The key idea is to utilize not only misclassified instances but also correctly classified instances with low confidence, in the spirit of the Passive–Aggressive technique. The proposed PAA algorithms work well in several settings, including binary classification, multi-class classification and cost-sensitive classification, with strong theoretical justification and empirical support.
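To make the key idea concrete, the following is a minimal sketch of an online learner that queries labels with probability decreasing in the prediction confidence and applies a passive-aggressive update whenever the hinge loss is positive, i.e., on mistakes and on low-confidence correct predictions alike. The query rule, parameter names and PA-I step size here are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def paa_binary(stream, delta=1.0, C=1.0, seed=0):
    """Sketch of a passive-aggressive active learner for binary labels.

    Hedged illustration only: labels are queried with probability
    delta / (delta + |margin|), so low-confidence predictions are queried
    more often, and a PA-I-style update fires whenever the hinge loss is
    positive -- even when the prediction was correct.
    """
    rng = np.random.default_rng(seed)
    w = None
    mistakes = 0
    for x, y in stream:
        if w is None:
            w = np.zeros_like(x, dtype=float)
        f = w @ x                       # signed margin
        yhat = 1 if f >= 0 else -1
        if yhat != y:
            mistakes += 1
        # Bernoulli query: small |f| (low confidence) -> query more often
        if rng.random() < delta / (delta + abs(f)):
            loss = max(0.0, 1.0 - y * f)
            if loss > 0.0:              # update on mistakes AND low-confidence hits
                tau = min(C, loss / (x @ x))   # PA-I step size
                w = w + tau * y * x
    return w, mistakes
```

On a linearly separable stream, the learned weight vector quickly aligns with the separating direction while only a fraction of labels are queried.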

The paper Bibliographic Analysis on Research Publications using Authors, Categorical Labels and the Citation Network by Kar Wai Lim and Wray Buntine presents a new nonparametric bibliographic topic model that jointly combines authors, contents and the citation network in a single model. Supervision is further incorporated into the topic model to enhance the clustering task. Novel and efficient inference algorithms were developed and applied to a CiteSeerX dataset (made available online) consisting of 168K documents and approximately 62K authors, where improved performance was shown on both model fitting and clustering tasks in comparison with several existing baselines.

The paper Large Margin Classification with Indefinite Similarities by Ibrahim Alabdulmohsin, Moustapha Cisse, Xin Gao and Xiangliang Zhang demonstrates that the previously proposed 1-norm support vector machine (SVM) has advantages over more recently proposed methods for classification with indefinite similarities, i.e., with similarity functions that are not symmetric positive semidefinite. The authors provide theoretical and empirical evidence that the 1-norm SVM indeed fares better in terms of simplicity, interpretability and performance. They also give a theoretical analysis relating the 1-norm SVM to other well-established learning algorithms such as neural networks, the standard SVM and nearest neighbour.

In the paper Learning Undirected Graphical Models Using Persistent Sequential Monte Carlo by Hanchen Xiong, Sandor Szedmak and Justus Piater, the authors analyse the strengths and limitations of learning algorithms through the lens of sequential Monte Carlo (SMC), based on the analogy between Robbins–Monro's stochastic approximation procedure and SMC. They then propose a novel Persistent SMC approach to learning undirected graphical models and show that it explores the sampling space more effectively and remains robust when learning rates are high or model distributions are high-dimensional, conditions under which standard algorithms often deteriorate.
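For readers unfamiliar with the "persistent" idea, the sketch below shows the simpler, well-known persistent-chain scheme (persistent contrastive divergence) for a small fully visible Boltzmann (Ising) model: the sampler's state survives across parameter updates so it tracks the slowly moving model distribution. This is background intuition only; the paper replaces persistent Gibbs chains with a sequential Monte Carlo sampler, and nothing below is their algorithm.

```python
import numpy as np

def learn_ising_pcd(data, n_chains=50, epochs=200, lr=0.05, seed=0):
    """Persistent-chain learning of a fully visible Boltzmann (Ising) model.

    Model: p(x) ~ exp(0.5 * x^T W x + b^T x), x in {-1, +1}^d, zero-diagonal W.
    The negative-phase expectations are estimated with Gibbs chains that are
    NOT reset between updates -- the "persistent" idea.
    """
    rng = np.random.default_rng(seed)
    n, d = data.shape
    W = np.zeros((d, d))
    b = np.zeros(d)
    chains = rng.choice([-1.0, 1.0], size=(n_chains, d))  # persistent state
    data_corr = data.T @ data / n
    data_mean = data.mean(axis=0)
    for _ in range(epochs):
        # one Gibbs sweep per parameter update, resuming from the old state
        for j in range(d):
            field = chains @ W[:, j] + b[j]          # diag(W) is zero
            p = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(x_j = +1 | rest)
            chains[:, j] = np.where(rng.random(n_chains) < p, 1.0, -1.0)
        model_corr = chains.T @ chains / n_chains
        model_mean = chains.mean(axis=0)
        grad_W = data_corr - model_corr              # positive minus negative phase
        np.fill_diagonal(grad_W, 0.0)
        W += lr * grad_W
        b += lr * (data_mean - model_mean)
    return W, b
```

When two variables are strongly correlated in the data, the learned coupling between them becomes clearly positive, while couplings to independent variables stay near zero.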

The paper V-shape Interval Insensitive Loss for Ordinal Classification by Kostiantyn Antoniuk, Vojtech Franc and Vaclav Hlavac addresses the problem of learning ordinal classifiers from partially annotated examples, in which each training example is annotated with an interval of labels rather than a single label. The authors propose an interval-insensitive loss function for this learning task, give a theoretical justification of learning with the loss, propose a method for learning a classifier with a surrogate loss function, and demonstrate the effectiveness of the method on a real-world task.
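As an illustration of the kind of loss involved (our notation, not necessarily the paper's exact formulation): for a predicted label $y$ and an annotated interval $[l, r]$ with $l \le r$, an absolute-error-style V-shaped interval-insensitive loss is zero inside the interval and grows linearly outside it,
$$
\ell(y; l, r) = \max\bigl(0,\; l - y,\; y - r\bigr),
$$
which reduces to the ordinary absolute-error loss when the interval collapses to a single label $l = r$.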

The paper A Column-wise Update Algorithm for Nonnegative Matrix Factorization in Bregman Divergence with an Orthogonal Constraint by Keigo Kimura, Mineichi Kudo and Yuzuru Tanaka proposes a new column-wise update algorithm that speeds up training for Orthogonal Nonnegative Matrix Factorization by transforming the matrix-based orthogonality constraint into a set of column-wise orthogonality constraints. Extensive experiments demonstrate the strengths of the proposed approach.
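The column-wise update style can be illustrated with the classical HALS scheme for plain NMF under the Frobenius norm: each column is given a closed-form nonnegative update while the others are held fixed. This sketch shows only that update pattern; the paper's contribution, handling general Bregman divergences and the orthogonality constraint column-wise, is not reproduced here.

```python
import numpy as np

def hals_nmf(X, r, iters=100, seed=0):
    """Column-wise (HALS-style) NMF, X ~ W @ H, under the Frobenius norm.

    Illustrative sketch of the column-wise update pattern only, not the
    algorithm of the paper discussed above.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    eps = 1e-12  # guard against division by zero
    for _ in range(iters):
        for k in range(r):
            # residual with component k removed: X - sum_{j != k} w_j h_j
            R = X - W @ H + np.outer(W[:, k], H[k, :])
            # closed-form nonnegative least-squares update for one column/row
            W[:, k] = np.maximum(0.0, R @ H[k, :] / (H[k, :] @ H[k, :] + eps))
            H[k, :] = np.maximum(0.0, W[:, k] @ R / (W[:, k] @ W[:, k] + eps))
    return W, H
```

Because each column update is a cheap closed-form step, sweeping over the columns typically converges quickly on low-rank nonnegative data.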

This special issue would not have been possible without the contributions of many people. We wish to thank all the authors for their contributions to this special issue. We would also like to express our sincere gratitude to all the referees for their time and effort in ensuring the quality of the submissions. We also wish to thank Dragos Margineantu, editor for special issues at MLJ, for his guidance and support, as well as Melissa Fearon, Venkat Ganesan and Sudha Subramanian from the Springer team for their assistance throughout the organization and production of this special issue.