
ALDL: a novel method for label distribution learning

Published in: Sādhanā

Abstract

Data complexity has increased manifold in the age of data-driven societies: datasets have become huge and inherently complex. Single-label classification algorithms, discrete in their operation, are losing prominence because data are no longer monolithic; an instance may now belong to more than one class at once. This has created the need for learning methods that are multi-label in nature. Label distribution learning (LDL) is a recent way to view multi-label problems: it quantifies the degree to which each label describes an instance, so that every instance is associated with a label distribution. In this paper, we introduce a new learning method, angular label distribution learning (ALDL). It is based on the angular distribution function (ADF), which is derived from the length of the arc connecting two points on a circle. The proposed ALDL is evaluated in terms of mean-square error (MSE) against algorithm adaptation of k-NN (AA-kNN), the multilayer perceptron, the Levenberg–Marquardt neural network and the layer-recurrent neural network on LDL datasets. MSE is observed to decrease for the proposed ALDL, and the improvement over the standard LDL algorithms is highly statistically significant on the real-world datasets.
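As background for the evaluation described above: a label distribution assigns each label a description degree in [0, 1], with the degrees for a single instance summing to 1, and MSE compares a predicted distribution against the ground truth. A minimal sketch (the function and variable names are illustrative, not taken from the paper):

```python
def mse(predicted, actual):
    """Mean-square error between two label distributions of equal length."""
    assert len(predicted) == len(actual)
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)

# A label distribution: description degrees over three labels, summing to 1.
truth = [0.6, 0.3, 0.1]
prediction = [0.5, 0.4, 0.1]

print(mse(prediction, truth))  # ≈ 0.00667
```

Lower MSE means the predicted distribution tracks the ground-truth description degrees more closely, which is the sense in which ALDL is compared against AA-kNN and the neural-network baselines.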


[Figures 1–5 are available in the full article.]


Abbreviations

AA-kNN:

algorithm adaptation k-nearest neighbour

ADF:

angular distribution function

ALDL:

angular label distribution learning

ANN:

artificial neural network

LDL:

label distribution learning

LM:

Levenberg–Marquardt

LRN:

layer-recurrent network

MP:

multilayer perceptron

MSE:

mean-square error
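The ADF expanded above is described in the abstract only as being derived from the arc length between two points on a circle; the paper's exact form is not reproduced here. Purely as a hedged illustration of a cosine-shaped density over a bounded range, the raised-cosine distribution (the cosine approximation to the normal distribution of Raab and Green, 1961, which the article cites) can be sketched as:

```python
import math

def raised_cosine_pdf(x, mu=0.0, s=1.0):
    """Raised-cosine density on [mu - s, mu + s]: a classic cosine
    approximation to the normal distribution (Raab & Green, 1961).
    Illustrative only; not the paper's ADF."""
    if abs(x - mu) > s:
        return 0.0
    return (1.0 + math.cos(math.pi * (x - mu) / s)) / (2.0 * s)

# The density integrates to 1 over its support (numerical check).
n = 10_000
h = 2.0 / n
area = sum(raised_cosine_pdf(-1.0 + i * h) for i in range(n + 1)) * h
print(round(area, 4))  # ≈ 1.0
```

A bounded, unimodal density like this is convenient for label distributions, since description degrees live on a finite range and must integrate (or sum) to 1.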


Acknowledgements

We are thankful to the Media Lab Asia, Department of Electronics and Information Technology (DEITY), Ministry of Communications and Information Technology, Government of India, for providing us support for carrying out this work as a part of the sponsored project.

Author information

Correspondence to MAINAK BISWAS.


About this article


Cite this article

BISWAS, M., KUPPILI, V. & EDLA, D.R. ALDL: a novel method for label distribution learning. Sādhanā 44, 53 (2019). https://doi.org/10.1007/s12046-018-0996-6

