Sensing and Imaging (2019) 20:30

Structuring Description for Product Image Data with Multilabel

  • Yong Dai
  • Yi Li (corresponding author)
  • Li-Jun Liu
Original Paper

Abstract

Existing product data, such as description documents and product images, are significant references when designing a new proposal. Traditionally, however, manually collected data contain only product images without description documents, and obtaining a reasonable description of a product is challenging because human annotation is costly and labor-intensive. In this paper, a new approach is introduced to address this problem and improve the efficiency of description by exploiting the latent information in product images. We propose a robust framework based on multi-label learning that annotates each product image with several labels, yielding a brief description of the product from different aspects. An efficient method is proposed for product data acquisition, arrangement, and analysis in the image-collection step to construct the new data set. For the newly structured data, robust algorithms are then employed for data processing. Based on the processed data, automatic and semi-automatic multi-label annotation methods are applied to generate labels for the product images. We report prediction results for several state-of-the-art multi-label learning classifiers operating on features extracted from different convolutional neural network frameworks. The results are evaluated with standard measures to validate the quality of the labeling as well as the predictive performance with respect to the number of training examples. Experimental results indicate that the annotation quality is reasonable and that the proposed method achieves excellent prediction accuracy on the annotated data, generating satisfactory descriptions for the products.
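
As a concrete illustration of the pipeline described above, the following sketch shows one plausible instantiation rather than the authors' implementation: a pretrained CNN (ResNet-18, chosen here as an assumption) acts as a fixed feature extractor, a binary-relevance classifier (one-vs-rest logistic regression) predicts several labels per product image, and the predictions are scored with common multi-label measures. The random placeholder images and the five label columns stand in for an annotated product-image collection.

```python
# Minimal sketch of the described pipeline (assumptions, not the authors' code):
# pretrained CNN features + binary-relevance multi-label classification.
import numpy as np
import torch
from torchvision import models
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss, f1_score

rng = np.random.default_rng(0)

# Pretrained CNN with its classification head removed -> 512-d features.
# (Downloads ImageNet weights on first use; ResNet-18 is an arbitrary choice.)
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()
cnn.eval()

def extract_features(images: torch.Tensor) -> np.ndarray:
    """images: (n, 3, 224, 224) float tensor; returns an (n, 512) feature array."""
    with torch.no_grad():
        return cnn(images).numpy()

# Placeholder data: 60 "product images" and a binary indicator matrix with
# 5 hypothetical labels (e.g. aspects such as material, colour, shape, style).
images = torch.rand(60, 3, 224, 224)
Y = rng.integers(0, 2, size=(60, 5))

X = extract_features(images)
X_train, X_test = X[:40], X[40:]
Y_train, Y_test = Y[:40], Y[40:]

# Binary relevance: one independent binary classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train, Y_train)
Y_pred = clf.predict(X_test)

# Common multi-label evaluation measures.
print("Hamming loss:", hamming_loss(Y_test, Y_pred))
print("Micro F1    :", f1_score(Y_test, Y_pred, average="micro", zero_division=0))
```

In the setting of the paper, the placeholder images and label matrix would be replaced by the processed product-image data and the labels produced by the automatic and semi-automatic annotation step, and the logistic-regression base learner could be swapped for any of the multi-label classifiers compared in the experiments.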

Keywords

Design knowledge · Description documents · Product image data · Data preprocessing · Multi-label learning

Notes

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 61772186).

Supplementary material

Supplementary material 1: 11220_2019_249_MOESM1_ESM.cls (42 KB)
Supplementary material 2: 11220_2019_249_MOESM2_ESM.bst (33 KB)


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Electrical and Information Engineering, Hunan University, Changsha, China
  2. School of Design, Hunan University, Changsha, China
  3. Key Laboratory of Visual Perception and Artificial Intelligence of Hunan Province, Changsha, China
