Freshness uniformity measurement network based on multi-layer feature fusion and histogram layer

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

The arrangement of products on supermarket fresh-produce shelves follows a regular pattern and exhibits distinct texture characteristics. In recent years, many studies have applied texture extraction algorithms in deep learning, such as the Histogram Layer Residual Network (HistNet). However, this algorithm still has clear shortcomings: it neglects the optimal representation of multi-scale texture features and lacks feature selection during extraction. To address these issues, this paper introduces a novel texture classification network, the Multi-Scale Feature Histogram Network (MFHisNet). First, we design a Multi-Scale Feature Fusion Module (MF-Block) to obtain a multi-level representation of texture information. Then, we apply a Convolutional Block Attention Module (CBAM) to weight crucial information and suppress background interference in the deeper texture features. Experimental results demonstrate that the model achieves accuracies of 82.12±2.04%, 73.13±1.10%, and 83.46±0.62% on the GTOS-mobile, DTD, and MINC-2500 datasets, respectively. Building on the proposed model, we further present a method that uses cosine similarity to measure the uniformity of fresh-produce placement, and its effectiveness is verified on a dataset we collected.
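
Two parts of the approach described above lend themselves to short, hedged sketches. The first is the attention step: CBAM re-weights feature maps with channel attention followed by spatial attention. The PyTorch module below is a minimal sketch of that standard formulation; the reduction ratio, the 7×7 kernel, and where the module sits inside MFHisNet are assumptions rather than details taken from this paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM sketch: channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # shared MLP for the channel-attention branch
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # 7x7 convolution over stacked average/max maps for the spatial branch
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # channel attention: average- and max-pooled descriptors share the MLP
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention: channel-wise average and max maps, conv, sigmoid
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```

The second is the uniformity measurement. The abstract only states that cosine similarity is used, so the function below is a plausible sketch under the assumption that each shelf image is split into patches, each patch is embedded by the trained network, and the embeddings are compared pairwise; the name uniformity_score and the averaging scheme are hypothetical.

```python
import torch
import torch.nn.functional as F

def uniformity_score(patch_features: torch.Tensor) -> float:
    """Mean pairwise cosine similarity of shelf-patch embeddings.

    patch_features: (N, D) tensor, one row per shelf patch (e.g. the
    embedding a texture network produces for that patch). A value near 1
    suggests all patches share the same texture signature, i.e. a
    uniformly stocked shelf; lower values indicate mixed or empty areas.
    """
    z = F.normalize(patch_features, dim=1)       # unit-length rows
    sim = z @ z.t()                              # (N, N) cosine-similarity matrix
    n = sim.shape[0]
    off_diag = sim.sum() - sim.diagonal().sum()  # exclude self-similarity
    return (off_diag / (n * (n - 1))).item()

# Hypothetical usage with random 128-dimensional embeddings for 6 patches;
# in practice the rows would come from MFHisNet rather than torch.randn.
feats = torch.randn(6, 128)
print(f"uniformity = {uniformity_score(feats):.3f}")
```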

Data availability

The GTOS-mobile dataset can be accessed via https://drive.google.com/file/d/1Hd1G7aKhsPPMbNrk4zHNJAzoXvUzWJ9M/view. The DTD dataset can be accessed via https://www.robots.ox.ac.uk/vgg/data/dtd/, and the MINC-2500 dataset can be accessed via http://opensurfaces.cs.cornell.edu/static/minc/minc-2500.tar.gz.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 61772252, the Scientific Research Foundation of the Education Department of Liaoning Province under Grant LJKZ0965, and the Huzhou Science and Technology Plan Project under Grants 2022GZ08 and 2023ZD2004.

Author information

Contributions

YZ and YZ conceived the idea. YZ and CY realized the idea and wrote the main manuscript text. CF and CY prepared all figures and tables. YZ, ZX, and QL provided supervision. All authors reviewed the manuscript.

Corresponding author

Correspondence to Yong Zhang.

Ethics declarations

Conflict of interest

The authors declare that there are no financial or personal relationships with other people or organizations that could inappropriately influence this study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zang, Y., Yu, C., Fu, C. et al. Freshness uniformity measurement network based on multi-layer feature fusion and histogram layer. SIViP 18, 1525–1538 (2024). https://doi.org/10.1007/s11760-023-02837-z
