On the Exploitation of One Class Classification to Distinguish Food Vs Non-Food Images

  • Giovanni Maria Farinella
  • Dario Allegra
  • Filippo Stanco
  • Sebastiano Battiato
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9281)

Abstract

In recent years, automatic food image understanding has become an important research challenge for society, owing to the serious impact that food intake has on human life. Food recognition engines can help monitor a patient's diet and food intake habits. Nevertheless, distinguishing among different classes of food is not the first question for assisted dietary monitoring systems: before asking which class of food is depicted in an image, a computer vision system should be able to distinguish between food and non-food images. In this work we consider a one-class classification method to distinguish food vs non-food images. The UNICT-FD889 dataset is used for training, whereas two other datasets of food and non-food images downloaded from Flickr are used to test the method. Building on previous works, we use a Bag-of-Words representation and consider different feature spaces to build the codebook. To give the community the possibility to work on the considered problem, the datasets used in our experiments are made publicly available.
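The pipeline described above (a visual codebook, Bag-of-Words histograms, and a classifier trained on food images only) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: it uses SIFT-like local descriptors, a k-means codebook, and a one-class SVM from scikit-learn, with a hypothetical vocabulary size and nu parameter.

    # Illustrative sketch of a one-class food/non-food classifier over
    # Bag-of-Words histograms. Codebook size, nu and gamma are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import OneClassSVM

    def build_codebook(descriptors, n_words=500):
        """Cluster local descriptors (e.g. SIFT) into a visual vocabulary."""
        return KMeans(n_clusters=n_words, random_state=0).fit(descriptors)

    def bow_histogram(image_descriptors, codebook):
        """Quantise an image's descriptors into a normalised BoW histogram."""
        words = codebook.predict(image_descriptors)
        hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        return hist / max(hist.sum(), 1)

    def train_food_model(food_histograms, nu=0.1):
        """Fit a one-class SVM on histograms of food images only."""
        return OneClassSVM(nu=nu, gamma="scale").fit(food_histograms)

    def is_food(model, histogram):
        """Predict +1 (food) or -1 (non-food/outlier) for a test image."""
        return model.predict(histogram.reshape(1, -1))[0] == 1

In the one-class setting, only food images are used to fit the model; non-food images appear exclusively at test time, where they should fall outside the learned support.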

Keywords

Food understanding · One-class classification · Bag of words

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Giovanni Maria Farinella (1)
  • Dario Allegra (1)
  • Filippo Stanco (1)
  • Sebastiano Battiato (1)

  1. Image Processing Laboratory, Department of Mathematics and Computer Science, University of Catania, Catania, Italy