MANGO - Mobile Augmented Reality with Functional Eating Guidance and Food Awareness

  • Georg Waltner
  • Michael Schwarz
  • Stefan Ladstätter
  • Anna Weber
  • Patrick Luley
  • Horst Bischof
  • Meinrad Lindschinger
  • Irene Schmid
  • Lucas Paletta
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9281)


The prevention of cardiovascular diseases is becoming increasingly important as malnutrition accompanies today's fast-moving society. While most people know the importance of adequate nutrition, information on beneficial foods is often not at hand during daily activities. Individual dietary management is closely linked to food shopping decisions. Since grocery shopping often requires fast decision making in stressful and crowded situations, the user needs meaningful assistance with clear and rapidly available associations from food items to dietary recommendations. This paper presents first results of the Austrian project MANGO, which develops mobile assistance for instant, situated information access via Augmented Reality (AR) functionality to support the user during everyday grocery shopping. Following a modern dietary approach, the functional eating concept, the application advises the user which fruits and vegetables to buy according to an individual profile. This specific oxidative stress profile is created through a short in-app survey. Using a built-in image recognition system, the application automatically classifies food in the captured video using machine learning and computer vision methods, such as Random Forest classification over multiple color feature spaces. The user can optionally display additional nutrition information along with alternative proposals. We demonstrate that the application recognizes food classes in real time, under real-world shopping conditions, and associates dietary recommendations using situated AR assistance.
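As a rough illustration of the classification approach the abstract describes (a Random Forest trained on color feature spaces), the following is a minimal sketch, not the authors' implementation: it uses scikit-learn's `RandomForestClassifier` on simple per-channel color histograms, with synthetic stand-in "images" (reddish apples vs. yellowish bananas) in place of real shopping footage. All names, class labels, and color statistics here are hypothetical.

```python
# Illustrative sketch only: Random Forest over color-histogram features,
# loosely following the paper's Random Forests + color feature spaces idea.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def color_histogram(image, bins=8):
    """Concatenate normalized per-channel histograms into one feature vector."""
    feats = []
    for c in range(image.shape[2]):
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

# Synthetic stand-in data (hypothetical): "apple" images are reddish,
# "banana" images are yellowish; real data would come from the video stream.
rng = np.random.default_rng(0)
def fake_image(mean_rgb):
    return np.clip(rng.normal(mean_rgb, 30, size=(32, 32, 3)), 0, 255)

X = [color_histogram(fake_image(m))
     for m in [(200, 40, 40)] * 20 + [(220, 210, 60)] * 20]
y = ["apple"] * 20 + ["banana"] * 20

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([color_histogram(fake_image((200, 40, 40)))])
```

In a real mobile setting, the histogram step would run on frames from the camera feed, and the forest's predicted class would index into the functional-eating recommendation database.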


Mobile application · Video-based food recognition · Augmented reality · Nutrition recommender · Functional eating concept



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Georg Waltner (1)
  • Michael Schwarz (2)
  • Stefan Ladstätter (2)
  • Anna Weber (2)
  • Patrick Luley (2)
  • Horst Bischof (1)
  • Meinrad Lindschinger (3)
  • Irene Schmid (3)
  • Lucas Paletta (2)

  1. Graz University of Technology, Graz, Austria
  2. JOANNEUM RESEARCH Forschungsgesellschaft mbH, Graz, Austria
  3. Institute for Nutritional and Metabolic Diseases, Schwarzl Outpatient Clinic, Lassnitzhöhe, Austria
