
Things at Your Desk: A Portable Object Dataset

  • Saptakatha Adak
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1022)

Abstract

Object recognition remains an open problem in computer vision, particularly when the object of interest must be localized in unconstrained environments captured from different viewing angles. Progress over the last decade has been hampered by a lack of benchmark datasets: the Office dataset stands as the only notable attempt to advance pose-invariant detection and recognition of portable objects in unconstrained environments. This paper proposes a new, challenging object dataset with 30 categories, intended to improve object recognition for portable objects and, used in conjunction with the Office dataset, to support the study of cross-domain adaptation. Images of various hand-held objects are captured with the primary camera of a smartphone in unconstrained environments, under varied illumination conditions and from different viewing angles. Monte-Carlo object detection and recognition experiments are performed on the proposed dataset using existing state-of-the-art transfer learning techniques for cross-domain object recognition, and baseline accuracies of recently published domain adaptation methods are reported. A new object detection technique based on the activation maps of AlexNet is also proposed, along with a Generative Adversarial Network (GAN) based domain adaptation technique for object recognition.
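Since only the abstract is reproduced here, two brief, hedged sketches may help make its two proposed techniques concrete. The first illustrates the general idea of localizing an object from the activation maps of a pretrained AlexNet (a CAM-style heuristic); the paper's actual detection technique may differ, and activation_heatmap is a hypothetical helper, not the authors' code.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained AlexNet; only the convolutional trunk is needed.
    alexnet = models.alexnet(pretrained=True).eval()

    transform = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])

    def activation_heatmap(image_path):
        """Coarse saliency map from the last conv block (hypothetical helper)."""
        img = Image.open(image_path).convert("RGB")
        x = transform(img).unsqueeze(0)        # (1, 3, 224, 224)
        with torch.no_grad():
            fmap = alexnet.features(x)         # (1, 256, 6, 6)
        heat = fmap.mean(dim=1, keepdim=True)  # average over channels
        heat = F.interpolate(heat, size=(224, 224),
                             mode="bilinear", align_corners=False)
        heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
        return heat.squeeze()                  # (224, 224), values in [0, 1]

    # A rough bounding box can then be read off by thresholding the
    # heatmap, e.g. the min/max rows and columns where heat > 0.5.

The second sketch shows adversarial feature alignment in the spirit of ADDA (Tzeng et al., CVPR 2017), a family to which GAN-based domain adaptation for recognition belongs: a discriminator learns to separate source from target features while the target encoder learns to fool it. The feature dimensionality and discriminator architecture below are illustrative assumptions, not the paper's model.

    import torch
    import torch.nn as nn

    feat_dim = 256  # assumed feature dimensionality

    # Domain discriminator: one logit, source vs. target features.
    discriminator = nn.Sequential(
        nn.Linear(feat_dim, 128), nn.ReLU(),
        nn.Linear(128, 1))

    bce = nn.BCEWithLogitsLoss()

    def adaptation_losses(src_feats, tgt_feats):
        """Losses for the discriminator and the target encoder."""
        # Discriminator step: label source features 1, target features 0.
        src_logits = discriminator(src_feats.detach())
        tgt_logits = discriminator(tgt_feats.detach())
        d_loss = (bce(src_logits, torch.ones_like(src_logits)) +
                  bce(tgt_logits, torch.zeros_like(tgt_logits)))
        # Encoder step: update the target encoder so its features
        # are classified as "source" by the discriminator.
        fool_logits = discriminator(tgt_feats)
        g_loss = bce(fool_logits, torch.ones_like(fool_logits))
        return d_loss, g_loss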

Keywords

Domain adaptation · Generative adversarial network (GAN) · Object detection · Object recognition

Acknowledgements

We are grateful to the faculty and researchers of the Visualization and Perception Lab, IIT Madras, for their valuable insights into this research.

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. VP Lab, Department of Computer Science and Engineering, IIT Madras, Chennai, India
