Trademark Image Similarity Search

  • Girish Showkatramani
  • Sashi Nareddi
  • Chris Doninger
  • Greg Gabel
  • Arthi Krishna
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 850)

Abstract

A trademark may be a word, phrase, symbol, sound, color, scent, or design, or a combination of these, that identifies and distinguishes the products or services of a particular source from those of others. One of the crucial steps, both prior to filing a trademark application and during the review of such applications, is conducting a thorough trademark search to determine whether the proposed mark is likely to cause confusion with prior registered trademarks or pending trademark applications. Currently, trademark applicants or their representatives and examining attorneys manually search the United States Patent and Trademark Office (USPTO) database, which contains all of the active and inactive trademark registrations and applications. This search process relies on words and Trademark Design codes (hand-annotated labels of design features) to search for images, thereby limiting the overall search to primarily text-based queries. For marks having image characteristics, users visually inspect the image and other design characteristics and compare them with existing registered or pending trademarks to determine uniqueness. Overall, the process of exhaustively examining all the images categorized under a specific design code, while comprehensive, may take a substantial amount of time.
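The current design-code search described above amounts to filtering the corpus by hand-annotated labels. A minimal sketch of that workflow is shown below; the record fields and design-code values are illustrative assumptions, not the actual USPTO database schema:

```python
# Hypothetical records mimicking design-code annotations on trademark images.
# Field names and code values are assumptions for illustration only.
marks = [
    {"serial": "871", "words": "acme star", "design_codes": {"01.01.03"}},
    {"serial": "872", "words": "beta labs", "design_codes": {"26.01.21"}},
    {"serial": "873", "words": "star field", "design_codes": {"01.01.03", "01.01.13"}},
]

def search_by_design_code(records, code):
    """Return serial numbers of all marks annotated with the given design code."""
    return [r["serial"] for r in records if code in r["design_codes"]]

# The examiner must then visually compare every returned image by hand.
print(search_by_design_code(marks, "01.01.03"))  # both star-design marks
```

The key limitation is visible in the sketch: the filter returns every mark sharing the code, and visual comparison of the returned images remains entirely manual.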

Recently, Convolutional Neural Networks (CNNs) have revolutionized the field of computer vision and demonstrated excellent performance in image classification and feature extraction. In this study, we utilize a CNN to address the problem of searching for trademarks similar to a chosen mark based on its image characteristics. A corpus of trademark images is pre-processed and then passed through a trained neural network to extract image features. We then use these features to perform image search using the approximate nearest neighbor (ANN) variant of the nearest neighbor search (NNS) algorithm, as depicted in Fig. 2. NNS is a form of proximity search that aims to find the closest (or most similar) data points from a collection.
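The retrieval step can be sketched as ranking pre-extracted feature vectors by similarity to a query vector. The sketch below uses exact cosine-similarity search in place of the ANN index the paper describes, and the feature vectors are hypothetical stand-ins for CNN activations:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_neighbors(query, corpus, k=3):
    """Rank corpus items by similarity to the query vector; return the top k."""
    scored = sorted(corpus.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Hypothetical 3-d feature vectors, as if produced by a trained CNN.
corpus = {
    "mark_a": [0.9, 0.1, 0.0],
    "mark_b": [0.1, 0.9, 0.2],
    "mark_c": [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]
print(nearest_neighbors(query, corpus, k=2))  # → ['mark_a', 'mark_c']
```

An ANN index trades this exact linear scan for an approximate but sub-linear lookup, which is what makes search over a large trademark corpus practical.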

This system thereby seeks to provide an efficient image-based search alternative to the current combination of keyword and design-code category searching.

Keywords

Approximate nearest neighbors · CBIR · Deep learning · LIRE · Solr · Trademark image search

Copyright information

© 2018. This is a U.S. government work and its text is not subject to copyright protection in the United States; however, its text may be subject to foreign copyright protection.

Authors and Affiliations

  • Girish Showkatramani (1)
  • Sashi Nareddi (1)
  • Chris Doninger (1)
  • Greg Gabel (1)
  • Arthi Krishna (1)

  1. United States Patent and Trademark Office, Alexandria, USA