Everything Gets Better All the Time, Apart from the Amount of Data
The paper first surveys the main issues in current content-based image retrieval and concludes that the largest sources of innovation are the large size of the datasets, the ability to segment an image softly, the interactive specification of the user's wish, the sharpness and invariance of features, and the machine learning of concepts. Of these, everything gets better every year except the need for annotation, which grows worse with every increase in dataset size. We therefore direct our attention to two questions: what fraction of images needs to be labeled to obtain a result nearly identical to the case where all images are labeled by annotation? And how can we design an interactive annotation scheme that puts up for annotation those images which are most informative for defining the concept (boundaries)? We have developed a random followed by sequential annotation scheme which requires annotating only 1%, equal to 25 items, of a dataset of 2500 faces and non-faces to yield an almost identical boundary of the face concept compared to the situation where all images are labeled. For this dataset, the approach reduces the annotation effort by 99%.
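The random-then-sequential scheme described above can be illustrated with a generic active-learning sketch: label a small random seed set, then repeatedly train a classifier and query the label of the unlabeled item closest to the current decision boundary (uncertainty sampling). This is an illustrative stand-in, not the paper's exact method; the synthetic two-class data, the logistic-regression classifier, and the 10-seed/15-query budget are all assumptions chosen to mirror the 1%-of-2500 setting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for "faces" vs "non-faces": two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-2.0, 1.0, (500, 2)),
               rng.normal(+2.0, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

# Reference boundary: the classifier trained on ALL labels.
full = LogisticRegression().fit(X, y)

# Phase 1 (random): annotate a small random seed set.
labeled = list(rng.choice(len(X), 10, replace=False))

# Phase 2 (sequential): repeatedly query the most informative item,
# i.e. the unlabeled point whose predicted probability is closest to 0.5.
for _ in range(15):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    margin = np.abs(clf.predict_proba(X)[:, 1] - 0.5)
    margin[labeled] = np.inf          # never re-query an annotated item
    labeled.append(int(np.argmin(margin)))

# Boundary learned from 25 annotations (1% of 2500) vs. the full boundary.
active = LogisticRegression().fit(X[labeled], y[labeled])
agreement = float((active.predict(X) == full.predict(X)).mean())
print(len(labeled), round(agreement, 3))
```

On this easy synthetic problem the 25-label boundary agrees with the fully labeled one on nearly every point, which is the effect the abstract reports for its face dataset.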