Abstract
Convolutional Neural Networks (CNNs) have recently shown unprecedented success in computer vision, especially on challenging image classification tasks, by relying on a universal approach: training a deep model on a massive dataset of labeled examples. While unlabeled data are often abundant, collecting a large set of labeled data is very expensive and typically requires considerable human effort. One way to ease this burden is to select and label only highly informative instances from a pool of unlabeled data (i.e., active learning). This paper proposes a new batch-mode active learning method, Dual Active Sampling (DAS), based on a simple assumption: if two deep neural networks (DNNs) of the same structure, trained on the same dataset, give significantly different outputs for a given sample, then that sample should be selected for additional training. While other state-of-the-art methods in this field usually require intensive computation or rely on complicated structures, DAS is simpler to implement and achieves improved results on CIFAR-10 with favorable computational time compared to the core-set method.
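The selection rule described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the disagreement measure (symmetric KL divergence between the two networks' softmax outputs) and the function name are assumptions, since the abstract does not specify how "significantly different output" is quantified.

```python
import numpy as np

def dual_active_sampling(probs_a, probs_b, batch_size):
    """Pick the unlabeled samples on which two identically structured
    networks disagree the most (illustrative sketch of the DAS idea).

    probs_a, probs_b: (N, C) softmax outputs of the two networks
    batch_size: number of samples to select for labeling
    """
    eps = 1e-12  # numerical guard against log(0)
    # Symmetric KL divergence between the two predictive distributions,
    # one of several reasonable disagreement measures.
    kl_ab = np.sum(probs_a * np.log((probs_a + eps) / (probs_b + eps)), axis=1)
    kl_ba = np.sum(probs_b * np.log((probs_b + eps) / (probs_a + eps)), axis=1)
    disagreement = kl_ab + kl_ba
    # Indices of the batch_size most-disagreed-upon samples.
    return np.argsort(disagreement)[::-1][:batch_size]

# Toy example: 5 unlabeled samples, 3 classes, random predictive distributions.
rng = np.random.default_rng(0)
a = rng.dirichlet(np.ones(3), size=5)
b = rng.dirichlet(np.ones(3), size=5)
picked = dual_active_sampling(a, b, batch_size=2)
print(picked)  # indices of the 2 samples with the largest disagreement
```

In a full batch-incremental loop, the selected indices would be sent for human labeling, both networks retrained on the enlarged labeled set, and the process repeated until the labeling budget is exhausted.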
References
Ducoffe, M., Precioso, F.: Adversarial active learning for deep networks: a margin based approach (2016)
Gal, Y., Ghahramani, Z.: Bayesian convolutional neural networks with Bernoulli approximate variational inference. In: NIPS (2015)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: International Conference on Learning Representations (2015)
Krizhevsky, A., Nair, V., Hinton, G.: CIFAR-10 (Canadian Institute for Advanced Research) (2013)
Ravi, S., Larochelle, H.: Meta-learning for batch mode active learning (2018)
Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. (IJCV) 115(3), 211–252 (2015)
Sener, O., Savarese, S.: Active learning for convolutional neural networks: a core-set approach. In: ICLR 2018 (2018)
Settles, B.: Active Learning. Morgan and Claypool Publishers, San Rafael (2012)
Takahashi, R., Matsubara, T., Uehara, K.: Data augmentation using random image cropping and patching for deep CNNs (2018)
Wang, K., Zhang, D., Li, Y., Zhang, R., Lin, L.: Cost-effective active learning for deep image classification. IEEE Trans. Circuits Syst. Video Technol. 27, 2591–2600 (2017)
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Phan, J., Ruocco, M., Scibilia, F. (2019). Dual Active Sampling on Batch-Incremental Active Learning. In: Bach, K., Ruocco, M. (eds) Nordic Artificial Intelligence Research and Development. NAIS 2019. Communications in Computer and Information Science, vol 1056. Springer, Cham. https://doi.org/10.1007/978-3-030-35664-4_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-35663-7
Online ISBN: 978-3-030-35664-4