Abstract
Knowledge distillation is a popular method in which a large trained network (the teacher) is used to train a smaller network (the student). To reduce the need to train a much larger teacher network, one-student self-knowledge distillation was introduced as a solid technique for compressing neural networks, especially for real-time applications. However, most existing methods consider only one type of knowledge and apply a one-student one-teacher learning strategy. This paper presents a collaborative multiple-student single-teacher system (CMSST). The proposed approach targets real-time applications that contain temporal information, which plays an important role in understanding. We design a backbone old-student network with the target complexity for deployment. During training, the old student provides high-quality soft labels to guide the hierarchical new student, and it also gives the new student the opportunity to make meaningful improvements based on the students' revised feedback via the shared intermediate representations. Moreover, we introduce a soft target label smoothing technique into the CMSST. Experimental results show that accuracy improves by 1.5% over the newly developed teacher knowledge distillation on UCF-101, and by 1.15% compared to conventional large-teacher knowledge distillation on the CIFAR-100 dataset.
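To make the soft-label guidance and label-smoothing components described above concrete, the following is a minimal PyTorch sketch of a standard soft-target distillation loss combined with label smoothing. The function name, hyperparameter values, and loss weighting are illustrative assumptions and do not reproduce the exact CMSST objective of the paper; here the "teacher" logits would come from the old student.

import torch
import torch.nn.functional as F

def soft_target_distillation_loss(student_logits, teacher_logits, labels,
                                  temperature=4.0, alpha=0.5, smoothing=0.1):
    # Softened "teacher" (here: the old student) and new-student distributions,
    # as in standard Hinton-style knowledge distillation.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)

    # Cross-entropy on label-smoothed hard targets (smoothing value is illustrative).
    ce_term = F.cross_entropy(student_logits, labels, label_smoothing=smoothing)

    # Illustrative weighting between the two terms; not the paper's exact objective.
    return alpha * kd_term + (1.0 - alpha) * ce_term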
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zain, A., Jian, Y., Zhou, J. (2022). Collaborative Multiple-Student Single-Teacher for Online Learning. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13529. Springer, Cham. https://doi.org/10.1007/978-3-031-15919-0_43
DOI: https://doi.org/10.1007/978-3-031-15919-0_43
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-15918-3
Online ISBN: 978-3-031-15919-0
eBook Packages: Computer Science (R0)