Abstract
Knowledge distillation is a well-known model compression method, yet it remains largely under-used. It can reduce the size of a model by transferring knowledge from a large (pre-trained or not) teacher model into a smaller student with minimal loss of accuracy, making the result a better fit for edge devices and other hardware with limited computational power. Moreover, because it requires no additional data to be effective, it can also serve as a performance enhancer for the student model, as our experiments show. This paper argues that knowledge distillation, or one of its variations, should be applied routinely when training deep learning models. In our experiments we trained and tested models on well-known datasets (MNIST, CIFAR-10 and CIFAR-100) and show that knowledge distillation and two of its variations yield positive results even when applied to over-fitted or under-fitted teacher models.
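To make the teacher-to-student transfer concrete, below is a minimal sketch of the standard Hinton-style distillation loss in PyTorch. It is illustrative only, not the paper's exact setup: the temperature T, mixing weight alpha, and the function name are assumptions chosen for the example.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style knowledge distillation loss (illustrative sketch).

    Mixes a soft-target term (KL divergence between temperature-scaled
    teacher and student distributions) with the usual hard-label
    cross-entropy. T and alpha are hyperparameters; values are examples.
    """
    # Soft targets: temperature-scaled softmax of the teacher's logits.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T ** 2)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

The student is then trained on this combined loss while the teacher's weights stay frozen; a higher temperature softens the teacher's output distribution so the student can learn from the relative probabilities of the wrong classes as well as the correct one.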