Abstract
Performance predictors are widely used to mitigate the substantial resource consumption of neural architecture search (NAS). However, existing predictors are typically trained on randomly sampled architectures. Random sampling not only wastes computation budget on evaluating many similar architectures, but also degrades predictive performance because the sampled set spans the search space poorly. In this paper, we propose a contrastive learning-based sampling method to address these issues. Specifically, we first encode architectures as directed acyclic graphs, from which a large number of augmented architectures are generated to learn augmentation-invariant knowledge of architectures. We then maximize the agreement between augmented views of the same architecture, so that similar architectures are mapped to similar representations. Finally, representative architectures are selected by clustering these representations, improving the coverage of the search space. We conduct extensive experiments on NAS-Bench-101 and NAS-Bench-201. The results show that the proposed method improves the predictive ability of performance predictors compared with random sampling-based ones, and helps find superior architectures when integrated with NAS. An ablation study further confirms the effectiveness of both the contrastive learning and the clustering components of the proposed sampling method.
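The pipeline sketched in the abstract can be made concrete with a small, self-contained example. This is a hedged approximation rather than the authors' implementation: the paper learns a graph encoder under a contrastive objective, whereas `embed` below is a fixed placeholder for such an encoder, and the cell shape (7 nodes, 5 operation types, roughly NAS-Bench-101-like), the edge-dropping augmentation, and the k-means clustering step are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
NUM_NODES = 7   # assumed cell size (NAS-Bench-101 cells have up to 7 nodes)
NUM_OPS = 5     # assumed number of candidate operations per node

def random_arch():
    """A toy architecture: strict upper-triangular adjacency matrix (a DAG)
    plus an operation label per node."""
    adj = np.triu(rng.integers(0, 2, (NUM_NODES, NUM_NODES)), k=1)
    ops = rng.integers(0, NUM_OPS, NUM_NODES)
    return adj, ops

def augment(arch, drop_prob=0.1):
    """One augmented view of an architecture: randomly drop DAG edges."""
    adj, ops = arch
    mask = rng.random(adj.shape) > drop_prob
    return adj * mask, ops.copy()

def embed(arch):
    """Stand-in for the learned graph encoder: flatten the adjacency matrix
    and one-hot operation labels, then L2-normalize."""
    adj, ops = arch
    vec = np.concatenate([adj.ravel(), np.eye(NUM_OPS)[ops].ravel()]).astype(float)
    return vec / (np.linalg.norm(vec) + 1e-12)

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style NT-Xent loss: two augmented views of the same
    architecture form the positive pair; all other views are negatives."""
    z = np.concatenate([z1, z2], axis=0)          # (2N, d), unit-norm rows
    sim = z @ z.T / tau                           # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                # never contrast with self
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

def select_representatives(embeddings, k):
    """Cluster embeddings with k-means and keep, per cluster, the
    architecture closest to the centroid, so the chosen training set
    spans the embedded search space more evenly than random sampling."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    reps = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[dists.argmin()])
    return reps

# Usage: score paired augmented views with the contrastive loss, then pick
# a budget of k representative architectures to train the predictor on.
archs = [random_arch() for _ in range(256)]
v1 = np.stack([embed(augment(a)) for a in archs])
v2 = np.stack([embed(augment(a)) for a in archs])
print("NT-Xent loss:", nt_xent(v1, v2))
reps = select_representatives(np.stack([embed(a) for a in archs]), k=16)
print("selected architecture indices:", reps)
```

In the paper, the encoder is trained by minimizing this agreement-maximizing objective before the clustering step; the fixed embedding here merely keeps the sketch runnable and self-contained.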
Cite this paper
Xie, J., Feng, Y., Sun, Y.: A sampling method for performance predictor based on contrastive learning. In: Liu, T., Webb, G., Yue, L., Wang, D. (eds.) AI 2023: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 14471. Springer, Singapore (2024). https://doi.org/10.1007/978-981-99-8388-9_18