Bash comment generation via data augmentation and semantic-aware CodeBERT

Automated Software Engineering

Abstract

Understanding Bash code is challenging for developers due to its flexible syntax and unique features. Moreover, comment generation for Bash lacks sufficient training data compared with popular programming languages, and collecting more real Bash code with corresponding comments is time-consuming and labor-intensive. In this study, we propose a two-module method named Bash2Com for Bash code comment generation. The first module, NP-GD, is a gradient-based automatic data augmentation component that enhances normalization stability when generating adversarial examples. The second module, MASA, leverages CodeBERT to learn the rich semantics of Bash code. Specifically, MASA treats the representations learned at each layer of CodeBERT as a set of semantic information that captures recursive relationships within the code. To generate comments for different Bash snippets, MASA employs LSTM and attention mechanisms to dynamically concentrate on the relevant representational information. Then, a Transformer decoder with beam search generates the code comments. To evaluate the effectiveness of Bash2Com, we use a corpus of 10,592 Bash snippets and their corresponding comments. Our experimental results show that Bash2Com outperforms all state-of-the-art baselines by at least 10.19%, 11.81%, 2.61%, and 6.13% in terms of BLEU-3, BLEU-4, METEOR, and ROUGE-L, respectively. Moreover, the rationality of NP-GD and MASA in Bash2Com is verified by ablation studies. Finally, we conduct a human evaluation to illustrate the effectiveness of Bash2Com from practitioners' perspectives.
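
To make the augmentation idea concrete, the sketch below shows one way a gradient-based perturbation with gradient normalization can be applied to token embeddings. This is an illustrative PyTorch approximation, not the authors' NP-GD implementation: the function name, hyperparameter values, and the choice of L2 normalization with an L-infinity projection are all our assumptions.

```python
import torch

def gradient_perturb(embeddings, loss_fn, steps=3, alpha=1e-3, epsilon=1e-2):
    """Illustrative gradient-based augmentation on token embeddings.

    `loss_fn` maps perturbed embeddings to a scalar training loss; the
    perturbation ascends that loss. All hyperparameters are placeholders,
    not values from the paper.
    """
    delta = torch.zeros_like(embeddings, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(embeddings + delta)
        (grad,) = torch.autograd.grad(loss, delta)
        # Normalize the gradient so the update size stays stable regardless
        # of the raw gradient magnitude (the "normalization stability" idea).
        grad = grad / (grad.norm(p=2, dim=-1, keepdim=True) + 1e-12)
        with torch.no_grad():
            delta += alpha * grad
            delta.clamp_(-epsilon, epsilon)  # keep the perturbation small
    # The perturbed embeddings serve as an additional training example.
    return (embeddings + delta).detach()
```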

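Similarly, the following sketch illustrates the layer-aggregation idea behind MASA: collect the hidden states of every CodeBERT layer, treat them as an ordered sequence, and let an LSTM plus attention decide how much each layer contributes. The token pooling, module layout, and dimensions here are our simplifications under stated assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class LayerAggregator(nn.Module):
    """Sketch of MASA-style aggregation over CodeBERT layers (layout assumed)."""

    def __init__(self, model_name="microsoft/codebert-base", hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids, attention_mask=attention_mask,
                           output_hidden_states=True)
        # out.hidden_states: (num_layers + 1) tensors of shape [B, T, H].
        # Mean-pool tokens so each layer yields one vector: [B, L, H].
        layers = torch.stack(out.hidden_states, dim=1).mean(dim=2)
        # Run an LSTM over the layer sequence, then attention-pool its
        # states so the model can emphasize the most relevant layers.
        states, _ = self.lstm(layers)                       # [B, L, H]
        weights = torch.softmax(self.score(states), dim=1)  # [B, L, 1]
        return (weights * states).sum(dim=1)                # [B, H]


# Usage sketch:
# tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
# batch = tok(["ls -la | grep foo"], return_tensors="pt")
# vec = LayerAggregator()(batch["input_ids"], batch["attention_mask"])
```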

Notes

  1. https://stackoverflow.com/questions/18096670/what-does-z-mean-in-bash

  2. https://github.com/syhstudy/Bash2Com

  3. https://eval.ai/web/challenges/challenge-page/674/leaderboard/1831

  4. https://github.com/Maluuba/nlg-eval

  5. https://huggingface.co/microsoft/unixcoder-base

  6. https://huggingface.co/razent/cotext-1-cc

  7. https://huggingface.co/uclanlp/plbart-base

  8. https://huggingface.co/Salesforce/codet5-base

  9. https://github.com/huggingface/transformers

  10. https://huggingface.co/microsoft/codebert-base

  11. https://chat.openai.com/

  12. https://github.com/syhstudy/Bash2Com/blob/master/limitation_data.csv

  13. https://github.com/NTDXYG/BASHEXPLAINER

  14. https://github.com/syhstudy/Bash2Com/blob/master/README_add.md


Author information

Corresponding authors

Correspondence to Xiaolin Ju or Xiang Chen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shen, Y., Ju, X., Chen, X. et al. Bash comment generation via data augmentation and semantic-aware CodeBERT. Autom Softw Eng 31, 30 (2024). https://doi.org/10.1007/s10515-024-00431-2
