Abstract
Accurate and precise identification of adeno-associated virus (AAV) vectors plays an important role in dose-dependent gene therapy. Although solid-state nanopore techniques can potentially characterize AAV vectors by capturing ionic current, existing data analysis techniques fall short of identifying the vectors from their ionic current profiles. Recently introduced machine learning methods such as deep convolutional neural networks (CNNs), developed for image classification tasks, can be applied to this classification problem. However, with the small dataset available for the problem at hand, it is not possible to train a deep neural network from scratch for accurate classification of AAV vectors. To circumvent this, we applied a pre-trained deep CNN (GoogleNet) to capture basic features from ionic current signals and subsequently used fine-tuning-based transfer learning to classify AAV vectors. The proposed method is generic in that it requires minimal preprocessing and no handcrafted features. Our results indicate that fine-tuning-based transfer learning can achieve an average classification accuracy between 90 and 99% over three realizations with a very small standard deviation. The results also indicate that the classification accuracy depends on the electric field applied across the nanopore and on the time frame used for data segmentation. We also found that fine-tuning of the deep network outperforms feature-extraction-based classification for the resistive pulse dataset. To expand the usefulness of fine-tuning-based transfer learning, we tested two other pre-trained deep networks (ResNet50 and InceptionV3) for the classification of AAVs. Overall, fine-tuning-based transfer learning from pre-trained deep networks is very effective for classification, although deep networks such as ResNet50 and InceptionV3 require significantly longer training times than GoogleNet.
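To make the workflow concrete, the sketch below illustrates fine-tuning-based transfer learning with an ImageNet-pre-trained GoogLeNet in PyTorch. It is not the authors' implementation: the framework, folder layout (aav_images/<class_name>/), number of classes, and hyperparameters are illustrative assumptions, and the snippet assumes the ionic current segments have already been rendered as 3-channel images and that torchvision >= 0.13 is available.

# Minimal sketch (not the authors' code) of fine-tuning-based transfer learning
# with a pre-trained GoogLeNet. Assumes segmented ionic current signals have been
# converted to images stored as aav_images/<class_name>/*.png (hypothetical layout).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 3          # hypothetical number of AAV vector classes
DATA_DIR = "aav_images"  # hypothetical folder of per-class image subfolders

# Resize to the 224x224 input GoogLeNet expects and normalize with ImageNet
# statistics, matching the pre-training conditions.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder(DATA_DIR, transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load ImageNet-pre-trained GoogLeNet, drop its auxiliary classifiers (needed only
# for ImageNet training), and replace the final fully connected layer so the
# network outputs NUM_CLASSES scores instead of 1000.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.aux_logits = False
model.aux1 = None
model.aux2 = None
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Fine-tuning: every layer stays trainable, but a small learning rate keeps the
# pre-trained features from being overwritten too aggressively.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

By contrast, the feature-extraction-based approach that the abstract compares against would freeze all pre-trained layers (e.g., via requires_grad_(False)) and train only the newly added classifier layer.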
Availability of Data and Material
All data associated with this paper are publicly available at https://github.com/mstfwsulab/AAV-classification.
Code Availability
All codes are available upon request.
Acknowledgement
The research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number 1R21GM134544. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Author information
Contributions
Aminul Islam Khan: Methodology, Investigation, Data Processing, Writing and Analysis. Min Jun Kim: Writing and Funding Acquisition. Prashanta Dutta: Methodology, Writing and Analysis, Supervision, and Funding Acquisition.
Ethics declarations
Ethics Approval
Not applicable.
Consent to Participate
Yes.
Consent for Publication
Yes.
Conflicts of Interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
About this article
Cite this article
Khan, A.I., Kim, M.J. & Dutta, P. Fine-tuning-based Transfer Learning for Characterization of Adeno-Associated Virus. J Sign Process Syst 94, 1515–1529 (2022). https://doi.org/10.1007/s11265-022-01758-3