A novel approach for personalized response model: deep learning with individual dropout feature ranking

Abstract

Deep learning is the fastest growing field in artificial intelligence and has led to many transformative innovations across domains. However, a lack of interpretability sometimes hinders its application in hypothesis-driven fields such as biology and healthcare. In this paper, we propose a novel deep learning model with individual-level feature ranking. Several simulated datasets, constructed so that contributing features are correlated with and buried among non-contributing features, were used to characterize the new approach; a publicly available clinical dataset was also analyzed. The performance of the individual-level dropout feature ranking model was compared with a commonly used artificial neural network model, a random forest model, and a population-level dropout feature ranking model. The individual-level dropout feature ranking model provides a reasonable prediction of the outcomes. Unlike the random forest model and the population-level dropout feature ranking model, which can identify only globally contributing features (i.e., at the population level), the individual-level dropout feature ranking model further identifies the features that impact response at the individual level. It therefore provides a basis for clustering patients into subgroups, which may offer a new tool for enriching patient populations in clinical drug development and for developing personalized or individualized medicine.
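To make the per-individual idea concrete, the following is a minimal NumPy sketch of a simplified, perturbation-based analogue of dropout feature ranking: for each individual, each candidate feature is "dropped" (masked to zero) in turn and the resulting shift in the predicted response is used as that individual's importance score for the feature. The response model, feature count, and masking scheme below are all illustrative assumptions, not the authors' variational implementation, which instead learns per-feature dropout probabilities during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response model with an interaction: feature 0 drives the
# response only when feature 2 is positive; feature 1 drives it otherwise.
def predict(x):
    z = np.where(x[..., 2] > 0, 2.0 * x[..., 0], 2.0 * x[..., 1])
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid response

X = rng.normal(size=(200, 5))  # 200 simulated individuals, 5 candidate features

def individual_importance(x):
    """Per-individual feature importance: the absolute shift in the
    predicted response when one feature is masked to zero (a crude
    analogue of learning a per-individual dropout rate)."""
    base = predict(x)
    scores = np.empty(x.shape[0])
    for j in range(x.shape[0]):
        x_drop = x.copy()
        x_drop[j] = 0.0
        scores[j] = abs(predict(x_drop) - base)
    return scores

# Each individual's top-ranked feature defines a candidate subgroup.
ranks = np.array([np.argmax(individual_importance(x)) for x in X])
for j in np.unique(ranks):
    print(f"feature {j}: top-ranked for {np.sum(ranks == j)} individuals")
```

Because the simulated response is driven by different features for different individuals, the top-ranked feature varies across the population, whereas a population-level ranking would average these subgroups away. In the paper's actual method, the masking decision is learned via a variational dropout formulation rather than computed by brute-force perturbation.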

Author information

Corresponding author

Correspondence to Hao Zhu.

Ethics declarations

Conflict of interest

All other authors declared no competing interests for this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Disclaimer: The views expressed in this paper are those of the authors and do not necessarily represent the views of the FDA.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 17 kb)

Supplementary file2 (DOCX 20 kb)

About this article

Cite this article

Huang, R., Liu, Q., Feng, G. et al. A novel approach for personalized response model: deep learning with individual dropout feature ranking. J Pharmacokinet Pharmacodyn 48, 165–179 (2021). https://doi.org/10.1007/s10928-020-09724-x

Keywords

  • Variational lower bound
  • Individual dropout feature ranking
  • Deep learning
  • Machine learning
  • Artificial intelligence