
Batch and online variational learning of hierarchical Dirichlet process mixtures of multivariate Beta distributions in medical applications

  • Theoretical advances
  • Published in Pattern Analysis and Applications

Abstract

Thanks to significant developments in the healthcare industry, various types of medical data are being generated. Analysing such valuable resources aids healthcare experts in understanding illnesses more precisely and in providing better clinical services. Machine learning, as one of the most capable tools in this setting, can assist healthcare experts in reaching expressive interpretations and making proper decisions. As annotating medical data is a costly and sensitive task that can be performed only by healthcare professionals, label-free methods are particularly promising. Interpretability and evidence-based decision making are further concerns in medicine. These needs motivated us to propose a novel clustering method based on hierarchical Dirichlet process mixtures of multivariate Beta distributions. To learn it, we applied batch and online variational methods that find the proper number of clusters and estimate the model parameters simultaneously. The effectiveness of the proposed models is evaluated on three real medical applications, namely oropharyngeal carcinoma diagnosis, osteosarcoma analysis, and white blood cell counting.


References

  1. Amirkhani M, Manouchehri N, Bouguila N (2020) Fully Bayesian learning of multivariate beta mixture models. In: 2020 IEEE 21st international conference on Information Reuse and Integration for Data Science (IRI). IEEE, pp 120–127

  2. Anuradha S, Satyanarayana C (2017) Medical image segmentation based on beta mixture distribution for effective identification of lesions. In: Recent developments in intelligent computing, communication and devices. Springer, pp 133–140

  3. Bellot A, Schaar MVD (2020) Flexible modelling of longitudinal medical data: a Bayesian nonparametric approach. ACM Trans Comput Healthc 1(1):1–15


  4. Bishop CM (2006) Pattern recognition and machine learning. Springer, Berlin


  5. Blei DM, Kucukelbir A, McAuliffe JD (2017) Variational inference: a review for statisticians. J Am Stat Assoc 112(518):859–877


  6. Bouguila N, Ziou D (2006) Unsupervised selection of a finite Dirichlet mixture model: an MML-based approach. IEEE Trans Knowl Data Eng 18(8):993–1009


  7. Cancer Imaging Archive (2021) Osteosarcoma dataset. https://wiki.cancerimagingarchive.net/

  8. Chefira R, Rakrak S (2021) A knowledge extraction pipeline between supervised and unsupervised machine learning using Gaussian mixture models for anomaly detection. J Comput Sci Eng 15(1):1–17


  9. Chen J, Gong Z, Liu W (2020) A Dirichlet process biterm-based mixture model for short text stream clustering. Appl Intell 50(5):1609–1619


  10. Chunyan X, Yuqing S, Zhe L, Xiang B (2017) A medical image fusion algorithm based on contourlet transform and t mixture models. J Nanjing Normal Univ (Nat Sci Ed) 2017:1


  11. Cruz D, Jennifer C, Castor LC, Mendoza CMT, Jay BA, Jane LSC, Brian PTB et al (2017) Determination of blood components (WBCs, RBCs, and platelets) count in microscopic images using image processing and analysis. In: 2017 IEEE 9th international conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM). IEEE, pp 1–7

  12. Dietz Z, Lippitt W, Sethuraman S (2019) Stick-breaking processes, clumping, and Markov chain occupation laws. arXiv preprint arXiv:1901.08135

  13. Edalati-rad A, Mosleh M (2019) Improving brain tumor diagnosis using MRI segmentation based on collaboration of beta mixture model and learning automata. Arab J Sci Eng 44(4):2945–2957


  14. Fan W, Bouguila N (2014) Online data clustering using variational learning of a hierarchical Dirichlet process mixture of Dirichlet distributions. In: International conference on database systems for advanced applications. Springer, pp 18–32

  15. Fan W, Bouguila N, Ziou D (2012) Variational learning for finite Dirichlet mixture models and applications. IEEE Trans Neural Netw Learn Syst 23(5):762–774


  16. Fan W, Sallay H, Bouguila N, Bourouis S (2016) Variational learning of hierarchical infinite generalized Dirichlet mixture models and applications. Soft Comput 20(3):979–990


  17. Figueiredo MAT, Jain AK (2002) Unsupervised learning of finite mixture models. IEEE Trans Pattern Anal Mach Intell 24(3):381–396


  18. Fuse T, Kamiya K (2017) Statistical anomaly detection in human dynamics monitoring using a hierarchical Dirichlet process hidden Markov model. IEEE Trans Intell Transp Syst 18(11):3083–3092


  19. Glanz H, Carvalho L (2018) An expectation-maximization algorithm for the matrix normal distribution with an application in remote sensing. J Multivar Anal 167:31–48


  20. Gunning D (2017) Explainable artificial intelligence (xai). Defense Advanced Research Projects Agency (DARPA), nd Web 2

  21. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, Yener B (2009) Histopathological image analysis: a review. IEEE Rev Biomed Eng 2:147–171


  22. Hu C, Fan W, Du JX, Bouguila N (2019) A novel statistical approach for clustering positive data based on finite inverted Beta-Liouville mixture models. Neurocomputing 333:110–123


  23. Iliashenko O, Bikkulova Z, Dubgorn A (2019) Opportunities and challenges of artificial intelligence in healthcare. In: E3S Web of Conferences, EDP Sciences, vol 110, p 02028

  24. Ji Z, Xia Y, Sun Q, Chen Q, Feng D (2014) Adaptive scale fuzzy local Gaussian mixture model for brain MR image segmentation. Neurocomputing 134:60–69


  25. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y, Dong Q, Shen H, Wang Y (2017) Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol 2(4):230–243


  26. Jones G, Clancy NT, Helo Y, Arridge S, Elson DS, Stoyanov D (2017) Bayesian estimation of intrinsic tissue oxygenation and perfusion from RGB images. IEEE Trans Med Imaging 36(7):1491–1501


  27. Kaggle (2017) Bccd dataset. https://www.kaggle.com/paultimothymooney/blood-cells

  28. Kasa SR, Bhattacharya S, Rajan V (2020) Gaussian mixture copulas for high-dimensional clustering and dependency-based subtyping. Bioinformatics 36(2):621–628


  29. Li D, Zamani S, Zhang J, Li P (2019) Integration of knowledge graph embedding into topic modeling with hierarchical Dirichlet process. In: Proceedings of the 2019 conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol 1 (long and short papers), pp 940–950

  30. Li K, Ma Z, Robinson D, Ma J (2018) Identification of typical building daily electricity usage profiles using Gaussian mixture model-based clustering and hierarchical clustering. Appl Energy 231:331–342


  31. Lin PP, Patel S (2013) Osteosarcoma. In: Bone sarcoma. Springer, pp 75–97

  32. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, Van Der Laak JA, Van Ginneken B, Sánchez CI (2017) A survey on deep learning in medical image analysis. Med Image Anal 42:60–88


  33. Liu H, Duan Z, Chen C, Wu H (2019) A novel two-stage deep learning wind speed forecasting method with adaptive multiple error corrections and bivariate Dirichlet process mixture model. Energy Convers Manag 199:111975


  34. Llera A, Huertas I, Mir P, Beckmann CF (2019) Quantitative intensity harmonization of dopamine transporter SPECT images using gamma mixture models. Mol Imaging Biol 21(2):339–347


  35. Lowe DG (2004) Distinctive image features from scale-invariant keypoints. Int J Comput Vis 60(2):91–110


  36. Ma Z, Teschendorff AE (2013) A variational Bayes beta mixture model for feature selection in DNA methylation studies. J Bioinform Comput Biol 11(04):1350005


  37. Ma Z, Rana PK, Taghia J, Flierl M, Leijon A (2014) Bayesian estimation of Dirichlet mixture model with variational inference. Pattern Recognit 47(9):3143–3157


  38. Manouchehri N, Bouguila N (2019) A probabilistic approach based on a finite mixture model of multivariate beta distributions. In: ICEIS (1), pp 373–380

  39. Manouchehri N, Bouguila N (2020) A frequentist inference method based on finite bivariate and multivariate beta mixture models. In: Mixture models and applications. Springer, pp 179–208

  40. Manouchehri N, Nguyen H, Bouguila N (2019) Component splitting-based approach for multivariate beta mixture models learning. In: 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, pp 1–5

  41. Manouchehri N, Bouguila N, Fan W (2021) Nonparametric variational learning of multivariate beta mixture models in medical applications. Int J Imaging Syst Technol 31(1):128–140


  42. Manouchehri N, Kalra M, Bouguila N (2021) Online variational inference on finite multivariate beta mixture models for medical applications. IET Image Process 15(9):1869–1882. https://doi.org/10.1049/ipr2.12154


  43. Manouchehri N, Rahmanpour M, Bouguila N (2021) Entropy-based variational inference for semi-bounded data clustering in medical applications. In: Artificial intelligence and data mining in healthcare. Springer, pp 179–195

  44. Markopoulos AK (2012) Current aspects on oral squamous cell carcinoma. Open Dent J 6:126


  45. Massano J, Regateiro FS, Januário G, Ferreira A (2006) Oral squamous cell carcinoma: review of prognostic and predictive factors. Oral Surg Oral Med Oral Pathol Oral Radiol Endodontol 102(1):67–76


  46. McDowell IC, Manandhar D, Vockley CM, Schmid AK, Reddy TE, Engelhardt BE (2018) Clustering gene expression time series data using an infinite Gaussian process mixture model. PLoS Comput Biol 14(1):e1005896


  47. McLachlan GJ, Lee SX, Rathnayake SI (2019) Finite mixture models. Annu Rev Stat Appl 6:355–378


  48. Mehrtash H, Duncan K, Parascandola M, David A, Gritz ER, Gupta PC, Mehrotra R, Nordin ASA, Pearlman PC, Warnakulasuriya S et al (2017) Defining a global research and policy agenda for betel quid and areca nut. Lancet Oncol 18(12):e767–e775


  49. Min Z, Liu L, Meng MQH (2019) Generalized non-rigid point set registration with hybrid mixture models considering anisotropic positional uncertainties. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 547–555

  50. Miotto R, Wang F, Wang S, Jiang X, Dudley JT (2018) Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform 19(6):1236–1246


  51. Mou W, Ma YA, Wainwright MJ, Bartlett PL, Jordan MI (2021) High-order Langevin diffusion yields an accelerated MCMC algorithm. J Mach Learn Res 22(42):1–41


  52. Naesseth C, Linderman S, Ranganath R, Blei D (2018) Variational sequential Monte Carlo. In: International conference on artificial intelligence and statistics. PMLR, pp 968–977

  53. Olkin I, Liu R (2003) A bivariate beta distribution. Stat Probab Lett 62(4):407–412


  54. Olkin I, Trikalinos TA (2015) Constructions for a bivariate beta distribution. Stat Probab Lett 96:54–60


  55. Paisley J, Wang C, Blei DM, Jordan MI (2014) Nested hierarchical Dirichlet processes. IEEE Trans Pattern Anal Mach Intell 37(2):256–270


  56. Rahman TY (2019) A histopathological image repository of normal epithelium of oral cavity and oral squamous cell carcinoma. https://data.mendeley.com/

  57. Sammaknejad N, Zhao Y, Huang B (2019) A review of the expectation maximization algorithm in data-driven process identification. J Process Control 73:123–136


  58. Scully C, Bagan J et al (2009) Oral squamous cell carcinoma overview. Oral Oncol 45(4/5):301–308


  59. Shen Y, Zhang L, Zhang J, Yang M, Tang B, Li Y, Lei K (2018) CBN: Constructing a clinical Bayesian network based on data from the electronic medical record. J Biomed Inform 88:1–10


  60. Shenhav L, Thompson M, Joseph TA, Briscoe L, Furman O, Bogumil D, Mizrahi I, Pe’er I, Halperin E (2019) Feast: fast expectation–maximization for microbial source tracking. Nat Methods 16(7):627–632

  61. Taniguchi T, Yoshino R, Takano T (2018) Multimodal hierarchical Dirichlet process-based active perception by a robot. Front Neurorobot 12:22


  62. Teh YW, Jordan MI, Beal MJ, Blei DM (2006) Hierarchical Dirichlet processes. J Am Stat Assoc 101(476):1566–1581


  63. Tran D, Ranganath R, Blei DM (2017) Hierarchical implicit models and likelihood-free variational inference. arXiv preprint arXiv:1702.08896

  64. Tresp V, Overhage JM, Bundschus M, Rabizadeh S, Fasching PA, Yu S (2016) Going digital: a survey on digitalization and large-scale data analytics in healthcare. Proc IEEE 104(11):2180–2206


  65. Trianasari N, Sumertajaya I, Mangku IW et al (2021) Bivariate beta mixture model with correlations. Commun Math Biol Neurosci 2021:Article-ID

  66. Uzunova H, Schultz S, Handels H, Ehrhardt J (2019) Unsupervised pathology detection in medical images using conditional variational autoencoders. Int J Comput Assist Radiol Surg 14(3):451–461


  67. Vellido A (2020) The importance of interpretability and visualization in machine learning for applications in medicine and health care. Neural Comput Appl 32(24):18069–18083


  68. Wang C, Paisley J, Blei D (2011) Online variational inference for the hierarchical Dirichlet process. In: Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp 752–760

  69. Wang C, Paisley J, Blei D (2011) Online variational inference for the hierarchical Dirichlet process. In: Proceedings of the fourteenth international conference on artificial intelligence and statistics, JMLR workshop and conference proceedings, pp 752–760

  70. Wang G et al (2020) A fast MCMC algorithm for the uniform sampling of binary matrices with fixed margins. Electron J Stat 14(1):1690–1706


  71. Wang JC, Lee YS, Chin YH, Chen YR, Hsieh WC (2015) Hierarchical Dirichlet process mixture model for music emotion recognition. IEEE Trans Affect Comput 6(3):261–271


  72. Wang Y, Blei DM (2019) Frequentist consistency of variational Bayes. J Am Stat Assoc 114(527):1147–1161


  73. Wang Y, Miller AC, Blei DM (2019) Comment: Variational autoencoders as empirical Bayes. Stat Sci 34(2):229–233


  74. Watanabe S (2013) A widely applicable Bayesian information criterion. J Mach Learn Res 14:867–897


  75. World Health Organisation (2020) Oral cancer. https://www.who.int/news-room/fact-sheets/detail/oral-health

  76. Yerebakan HZ, Dundar M (2017) Partially collapsed parallel Gibbs sampler for Dirichlet process mixture models. Pattern Recognit Lett 90:22–27


  77. Zeng P, Zhou X (2017) Non-parametric genetic prediction of complex traits with latent Dirichlet process regression models. Nat Commun 8(1):1–11


  78. Zhao Y, Shrivastava AK, Tsui KL (2018) Regularized Gaussian mixture model for high-dimensional clustering. IEEE Trans Cybern 49(10):3677–3688


  79. Zhou Q, Yu T, Zhang X, Li J (2020) Bayesian inference and uncertainty quantification for medical image reconstruction with Poisson data. SIAM J Imaging Sci 13(1):29–52


  80. Zhou RG, Wang W (2021) Online nonparametric Bayesian analysis of parsimonious Gaussian mixture models and scenes clustering. ETRI J 43(1):74–81


  81. Zhu Y, Tang Y, Tang Y, Elton DC, Lee S, Pickhardt PJ, Summers RM (2020) Cross-domain medical image translation by shared latent Gaussian mixture model. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 379–389

  82. Zygogianni AG, Kyrgias G, Karakitsos P, Psyrri A, Kouvaris J, Kelekis N, Kouloulias V (2011) Oral squamous cell cancer: early detection and the role of alcohol and smoking. Head Neck Oncol 3(1):2



Acknowledgements

The completion of this research was made possible thanks to the Natural Sciences and Engineering Research Council of Canada (NSERC), the Fonds de Recherche du Québec - Nature et technologies (FRQNT), and the National Natural Science Foundation of China (61876068). The authors would like to thank the associate editor and reviewers for their helpful comments.

Author information

Correspondence to Narges Manouchehri.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


1.1 Hyperparameters of equations (30) to (35)

\(\rho _{jit}\) as the hyperparameter of \(Q(\mathbf {Z})\) is found by:

$$\begin{aligned} \rho _{jit}&= \frac{\exp ({\tilde{\rho }}_{jit})}{\sum _{f=1}^T \exp ({\tilde{\rho }}_{jif})} \end{aligned}$$
(46)
$$\begin{aligned} {\tilde{\rho }}_{jit}&= \sum _{k=1}^K \big \langle W_{jtk}\big \rangle \Bigg [{\tilde{R}}_k + \sum _{d=1}^{D}\big ({\overline{\alpha }}_{kd}-1\big )\ln Y_{jid} \nonumber \\&\quad -\sum _{d=1}^{D}\big ({\overline{\alpha }}_{kd}+1\big )\ln (1 - Y_{jid})\nonumber \\&\quad -\big |{\overline{\alpha }}_{k}\big |\ln \Bigg [ 1+\sum _{d=1}^{D}\frac{ Y_{jid}}{1-Y_{jid}}\Bigg ]+\langle \ln \pi _{jt}^{\prime } \rangle +\sum _{s=1}^{t-1}\langle \ln (1-\pi _{js}^{\prime })\rangle \Bigg ] \end{aligned}$$
(47)
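In practice, equation (46) is a softmax over the log-domain quantities of equation (47), and exponentiating them directly can overflow. A minimal NumPy sketch of the numerically stable form (the function name `responsibilities` is illustrative, not from the paper):

```python
import numpy as np

def responsibilities(log_rho_tilde):
    """Normalize unnormalized log-responsibilities (cf. Eq. 46).

    Subtracting the row-wise maximum before exponentiating leaves the
    softmax value unchanged but prevents overflow in exp."""
    log_rho_tilde = np.asarray(log_rho_tilde, dtype=float)
    shifted = log_rho_tilde - log_rho_tilde.max(axis=-1, keepdims=True)
    w = np.exp(shifted)
    return w / w.sum(axis=-1, keepdims=True)

# Each row sums to 1 over the T truncated components.
rho = responsibilities([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]])
```

The same trick applies verbatim to the \(\vartheta _{jtk}\) normalization in equation (49).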

Following [15], \({\tilde{R}}_{k}\) is calculated by:

$$\begin{aligned} {\tilde{R}}_{k}&= \ln \frac{\Gamma \big (\sum _{d=1}^{D}{\overline{\alpha }}_{kd}\big )}{\prod _{d=1}^{D}\Gamma \big ({\overline{\alpha }}_{kd}\big )}+ \sum _{d=1}^{D}{\overline{\alpha }}_{kd}\Bigg [\varPsi \Bigg (\sum _{s=1}^{D}{\overline{\alpha }}_{ks}\Bigg )- \varPsi \big ({\overline{\alpha }}_{kd}\big ) \Bigg ]\Big [\big \langle \ln \alpha _{kd}\big \rangle -\ln {\overline{\alpha }}_{kd}\Big ] \nonumber \\&\quad + \frac{1}{2}\sum _{d=1}^{D}{\overline{\alpha }}_{kd}^2\Bigg [\varPsi '\Bigg (\sum _{s=1}^{D}{\overline{\alpha }}_{ks}\Bigg ) - \varPsi '\big ({\overline{\alpha }}_{kd}\big )\Bigg ]\Big \langle \big (\ln \alpha _{kd} - \ln {\overline{\alpha }}_{kd}\big )^2\Big \rangle \nonumber \\&\quad + \frac{1}{2}\sum _{a=1}^{D} \sum _{b=1, b\ne a}^{D}{\overline{\alpha }}_{ka}\,{\overline{\alpha }}_{kb}\,\varPsi '\Bigg (\sum _{d=1}^{D}{\overline{\alpha }}_{kd}\Bigg ) \Big (\big \langle \ln \alpha _{ka}\big \rangle -\ln {\overline{\alpha }}_{ka}\Big )\Big (\big \langle \ln \alpha _{kb}\big \rangle -\ln {\overline{\alpha }}_{kb}\Big ) \end{aligned}$$
(48)
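Equation (48) is the second-order Taylor approximation, around \({\overline{\alpha }}_k\), of the intractable expectation \(\langle \ln \Gamma (\sum _d \alpha _{kd})/\prod _d \Gamma (\alpha _{kd})\rangle \) used in [15]. A sketch of how it might be evaluated with SciPy's digamma/trigamma (the function name and argument layout are assumptions, not from the paper):

```python
import numpy as np
from scipy.special import gammaln, psi, polygamma

def r_tilde(alpha_bar, e_ln_alpha, e_sq_dev):
    """Second-order approximation of <ln Gamma(sum a)/prod Gamma(a)> (cf. Eq. 48).

    alpha_bar:  expected parameters  alpha-bar_kd
    e_ln_alpha: expectations         <ln alpha_kd>
    e_sq_dev:   expectations         <(ln alpha_kd - ln alpha-bar_kd)^2>"""
    a = np.asarray(alpha_bar, dtype=float)
    s = a.sum()
    dev = e_ln_alpha - np.log(a)                     # <ln a> - ln a-bar
    r = gammaln(s) - gammaln(a).sum()                # value at the expansion point
    r += np.sum(a * (psi(s) - psi(a)) * dev)         # first-order term
    r += 0.5 * np.sum(a**2 * (polygamma(1, s) - polygamma(1, a)) * e_sq_dev)
    # cross term over a != b: 0.5 * psi'(s) * [(sum a*dev)^2 - sum (a*dev)^2]
    w = a * dev
    r += 0.5 * polygamma(1, s) * (w.sum()**2 - np.sum(w**2))
    return r
```

At the expansion point (all deviations zero) the correction terms vanish and only the exact \(\ln \Gamma \) ratio remains.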

\(\varPsi (\cdot )\) and \(\varPsi '(\cdot )\) denote the digamma and trigamma functions, respectively. Similarly, \(\vartheta _{jtk}\), the hyperparameter of the factor \(Q(\mathbf {W})\), is defined by:

$$\begin{aligned} \vartheta _{jtk}&= \frac{\exp ({\tilde{\vartheta }}_{jtk})}{\sum _{f=1}^K \exp ({\tilde{\vartheta }}_{jtf})} \end{aligned}$$
(49)
$$\begin{aligned} {\tilde{\vartheta }}_{jtk}&= \sum _{i=1}^N \big \langle Z_{jit}\big \rangle \Bigg [{\tilde{R}}_k + \sum _{d=1}^{D}\big ({\overline{\alpha }}_{kd}-1\big )\ln Y_{jid} \nonumber \\&\quad -\sum _{d=1}^{D}\big ({\overline{\alpha }}_{kd}+1\big )\ln (1 - Y_{jid})\nonumber \\&\quad -\big |{\overline{\alpha }}_{k}\big |\ln \Bigg [ 1 +\sum _{d=1}^{D}\frac{ Y_{jid}}{1-Y_{jid}}\Bigg ] +\langle \ln \psi _{k}^{\prime } \rangle +\sum _{s=1}^{k-1}\langle \ln (1-\psi _{s}^{\prime })\rangle \Bigg ] \end{aligned}$$
(50)

\(a^*_{jt}\) and \(b^*_{jt}\), the hyperparameters of the factor \(Q(\pi ^\prime )\), are:

$$\begin{aligned} a^*_{jt}=1+\sum _{i=1}^{N}\left\langle Z_{jit}\right\rangle , \quad b^*_{jt}=\lambda _{jt}+\sum _{i=1}^{N} \sum _{s=t+1}^{T}\langle Z_{jis}\rangle \end{aligned}$$
(51)
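The update (51) simply adds, for each stick \(t\), the expected number of points assigned to \(t\) (to \(a^*_{jt}\)) and the expected number assigned to all later sticks (to \(b^*_{jt}\)). A vectorized sketch under assumed array shapes (the helper name `stick_breaking_update` is hypothetical):

```python
import numpy as np

def stick_breaking_update(z_expect, lam):
    """Beta hyperparameter updates a*_t, b*_t of Q(pi') (cf. Eq. 51).

    z_expect: (N, T) array of expected assignments <Z_it>
    lam:      stick-breaking concentration lambda_t (scalar or length-T)"""
    counts = z_expect.sum(axis=0)                       # sum_i <Z_it>
    a_star = 1.0 + counts
    # sum_i sum_{s>t} <Z_is>: reverse cumulative sum, excluding t itself
    tail = counts[::-1].cumsum()[::-1] - counts
    b_star = lam + tail
    return a_star, b_star
```

The update (52) for \(c^*_k, d^*_k\) has the same shape, with the extra sums over groups \(j\) and tables \(t\).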

\(c^*_k\) and \(d^*_k\), the hyperparameters of the factor \(Q(\psi ^\prime )\), are calculated by:

$$\begin{aligned} c^*_{k}=1+\sum _{j=1}^{M} \sum _{t=1}^{T}\left\langle W_{jtk}\right\rangle , \quad d^*_{k}=\gamma _{k}+\sum _{j=1}^{M} \sum _{t=1}^{T} \sum _{s=k+1}^{K}\langle W_{jts}\rangle \end{aligned}$$
(52)

The hyperparameters \(u_{kd}^*\) and \(\nu _{kd}^*\) of the factor \(Q(\alpha )\) are updated as follows:

$$\begin{aligned} u_{kd}^*&= u_{kd}+\sum _{j=1}^M \sum _{t=1}^T \langle W_{jtk}\rangle \sum _{i=1}^N \big \langle Z_{jit}\big \rangle {\overline{\alpha }}_{kd}\Bigg [\varPsi \Bigg (\sum _{s=1}^{D}{\overline{\alpha }}_{ks}\Bigg )-\varPsi \big ({\overline{\alpha }}_{kd}\big )\nonumber \\&\quad +\sum _{s \ne d}^{D} {\overline{\alpha }}_{ks}\, \varPsi '\Bigg (\sum _{s=1}^{D}{\overline{\alpha }}_{ks}\Bigg )\Big (\big \langle \ln \alpha _{ks}\big \rangle -\ln {\overline{\alpha }}_{ks}\Big )\Bigg ] \end{aligned}$$
(53)
$$\begin{aligned} \nu _{kd}^*&= \nu _{kd}- \sum _{j=1}^M \sum _{t=1}^T \langle W_{jtk}\rangle \sum _{i=1}^N\big \langle Z_{jit}\big \rangle \nonumber \\&\quad \times \Bigg [ \ln Y_{jid} - \ln (1-Y_{jid}) - \ln \Bigg [ 1+\sum _{s=1}^{D}\frac{Y_{jis}}{1-Y_{jis}}\Bigg ]\Bigg ] \end{aligned}$$
(54)

The expected values appearing in the above equations are given by:

$$\begin{aligned} {\bar{\alpha }}_{kd}=\left\langle \alpha _{kd}\right\rangle&=\frac{u^*_{kd}}{v^*_{kd}} \end{aligned}$$
(55)
$$\begin{aligned} \big<Z_{jit}\big>&= \rho _{jit} , \quad \big <W_{jtk}\big >= \vartheta _{jtk} \end{aligned}$$
(56)
$$\begin{aligned} &\big<\ln \pi _{jt}^{\prime }\big>=\varPsi (a_{jt})-\varPsi (a_{jt}+b_{jt}) , \nonumber \\& \big <\ln (1-\pi _{jt}^{\prime })\big>=\varPsi (b_{jt})-\varPsi (a_{jt}+b_{jt}) \end{aligned}$$
(57)
$$\begin{aligned} &\big<\ln \psi _{k}^{\prime } \big>=\varPsi (c_{k})-\varPsi (c_{k}+d_{k}) , \nonumber \\& \big <\ln (1-\psi _{k}^{\prime } )\big >=\varPsi (d_{k})-\varPsi (c_{k}+d_{k}) \end{aligned}$$
(58)
$$\begin{aligned} &\langle \ln \alpha _{kd}\rangle=\varPsi (u_{kd}^{*})-\ln v_{kd}^{*}, \nonumber \\& \Big <\big (\ln \alpha _{kd} - \ln {\overline{\alpha }}_{kd}\big )^2\Big >=\big [\varPsi (u^*_{kd})-\ln u^*_{kd}\big ]^2 + \varPsi ^{\prime }(u^*_{kd}) \end{aligned}$$
(59)
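Since \(Q(\alpha _{kd})\) is a Gamma distribution with shape \(u^*_{kd}\) and rate \(v^*_{kd}\), the moments in equations (55) and (59) follow from standard Gamma identities. A small helper sketch (the function name is hypothetical):

```python
import numpy as np
from scipy.special import psi, polygamma

def gamma_expectations(u, v):
    """Expectations under Q(alpha) = Gamma(u*, v*) used in Eqs. (55) and (59)."""
    alpha_bar = u / v                                      # <alpha>           (Eq. 55)
    e_ln = psi(u) - np.log(v)                              # <ln alpha>        (Eq. 59)
    # <(ln alpha - ln alpha-bar)^2> = [psi(u) - ln u]^2 + psi'(u)
    e_sq_dev = (psi(u) - np.log(u))**2 + polygamma(1, u)
    return alpha_bar, e_ln, e_sq_dev
```

Note that \(\langle \ln \alpha \rangle - \ln {\overline{\alpha }} = \varPsi (u^*) - \ln u^*\), which is why the rate \(v^*\) cancels out of the squared-deviation term.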

1.2 Hyperparameters of equation (37)

$$\begin{aligned} \rho _{jtr}&= \frac{\exp ({\tilde{\rho }}_{jtr})}{\sum _{f=1}^T \exp ({\tilde{\rho }}_{jfr})} \end{aligned}$$
(60)
$$\begin{aligned} {\tilde{\rho }}_{jtr}&= \sum _{k=1}^K \big \langle W^{(r-1)}_{jtk}\big \rangle \Bigg [{\tilde{R}}_k^{(r-1)} + \sum _{d=1}^{D}\big ({\overline{\alpha }}_{kd}^{(r-1)}-1\big )\ln X_{jrd} \nonumber \\&\quad -\sum _{d=1}^{D}\big ({\overline{\alpha }}_{kd}^{(r-1)}+1\big )\ln (1 - X_{jrd})\nonumber \\&\quad -\big |{\overline{\alpha }}_{k}^{(r-1)}\big |\ln \Bigg [ 1+\sum _{d=1}^{D}\frac{ X_{jrd}}{1-X_{jrd}}\Bigg ] +\langle \ln \pi _{jt}^{\prime (r-1)} \rangle +\sum _{s=1}^{t-1}\big \langle \ln (1-\pi _{js}^{\prime (r-1)})\big \rangle \Bigg ] \end{aligned}$$
(61)

\({\tilde{R}}_{k}\) is calculated by equation (48).

1.3 Equation (40)

$$\begin{aligned} {\tilde{\vartheta }}_{jtk}&= N\rho _{jtr} \Bigg [{\tilde{R}}_k^{(r-1)} + \sum _{d=1}^{D}\big ({\overline{\alpha }}_{kd}^{(r-1)}-1\big )\ln X_{jrd} -\sum _{d=1}^{D}\big ({\overline{\alpha }}_{kd}^{(r-1)}+1\big )\ln (1 - X_{jrd})\nonumber \\&\quad -\big |{\overline{\alpha }}_{k}^{(r-1)}\big |\ln \Bigg [ 1+\sum _{d=1}^{D}\frac{ X_{jrd}}{1-X_{jrd}}\Bigg ] +\langle \ln \psi _{k}^{\prime (r-1)} \rangle +\sum _{s=1}^{k-1}\langle \ln (1-\psi _{s}^{\prime (r-1)})\rangle \Bigg ] \end{aligned}$$
(62)

1.4 Hyperparameters of equations (41) to (43)

$$\begin{aligned} a^{(r)}_{jt}&= a^{(r-1)}_{jt} + \xi _r \varDelta a_{jt}^{(r)}, \quad b^{(r)}_{jt} = b^{(r-1)}_{jt} + \xi _r \varDelta b_{jt}^{(r)} \end{aligned}$$
(63)
$$\begin{aligned} c^{(r)}_{k}&= c^{(r-1)}_{k} + \xi _r \varDelta c_{k}^{(r)} , \quad d^{(r)}_{k} = d^{(r-1)}_{k} + \xi _r \varDelta d_{k}^{(r)} \end{aligned}$$
(64)
$$\begin{aligned} u^{*(r)}_{kd}&= u^{*(r-1)}_{kd} + \xi _r \varDelta u_{kd}^{*(r)} , \quad v^{*(r)}_{kd} = v^{*(r-1)}_{kd} + \xi _r \varDelta v_{kd}^{*(r)} \end{aligned}$$
(65)
$$\begin{aligned} \varDelta a_{jt}^{(r)}= 1 + N \rho _{jtr} - a_{jt}^{(r-1)} , \quad \varDelta b_{jt}^{(r)}= \lambda _{jt} + N \sum _{s=t+1}^{T} \rho _{jsr} - b_{jt}^{(r-1)} \end{aligned}$$
(66)
$$\begin{aligned} \varDelta c_{k}^{(r)}= 1 + \sum _{j=1}^{M} \sum _{t=1}^{T} \vartheta _{jtk}^{(r)} - c_{k}^{(r-1)} , \quad \varDelta d_{k}^{(r)} = \gamma _{k} + \sum _{j=1}^{M} \sum _{t=1}^{T} \sum _{s=k+1}^{K} \vartheta _{jts}^{(r)} - d_{k}^{(r-1)} \end{aligned}$$
(67)
$$\begin{aligned} \varDelta u_{kd}^{*(r)}&=u_{kd}+ N \sum _{j=1}^{M} \sum _{t=1}^{T} \vartheta _{jtk}^{(r)} \rho _{jtr}\, {\bar{\alpha }}_{kd}^{(r-1)} \Bigg [ \varPsi \Bigg ( \sum _{s=1}^{D} {\bar{\alpha }}_{ks}^{(r-1)}\Bigg ) -\varPsi \big ({\bar{\alpha }}_{kd}^{(r-1)}\big )\nonumber \\&\quad +\sum _{s \ne d}^{D} {\bar{\alpha }}_{ks}^{(r-1)}\, {\varPsi }^{\prime }\Bigg ( \sum _{s=1}^{D} {\bar{\alpha }}_{ks}^{(r-1)}\Bigg ) \Big (\big \langle \ln \alpha _{ks}^{(r-1)}\big \rangle -\ln {\bar{\alpha }}_{ks}^{(r-1)}\Big )\Bigg ] - u_{kd}^{*(r-1)} \end{aligned}$$
(68)
$$\begin{aligned} \varDelta v_{kd}^{*(r)}&= v_{kd} - N \sum _{j=1}^{M} \sum _{t=1}^{T} \vartheta _{jtk}^{(r)} \rho _{jtr} \Bigg [ \ln X_{jrd} - \ln (1-X_{jrd}) - \ln \Bigg [ 1+\sum _{s=1}^{D}\frac{X_{jrs}}{1-X_{jrs}}\Bigg ]\Bigg ] - v_{kd}^{*(r-1)} \end{aligned}$$
(69)
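Equations (63) to (65) move each hyperparameter from its previous value along a noisy natural-gradient direction \(\varDelta \theta ^{(r)}\), scaled by the step size \(\xi _r\). A sketch of one such update, where \(\xi _r = (\tau _0 + r)^{-\kappa }\) is an assumed Robbins-Monro schedule satisfying \(\sum _r \xi _r = \infty \) and \(\sum _r \xi _r^2 < \infty \); the particular \(\tau _0\) and \(\kappa \) values are illustrative, not from the paper:

```python
import numpy as np

def online_step(theta_prev, delta, r, tau0=64.0, kappa=0.6):
    """One stochastic update theta^(r) = theta^(r-1) + xi_r * Delta (cf. Eqs. 63-65).

    kappa in (0.5, 1] guarantees the Robbins-Monro conditions on xi_r."""
    xi = (tau0 + r) ** (-kappa)
    return theta_prev + xi * np.asarray(delta)
```

Because each \(\varDelta \theta ^{(r)}\) in (66) to (69) already subtracts \(\theta ^{(r-1)}\), the update is a convex combination of the old value and a noisy one-sample estimate of the batch optimum, so the iterates stay in the feasible region as long as both endpoints do.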


About this article


Cite this article

Manouchehri, N., Bouguila, N. & Fan, W. Batch and online variational learning of hierarchical Dirichlet process mixtures of multivariate Beta distributions in medical applications. Pattern Anal Applic 24, 1731–1744 (2021). https://doi.org/10.1007/s10044-021-01023-6
