GPU-Accelerated Clique Tree Propagation for Pouch Latent Tree Models

  • Leonard K. M. Poon
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11276)

Abstract

Pouch latent tree models (PLTMs) are a class of probabilistic graphical models that generalize Gaussian mixture models (GMMs). PLTMs produce multiple clusterings simultaneously and have been shown in previous studies to outperform GMMs for cluster analysis. However, owing to the considerably larger number of possible structures, training PLTMs is more time-demanding than training GMMs. This has limited the application of PLTMs to small data sets. In this paper, we consider using GPUs to exploit two parallelism opportunities, namely data parallelism and element-wise parallelism, for PLTMs. We focus on clique tree propagation, since this exact inference procedure is a strenuous task and is called repeatedly for each data sample and each model structure during PLTM training. Our experiments with real-world data sets show that the GPU-accelerated implementation can achieve up to 52x speedup over the sequential implementation running on CPUs. These results signify promising potential for further improvement on the full training of PLTMs with GPUs.
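
The two parallelism opportunities named above can be pictured with a short CUDA sketch. The kernel below is purely illustrative and is not taken from the paper: it computes the element-wise product of two discrete clique potential tables for a whole batch of data samples at once, so the thread grid spans both the samples (data parallelism) and the entries within each table (element-wise parallelism). All names and sizes here (multiplyPotentials, numSamples, tableSize) are hypothetical.

// Hypothetical sketch, not the paper's implementation. One thread handles one
// (sample, table entry) pair: the grid covers all samples (data parallelism)
// and all entries of each potential table (element-wise parallelism).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void multiplyPotentials(const float *a, const float *b, float *out,
                                   int numSamples, int tableSize) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < numSamples * tableSize) {
        out[idx] = a[idx] * b[idx];  // element-wise product of two potentials
    }
}

int main() {
    const int numSamples = 1024;  // batch of data samples (data parallelism)
    const int tableSize = 256;    // entries per clique potential table
    const int total = numSamples * tableSize;

    // Unified memory keeps the sketch short; real code may stage transfers.
    float *a, *b, *out;
    cudaMallocManaged(&a, total * sizeof(float));
    cudaMallocManaged(&b, total * sizeof(float));
    cudaMallocManaged(&out, total * sizeof(float));
    for (int i = 0; i < total; ++i) { a[i] = 0.5f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (total + threads - 1) / threads;
    multiplyPotentials<<<blocks, threads>>>(a, b, out, numSamples, tableSize);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 1.000000
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}

In full clique tree propagation, messages also require marginalization (summing a table over a subset of variables) and, for PLTMs, operations on conditional Gaussian distributions, but the same batched one-thread-per-entry layout applies.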

Keywords

GPU acceleration · Clique tree propagation · Pouch latent tree models · Parallel computing · Probabilistic graphical models

Acknowledgement

The research reported in this article was supported by the Education University of Hong Kong under grant RG70/2017-1018R, the Top-up Fund of the Dean's Research Fund, and the Small Research Grant of the Department of Mathematics and Information Technology.


Copyright information

© IFIP International Federation for Information Processing 2018

Authors and Affiliations

  1. The Education University of Hong Kong, Hong Kong SAR, China
