A Hierarchical Mixture Density Network

  • Fan Yang
  • Jaymar Soriano
  • Takatomi Kubo
  • Kazushi Ikeda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10637)


The relationship among three correlated variables can be highly complex; as a result, we may be unable to uncover their hidden causal structure or model their relationship explicitly. Nevertheless, we can still infer plausible mappings among the variables from their observed relationship. One such complex relationship is a two-layer hierarchical many-to-many mapping. In this paper, we propose a Hierarchical Mixture Density Network (HMDN) to model two-layer hierarchical many-to-many mappings. We apply HMDN to an indoor positioning problem and demonstrate its benefits.


Keywords: Mixture Density Network · Hierarchical many-to-many mappings
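For context, the building block named in the title is the mixture density network (MDN) of Bishop: a neural network whose outputs parameterize a Gaussian mixture over the target, so one input can map to several plausible outputs. The sketch below is a minimal, illustrative MDN output layer only; the toy values and variable names are assumptions for the example and not the authors' HMDN implementation.

```python
import math

def softmax(z):
    """Turn raw network logits into mixing coefficients that sum to 1."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def gaussian(t, mu, sigma):
    """Density of a 1-D Gaussian N(t; mu, sigma^2)."""
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_density(t, pis, mus, sigmas):
    """Conditional density p(t | x) = sum_k pi_k * N(t; mu_k, sigma_k^2)."""
    return sum(p * gaussian(t, m, s) for p, m, s in zip(pis, mus, sigmas))

# Hypothetical outputs of the network for one input x:
# raw logits for the mixing weights, component means, and log-std-devs.
logits = [0.2, -1.0, 0.5]
mus = [1.0, 3.0, 5.0]
sigmas = [math.exp(v) for v in [-0.5, 0.0, 0.3]]  # exp keeps sigma > 0
pis = softmax(logits)
p = mixture_density(2.0, pis, mus, sigmas)
```

Because the output is a full mixture rather than a single point estimate, multi-valued (many-to-many) mappings can be represented; the hierarchical variant in the paper stacks such mappings over two layers.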



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Fan Yang¹
  • Jaymar Soriano¹
  • Takatomi Kubo¹
  • Kazushi Ikeda¹

  1. Graduate School of Information Science, Nara Institute of Science and Technology, Ikoma, Japan
