Risk Scores Learned by Deep Restricted Boltzmann Machines with Trained Interval Quantization

  • Nataliya Sokolovska
  • Yann Chevaleyre
  • Jean-Daniel Zucker
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10934)


A compact, easily applicable, and highly accurate classification model is of great interest in decision making. A simple scoring system that stratifies patients efficiently can help a clinician with diagnostics or with the choice of treatment. Deep learning methods are becoming the preferred approach for many applications in artificial intelligence and machine learning, since they usually achieve the best accuracy. However, deep learning models are complex systems with non-linear data transformations, which makes them challenging to use as scoring systems. The state-of-the-art deep models are sparse; in particular, deep models with ternary weights are reported to be efficient in image processing. However, ternary models are often not expressive enough for many tasks. In this contribution, we introduce an interval quantization method that learns both the codebook index and the codebook values, and yields a compact but powerful model.

We show through experiments on several standard benchmarks that the proposed approach achieves state-of-the-art generalization accuracy and outperforms modern approaches in terms of storage and computational efficiency. We also consider a real biomedical problem, the prediction of type 2 diabetes remission, and discuss how the trained model can be used as a predictive medical score and be helpful for physicians.
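The abstract's core idea, learning both the codebook index (which value each weight maps to) and the codebook values themselves, can be illustrated with a minimal ternary sketch. Note this is an illustrative assumption, not the authors' algorithm: the function name, the fixed threshold rule, and the mean-based value update are all simplifications chosen for clarity.

```python
import numpy as np

def ternary_quantize(weights, threshold=0.05):
    """Quantize weights onto three learned codebook values {w_neg, 0, w_pos}.

    Two things are learned from the data:
      - the codebook index: a threshold rule assigns each weight to the
        negative, zero, or positive bucket;
      - the codebook values: each non-zero value is refit as the mean of
        the weights assigned to it (a Lloyd / k-means style update).
    """
    w = np.asarray(weights, dtype=float)
    neg = w < -threshold
    pos = w > threshold
    # Learned codebook values: means of the assigned weights.
    w_neg = w[neg].mean() if neg.any() else 0.0
    w_pos = w[pos].mean() if pos.any() else 0.0
    # Apply the learned codebook to the assignments.
    q = np.zeros_like(w)
    q[neg] = w_neg
    q[pos] = w_pos
    return q, (w_neg, 0.0, w_pos)

q, codebook = ternary_quantize([-1.0, -0.9, 0.01, 0.8, 1.0])
# q maps every weight onto one of three values; small weights become 0.
```

A model quantized this way stores only the codebook and per-weight 2-bit indices, which is the source of the storage and computational savings the experiments measure.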



This work was supported by PEPS (CNRS, France), project MaLeFHYCe, and by the French National Research Agency (ANR JCJC DiagnoLearn).



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Nataliya Sokolovska (1)
  • Yann Chevaleyre (2)
  • Jean-Daniel Zucker (3)
  1. Paris Sorbonne University (Paris 6, UPMC), INSERM, Paris, France
  2. University Paris Dauphine, Paris, France
  3. IRD, INSERM, Bondy, Paris, France
