Neural Additive and Basis Models with Feature Selection and Interactions

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14647)

Abstract

Deep neural networks (DNNs) exhibit attractive performance in various fields but often suffer from low interpretability. The neural additive model (NAM) and its variant, the neural basis model (NBM), use neural networks (NNs) as nonlinear shape functions in generalized additive models (GAMs). Both models are highly interpretable and benefit from the performance and flexibility of NN training. Owing to their GAM-based architectures, NAM and NBM can provide and visualize the contribution of each feature to the prediction. However, when two-input NNs are used to consider feature interactions, or when the models are applied to high-dimensional datasets, training NAM and NBM becomes intractable because of the computational resources required. This paper proposes incorporating a feature selection mechanism into NAM and NBM to resolve these computational bottlenecks. We introduce a feature selection layer into both models and update the selection weights during training. Our method is simple, and it reduces computational costs and model sizes compared with vanilla NAM and NBM. In addition, it enables the use of two-input NNs even on high-dimensional datasets to capture feature interactions. We demonstrate that the proposed models are computationally more efficient than vanilla NAM and NBM and achieve performance better than or comparable to that of state-of-the-art GAMs.
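
As a rough illustration of the mechanism described above, the sketch below places a learnable feature-selection layer in front of NAM-style shape functions so that only k of the d input features feed shape networks, which is what keeps the cost manageable on high-dimensional data. This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation: the names (FeatureSelectionNAM, ShapeFunction, k, hidden) are hypothetical, and the plain softmax relaxation of the selection weights stands in for whichever relaxation (e.g., Gumbel-softmax [11, 15]) the paper actually uses.

```python
# Minimal, hypothetical sketch of a NAM with a learnable feature-selection layer,
# loosely following the idea in the abstract (not the authors' code).
import torch
import torch.nn as nn


class ShapeFunction(nn.Module):
    """Small MLP mapping a single (selected) feature to its additive contribution."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)


class FeatureSelectionNAM(nn.Module):
    """NAM variant that keeps only k shape functions and learns which of the
    d input features feeds each of them (selection weights trained jointly)."""
    def __init__(self, d_in: int, k: int, hidden: int = 32):
        super().__init__()
        # One learnable logit vector per kept shape function (k x d_in).
        self.selection_logits = nn.Parameter(torch.zeros(k, d_in))
        self.shape_functions = nn.ModuleList(ShapeFunction(hidden) for _ in range(k))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, d_in)
        # Soft selection weights; a Gumbel-softmax relaxation would be a natural
        # alternative for harder, more interpretable feature choices.
        weights = torch.softmax(self.selection_logits, dim=-1)  # (k, d_in)
        selected = x @ weights.t()                               # (batch, k)
        contribs = [f(selected[:, i:i + 1]) for i, f in enumerate(self.shape_functions)]
        return torch.cat(contribs, dim=-1).sum(dim=-1, keepdim=True) + self.bias


# Usage (illustrative): model = FeatureSelectionNAM(d_in=1000, k=50)
#                       y_hat = model(torch.randn(8, 1000))
```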


Notes

  1. Although high-dimensional datasets with sparse features can be handled by NBM with the specialized implementation described in [21], that implementation cannot be applied to dense features. In our experiments, NA\(^2\)M and NB\(^2\)M could not run on more than a hundred features, and training NAM and NBM slowed down on more than a thousand features in dense-feature datasets.

  2. Our method can be extended to shape functions with three or more inputs to capture higher-order feature interactions (see the sketch below), although this compromises interpretability.
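
For concreteness, a two-input shape function of the kind used for pairwise interactions might look like the following minimal PyTorch sketch; the class name and layer sizes are illustrative, not taken from the paper. Widening the first linear layer (e.g., nn.Linear(3, hidden)) gives the three-or-more-input extension mentioned above, at the cost of per-term interpretability.

```python
import torch.nn as nn


class PairwiseShapeFunction(nn.Module):
    """Hypothetical two-input shape function: maps a pair of selected
    features to a single pairwise-interaction contribution."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),  # widen to nn.Linear(3, hidden) for higher-order terms
            nn.Linear(hidden, 1),
        )

    def forward(self, x_pair):  # x_pair: (batch, 2)
        return self.net(x_pair)
```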

References

  1. Agarwal, R., et al.: Neural additive models: interpretable machine learning with neural nets. In: Advances in Neural Information Processing Systems, vol. 34 (2021)

  2. Anguita, D., Ghio, A., Oneto, L., Parra, X., Reyes-Ortiz, J.L.: A public domain dataset for human activity recognition using smartphones. In: 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN) (2013)

  3. Arik, S.Ö., Pfister, T.: TabNet: attentive interpretable tabular learning. In: AAAI Conference on Artificial Intelligence, vol. 35, no. 8, pp. 6679–6687 (2021). https://doi.org/10.1609/aaai.v35i8.16826

  4. Chang, C., Caruana, R., Goldenberg, A.: NODE-GAM: neural generalized additive model for interpretable deep learning. In: International Conference on Learning Representations (ICLR) (2022)

  5. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016). https://doi.org/10.1145/2939672.2939785

  6. Epsilon: Large scale learning challenge (2008). https://k4all.org/project/large-scale-learning-challenge/

  7. Fanty, M., Cole, R.: Spoken letter recognition. In: Advances in Neural Information Processing Systems, vol. 3 (1990)

  8. Gorishniy, Y., Rubachev, I., Khrulkov, V., Babenko, A.: Revisiting deep learning models for tabular data. In: Advances in Neural Information Processing Systems, vol. 34 (2021)

  9. Guillermo: ChaLearn AutoML challenge (2000). http://automl.chalearn.org/data/

  10. Guyon, I., Gunn, S., Ben-Hur, A., Dror, G.: Result analysis of the NIPS 2003 feature selection challenge. In: Advances in Neural Information Processing Systems, vol. 17 (2004)

  11. Jang, E., Gu, S., Poole, B.: Categorical reparameterization with Gumbel-softmax. In: International Conference on Learning Representations (ICLR) (2017)

  12. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2015). https://doi.org/10.48550/arXiv.1412.6980

  13. Lou, Y., Caruana, R., Gehrke, J.: Intelligible models for classification and regression. In: 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 150–158 (2012). https://doi.org/10.1145/2339530.2339556

  14. Lou, Y., Caruana, R., Gehrke, J., Hooker, G.: Accurate intelligible models with pairwise interactions. In: 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 623–631 (2013). https://doi.org/10.1145/2487575.2487579

  15. Maddison, C.J., Mnih, A., Teh, Y.W.: The concrete distribution: a continuous relaxation of discrete random variables. In: International Conference on Learning Representations (ICLR) (2017)

  16. Nori, H., Jenkins, S., Koch, P., Caruana, R.: InterpretML: a unified framework for machine learning interpretability (2019). https://doi.org/10.48550/arXiv.1909.09223

  17. Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems, vol. 32 (2019)

  18. Pedregosa, F., et al.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12(85), 2825–2830 (2011)

  19. Peters, B., Niculae, V., Martins, A.F.T.: Sparse sequence-to-sequence models. In: 57th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 1504–1519 (2019). https://doi.org/10.18653/v1/P19-1146

  20. Popov, S., Morozov, S., Babenko, A.: Neural oblivious decision ensembles for deep learning on tabular data. In: International Conference on Learning Representations (ICLR) (2020)

  21. Radenovic, F., Dubey, A., Mahajan, D.: Neural basis models for interpretability. In: Advances in Neural Information Processing Systems, vol. 35 (2022)

  22. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why Should I Trust You?”: explaining the predictions of any classifier. In: 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016). https://doi.org/10.1145/2939672.2939778

  23. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x

  24. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: International Conference on Computer Vision (ICCV), pp. 618–626 (2017). https://doi.org/10.1109/ICCV.2017.74

  25. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(56), 1929–1958 (2014)

  26. Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms (2017). https://doi.org/10.48550/arXiv.1708.07747

Acknowledgements

This work was partially supported by JSPS KAKENHI (JP20H04240, JP20H04254, JP22H03590, JP23H00491, JP23H03466), JST PRESTO (JPMJPR2133), NEDO (JPNP18002, JPNP20006), and a grant from the Kanagawa Prefectural Government of Japan.

Author information

Corresponding author

Correspondence to Shinichi Shirakawa.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Kishimoto, Y., Yamanishi, K., Matsuda, T., Shirakawa, S. (2024). Neural Additive and Basis Models with Feature Selection and Interactions. In: Yang, D.N., Xie, X., Tseng, V.S., Pei, J., Huang, J.W., Lin, J.C.W. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science, vol. 14647. Springer, Singapore. https://doi.org/10.1007/978-981-97-2259-4_1

  • DOI: https://doi.org/10.1007/978-981-97-2259-4_1

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-2261-7

  • Online ISBN: 978-981-97-2259-4

  • eBook Packages: Computer Science, Computer Science (R0)
