CASHformer: Cognition Aware SHape Transformer for Longitudinal Analysis

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Abstract

Modeling temporal changes in subcortical structures is crucial for a better understanding of the progression of Alzheimer's disease (AD). Given their flexibility to adapt to heterogeneous sequence lengths, mesh-based transformer architectures have been proposed in the past for predicting hippocampus deformations across time. However, one of the main limitations of transformers is the large number of trainable parameters, which makes their application to small datasets very challenging. In addition, current methods do not include relevant non-image information that can help to identify AD-related patterns in the progression. To this end, we introduce CASHformer, a transformer-based framework to model longitudinal shape trajectories in AD. CASHformer incorporates the idea of pre-trained transformers as universal compute engines that generalize across a wide range of tasks by freezing most layers during fine-tuning. This reduces the number of parameters by over 90% with respect to the original model and therefore enables the application of large models on small datasets without overfitting. In addition, CASHformer models cognitive decline to reveal AD atrophy patterns in the temporal sequence. Our results show that CASHformer reduces the reconstruction error by 73% compared to previously proposed methods. Moreover, the accuracy of detecting patients progressing to AD increases by 3% when imputing missing longitudinal shape data.
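The parameter-freezing idea described in the abstract can be sketched in PyTorch. The sketch below is a minimal illustration, not the authors' CASHformer code: the toy model, the layer sizes, and the choice to leave only the layer norms and the task head trainable are assumptions based on the "pretrained transformers as universal computation engines" recipe the paper builds on.

```python
import torch.nn as nn


class TinyTransformer(nn.Module):
    """Toy stand-in for a large pre-trained transformer backbone."""

    def __init__(self, dim=64, n_layers=4, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, dim)  # hypothetical task head

    def forward(self, x):
        return self.head(self.norm(self.encoder(x)))


def freeze_backbone(model):
    """Freeze everything, then re-enable layer norms and the task head."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, nn.LayerNorm):
            for p in m.parameters():
                p.requires_grad = True
    for p in model.head.parameters():
        p.requires_grad = True


model = TinyTransformer()
freeze_backbone(model)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable}/{total} ({100 * trainable / total:.1f}%)")
```

Even in this toy setting, the trainable fraction drops well below 10% of the total parameter count, which is consistent with the abstract's claim of a reduction of over 90% when most layers of the pre-trained model are frozen during fine-tuning.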


Notes

  1. https://github.com/rwightman/pytorch-image-models.

  2. https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html.


Acknowledgment

This research was partially supported by the Bavarian State Ministry of Science and the Arts and coordinated by the bidt, and the Federal Ministry of Education and Research in the call for Computational Life Sciences (DeepMentia, 031L0200A). We gratefully acknowledge the computational resources provided by the Leibniz Supercomputing Centre (www.lrz.de).

Author information


Corresponding author

Correspondence to Ignacio Sarasua.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 295 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Sarasua, I., Pölsterl, S., Wachinger, C. (2022). CASHformer: Cognition Aware SHape Transformer for Longitudinal Analysis. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13431. Springer, Cham. https://doi.org/10.1007/978-3-031-16431-6_5

  • DOI: https://doi.org/10.1007/978-3-031-16431-6_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16430-9

  • Online ISBN: 978-3-031-16431-6

  • eBook Packages: Computer Science (R0)
