
Generative image transformer (GIT): unsupervised continuous image generative and transformable model for [123I]FP-CIT SPECT images

  • Original Article
  • Published in Annals of Nuclear Medicine

A Correction to this article was published on 06 January 2022


Abstract

Objective

Generative adversarial networks have recently been studied actively in the field of medical imaging. These models are used to augment the variation of images and thereby improve the accuracy of computer-aided diagnosis. In this paper, we propose an alternative image generative model based on transformer decoder blocks and verify its performance in generating SPECT images with the characteristics of Parkinson's disease patients.

Methods

First, we designed a new model architecture based on transformer decoder blocks and extended it to generate slice images. Given a few superior slices of a 3D volume, the model generates the remaining inferior slices sequentially. The model was trained on [123I]FP-CIT SPECT images of Parkinson's disease (PD) patients from the Parkinson's Progression Marker Initiative database. Pixel values of the SPECT images were normalized by the specific/nonspecific binding ratio (SNBR). After training, we generated [123I]FP-CIT SPECT images and also transformed SPECT images of healthy control cases into PD-like images. The generated images were visually inspected and evaluated using the mean absolute value and the asymmetric index.
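The abstract does not give implementation details of the SNBR normalization, but the idea of a binding-ratio normalization can be sketched as follows: pixel counts are expressed relative to the mean count of a nonspecific reference region, so that zero means "equal to background" and positive values indicate specific binding. The region definitions below are hypothetical placeholders, not the authors' actual ROIs.

```python
# Illustrative sketch (not the authors' code): SNBR-style normalization of
# SPECT pixel counts relative to a nonspecific reference region.

def snbr_normalize(pixels, reference_pixels):
    """Normalize pixel counts by the mean count of a nonspecific reference
    region: (pixel - ref_mean) / ref_mean."""
    ref_mean = sum(reference_pixels) / len(reference_pixels)
    return [(p - ref_mean) / ref_mean for p in pixels]

# Toy example: hypothetical striatal pixels vs. a background reference region.
striatum = [30.0, 36.0, 42.0]
reference = [10.0, 12.0, 14.0]   # mean = 12.0
print(snbr_normalize(striatum, reference))  # [1.5, 2.0, 2.5]
```

With this convention, a normalized value of 1.5 means the pixel has 1.5 times more specific binding than the background level, which keeps the dynamic range of the training data comparable across subjects.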

Results

Our model successfully generated PD-like SPECT images and transformed healthy control images into PD-like images. The mean absolute SNBR was mostly less than 0.15 in absolute value. The variation of the obtained dataset images was confirmed by analysis of the asymmetric index.
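The exact asymmetric-index formula the authors used is not stated in the abstract; a commonly used definition, given here only as an assumption, measures the percent difference between left and right striatal binding values:

```python
# Illustrative sketch: one common definition of an asymmetry index (AI)
# between left and right binding ratios. This is an assumed formula, not
# necessarily the one used in the paper.

def asymmetry_index(left, right):
    """Percent asymmetry: 200 * |L - R| / (L + R)."""
    return 200.0 * abs(left - right) / (left + right)

print(asymmetry_index(2.0, 2.0))  # 0.0   (perfectly symmetric binding)
print(asymmetry_index(3.0, 1.0))  # 100.0 (strongly lateralized binding)
```

A spread of asymmetry-index values across a generated dataset, as reported here, indicates that the model reproduces the left-right variability seen in real PD cases rather than collapsing to symmetric outputs.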

Conclusions

These results demonstrate the potential of our new generative approach for SPECT images: a transformer-based generative model that realizes both generation and transformation within a single model.





Acknowledgements

The data used for training and validation in this study were obtained from the Parkinson's Progression Marker Initiative (PPMI) database (https://www.ppmi-info.org/data). PPMI, a public-private partnership, was funded by the Michael J. Fox Foundation for Parkinson's Research and funding partners, including Abbvie, Avid, Biogen Idec, Bristol-Myers Squibb, Covance, GE Healthcare, Genentech, GlaxoSmithKline, Lilly, Lundbeck, Merck, Meso Scale Discovery, Pfizer, Piramal, Roche, and UCB.

The authors thank Professor Nobukatsu Sawamoto (MD, neurologist) and Associate Professor Koichi Ishizu (MD, radiologist), both of the Graduate School of Medicine, Kyoto University, for reviewing the generated SPECT images.

Author information


Corresponding author

Correspondence to Shogo Watanabe.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Watanabe, S., Ueno, T., Kimura, Y. et al. Generative image transformer (GIT): unsupervised continuous image generative and transformable model for [123I]FP-CIT SPECT images. Ann Nucl Med 35, 1203–1213 (2021). https://doi.org/10.1007/s12149-021-01661-0

