
Machine Learning—Basic Unsupervised Methods (Cluster Analysis Methods, t-SNE)

  • Chapter in Clinical Applications of Artificial Intelligence in Real-World Data

Abstract

Understanding how trained deep neural networks reach their inferred results is challenging but important for relating patterns in the input data to patterns in the output results. We present a visual analytics approach to this problem that consists of two mappings. The forward mapping shows the relative impact of user-selected input patterns on all elements of the output; the backward mapping shows the relative impact of all input elements on user-selected patterns in the output. Our approach applies generically to any regressor mapping between two multidimensional real-valued spaces (input to output), is simple to implement, and requires no knowledge of the regressor's internals. We demonstrate our method on two image-data applications: an MRI T1-to-T2 generator and an MRI-to-pseudo-CT generator.



Acknowledgements

We acknowledge the help of Mathijs de Boer with the implementation and evaluation of the MRI-to-pseudo-CT application. We also acknowledge the advice of Dr. Matteo Maspero in developing the Pix2Pix generator network used for the same application, and Prof. Nico van den Berg for stimulating discussions.

Author information

Correspondence to M. Espadoto.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Espadoto, M., Martins, S.B., Branderhorst, W., Telea, A. (2023). Machine Learning—Basic Unsupervised Methods (Cluster Analysis Methods, t-SNE). In: Asselbergs, F.W., Denaxas, S., Oberski, D.L., Moore, J.H. (eds) Clinical Applications of Artificial Intelligence in Real-World Data. Springer, Cham. https://doi.org/10.1007/978-3-031-36678-9_9

  • DOI: https://doi.org/10.1007/978-3-031-36678-9_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36677-2

  • Online ISBN: 978-3-031-36678-9

  • eBook Packages: Medicine, Medicine (R0)
