
Visualization and Deep Learning in Data Science

Chapter in: Apply Data Science

Abstract

People take in visually presented information quickly and efficiently. The visual preparation of data of any kind is therefore an important and heavily researched area in data science and its surrounding fields. The aim is to reduce complex information as far as possible so that its core can be conveyed simply and clearly without significant loss of meaning. Conversely, there is a growing need to process images and image information automatically, whether for facial recognition as a biometric feature, for personal assistants, or for evaluating camera images in driverless cars. The chapter shows what possibilities exist in each case and how algorithms, for example in deep learning, can train and improve themselves or one another.



Author information

Correspondence to Jens Kaufmann.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter


Cite this chapter

Kaufmann, J., Retkowitz, D. (2023). Visualization and Deep Learning in Data Science. In: Barton, T., MĂĽller, C. (eds) Apply Data Science. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-38798-3_2


  • DOI: https://doi.org/10.1007/978-3-658-38798-3_2

  • Publisher Name: Springer Vieweg, Wiesbaden

  • Print ISBN: 978-3-658-38797-6

  • Online ISBN: 978-3-658-38798-3

  • eBook Packages: Computer Science, Computer Science (R0)
