Abstract
Complementary to the classical use of feature-based condensation and temporal subsampling for in situ visualization, learning-based data upscaling has recently emerged as a promising approach that can supplement existing in situ volume visualization techniques. By upscaling we mean the spatial or temporal reconstruction of a signal from a reduced representation that requires less memory to store and sometimes even less time to generate. The concrete tasks where upscaling has been shown to work effectively are geometry upscaling, to infer high-resolution geometry images from given low-resolution images of sampled features; upscaling in the data domain, to infer the original spatial resolution of a 3D dataset from a downscaled version; and upscaling of temporally sparse volume sequences, to generate refined temporal features. In this book chapter, we aim to provide a summary of existing learning-based upscaling approaches and a discussion of possible use cases for in situ volume visualization. We discuss the basic foundations of learning-based upscaling and review existing work on image and video super-resolution from other fields. We then show the specific adaptations and extensions that have been proposed in visualization to realize upscaling tasks beyond color images, discuss how these approaches can be employed for in situ visualization, and provide an outlook on future developments in the field.
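To make the notion of upscaling in the data domain concrete, the following is a minimal sketch (not taken from the chapter) of the classical interpolation baseline: a downscaled 3D scalar field is brought back to a higher resolution by trilinear interpolation. Learned super-resolution methods replace this fixed interpolation with a trained network that can hallucinate plausible high-frequency detail; the function name and plain-list volume representation here are illustrative choices only.

```python
def trilinear_upscale(vol, factor=2):
    """Upscale a 3D scalar field (nested lists, indexed [z][y][x]) by an
    integer factor using trilinear interpolation with edge clamping."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    mz, my, mx = nz * factor, ny * factor, nx * factor
    out = [[[0.0] * mx for _ in range(my)] for _ in range(mz)]
    for k in range(mz):
        z = k / factor
        z0 = min(int(z), nz - 1); z1 = min(z0 + 1, nz - 1); fz = z - z0
        for j in range(my):
            y = j / factor
            y0 = min(int(y), ny - 1); y1 = min(y0 + 1, ny - 1); fy = y - y0
            for i in range(mx):
                x = i / factor
                x0 = min(int(x), nx - 1); x1 = min(x0 + 1, nx - 1); fx = x - x0
                # Interpolate along x first, then y, then z.
                c00 = vol[z0][y0][x0] * (1 - fx) + vol[z0][y0][x1] * fx
                c01 = vol[z0][y1][x0] * (1 - fx) + vol[z0][y1][x1] * fx
                c10 = vol[z1][y0][x0] * (1 - fx) + vol[z1][y0][x1] * fx
                c11 = vol[z1][y1][x0] * (1 - fx) + vol[z1][y1][x1] * fx
                c0 = c00 * (1 - fy) + c01 * fy
                c1 = c10 * (1 - fy) + c11 * fy
                out[k][j][i] = c0 * (1 - fz) + c1 * fz
    return out
```

Because trilinear interpolation cannot recover frequencies lost during downsampling, learned upscalers are typically trained to predict exactly the residual detail that such a baseline misses.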
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Weiss, S., Han, J., Wang, C., Westermann, R. (2022). Deep Learning-Based Upscaling for In Situ Volume Visualization. In: Childs, H., Bennett, J.C., Garth, C. (eds) In Situ Visualization for Computational Science. Mathematics and Visualization. Springer, Cham. https://doi.org/10.1007/978-3-030-81627-8_15
Print ISBN: 978-3-030-81626-1
Online ISBN: 978-3-030-81627-8
eBook Packages: Mathematics and Statistics (R0)