Abstract
Machine learning (ML) is becoming increasingly popular in meteorological decision-making. Although the literature on explainable artificial intelligence (XAI) is growing steadily, user-centered XAI studies have not yet extended to this domain. Through user studies, this study defines three requirements for explanations of black-box models in meteorology: statistical model performance for different rainfall scenarios to identify model bias, model reasoning, and the confidence of model outputs. Appropriate XAI methods are mapped to each requirement, and the generated explanations are tested quantitatively and qualitatively. An XAI interface system is designed based on user feedback. The results indicate that the explanations increase decision utility and user trust. Users prefer intuitive explanations over those based on XAI algorithms, even for potentially easy-to-recognize examples. These findings can provide evidence for future research on user-centered XAI algorithms, as well as a basis for improving the usability of AI systems in practice.
Supported by the Korean Institute of Information & Communications Technology Planning & Evaluation (IITP) and the Korean Ministry of Science and ICT (MSIT) under grant agreements No. 2019-0-00075 (Artificial Intelligence Graduate School Program (KAIST)) and No. 2022-0-00984 (Development of Plug-and-Play Explainable Artificial Intelligence Method), and by the Korea Meteorological Administration (KMA) and Korean National Institute of Meteorological Sciences (NIMS) under grant agreement No. KMA2021-00123 (Developing Intelligent Assistant Technology and Its Application for Weather Forecasting Process).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kim, S. et al. (2023). Explainable AI-Based Interface System for Weather Forecasting Model. In: Degen, H., Ntoa, S., Moallem, A. (eds) HCI International 2023 – Late Breaking Papers. HCII 2023. Lecture Notes in Computer Science, vol 14059. Springer, Cham. https://doi.org/10.1007/978-3-031-48057-7_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-48056-0
Online ISBN: 978-3-031-48057-7
eBook Packages: Computer Science (R0)