
Scenario-Based Requirements Elicitation for User-Centric Explainable AI

A Case in Fraud Detection

Conference paper, published in Machine Learning and Knowledge Extraction (CD-MAKE 2020)

Abstract

Explainable Artificial Intelligence (XAI) develops technical explanation methods that enable interpretability for human stakeholders, clarifying why Artificial Intelligence (AI) and machine learning (ML) models provide certain predictions. However, the trust of those stakeholders in AI models and explanations remains an issue, especially for domain experts, who are knowledgeable about their domain but not about the inner workings of AI. Social and user-centric XAI research argues that it is essential to understand a stakeholder's requirements in order to provide explanations tailored to their needs and to enhance their trust in working with AI models. Scenario-based design and requirements elicitation can help bridge the gap between the social and operational aspects of a stakeholder early, before the adoption of an information system, by identifying the stakeholder's real problems and practices and generating user requirements. Nevertheless, the adoption of scenarios in XAI is still rarely explored, especially in the domain of fraud detection, to support experts who are about to work with AI models. We demonstrate the usage of scenario-based requirements elicitation for XAI in a fraud detection context and develop scenarios derived with experts in banking fraud. We discuss how those scenarios can be adopted to identify the requirements of a user or expert for appropriate explanations in their daily operations and for making decisions on reviewing fraudulent cases in banking. The generalizability of the scenarios for further adoption is validated through a systematic literature review in the domains of XAI and visual analytics for fraud detection.
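As a concrete illustration of the kind of technical explanation method the abstract refers to, the sketch below computes a simple local feature attribution for one prediction of a fraud classifier. It is not the method of this paper: the synthetic data, the feature names (amount, hour_of_day, num_tx_last_24h), and the linear contribution rule are hypothetical placeholders standing in for the many explanation techniques surveyed in the XAI literature.

```python
# Minimal sketch (not from the paper): a local explanation for one
# prediction of a fraud-detection classifier. Data, feature names,
# and the model choice are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["amount", "hour_of_day", "num_tx_last_24h"]
X = rng.normal(size=(1000, 3))
# Synthetic ground truth: "fraud" driven mainly by amount and recent activity.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value is each feature's
# additive contribution to the log-odds of the fraud class.
tx = X[0]
contributions = model.coef_[0] * tx
for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {c:+.3f}")
print("predicted fraud probability:", model.predict_proba(tx.reshape(1, -1))[0, 1])
```

An expert reviewing a fraud alert could read such a ranking as "which transaction attributes pushed this case toward fraud"; eliciting whether this form of explanation actually matches an expert's review practice is the kind of requirement the paper's scenarios are designed to surface.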


Acknowledgements

This research was supported by the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 765395; and supported, in part, by Science Foundation Ireland grant 13/RC/2094.

Author information

Correspondence to Douglas Cirqueira.


Copyright information

© 2020 IFIP International Federation for Information Processing

About this paper


Cite this paper

Cirqueira, D., Nedbal, D., Helfert, M., Bezbradica, M. (2020). Scenario-Based Requirements Elicitation for User-Centric Explainable AI. In: Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2020. Lecture Notes in Computer Science, vol 12279. Springer, Cham. https://doi.org/10.1007/978-3-030-57321-8_18


  • DOI: https://doi.org/10.1007/978-3-030-57321-8_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-57320-1

  • Online ISBN: 978-3-030-57321-8

  • eBook Packages: Computer Science, Computer Science (R0)
