Arras L, Horn F, Montavon G, Müller K-R, Samek W (2017) “What is relevant in a text document?”: an interpretable machine learning approach. PLoS One 12(8):1–23
Attenberg J, Weinberger K, Smola A, Dasgupta A, Zinkevich M (2009) Collaborative email-spam filtering with the hashing-trick. In: Proceedings of the 6th conference on email and anti-spam, pp 1–5
Breiman L (2001) Random forests. Mach Learn 45:5–32
Brozovsky L, Petricek V (2007) Recommender system for online dating service. In: Proceedings of conference Znalosti, VSB, Ostrava, Czech Republic
Cha M, Mislove A, Gummadi KP (2009) A measurement-driven analysis of information propagation in the Flickr social network. In: Proceedings of the 18th international World Wide Web conference, pp 1–10. https://doi.org/10.1145/1526709.1526806
Chen D, Fraiberger SP, Moakler R, Provost F (2017) Enhancing transparency and control when drawing data-driven inferences about individuals. Big Data 5(3):197–212
Chhatwal R, Gronvall P, Huber N, Keeling R, Zhang J, Zhao H (2018) Explainable text classification in legal document review: a case study of explainable predictive coding. In: 2018 IEEE international conference on big data, pp 1905–1911. https://doi.org/10.1109/BigData.2018.8622073
Craven MW (1996) Extracting comprehensible models from trained neural networks. PhD thesis, University of Wisconsin–Madison
De Cnudde S, Martens D, Evgeniou T, Provost F (2019a) A benchmarking study of classification techniques for behavioral data. Int J Data Sci Anal 9(17):1–43
De Cnudde S, Moeyersoms J, Stankova M, Tobback E, Javaly V, Martens D (2019b) What does your Facebook profile reveal about your creditworthiness? Using alternative data for microfinance. J Oper Res Soc 70(3):353–363
De Cnudde S, Ramon Y, Martens D, Provost F (2019c) Deep learning on big, sparse, behavioral data. Big Data 7(4):286–307
Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning, pp 1–13. arXiv preprint arXiv:1702.08608
Fagerland M, Lydersen S, Laake P (2013) The McNemar test for binary matched-pairs data: mid-p and asymptotic are better than exact conditional. BMC Med Res Methodol 13:1–8
Fernandez C, Provost F, Han X (2019) Explaining data-driven decisions made by AI systems: the counterfactual approach, pp 1–33. arXiv preprint arXiv:2001.07417
Freitas A (2014) Comprehensible classification models: a position paper. SIGKDD Explor Newsl 15(1):1–10
Goodman B, Flaxman S (2016) EU regulations on algorithmic decision-making and a “right to explanation”. AI Mag 38:50–57
Gregor S, Benbasat I (1999) Explanations from intelligent systems: theoretical foundations and implications for practice. MIS Q 23(4):497–530
Harper FM, Konstan JA (2015) The movielens datasets: history and context. ACM Trans Interact Intell Syst. https://doi.org/10.1145/2827872
Hsu C-N, Chung H-H, Huang H-S (2004) Mining skewed and sparse transaction data for personalized shopping recommendation. Mach Learn 57(1):35–59
Joachims T (1998) Text categorization with support vector machines: learning with many relevant features. In: Proceedings of ECML-98, 10th European conference on machine learning, vol 1398, pp 137–142
Junqué de Fortuny E, Martens D, Provost F (2013) Predictive modeling with big data: is bigger really better? Big Data 1(4):215–226
Kosinski M, Stillwell D, Graepel T (2013) Private traits and attributes are predictable from digital records of human behavior. PNAS 110(15):5802–5805
Lang K (1995) Newsweeder: learning to filter netnews. In: Proceedings of the twelfth international conference on machine learning, pp 331–339
Lipton ZC (2018) The mythos of model interpretability. ACM Queue 16(3):1–28
Lundberg SM, Lee S-I (2017) A unified approach to interpreting model predictions. In: Advances in neural information processing systems 30 (NIPS 2017). Curran Associates Inc., pp 4765–4774
Martens D, Baesens B, van Gestel T, Vanthienen J (2007) Comprehensible credit scoring models using rule extraction from support vector machines. Eur J Oper Res 183(3):1466–1476
Martens D, Provost F (2014) Explaining data-driven document classifications. MIS Q 38(1):73–99
Martens D, Provost F, Clark J, Junqué de Fortuny E (2016) Mining massive fine-grained behavior data to improve predictive analytics. MIS Q 40:869–888
Matz SC, Kosinski M, Nave G, Stillwell DJ (2017) Psychological targeting as an effective approach to digital mass persuasion. PNAS 114(48):12714–12719
Moeyersoms J, d’Alessandro B, Provost F, Martens D (2016) Explaining classification models built on high-dimensional sparse data. In: 2016 ICML workshop on human interpretability in machine learning (WHI 2016), 23 June 2016, New York, NY, pp 36–40
Nguyen D (2018) Comparing automatic and human evaluation of local explanations for text classification. In: Proceedings of the 2018 conference of the North American chapter of the Association for Computational Linguistics: human language technologies (NAACL-HLT), pp 1–10
Provost F (2014) Understanding decisions driven by big data: from analytics management to privacy-friendly cloaking devices. Keynote Lecture, Strata Europe. https://learning.oreilly.com/library/view/strata-hadoop/9781491917381/video203329.html. Accessed 17 June 2019
Provost F, Fawcett T (2013) Data science for business: what you need to know about data mining and data-analytic thinking, 1st edn. O’Reilly Media Inc., USA
Provost F, Martens D, Murray A (2015) Finding similar mobile consumers with a privacy-friendly geosocial design. Inf Syst Res 26(2):243–265
Ras G, van Gerven M, Haselager P (2018) Explanation methods in deep learning: users, values, concerns and challenges. In: Explainable and interpretable models in computer vision and machine learning. Springer, pp 19–36
Ribeiro MT, Singh S, Guestrin C (2016) “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 1135–1144
Samek W, Binder A, Montavon G, Bach S, Müller K-R (2015) Evaluating the visualization of what a deep neural network has learned, pp 1–13. arXiv preprint arXiv:1509.06321
Schreiber EL, Korf RE, Moffitt MD (2018) Optimal multi-way number partitioning. J ACM 65(4):1–61
Shmueli G (2017) Analyzing behavioral big data: methodological, practical, ethical, and moral issues. Qual Eng 29(1):57–74
Sokol K, Flach PA (2019) Counterfactual explanations of machine learning predictions: opportunities and challenges for AI safety. In: CEUR workshop proceedings, pp 1–5
Strumbelj E, Kononenko I (2010) An efficient explanation of individual classifications using game theory. J Mach Learn Res 11:1–18
Wachter S, Mittelstadt BD, Russell C (2018) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv J Law Technol 31(2):1–47