Abstract
The diagnosis of cause–effect relations strengthens both predictive and prescriptive models and their analysis. The recent growth of machine learning, which leads the field of predictive analytics, makes a strong case for causal analysis. Causal analysis supports the study of causes and effects as they are observed, and the underlying relationships driving such trends are analysed for predictive modelling. Cause–effect relations provide much-needed information on the interdependency among features. This interdependency helps identify the valuable or influential features that support a particular trend of interest. In this chapter, machine learning and the necessity of causal analysis are discussed in depth. The main focus is to present different scenarios where integrating causality into machine learning is useful but where the process of inference is lacking. An in-depth view is given of situations where causal analysis can improve and substantially affect the standard industrial process of predictive modelling. Beyond this trend, a more advanced step is prescriptive analytics. Going one step further than predicting what is expected, prescriptive analytics focuses on what to avoid in order to achieve the required target. Here, causal analysis can be used to find a feature-dependency model, leading to a prescriptive model of the future. In machine learning, much effort is put into setting up predictive models, but most industries require a better understanding of feature dependency and a model that leads to business success while avoiding losses, that is, prescriptive modelling. In deep learning, much effort is spent on building the most accurate predictive models, yet neural networks have never been seen as explainable models. The weights of even the most accurate deep learning models do not reveal their interpretability or their effectiveness in producing the final results.
With the rise of explainable AI, effort is now being put into the explainability of deep networks. This new trend has opened the window for causal explanations of deep neural network models. Explaining deep learning models requires understanding the hidden weights and their influence on the final neural outputs. The causal influence of input features, propagated through the weights, indicates their contribution to the neural outputs. With explainable neural network weights, better models can be built that amplify or attenuate the effects of input features. On this note, the chapter presents how causal influence is useful and why causal influence analysis is significant. The criteria for causal influence analysis in deep neural models are discussed with examples.
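The idea of tracing an input feature's influence through the weights to a neural output can be illustrated with a minimal sketch. The network, its weights, and the gradient-based influence measure below are all illustrative assumptions, not the chapter's method: a one-hidden-layer network where the derivative of the output with respect to each input, obtained by chaining the two weight matrices through the activation, serves as a simple influence score for ranking features.

```python
import numpy as np

# Illustrative sketch (assumed example, not the chapter's method):
# estimate each input feature's influence on a neural output by
# propagating gradients through the weights of a tiny network.
rng = np.random.default_rng(0)

# One hidden layer with tanh activation; weights are random here.
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def forward(x):
    """Scalar network output for input vector x (shape (3,))."""
    h = np.tanh(x @ W1)        # hidden activations
    return h @ W2              # shape (1,)

def input_influence(x):
    """Gradient of the output w.r.t. each input feature.

    Chain rule: d(out)/d(x_i) = sum_j W1[i, j] * (1 - h_j**2) * W2[j, 0]
    """
    h = np.tanh(x @ W1)
    dh = 1.0 - h**2            # tanh derivative at the hidden layer
    return W1 @ (dh * W2[:, 0])

x = np.array([1.0, -0.5, 2.0])
influence = input_influence(x)
# Rank features by magnitude of influence on the output.
ranking = np.argsort(-np.abs(influence))
print("influence per feature:", influence)
print("most influential first:", ranking)
```

A larger influence magnitude marks a feature whose perturbation moves the output most at this input point; this is a local, correlational proxy, whereas the chapter's causal influence analysis aims at the interventional contribution of features.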
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Parida, P.K. (2022). Causality and Its Applications. In: Chakraverty, S. (eds) Soft Computing in Interdisciplinary Sciences. Studies in Computational Intelligence, vol 988. Springer, Singapore. https://doi.org/10.1007/978-981-16-4713-0_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-4712-3
Online ISBN: 978-981-16-4713-0
eBook Packages: Intelligent Technologies and Robotics (R0)