
Causality and Its Applications

Soft Computing in Interdisciplinary Sciences

Part of the book series: Studies in Computational Intelligence (SCI, volume 988)


Abstract

Identifying cause–effect relations strengthens both predictive and prescriptive models and their analysis. The recent growth of machine learning, which leads the field of predictive analytics, makes a strong case for causal analysis. Causal analysis studies causes and effects as they are observed, and the underlying relationships driving such trends are analysed for predictive modelling. Cause–effect relations provide much-needed information on the interdependencies among features. These interdependencies help identify the valuable or influential features behind any trend of interest. In this chapter, machine learning and the necessity of causal analysis are discussed in depth. The main focus is on scenarios where integrating causality into machine learning is useful but the process of inference is lacking. An in-depth view is given of situations where causal analysis can substantially improve standard industrial practice in predictive modelling. Beyond this trend, a more advanced step is prescriptive analytics: going one step further than predicting what is expected, prescriptive analytics focuses on what to avoid in order to reach a required target. Here, causal analysis can be used to find a feature-dependency model, which leads to a prescriptive model. In machine learning practice, much effort is spent on setting up predictive models, but most industries require a better understanding of feature dependency and a model that leads to business success while avoiding losses, that is, prescriptive modelling. In deep learning, much effort goes into building the most accurate predictive models, yet neural networks are rarely regarded as explainable models. The weights of even the most accurate deep learning models do not, by themselves, offer interpretability or reveal their effectiveness in producing the final results.
With the rise of explainable AI, effort is now being put into the explainability of deep networks. This trend has opened the door to causal explanations of deep neural network models. Explaining a deep learning model requires understanding the hidden weights and their influence on the final neural outputs. The causal influence of input features, propagated through the weights, indicates their contribution to the network's outputs. With explainable network weights, better models can be built that amplify or attenuate the effects of individual input features. On this note, the chapter shows how causal influence is useful and why causal influence analysis is significant. Criteria for causal influence analysis in deep neural models are discussed with examples.
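The idea of measuring the causal influence of an input feature through a network's weights can be sketched with a simple intervention: fix one feature to chosen values while leaving the others at their observed values, and compare the network's average outputs. The network, its hand-set weights, and the `causal_influence` helper below are illustrative assumptions, not the chapter's method; they show one minimal way such an analysis could be set up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observational sample: three input features (x0, x1, x2).
X = rng.normal(size=(500, 3))

# A tiny single-hidden-layer network with hand-set weights for illustration;
# in practice these would come from training. Note x2 has zero weight,
# so the output is causally independent of it.
W1 = np.array([[2.0, 0.3, 0.0],
               [1.0, 0.1, 0.0]])   # hidden weights (2 units x 3 inputs)
b1 = np.zeros(2)
W2 = np.array([0.7, -0.1])         # output weights
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1.T + b1)     # hidden activations
    return h @ W2 + b2             # one scalar output per sample

def causal_influence(X, j, values=(-1.0, 1.0)):
    """Average causal effect of do(x_j = hi) versus do(x_j = lo):
    intervene on feature j, keep the remaining features at their
    observed values, and compare the mean network outputs."""
    lo, hi = values
    X_lo, X_hi = X.copy(), X.copy()
    X_lo[:, j] = lo
    X_hi[:, j] = hi
    return forward(X_hi).mean() - forward(X_lo).mean()

effects = [causal_influence(X, j) for j in range(3)]
# Feature x0 dominates, x1 has a small effect, x2 has exactly zero effect.
```

Because the intervention severs the link between the manipulated feature and the rest of the input distribution, the comparison reads off how the feature's value flows through the weights to the output, rather than a mere observational correlation.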



Author information

Correspondence to Pramod Kumar Parida.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Parida, P.K. (2022). Causality and Its Applications. In: Chakraverty, S. (eds) Soft Computing in Interdisciplinary Sciences. Studies in Computational Intelligence, vol 988. Springer, Singapore. https://doi.org/10.1007/978-981-16-4713-0_11
