Abstract
In this study, we propose a Multi-abstraction-level Feature extractor with a Bayesian network (MFB) that outputs intermediate patterns as feature values and can be shared across classifiers operating at different abstraction levels. To leverage patterns from the intermediate layers, we implemented a bidirectional network based on a Bayesian network that accurately computes posterior probabilities. Experiments confirmed that an MFB can be constructed and run on an actual computer to achieve pattern extraction.
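The bidirectional computation described above can be illustrated in miniature: in a Bayesian network, the posterior over an intermediate node combines top-down prior information from higher-level nodes with bottom-up evidence from observations. The sketch below is purely illustrative and is not the paper's MFB implementation; the three-node chain and all probability tables are invented assumptions.

```python
import numpy as np

# Toy two-layer Bayesian network: top-level class T -> mid-level pattern M -> evidence E.
# The posterior over the intermediate node M fuses a top-down prior (via T) with a
# bottom-up likelihood (via E) -- the kind of bidirectional flow the abstract describes.
p_T = np.array([0.6, 0.4])               # P(T)        (illustrative values)
p_M_given_T = np.array([[0.9, 0.1],      # P(M | T=0)
                        [0.2, 0.8]])     # P(M | T=1)
p_E_given_M = np.array([[0.7, 0.3],      # P(E | M=0)
                        [0.1, 0.9]])     # P(E | M=1)

def posterior_M(e):
    """Exact posterior P(M | E=e) by enumerating out T."""
    prior_M = p_T @ p_M_given_T          # top-down:  P(M) = sum_T P(T) P(M|T)
    likelihood = p_E_given_M[:, e]       # bottom-up: P(E=e | M)
    unnorm = prior_M * likelihood
    return unnorm / unnorm.sum()

print(posterior_M(1))                    # posterior over the intermediate pattern M
```

The normalized posterior over M is exactly the kind of intermediate-layer pattern that, per the abstract, can be exposed as a feature value for classifiers at different abstraction levels.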
Acknowledgements
This work was supported by JSPS Grant-in-Aid for JSPS Fellows, JP17J09110.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Nishino, K., Tezuka, H., Inaba, M. (2020). Toward Shareable Multi-abstraction-level Feature Extractor Based on a Bayesian Network. In: Burduk, R., Kurzynski, M., Wozniak, M. (eds) Progress in Computer Recognition Systems. CORES 2019. Advances in Intelligent Systems and Computing, vol 977. Springer, Cham. https://doi.org/10.1007/978-3-030-19738-4_8
Print ISBN: 978-3-030-19737-7
Online ISBN: 978-3-030-19738-4
eBook Packages: Intelligent Technologies and Robotics (R0)