Abstract
The Sigma cognitive architecture is the beginning of an integrated computational model of intelligent behavior aimed at the grand goal of artificial general intelligence (AGI). Although it has proven capable of modeling a wide range of intelligent behaviors, the existing implementation of Sigma suffers from several significant limitations, the most prominent being inadequate support for inference and learning over continuous variables. In this article, we propose solutions for this limitation that should together enhance Sigma's level of grand unification; that is, its ability to span both traditional cognitive capabilities and key non-cognitive capabilities central to general intelligence, bridging the gap between symbolic, probabilistic, and neural processing. The resulting design changes converge on a more capable version of the architecture called PySigma. We demonstrate PySigma's capabilities in neural probabilistic processing via deep generative models, using variational autoencoders as a concrete example.
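The variational autoencoder mentioned above rests on the reparameterization trick of Kingma and Welling (2014): a latent sample z ~ N(mu, sigma^2) is rewritten as z = mu + sigma * eps with eps ~ N(0, 1), so z becomes a deterministic, differentiable function of the distribution parameters. The following is a minimal illustrative sketch in plain Python, not PySigma code; the function names are our own.

```python
import math
import random

def reparameterize(mu, log_var, eps=None):
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick:
    z = mu + sigma * eps, with eps ~ N(0, 1) drawn independently of
    (mu, log_var), so gradients can flow through mu and log_var."""
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    sigma = math.exp(0.5 * log_var)  # log-variance parameterization keeps sigma > 0
    return mu + sigma * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)), the regularization
    term in the VAE objective."""
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)

# With eps held fixed, z is deterministic in (mu, log_var):
z = reparameterize(mu=1.0, log_var=0.0, eps=0.5)  # sigma = 1, so z = 1.5
kl = kl_to_standard_normal(0.0, 0.0)              # KL of N(0,1) vs N(0,1) is 0
```

In a full VAE, an encoder network outputs `mu` and `log_var`, the decoder reconstructs the input from `z`, and the training objective is the reconstruction term minus this KL term; in a deep-learning framework the same trick is what makes that objective differentiable end to end.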
Keywords
- Sigma
- Cognitive architecture
- Probabilistic graphical model
- Message passing algorithm
- Approximate inference
- Deep generative model
Acknowledgements
Part of the effort depicted is sponsored by the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. We would also like to thank Dr. Paul Rosenbloom for his comments and suggestions, which helped improve the quality of this paper. More importantly, we appreciate Dr. Rosenbloom's continuous and invaluable guidance in enhancing our understanding of cognitive architectures and the design choices for Sigma.
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhou, J., Ustun, V. (2022). PySigma: Towards Enhanced Grand Unification for the Sigma Cognitive Architecture. In: Goertzel, B., Iklé, M., Potapov, A. (eds) Artificial General Intelligence. AGI 2021. Lecture Notes in Computer Science, vol 13154. Springer, Cham. https://doi.org/10.1007/978-3-030-93758-4_36
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93757-7
Online ISBN: 978-3-030-93758-4
eBook Packages: Computer Science (R0)