PySigma: Towards Enhanced Grand Unification for the Sigma Cognitive Architecture

  • Conference paper
Artificial General Intelligence (AGI 2021)

Abstract

The Sigma cognitive architecture is the beginning of an integrated computational model of intelligent behavior aimed at the grand goal of artificial general intelligence (AGI). However, while it has proven capable of modeling a wide range of intelligent behaviors, the existing implementation of Sigma suffers from several significant limitations, the most prominent being inadequate support for inference and learning on continuous variables. In this article, we propose solutions to this limitation that together should enhance Sigma’s level of grand unification; that is, its ability to span both traditional cognitive capabilities and key non-cognitive capabilities central to general intelligence, bridging the gap between symbolic, probabilistic, and neural processing. The resulting design changes converge on a more capable version of the architecture called PySigma. As a concrete example, we demonstrate PySigma’s capacity for neural probabilistic processing via deep generative models, specifically variational autoencoders.
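The abstract names variational autoencoders as the concrete demonstration of neural probabilistic processing. As a minimal sketch of the two ingredients any VAE relies on (not code from the paper), the snippet below shows the reparameterization trick, which rewrites sampling from a diagonal Gaussian q(z|x) so that gradients can flow through its parameters, and the closed-form KL divergence from q(z|x) to the standard-normal prior that appears in the VAE's evidence lower bound. Function names and shapes here are illustrative, not PySigma's API.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Draw z ~ N(mu, diag(exp(log_var))) as z = mu + sigma * eps.

    Because eps ~ N(0, I) carries all the randomness, z is a
    deterministic, differentiable function of mu and log_var.
    """
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q:
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)   # q(z|x) = N(0, I) for this toy check
z = reparameterize(mu, log_var, rng)      # a differentiable sample of shape (4,)
print(kl_to_standard_normal(mu, log_var)) # 0.0: q already equals the prior
```

In a full VAE, an encoder network would produce `mu` and `log_var` from the input, and the training loss would add a reconstruction term to this KL penalty; in a message-passing setting like Sigma's, the same reparameterized sampling is what lets gradient signals propagate through stochastic nodes.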



Acknowledgements

Part of the effort depicted is sponsored by the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005; the content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. We would also like to thank Dr. Paul Rosenbloom for his comments and suggestions, which helped improve the quality of this paper. More importantly, we appreciate Dr. Rosenbloom’s continuous and invaluable guidance in enhancing our understanding of cognitive architectures and the design choices for Sigma.

Author information

Correspondence to Jincheng Zhou.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhou, J., Ustun, V. (2022). PySigma: Towards Enhanced Grand Unification for the Sigma Cognitive Architecture. In: Goertzel, B., Iklé, M., Potapov, A. (eds) Artificial General Intelligence. AGI 2021. Lecture Notes in Computer Science, vol 13154. Springer, Cham. https://doi.org/10.1007/978-3-030-93758-4_36

  • DOI: https://doi.org/10.1007/978-3-030-93758-4_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93757-7

  • Online ISBN: 978-3-030-93758-4

  • eBook Packages: Computer Science (R0)
